# Karpenter on EKS Fargate
This pattern demonstrates how to provision Karpenter on a serverless cluster (serverless data plane) using Fargate Profiles.
## Code
The areas of significance related to this pattern are highlighted in the code provided below.
### Cluster
```hcl
################################################################################
# Cluster
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.24"

  cluster_name    = local.name
  cluster_version = "1.30"

  # Give the Terraform identity admin access to the cluster
  # which will allow it to deploy resources into the cluster
  enable_cluster_creator_admin_permissions = true

  cluster_endpoint_public_access = true

  cluster_addons = {
    # Enable after creation to run on Karpenter managed nodes
    # coredns = {}
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # Fargate profiles use the cluster primary security group
  # Therefore these are not used and can be skipped
  create_cluster_security_group = false
  create_node_security_group    = false

  fargate_profiles = {
    karpenter = {
      selectors = [
        { namespace = "karpenter" }
      ]
    }
  }

  tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = local.name
  })
}

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}
```
### Karpenter Resources
```hcl
locals {
  namespace = "karpenter"
}

################################################################################
# Controller & Node IAM roles, SQS Queue, Eventbridge Rules
################################################################################

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 20.24"

  cluster_name = module.eks.cluster_name

  enable_v1_permissions = true
  namespace             = local.namespace

  # Name needs to match role name passed to the EC2NodeClass
  node_iam_role_use_name_prefix = false
  node_iam_role_name            = local.name

  # EKS Fargate does not support pod identity
  create_pod_identity_association = false
  enable_irsa                     = true
  irsa_oidc_provider_arn          = module.eks.oidc_provider_arn

  tags = local.tags
}

################################################################################
# Helm charts
################################################################################

resource "helm_release" "karpenter" {
  name                = "karpenter"
  namespace           = local.namespace
  create_namespace    = true
  repository          = "oci://public.ecr.aws/karpenter"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "1.0.2"
  wait                = false

  values = [
    <<-EOT
    dnsPolicy: Default
    settings:
      clusterName: ${module.eks.cluster_name}
      clusterEndpoint: ${module.eks.cluster_endpoint}
      interruptionQueue: ${module.karpenter.queue_name}
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: ${module.karpenter.iam_role_arn}
    webhook:
      enabled: false
    EOT
  ]

  lifecycle {
    ignore_changes = [
      repository_password
    ]
  }
}
```
The following manifests define the `EC2NodeClass` and `NodePool` that give Karpenter the configuration it needs to launch EC2 capacity; they are applied to the cluster in the Validate steps below.

```yaml
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
    - alias: bottlerocket@latest
  role: ex-karpenter
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter
  tags:
    karpenter.sh/discovery: ex-karpenter
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: "karpenter.k8s.aws/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "karpenter.k8s.aws/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
        - key: "karpenter.k8s.aws/instance-hypervisor"
          operator: In
          values: ["nitro"]
        - key: "karpenter.k8s.aws/instance-generation"
          operator: Gt
          values: ["2"]
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 30s
```
### VPC
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
enable_nat_gateway = true
single_nat_gateway = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
# Tags subnets for Karpenter auto-discovery
"karpenter.sh/discovery" = local.name
}
tags = local.tags
}
## Deploy
See here for the prerequisites and steps to deploy this pattern.
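If you are not following the linked guide step by step, here is a minimal sketch of the usual Terraform workflow. It assumes AWS credentials are already configured and the default variable values are used; the commands are illustrative, not the pattern's prescribed sequence:

```sh
# Initialize providers and modules, then create the resources in this pattern
terraform init
terraform apply -auto-approve

# Print the kubeconfig command captured in the `configure_kubectl` output, then run it
terraform output -raw configure_kubectl
```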
## Validate
- Test by listing the nodes in the cluster. You should see two Fargate nodes in the cluster.
- Provision the Karpenter `EC2NodeClass` and `NodePool` resources, which give Karpenter the configuration it needs to provision EC2 resources (see the command sketch after this list).
- Once the Karpenter resources are in place, Karpenter will provision the necessary EC2 resources to satisfy any pending pods in the scheduler's queue. You can demonstrate this with the example deployment provided. First deploy the example deployment, which has the initial number of replicas set to 0.
- When you scale the example deployment, you should see Karpenter respond by quickly provisioning EC2 resources to satisfy those pending pod requests.
- Listing the nodes should now show some EC2 compute that Karpenter has created for the example deployment:
```sh
kubectl get nodes

NAME                                               STATUS   ROLES    AGE    VERSION
fargate-ip-10-0-16-92.us-west-2.compute.internal   Ready    <none>   2m3s   v1.30.0-eks-404b9c6
fargate-ip-10-0-8-95.us-west-2.compute.internal    Ready    <none>   2m3s   v1.30.0-eks-404b9c6
ip-10-0-21-175.us-west-2.compute.internal          Ready    <none>   88s    v1.30.1-eks-e564799 # <== EC2 created by Karpenter
```
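For reference, a minimal command sketch for the steps above. The file names `karpenter.yaml` and `example.yaml` and the deployment name `inflate` are assumptions used for illustration; adjust them to match the manifests in your copy of this pattern:

```sh
# 1. List the nodes - initially only the two Fargate nodes should be present
kubectl get nodes

# 2. Provision the Karpenter EC2NodeClass and NodePool shown above
#    (assumed to be saved locally as karpenter.yaml)
kubectl apply -f karpenter.yaml

# 3. Deploy the example deployment, which starts with 0 replicas
#    (assumed to be saved locally as example.yaml)
kubectl apply -f example.yaml

# 4. Scale the example deployment so Karpenter sees pending pods
#    (assumes the deployment is named `inflate` in the default namespace)
kubectl scale deployment inflate --replicas 3

# 5. List the nodes again - an EC2 node created by Karpenter should appear
kubectl get nodes
```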
## Destroy
Scale down the example deployment to de-provision the Karpenter-created resources first:
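A minimal sketch, assuming the example deployment is named `inflate` (adjust to match your manifest):

```sh
# Scale the example workload to zero so Karpenter de-provisions the EC2 nodes it created
kubectl scale deployment inflate --replicas 0
```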
Remove the Karpenter Helm chart:

```sh
terraform destroy -target="helm_release.karpenter" -auto-approve
```

Then remove the remaining resources:

```sh
terraform destroy -target="module.eks" -auto-approve
terraform destroy -auto-approve
```
See here for more details on cleaning up the resources created.