Karpenter on EKS MNG¶
This pattern demonstrates how to provision Karpenter on an EKS managed node group. Deploying onto standard EC2 instances created by an EKS managed node group allows daemonsets to run on the nodes that host the Karpenter controller, giving you better unification of tooling across your data plane. This solution is composed of the following components:

- An EKS managed node group that applies both a taint and a label for the Karpenter controller. The Karpenter controller targets these nodes via a `nodeSelector` to prevent the controller pods from running on nodes that Karpenter itself creates and manages. In addition, a taint keeps other pods off of these nodes, since they are primarily intended for the controller pods. A toleration is applied to the CoreDNS addon so those pods can also run on the controller nodes. This is needed so that, when the cluster is created, the CoreDNS pods have a place to run, allowing the Karpenter controller to be provisioned and start managing the additional compute requirements for the cluster. Without letting CoreDNS run on these nodes, the controllers would fail to deploy and the data plane would be in a "deadlock", waiting for resources to deploy but unable to do so.
- The `eks-pod-identity-agent` addon has been provisioned to allow the Karpenter controller to utilize EKS Pod Identity for AWS permissions via an IAM role.
- The VPC subnets and node security group have been tagged with `"karpenter.sh/discovery" = local.name` for discoverability by the controller. The controller will discover these resources and use them to provision EC2 resources for the cluster.
- An IAM role for the Karpenter controller has been created with a trust policy that trusts the EKS Pod Identity service principal. This allows the EKS Pod Identity service to provide AWS credentials to the Karpenter controller pods so they can call AWS APIs.
- An IAM role for the nodes that Karpenter will create has been created along with a cluster access entry which allows the nodes to acquire permissions to join the cluster. Karpenter will create and manage the instance profile that utilizes this IAM role.
- An SQS queue has been created that is subscribed to certain EC2 CloudWatch events. This queue is used by Karpenter, allowing it to respond to certain EC2 lifecycle events and gracefully migrate pods off the instance before it is terminated.
Code¶
The areas of significance related to this pattern are highlighted in the code provided below.
Cluster¶
```hcl
################################################################################
# Cluster
################################################################################

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.24"

  cluster_name    = local.name
  cluster_version = "1.30"

  # Give the Terraform identity admin access to the cluster
  # which will allow it to deploy resources into the cluster
  enable_cluster_creator_admin_permissions = true
  cluster_endpoint_public_access           = true

  cluster_addons = {
    coredns = {
      configuration_values = jsonencode({
        tolerations = [
          # Allow CoreDNS to run on the same nodes as the Karpenter controller
          # for use during cluster creation when Karpenter nodes do not yet exist
          {
            key    = "karpenter.sh/controller"
            value  = "true"
            effect = "NoSchedule"
          }
        ]
      })
    }
    eks-pod-identity-agent = {}
    kube-proxy             = {}
    vpc-cni                = {}
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    karpenter = {
      ami_type       = "BOTTLEROCKET_x86_64"
      instance_types = ["m5.large"]

      min_size     = 2
      max_size     = 3
      desired_size = 2

      labels = {
        # Used to ensure Karpenter runs on nodes that it does not manage
        "karpenter.sh/controller" = "true"
      }

      taints = {
        # The pods that do not tolerate this taint should run on nodes
        # created by Karpenter
        karpenter = {
          key    = "karpenter.sh/controller"
          value  = "true"
          effect = "NO_SCHEDULE"
        }
      }
    }
  }

  node_security_group_tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = local.name
  })

  tags = local.tags
}

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${local.region} update-kubeconfig --name ${module.eks.cluster_name}"
}
```
Karpenter Resources¶
```hcl
locals {
  namespace = "karpenter"
}

################################################################################
# Controller & Node IAM roles, SQS Queue, EventBridge Rules
################################################################################

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 20.24"

  cluster_name = module.eks.cluster_name

  enable_v1_permissions = true
  namespace             = local.namespace

  # Name needs to match role name passed to the EC2NodeClass
  node_iam_role_use_name_prefix   = false
  node_iam_role_name              = local.name
  create_pod_identity_association = true

  tags = local.tags
}

################################################################################
# Helm charts
################################################################################

resource "helm_release" "karpenter" {
  name                = "karpenter"
  namespace           = local.namespace
  create_namespace    = true
  repository          = "oci://public.ecr.aws/karpenter"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "1.0.2"
  wait                = false

  values = [
    <<-EOT
    nodeSelector:
      karpenter.sh/controller: 'true'
    settings:
      clusterName: ${module.eks.cluster_name}
      clusterEndpoint: ${module.eks.cluster_endpoint}
      interruptionQueue: ${module.karpenter.queue_name}
    tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - key: karpenter.sh/controller
        operator: Exists
        effect: NoSchedule
    webhook:
      enabled: false
    EOT
  ]

  lifecycle {
    ignore_changes = [
      repository_password
    ]
  }
}
```
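The Helm release above references an ECR Public authorization token. A minimal sketch of the data source it assumes is shown here; ECR Public only issues auth tokens from `us-east-1`, so the provider alias used is an assumption about how the pattern wires this up:

```hcl
# Assumed: a us-east-1 provider alias, since ECR Public auth tokens
# can only be issued from the us-east-1 region
data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.virginia
}
```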
```yaml
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
    - alias: bottlerocket@latest
  role: ex-karpenter-mng
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter-mng
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: ex-karpenter-mng
  tags:
    karpenter.sh/discovery: ex-karpenter-mng
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: "karpenter.k8s.aws/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "karpenter.k8s.aws/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
        - key: "karpenter.k8s.aws/instance-hypervisor"
          operator: In
          values: ["nitro"]
        - key: "karpenter.k8s.aws/instance-generation"
          operator: Gt
          values: ["2"]
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 30s
```
VPC¶
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
enable_nat_gateway = true
single_nat_gateway = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
# Tags subnets for Karpenter auto-discovery
"karpenter.sh/discovery" = local.name
}
tags = local.tags
}
Deploy¶
See here for the prerequisites and steps to deploy this pattern.
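For reference, the high-level Terraform flow is sketched below; the linked page is the authoritative source for prerequisites and any required apply ordering:

```sh
terraform init
terraform apply
```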
Validate¶
- Test by listing the nodes in the cluster. You should see the two nodes provisioned by the EKS managed node group:
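For example, assuming your kubeconfig has already been updated via the `configure_kubectl` output above:

```sh
kubectl get nodes
```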
- Provision the Karpenter `EC2NodeClass` and `NodePool` resources, which give Karpenter the configuration it needs to provision EC2 resources:
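A sketch of this step, assuming the `EC2NodeClass` and `NodePool` manifests shown above are saved in a file named `karpenter.yaml` (the filename is an assumption):

```sh
# Apply the Karpenter EC2NodeClass and NodePool manifests
# (karpenter.yaml is an assumed filename)
kubectl apply -f karpenter.yaml
```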
- Once the Karpenter resources are in place, Karpenter will provision the necessary EC2 resources to satisfy any pending pods in the scheduler's queue. You can demonstrate this with the example deployment provided. First, deploy the example deployment, which has its initial replica count set to 0:
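For example, assuming the example deployment manifest is saved as `example.yaml` (an assumed filename):

```sh
# Deploy the example workload with 0 replicas; no new nodes are needed yet
kubectl apply -f example.yaml
```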
- When you scale the example deployment, you should see Karpenter respond by quickly provisioning EC2 resources to satisfy those pending pod requests:
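A sketch of the scale-up, assuming the example deployment is named `inflate` (an assumed name):

```sh
# Scaling up creates pending pods, which trigger Karpenter to launch capacity
kubectl scale deployment inflate --replicas 3
```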
- Listing the nodes should now show some EC2 compute that Karpenter has created for the example deployment:

```sh
kubectl get nodes

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-23-32.us-west-2.compute.internal    Ready    <none>   10m   v1.30.4-eks-a737599
ip-10-0-46-239.us-west-2.compute.internal   Ready    <none>   20s   v1.30.1-eks-e564799 # <== EC2 created by Karpenter
ip-10-0-6-222.us-west-2.compute.internal    Ready    <none>   10m   v1.30.4-eks-a737599
```
Destroy¶
Scale down the example deployment to de-provision the Karpenter-created resources first:
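One way to do this, assuming the example deployment is named `inflate` (an assumed name):

```sh
# Removing the workload lets Karpenter consolidate and terminate the nodes it created
kubectl delete deployment inflate
```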
Remove the Karpenter Helm chart:

```sh
terraform destroy -target="helm_release.karpenter" -auto-approve
```

Remove the remaining infrastructure:

```sh
terraform destroy -target="module.eks" -auto-approve
terraform destroy -auto-approve
```
See here for more details on cleaning up the resources created.