Creating self-managed nodes for an EKS cluster

Learn&Grow
5 min read · Jun 8, 2024


An EKS cluster can run both managed and self-managed node groups. The purpose of a self-managed node group is to gain complete control over the infrastructure where the Kubernetes workloads run. For such use cases, you can follow the steps in this document.

When you create a self-managed node group,

  • you are responsible for all maintenance and upgrades of the nodes
  • you get custom configurations and flexibility in the node setup
  • you can use custom AMIs and specific instance types

Disclaimer: All the steps required to add a self-managed node group are included, making this document comprehensive and detailed. This is a tested, working process.

Prerequisites:

A provisioned EKS cluster and an EC2 instance or local terminal with access to the EKS cluster.

Install kubectl based on the Kubernetes version of the cluster. Make sure you use a kubectl version that is within one minor version of your Amazon EKS cluster control plane; follow the official doc.
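
To confirm the control plane version before picking a kubectl binary, you can query the cluster (assuming the cluster name eks-test and region eu-west-1 used later in this document):

aws eks describe-cluster \
--name eks-test \
--region eu-west-1 \
--query "cluster.version" \
--output text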

# :: Based on my cluster (EKS 1.26), connected from an EC2 Ubuntu machine ::

curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.26.15/2024-04-19/bin/linux/amd64/kubectl
# checksum validation
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.26.15/2024-04-19/bin/linux/amd64/kubectl.sha256
sha256sum -c kubectl.sha256
# install kubectl
chmod +x ./kubectl
# copy the binary to a directory on $PATH
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
kubectl version --client

Install the AWS CLI so you can run the aws commands needed to connect to the EKS cluster and modify its configuration.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
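
As a quick sanity check (optional, not part of the original flow), confirm the CLI version and that your credentials resolve to the expected identity:

aws --version
aws sts get-caller-identity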

Assumptions:

  1. The EKS cluster is in a running state, and awscli and kubectl are installed.
  2. The kubeconfig has been updated to connect to the Kubernetes cluster:

aws eks update-kubeconfig --region eu-west-1 --name eks-test
ubuntu@ip-:~$ kubectl get nodes
No resources found

High-level steps to add the self-managed node group

  1. Create a node group with one or more instances using a launch template
  2. Attach the appropriate IAM roles to the nodes
  3. Ensure the nodes are able to connect to the Kubernetes cluster
  4. Enable the Kubernetes control plane to authorize the nodes joining the cluster
  5. Ensure traffic flows freely within the node group and between the nodes and the control plane

Create the IAM role that will be attached to the nodes

  • First, create trust-policy.json with the following content, allowing the EC2 service to assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com"
        ]
      }
    }
  ]
}
  • Create the IAM role and attach the policies required for EKS connectivity:
aws iam create-role \
--assume-role-policy-document file://trust-policy.json \
--role-name EKS_ROLE \
--tags Key=<input>,Value=<input>

# attach the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly and AmazonEKS_CNI_Policy

aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
--role-name EKS_ROLE

aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
--role-name EKS_ROLE


aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
--role-name EKS_ROLE
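
To confirm that all three policies are attached (an optional check):

aws iam list-attached-role-policies \
--role-name EKS_ROLE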
  • Check for the configmap ‘aws-auth’ in the kube-system namespace.
kubectl get cm aws-auth -n kube-system
# if it exists, add only the corresponding role entry to the data section

# full content of the aws-auth configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <arn of IAM role EKS_ROLE>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
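
If the configmap does not exist yet, one way is to save the manifest above to a file (assumed here as aws-auth-cm.yaml) and apply it; if it already exists, kubectl edit works as well:

# create the configmap from the manifest above
kubectl apply -f aws-auth-cm.yaml

# or edit the existing configmap in place and add the mapRoles entry
kubectl edit configmap aws-auth -n kube-system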
  • Create an instance profile to attach the roles to the nodes.
aws iam create-instance-profile \
--instance-profile-name EKS_INSTANCE_PROFILE \
--tags Key=<input>,Value=<input>

# attach the role `EKS_ROLE` to the instance profile `EKS_INSTANCE_PROFILE`

aws iam add-role-to-instance-profile \
--instance-profile-name EKS_INSTANCE_PROFILE \
--role-name EKS_ROLE
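
To verify that the role is attached to the instance profile (an optional check):

aws iam get-instance-profile \
--instance-profile-name EKS_INSTANCE_PROFILE \
--query "InstanceProfile.Roles[].RoleName" \
--output text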

Now, create the EC2 launch template to launch the nodes

  • Create the network settings for communication between the nodes and the control plane
# create ec2 security group
aws ec2 create-security-group \
--description "EKS cluster self-managed launch template" \
--group-name EKS_NODE_Launch_Template \
--vpc-id <VPC of the EKS cluster> \
--tag-specifications "ResourceType=security-group,Tags=[{Key=<input>,Value=<input>}]"

# :: output: NODE SG ID, to be used in the steps below ::

# Allow traffic within the node group
aws ec2 authorize-security-group-ingress \
--group-id <NODE SG ID> \
--source-group <NODE SG ID> \
--protocol all

# Allow traffic from the control plane to the NODE SG
aws ec2 authorize-security-group-ingress \
--group-id <NODE SG ID> \
--source-group <EKS CLUSTER SG ID> \
--protocol all

# Allowing traffic from NODE SG to CONTROLPLANE
aws ec2 authorize-security-group-ingress \
--group-id <EKS CLUSTER SG ID> \
--source-group <NODE SG ID> \
--protocol all
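
The <EKS CLUSTER SG ID> referenced above is the cluster security group that EKS creates; one way to look it up (assuming the cluster name eks-test used earlier):

aws eks describe-cluster \
--name eks-test \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" \
--output text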

Allow SSH to the nodes (if you require it):

aws ec2 authorize-security-group-ingress \
--group-id <NODE SG ID> \
--cidr <source cidr range > \
--protocol tcp \
--port 22
  • Identify the EKS optimized AMI for your Kubernetes version using the following official doc, for example via the SSM parameter shown below.
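One way to look up the AMI ID is the public SSM parameter (a sketch assuming Kubernetes 1.26, Amazon Linux 2, and the eu-west-1 region used in this walkthrough):

aws ssm get-parameter \
--name /aws/service/eks/optimized-ami/1.26/amazon-linux-2/recommended/image_id \
--region eu-west-1 \
--query "Parameter.Value" \
--output text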
  • Retrieve the EKS cluster information used in the following steps, per the official doc:
# :: fetch the endpoint for your Kubernetes API server :: #
aws eks describe-cluster \
--name eks-test \
--query "cluster.endpoint" \
--output text

# :: The certificate-authority-data for your cluster :: #

aws eks describe-cluster \
--name eks-test \
--query "cluster.certificateAuthority.data" \
--output text

# :: Kubernetes network configuration for the cluster :: #
# If you didn't specify a CIDR block when you created the cluster, then Kubernetes assigns addresses from either the 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks.
aws eks describe-cluster \
--name eks-test \
--query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" \
--output text
  • Fetch the kube-dns/CoreDNS IP for your cluster from the service running in the kube-system namespace; in my cluster it was 172.20.0.10. See the command below.
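A minimal way to read this IP (assuming the standard kube-dns service name that EKS uses for CoreDNS):

kubectl get svc kube-dns -n kube-system \
-o jsonpath='{.spec.clusterIP}'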
  • Create userdata.txt, which acts as the user data for the nodes; the base64-encoded form of this file must be used in the launch template (see the encoding step after the file content).
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash

set -ex

EKS_CLUSTER_API="<output of cluster.endpoint>"
EKS_CLUSTER_CA="<output of certificateAuthority.data>"
EKS_CLUSTER_DNS_IP="<kubedns/coredns IP>"

/etc/eks/bootstrap.sh eks-test \
--apiserver-endpoint "$EKS_CLUSTER_API" \
--b64-cluster-ca "$EKS_CLUSTER_CA" \
--dns-cluster-ip "$EKS_CLUSTER_DNS_IP" \
--container-runtime containerd \
--kubelet-extra-args '--max-pods=17' \
--use-max-pods false

--==MYBOUNDARY==--
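
The launch template expects the base64-encoded form of this file; one way to produce it on Linux (GNU coreutils; -w 0 disables line wrapping):

base64 -w 0 userdata.txt > userdata.b64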

Now we have all the necessary details for the launch template to be created and launched via an auto-scaling group.

  • Create a launch-template.json
# :: All values found in the previous sections :: #
{
  "ImageId": "<EKS optimized AMI found>",
  "InstanceType": "<instance type of your choice>",
  "UserData": "<base64 output of the userdata.txt>",
  "SecurityGroupIds": ["<NODE_SG_ID>"],
  "KeyName": "<SSH KEY to access node>",
  "IamInstanceProfile": {
    "Name": "<INSTANCE PROFILE NAME>"
  },
  "PrivateDnsNameOptions": {
    "EnableResourceNameDnsARecord": true
  },
  "Monitoring": {
    "Enabled": true
  }
}

# create launch template

aws ec2 create-launch-template \
--launch-template-name EKS_Launch_Template \
--launch-template-data file://launch-template.json \
--tag-specifications "ResourceType=launch-template,Tags=[{Key=<input>,Value=<input>}]"
  • Spin up an auto-scaling group using the launch template
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name EKS_Auto_Scaling_Group \
--launch-template LaunchTemplateName=EKS_Launch_Template,Version=1 \
--vpc-zone-identifier <subnets - I used the same as cluster> \
--tags Key=<input>,Value=<input>,PropagateAtLaunch=true \
--health-check-grace-period 15 \
--new-instances-protected-from-scale-in \
--capacity-rebalance \
--min-size <your choice> \
--max-size <your choice> \
--desired-capacity <your choice>
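
To watch the group come up before checking the nodes (an optional check, assuming the group name used above):

aws autoscaling describe-auto-scaling-groups \
--auto-scaling-group-names EKS_Auto_Scaling_Group \
--query "AutoScalingGroups[].Instances[].[InstanceId,LifecycleState,HealthStatus]" \
--output table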

Validate in the AWS console and wait for the nodes to join the cluster.

# validate the readiness of the nodes for allocation of workloads
kubectl get node

# try creating a sample pod and see if it is allocated to the node and running
kubectl create deployment nginx --image=nginx:alpine --replicas=1

# validate the pods
kubectl get pods -o wide

I hope this is useful for anyone looking for a way to host a self-managed node group in EKS. Even if you are just reading through, you should now have a clear understanding of how to set one up.
