How To Set Up an AWS EKS Kubernetes Cluster

And then came EKS

What are we doing today?

  • Deploy an EKS cluster
  • View the resources to see what was provisioned on AWS
  • Interact with Kubernetes using kubectl
  • Terminate a node and verify that the Auto Scaling Group (ASG) replaces it
  • Scale down your worker nodes
  • Run a pod on your cluster

Install Prerequisites

$ pip install awscli
$ brew update 
$ brew install kubernetes-cli
$ brew tap weaveworks/tap 
$ brew install weaveworks/tap/eksctl
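
To confirm the tools are installed and on your PATH, you can print their versions (the version numbers you see will depend on when you install):

$ aws --version
$ kubectl version --client
$ eksctl version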

Deploy EKS

$ ssh-keygen -b 2048 -f ~/.ssh/eks -t rsa -q -N ""
$ aws --profile dev --region eu-west-1 ec2 import-key-pair --key-name "eks" --public-key-material file://~/.ssh/eks.pub
$ eksctl --profile dev --region eu-west-1 create cluster --name my-eks-cluster --version 1.14 --nodes 3 --node-type t2.small --ssh-public-key eks
[ℹ] eksctl version 0.9.0
[ℹ] using region eu-west-1
[ℹ] setting availability zones to [eu-west-1a eu-west-1b eu-west-1c]
[ℹ] subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for eu-west-1b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-f27f560e" will use "ami-059c6874350e63ca9" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "my-eks-cluster" in "eu-west-1" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=my-eks-cluster'
[ℹ] CloudWatch logging will not be enabled for cluster "my-eks-cluster" in "eu-west-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=my-eks-cluster'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-eks-cluster" in "eu-west-1"
[ℹ] 2 sequential tasks: { create cluster control plane "my-eks-cluster", create nodegroup "ng-f27f560e" }
[ℹ] building cluster stack "eksctl-my-eks-cluster-cluster"
[ℹ] deploying stack "eksctl-my-eks-cluster-cluster"
[ℹ] building nodegroup stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[ℹ] --nodes-min=3 was set automatically for nodegroup ng-f27f560e
[ℹ] --nodes-max=3 was set automatically for nodegroup ng-f27f560e
[ℹ] deploying stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[+] all EKS cluster resources for "my-eks-cluster" have been created
[+] saved kubeconfig as "/Users/ruan/.kube/config"
[ℹ] adding identity "arn:aws:iam::000000000000:role/eksctl-my-eks-cluster-nodegroup-n-NodeInstanceRole-SNVIW5C3J3SM" to auth ConfigMap
[ℹ] nodegroup "ng-f27f560e" has 0 node(s)
[ℹ] waiting for at least 3 node(s) to become ready in "ng-f27f560e"
[ℹ] nodegroup "ng-f27f560e" has 3 node(s)
[ℹ] node "ip-192-168-42-186.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-75-87.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-8-167.eu-west-1.compute.internal" is ready
[ℹ] kubectl command should work with "/Users/ruan/.kube/config", try 'kubectl get nodes'
[+] EKS cluster "my-eks-cluster" in "eu-west-1" region is ready
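
Before moving on, you can also verify the control plane from the AWS side; once the cluster is up, describe-cluster should report a status of ACTIVE:

$ aws --profile dev --region eu-west-1 eks describe-cluster --name my-eks-cluster --query cluster.status --output text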

View the Provisioned Resources
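
eksctl drives everything through CloudFormation, so the two stacks (control plane and nodegroup) plus the worker EC2 instances are the main things to look at. You can browse them in the AWS console, or inspect them from the CLI; the key-name filter below is just a convenient assumption, since our workers are the only instances using the "eks" key pair we imported:

$ eksctl --profile dev --region eu-west-1 utils describe-stacks --cluster my-eks-cluster
$ aws --profile dev --region eu-west-1 ec2 describe-instances --filters "Name=key-name,Values=eks" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].[InstanceId,PrivateDnsName,InstanceType]' --output table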

Navigate using Kubectl

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-42-186.eu-west-1.compute.internal   Ready    <none>   8m50s   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal    Ready    <none>   8m55s   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal    Ready    <none>   8m54s   v1.14.7-eks-1861c5
$ kubectl get pods --namespace kube-system
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-btfbk             1/1     Running   0          11m
aws-node-c6ktk             1/1     Running   0          11m
aws-node-wf8mc             1/1     Running   0          11m
coredns-759d6fc95f-ljxzf   1/1     Running   0          17m
coredns-759d6fc95f-s6lg6   1/1     Running   0          17m
kube-proxy-db46b           1/1     Running   0          11m
kube-proxy-ft4mc           1/1     Running   0          11m
kube-proxy-s5q2w           1/1     Running   0          11m
$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         19m
kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   19m
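
eksctl already saved the kubeconfig for us (see the output above). If you are working from another machine, or need to regenerate it later, either of these commands will write it again:

$ aws --profile dev --region eu-west-1 eks update-kubeconfig --name my-eks-cluster
$ eksctl --profile dev --region eu-west-1 utils write-kubeconfig --cluster my-eks-cluster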

Testing the ASG

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-42-186.eu-west-1.compute.internal   Ready    <none>   37m   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal    Ready    <none>   37m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal    Ready    <none>   37m   v1.14.7-eks-1861c5
$ aws --profile dev ec2 describe-instances --query 'Reservations[*].Instances[?PrivateDnsName==`ip-192-168-42-186.eu-west-1.compute.internal`].[InstanceId][]' --output text
i-0d016de17a46d5178
$ aws --profile dev ec2 terminate-instances --instance-id i-0d016de17a46d5178
{
    "TerminatingInstances": [
        {
            "CurrentState": {
                "Code": 32,
                "Name": "shutting-down"
            },
            "InstanceId": "i-0d016de17a46d5178",
            "PreviousState": {
                "Code": 16,
                "Name": "running"
            }
        }
    ]
}
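
While that instance shuts down, the Auto Scaling Group created by eksctl still has a desired capacity of 3, which is why a replacement node will be launched. You can confirm this from the CLI (the name filter is an assumption based on the eksctl stack naming above):

$ aws --profile dev --region eu-west-1 autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[?contains(AutoScalingGroupName, `my-eks-cluster`)].[AutoScalingGroupName,MinSize,MaxSize,DesiredCapacity]' --output table
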
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-75-87.eu-west-1.compute.internal   Ready    <none>   41m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   41m   v1.14.7-eks-1861c5
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-42-61.eu-west-1.compute.internal   Ready    <none>   50s   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal   Ready    <none>   42m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   42m   v1.14.7-eks-1861c5
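
The replacement node takes a minute or two to register. Instead of re-running the command, you can also watch the nodes come and go:

$ kubectl get nodes --watch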

Run a Pod

$ kubectl run --rm -it --generator run-pod/v1 my-busybox-pod --image busybox -- /bin/sh
/ $ busybox | head -1
BusyBox v1.31.1 (2019-10-28 18:40:01 UTC) multi-call binary.
/ $ exit
Session ended, resume using 'kubectl attach my-busybox-pod -c my-busybox-pod -i -t' command when the pod is running
pod "my-busybox-pod" deleted

Scaling Nodes

$ eksctl --profile dev --region eu-west-1 get clusters
NAME             REGION
my-eks-cluster   eu-west-1
$ eksctl --profile dev --region eu-west-1 get nodegroup --cluster my-eks-cluster
CLUSTER          NODEGROUP     CREATED                MIN SIZE   MAX SIZE   DESIRED CAPACITY   INSTANCE TYPE   IMAGE ID
my-eks-cluster   ng-f27f560e   2019-11-16T16:55:41Z   3          3          3                  t2.small        ami-059c6874350e63ca9
$ eksctl --profile dev --region eu-west-1 scale nodegroup --cluster my-eks-cluster --nodes 1 ng-f27f560e
[ℹ] scaling nodegroup stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e" in cluster eksctl-my-eks-cluster-cluster
[ℹ] scaling nodegroup, desired capacity from 3 to 1, min size from 3 to 1
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   73m   v1.14.7-eks-1861c5
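
Scaling back up works exactly the same way; if you wanted your three workers back, you would simply raise the node count again:

$ eksctl --profile dev --region eu-west-1 scale nodegroup --cluster my-eks-cluster --nodes 3 ng-f27f560e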

Clean Up

$ eksctl --profile dev --region eu-west-1 delete cluster --name my-eks-cluster
[ℹ] eksctl version 0.9.0
[ℹ] using region eu-west-1
[ℹ] deleting EKS cluster "my-eks-cluster"
[+] kubeconfig has been updated
[ℹ] cleaning up LoadBalancer services
[ℹ] 2 sequential tasks: { delete nodegroup "ng-f27f560e", delete cluster control plane "my-eks-cluster" [async] }
[ℹ] will delete stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[ℹ] waiting for stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e" to get deleted
[ℹ] will delete stack "eksctl-my-eks-cluster-cluster"
[+] all cluster resources were deleted
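
eksctl only deletes what it created, so the EC2 key pair we imported and the local key files stay behind. If you no longer need them, remove them by hand:

$ aws --profile dev --region eu-west-1 ec2 delete-key-pair --key-name eks
$ rm ~/.ssh/eks ~/.ssh/eks.pub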

Thank You
