CloudFormation VPC with EKS

A guide to setting up an EKS cluster on AWS that follows recommended networking practices

Posted by Harry Lascelles on May 29, 2019
Here at Bamboo we like to stand on the shoulders of giants and let others run our infrastructure for us, and EKS has made running clusters on AWS easy and reliable. By following this post you will create an EKS cluster whose networking follows AWS best practices, using just one CloudFormation file and one Kubernetes template.

The much needed gap

There are other guides on the web that take you through the creation of an EKS cluster, and many example CloudFormation templates. However, as others have noted, the CloudFormation examples provided by AWS contradict their own advice on VPC partitioning.

There are other tools available for this kind of “one command” creation, such as eksctl. However, if your deploy process is based around CloudFormation, then it makes sense to keep your templates in source control (eksctl doesn’t currently allow exporting them). This is especially true if you use Service Catalog.

The aim of this guide is to deploy the simplest possible system that demonstrates a working EKS cluster with the VPC partitioned correctly. A number of components will be created:

  1. A VPC with public and private subnets. This is recommended by AWS, and it ensures your nodes are not exposed directly to the internet. In this example, a NAT gateway is also created, as your setup will likely require pulling container images (such as helm's tiller) from public repositories.

    With this setup, any services marked "type": "LoadBalancer" will have their ELBs created in the public subnets. This is better for network security: the public subnet route table can be restricted to send traffic only to the private subnets (and not to other resources such as RDS instances), and no node in your VPC needs SSH access enabled. The subnet tagging that drives this, and a sample Service, are sketched after this list.

  2. A single EKS cluster. If you wish to run more than one EKS cluster, you should follow AWS recommendations and create a new VPC for each cluster provisioned. This isolates cluster networks, helping to minimise the blast radius if something were to go wrong.

    Keeping to the rule of one cluster per VPC helps when testing cluster version upgrades. Although clusters can be upgraded in place, for mission critical stacks it would be wise to roll out a new cluster in parallel. You can then create a new VPC/EKS stack, and repoint your Route53 domain to the new ELB.

  3. A worker node autoscaling group. This manages the nodes that business logic pods will run on. The EC2 instances have a role that has no special permissions other than those required to join the cluster (i.e. AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly).
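
How does EKS know which subnets to use for load balancers? It discovers them via well-known subnet tags. The CloudFormation template is expected to set these on the Subnet resources themselves; the sketch below shows the equivalent AWS CLI calls, with placeholder subnet IDs.

# Public subnets: tagged so internet-facing ELBs are placed here
aws ec2 create-tags \
  --resources subnet-0aaaaaaaaaaaaaaaa \
  --tags Key=kubernetes.io/cluster/example-eks-cluster,Value=shared \
         Key=kubernetes.io/role/elb,Value=1

# Private subnets: tagged for internal ELBs only
aws ec2 create-tags \
  --resources subnet-0bbbbbbbbbbbbbbbb \
  --tags Key=kubernetes.io/cluster/example-eks-cluster,Value=shared \
         Key=kubernetes.io/role/internal-elb,Value=1

With those tags in place, a minimal Service of "type": "LoadBalancer" (hypothetical names below) gets its ELB in the public subnets while its pods stay on the private nodes:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo                # hypothetical service name
spec:
  type: LoadBalancer        # EKS provisions an internet-facing ELB
  selector:
    app: demo               # assumes pods labelled app=demo
  ports:
    - port: 80              # ELB listener port
      targetPort: 8080      # port the pods listen on
EOF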

Deploying the example

To get this example cluster up and running, ensure you have an EC2 SSH key pair and have made a note of its name, then perform the following:

# Clone the repository
git clone git@github.com:bambooengineering/example-cloudformation-eks.git
cd example-cloudformation-eks

# Set the target region for your cluster and the existing AWS SSH key you wish to use.
export AWS_DEFAULT_REGION="eu-west-1"
export KEY_NAME=some-key-name

# Run the script. It will take about 15 minutes.
./setup.sh
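
If you want to watch progress while the script runs, you can poll the stack status from another terminal (this assumes the stack name example-eks-cluster used in this repository):

# Optional: check provisioning progress
aws cloudformation describe-stacks \
  --stack-name example-eks-cluster \
  --query "Stacks[0].StackStatus"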

What does this script do?

The script performs a few tasks, sketched as commands after this list.

  1. The aws cloudformation create-stack command creates all the infrastructure from a single CloudFormation template. A big thanks to Amit at Spotinst for his work on which this template is based.

    The create-stack call returns immediately, but the actual provisioning of all the required resources (VPC, EKS cluster, subnets and so on) will take around 15 minutes. The immediate response from the command gives you the ARN of your new stack.

     {
         "StackId": "arn:aws:cloudformation:eu-west-1:900000000000:stack/example-eks-cluster/some-uuid"
     }
    

    The next line of the script simply waits for all the creation to finish.

  2. The connection config for the cluster is downloaded. Without it, kubectl will not know where to connect. You will see:

    Added new context
      arn:aws:eks:eu-west-1:900000000000:cluster/example-eks-cluster
      to /home/user/.kube/config
    
  3. The script then applies the aws-auth-cm.yaml template, which authorises the nodes to join the cluster (its shape is sketched after this list).

  4. It waits for the nodes to be up, and we’re done!
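
Put together, the script boils down to something like the following. This is a sketch rather than a copy of setup.sh: the template file name, stack name and exact flags are assumptions, though each command is standard AWS CLI / kubectl.

# 1. Create all the infrastructure from the single template
#    (CAPABILITY_IAM is required because the template creates IAM roles;
#    "eks.yaml" is a placeholder file name)
aws cloudformation create-stack \
  --stack-name example-eks-cluster \
  --template-body file://eks.yaml \
  --parameters ParameterKey=KeyName,ParameterValue=$KEY_NAME \
  --capabilities CAPABILITY_IAM

#    ...then block until provisioning completes (around 15 minutes)
aws cloudformation wait stack-create-complete --stack-name example-eks-cluster

# 2. Download the connection config so kubectl knows where to connect
aws eks update-kubeconfig --name example-eks-cluster

# 3. Authorise the worker nodes to join the cluster
kubectl apply -f aws-auth-cm.yaml

# 4. Wait until the nodes register and report Ready
until kubectl get nodes 2>/dev/null | grep -q ' Ready'; do sleep 5; done

As for aws-auth-cm.yaml itself, it follows the standard shape AWS documents for node authorisation: a ConfigMap in kube-system mapping the node instance role onto the groups kubelets need. A sketch, with a placeholder role ARN that in practice comes from the stack outputs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::900000000000:role/example-node-instance-role   # placeholder
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes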

Ground control to Major Tom

We can run a few basic commands to check we have established connectivity:

# Check that we have some running nodes
kubectl get nodes

# Review the running pods
kubectl get pods --all-namespaces

Note that the nodes in your VPC are completely private. However, you can still reach the running pods using kubectl (via the AWS control plane) and even run a shell on one:

POD=$(kubectl get pod -l k8s-app=aws-node -o jsonpath="{.items[0].metadata.name}" --namespace kube-system)
kubectl exec -ti $POD --namespace kube-system -- /bin/bash

# You will now be inside a running pod. Try running a simple command:
date
# => Fri 24 May 16:44:17 BST 2019

Wrapping up

The setup you have created demonstrates a cluster with AWS network recommendations applied. The setup is designed to be as simple as possible, but no simpler.

Beyond applying your business logic, there are a few ways to proceed from here:

  1. Install helm, the “package manager” for Kubernetes. Make sure you secure it correctly (a minimal sketch follows this list).
  2. Add pod authorisation using kiam. That is the subject of another post on this blog.
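
For the first of those, the minimal Helm 2 setup gives tiller its own service account instead of the default. Note that the Helm documentation recommends going further (scoping the role down and enabling TLS between helm and tiller); this sketch is only the starting point:

# Give tiller a dedicated service account
kubectl --namespace kube-system create serviceaccount tiller

# Bind it to a role (the Helm docs use cluster-admin as the simplest
# example; a production setup should use a narrower role)
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Install tiller using that service account
helm init --service-account tiller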

Good luck with your voyage into the Kubernetes world!
