2018-08-23 - Setting up Terraform for AWS EKS

 
Motivation: I want to explore the new managed Kubernetes service in AWS called AWS EKS (Elastic Kubernetes Service). To get started I will be following these steps:

Requirements:

  • AWS Account


Creating a workspace container to configure the setup of AWS EKS

Source code:
https://bitbucket.org/geircode/setting_up_aws_eks_with_terraform
 
Container Image:
https://hub.docker.com/r/geircode/aws_eks_terraform
 
Now I have a sandbox workspace that can be used anywhere. Very handy!
 

Installing Terraform in the Workspace Container

Go to Terraform.io and download the correct package.
My workspace container is running a 64-bit Debian-based Linux (it uses apt-get).

That means that I can install the 64-bit (linux_amd64) version of Terraform in my container. Hopefully.
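One quick way to verify what the container is actually running (a generic check, not from the original screenshot):

```shell
# Print kernel name and machine hardware.
# "Linux x86_64" means the linux_amd64 (64-bit) Terraform build is the right one.
uname -sm
```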
 
Install Terraform by inserting this into the Dockerfile:

ENV TERRAFORM_VERSION=0.11.8
RUN apt-get update && apt-get -y install openssl unzip wget && \
    cd /tmp && \
    wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /usr/bin && \
    rm -rf /tmp/* && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/tmp/*



Build the new Container Image and execute the "docker-compose.up.bat" file.

Ok. The Terraform executable is installed in the container and seems to be working.

Using Terraform to create an EC2 instance

 
Following the Pluralsight course, I am using the exercise files. In this case, ModuleOne, which creates an EC2 instance.
moduleone.tf
First, we need to find or create the variable values in AWS:

##################################################################################
# VARIABLES
##################################################################################
 
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "private_key_path" {}
variable "key_name" {
  default = "PluralsightKeys"
}
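Terraform will prompt for any variable without a value, but they can also be put in a "terraform.tfvars" file next to "moduleone.tf", which Terraform loads automatically. All values below are placeholders:

```hcl
# terraform.tfvars - never commit this file to Git
aws_access_key   = "<your access key id>"
aws_secret_key   = "<your secret access key>"
private_key_path = "./terraform-keypair-001.pem"
key_name         = "terraform-keypair-001"
```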


 
This article: https://hackernoon.com/introduction-to-aws-with-terraform-7a8daf261dc0 describes how to create the values for these variables.



Now I have the values for:


variable "aws_access_key" {}
variable "aws_secret_key" {}


but what is the:


variable "private_key_path" {}
 
To create "private_key_path", go to the EC2 console and create a new pair of keys:
https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#KeyPairs:sort=keyName

Click on "Create key Pair"

And this will trigger a download of the private key "terraform-keypair-001.pem" file to the computer.
Keep it safe. Keep it hidden. And so on.
 
Unrelated Question: How do I create a sharable public Container Workspace that is also using secrets?
Normally, these secrets are stored directly on the laptop or in the Environment of the Container. So how can I insert the secrets without storing them in the Git repository?
 
Perhaps I can mount a read-only volume from the Docker Host containing the secrets?
Turns out it was even easier using "docker secrets". First, create a file outside the repository somewhere and add the secrets there. Then add this into the docker-compose definition like so:
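A sketch of what that looks like in the compose file (the file path and secret name are my assumptions):

```yaml
# docker-compose.yml (excerpt)
services:
  workspace:
    secrets:
      - terraform_keypair_001_pem

secrets:
  terraform_keypair_001_pem:
    # The file lives outside the repository, so it never ends up in Git
    file: ../secrets/terraform_keypair_001_pem
```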

This will of course only work when running the container locally, because of the relative path. If I wanted to run this container in the cloud, then I would manage these secrets differently. Different clouds have different ways to do this.
 
Next, I create my own version of the "moduleone.tf" file and configure it to point to these secrets that are added to the Container in "/var/run/secrets".
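For example, the private key path then becomes a path under "/var/run/secrets" (the secret name is my assumption):

```hcl
# terraform.tfvars (excerpt) inside the container
private_key_path = "/var/run/secrets/terraform_keypair_001_pem"
```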
 
Ok, enough dillydally. Onwards!
 
Apparently, the image provided in the Pluralsight course is not available in Ireland (eu-west-1). So my next question is:

How to find a suitable AWS AMI?


  • Go to AWS Marketplace
  • Search for "Ubuntu",
    • filter on "Software Pricing Plans" and select "free"
    • filter on "Instance Type => Micro Instances (Free Tier)". Select "t2.micro".
  • Choose whatever image. I chose "Ubuntu 18.04 LTS – Bionic"
  • Click on "Continue to Subscribe" and Accept Terms.
  • Click on "Continue to Configuration"

  • Select Region "EU (Ireland)", and copy the "Ami Id" to clipboard
  • Go back to the terraform file and update the value for "ami"
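As an alternative to clicking through the Marketplace (my assumption, not part of the steps above), Terraform can look the AMI up itself with an "aws_ami" data source, using Canonical's AWS account as the owner:

```hcl
# Look up the newest 64-bit Ubuntu 18.04 LTS AMI in the current region
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
}
```

The instance resource can then use ami = "${data.aws_ami.ubuntu.id}" instead of a hard-coded ID.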


 
Now the terraform module should be ready to run.
Execute "terraform apply" in the same directory as the terraform file and see what happens.
 
If you are getting this:

then check this solution:
To connect to the instance with SSH, go to the EC2 console and right-click on the running instance:

And this will open a popup:

Copy the example into the Workspace Container, and update the location of the private key:
 
ssh -i /var/run/secrets/terraform_keypair_001_pem ubuntu@ec2-34-244-137-248.eu-west-1.compute.amazonaws.com
ssh: connect to host ec2-34-244-137-248.eu-west-1.compute.amazonaws.com port 22: Connection refused
Why can't I connect?
Turns out my default VPC in AWS needs to be configured for SSH.

How to configure AWS VPC to connect to SSH


Open AWS Console and navigate to the running EC2 instance and open the associated "Security group":

Add Rule for SSH:
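The same rule expressed in Terraform, if the security group were managed there too (the resource names are my assumptions):

```hcl
# Ingress rule allowing SSH from anywhere (tighten the CIDR in real use)
resource "aws_security_group_rule" "allow_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.instance.id}"
}
```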

 
Now that the VPC is open for SSH, I can try to connect again with SSH:

But the "secret" file does not have the correct file mode.
Apparently, I need to set the mode in the docker-compose file:
https://docs.docker.com/compose/compose-file/#long-syntax-2
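With the long syntax from that reference, the secret attachment in the service becomes (the secret name is my assumption):

```yaml
# docker-compose.yml (excerpt) - long syntax for a service secret
services:
  workspace:
    secrets:
      - source: terraform_keypair_001_pem
        mode: 0400 # readable by the owner only, which ssh insists on
```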
 

Create EKS cluster

I will be creating Terraform based on this example:
https://www.terraform.io/docs/providers/aws/r/eks_cluster.html
 
First, we need a "role_arn" and this can be created in the AWS Console:
https://console.aws.amazon.com/iam/home
Go to "Roles":
 



Yay. I have created a new role that will be used to create AWS EKS.
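For reference, roughly the same role expressed in Terraform (the role name is my assumption; the trust policy and the two managed policies are what EKS requires):

```hcl
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster-role"

  # Allow the EKS service to assume this role
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = "${aws_iam_role.eks_cluster.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "eks_service_policy" {
  role       = "${aws_iam_role.eks_cluster.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
}
```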
 
 
My first attempt at creating the cluster failed, and this is why: EKS is not yet available in Ireland (eu-west-1), but US East is.
 
Btw, here is a guide to setup AWS EKS from the AWS Console:
https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/
 
Since Ireland is not supported yet, I need to update the Terraform script to use this region:

provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "${var.shared_credentials_file}"
  profile                 = "terraform"
}


 
That was easy. :) I added a tag in the default VPC in that region with: {my-vpc-key = vpc-eks}
 
Turns out that the only supported availability zones are [us-east-1a, us-east-1c, us-east-1d].
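Putting the pieces together, the cluster resource ends up roughly like this (the cluster name, the lookup via the VPC tag from above, and a variable holding the ARN of the role created earlier are all my assumptions):

```hcl
# Find the tagged default VPC and its subnets
data "aws_vpc" "eks" {
  tags {
    my-vpc-key = "vpc-eks"
  }
}

data "aws_subnet_ids" "eks" {
  vpc_id = "${data.aws_vpc.eks.id}"
}

variable "eks_role_arn" {} # ARN of the IAM role created in the Console

resource "aws_eks_cluster" "eks" {
  name     = "geircode-eks"
  role_arn = "${var.eks_role_arn}"

  vpc_config {
    # Subnets must be in supported AZs: us-east-1a, us-east-1c, us-east-1d
    subnet_ids = ["${data.aws_subnet_ids.eks.ids}"]
  }
}
```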
 
Running the terraform apply again:

Yay. I've got a cluster running! In the USA.
 
Looking into the AWS Console:

 
Interesting. The only thing I can do from the Console is to delete the cluster. This means that I need to configure AWS EKS entirely from Terraform.
 
What's next?

  • Run a Container Image on my new Cluster