Rakam-API Terraform Installation

This script runs against your cloud provider; only AWS is supported at the moment. The following dependencies are assumed to be already installed.

The script consists of several deployments, as follows:

Post-installation steps:


STEP 0: Set the Terraform backend

To store your tfstate files safely, set up your backend in backend.tf. Running terraform init will also initialize your backend along with the required modules.
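
The exact backend is up to you; as a minimal sketch, an S3 backend block in backend.tf could look like the following (the bucket, key, and region values are placeholders, not values shipped with the project):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # placeholder: your own S3 bucket
    key    = "rakam/terraform.tfstate" # placeholder: path of the state object
    region = "us-east-1"               # placeholder: the bucket's region
  }
}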

STEP 1: Open provider.tf and make the following changes:

provider "aws" {
  region     = "${var.aws_region}"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

STEP 2: Rename variables.example.tf to variables.tf and adjust the variables if needed:
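
The exact variable set depends on the release; based on the variables referenced elsewhere in this guide (aws_region, instance-type, instance-capacity), variables.tf contains entries roughly like the sketch below. The defaults are placeholders:

variable "aws_region" {
  default = "us-east-1"   # placeholder region
}

variable "instance-type" {
  default = "t3.medium"   # placeholder worker instance type
}

variable "instance-capacity" {
  default = 2             # placeholder worker count
}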

STEP 3: Put your license key in the same folder.

This project uses Rakam's private container registry. Copy license.json into the same directory as the .tf files.
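
For example, assuming the license file was delivered to your home directory (the source path here is hypothetical):

cp ~/license.json ./license.json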

STEP 4: terraform init: Download the required modules

Change your directory to the Terraform scripts folder and execute terraform init. This will install the required official Terraform modules.
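
For example (the directory name is a placeholder for wherever you checked the scripts out):

cd path/to/terraform-scripts
terraform init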

(OPTIONAL) STEP: terraform plan: See what will happen

Execute terraform plan; this will show you a detailed list of the resources to be created in the next step.
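
Plan does not change anything by itself. If you want to apply exactly what you reviewed, you can save the plan with Terraform's standard -out flag and feed it to apply later:

terraform plan -out=tfplan
terraform apply tfplan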

STEP 5: terraform apply: Provisioning phase.

Provisioning starts when you run terraform apply. You will see the same plan output as with terraform plan, but this time you are asked to confirm it. Upon completion, which takes about 30 minutes, two state files, terraform.tfstate and terraform.tfstate.backup, will be created.
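
A plain run looks like this; answering anything other than "yes" at Terraform's standard confirmation prompt aborts without changes:

terraform apply
# Terraform prints the plan, then asks:
#   Do you want to perform these actions?
#   Enter a value: yes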

⚠️⚠️⚠️ Store the .tfstate files in a secure location. If the state files are lost, all resources will have to be mapped manually or provisioned again on the next update.

STEP 6: Validate your ACM certificate.

The terraform apply command outputs various pieces of information. cert-dns shows you the DNS record required to validate the provisioned certificate.

cert-dns = [
  {
    "domain_name" = "testelb.rakam.io"
    "resource_record_name" = "_a1aa2b7b49c946d1485999c517cfd45c.testelb.rakam.io."
    "resource_record_type" = "CNAME"
    "resource_record_value" = "_a921985328036837a7a63a1e66d10d4b.kirrbxfjtw.acm-validations.aws."
  },
]

For the example given above, create a CNAME record from _a1aa2b7b49c946d1485999c517cfd45c.testelb.rakam.io to _a921985328036837a7a63a1e66d10d4b.kirrbxfjtw.acm-validations.aws. Grab a freshly ground coffee ☕️; validation might take a few minutes depending on DNS propagation.
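
You can verify that the record has propagated with dig, using the values from the example output above:

dig +short CNAME _a1aa2b7b49c946d1485999c517cfd45c.testelb.rakam.io
# expected: _a921985328036837a7a63a1e66d10d4b.kirrbxfjtw.acm-validations.aws.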

STEP 7: Connect worker nodes to the EKS cluster.

Run the ./configure script located in the main directory. You may need to execute chmod +x ./configure first to make the file executable. The script first writes the kube config file at ~/.kube/config; its second step assigns the worker pools to the EKS cluster.
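
In full:

chmod +x ./configure   # only needed the first time
./configure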


(OPTIONAL) STEP: Install Kubernetes Dashboard UI

Run the following command from the main directory to install the Kubernetes web UI:

cd ./kubernetes-web-ui && chmod +x ./configure.sh && ./configure.sh

To connect to your cluster, execute the connect script as follows:

chmod +x ./connect.sh && ./connect.sh

This will output a temporary service-account token to log in to the UI:

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      RANDOM_TOKEN...
Starting to serve on 127.0.0.1:8001

The connect.sh script creates a local port forward on port 8001. Navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login and log in using the token.
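
The "Starting to serve on 127.0.0.1:8001" line above is the standard kubectl proxy message, so if you ever need to reconnect without the script, the manual equivalent is presumably along these lines (the exact secret name depends on the installation):

kubectl -n kubernetes-dashboard get secret                      # find the service-account token secret
kubectl -n kubernetes-dashboard describe secret <secret-name>   # copy the token value
kubectl proxy                                                   # serves the dashboard on 127.0.0.1:8001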


Scaling Up/Down, Changing the Instance Type

By altering the instance-capacity variable you can manually scale the cluster up or down. If you change the instance-type variable and run terraform apply, the autoscaling group's default instance type will change. However, it will not roll out new nodes on its own.
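
For example, to scale the worker pool without editing variables.tf, you can override the variable on the command line (the value 4 is illustrative):

terraform apply -var 'instance-capacity=4'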

You have to go to your AWS console, navigate to EC2 -> Auto Scaling Groups -> terraform-eks-rakam, and open the Instances tab. Select an old instance and detach it from the group, ticking "Add a new instance to the Auto Scaling group to balance the load" so that a replacement is launched (do this step one node at a time).
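
The same detach-and-replace can be done with the AWS CLI; the instance ID below is a placeholder:

aws autoscaling detach-instances \
  --auto-scaling-group-name terraform-eks-rakam \
  --instance-ids i-0123456789abcdef0 \
  --no-should-decrement-desired-capacity
# leaving the desired capacity unchanged makes the group launch a replacement node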


Note that when changing the instance-type, old nodes will not be terminated automatically; after completing the step above for each node, you may terminate the old instances. When only changing the instance-capacity, however, you don't have to drain or terminate any instances. The autoscaling group policies handle this automatically.
