Cloud Native High-Availability - Hands-On Lab

Introduction

This project is a hands-on lab demonstrating how to add basic HA to a cloud-native application. In this lab we will add and test an NGINX server, used as a proxy to load-balance requests between two instances of a web application.

Why is high-availability important for Cloud Native applications?

When it comes to applications deployed in the cloud, perhaps the most fundamental non-functional question is: "How do I ensure that my application keeps running even if something fails?" In the cloud you cannot guarantee the availability of the individual components, so you have to design for failure.

High-availability architecture for Cloud Native

To ensure the availability of your cloud-native application, your architecture has to account for high availability. Here are some architectural best practices:

Of course, it is very important to understand the business and technical requirements for high availability in order to design the right architecture. There is no "one-size-fits-all" solution!

Hands-on lab description

Hands-on lab architecture

For this limited hands-on lab, we will use a simplified architecture:

Prerequisites

Summary of the hands-on labs steps

The main steps of this lab are:

  1. Edit the NGINX load-balancing configuration file
  2. Deploy the NGINX configuration file to your Kubernetes cluster
  3. Deploy NGINX to your Kubernetes cluster
  4. Check load balancing
  5. Simulate a problem with one of your application instances
  6. Validate continuity
  7. Restart the stopped instance
  8. Check load balancing

1 - Edit the nginx load balancing configuration file

git clone https://github.com/ibm-cloud-architecture/refarch-cloudnative-nginx
cd refarch-cloudnative-nginx
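
The file to edit is the nginx.conf in the cloned repository. As a point of reference, a minimal round-robin load-balancing configuration looks like the sketch below; the upstream name and server addresses are placeholders for illustration, not values from the repository, and must be replaced with the addresses of your own application instances:

```nginx
# Hypothetical nginx.conf sketch: round-robin load balancing across two
# application instances (names and addresses are placeholders)
events { }

http {
    upstream webapp {
        # replace with the addresses of your two application instances
        server instance1.example.internal:8080;
        server instance2.example.internal:8080;
    }

    server {
        listen 80;
        location / {
            # forward every request to the upstream group;
            # NGINX cycles through the servers in order by default
            proxy_pass http://webapp;
        }
    }
}
```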

2 - Deploy nginx configuration file to your kubernetes cluster

kubectl create configmap nginx-config --from-file=nginx.conf
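
The ConfigMap is how nginx.conf reaches the container. For orientation, a pod can mount it roughly as in the sketch below; this is illustrative only (the image and mount path are assumptions), and the nginx-pod.yaml shipped with the repository is authoritative:

```yaml
# Sketch: mounting the nginx-config ConfigMap as /etc/nginx/nginx.conf
# (illustrative only; use the repository's nginx-pod.yaml in the lab)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf      # mount the single file, not a directory
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-config       # the ConfigMap created in step 2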

3 - Deploy nginx to your kubernetes cluster

kubectl create -f nginx-pod.yaml
kubectl expose po nginx --type=NodePort
( kubectl get nodes | grep -v NAME | awk '{print $1}'; echo ":"; kubectl get services | grep nginx | sed 's/.*:\([0-9][0-9]*\)\/.*/\1/g') | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n//g'

You should obtain a URL like this one:

184.172.112.213:32659
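
The one-liner above stitches together the node address and the service's NodePort. An equivalent, easier-to-read approach is to fetch each value with kubectl's jsonpath output; in the sketch below, the sample values from this lab are hard-coded so it runs standalone:

```shell
# Build the NODE_ADDRESS:NODE_PORT URL for the exposed nginx service.
# In a live cluster you would fetch the values with kubectl, e.g.:
#   node_addr=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
#   node_port=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
# Hard-coded sample values from this lab so the sketch runs without a cluster:
node_addr="184.172.112.213"
node_port="32659"
echo "${node_addr}:${node_port}"
```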

4 - Check load balancing

Open the URL in a browser and refresh several times: the responses should alternate between Instance1 and Instance2.
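
The alternation happens because NGINX's default upstream algorithm is round-robin: successive requests cycle through the configured servers in order. A tiny offline sketch of that behavior (no live requests; the instance names are just labels):

```shell
# Illustrate round-robin: successive requests alternate across two backends
backends=("Instance1" "Instance2")
for i in 0 1 2 3; do
  # request i+1 is served by backend (i mod 2)
  echo "request $((i+1)) -> ${backends[$((i % 2))]}"
done
```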

5 - Simulate a problem with one of your application instances

kubectl scale --replicas=0 deploy/bluecompute-web-deployment

6 - Validate continuity
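
One way to validate continuity is to poll the NGINX endpoint repeatedly and count failed responses while one instance is down. The sketch below is offline: `fetch` is a placeholder standing in for a real request such as `curl -s -o /dev/null -w '%{http_code}' "$APP_URL"` (where `APP_URL` is the URL from step 3):

```shell
# Poll the endpoint N times and count non-200 responses.
# fetch() is a placeholder so the sketch runs without a cluster;
# replace its body with a real curl call against your app URL.
fetch() { echo 200; }

failures=0
for i in 1 2 3 4 5; do
  code=$(fetch)
  [ "$code" = "200" ] || failures=$((failures + 1))
done
echo "failures: $failures"
```

With one of the two instances stopped, every request should still succeed, because NGINX routes all traffic to the remaining healthy instance.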

7 - Restart the stopped instance

kubectl scale --replicas=1 deploy/bluecompute-web-deployment

8 - Check load balancing