There are multiple strategies you can use to introduce new service versions into an existing product. In this article I will go over how to implement Canary deployments using Istio in a k8s cluster. The same approach can be used to implement Blue/Green deployments too.
- Canary deployments: Route a small percentage of traffic to the new version instead of shifting all traffic at once. This gives you a chance to expose a small sample of real production requests to the new version, collect data, and then decide on the next course of action.
- Blue-Green deployments: You maintain two production environments. One environment (say Blue) is currently serving production traffic; the other, Green, is idle. You deploy and test a new service version in the idle Green environment. Once you are satisfied, you reroute traffic from the current production Blue environment to the new Green environment. One added advantage is that if issues do come up with the rollout to Green, you can quickly revert to Blue. I must add that in today's dynamic on-demand cloud architectures you do not need to keep a second production environment always available; just bring one up when you have a new version to deploy. This also means you need a solid Infrastructure as Code strategy using tools such as Terraform (or cloud-provider-specific frameworks such as CloudFormation for AWS, Azure Resource Manager templates, or GCP Cloud Deployment Manager).
- Mirroring deployments: You mirror production traffic to another parallel environment. This is especially useful if you intend to implement dark launches of products prior to actually releasing them, and very useful for legacy modernization projects where you want to rebuild pieces of an existing system in a new architecture and test them against production traffic as you make the transformation journey. A minimal Istio mirroring sketch follows this list.
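To make the mirroring idea concrete, here is a minimal sketch of what an Istio VirtualService with traffic mirroring can look like. The names follow the myapp/v1/v2 setup used later in this article, but this particular VirtualService is an illustration only and is not one of the files applied below.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-mirror
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1        # all live traffic is still served by v1
    mirror:
      host: myapp
      subset: v2          # each request is copied to v2; mirrored responses are discarded

A Blue/Green cutover uses the same machinery in its simplest form: a VirtualService whose route weights flip from 100/0 to 0/100 in a single step.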
Prerequisites
I am running this on a Mac. Some things may (probably will) differ for other operating systems.
- Install Docker – See instructions for Mac at https://docs.docker.com/docker-for-mac/install/
- Install kubectl – See instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
- Install minikube – https://kubernetes.io/docs/tasks/tools/install-minikube/
Start minikube
# start minikube; feel free to increase the memory and CPUs if you can afford it
minikube start --memory=8192 --cpus=4

# to get access to the minikube-provided load balancer
minikube tunnel

# keep the tunnel terminal open
Install Istio
See instructions at https://istio.io/docs/setup/kubernetes/install/kubernetes/
# Download Istio
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.4 sh -

# Add Istio to your path
export ISTIO_HOME=<your istio folder>
export PATH=$ISTIO_HOME/bin:$PATH

# Stay in the Istio home folder and run
cd $ISTIO_HOME
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
kubectl apply -f install/kubernetes/istio-demo.yaml
Verify Istio installation
$ kubectl get svc -n istio-system
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                      AGE
grafana                  ClusterIP      10.111.207.130   <none>          3000/TCP                                                     43m
istio-citadel            ClusterIP      10.103.188.104   <none>          8060/TCP,15014/TCP                                           43m
istio-egressgateway      ClusterIP      10.106.105.205   <none>          80/TCP,443/TCP,15443/TCP                                     43m
istio-galley             ClusterIP      10.103.153.108   <none>          443/TCP,15014/TCP,9901/TCP                                   43m
istio-ingressgateway     LoadBalancer   10.96.195.128    10.96.195.128   15020:31249/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31846/TCP,15030:30838/TCP,15031:30572/TCP,15032:30145/TCP,15443:31279/TCP   43m
istio-pilot              ClusterIP      10.99.97.185     <none>          15010/TCP,15011/TCP,8080/TCP,15014/TCP                       43m
istio-policy             ClusterIP      10.105.103.36    <none>          9091/TCP,15004/TCP,15014/TCP                                 43m
istio-sidecar-injector   ClusterIP      10.102.30.47     <none>          443/TCP                                                      43m
istio-telemetry          ClusterIP      10.102.230.97    <none>          9091/TCP,15004/TCP,15014/TCP,42422/TCP                       43m
jaeger-agent             ClusterIP      None             <none>          5775/UDP,6831/UDP,6832/UDP                                   43m
jaeger-collector         ClusterIP      10.99.24.79      <none>          14267/TCP,14268/TCP                                          43m
jaeger-query             ClusterIP      10.106.164.187   <none>          16686/TCP                                                    43m
kiali                    ClusterIP      10.104.232.50    <none>          20001/TCP                                                    43m
prometheus               ClusterIP      10.110.95.52     <none>          9090/TCP                                                     43m
tracing                  ClusterIP      10.108.217.222   <none>          80/TCP                                                       43m
zipkin                   ClusterIP      10.105.133.14    <none>          9411/TCP                                                     43m
Look at the istio-ingressgateway line above. After starting minikube we created a tunnel to get access to the minikube-provided load balancer; this tunnel runs from your local machine to the k8s cluster, and the EXTERNAL-IP is the means to reach your service. If you kill the tunnel (which you should have open in a separate terminal), the EXTERNAL-IP will change to <pending>. Bring it back up and voilà, you have an external IP assigned again.
Verify Istio Pods are deployed
Validate that an istio-system namespace exists and list the pods in that namespace.
$ kubectl get ns
NAME              STATUS   AGE
default           Active   3h57m
istio-system      Active   3h55m
kube-node-lease   Active   3h57m
kube-public       Active   3h57m
kube-system       Active   3h57m

$ kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6575997f54-5crp9                  1/1     Running     0          46m
istio-citadel-7fff5797f-tgrhc             1/1     Running     0          46m
istio-cleanup-secrets-1.2.4-zfvtw         0/1     Completed   0          47m
istio-egressgateway-7b7787749-dgtcp       1/1     Running     0          47m
istio-galley-74d4d7b4db-6n7vn             1/1     Running     0          47m
istio-grafana-post-install-1.2.4-69jlg    0/1     Completed   0          47m
istio-ingressgateway-78db96dfc7-h76nq     1/1     Running     0          46m
istio-pilot-7f5bc44868-gcsks              2/2     Running     0          46m
istio-policy-589f98d49b-jrxkj             2/2     Running     2          46m
istio-security-post-install-1.2.4-nnksd   0/1     Completed   0          47m
istio-sidecar-injector-578bfd76d7-8xvvz   1/1     Running     0          46m
istio-telemetry-6cf9fffdbc-5wtrr          2/2     Running     1          46m
istio-tracing-555cf644d-4r4xp             1/1     Running     0          46m
kiali-6cd6f9dfb5-cqwcq                    1/1     Running     0          46m
prometheus-7d7b9f7844-mfw28               1/1     Running     0          46m
Make sure that the pods are running (or marked as Completed) before moving to the next step.
Istio Service Mesh Configuration for traffic routing
# Label the namespace with istio-injection=enabled.
# Hereafter any pods started in this namespace will
# have Envoy sidecars injected automatically.
$ kubectl label namespace default istio-injection=enabled

# Check which namespaces have istio injection enabled
$ kubectl get namespace -L istio-injection

$ kubectl create -f myapp.yaml --validate=false
service "myapp" created
deployment "myapp-v1" created
deployment "myapp-v2" created

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   55m
myapp        ClusterIP   10.104.132.89   <none>        80/TCP    40m

# Get the list of pods with your two service versions running.
# Make sure the status is Running before moving to the next step.
$ kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-6cc48d95f-2ns64    2/2     Running   0          1m
myapp-v2-6b9f59c74f-xzlhk   2/2     Running   0          1m

# To test whether your individual deployments are working,
# use this port-forward trick (CTRL+C after testing)
$ kubectl port-forward deployment/myapp-v1 8081:80
$ curl localhost:8081
# CTRL+C the port-forward terminal
$ kubectl port-forward deployment/myapp-v2 8081:80
$ curl localhost:8081
# CTRL+C the port-forward terminal

$ kubectl apply -f app-gateway.yaml
gateway.networking.istio.io/app-gateway created
destinationrule.networking.istio.io/myapp created
virtualservice.networking.istio.io/myapp created
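The two files applied above are not listed in this article, so here is a minimal sketch of what they plausibly contain. The resource names match the kubectl output above, but the ports, labels, and 50/50 weights are assumptions inferred from that output and from the roughly even Blue/Green split seen later; treat this as an illustration rather than the author's actual files.

# myapp.yaml (sketch): one Service fronting two Deployments that
# differ only in their version label and image tag
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp              # matches pods of both versions
  ports:
  - name: http
    port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1         # the DestinationRule subsets key off this label
    spec:
      containers:
      - name: myapp
        image: mattazoid/hello:v1
        ports:
        - containerPort: 80
# myapp-v2 is identical except for the name, the version: v2 label, and the :v2 image tag

# app-gateway.yaml (sketch): Gateway + DestinationRule + VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 50            # starting canary split; adjusted later in the article
    - destination:
        host: myapp
        subset: v2
      weight: 50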
Access the service
You ran this command previously; repeating it here. Note down the EXTERNAL-IP address.
$ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)   AGE
... removed for clarity
istio-ingressgateway   LoadBalancer   10.96.195.128   10.96.195.128   15020:31249/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31846/TCP,15030:30838/TCP,15031:30572/TCP,15032:30145/TCP,15443:31279/TCP   43m
... removed for clarity
In my case I can access the service at 10.96.195.128
$ while true; do curl -s 10.96.195.128; sleep 1; done
Hello Green World!
Hello Green World!
Hello Green World!
Hello Blue World!
Hello Green World!
Hello Blue World!
Hello Blue World!
Hello Green World!
Hello Blue World!
Route more traffic to v2…
Edit the app-gateway.yaml and adjust the route weights for v1 to 10 and v2 to 90.
$ kubectl apply -f app-gateway.yaml
Now when you access the service, most requests will route to v2. This is how you adjust traffic routing between your old and new service versions.
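For reference, using the assumed file from the sketch earlier, the edited route section of the VirtualService would look something like this:

  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 10            # only 10% of requests still hit the old version
    - destination:
        host: myapp
        subset: v2
      weight: 90            # the canary now takes the bulk of the traffic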
Cleanup…
$ kubectl delete -f app-gateway.yaml
$ kubectl delete -f myapp.yaml
$ minikube stop

# to completely delete all the work you just did, run
# minikube delete
Most developers (outside of startups) may never have to set up a k8s cluster, so this blog shows you more than you strictly need to know. But I find this approach to learning, using a local k8s cluster, more fun. Once you are past this, go ahead and set up a k8s test cluster in GCP and rerun this same example against it.
(Optional) Building the Docker Images
For the examples above I use my Docker images from Docker Hub. But if you would rather build your own, here are the steps. Use the code from the simple Node.js project at https://github.com/thomasma/expressjs_docker
Build the Blue Docker image and run it inside minikube. Replace mattazoid with your Docker Hub account (or you can stay local).
docker build -t mattazoid/hello:v1 .
docker push mattazoid/hello:v1

kubectl run hellov1 --image=mattazoid/hello:v1 --port=80
kubectl expose deployment hellov1 --type=NodePort
kubectl get pod
kubectl get service

# Access the service
minikube service hellov1 --url

# use the IP:PORT you get from the previous step (something like the one below)
curl http://192.168.99.108:32647
Build the Green Docker image and run it inside minikube. First modify basicexpresshello.js and replace Blue with Green (or some text that will help you distinguish between the two services, old vs. new).
docker build -t mattazoid/hello:v2 .
docker push mattazoid/hello:v2

kubectl run hellov2 --image=mattazoid/hello:v2 --port=3000
kubectl expose deployment hellov2 --type=NodePort
kubectl get pod
kubectl get service

# Access the service
curl $(minikube service hellov2 --url)
References
- Kubernetes cheatsheet – https://kubernetes.io/docs/reference/kubectl/cheatsheet/
- Istio install instructions – https://istio.io/docs/setup/kubernetes/install/kubernetes/
- Minikube tunnel – https://istio.io/docs/setup/kubernetes/platform-setup/minikube/