Hello everyone, thanks for coming today! Ready to learn Kubernetes? You will first become familiar with Compute Engine before working through an example Guestbook application, and then move on to more advanced Kubernetes experiments.

Kubernetes is an open source project (available on kubernetes.io) that can run on many different environments, from laptops to high-availability multi-node clusters, from public clouds to on-premises deployments, and from virtual machines to bare metal.

For the purpose of this codelab, using a managed environment such as Google Container Engine (a Google-hosted version of Kubernetes running on Compute Engine) will allow you to focus on experiencing Kubernetes rather than setting up the underlying infrastructure, but you should feel free to use your favorite environment instead.

Self-paced environment setup

If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform console (console.cloud.google.com) and create a new project:

Remember the project ID, a unique name across all Google Cloud projects (the name above has already been taken and will not work for you, sorry!). It will be referred to later in this codelab as PROJECT_ID.

Next, you'll need to enable billing in the Developers Console in order to use Google Cloud resources and enable the Container Engine API.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see the "cleanup" section at the end of this document). Google Container Engine pricing is documented here.

New users of Google Cloud Platform are eligible for a $300 free trial.

Google Cloud Shell

While Google Cloud and Kubernetes can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

To activate Google Cloud Shell, from the developer console simply click the button on the top right-hand side (it should only take a few moments to provision and connect to the environment):

Once connected to the Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If for some reason the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Looking for your PROJECT_ID? Check what ID you used in the setup steps or look it up in the console dashboard:

IMPORTANT: Finally, set the default compute zone configuration:

gcloud config set compute/zone us-central1-f

You can pick a different zone too. Learn more about zones in the Regions & Zones documentation.

In this section, you'll create Compute Engine instances, deploy nginx, and finally put a network load balancer in front of them. You can create a Compute Engine instance from either the graphical console or from the command line. This lab will walk you through using the command line.

Create an instance with default settings:

$ gcloud compute instances create myinstance
Created [...].
NAME       ZONE           MACHINE_TYPE  PREEMPTIBLE INTERNAL_IP EXTERNAL_IP    STATUS
myinstance us-central1-f n1-standard-1             10.240.X.X  X.X.X.X        RUNNING

Note down the EXTERNAL_IP; you'll need it later on.
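
If you'd rather fetch it from the command line, a gcloud invocation along these lines (a convenience sketch; the --format expression selects the NAT IP of the first network interface) should print just the external IP:

$ gcloud compute instances describe myinstance \
         --format='value(networkInterfaces[0].accessConfigs[0].natIP)'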

The instance was created with some default values:

Run gcloud compute instances create --help to see all the defaults.
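
As an illustration, you can override defaults such as the machine type and zone with flags; the values below are arbitrary examples, not lab requirements:

$ gcloud compute instances create myinstance2 \
         --machine-type n1-standard-2 \
         --zone us-central1-f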

By default, Google Cloud Platform only allows access on a few ports. Since we'll be installing nginx soon, let's first enable port 80 in the firewall configuration.

$ gcloud compute firewall-rules create allow-80 --allow tcp:80
Created [...].
NAME     NETWORK SRC_RANGES RULES  SRC_TAGS TARGET_TAGS
allow-80 default 0.0.0.0/0  tcp:80

This will create a firewall rule named allow-80 that has the following default values:

Run gcloud compute firewall-rules create --help to see all the defaults.
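
For example, you could scope a rule more tightly with the --source-ranges and --target-tags flags; a hedged sketch (allow-80-tagged and http-server are made-up names):

$ gcloud compute firewall-rules create allow-80-tagged \
         --allow tcp:80 \
         --source-ranges 0.0.0.0/0 \
         --target-tags http-server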

To SSH into the instance from the command line:

$ gcloud compute ssh myinstance
...
Do you want to continue (Y/n)? Y
...
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): [Hit Enter]
Enter same passphrase again: [Hit Enter]
...

yourusername@myinstance:~#

That's it! Pretty easy. (In production, make sure you enter a passphrase :)

Alternatively, you can also SSH into the instance directly from the console, by navigating to Compute Engine > VM Instances, and clicking on SSH.

Log into myinstance, the newly created instance, and install nginx:

$ sudo su - 
# apt-get update
# apt-get install -y nginx
# service nginx start
# exit

Test that the server is running using wget from myinstance:

$ wget -q -O - localhost:80
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body bgcolor="white" text="black">
<center><h1>Welcome to nginx!</h1></center>
</body>
</html>

Find the external IP for your instance by listing your instances, either via the web UI:

Or from the command line: make sure you exit from SSH, and run this command from the Cloud Shell:

$ gcloud compute instances list
NAME       ZONE           MACHINE_TYPE  PREEMPTIBLE INTERNAL_IP EXTERNAL_IP    STATUS
myinstance us-central1-f n1-standard-1             10.240.0.2  104.155.42.166 RUNNING

Then navigate to http://EXTERNAL_IP/ where EXTERNAL_IP is the public IP of myinstance and you should be able to see the Nginx page:
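
If you'd rather check from the Cloud Shell, a quick curl (substitute your instance's address for EXTERNAL_IP) should return the same page:

$ curl -s http://EXTERNAL_IP/ | grep '<title>'
<title>Welcome to nginx!</title>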

Rather than setting up the instance every time, you can use a startup script to initialize the instance upon startup.

Let's open up another Cloud Shell session by clicking + to add a new session tab:

A new session requires you to update the configuration again:

$ gcloud config set compute/zone us-central1-f
$ gcloud config set compute/region us-central1

Create a file named startup.sh with following content (you can use your favorite text editor: vim, nano, or emacs):

#!/bin/bash
# Install and start nginx
apt-get update
apt-get install -y nginx
service nginx start
# Stamp the default index page with this instance's hostname
sed -i -- 's/nginx/Google Cloud Platform - '"$HOSTNAME"'/' /var/www/html/index.nginx-debian.html

Create an instance that uses the startup script (this may take two to three minutes):

$ gcloud compute instances create nginx \
         --metadata-from-file startup-script=startup.sh 
Created [...].
NAME       ZONE           MACHINE_TYPE  PREEMPTIBLE INTERNAL_IP EXTERNAL_IP    STATUS
nginx      us-central1-f n1-standard-1             10.X.X.X    X.X.X.X   RUNNING

Browse to http://EXTERNAL_IP/ and you should see the updated home page. If the page doesn't show up immediately, retry after a couple of seconds; the host might still be starting nginx.

To create a cluster of servers, you first need to create an instance template. Once the template exists, you can create an instance group that manages the number of instances to run.

First, create an instance template using the startup script (this could take a few minutes):

$ gcloud compute instance-templates create nginx-template \
         --metadata-from-file startup-script=startup.sh
Created [...].
NAME           MACHINE_TYPE  PREEMPTIBLE CREATION_TIMESTAMP
nginx-template n1-standard-1             2015-11-09T08:44:59.007-08:00

Second, let's create a target pool. A target pool allows us to have a single access point to all the instances in a group and is necessary for load balancing in the future steps.

$ gcloud compute target-pools create nginx-pool
Created [...].
NAME       REGION       SESSION_AFFINITY BACKUP HEALTH_CHECKS
nginx-pool us-central1

Finally, create an instance group using the template:

$ gcloud compute instance-groups managed create nginx-group \
         --base-instance-name nginx \
         --size 2 \
         --template nginx-template \
         --target-pool nginx-pool
Created [...].
NAME        ZONE           BASE_INSTANCE_NAME SIZE TARGET_SIZE GROUP       INSTANCE_TEMPLATE AUTOSCALED
nginx-group us-central1-f nginx                   2           nginx-group nginx-template

This will create 2 Compute Engine instances with names that are prefixed with nginx-.
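
To confirm, you can list just the instances managed by the group (output will vary):

$ gcloud compute instance-groups managed list-instances nginx-group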

List the Compute Engine instances and you should see all of the instances created!

$ gcloud compute instances list
NAME       ZONE           MACHINE_TYPE  PREEMPTIBLE INTERNAL_IP EXTERNAL_IP    STATUS
myinstance us-central1-f n1-standard-1             10.240.X.X  X.X.X.X           RUNNING
nginx      us-central1-f n1-standard-1             10.240.X.X  X.X.X.X           RUNNING
nginx-7wvi us-central1-f n1-standard-1             10.240.X.X  X.X.X.X           RUNNING
nginx-9mwd us-central1-f n1-standard-1             10.240.X.X  X.X.X.X           RUNNING

There are two types of load balancers in Google Cloud Platform: network load balancers and HTTP load balancers.

Let's create a network load balancer targeting our instance group:

$ gcloud compute forwarding-rules create nginx-lb \
         --ports 80 \
         --target-pool nginx-pool

NAME     REGION       IP_ADDRESS     IP_PROTOCOL TARGET
nginx-lb us-central1 104.155.48.184 TCP         us-central1/targetPools/nginx-pool

You can then visit the load balancer from the browser http://IP_ADDRESS/ where IP_ADDRESS is the address shown as the result of running the previous command.
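
Note that the target pool we created has no health check attached (the HEALTH_CHECKS column was empty). As an optional extra, you could create an HTTP health check and attach it to the pool; a hedged sketch (basic-check is a made-up name):

$ gcloud compute http-health-checks create basic-check
$ gcloud compute target-pools add-health-checks nginx-pool \
         --http-health-check basic-check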

Due to time constraints, we will not be creating an HTTP load balancer today.

Don't forget to shut down your instances, otherwise they'll keep running and accruing costs. The following commands will delete the load balancer's forwarding rule, the instance group, the target pool, the instance template, the Compute Engine instances, and the firewall rule.

$ gcloud compute forwarding-rules delete nginx-lb
The following forwarding rules will be deleted:
 - [nginx-lb] in [us-central1]
Do you want to continue (Y/n)?  Y
Deleted [...].

$ gcloud compute instance-groups managed delete nginx-group
The following instance group managers will be deleted:
 - [nginx-group] in [us-central1-f]
Do you want to continue (Y/n)?  Y
Deleted [...].

$ gcloud compute target-pools delete nginx-pool
The following target pools will be deleted:
 - [nginx-pool] in [us-central1]
Do you want to continue (Y/n)?  Y
Deleted [...].

$ gcloud compute instance-templates delete nginx-template
The following instance templates will be deleted:
 - [nginx-template]
Do you want to continue (Y/n)?  Y
Deleted [...].

$ gcloud compute instances delete myinstance
The following instances will be deleted. Attached disks configured to 
be auto-deleted will be deleted unless they are attached to any other 
instances. Deleting a disk is irreversible and any data on the disk 
will be lost.
 - [myinstance] in [us-central1-f]
Do you want to continue (Y/n)? Y
Deleted [...].

$ gcloud compute instances delete nginx
The following instances will be deleted. Attached disks configured to 
be auto-deleted will be deleted unless they are attached to any other 
instances. Deleting a disk is irreversible and any data on the disk 
will be lost.
 - [nginx] in [us-central1-f]
Do you want to continue (Y/n)? Y
Deleted [...].

$ gcloud compute firewall-rules delete allow-80
The following firewalls will be deleted:
 - [allow-80]
Do you want to continue (Y/n)?  Y
Deleted [...].

We're going to work through this guestbook example. This example is built with Spring Boot, with a frontend using Spring MVC and Thymeleaf, and with two microservices. It requires MySQL to store guestbook entries, and Redis to store session information.

The first step is to create a cluster to work with. We will create a Kubernetes cluster using Google Container Engine.

In Cloud Shell, don't forget to set the default zone and region configuration if you haven't already:

$ gcloud config set compute/zone us-central1-f
$ gcloud config set compute/region us-central1

Creating a Kubernetes cluster in Google Cloud Platform is very easy! Use Container Engine to create a cluster:

$ gcloud container clusters create guestbook --num-nodes 3

This will take a few minutes to run. Behind the scenes, it will create Google Compute Engine instances and configure each one as a Kubernetes node. These instances don't include the Kubernetes master; in Google Container Engine, the Kubernetes master is a managed service, so you don't have to worry about it!

You can see the newly created instances in the Google Compute Engine > VM Instances page.
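
Container Engine configures kubectl credentials for you when the cluster is created. If you ever find kubectl not pointing at the cluster (for example, in a fresh Cloud Shell session), you can fetch the credentials explicitly:

$ gcloud container clusters get-credentials guestbook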

It is very easy to deploy NGINX into the Kubernetes cluster. Let's take a look at a couple of commands before we move on to the details.

To deploy NGINX, simply run:

$ kubectl run nginx --image=nginx --replicas=3
deployment "nginx" created

This will create a deployment that spins up 3 pods, each running the NGINX container. You can see the status of the deployment by running:

$ kubectl get pods -owide
NAME          READY     STATUS    RESTARTS   AGE       NODE
nginx-fffsc   1/1       Running   0          1m        gke-demo-2-43558313-node-sgve
nginx-nk1ok   1/1       Running   0          1m        gke-demo-2-43558313-node-hswk
nginx-x86ck   1/1       Running   0          1m        gke-demo-2-43558313-node-wskh

You can see that each NGINX pod is now running in a different node (virtual machine).

Once all pods have the Running status, you can then expose the NGINX cluster as an external service:

$ kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
service "nginx" exposed

This command will then create a network load balancer to load balance traffic to the three NGINX instances. To find the network load balancer address:

$ kubectl get service nginx
NAME      CLUSTER_IP      EXTERNAL_IP      PORT(S)   SELECTOR    AGE
nginx     10.X.X.X        X.X.X.X          80/TCP    run=nginx   1m

It may take a couple of minutes to see the value of EXTERNAL_IP. If you don't see it the first time with the above command, retry every minute or so until the value of EXTERNAL_IP is displayed.
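
As a small convenience (not required for the lab), a shell loop can poll until the external IP appears; stop it with Ctrl+C:

$ while true; do kubectl get service nginx; sleep 10; done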

You can then visit http://EXTERNAL_IP/ to see the server!

Pretty easy right? Let's undeploy NGINX before we move on to deploy a full stack application:

First, delete the service:

$ kubectl delete service nginx
service "nginx" deleted

Next, delete the deployment. This will subsequently delete the pods (all of the NGINX instances) as well:

$ kubectl delete deployment nginx
deployment "nginx" deleted

Let's see how we can deploy a full stack in the next section.

Start by cloning the GitHub repository for the Guestbook application:

$ git clone https://github.com/saturnism/spring-boot-docker

And move into the kubernetes examples directory.

$ cd spring-boot-docker/examples/kubernetes

We will be using the yaml files in this directory. Each file describes a resource that needs to be deployed into Kubernetes. We won't go into much detail on their contents, but you are definitely encouraged to read them and see how pods, services, and others are declared. We'll talk about a couple of these files in detail.

A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers. All containers within a single pod will share the same networking interface, IP address, disk, etc. All containers within the same pod instance will live and die together. It's especially useful when you have, for example, a container that runs the application, and another container that periodically polls logs/metrics from the application container.
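
To make this concrete, here is a minimal, hypothetical pod manifest with two containers sharing the same network namespace (the image names and the polling command are illustrative, not part of this lab):

kind: Pod
apiVersion: v1
metadata:
  name: app-with-poller
spec:
  containers:
    - name: app
      image: nginx
      ports:
        - containerPort: 80
    - name: poller
      image: busybox
      # Reaches the app container over localhost: both containers
      # share the pod's network interface and IP address.
      command: ["sh", "-c",
                "while true; do wget -qO- http://localhost:80/ >/dev/null; sleep 60; done"]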

First create a pod using kubectl, the Kubernetes CLI tool:

$ kubectl create -f redis-pod.yaml

You should see a Redis instance running:

$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
redis     1/1       Running   0          20s

Optional interlude: Look at your pod running in a Docker container on the VM

Find the node the pod is running on by using kubectl describe pod <pod-name> | grep Node, then SSH into that machine with gcloud compute ssh <node-name>. Finally, run sudo docker ps to see the actual pod:

$ sudo docker ps
CONTAINER ID        IMAGE           COMMAND                CREATED             STATUS              
67672e8118fd        redis:latest    "/entrypoint.sh        About an hour ago   Up 

End of optional interlude: make sure you exit from the SSH session before you continue.

Note: If you see other containers running don't worry, those are other services that are part of the management of Kubernetes clusters.

A service provides a single access point to a set of pods matching some constraints.
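
As an illustration of the idea (a sketch, not necessarily the exact contents of redis-service.yaml), a service selects pods by label and exposes a port:

kind: Service
apiVersion: v1
metadata:
  name: redis
spec:
  ports:
    - port: 6379
  selector:
    name: redis
    role: master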

Create the Redis service:

$ kubectl create -f redis-service.yaml

And check it:

$ kubectl get services
NAME         LABELS                                    SELECTOR                 IP(S)            PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>                   10.107.240.1     443/TCP
redis        name=redis,role=master,visualize=true     name=redis,role=master   10.107.254.132   6379/TCP

MySQL uses persistent storage. Rather than writing the data directly into the container image itself, our example stores the MySQL data on a Google Compute Engine disk. Before you can deploy the pod, you need to create a disk that can be mounted inside of the MySQL container:

$ gcloud compute disks create mysql-disk --size 200GB
Created [...].
NAME       ZONE           SIZE_GB TYPE        STATUS
mysql-disk us-central1-f 200     pd-standard READY
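
For reference, a pod mounts a Compute Engine disk via the gcePersistentDisk volume type; a hedged fragment of what such a manifest looks like (the actual mysql.yaml may differ in details):

spec:
  containers:
    - name: mysql
      image: mysql
      volumeMounts:
        # Mount the persistent disk at MySQL's data directory
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistent-storage
      gcePersistentDisk:
        pdName: mysql-disk
        fsType: ext4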

You can then deploy both the MySQL Pod and the Service with a single command:

$ kubectl create -f mysql.yaml -f mysql-service.yaml 

Lastly, you can see the pods and service status via the command line. Recall the command you can use to see the status (hint: kubectl get ...). Wait until the status is Running before continuing.

We have two separate services to deploy: the Hello World Service and the Guestbook Service.

Both services are containers whose images contain self-executable JAR files. The source is available in the examples directory if you are interested in seeing it. Let's deploy them one at a time:

First, deploy the Hello World Replication Controller:

$ kubectl create -f helloworldservice-controller-v1.yaml

Once created, you can see the replicas with:

$ kubectl get rc
CONTROLLER                        CONTAINER(S)        IMAGE(S)                                       SELECTOR                             REPLICAS
helloworldservice-controller-v1   helloworldservice   saturnism/spring-boot-helloworld-service:1.0   name=helloworldservice,version=1.0   2

The last line corresponds to our replication controller. Its responsibility is to achieve the desired state of having two Hello World instances running. You can see the pods running (this may take a couple of minutes):

$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
helloworldservice-controller-v1-dsas3   1/1       Running   0          3m
helloworldservice-controller-v1-s27n5   1/1       Running   0          3m
mysql                                   1/1       Running   0          35m
redis                                   1/1       Running   0          3h

You can also look at each pod's log output by running:

$ kubectl logs -f helloworldservice-controller-v1-XXXXX

Note: The -f flag tails the log. To stop tailing, press Ctrl+C.

Lastly, let's create the Guestbook Service replication controller and service too!

$ kubectl create -f guestbookservice-controller.yaml \
                 -f guestbookservice-service.yaml

In Kubernetes, every pod has a unique IP address! You can "log in" to one of these pods by using the kubectl exec command. It drops you into a shell where you can execute commands inside of the container:

$ kubectl exec -ti mysql /bin/bash
root@mysql:/#

You are now in a shell inside of the MySQL container. You can run commands such as ps and hostname:

root@mysql:/# ps auwx
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mysql        1  0.0 12.3 994636 470492 ?       Ssl  20:32   0:01 mysqld
root       128  0.0  0.0  20224  3208 ?        Ss   21:09   0:00 /bin/bash
root       136  0.0  0.0  17488  2108 ?        R+   21:11   0:00 ps auwx

root@mysql:/# hostname -i
10.104.0.8

root@mysql:/# exit

Don't forget to exit :). Try it with another pod, like one of the Hello World Service pods, and see its IP address:

$ kubectl exec -ti helloworldservice-controller-v1-ABCD /bin/bash

Since we are running two instances of the Hello World Service (one instance per pod), and since the IP addresses are not only unique but also ephemeral, how will a client reach our services? We need a way to discover the service.

In Kubernetes, Service Discovery is a first class citizen. We can create a Service that will:

Let's create a Hello World Service:

$ kubectl create -f helloworldservice-service.yaml

If you log in to a container, you can access the helloworldservice via its DNS name:

$ kubectl exec -ti redis /bin/bash
root@redis:/data# wget -qO- http://helloworldservice:8080/hello/Ray
{"greeting":"Hello Ray from helloworldservice-controller-v1-s27n5 with 1.0","hostname":"helloworldservice-controller-v1-s27n5","version":"1.0"}root@red
is:/data#
root@redis:/data# exit

Pretty simple, right?

You know the drill by now. We first need to create the replication controller that will start and manage the frontend pods, followed by exposing the service. The only difference is that this time, the service needs to be externally accessible. In Kubernetes, you can instruct the underlying infrastructure to create an external load balancer, by specifying the Service Type as a LoadBalancer.

You can see it in the helloworldui-service.yaml:

kind: Service
apiVersion: v1
metadata:
  name: helloworldui
  labels:
    name: helloworldui
    visualize: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: http
  selector:
    name: helloworldui

Let's deploy both the replication controller and the service at the same time:

$ kubectl create -f helloworldui-controller-v1.yaml \
                 -f helloworldui-service.yaml

You can find the public IP by describing the service, and looking for the LoadBalancer Ingress IP in the output:

$ kubectl describe services helloworldui
Name:                   helloworldui
Namespace:              default
Labels:                 name=helloworldui,visualize=true
Selector:               name=helloworldui
Type:                   LoadBalancer
IP:                     10.107.255.103
LoadBalancer Ingress:   X.X.X.X
Port:                   <unnamed>       80/TCP
NodePort:               <unnamed>       32155/TCP
Endpoints:              10.104.1.6:8080,10.104.1.7:8080
Session Affinity:       None
No events.

You can now access the guestbook via the ingress IP address by pointing your browser to http://INGRESS_IP/.

You should see something like this:

Scaling the number of replicas of our Hello World controller is as simple as running:

$ kubectl scale rc helloworldui-controller-v1 --replicas=12

You can very quickly see that the replication controller has been updated:

$ kubectl get rc

Let's take a look at the status of the pods:

$ kubectl get pods
NAME                                       READY     STATUS    RESTARTS   AGE
guestbookservice-controller-latest-6sofx   1/1       Running   0          6m
guestbookservice-controller-latest-vpkb8   1/1       Running   0          6m
helloworldservice-controller-v1-dsas3      1/1       Running   0          36m
helloworldservice-controller-v1-s27n5      1/1       Running   0          36m
helloworldui-controller-v1-23hey           1/1       Running   0          16s
helloworldui-controller-v1-43vro           1/1       Running   0          1m
helloworldui-controller-v1-5chmo           1/1       Running   0          30m
helloworldui-controller-v1-6w9y6           0/1       Pending   0          16s
helloworldui-controller-v1-8sq05           1/1       Running   0          4m
helloworldui-controller-v1-9agp6           1/1       Running   0          1m
helloworldui-controller-v1-dga63           0/1       Pending   0          16s
helloworldui-controller-v1-o9ug5           1/1       Running   0          2m
helloworldui-controller-v1-ojobz           1/1       Running   0          30m
helloworldui-controller-v1-ru0jh           0/1       Pending   0          16s
helloworldui-controller-v1-s3ywn           1/1       Running   0          2m
helloworldui-controller-v1-texmi           1/1       Running   0          4m
mysql                                      1/1       Running   0          1h
redis                                      1/1       Running   0          4h

Oh no! Some of the pods are in the Pending state! That is because we only have three nodes, and the underlying infrastructure has run out of capacity to run the containers with the requested resources.

Pick one of the pods in the Pending state and confirm the lack of resources in its detailed status:

$ kubectl describe pod helloworldui-controller-v1-25exv
Name:                           helloworldui-controller-v1-25exv
Namespace:                      default
Image(s):                       saturnism/spring-boot-helloworld-ui:v1
Node:                           /
Labels:                         name=helloworldui, ...
Status:                         Pending
...
Events:
  FirstSeen                             LastSeen                        Count   From            SubobjectPath   Reason                  Message
  Mon, 09 Nov 2015 22:38:21 +0100       Mon, 09 Nov 2015 22:39:24 +0100 7       {scheduler }                    FailedScheduling        Failed for reason PodExceedsFreeCPU and possibly others

The good news is that we can easily spin up another Compute Engine instance to add to the cluster.

First, find the Compute Engine instance group that's managing the Kubernetes nodes (the name is prefixed with "gke-"):

$ gcloud compute instance-groups list
NAME                         ZONE           NETWORK MANAGED INSTANCES
gke-guestbook-a3e896df-group us-central1-f default Yes     3

You can resize the number of nodes by updating the Instance Group size:

$ gcloud compute instance-groups managed resize gke-guestbook-a3e896df-group \
         --size 5
Updated [https://www.googleapis.com/compute/v1/projects/causal-scarab-112414/zones/us-central1-f/instanceGroupManagers/gke-guestbook-a3e896df-group].
---
baseInstanceName: gke-guestbook-a3e896df-node
creationTimestamp: '2015-11-09T09:22:05.904-08:00'
currentActions:
  abandoning: 0
  creating: 1
  deleting: 0
...

You can see a new Compute Engine instance is starting:

$ gcloud compute instances list
...

It may take several minutes for the new instance to start. Once the new instance has joined the Kubernetes cluster, you should be able to see it with this command:

$ kubectl get nodes
NAME                               LABELS                       STATUS
gke-guestbook-a3e896df-node-3d99   kubernetes.io/hostname=...   Ready
gke-guestbook-a3e896df-node-dt8a   kubernetes.io/hostname=...   Ready
gke-guestbook-a3e896df-node-rqfg   kubernetes.io/hostname=...   Ready
gke-guestbook-a3e896df-node-vt3l   kubernetes.io/hostname=...   Ready
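
As an aside, gcloud also offers a Container Engine-level resize subcommand that achieves the same result (behavior may vary by gcloud release):

$ gcloud container clusters resize guestbook --size 5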

We're now breaking away from the guestbook example. It's easy to update and roll back.

Switch to the examples/helloworld-ui directory and make a minor change to the templates/index.html (e.g., change the background color, title, etc.).

After that, rebuild the container and upload it to the Google Container Registry.

You can look up your project id by running gcloud config list.

$ cd ~/spring-boot-docker/examples/helloworld-ui
$ vim templates/index.html
$ docker build -t gcr.io/<your-project-id>/helloworld-ui:v2 .
$ gcloud docker push gcr.io/<your-project-id>/helloworld-ui:v2

Next, let's update the replication controller file to prepare for the rolling update:

$ cd ../kubernetes
$ vim helloworldui-controller-v2.yaml

Replace the image attribute with the path to the image you just pushed (gcr.io/<your-project-id>/helloworld-ui:v2):

apiVersion: v1
kind: ReplicationController
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
       - name: helloworldui
         image: gcr.io/<your-project-id>/helloworld-ui:v2
         ...

To do a rolling update to the new version, use the kubectl rolling-update command:

$ kubectl rolling-update helloworldui-controller-v1 \
           -f helloworldui-controller-v2.yaml \
           --update-period=2s

Rollback is basically the reverse of update:

$ kubectl rolling-update helloworldui-controller-v2 \
        -f helloworldui-controller-v1.yaml \
        --update-period=2s

To canary both versions of the service at the same time, simply deploy both version 1 and version 2 of the replication controller. Because the helloworldui service doesn't differentiate between the two versions, it'll load balance the frontend requests across both versions.

You can try it with the Hello World Service too. There are already two versions you can use in helloworldservice-controller-v1.yaml and helloworldservice-controller-v2.yaml.
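
A hedged sketch of the canary flow with the UI, assuming v1 is currently deployed: create the v2 controller alongside it, then watch pods from both versions serve behind the same service (the label selector matches the service's selector shown earlier):

$ kubectl create -f helloworldui-controller-v2.yaml
$ kubectl get pods -l name=helloworldui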

During the lab, you've used the kubectl logs command to retrieve the logs of a container running inside of Kubernetes. When you use Google Container Engine to run managed Kubernetes clusters, all of the logs are automatically forwarded to and stored in Google Cloud Logging. You can see all the log output from the pods by navigating to Operations > Logging and finding the logs by pod name:

From here, you can optionally export the logs into Google BigQuery for further log analysis, or set up log-based alerting. We won't get to do this during the lab today.

From Cloud Shell, execute the following commands:

$ gcloud container clusters describe guestbook | egrep "password"
    password: vUYwC5ATJMWa6goh
$ kubectl cluster-info
  ... 
  KubeUI is running at https://<ip-address>/api/v1/proxy/namespaces/kube-system/services/kube-ui
  ...

Navigate to the URL shown after "KubeUI is running at", log in with username "admin" and the password retrieved above, and enjoy the Kubernetes graphical dashboard!

Don't forget to shut down your cluster, otherwise it'll keep running and accruing costs. The following commands will delete the persistent disk, the Container Engine cluster, and the contents of the private container registry.

$ gcloud container clusters delete guestbook
$ gcloud compute disks delete mysql-disk

$ gsutil ls
gs://artifacts.<PROJECT_ID>.appspot.com/
...

$ gsutil rm -r gs://artifacts.<PROJECT_ID>.appspot.com/
Removing gs://artifacts.<PROJECT_ID>.appspot.com/...

Of course, you can also delete the entire project, but note that you must first disable billing on the project. Additionally, deleting a project will only happen after the current billing cycle ends.

Here are some ideas for next steps.

Create a pod configuration for an existing container and start it on your cluster.

Pick a container from Docker Hub and run it in the cluster using all the knowledge you learned from this lab.

Build a simple chaos monkey

Explore the kubectl commands to figure out how to delete a pod. Write a script in your favorite language to pick a random pod and delete it every few minutes. It should be automatically replaced by the Replication Controllers. You're now testing to make sure your infrastructure really is resilient to failures!
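
A minimal bash sketch of the idea (assuming kubectl is configured for your cluster; the two-minute interval is arbitrary):

#!/bin/bash
# Chaos monkey sketch: delete one random pod every two minutes.
# Replication controllers should recreate each deleted pod.
while true; do
  POD=$(kubectl get pods -o name | shuf -n 1)
  echo "Deleting ${POD}..."
  kubectl delete "${POD}"
  sleep 120
done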

DIY Kubernetes cluster on Compute Engine

Download the open source version, build it, and deploy a cluster yourself with the Kubernetes tools.

This can be as simple as running: 'curl -sS https://get.k8s.io | bash'

Give us your feedback

Kubernetes

Google Container Engine

Google Compute Engine