Notes after Katacoda Training on Kubernetes Container Orchestration

A few weeks ago, I dedicated two days to following the tutorials available on Katacoda, the interactive learning platform for Kubernetes and other container orchestration platforms. I’m sharing my notes, which I happen to use regularly as a cheat sheet.

If you haven’t tried Katacoda yet and have an interest in Kubernetes, Docker or any of the courses covered, you will be amazed by how easily, quickly and efficiently they make these technologies approachable. Complementary to the courses, they provide sandbox environments, called playgrounds, for CoreOS, DC/OS and Kubernetes. In less than a minute, you’ll be logged in and ready to test any of those platforms.

Launch A Single Node Cluster

Learn how to launch a Single Node Minikube cluster including DNS and Kube UI

Installation involves:
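A minimal sketch, following the upstream installation docs of the time (URLs and versions may need adjusting):

```bash
# Download the kubectl client and the minikube binary, then put them on the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/
```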

Minikube runs a single-node Kubernetes cluster inside a VM on your laptop:
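For example:

```bash
minikube start
```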

From now on, Kubernetes is available:
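```bash
kubectl cluster-info
kubectl get nodes
```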

Starting a container is similar to Docker:
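For instance (the image name is just an example):

```bash
kubectl run first-deployment --image=katacoda/docker-http-server --port=80
```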

Kubernetes natively handles TCP/HTTP routing:
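For example, exposing the deployment above through a NodePort service:

```bash
kubectl expose deployment first-deployment --port=80 --type=NodePort
```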

As with Docker, to get container information such as the port, use Go templates:
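For example, to extract the NodePort assigned to the service:

```bash
kubectl get service first-deployment -o go-template='{{(index .spec.ports 0).nodePort}}'
```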

The Kubernetes dashboard is available on port 8080.

Launch a multi-node cluster using Kubeadm

Bootstrap a Kubernetes cluster using Kubeadm

Kubeadm handles TLS encryption configuration, deploys the core Kubernetes components and ensures that additional nodes can easily join the cluster.

Here is a nice presentation of the Kubernetes architecture.

To initialize a cluster:
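On the master node:

```bash
kubeadm init
```

The output ends with a kubeadm join command (including a token) to run on every node that should join the cluster.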

To configure and connect the client:
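As suggested in the kubeadm output:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```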

An alternative is to set the Kubernetes master address as an environment variable:
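For example, pointing kubectl at the admin configuration, which contains the master address:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```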

The Container Network Interface (CNI) defines how the different nodes and their workloads should communicate.

Network providers are available here.

To install Weave Net:
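Using the manifest served by Weave for the running Kubernetes version (command as documented by Weave at the time):

```bash
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```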

To deploy a pod:
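For example (name and image are illustrative):

```bash
kubectl run http --image=katacoda/docker-http-server:latest --replicas=1
kubectl get pods
```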

To install Kubernetes web-based dashboard UI:
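Using the recommended manifest (URL from the dashboard documentation of the time; adjust for newer releases):

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
```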

Deploy Guestbook Web App Example

How to deploy the Guestbook example using Kubernetes

Use Pods, Replication Controllers, Services and NodePorts by installing Redis with one master for storage and a replicated set of Redis slaves.

The launch script installs the following components:

The Kubelet is the primary “node agent” that runs on each node. The Kubernetes program is directly downloaded from the Internet (curl + chmod u+x).

The Kubernetes network proxy runs on each node and is used to reach services. It does TCP and UDP stream forwarding or round-robin TCP/UDP forwarding across a set of backends.

DNS is a built-in service. Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service’s IP to resolve DNS names.

To enable DNS discovery:
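DNS ships as an addon; a sketch, assuming the kube-dns addon manifests have been fetched locally (file names are hypothetical):

```bash
kubectl create -f kube-dns-rc.yaml
kubectl create -f kube-dns-svc.yaml
kubectl get pods --namespace=kube-system
```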

Once installed, the client environment is available after:
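For example, waiting until the node reports Ready:

```bash
watch kubectl get nodes
```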

Kubernetes service deployment has, at least, two definitions:

  • replication controller: ensures that a pod or a homogeneous set of pods is always up and available. It defines how many instances should be running, the Docker image to use, and a name to identify the service.
  • service: defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service.

The RC definition connects the Redis slaves to the master using the “GET_HOSTS_FROM” environment variable with the value “dns”, so that service host information is found from DNS at runtime.
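The relevant fragment of the slave template looks roughly like this (image taken from the Guestbook example):

```yaml
spec:
  containers:
  - name: slave
    image: gcr.io/google_samples/gb-redisslave:v1
    env:
    - name: GET_HOSTS_FROM
      value: dns
```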

A Service defined as NodePort exposes a well-known port shared across the entire cluster. This is like -p 80:80 in Docker.

To find the assigned NodePort using kubectl:
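For example, for the Guestbook frontend service:

```bash
kubectl describe service frontend | grep NodePort
# or, with a Go template
kubectl get service frontend -o go-template='{{(index .spec.ports 0).nodePort}}'
```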

Deploy Containers Using Kubectl

Use Kubectl to launch containers and make them accessible

Use Kubectl to create and launch Deployments, Replication Controllers and expose them via Services without writing yaml definitions.

A deployment controller is a Kubernetes object which provides declarative updates for Pods and ReplicaSets.

The definition describes a desired state in a Deployment object and the controller changes the actual state to the desired state at a controlled rate. Deployments are used to create new ReplicaSets, or to remove existing deployments and adopt all their resources with new deployments.

Kubectl run is similar to docker run but at a cluster level and it creates a deployment.
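For example (the image is illustrative):

```bash
kubectl run http --image=katacoda/docker-http-server:latest --replicas=1
```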

View the status of the deployments:
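```bash
kubectl get deployments
```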

Describe the deployment process (optionally with the pod name at the end):
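For the deployment created above:

```bash
kubectl describe deployment http
```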

Expose a port to the host external IP:
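For example (the IP is the host’s address in the playground; replace it with yours):

```bash
kubectl expose deployment http --external-ip="172.17.0.1" --port=8000 --target-port=80
```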

This creates a service exposing the port “8000”:
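It can then be reached with curl (same illustrative IP as above):

```bash
curl http://172.17.0.1:8000
```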

When using kubectl run with the hostport option, the Pod is not exposed via a Service but via Docker port mapping. With docker ps, we see that it is not the container which exposes the ports but the pod. Other containers in the pod share the same network namespace. This improves network performance and allows multiple containers to communicate over the same network interface.

To scale the number of Pods running for a particular deployment or replication controller:
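For example:

```bash
kubectl scale --replicas=3 deployment http
# replication controllers are scaled the same way:
# kubectl scale --replicas=3 rc <name>
```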

Deploy Containers Using YAML

Learn how to use YAML definitions to deploy containers

YAML definitions define the Kubernetes Objects that are scheduled for deployment. The objects can be updated and redeployed to the cluster to change the configuration.

A service definition matches applications using labels:
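For instance, a Service selecting all Pods labelled app: webapp1 (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  selector:
    app: webapp1
  ports:
  - port: 80
```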

Networking capabilities are controlled via the Service definition with nodePort:
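For example, pinning the service to port 30080 on every node:

```yaml
spec:
  type: NodePort
  selector:
    app: webapp1
  ports:
  - port: 80
    nodePort: 30080
```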

Use kubectl apply to reflect changes to a definition file:
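For example (the file name is illustrative):

```bash
kubectl apply -f deployment.yaml
```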

Create Ingress Routing

Define host and path based Ingress routing

Ingress allows inbound connections to the cluster, letting external traffic reach the correct Pod. Its functionality covers externally-reachable URLs, load balancing of traffic, SSL termination and name-based virtual hosting…

Ingress stands for incoming connections while egress stands for outgoing connections. Kubernetes, in its latest version, supports policies for both types.

Ingress rules can be based on a request host (domain), or the path of the request, or a combination of both.

To deploy ingress object types:
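A host-based rule could look like this (host and service names are illustrative; the API version matches the Kubernetes releases of the time):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: my.kubernetes.example
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp1-svc
          servicePort: 80
```

It is then created with kubectl create -f ingress.yaml, alongside an Ingress controller such as Nginx.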

To view all the Ingress rules:
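```bash
kubectl get ingress
kubectl describe ingress webapp-ingress
```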

I just learned a new trick with HTTP and curl that is useful for testing. Instead of creating a new entry inside “/etc/hosts” to fake an HTTP hostname, pass the “Host” header:
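For example, against the Ingress rule defined above (the IP is illustrative):

```bash
curl -H "Host: my.kubernetes.example" http://172.17.0.1/
```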

Use Kubernetes To Manage Secrets And Passwords

Keep secrets secure

Kubernetes allows you to create secrets that are mounted to a pod via environment variables or as a volume. This allows secrets, such as SSL certificates or passwords, to only be managed via an infrastructure team in a secure way instead of having the passwords stored within the application’s deployment artefacts.

Secrets are created as Kubernetes objects.

Here’s what they look like:
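A minimal example, with base64-encoded values (names and values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  username: YWRtaW4=           # base64("admin")
  password: bXlwYXNzd29yZA==   # base64("mypassword")
```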

To create and view secrets:
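For example (the file name is illustrative):

```bash
kubectl create -f secret.yaml
kubectl get secrets
kubectl describe secret test-secret
```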

If, when running docker ps, you’re wondering what the pause containers are, here’s how Eric Paris describes them:

The pause container is a container which holds the network namespace for the pod. It does nothing ‘useful’. (It’s actually just a little bit of assembly that goes to sleep and never wakes up)

This means that your ‘apache’ container can die, and come back to life, and all of the network setup will still be there. Normally if the last process in a network namespace dies the namespace would be destroyed and creating a new apache container would require creating all new network setup. With pause, you’ll always have that one last thing in the namespace.

A Pod which has environment variables populated includes something like:
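A sketch, referencing the secret above (pod and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: app
    image: alpine:latest
    command: ["sleep", "3600"]
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: password
```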

kubectl exec is designed after docker exec.

To view the populated environment variables:
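```bash
kubectl exec -it secret-env-pod -- env | grep SECRET
```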

To mount the secret in a file, create the pod with a volume and mount it:
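A sketch, mounting the same secret under /etc/secret-volume (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-pod
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
  containers:
  - name: app
    image: alpine:latest
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```

Each key of the secret then appears as a file under the mount path.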

Be careful: permissions must be enforced, as the default is ‘444’ (world-readable).

Liveness and Readiness Healthchecks

Ensure containers health using Liveness and Readiness probes

A Readiness Probe checks if an application is ready to start processing traffic. It solves the problem of a container that has started but whose process is still warming up and configuring itself, meaning it is not yet ready to receive traffic.

Liveness Probes ensure that the application is healthy and capable of processing requests.
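Both probes are declared on the container; a minimal sketch with HTTP checks (image, path, port and delays are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: katacoda/docker-http-server:latest
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 1
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
```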

Deploying from source onto Kubernetes

Get from source to running service in Kubernetes

The .spec.revisionHistoryLimit specifies the number of old ReplicaSets to retain to allow rollback.

The imagePullPolicy property accepts one of Always (the default when the image tag is latest or omitted), Never or IfNotPresent.

The dnsPolicy property accepts one of ClusterFirst (default), ClusterFirstWithHostNet, Default.

A container registry is a central service that hosts images.

To push a local image to a custom container registry:
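For example (registry address and image name are illustrative):

```bash
docker build -t my-registry.example.com/my-app:v1 .
docker push my-registry.example.com/my-app:v1
```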

The registry can be referenced as part of the Docker image name in the deployment definition:
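For example, in the container spec of the Deployment (same illustrative names as above):

```yaml
spec:
  containers:
  - name: my-app
    image: my-registry.example.com/my-app:v1
    imagePullPolicy: Always
```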

Kubernetes kubectl automatically reads “~/.kube/config”.

Forge automates service deployment into Kubernetes and does the following:

  • build the Dockerfile
  • push the image to a registry
  • build the deployment definition
  • deploy the container into Kubernetes
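In practice this boils down to a single command run from the project directory once Forge has been configured (the command name comes from the Forge documentation and may differ between versions):

```bash
forge deploy
```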

Helm Package Manager

Use Helm Package Manager for Kubernetes to deploy Redis

Helm is the package manager for Kubernetes. Packages are called charts and consist of pre-configured Kubernetes resources.

Helm has two parts: a client (helm) and a server (tiller); Tiller runs inside of your Kubernetes cluster, and manages releases (installations) of your charts.

To install Helm:
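Following the Helm v2 documentation of the time (the installer URL may have moved since):

```bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init            # installs Tiller into the cluster
helm repo update
```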

To retrieve package information:
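For example, for the Redis chart:

```bash
helm search redis
helm inspect stable/redis
```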

Monocular is a web UI for managing Kubernetes applications packaged as Helm Charts; it looks promising.


Also published on Medium.

