Kubernetes 101
Be kind to the WiFi!
Don't use your hotspot.
Don't stream videos or download big files during the workshop.
Thank you!
Slides: http://container.training/
Hello! We are:
✨ Bridget (@bridgetkromhout)
🌟 Samantha (@zruty)
The workshop will run from 9:00am-12:40pm, with two breaks
Feel free to interrupt for questions at any time
Especially when you see full screen container pictures!
This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials
Credit is also due to multiple contributors — thank you!
You can also follow along on your own, at your own pace
We included as much information as possible in these slides
We recommend having a mentor to help you ...
... Or be comfortable spending some time reading the Kubernetes documentation ...
... And looking for answers on StackOverflow and other outlets
All the content is available in a public GitHub repository:
You can get updated "builds" of the slides there:
👇 Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.
This slide has a little magnifying glass in the top left corner
This magnifying glass indicates slides that provide extra details
Feel free to skip them if:
you are in a hurry
you are new to this and want to avoid cognitive overload
you want only the most essential information
You can review these slides another time if you want, they'll be waiting for you ☺
(auto-generated TOC)
Pre-requirements
(automatically generated title slide)
Be comfortable with the UNIX command line
navigating directories
editing files
a little bit of bash-fu (environment variables, loops)
Some Docker knowledge
docker run, docker ps, docker build
ideally, you know how to write a Dockerfile and build it
(even if it's a FROM line and a couple of RUN commands)
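For reference, here is a minimal sketch of such a Dockerfile and its build (hypothetical names, not part of the workshop materials):
# Hypothetical example: a tiny Dockerfile with a FROM line and one RUN command
cat > Dockerfile <<EOF
FROM alpine
RUN apk add --no-cache curl
EOF
docker build -t my-first-image .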
It's totally OK if you are not a Docker expert!
Tell me and I forget.
Teach me and I remember.
Involve me and I learn.
Misattributed to Benjamin Franklin
(Probably inspired by Chinese Confucian philosopher Xunzi)
The whole workshop is hands-on
We are going to build, ship, and run containers!
You are invited to reproduce all the demos
All hands-on sections are clearly identified, like the gray rectangle below
This is the stuff you're supposed to do!
Go to container.training to view these slides
Join the chat room: In person!
Each person gets a private cluster of cloud VMs (not shared with anybody else)
They'll remain up for the duration of the workshop
You should have a little card with login+password+IP addresses
You can automatically SSH from one VM to another
The nodes have aliases: node1, node2, etc.
Installing that stuff can be hard on some machines
(32-bit CPU or OS... laptops without administrator access... etc.)
"The whole team downloaded all these container images from the WiFi!
... and it went great!" (Literally no-one ever)
All you need is a computer (or even a phone or tablet!), with:
an internet connection
a web browser
an SSH client
On Linux, OS X, FreeBSD... you are probably all set
On Windows, get one of these:
On Android, JuiceSSH (Play Store) works pretty well
Nice-to-have: Mosh instead of SSH, if your internet connection tends to lose packets
You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!
Mosh is "the mobile shell"
It is essentially SSH over UDP, with roaming features
It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
It has intelligent local echo, so it works great even in high-latency connections
(Like hotel or conference WiFi)
It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
To install it: (apt|yum|brew) install mosh
It has been pre-installed on the VMs that we are using
To connect to a remote machine: mosh user@host
(It is going to establish an SSH connection, then hand off to UDP)
It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
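If only a narrower UDP range is open, mosh can be pinned to a specific server-side port (a hedged sketch using mosh's standard -p flag):
mosh -p 60001 user@host   # assumes UDP port 60001 is reachable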
Log into the first VM (node1) with your SSH client
Check that you can SSH (without a password) to node2:
ssh node2
Type exit or ^D to come back to node1
If anything goes wrong — ask for help!
Use something like Play-With-Docker or Play-With-Kubernetes
Zero setup effort; but environments are short-lived and might have limited resources
Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
Create a bunch of clusters for you and your friends (instructions)
Bigger setup effort; ideal for group training
These remarks apply only when using multiple nodes, of course.
Unless instructed, all commands must be run from the first VM, node1
We will only checkout/copy the code on node1
During normal operations, we do not need access to the other nodes
If we had to troubleshoot issues, we would use a combination of:
SSH (to access system logs, daemon status...)
Docker API (to check running containers and container engine status)
Once in a while, the instructions will say:
"Open a new terminal."
There are multiple ways to do this:
create a new window or tab on your machine, and SSH into the VM;
use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
Tmux is a terminal multiplexer like screen.
You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.
kubectl version
docker version
docker-compose -v
"Validates" = continuous integration builds
The Docker API is versioned, and offers strong backward-compatibility
(If a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way)
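A quick way to see (and pin) the API version in play, sketched here with standard Docker CLI options:
docker version --format '{{.Server.APIVersion}}'   # show the Engine's API version
DOCKER_API_VERSION=1.25 docker version             # force the client to speak an older API version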
Our sample application
(automatically generated title slide)
We will clone the GitHub repository onto our node1
The repository also contains scripts and tools that we will use through the workshop
Clone the repository on node1:
git clone git://github.com/jpetazzo/container.training
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
Let's start this before we look around, as downloading will take a little time...
Go to the dockercoins directory, in the cloned repo:
cd ~/container.training/dockercoins
Use Compose to build and run all containers:
docker-compose up
Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs.
Visit the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training
The application is in the dockercoins subdirectory
Let's look at the general layout of the source code:
there is a Compose file docker-compose.yml ...
... and 4 other services, each in its own directory:
rng = web service generating random bytes
hasher = web service computing hash of POSTed data
worker = background process using rng and hasher
webui = web interface to watch progress
Particularly relevant if you have used Compose before...
Compose 1.6 introduced support for a new Compose file format (aka "v2")
Services are no longer at the top level, but under a services section
There has to be a version key at the top level, with value "2"
(as a string, not an integer)
Containers are placed on a dedicated network, making links unnecessary
There are other minor differences, but upgrade is easy and straightforward
We do not hard-code IP addresses in the code
We do not hard-code FQDNs in the code, either
We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
worker/worker.py
redis = Redis("redis")def get_random_bytes(): r = requests.get("http://rng/32") return r.contentdef hash_bytes(data): r = requests.post("http://hasher/", data=data, headers={"Content-Type": "application/octet-stream"})
(Full source code available here)
Containers can have network aliases (resolvable through DNS)
Compose file version 2+ makes each container reachable through its service name
Compose file version 1 did require "links" sections
Network aliases are automatically namespaced
you can have multiple apps declaring and using a service named database
containers in the blue app will resolve database to the IP of the blue database
containers in the green app will resolve database to the IP of the green database
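A minimal sketch of that namespacing, assuming a Compose file that defines a database service and is started twice under different project names:
docker-compose -p blue up -d    # in the blue project, "database" resolves to the blue database
docker-compose -p green up -d   # in the green project, "database" resolves to the green database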
It is a DockerCoin miner! 💰🐳📦🚢
No, you can't buy coffee with DockerCoins
How DockerCoins works:
worker asks rng to generate a few random bytes
worker feeds these bytes into hasher
and repeat forever!
every second, worker updates redis to indicate how many loops were done
webui queries redis, and computes and exposes the "hashing speed" in your browser
On the left-hand side, the "rainbow strip" shows the container names
On the right-hand side, we see the output of our containers
We can see the worker service making requests to rng and hasher
For rng and hasher, we see HTTP access logs
"Logs are exciting and fun!" (No-one, ever)
The webui container exposes a web dashboard; let's view it
With a web browser, connect to node1 on port 8000
Remember: the nodeX aliases are valid only on the nodes themselves
In your browser, you need to enter the IP address of your node
A drawing area should show up, and after a few seconds, a blue graph will appear.
It looks like the speed is approximately 4 hashes/second
Or more precisely: 4 hashes/second, with regular dips down to zero
Why?
The app actually has a constant, steady speed: 3.33 hashes/second
(which corresponds to 1 hash every 0.3 seconds, for reasons)
Yes, and?
The worker doesn't update the counter after every loop, but up to once per second
The speed is computed by the browser, checking the counter about once per second
Between two consecutive updates, the counter will increase either by 4, or by 0
The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.
What can we conclude from this?
If we interrupt Compose (with ^C), it will politely ask the Docker Engine to stop the app
The Docker Engine will send a TERM signal to the containers
If the containers do not exit in a timely manner, the Engine sends a KILL signal
^C
Some containers exit immediately, others take longer.
The containers that do not handle SIGTERM end up being killed after a 10s timeout.
If we are very impatient, we can hit ^C a second time!
docker-compose down
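If we don't want to wait for the default 10-second grace period, Compose also accepts a shorter shutdown timeout (a hedged sketch using the standard -t flag):
docker-compose down -t 3   # give containers only 3 seconds before they get killed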
Kubernetes concepts
(automatically generated title slide)
Kubernetes is a container management system
It runs and manages containerized applications on a cluster
What does that really mean?
Start 5 containers using image atseashop/api:v1.3
Place an internal load balancer in front of these containers
Start 10 containers using image atseashop/webfront:v1.3
Place a public load balancer in front of these containers
It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers
New release! Replace my containers with the new image atseashop/webfront:v1.4
Keep processing requests during the upgrade; update my containers one at a time
Basic autoscaling
Blue/green deployment, canary deployment
Long running services, but also batch (one-off) jobs
Overcommit our cluster and evict low-priority jobs
Run services with stateful data (databases etc.)
Fine-grained access control defining what can be done by whom on which resources
Integrating third party services (service catalog)
Automating complex tasks (operators)
Ha ha ha ha
OK, I was trying to scare you, it's much simpler than that ❤️
The first diagram shows a Kubernetes cluster with storage backed by multi-path iSCSI
(Courtesy of Yongbok Kim)
The second one is a simplified representation of a Kubernetes cluster
(Courtesy of Imesh Gunaratne)
The nodes executing our containers run a collection of services:
a container Engine (typically Docker)
kubelet (the "node agent")
kube-proxy (a necessary but not sufficient network component)
Nodes were formerly called "minions"
(You might see that word in older articles or documentation)
The Kubernetes logic (its "brains") is a collection of services:
the API server (our point of entry to everything!)
core services like the scheduler and controller manager
etcd (a highly available key/value store; the "database" of Kubernetes)
Together, these services form the control plane of our cluster
The control plane is also called the "master"
It is common to reserve a dedicated node for the control plane
(Except for single-node development clusters, like when using minikube)
This node is then called a "master"
(Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
Normal applications are restricted from running on this node
(By using a mechanism called "taints")
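To see that taint for ourselves (a hedged example; node1 is assumed to be the master of our workshop clusters):
kubectl describe node node1 | grep -i taints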
When high availability is required, each service of the control plane must be resilient
The control plane is then replicated on multiple nodes
(This is sometimes called a "multi-master" setup)
The services of the control plane can run in or out of containers
For instance: since etcd is a critical service, some people deploy it directly on a dedicated cluster (without containers)
(This is illustrated on the first "super complicated" schema)
In some hosted Kubernetes offerings (e.g. GKE), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
In that case, there is no "master node"
For this reason, it is more accurate to say "control plane" rather than "master".
No!
By default, Kubernetes uses the Docker Engine to run containers
We could also use rkt ("Rocket") from CoreOS
Or leverage other pluggable runtimes through the Container Runtime Interface
(like CRI-O, or containerd)
Yes!
In this workshop, we run our app on a single node first
We will need to build images and ship them around
We can do these things without Docker
(and get diagnosed with NIH, "Not Invented Here", syndrome)
Docker is still the most stable container engine today
(but other options are maturing very quickly)
On our development environments, CI pipelines ... :
Yes, almost certainly
On our production servers:
Yes (today)
Probably not (in the future)
More information about CRI on the Kubernetes blog
The Kubernetes API defines a lot of objects called resources
These resources are organized by type, or Kind (in the API)
A few common resource types are:
And much more! (We can see the full list by running kubectl get)
The first diagram is courtesy of Weave Works
a pod can have multiple containers working together
IP addresses are associated with pods, not with individual containers
The second diagram is courtesy of Lucas Käldström, in this presentation
Both diagrams used with permission.
Declarative vs imperative
(automatically generated title slide)
Our container orchestrator puts a very strong emphasis on being declarative
Declarative:
I would like a cup of tea.
Imperative:
Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.
Declarative seems simpler at first ...
... As long as you know how to brew tea
What declarative would really be:
I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.
¹An infusion is obtained by letting the object steep a few minutes in hot² water.
²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.
³Ah, finally, containers! Something we know about. Let's get to work, shall we?
Did you know there was an ISO standard specifying how to brew tea?
Imperative systems:
simpler
if a task is interrupted, we have to restart from scratch
Declarative systems:
if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary
we need to be able to observe the system
... and compute a "diff" between what we have and what we want
Virtually everything we create in Kubernetes is created from a spec
Watch for the spec fields in the YAML files later!
The spec describes how we want the thing to be
Kubernetes will reconcile the current state with the spec
(technically, this is done by a number of controllers)
When we want to change some resource, we update the spec
Kubernetes will then converge that resource
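A minimal sketch of that reconciliation loop, assuming a deployment like the pingpong one created later in this workshop:
kubectl scale deploy/pingpong --replicas=3                                            # update .spec.replicas
kubectl get deploy/pingpong -o jsonpath='{.spec.replicas} {.status.replicas}{"\n"}'   # watch status converge towards spec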
Kubernetes network model
(automatically generated title slide)
TL,DR:
Our cluster (nodes and pods) is one big flat IP network.
In detail:
all nodes must be able to reach each other, without NAT
all pods must be able to reach each other, without NAT
pods and nodes must be able to reach each other, without NAT
each pod is aware of its IP address (no NAT)
Kubernetes doesn't mandate any particular implementation
Everything can reach everything
No address translation
No port translation
No new protocol
Pods cannot move from one node to another and keep their IP address
IP addresses don't have to be "portable" from one node to another
(We can use e.g. a subnet per node and use a simple routed topology)
The specification is simple enough to allow many different implementations
Everything can reach everything
if you want security, you need to add network policies
the network implementation that you use needs to support them
There are literally dozens of implementations out there
(15 are listed in the Kubernetes documentation)
Pods have layer 3 (IP) connectivity, but services are layer 4
(Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)
kube-proxy is on the data path when connecting to a pod or container, and it's not particularly fast (relies on userland proxying or iptables)
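To peek at those iptables rules on a node (a hedged example; KUBE-SERVICES is the standard chain name programmed by kube-proxy):
sudo iptables-save | grep KUBE-SERVICES | head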
The nodes that we are using have been set up to use Weave
We don't endorse Weave in a particular way, it just Works For Us
Don't worry about the warning about kube-proxy performance
Unless you:
If necessary, there are alternatives to kube-proxy; e.g. kube-router
CNI (the Container Network Interface) is a well-defined specification for network plugins
When a pod is created, Kubernetes delegates the network setup to CNI plugins
Typically, a CNI plugin will:
allocate an IP address (by calling an IPAM plugin)
add a network interface into the pod's network namespace
configure the interface as well as required routes etc.
Using multiple plugins can be done with "meta-plugins" like CNI-Genie or Multus
Not all CNI plugins are equal
(e.g. they don't all implement network policies, which are required to isolate pods)
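On most installations, the CNI configuration and plugin binaries live in standard locations; a hedged peek (paths may differ on your cluster):
ls /etc/cni/net.d/   # CNI network configuration files
ls /opt/cni/bin/     # CNI plugin binaries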
First contact with kubectl
(automatically generated title slide)
kubectl
kubectl is (almost) the only tool we'll need to talk to Kubernetes
It is a rich CLI tool around the Kubernetes API
(Everything you can do with kubectl, you can do directly with the API)
On our machines, there is a ~/.kube/config file with:
the Kubernetes API address
the path to our TLS certificates used to authenticate
You can also use the --kubeconfig flag to pass a config file
Or directly --server, --user, etc.
kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...
kubectl get
Let's look at our Node resources with kubectl get!
Look at the composition of our cluster:
kubectl get node
These commands are equivalent:
kubectl get no
kubectl get node
kubectl get nodes
kubectl get can output JSON, YAML, or be directly formatted
Give us more info about the nodes:
kubectl get nodes -o wide
Let's have some YAML:
kubectl get no -o yaml
See that kind: List at the end? It's the type of our result!
kubectl and jq
kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity"
kubectl has pretty good introspection facilities
We can list all available resource types by running kubectl get
We can view details about a resource with:
kubectl describe type/name
kubectl describe type name
We can view the definition for a resource type with:
kubectl explain type
Each time, type can be singular, plural, or an abbreviated type name.
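A couple of hedged examples of these introspection commands, using the node type we just listed:
kubectl explain node
kubectl explain node.spec
kubectl describe node/node1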
A service is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
kubectl get services
kubectl get svc
There is already one service on our cluster: the Kubernetes API itself.
A ClusterIP service is internal, available from the cluster only
This is useful for introspection from within containers
Try to connect to the API:
curl -k https://10.96.0.1
-k is used to skip certificate verification
Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc
The error that we see is expected: the Kubernetes API requires authentication.
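Instead of copying the CLUSTER-IP by hand, it can be looked up programmatically (a hedged sketch; the API service is named kubernetes in the default namespace):
API=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')
curl -k https://$API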
Containers are manipulated through pods
A pod is a group of containers:
running together (on the same node)
sharing resources (RAM, CPU; but also network, volumes)
kubectl get pods
These are not the pods you're looking for. But where are they?!?
kubectl get namespaces
kubectl get namespace
kubectl get ns
You know what ... This kube-system thing looks suspicious.
By default, kubectl uses the default namespace
We can switch to a different namespace with the -n option
List the pods in the kube-system namespace:
kubectl -n kube-system get pods
Ding ding ding ding ding!
The kube-system namespace is used for the control plane.
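If typing -n kube-system gets tiresome, the default namespace of the current context can be changed (a hedged sketch using standard kubectl config commands):
kubectl config set-context $(kubectl config current-context) --namespace=kube-system
kubectl get pods   # now lists kube-system pods
kubectl config set-context $(kubectl config current-context) --namespace=default   # switch back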
etcd is our etcd server
kube-apiserver is the API server
kube-controller-manager and kube-scheduler are other master components
kube-dns is an additional component (not mandatory but super useful, so it's there)
kube-proxy is the (per-node) component managing port mappings and such
weave is the (per-node) component managing the network overlay
the READY column indicates the number of containers in each pod
the pods with a name ending with -node1 are the master components
(they have been specifically "pinned" to the master node)
What about kube-public?
List the pods in the kube-public namespace:
kubectl -n kube-public get pods
What is kube-public keeping?
List the secrets in the kube-public namespace:
kubectl -n kube-public get secrets
kube-public is created by kubeadm & used for security bootstrapping
Setting up Kubernetes
(automatically generated title slide)
How did we set up these Kubernetes clusters that we're using?
We used kubeadm on freshly installed VM instances running Ubuntu 16.04 LTS
Install Docker
Install Kubernetes packages
Run kubeadm init on the master node
Set up Weave (the overlay network)
(that step is just one kubectl apply command; discussed later)
Run kubeadm join on the other nodes (with the token produced by kubeadm init)
Copy the configuration file generated by kubeadm init
Check the prepare VMs README for more details
kubeadm drawbacks
Doesn't set up Docker or any other container engine
Doesn't set up the overlay network
Doesn't set up multi-master (no high availability)
(At least ... not yet!)
"It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
If you like Ansible: kubespray
If you like Terraform: typhoon
You can also learn how to install every component manually, with the excellent tutorial Kubernetes The Hard Way
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
There are also many commercial options available!
For a longer list, check the Kubernetes documentation:
it has a great guide to pick the right solution to set up Kubernetes.
Running our first containers on Kubernetes
(automatically generated title slide)
First things first: we cannot run a container
We are going to run a pod, and in that pod there will be a single container
In that container in the pod, we are going to run a simple ping command
Then we are going to start additional copies of the pod
kubectl run
Let's ping 1.1.1.1, Cloudflare's public DNS resolver:
kubectl run pingpong --image alpine ping 1.1.1.1
OK, what just happened?
Let's look at the resources that were created by kubectl run:
kubectl get all
We should see the following things:
deployment.apps/pingpong (the deployment that we just created)
replicaset.apps/pingpong-xxxxxxxxxx (a replica set created by the deployment)
pod/pingpong-xxxxxxxxxx-yyyyy (a pod created by the replica set)
Note: as of 1.10.1, resource types are displayed in more detail.
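To narrow that listing down to just the objects related to our pingpong deployment, we can filter by the label that kubectl run added (a hedged example):
kubectl get all -l run=pingpong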
A deployment is a high-level construct
allows scaling, rolling updates, rollbacks
multiple deployments can be used together to implement a canary deployment
delegates pods management to replica sets
A replica set is a low-level construct
makes sure that a given number of identical pods are running
allows scaling
rarely used directly
A replication controller is the (deprecated) predecessor of a replica set
Our pingpong deployment
kubectl run created a deployment, deployment.apps/pingpong
NAME                       DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
deployment.apps/pingpong   1        1        1           1          10m
That deployment created a replica set, replicaset.apps/pingpong-xxxxxxxxxx
NAME                                  DESIRED  CURRENT  READY  AGE
replicaset.apps/pingpong-7c8bbcd9bc   1        1        1      10m
That replica set created a pod, pod/pingpong-xxxxxxxxxx-yyyyy
NAME                         READY  STATUS   RESTARTS  AGE
pod/pingpong-7c8bbcd9bc-6c9qz 1/1   Running  0         10m
We'll see later how these folks play together for:
Let's use the kubectl logs command
We will pass either a pod name, or a type/name
(E.g. if we specify a deployment or replica set, it will get the first pod in it)
Unless specified otherwise, it will only show logs of the first container in the pod
(Good thing there's only one in ours!)
View the result of our ping command:
kubectl logs deploy/pingpong
Just like docker logs, kubectl logs supports convenient options:
-f/--follow to stream logs in real time (à la tail -f)
--tail to indicate how many lines you want to see (from the end)
--since to get logs only after a given timestamp
View the latest logs of our ping command:
kubectl logs deploy/pingpong --tail 1 --follow
kubectl scale
Scale our pingpong deployment:
kubectl scale deploy/pingpong --replicas 8
Note: what if we tried to scale replicaset.apps/pingpong-xxxxxxxxxx?
We could! But the deployment would notice it right away, and scale back to the initial level.
The deployment pingpong watches its replica set
The replica set ensures that the right number of pods are running
What happens if pods disappear?
In a separate window, watch the list of pods:
kubectl get pods -w
Destroy a pod:
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
What if we wanted to start a "one-shot" container that doesn't get restarted?
We could use kubectl run --restart=OnFailure
or kubectl run --restart=Never
These commands would create jobs or pods instead of deployments
Under the hood, kubectl run invokes "generators" to create resource descriptions
We could also write these resource descriptions ourselves (typically in YAML),
and create them on the cluster with kubectl apply -f
(discussed later)
With kubectl run --schedule=..., we can also create cronjobs
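Hedged sketches of those variants (the names and schedule below are made up for illustration; flags as documented for kubectl run in this Kubernetes version):
kubectl run oneshot --image=alpine --restart=OnFailure -- ping -c 3 1.1.1.1
kubectl run cronping --image=alpine --restart=OnFailure --schedule="*/5 * * * *" -- ping -c 3 1.1.1.1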
When we specify a deployment name, only one single pod's logs are shown
We can view the logs of multiple pods by specifying a selector
A selector is a logic expression using labels
Conveniently, when you kubectl run somename, the associated objects have a run=somename label
View the last log line of all the pods with the run=pingpong label:
kubectl logs -l run=pingpong --tail 1
Unfortunately, --follow cannot (yet) be used to stream the logs from multiple containers.
If you're wondering whether all these pings are flooding 1.1.1.1: good question!
Don't worry, though:
APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.
It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC!
Exposing containers
(automatically generated title slide)
kubectl expose creates a service for existing pods
A service is a stable address for a pod (or a bunch of pods)
If we want to connect to our pod(s), we need to create a service
Once a service is created, kube-dns will allow us to resolve it by name
(i.e. after creating service hello, the name hello will resolve to something)
There are different types of services, detailed on the following slides:
ClusterIP, NodePort, LoadBalancer, ExternalName
ClusterIP (default type): a virtual internal IP address is allocated for the service, reachable only from within the cluster
NodePort: a port is allocated for the service and made available on all our nodes
These service types are always available.
Under the hood: kube-proxy is using a userland proxy and a bunch of iptables rules.
LoadBalancer: an external load balancer is allocated for the service
(e.g.: a NodePort service is created, and the load balancer sends traffic to that port)
ExternalName: the DNS entry managed by kube-dns will just be a CNAME to a provided record
The LoadBalancer type is currently only available on AWS, Azure, and GCE.
Since ping doesn't have anything to connect to, we'll have to run something else
Start a bunch of ElasticSearch containers:
kubectl run elastic --image=elasticsearch:2 --replicas=7
Watch them being started:
kubectl get pods -w
The -w option "watches" events happening on the specified resources.
Note: please DO NOT call the service search. It would collide with the TLD.
Creating a ClusterIP service
Expose the ElasticSearch HTTP API port:
kubectl expose deploy/elastic --port 9200
Look up which IP address was allocated:
kubectl get svc
You can assign IP addresses to services, but they are still layer 4
(i.e. a service is not an IP address; it's an IP address + protocol + port)
This is caused by the current implementation of kube-proxy
(it relies on mechanisms that don't support layer 3)
As a result: you have to indicate the port number for your service
Running services with arbitrary port (or port ranges) requires hacks
(e.g. host networking mode)
Let's obtain the IP address that was allocated for our service, programmatically:
IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
Send a few requests:
curl http://$IP:9200/
We may see curl: (7) Failed to connect to _IP_ port 9200: Connection refused.
This is normal while the service starts up.
Once it's running, our requests are load balanced across multiple pods.
Sometimes, we want to access our scaled services directly:
if we want to save a tiny little bit of latency (typically less than 1ms)
if we need to connect over arbitrary ports (instead of a few fixed ones)
if we need to communicate over another protocol than UDP or TCP
if we want to decide how to balance the requests client-side
...
In that case, we can use a "headless service"
A headless service is obtained by setting the clusterIP field to None
(Either with --cluster-ip=None, or by providing a custom YAML)
As a result, the service doesn't have a virtual IP address
Since there is no virtual IP address, there is no load balancer either
kube-dns will return the pods' IP addresses as multiple A records
This gives us an easy way to discover all the replicas for a deployment
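A hedged sketch of creating a headless service for the elastic deployment and looking at the DNS records it produces (the elastic-headless and dnstest names are made up):
kubectl expose deploy/elastic --port 9200 --name=elastic-headless --cluster-ip=None
kubectl run dnstest --rm -it --restart=Never --image=alpine -- nslookup elastic-headless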
A service has a number of "endpoints"
Each endpoint is a host + port where the service is available
The endpoints are maintained and updated automatically by Kubernetes
Describe the elastic service:
kubectl describe service elastic
In the output, there will be a line starting with Endpoints:
That line will list a bunch of addresses in host:port format.
When we have many endpoints, our display commands truncate the list
kubectl get endpoints
If we want to see the full list, we can use one of the following commands:
kubectl describe endpoints elastic
kubectl get endpoints elastic -o yaml
These commands will show us a list of IP addresses
These IP addresses should match the addresses of the corresponding pods:
kubectl get pods -l run=elastic -o wide
endpoints not endpoint
endpoints is the only resource that cannot be singular
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
This is because the type itself is plural (unlike every other resource)
There is no endpoint object: type Endpoints struct
The type doesn't represent a single endpoint, but a list of endpoints
In this part, we will:
build images for our app,
ship these images with a registry,
run deployments using these images,
expose these deployments so they can communicate with each other,
expose the web UI so we can access it from outside.
Build on our control node (node1)
Tag images so that they are named $REGISTRY/servicename
Upload them to a registry
Create deployments using the images
Expose (with a ClusterIP) the services that need to communicate
Expose (with a NodePort) the WebUI
We could use the Docker Hub
Or a service offered by our cloud provider (ACR, GCR, ECR...)
Or we could just self-host that registry
We'll self-host the registry because it's the most generic solution for this workshop.
We need to run a registry:2 container
(make sure you specify tag :2 to run the new version!)
It will store images and layers to the local filesystem
(but you can add a config file to use S3, Swift, etc.)
Docker requires TLS when communicating with the registry
except for registries on 127.0.0.0/8 (i.e. localhost)
or when using the Engine flag --insecure-registry
Our strategy: publish the registry container on a NodePort,
so that it's available through 127.0.0.1:xxxxx on each node
Deploying a self-hosted registry
(automatically generated title slide)
Create the registry service:
kubectl run registry --image=registry:2
Expose it on a NodePort:
kubectl expose deploy/registry --port=5000 --type=NodePort
View the service details:
kubectl describe svc/registry
Get the port number programmatically:
NODEPORT=$(kubectl get svc/registry -o json | jq .spec.ports[0].nodePort)
REGISTRY=127.0.0.1:$NODEPORT
View the repositories currently held in our registry by querying /v2/_catalog:
curl $REGISTRY/v2/_catalog
We should see:
{"repositories":[]}
Make sure we have the busybox image, and retag it:
docker pull busybox
docker tag busybox $REGISTRY/busybox
Push it:
docker push $REGISTRY/busybox
curl $REGISTRY/v2/_catalog
The curl command should now output:
{"repositories":["busybox"]}
Go to the stacks directory:
cd ~/container.training/stacks
Build and push the images:
export REGISTRY
export TAG=v0.1
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
Let's have a look at the dockercoins.yml file while this is building and pushing.
version: "3"services: rng: build: dockercoins/rng image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest} deploy: mode: global ... redis: image: redis ... worker: build: dockercoins/worker image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest} ... deploy: replicas: 10
Just in case you were wondering ... Docker "services" are not Kubernetes "services".
Avoiding the latest tag
Make sure that you've set the TAG variable properly!
If you don't, the tag will default to latest
The problem with latest: nobody knows what it points to!
the latest commit in the repo?
the latest commit in some branch? (Which one?)
the latest tag?
some random version pushed by a random team member?
If you keep pushing the latest tag, how do you roll back?
Image tags should be meaningful, i.e. correspond to code branches, tags, or hashes
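One common convention (a hedged sketch, not required by this workshop) is to derive the tag from the current git commit:
export TAG=$(git rev-parse --short HEAD)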
Deploy redis:
kubectl run redis --image=redis
Deploy everything else:
for SERVICE in hasher rng webui worker; do
  kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
After waiting for the deployment to complete, let's look at the logs!
(Hint: use kubectl get deploy -w to watch deployment events)
kubectl logs deploy/rng
kubectl logs deploy/worker
🤔 rng is fine ... But not worker.
💡 Oh right! We forgot to expose.
Exposing services internally
(automatically generated title slide)
Three deployments need to be reachable by others: hasher, redis, rng
worker doesn't need to be exposed
webui will be dealt with later
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
worker has an infinite loop, that retries 10 seconds after an error
Stream the worker's logs:
kubectl logs deploy/worker --follow
(Give it about 10 seconds to recover)
We should now see the worker, well, working happily.
Exposing services for external access
(automatically generated title slide)
Now we would like to access the Web UI
We will expose it with a NodePort
(just like we did for the registry)
Create a NodePort service for the Web UI:
kubectl expose deploy/webui --type=NodePort --port=80
Check the port that was allocated:
kubectl get svc
Alright, we're back to where we started, when we were running on a single node!
The Kubernetes dashboard
(automatically generated title slide)
Kubernetes resources can also be viewed with a web dashboard
We are going to deploy that dashboard with three commands:
1) actually run the dashboard
2) bypass SSL for the dashboard
3) bypass authentication for the dashboard
There is an additional step to make the dashboard available from outside (we'll get to that)
Yes, this will open our cluster to all kinds of shenanigans. Don't do this at home.
We need to create a deployment and a service for the dashboard
But also a secret, a service account, a role and a role binding
All these things can be defined in a YAML file and created with kubectl apply -f
kubectl apply -f https://goo.gl/Qamqab
The goo.gl URL expands to:
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
The Kubernetes dashboard uses HTTPS, but we don't have a certificate
Recent versions of Chrome (63 and later) and Edge will refuse to connect
(You won't even get the option to ignore a security warning!)
We could (and should!) get a certificate, e.g. with Let's Encrypt
... But for convenience, for this workshop, we'll forward HTTP to HTTPS
Do not do this at home, or even worse, at work!
We are going to run socat, telling it to accept TCP connections and relay them over SSL
Then we will expose that socat instance with a NodePort service
For convenience, these steps are neatly encapsulated into another YAML file
kubectl apply -f https://goo.gl/tA7GLz
The goo.gl URL expands to:
https://gist.githubusercontent.com/jpetazzo/c53a28b5b7fdae88bc3c5f0945552c04/raw/da13ef1bdd38cc0e90b7a4074be8d6a0215e1a65/socat.yaml
All our dashboard traffic is now clear-text, including passwords!
kubectl -n kube-system get svc socat
You'll want the 3xxxx port.
The dashboard will then ask you which authentication you want to use.
We have three authentication options at this point:
token (associated with a role that has appropriate permissions)
kubeconfig (e.g. using the ~/.kube/config file from node1)
"skip" (use the dashboard "service account")
Let's use "skip": we get a bunch of warnings and don't see much
The dashboard documentation explains how to do this
We just need to load another YAML file!
Grant admin privileges to the dashboard so we can see our resources:
kubectl apply -f https://goo.gl/CHsLTA
Reload the dashboard and enjoy!
By the way, we just added a backdoor to our Kubernetes cluster!
We took a shortcut by forwarding HTTP to HTTPS inside the cluster
Let's expose the dashboard over HTTPS!
The dashboard is exposed through a ClusterIP service (internal traffic only)
We will change that into a NodePort service (accepting outside traffic)
kubectl edit service kubernetes-dashboard
NotFound?!? Y U NO WORK?!?
The kubernetes-dashboard service
If we look at the YAML that we loaded before, we'll get a hint
The dashboard was created in the kube-system namespace
Edit the service:
kubectl -n kube-system edit service kubernetes-dashboard
Change ClusterIP to NodePort, save, and exit
Check the port that was assigned with kubectl -n kube-system get services
Connect to https://oneofournodes:3xxxx/ (yes, https)
The steps that we just showed you are for educational purposes only!
If you do that on your production cluster, people can and will abuse it
For an in-depth discussion about securing the dashboard,
check this excellent post on Heptio's blog
Security implications of kubectl apply
(automatically generated title slide)
When we do kubectl apply -f <URL>, we create arbitrary resources
Resources can be evil; imagine a deployment that ...
starts bitcoin miners on the whole cluster
hides in a non-default namespace
bind-mounts our nodes' filesystem
inserts SSH keys in the root account (on the node)
encrypts our data and ransoms it
☠️☠️☠️
kubectl apply is the new curl | sh
curl | sh is convenient
It's safe if you use HTTPS URLs from trusted sources
kubectl apply -f is convenient
It's safe if you use HTTPS URLs from trusted sources
Example: the official setup instructions for most pod networks
It introduces new failure modes (like if you try to apply YAML from a link that's no longer valid)
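A safer (hedged) workflow is to download the manifest, review it, and only then apply the local copy:
curl -fsSL https://goo.gl/Qamqab -o dashboard.yaml
less dashboard.yaml              # review what we are about to create
kubectl apply -f dashboard.yaml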
Scaling a deployment
(automatically generated title slide)
Let's keep an eye on our worker deployment:
kubectl get pods -w
kubectl get deployments -w
Now, increase the number of worker replicas:
kubectl scale deploy/worker --replicas=10
After a few seconds, the graph in the web UI should show up.
(And peak at 10 hashes/second, just like when we were running on a single one.)
Daemon sets
(automatically generated title slide)
We want to scale rng in a way that is different from how we scaled worker
We want one (and exactly one) instance of rng per node
What if we just scale up deploy/rng to the number of nodes?
nothing guarantees that the rng containers will be distributed evenly
if we add nodes later, they will not automatically run a copy of rng
if we remove (or reboot) a node, one rng container will restart elsewhere
Instead of a deployment, we will use a daemonset
Daemon sets are great for cluster-wide, per-node processes:
kube-proxy
weave (our overlay network)
monitoring agents
hardware management tools (e.g. SCSI/FC HBA agents)
etc.
They can also be restricted to run only on some nodes
Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets
More precisely: it doesn't have a subcommand to create a daemon set
But any kind of resource can always be created by providing a YAML description:
kubectl apply -f foo.yaml
How do we create the YAML file for our daemon set?
option 1: read the docs
option 2: vi our way out of it
Option 2: start from the existing rng resource
Dump the rng resource in YAML:
kubectl get deploy/rng -o yaml --export >rng.yml
Edit rng.yml
Note: --export will remove "cluster-specific" information, i.e.:
What if we just changed the kind field?
(It can't be that easy, right?)
Change kind: Deployment to kind: DaemonSet
Save, quit
Try to create our new resource:
kubectl apply -f rng.yml
We all knew this couldn't be that easy, right!
error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ...]
Obviously, it doesn't make sense to specify a number of replicas for a daemon set
Workaround: fix the YAML
remove the replicas field
remove the strategy field (which defines the rollout mechanism for a deployment)
remove the status: {} line at the end
Or, we could also ...
--force, Luke
We could also tell Kubernetes to ignore these errors and try anyway
The --force flag's actual name is --validate=false
kubectl apply -f rng.yml --validate=false
🎩✨🐇
Wait ... Now, can it be that easy?
Did we transform our deployment into a daemonset?
kubectl get all
We have two resources called rng:
the deployment that was existing before
the daemon set that we just created
We also have one too many pods.
(The pod corresponding to the deployment still exists.)
deploy/rng and ds/rng
You can have different resource types with the same name
(i.e. a deployment and a daemon set both named rng)
We still have the old rng deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
deployment.apps/rng  1        1        1           1          18m
But now we have the new rng daemon set as well
NAME                DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
daemonset.apps/rng  2        2        2      2           2          <none>         9s
If we check with kubectl get pods, we see:
one pod for the deployment (named rng-xxxxxxxxxx-yyyyy)
one pod per node for the daemon set (named rng-zzzzz)
NAME                  READY  STATUS   RESTARTS  AGE
rng-54f57d4d49-7pt82  1/1    Running  0         11m
rng-b85tm             1/1    Running  0         25s
rng-hfbrr             1/1    Running  0         25s
[...]
The daemon set created one pod per node, except on the master node.
The master node has taints preventing pods from running there.
(To schedule a pod on this node anyway, the pod will require appropriate tolerations.)
(Off by one? We don't run these pods on the node hosting the control plane.)
Let's check the logs of all these rng pods
All these pods have a run=rng label:
the first one, because that's what kubectl run does
the other ones, because their spec was copied from the first one
Therefore, we can query everybody's logs using that run=rng selector
Check the last log line of all the pods with the run=rng label:
kubectl logs -l run=rng --tail 1
It appears that all the pods are serving requests at the moment.
The rng service is load balancing requests to a set of pods
This set of pods is defined as "pods having the label run=rng"
Check the rng service definition:
kubectl describe service rng
When we created additional pods with this label, they were automatically detected by svc/rng and added as endpoints to the associated load balancer.
What would happen if we removed that pod, with kubectl delete pod ...?

The replicaset would re-create it immediately.

What would happen if we removed the run=rng label from that pod?

The replicaset would re-create it immediately.

... Because what matters to the replicaset is the number of pods matching that selector.

But but but ... Don't we have more than one pod with run=rng now?

The answer lies in the exact selector used by the replicaset ...
Let's look at the rng deployment and the associated replica set

Show detailed information about the rng deployment:

kubectl describe deploy rng

Show detailed information about the rng replica set:

(The second command doesn't require you to get the exact name of the replica set)

kubectl describe rs rng-yyyy
kubectl describe rs -l run=rng
The replica set selector also has a pod-template-hash, unlike the pods in our daemon set.
Updating a service through labels and selectors
(automatically generated title slide)
What if we want to drop the rng deployment from the load balancer?

Option 1: destroy it

Option 2:

add an extra label to the daemon set

update the service selector to refer to that label
Of course, option 2 offers more learning opportunities. Right?
We will update the daemon set "spec"

Option 1:

edit the rng.yml file that we used earlier

load the new definition with kubectl apply

Option 2:

kubectl edit
If you feel like you got this💕🌈, feel free to try directly.
We've included a few hints on the next slides for your convenience!
Reminder: a daemon set is a resource that creates more resources!
There is a difference between:
the label(s) of a resource (in the metadata block in the beginning)

the selector of a resource (in the spec block)

the label(s) of the resource(s) created by the first resource (in the template block)
You need to update the selector and the template (metadata labels are not mandatory)
The template must match the selector
(i.e. the resource will refuse to create resources that it will not select)
Let's add a label isactive: yes

In YAML, yes should be quoted; i.e. isactive: "yes"

Update the daemon set to add isactive: "yes" to the selector and template label:

kubectl edit daemonset rng

Update the service to add isactive: "yes" to its selector:

kubectl edit service rng
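If it helps, here is a rough sketch of what the relevant parts of the daemon set should look like after the edit (the field layout is assumed from a typical daemon set manifest; your file may differ slightly):

spec:
  selector:
    matchLabels:
      run: rng
      isactive: "yes"
  template:
    metadata:
      labels:
        run: rng
        isactive: "yes"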
Check the logs of all the run=rng pods to confirm that exactly one per node is now active:

kubectl logs -l run=rng --tail 1

The timestamps should give us a hint about how many pods are currently receiving traffic.

Look at the pods that we have right now:

kubectl get pods
The pods of the deployment and the "old" daemon set are still running
We are going to identify them programmatically
List the pods with run=rng but without isactive=yes:

kubectl get pods -l run=rng,isactive!=yes
Remove these pods:
kubectl delete pods -l run=rng,isactive!=yes
$ kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
rng-54f57d4d49-7pt82   1/1     Terminating   0          51m
rng-54f57d4d49-vgz9h   1/1     Running       0          22s
rng-b85tm              1/1     Terminating   0          39m
rng-hfbrr              1/1     Terminating   0          39m
rng-vplmj              1/1     Running       0          7m
rng-xbpvg              1/1     Running       0          7m
[...]

The extra pods (noted Terminating above) are going away

... But a new one (rng-54f57d4d49-vgz9h above) was created immediately to replace it!
Remember, the deployment still exists, and makes sure that one pod is up and running
If we delete the pod associated with the deployment, it is recreated automatically
Remove the rng deployment:

kubectl delete deployment rng
$ kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
rng-54f57d4d49-vgz9h   1/1     Terminating   0          4m
rng-vplmj              1/1     Running       0          11m
rng-xbpvg              1/1     Running       0          11m
[...]
Ding, dong, the deployment is dead! And the daemon set lives on.
When we changed the definition of the daemon set, it immediately created new pods. We had to remove the old ones manually.
How could we have avoided this?
By adding the isactive: "yes" label to the pods before changing the daemon set!

This can be done programmatically with kubectl patch:

PATCH='
metadata:
  labels:
    isactive: "yes"
'
kubectl get pods -l run=rng -l controller-revision-hash -o name |
  xargs kubectl patch -p "$PATCH"
When a pod is misbehaving, we can delete it: another one will be recreated
But we can also change its labels
It will be removed from the load balancer (it won't receive traffic anymore)
Another pod will be recreated immediately
But the problematic pod is still here, and we can inspect and debug it
We can even re-add it to the rotation if necessary
(Very useful to troubleshoot intermittent and elusive bugs)
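For example, using one of the pod names from earlier (any pod name of yours will do), we could pull a pod out of rotation by removing the label that the service selects on, and put it back later:

kubectl label pod rng-vplmj isactive-
kubectl label pod rng-vplmj isactive=yes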
Conversely, we can add pods matching a service's selector
These pods will then receive requests and serve traffic
Examples:
one-shot pod with all debug flags enabled, to collect logs
pods created automatically, but added to rotation in a second step
(by setting their label accordingly)
This gives us building blocks for canary and blue/green deployments
Rolling updates
(automatically generated title slide)
By default (without rolling updates), when a scaled resource is updated:
new pods are created
old pods are terminated
... all at the same time
if something goes wrong, ¯\_(ツ)_/¯
With rolling updates, when a resource is updated, it happens progressively
Two parameters determine the pace of the rollout: maxUnavailable and maxSurge

They can be specified in absolute number of pods, or as a percentage of the replicas count
At any given time ...

there will always be at least replicas - maxUnavailable pods available

there will never be more than replicas + maxSurge pods in total

there will therefore be up to maxUnavailable + maxSurge pods being updated (worked example below)
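A quick worked example: with replicas=10, maxUnavailable=1, and maxSurge=1 (the settings we will see on our worker deployment below), there will always be at least 10 - 1 = 9 pods available, never more than 10 + 1 = 11 pods in total, and therefore at most 1 + 1 = 2 pods being updated at any given time.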
We have the possibility to rollback to the previous version
(if the update fails or is unsatisfactory in any way)
Check the rolling update parameters of our deployments, with kubectl and jq:

kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"
As of Kubernetes 1.8, we can do rolling updates with:
deployments, daemonsets, statefulsets
Editing one of these resources will automatically result in a rolling update
Rolling updates can be monitored with the kubectl rollout subcommand
Let's build a new version of the worker service

Go to the stacks directory:

cd ~/container.training/stacks

Edit dockercoins/worker/worker.py, update the sleep line to sleep 1 second

Build a new tag and push it to the registry:

#export REGISTRY=localhost:3xxxx
export TAG=v0.2
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
Now let's roll out the new worker service

Watch what happens to the pods, replica sets, and deployments (e.g. in separate terminals):

kubectl get pods -w
kubectl get replicasets -w
kubectl get deployments -w

Update worker either with kubectl edit, or by running:

kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
That rollout should be pretty quick. What shows in the web UI?
At first, it looks like nothing is happening (the graph remains at the same level)
According to kubectl get deploy -w, the deployment was updated really quickly

But kubectl get pods -w tells a different story

The old pods are still here, and they stay in Terminating state for a while
Eventually, they are terminated; and then the graph decreases significantly
This delay is due to the fact that our worker doesn't handle signals
Kubernetes sends a "polite" shutdown request to the worker, which ignores it
After a grace period, Kubernetes gets impatient and kills the container
(The grace period is 30 seconds, but can be changed if needed)
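If we wanted a different grace period, it is controlled by terminationGracePeriodSeconds in the pod template. A minimal sketch of the relevant fragment (with a hypothetical 10-second value), to be applied e.g. with kubectl edit deploy worker:

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 10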
Update worker by specifying a non-existent image:

export TAG=v0.3
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG

Check what's going on:

kubectl rollout status deploy worker
Our rollout is stuck. However, the app is not dead (just 10% slower).

Why is our app 10% slower?

Because maxUnavailable=1, so the rollout terminated 1 replica out of 10 available

Okay, but why do we see 2 new replicas being rolled out?

Because maxSurge=1, so in addition to replacing the terminated one, the rollout is also starting one more
We start with 10 pods running for the worker deployment

Current settings: maxUnavailable=1 and maxSurge=1

When we start the rollout:

one replica is taken down (as permitted by maxUnavailable=1)

two new replicas are started: one to replace it, plus one extra (as permitted by maxSurge=1)

these new replicas use the broken v0.3 image, so they never become ready

Now we have 9 replicas up and running, and 2 being deployed

Our rollout is stuck at this point!
We could push some v0.3 image

(the pod retry logic will eventually catch it and the rollout will proceed)
Or we could invoke a manual rollback
kubectl rollout undo deploy worker
kubectl rollout status deploy worker
We want to:

go back to the v0.1 image

be a bit more conservative with the rollout parameters

The corresponding changes can be expressed in the following YAML snippet:

spec:
  template:
    spec:
      containers:
      - name: worker
        image: $REGISTRY/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 3
  minReadySeconds: 10
We could use kubectl edit deployment worker

But we could also use kubectl patch with the exact YAML shown before

kubectl patch deployment worker -p "
spec:
  template:
    spec:
      containers:
      - name: worker
        image: $REGISTRY/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 3
  minReadySeconds: 10
"
kubectl rollout status deployment worker
kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
Accessing logs from the CLI
(automatically generated title slide)
The kubectl logs command has limitations:
it cannot stream logs from multiple pods at a time
when showing logs from multiple pods, it mixes them all together
We are going to see how to do it better
We could (if we were so inclined) write a program or script that would:

take a selector as an argument

enumerate all pods matching that selector (with kubectl get -l ...)

fork one kubectl logs --follow ... command per container

annotate the logs (the output of each kubectl logs ... process) with their origin

preserve ordering by using kubectl logs --timestamps ... and merge the output
We could do it, but thankfully, others did it for us already!
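(Purely for illustration, here is a rough, unpolished sketch of what such a script could look like; the selector is whatever you pass as the first argument, and merging/ordering is left to the terminal:)

#!/bin/sh
# Usage: ./taillogs.sh run=rng
# Tails the logs of every pod matching the given label selector,
# prefixing each line with the pod name so we can tell them apart.
SELECTOR=$1
for POD in $(kubectl get pods -l "$SELECTOR" -o name); do
  kubectl logs --follow --timestamps "$POD" | sed "s|^|$POD |" &
done
wait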
Stern is an open source project by Wercker.
From the README:
Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.
The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.
Exactly what we need!
sudo curl -L -o /usr/local/bin/stern \
  https://github.com/wercker/stern/releases/download/1.6.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
These installation instructions will work on our clusters, since they are Linux amd64 VMs.
However, you will have to adapt them if you want to install Stern on your local machine.
There are two ways to specify the pods for which we want to see the logs:
with -l followed by a selector expression (like with many kubectl commands)

with a "pod query", i.e. a regex used to match pod names

These two ways can be combined if necessary

View the logs of all the pods whose name contains rng:

stern rng
The --tail N flag shows the last N lines for each container

(Instead of showing the logs since the creation of the container)

The -t / --timestamps flag shows timestamps

The --all-namespaces flag is self-explanatory

View the latest log line of the weave system containers:

stern --tail 1 --timestamps --all-namespaces weave
When specifying a selector, we can omit the value for a label

This will match all objects having that label (regardless of the value)

Everything created with kubectl run has a label run

We can use that property to view the logs of all the pods created with kubectl run

View the logs of everything created with kubectl run:

stern -l run
Managing stacks with Helm
(automatically generated title slide)
We created our first resources with kubectl run, kubectl expose ...

We have also created resources by loading YAML files with kubectl apply -f
For larger stacks, managing thousands of lines of YAML is unreasonable
These YAML bundles need to be customized with variable parameters
(E.g.: number of replicas, image version to use ...)
It would be nice to have an organized, versioned collection of bundles
It would be nice to be able to upgrade/rollback these bundles carefully
Helm is an open source project offering all these things!
helm is a CLI tool

tiller is its companion server-side component
A "chart" is an archive containing templatized YAML bundles
Charts are versioned
Charts can be stored on private or public repositories
Install the helm CLI; then use it to deploy tiller

Install the helm CLI:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

Deploy tiller:

helm init

Add the helm completion:

. <(helm completion $(basename $SHELL))
Helm's permission model requires us to tweak permissions

In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings

Grant the cluster-admin role to the kube-system:default service account:

kubectl create clusterrolebinding add-on-cluster-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default

(Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.)
A public repo is pre-configured when installing Helm
We can view available charts with helm search (and an optional keyword)
View all available charts:
helm search
View charts related to prometheus:

helm search prometheus
Most charts use LoadBalancer service types by default
Most charts require persistent volumes to store data
We need to relax these requirements a bit
helm install stable/prometheus \
  --set server.service.type=NodePort \
  --set server.persistentVolume.enabled=false
Where do these --set options come from?

helm inspect shows details about a chart (including available options)

Look at the details of stable/prometheus:

helm inspect stable/prometheus
The chart's metadata includes a URL to the project's home page.
(Sometimes it conveniently points to the documentation for the chart.)
We are going to show a way to create a very simplified chart
In a real chart, lots of things would be templatized
(Resource names, service types, number of replicas...)
Create a sample chart:
helm create dockercoins
Move away the sample templates and create an empty template directory:
mv dockercoins/templates dockercoins/default-templates
mkdir dockercoins/templates

Export the YAML definitions of our resources into the templates directory:

while read kind name; do
  kubectl get -o yaml --export $kind $name > dockercoins/templates/$name-$kind.yaml
done <<EOF
deployment worker
deployment hasher
daemonset rng
deployment webui
deployment redis
service hasher
service rng
service webui
service redis
EOF
Deploy the chart (in the command below, dockercoins is the path to the chart):

helm install dockercoins
Since the application is already deployed, this will fail:
Error: release loitering-otter failed: services "hasher" already exists
To avoid naming conflicts, we will deploy the application in another namespace
Namespaces
(automatically generated title slide)
We cannot have two resources with the same name

(Or can we...?)

We cannot have two resources of the same type with the same name

(But it's OK to have a rng service, a rng deployment, and a rng daemon set!)

We cannot have two resources of the same type with the same name in the same namespace

(But it's OK to have e.g. two rng services in different namespaces!)

In other words: the tuple (type, name, namespace) needs to be unique

(In the resource YAML, the type is called Kind)
If we deploy a cluster with kubeadm, we have three namespaces:

default (for our applications)

kube-system (for the control plane)

kube-public (contains one secret used for cluster discovery)

If we deploy differently, we may have different namespaces

We can create namespaces with a very minimal YAML, e.g.:

kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
If we are using a tool like Helm, it will create namespaces automatically
We can pass a -n or --namespace flag to most kubectl commands:

kubectl -n blue get svc
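(As a related aside, if we want to see resources across every namespace at once, most kubectl get commands also accept an --all-namespaces flag, e.g. kubectl get pods --all-namespaces.)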
We can also use contexts
A context is a (user, cluster, namespace) tuple
We can manipulate contexts with the kubectl config command
Let's create a context for the blue namespace

View existing contexts to see the cluster name and the current user:

kubectl config get-contexts

Create a new context:

kubectl config set-context blue --namespace=blue \
  --cluster=kubernetes --user=kubernetes-admin
We have created a context; but this is just some configuration values.
The namespace doesn't exist yet.
Use the blue context:

kubectl config use-context blue

Deploy DockerCoins:

helm install dockercoins
In the last command line, dockercoins is just the local path where we created our Helm chart before.
Retrieve the port number allocated to the webui service:

kubectl get svc webui
Point our browser to http://X.X.X.X:3xxxx
Note: it might take a minute or two for the app to be up and running.
Namespaces do not provide isolation
A pod in the green namespace can communicate with a pod in the blue namespace

A pod in the default namespace can communicate with a pod in the kube-system namespace
kube-dns uses a different subdomain for each namespace
Example: from any pod in the cluster, you can connect to the Kubernetes API with:
https://kubernetes.default.svc.cluster.local:443/
Actual isolation is implemented with network policies
Network policies are resources (like deployments, services, namespaces...)
Network policies specify which flows are allowed:
between pods
from pods to the outside world
and vice-versa
We can create as many network policies as we want
Each network policy has:
a pod selector: "which pods are targeted by the policy?"
lists of ingress and/or egress rules: "which peers and ports are allowed or blocked?"
If a pod is not targeted by any policy, traffic is allowed by default
If a pod is targeted by at least one policy, traffic must be allowed explicitly
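As a purely illustrative sketch (the label values here are made up for the example), a policy that only allows ingress to pods labeled run=rng from pods labeled role=frontend could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-rng
spec:
  podSelector:
    matchLabels:
      run: rng
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend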
This remains a high-level overview of network policies

For more details, check the Kubernetes documentation on network policies
Next steps
(automatically generated title slide)
Alright, how do I get started and containerize my apps?
Suggested containerization checklist:
And then it is time to look at orchestration!
Namespaces let you run multiple identical stacks side by side

Two namespaces (e.g. blue and green) can each have their own redis service

Each of the two redis services has its own ClusterIP

kube-dns creates two entries, mapping to these two ClusterIP addresses:

redis.blue.svc.cluster.local and redis.green.svc.cluster.local

Pods in the blue namespace get a search suffix of blue.svc.cluster.local

As a result, resolving redis from a pod in the blue namespace yields the "local" redis

This does not provide isolation! That would be the job of network policies.
As a first step, it is wiser to keep stateful services outside of the cluster
Exposing them to pods can be done with multiple solutions:

ExternalName services (redis.blue.svc.cluster.local will be a CNAME record; see the sketch after this list)

ClusterIP services with explicit Endpoints (instead of letting Kubernetes generate the endpoints from a selector)

Ambassador services (application-level proxies that can provide credentials injection and more)
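For the first option, here is a minimal sketch of an ExternalName service (the external host name redis.example.com is made up for the example):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  type: ExternalName
  externalName: redis.example.com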
If you really want to host stateful services on Kubernetes, you can look into:
volumes (to carry persistent data)
storage plugins
persistent volume claims (to ask for specific volume characteristics)
stateful sets (pods that are not ephemeral)
Services are layer 4 constructs
HTTP is a layer 7 protocol
It is handled by ingresses (a different resource kind)
Ingresses allow routing HTTP requests (e.g. by host name or path) to services, and typically handle TLS termination as well

Check out e.g. Træfik
Logging is delegated to the container engine
Metrics are typically handled with Prometheus
(Heapster is a popular add-on)
Two constructs are particularly useful: secrets and config maps
They allow us to expose arbitrary information to our containers
Avoid storing configuration in container images
(There are some exceptions to that rule, but it's generally a Bad Idea)
Never store sensitive information in container images
(It's the container equivalent of the password on a post-it note on your screen)
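As a quick illustration (the names and values here are made up), config maps and secrets can be created directly from literals and then consumed by pods as environment variables or files:

kubectl create configmap app-config --from-literal=http_port=8080
kubectl create secret generic app-creds --from-literal=redis_password=changeme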
The best deployment tool will vary, depending on:
A few examples:
Sorry Star Trek fans, this is not the federation you're looking for!
(If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)
Kubernetes master operation relies on etcd
etcd uses the Raft protocol
Raft recommends low latency between nodes
What if our cluster spreads to multiple regions?
Break it down in local clusters
Regroup them in a cluster federation
Synchronize resources across clusters
Discover resources across clusters
I've put this last, but it's pretty important!
How do you on-board a new developer?
What do they need to install to get a dev stack?
How does a code change make it from dev to prod?
How does someone add a component to a stack?
Links and resources
(automatically generated title slide)
Kubernetes Community - Slack, Google Groups, meetups
These slides (and future updates) are on → http://container.training/