
Getting Started with Installing Kubernetes On-Prem


Let's get you started on your Kubernetes journey by installing Kubernetes on premises in virtual machines.

Kubernetes is a distributed system. You will be creating a cluster with a master node that is in charge of all operations in your cluster, and in this walkthrough we'll create three workers which will run our applications. This cluster topology is, by no means, production ready; if you're looking for production cluster builds, check out the Kubernetes documentation. The primary components that need high availability in a Kubernetes cluster are the API Server, which controls the state of the cluster, and the etcd database, which stores the persistent state of the cluster. You can learn more about Kubernetes cluster components in the Kubernetes documentation.

In our demonstration here, the master is where the API Server, etcd, and the other control plane functions will live. The workers will be joined to the cluster and run our application workloads.

Get your infrastructure sorted

I'm using four Ubuntu virtual machines in VMware Fusion on my Mac, each with 2 vCPUs and 2GB of RAM, running Ubuntu 16.04.5. Ubuntu 18 requires a slightly different install, in which you add the Docker repository and then install Docker from it; the instructions below get Docker from Ubuntu's repository.

  • k8s-master – 172.16.94.15
  • k8s-node1 – DHCP
  • k8s-node2 – DHCP
  • k8s-node3 – DHCP

Ensure that each host has a unique name and that all nodes have network reachability to each other. Take note of the IPs, because you will need to log into each node with SSH. If you need assistance getting your environment ready, check out my training on Pluralsight to get you started! I have courses on installation and command line basics, all the way up through advanced topics on networking and performance.
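
As a quick sanity check before moving on, confirm each node's name and reachability from the master. The worker IP below is just an example; substitute the addresses your nodes actually received from DHCP:

demo@k8s-master1:~$ hostname
k8s-master
demo@k8s-master1:~$ ping -c 1 172.16.94.101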

Another requirement, which Klaus Aschenbrenner reminded me of, is that you need to disable swap on any system that will run the kubelet, which in our case is all systems. To do so, turn swap off with sudo swapoff -a and edit /etc/fstab, removing or commenting out the swap volume entry.
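
Here's what that looks like on one node. The sed pattern below is an assumption about your /etc/fstab layout: it comments out any line containing a swap entry and keeps a .bak backup, so double-check the file afterwards:

demo@k8s-master1:~$ sudo swapoff -a
demo@k8s-master1:~$ sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab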

Overview of the cluster creation process

  • Install Kubernetes packages on all nodes
    • Add Kubernetes’ apt repositories
    • Install the required software for Kubernetes
  • Download deployment files for your pod network
  • Create a Kubernetes cluster on the master
    • We’re going to use a utility called kubeadm to create our cluster with a basic configuration
  • Install a Pod Network
  • Join our three worker nodes to our cluster

Install Kubernetes

Let's start off by installing Kubernetes onto all of the nodes in our cluster. This is going to require logging into each server via SSH, adding the Kubernetes apt repositories, and installing the correct packages. Perform the following tasks on ALL nodes in your cluster: the master and the three workers. If you add more nodes later, you will need to install these packages on those nodes too.

Add the gpg key for the Kubernetes Apt repository to your local system

demo@k8s-master1:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -


Add the Kubernetes Apt repository to your local repository locations

demo@k8s-master1:~$ sudo bash -c 'cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

> deb https://apt.kubernetes.io/ kubernetes-xenial main

> EOF'


Next, we’ll update our apt package lists

demo@k8s-master1:~$ sudo apt-get update

 
Install the required packages
 

demo@k8s-master1:~$ sudo apt-get install -y kubelet kubeadm kubectl docker.io

 
Then we need to tell apt not to update these packages. In Kubernetes, cluster upgrades will be managed by…you guessed it…Kubernetes.

demo@k8s-master1:~$ sudo apt-mark hold kubelet kubeadm kubectl docker.io

 
Here's what you just installed (a quick version check follows the list):
  • kubelet – On each node in the cluster, this is in charge of starting and stopping pods in response to the state defined on the API Server on the master
  • kubeadm – The primary command line utility for creating your cluster
  • kubectl – The primary command line utility for working with your cluster
  • Docker – Remember that Kubernetes is a container orchestrator, so we'll need a container runtime to run your containers. We're using Docker here; you can use other container runtimes if required
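
To confirm everything landed, you can check each tool's version on any node; this is optional, and your version numbers will differ from mine:

demo@k8s-master1:~$ kubeadm version
demo@k8s-master1:~$ kubectl version --client
demo@k8s-master1:~$ kubelet --version
demo@k8s-master1:~$ docker --version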

Download the YAML files for your Pod Network

Now, only on the master, let's download the YAML deployment files for your Pod network and get our cluster created. Networking in Kubernetes is different than what you'd expect. For Pods on different nodes to be able to communicate with each other on the same IP network, you'll want to create a Pod network, which is essentially an overlay network that gives you a uniform address space for Pods to operate in. The decision of which Pod network to use, or even whether you need one, is very dependent on your local or cloud infrastructure. For this demo, I'm going to use the Calico Pod network overlay. The code below will download the Pod definition manifests in YAML, and we'll deploy those into our cluster. This will start up a container on our system in what's called a DaemonSet. A DaemonSet is a Kubernetes object that will start the specified container on all or some of the nodes in the cluster. In this case, the Calico network Pod will be deployed on all nodes in our cluster. So as we join nodes, you might see some delay in nodes becoming ready…this is because the container is being pulled and started on the node.
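
Once the network is deployed (we'll do that after the cluster is initialized), you can see the DaemonSet and how many nodes it's running on with:

demo@k8s-master1:~$ kubectl get daemonsets --all-namespaces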
 
Download the YAML for the Pod network
 

demo@k8s-master1:~$ wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

demo@k8s-master1:~$ wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml


If you need to change the address range of your Pod network, edit calico.yaml: look for the name: CALICO_IPV4POOL_CIDR entry and set its value: to your desired CIDR range. It's 192.168.0.0/16 by default.
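
A quick way to find that setting is to grep the file; the -A1 flag prints the value: line that follows the name (your indentation may vary):

demo@k8s-master1:~$ grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"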

Creating a Kubernetes Cluster

Now we're ready to create our Kubernetes cluster. We're going to use kubeadm to help us get this done; it's a community-based tool that does a lot of the heavy lifting for you.
 
To create the cluster, run the following on the master. Here we're specifying a CIDR range to match the one in our calico.yaml file.
 

demo@k8s-master1:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16


What’s happening behind the scenes with kubeadm init:
  • Creates a certificate authority – Kubernetes uses certificates to secure communication between components and also to verify the identity of hosts in the cluster
  • Creates configuration files – On the master, this will create configuration files for various Kubernetes cluster components
  • Pulls control plane images – the services implementing the cluster components are deployed into the cluster as containers. Very cool! You can, of course, run these as local system daemons on the hosts, but Kubernetes suggests keeping them inside containers
  • Bootstraps the control plane pods – starts up the pods and creates static Pod manifests on the master so the control plane starts automatically when the master node starts up
  • Taints the master to run just system pods – this means the master will run (schedule) only system Pods, not user Pods. This is ideal for production. In testing you may want to untaint the master; you'll really want to do this if you're running a single node cluster (the command is shown right after this list). See the Kubernetes documentation for details.
  • Generates a bootstrap token – used to join worker nodes to the cluster
  • Starts any add-ons – the most common add-ons are the DNS pod and the master’s kube-proxy
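
And here's that untaint command. Run it on the master only if you want regular workloads scheduled there, such as in a single node or lab cluster; leave the taint in place for production:

demo@k8s-master1:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
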
If you see this, you’re good to go! Keep that join command handy. We’ll need it in a second.
 

Your Kubernetes master has initialized successfully!

…output omitted

You can now join any number of machines by running the following on each node

as root:

  kubeadm join 172.16.94.15:6443 --token 2a71vm.aat5o5vd0eip9yrx --discovery-token-ca-cert-hash sha256:57b64257181341928e60548314f28aa0d2b15f4d81bf9ae9afdae0cee6baf247

The output from your cluster creation is very important: it gives you the commands needed to access your cluster as a non-root user, the commands needed to create your Pod network, and the command needed to join worker nodes to your cluster (just go ahead and copy all of this into a text file right now). Let's go through each of those together.

Configuring your cluster for access from the master node as a non-privileged user

This will allow you to log into your system with a regular account and administer your cluster.

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
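
With the kubeconfig in place, a quick check confirms you can reach the cluster as your regular user; you should see output along these lines:

demo@k8s-master1:~$ kubectl cluster-info
Kubernetes master is running at https://172.16.94.15:6443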

Create your Pod network

Now that your cluster is created, you can deploy the YAML files for your Pod network. You must do this prior to adding more nodes to your cluster and certainly before starting any Pods on those nodes. We are going to use kubectl apply -f to deploy the Pod network from the YAML files we downloaded earlier.

demo@k8s-master1:~$ kubectl apply -f rbac-kdd.yaml

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

demo@k8s-master1:~$ kubectl apply -f calico.yaml

configmap/calico-config created

service/calico-typha created

deployment.apps/calico-typha created

poddisruptionbudget.policy/calico-typha created

daemonset.extensions/calico-node created

serviceaccount/calico-node created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created


Before moving forward, check for the creation of the Calico pods and also the DNS pods; once these are created and their STATUS is Running, you can proceed. In this output you can also see the other components of your Kubernetes cluster: the containers running etcd, the API Server, the Controller Manager, kube-proxy, and the Scheduler.

demo@k8s-master1:~$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE

kube-system   calico-node-6ll9j                     2/2     Running   0          2m5s

kube-system   coredns-576cbf47c7-8dgzl              1/1     Running   0          9m59s

kube-system   coredns-576cbf47c7-cc9x2              1/1     Running   0          9m59s

kube-system   etcd-k8s-master1                      1/1     Running   0          8m58s

kube-system   kube-apiserver-k8s-master1            1/1     Running   0          9m16s

kube-system   kube-controller-manager-k8s-master1   1/1     Running   0          9m16s

kube-system   kube-proxy-8z9t7                      1/1     Running   0          9m59s

kube-system   kube-scheduler-k8s-master1            1/1     Running   0          8m55s
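
If the Calico or DNS pods are still starting, you can watch them converge rather than re-running the command (Ctrl+C to stop watching):

demo@k8s-master1:~$ kubectl get pods --all-namespaces --watch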

 

Joining worker nodes to your cluster

Now, on each of the worker nodes, let's use kubeadm join to join them to the cluster. Go back to the output of kubeadm init and copy the join command from that output, being sure to put sudo on the front before you run it on each node. The process below is called a TLS bootstrap; it securely joins the node to the cluster over TLS and authenticates the host with server certificates.
 

demo@k8s-node1:~$ sudo kubeadm join 172.16.94.15:6443 --token 2a71vm.aat5o5vd0eip9yrx --discovery-token-ca-cert-hash sha256:57b64257181341928e60548314f28aa0d2b15f4d81bf9ae9afdae0cee6baf247

[preflight] running pre-flight checks

[discovery] Trying to connect to API Server "172.16.94.15:6443"

[discovery] Created cluster-info discovery client, requesting info from "https://172.16.94.15:6443"

[discovery] Requesting info from "https://172.16.94.15:6443" again to validate TLS against the pinned public key

[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.94.15:6443"

[discovery] Successfully established connection with API Server "172.16.94.15:6443"

[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace

[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[preflight] Activating the kubelet service

[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...

[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

 

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.
 

Run 'kubectl get nodes' on the master to see this node join the cluster.

 
If you didn't keep the token or the CA cert hash from the earlier steps, go back to the master and run these commands. Also note that the join token is only valid for 24 hours.
 
To get the current join token
 

demo@k8s-master1:~$ kubeadm token list
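
If the token has expired, you can mint a fresh one; kubeadm will even print the complete join command for you:

demo@k8s-master1:~$ sudo kubeadm token create --print-join-command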


To get the CA Cert Hash
 

demo@k8s-master1:~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

 
Back on the master, check on the status of your nodes joining the cluster. These nodes are currently NotReady; behind the scenes they're pulling the Calico container and setting up the Pod network.
 

demo@k8s-master1:~$ kubectl get nodes

NAME          STATUS     ROLES    AGE    VERSION

k8s-master1   Ready      master   14m    v1.12.2

k8s-node1     NotReady   <none>   100s   v1.12.2

k8s-node2     NotReady   <none>   96s    v1.12.2

k8s-node3     NotReady   <none>   94s    v1.12.2
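
Rather than polling, you can watch the nodes flip to Ready as the Calico containers come up on each one:

demo@k8s-master1:~$ kubectl get nodes --watch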

 
And here we are with a fully functional Kubernetes cluster! All nodes joined and Ready.

demo@k8s-master1:~$ kubectl get nodes

NAME          STATUS   ROLES    AGE     VERSION

k8s-master1   Ready    master   15m     v1.12.2

k8s-node1     Ready    <none>   2m34s   v1.12.2

k8s-node2     Ready    <none>   2m30s   v1.12.2

k8s-node3     Ready    <none>   2m28s   v1.12.2


In our next post, we’ll deploy a SQL Server Pod into our freshly created Kubernetes cluster.
 
Please feel free to contact me with any questions regarding Linux or other SQL Server related issues at: aen@centinosystems.com
