Like most tech people, I like playing around with technologies. I am well familiar with Docker, and it was time to try some container orchestration. There are a lot of options out there: Mesosphere Marathon, Docker Swarm, Cattle, Kubernetes, etc. I started playing with Kubernetes, as it is backed by Google and growing in popularity. I wanted to see what the fuss is about.
If you want to play around with Kubernetes, there is a product called minikube. It makes it easy to get started with Kubernetes. However, it is a single node. The tech person that I am, I require a cluster, for reasons! Not wanting to spend any money, I will be using my home KVM server to set up Kubernetes. I want to have 1 master and 2 nodes, using LAN IP addresses 10.0.0.100, 10.0.0.101 and 10.0.0.102. Because I am most familiar with Ubuntu, I first tried running my cluster on Ubuntu 16.04. However, I ran into a lot of issues doing this, so I changed my setup to Fedora 25, which is better supported by Kubernetes. Logical, since Red Hat invests in Kubernetes. On your local machine, check out the kubernetes contrib repository. It contains Ansible scripts to provision your master and nodes with. Ansible is a simple way to provision servers and deploy software onto them.
Once you have checked out the contrib repository, create a file called inventory within ansible/inventory and add the following content:
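A minimal inventory for this setup could look like the sketch below. The group names follow the conventions the contrib repository used at the time; verify them against the example inventory shipped in the repository.

```ini
; ansible/inventory/inventory (illustrative)
; master and etcd on the same VM, two worker nodes
[masters]
10.0.0.100

[etcd]
10.0.0.100

[nodes]
10.0.0.101
10.0.0.102
```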
Change the IP addresses to those of your Fedora instances. Since all my VMs run within my own home network, I want them to only handle the private network. By using internal private network IPs, your Kubernetes nodes become special snowflakes. It makes more sense to use hostnames for provisioning, but in my case IPs suffice.
Change some settings
Change the file ansible/inventory/group_vars/all.yml. This file contains all the specifics for your cluster, for example how Ansible should connect via SSH and which addons you wish to install. My advice would be to keep kube_ui and kube_dash disabled; they are outdated and do not work properly out of the box. I will install them later in this post.
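For reference, the relevant part of all.yml could look roughly like this. Variable names can differ between contrib revisions, so treat this as a sketch and check the comments in the file itself:

```yaml
# ansible/inventory/group_vars/all.yml (excerpt, names may vary per revision)
ansible_ssh_user: root

# addons; keep the UI/dashboard disabled for now, a newer one is installed later
kube_ui: false
dns_setup: true
cluster_logging: true
cluster_monitoring: true
```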
Once you are set up, run the Ansible script ansible/scripts/deploy-cluster.sh. Ansible will do its magic and provision your machines. However, I ran into several issues. The first was a missing python2-dnf library, although this only applies when installing the nodes separately.
The fix is easy: log in to your server and install the missing package.
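Assuming the error is indeed the missing python2-dnf binding (Ansible on Fedora also tends to want the SELinux binding, so I include it here as well), installing them looks like this:

```shell
# run on the affected Fedora VM
sudo dnf install -y python2-dnf libselinux-python
```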
Done, run Ansible again. The following error was interesting: it could not start the flannel daemon due to timeouts. The problem sat in the configuration flannel is started with. For some reason, in the file /etc/sysconfig/flanneld, the FLANNEL_ETCD environment variable was empty. Without it, flanneld could not reach etcd, and therefore could not start. When running Ansible again, your cluster should be configured, yay!
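For reference, after the fix my /etc/sysconfig/flanneld looked roughly like this. The etcd endpoint and key below are assumptions based on my 10.0.0.100 master; adjust them to your own setup:

```ini
# /etc/sysconfig/flanneld (illustrative)
FLANNEL_ETCD="http://10.0.0.100:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
FLANNEL_OPTIONS=""
```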
SSH into your Kubernetes master.
Dashboard
Before installing the Kubernetes dashboard, remove cockpit-ws from each Fedora VM. Its port mapping interferes with the dashboard, and besides, who needs server management in a GUI?
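Removing it is a one-liner:

```shell
# run on every Fedora VM in the cluster
sudo dnf remove -y cockpit-ws
```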
Enable the dashboard, so we can see if it works by clicking! Server management in a GUI, pffff, but container management in a GUI: awesome!
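On the master, the dashboard can be installed from the upstream manifest. The URL below is the one the kubernetes/dashboard project published around this time; treat it as an assumption and check their repository for the current location:

```shell
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```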
Expose the dashboard to the outside world.
Because I do not have any loadbalancer in front of my Kubernetes cluster, I will expose the dashboard based on node IP addresses. This makes your node a special snowflake, which it already was. You might also want to do some security checking if your Kubernetes cluster is exposed to the internet. Check the IP address of the pod:
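The -o wide output includes the pod IP and the node each pod was scheduled on:

```shell
kubectl get pods --namespace=kube-system -o wide
```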
Change kubernetes-dashboard-3095304083-e8di9 to your own pod identifier, as visible in the “get pods” output. My output:
As you can see, my dashboard pod is assigned to node 10.0.0.102, so I will expose the dashboard on that node's external IP. Exposing means that your pod becomes reachable through that IP address.
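One way to do this is kubectl expose with an external IP. The resource type, name and port below are assumptions based on the dashboard manifests of that era (the dashboard then listened on 9090), so adjust them to whatever your manifest created:

```shell
kubectl expose deployment kubernetes-dashboard \
  --namespace=kube-system \
  --port=9090 --target-port=9090 \
  --external-ip=10.0.0.102 \
  --name=kubernetes-dashboard-external
```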
Your dashboard should be visible at http://10.0.0.102:9090. Once logged in, you will see a few pods crashing due to “no credentials”.
Fix your pods
For example skydns:
My first thought was that it actually was a credentials issue; however, the true issue is that the skydns version from the contrib repository is very outdated. You want to update it. On your Kubernetes master, change the file /etc/kubernetes/addons/dns/skydns-rc.yaml.
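The change boils down to bumping the container image tags in the replication controller. The images and tags below are assumptions from 1.5-era addon manifests, so take whatever is current for your cluster version:

```yaml
# skydns-rc.yaml (excerpt); bump the image tags to a recent release
containers:
  - name: etcd
    image: gcr.io/google_containers/etcd-amd64:2.2.1
  - name: kube2sky
    image: gcr.io/google_containers/kube2sky-amd64:1.15
  - name: skydns
    image: gcr.io/google_containers/skydns-amd64:1.0
```

After saving, something like `kubectl delete pods --namespace=kube-system -l k8s-app=kube-dns` removes the old pods (the label selector is an assumption, check your manifest), so the addon comes back with the new images.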
Delete all the existing pods for DNS, and it should automatically reload the DNS addon. The same goes for Kibana logging and Heapster. In /etc/kubernetes/addons/cluster-logging/kibana-controller.yaml, change the container image version from:
In /etc/kubernetes/addons/cluster-monitoring/heapster-controller.yaml, change the container image version from:
Delete the Heapster pods and your Kubernetes cluster should be up and running in no time! Interesting side note: if you have issues where node A cannot reach node B through the flannel cluster IP range, the solution could be restarting the Docker service (or rebooting).
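On Fedora with systemd, that restart is simply:

```shell
# run on the affected node(s)
sudo systemctl restart docker
```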
If you just want to run some containers on your home cluster, don't use Kubernetes :). To be honest, it is quite tedious to set up and prone to errors. This is probably due to setting it up on a local KVM host with a local IP range. For simply running containers, just use docker-compose or HashiCorp Nomad; setting up multi-node Nomad takes about an hour. If you still want to run containers in Kubernetes and a single node is sufficient, use minikube!