A year ago, when I worked on a Kubernetes setup on Ubuntu Linux, that virtualization covered all the steps involved, with everything automated in a Vagrantfile.
[Image: Kubernetes cluster on your desktop, laptop, or MacBook]
In this post, I would like to share the manual steps to build a Kubernetes cluster on CentOS 7. We will be using nodes that already have Docker installed to set up Kubernetes. Bring up the Vagrant boxes the same way we discussed in the earlier post, then proceed with the steps below.
Step 1: Check the System requirements
We have three nodes: master, node1, node2.
On ALL Nodes:
CPU: 2 cores
RAM: 2 GB minimum, 4 GB recommended
If you have limited resources, giving the master node 3 GB and the slave nodes 1.5 GB each is also a workable plan.
Prepare the host mappings for the master and worker nodes. I am using sample names here; change them as per your project needs.
hostnamectl set-hostname master-node   # on the master node
cat << EOF >> /etc/hosts
10.128.0.27 master-node
10.128.0.29 node-1 worker-node-1
10.128.0.30 node-2 worker-node-2
EOF
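The hostnamectl command above is for the master; run the same command on each worker with that node's own name, and add the same /etc/hosts entries on every node so the names resolve everywhere. For example:
hostnamectl set-hostname worker-node-1   # on the first worker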
Set up the firewall rules
# run this on the master box
sudo firewall-cmd --zone=public --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
# firewall settings for the worker boxes
sudo firewall-cmd --zone=public --permanent --add-port={10250,10251}/tcp
# on all boxes
firewall-cmd --reload
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
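The modprobe above lasts only until the next reboot; if you want the module loaded automatically at boot, one way is the systemd modules-load mechanism:
# optional: load br_netfilter automatically on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf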
Step 2: Why do we need to disable swap?
All Kubernetes masters and nodes are expected to have swap disabled; this is what the Kubernetes community recommends for deployments. If swap is not disabled, the kubelet service will not start on the masters and nodes.
# check swap available
free -m
# if swap exists, run the following commands
swapoff -a # a must for gcloud and aws instances
# permanently disable swap in fstab
vi /etc/fstab   # comment out the swap entry
(OR)
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
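Either way, confirm that swap is really gone before moving on:
free -m          # the Swap line should now show 0 total
cat /proc/swaps  # should list no swap devices under the header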
yum update -y
systemctl disable firewalld
systemctl stop firewalld
vi /etc/selinux/config   # set SELINUX=disabled
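If you do not want to wait for the reboot to pick up the SELinux change, you can switch to permissive mode immediately; the sed one-liner below is just an alternative to editing the file by hand:
setenforce 0   # takes effect right away, until the next reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config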
Restart all of the boxes
init 6
Step 3: Install Docker
Now install Docker if you have not installed it yet! The following installs Docker from the default CentOS repository.
yum install docker -y
systemctl status docker #if it is inactive do the following
systemctl enable docker
systemctl start docker
systemctl status docker # make sure it is in the active state
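A quick sanity check of the Docker installation (the cgroup driver line is worth noting, since kubeadm will warn if it does not match the kubelet's):
docker version                 # client and server versions should both be reported
docker info | grep -i cgroup   # shows the cgroup driver in use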
Step 4: Add Kubernetes Repo
This repo configuration works for CentOS boxes in any cloud environment, and the same works on a Vagrant box as well.
vi /etc/yum.repos.d/kubernetes.repo
Enter the following content into the file
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
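To verify that yum can actually reach the new repository:
yum repolist | grep -i kubernetes   # the kubernetes repo should appear in the list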
Step 5: Install kubeadm, kubelet, kubectl and start
Now run the following yum installation commands on every node.
yum install kubeadm -y # this also installs kubectl and kubelet as part of the kubeadm installation
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet # ensure kubelet is in active state
After the installation and starting kubelet, you will see output like the following:
[Image: Kubernetes installation output]
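You can confirm the versions that were installed on each node:
kubeadm version
kubectl version --client
kubelet --version
# note: until kubeadm init/join supplies its configuration, kubelet may keep restarting; that is expected at this stage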
Let's configure the bridge network settings for Kubernetes
vi /etc/sysctl.d/k8s.conf
Enter the following lines
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
After saving the file, run the following command in the shell.
sysctl --system
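To confirm the settings were applied:
sysctl net.bridge.bridge-nf-call-iptables   # should print ... = 1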
On the master node, execute the following command to initialize the Kubernetes cluster:
# NOTE: use your own host IP address here
# the simplest form:
kubeadm init
# alternatively, specify the pod network CIDR and the API server advertise address explicitly
# (the stock flannel manifest applied later expects 10.244.0.0/16; if you keep a different CIDR,
#  update net-conf.json in kube-flannel.yml to match)
kubeadm init --pod-network-cidr=192.148.0.0/16 --apiserver-advertise-address=192.168.33.100
(OR)
#To ignore preflight checks
kubeadm init --pod-network-cidr=192.148.0.0/16 --apiserver-advertise-address=192.168.33.100 --ignore-preflight-errors=Hostname,SystemVerification,NumCPU
On the Worker / Slave nodes:
kubeadm join 192.168.33.100:6443 --token h1ufen.hvs0nr49ua0my7u8 \
--discovery-token-ca-cert-hash sha256:0bc179854b5c759333360737ff53ca2c4246b61823b033ecbac50593a9c334f6
[Image: Kubernetes worker joining]
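The token and hash above are from my run; if the token expires (they are valid for 24 hours by default) or you lose the output, you can regenerate the full join command on the master:
kubeadm token create --print-join-command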
On the master node do the following:
vi /etc/profile   # add the following line at the end
export KUBECONFIG=/etc/kubernetes/admin.conf
Run the following:
source /etc/profile
(OR)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
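Verify that kubectl can now talk to the API server:
kubectl cluster-info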
Now install the flannel pod network add-on so the nodes can become Ready.
[Image: flannel network]
kubectl get nodes # all nodes stay in the NotReady state until a pod network is installed
kubectl get pods --all-namespaces
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods --all-namespaces
kubectl get nodes
[Image: status of the nodes in the Kubernetes cluster after all pods are Running]
Validate with Deployment
Let us validate that the Kubernetes cluster is ready to deploy a web application.
Step 1: Let's take the nginx image and create a deployment on the Kubernetes cluster.
kubectl create deployment mynginx --image=nginx
[Image: first Kubernetes deployment: create deployment]
Now let's see the description of the above 'mynginx' deployment; the screenshot below shows the output.
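kubectl describe deployment mynginx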
[Image: describe Kubernetes deployment]
Scale the 'mynginx' application deployment up to 3 replicas
kubectl scale --replicas=3 deployment/mynginx
[Image: scale deployment on the Kubernetes cluster]
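To confirm the scale-out, check the deployment again:
kubectl get deployment mynginx   # should report 3 desired / 3 available replicas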
List all the pods in the Kubernetes cluster
kubectl get po
kubectl get po -o wide
[Image: list of pods in Kubernetes]
The next step is to create a service from the 'mynginx' deployment.
kubectl create service nodeport mynginx --tcp=8080:80
kubectl get services
[Image: service creation in the Kubernetes cluster]
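The NodePort is picked automatically from the 30000-32767 range, so yours will differ from mine; one way to read it directly is:
kubectl get svc mynginx -o jsonpath='{.spec.ports[0].nodePort}'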
All set; now check it in the browser:
http://192.168.33.110:32286/
Our slave node is running on 192.168.33.110 and the NodePort was exposed as 32286.
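If you do not have a browser handy, a quick check with curl from any machine that can reach the node works just as well (the IP and port are from my run, so replace them with yours):
curl -I http://192.168.33.110:32286/   # nginx should answer with HTTP/1.1 200 OK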
Here I conclude that our Kubernetes cluster is working as expected! Please post your comments or suggestions to make these learnings more useful to many other starters.