
Thursday, May 7, 2020

K8s Storage Volumes part 1 - EmptyDir

Hello, dear DevOps enthusiasts! In this post, we are going to explore the emptyDir volume, which works as a local data share between containers in a Pod.

While reading the book titled 'Kubernetes in Action', I wanted to understand Persistent Volumes and Persistent Volume Claims in detail; as a first step, we will run the following example that uses the emptyDir volume type.

Every new learning is like a game! If you take each trouble as a game level, it becomes a wonderful game, and reaching the desired state means winning the game. Why wait? Let's jump into this game.

Kubernetes emptyDir Volume

Assumptions

  • Docker installed
  • Kubernetes cluster installed and configured
  • AWS access to EC2 instances




We will create a Tomcat container and a Logstash container in the same Kubernetes Pod. As shown in the diagram below, they share the log files using a Kubernetes emptyDir volume.



Tomcat and Logstash could talk to each other over localhost, since containers in a Pod share the network namespace, but in this example they share data through the filesystem instead.
Creating a YAML file :

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
        run: tomcat
  template:
    metadata:
      labels:
        run: tomcat
    spec:
      containers:
      - image: tomcat
        name: tomcat
        ports:
          - containerPort: 8080
        env:
        - name: UMASK
          value: "0022"
        volumeMounts:
          - mountPath: /usr/local/tomcat/logs
            name: tomcat-log
      - image: docker.elastic.co/logstash/logstash:7.4.2
        name: logstash
        args: ["-e input { file { path => \"/mnt/localhost_access_log.*\" } } output { stdout { codec => rubydebug } elasticsearch { hosts => [\"http://elasticsearch-svc.default.svc.cluster.local:9200\"] } }"]
        volumeMounts:
          - mountPath: /mnt
            name: tomcat-log
      volumes:
        - name: tomcat-log
          emptyDir: {}

Output :


Now, create the Pod with the Tomcat and Logstash containers.
Syntax : 
kubectl create -f tomcat-logstash.yaml
Output :
Now, check that the Pod with the tomcat and logstash containers is ready.
Syntax : 
kubectl get pods
Output :


Troubleshooting levels:
We faced two game levels here. While typing the YAML file content, the indentation matters a lot; if you miss something, it throws an error with the line number as a hint. The most interesting part is defining the two containers one after the other, where the 'spec' section and the container list must be handled properly.

Level 2 was another adventurous journey: the Logstash image is no longer available on Docker Hub under its old name. We tweaked here and there to find which repository provides it, and with some googling reached elastic.co, where the ELK-related images are published. After replacing the Logstash image name, the second level was cleared and we entered the next level!
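As a quick check, you can pre-pull the image on a node to confirm the new location works (this is the same image reference used in the manifest above):

docker pull docker.elastic.co/logstash/logstash:7.4.2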


Finally, the last stage of the game: connect to the logstash container and check that the /mnt directory contains the logs generated by the Tomcat server. That confirms our experiment is successful.

Syntax : 
kubectl exec -it tomcat-646d5446d4-l5tlv -c logstash -- /bin/bash
ls /mnt

Output :

Hence we can conclude that a single manifest can define storage shared between containers in the same Pod.
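As an optional extra check, here is a minimal sketch (assuming the default Tomcat access-log valve is enabled and using a placeholder pod name) to generate an access-log entry and confirm that both containers see the same files:

# Forward local port 8080 to the tomcat container and send a request
kubectl port-forward deployment/tomcat 8080:8080 &
curl http://localhost:8080/
# The same access logs should be visible from both containers
kubectl exec -it <tomcat-pod-name> -c tomcat -- ls /usr/local/tomcat/logs
kubectl exec -it <tomcat-pod-name> -c logstash -- ls /mnt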

Enjoy this fun-filled learning on Kubernetes! Contact us for support and help with infrastructure build and delivery.

Thursday, April 30, 2020

Kubernetes clustering in AWS EC2 (Ubuntu 18.04)

In this post, I would like to share the manual steps that worked for me to build a Kubernetes cluster on Ubuntu 18.04 LTS. We will use Docker as the container runtime for Kubernetes.


The three-node cluster that we will be forming in this post will consist of one Master node and two Slave nodes. Therefore, follow the steps described below to install Kubernetes on Ubuntu nodes.


Kubernetes Cluster configured on Ubuntu EC2 instances

AWS setup for Kubernetes

Step 1 : 

Launch three EC2 instances from the AWS console.
The AMI we are choosing here is Ubuntu 18.04 LTS (HVM).
Choose AMI
In this step, you can choose any instance type based on your own requirements. I have taken a general-purpose instance type. Click on "Next: Configure Instance Details".
Instance Types
While configuring the instance details, you can create multiple instances at once by entering the count in the 'Number of instances' field, as you can see in the figure below. Click on "Next: Add Storage".
Instance details


The default storage shown here is the same for all instances and is enough for clustering Kubernetes, so we are not adding any extra storage. You can add storage as per your requirements. Click on "Next: Add Tags".

Storage
A tag is a key with an optional value that you assign to an AWS resource; these are user-defined values, and adding a tag is not mandatory. Click on "Next: Configure Security Group".
Create a Tag
Click on Add Rule and add HTTP and ALL TCP rules. You can also create your own custom security group. Then click on "Review and Launch".
Review everything and click on the "Launch" button at the bottom.

Review
When you click the Launch button, it asks for a key pair by default. The key pair is used to connect to your AWS instances from Git Bash. It is better to use your existing key pair, and do not delete it.
Selecting the Key pair to login to EC2 instance
With this, the launch is done.
Launch successful
Instances have been launched and you can see the instance states in the below image.
Instances
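If you prefer the command line over the console, a roughly equivalent launch with the AWS CLI could look like the sketch below; the AMI ID, key pair name and security group ID are placeholders you would replace with values from your own account and region:

# Launch three Ubuntu 18.04 LTS instances (placeholder IDs)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --count 3 \
  --instance-type t2.medium \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0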
Step 2 :
In this step, access your instances as shown in the image below. Here, in place of the hostname ec2-13-235-134-115.ap-south-1.compute.amazonaws.com, we have used the IPv4 public IP address.


The terminal we are using here is Git Bash. Open three Git Bash terminals and connect to the three nodes; every step that follows must be repeated on each node in the same way.
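For reference, a typical login from Git Bash looks like this (the 'ubuntu' user is the default for Ubuntu AMIs; the key file name is a placeholder for the key pair you selected at launch):

ssh -i my-key-pair.pem ubuntu@ec2-13-235-134-115.ap-south-1.compute.amazonaws.com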


Kubernetes installation on Ubuntu 18.04

Step 1: Prerequisites - install Docker

We need to install the latest version of Docker on all three nodes with the commands below in the terminal.

# Check the hostname and the IP address are sync in the hosts file
cat /etc/hosts
#If not please edit the /etc/hosts file
hostname -i #should show the IP ADDRESS against the hostname that is mapped
#Install Docker (the convenience script is the better option; the Ubuntu package also works)
sudo apt install docker.io
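If you prefer the convenience script mentioned in the comment above, Docker publishes it at get.docker.com; as an alternative to the apt package, it is typically run like this:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh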



To check the Docker version number, run the command below.

docker version 



Step 2: Enable Docker to start automatically

We need to enable Docker on all three nodes so that it starts automatically on the next reboot, by running the following command.

sudo systemctl enable docker
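You can confirm that the service is enabled and running with:

sudo systemctl status docker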

Step 3 : Install curl command

curl is used to transfer data to or from a URL; we need it to download the repository signing key. Run the command below:

sudo apt install curl -y 


Step 4 :

To add the Kubernetes package signing key, run the command below:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -


Step 5 :

To add the Xenial Kubernetes package repository, run the command below:
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
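After adding the repository, refresh the package index before installing from it (apt-add-repository usually does this automatically on Ubuntu 18.04, but running it explicitly does no harm):

sudo apt update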


Step 6 :

To install kubeadm, which bootstraps the cluster nodes, run the command below:
sudo apt install kubeadm -y


Check the kubeadm version after installing it by running the command below.

kubeadm version
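Installing kubeadm from this repository also pulls in kubelet and kubectl as dependencies, which you can verify as well:

kubectl version --client
kubelet --version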

Kubernetes Deployment :

Step 1 :

Before starting the Kubernetes deployment, we first need to check for swap memory, because kubelet does not run when swap is enabled.
Run the command shown here :
cat /etc/fstab
free -m

As you can see, swap is 0, which means there is no swap memory. If you do have swap memory, disable it with the following command:
sudo swapoff -a
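Note that swapoff -a only disables swap until the next reboot; to make it permanent, you would also comment out the swap entry in /etc/fstab, for example with a one-liner like this (adjust to your own fstab layout):

sudo sed -i '/ swap / s/^/#/' /etc/fstab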
Run the following command on the master node only. Save its output, because the join command it prints is required later on the slave nodes.

On local VM try:
kubeadm init  --apiserver-advertise-address=192.168.33.250 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors="all" 
On AWS VM:
kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors="all" 


Run these commands on the master node to set up kubectl access to your cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the Flannel pod network on the cluster. (Note: the stock Flannel manifest assumes the pod CIDR 10.244.0.0/16; if you initialized with --pod-network-cidr=192.168.0.0/16 as above, adjust the network in kube-flannel.yml or pick a matching CIDR during kubeadm init.)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Check the status of the network pods. To do this, run the command below:
kubectl get pods --all-namespaces



Check the status of nodes :
kubectl get nodes
On each slave node, run the join command that was printed by kubeadm init on the master node (your IP address, token and hash will differ):
kubeadm join 192.168.100.6:6443 --token 06tl4c.oqn35jzecidg0r0m --discovery-token-ca-cert-hash  sha256:c40f5fa0aba6ba311efcdb0e8cb637ae0eb8ce27b7a03d47be6d966142f2204cf 
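If you lose this output or the token expires, you can print a fresh join command from the master node at any time:

kubeadm token create --print-join-command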


Back on the master node, verify that the slave nodes have joined:
kubectl get nodes


Happy to see that nodes are joining!! 
