
Saturday, October 1, 2022

K8s Storage Volumes Part 4 - Dynamic Provisioning

Hello guys! I am back with new learning in the Kubernetes Storage Volumes series of posts. We have already seen how to create a PV, claim it with a PVC, and then use that PVC in the Pod manifest under the volumes section. In this post we will explore the options available for dynamic provisioning with StorageClass.

StorageClass - PersistentVolumeClaim used in Pod



I wanted to understand Kubernetes StorageClasses in depth, so I visited many blog posts covering the different cloud platforms people work with. Initially I went through the Mumshadmohammad session and practice lab, and tried it out on the GCP platform.
Previous Storage related posts

Basically, Kubernetes maintains two types of StorageClasses:
  1. Default storage class (standard storage class)
  2. User-defined storage class (an additional class created with kubectl)
An additional storage class depends on the public cloud platform's storage offering. There are different provisioners:

  • On Azure - kubernetes.io/azure-disk
  • On AWS - kubernetes.io/aws-ebs
In this post, let's explore the AWS EBS option.

# storageclass-aws.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-ebs-storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
The key to dynamic provisioning is the storage class; thanks to Kubernetes for this great feature. The storage class manifest centers on the provisioner, which depends on the cloud platform; each platform offers different storage abilities, where access speed and size both matter.
kubernetes.io/gce-pd is the provisioner provided by Google. In its parameters we can define pd-standard, zone, and reclaim policy. A PV created through a storage class inherits the class's reclaim policy.
The Kubernetes cluster administrator sets up one or more storage provisioners, from which the admin creates one or more storage classes. The user/developer then creates a claim (PVC) that references a storage class by name, and Kubernetes automatically creates a PV linked to the actual storage. The PV is provisioned dynamically based on the requested capacity, access mode, reclaim policy, and the provisioner specified in the matching storage class. Finally, the user mounts that claim as a volume.
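For example, the aws-ebs-storageclass defined above could be consumed by a claim like this (a sketch; the claim name and requested size are illustrative):

```yaml
# File: pvc-aws.yaml (illustrative sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aws-ebs-claim        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi           # illustrative size
  storageClassName: aws-ebs-storageclass
```

Once this PVC is created, the aws-ebs provisioner would dynamically create a gp2 EBS volume and bind a matching PV to the claim.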
 
Specific to GCP users:
gcloud compute disks create pd-disk \
  --size=1GB --zone=us-east1-a
You can use either a PV or a storage class; just for your reference, here is the PV manifest file:
#File: pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcp-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 500M
  gcePersistentDisk:
    pdName: pd-disk
    fsType: ext4
In the PV definition you can specify the size and the filesystem type; both are in your control. We are going to run this on GKE:
gcloud container clusters get-credentials pd-cluster 
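A note on units: Kubernetes quantities distinguish the decimal suffix M (10^6 bytes) from the binary suffix Mi (2^20 bytes), so the 500M above is slightly less than the 500Mi the PVC requests later. A quick sketch of the difference (illustrative helper, not a Kubernetes API):

```python
# Illustrative only: decimal (M) vs binary (Mi) quantity suffixes.
def to_bytes(quantity: str) -> int:
    """Convert a quantity like '500M' or '500Mi' to bytes (subset of suffixes)."""
    suffixes = {"M": 10**6, "Mi": 2**20, "G": 10**9, "Gi": 2**30}
    # Match the longest suffix first so 'Mi' wins over 'M'.
    for suffix in sorted(suffixes, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * suffixes[suffix]
    return int(quantity)

print(to_bytes("500M"))   # 500000000
print(to_bytes("500Mi"))  # 524288000
```

So a 500M PV technically satisfies less capacity than a 500Mi request; keeping both sides in the same unit avoids surprises.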
Define the storage class with the following YAML:
# File: sc-def.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-a
reclaimPolicy: Delete

Now create it and validate its creation:
kubectl create -f sc-def.yaml
kubectl get sc 
Now let's create the claim (PVC) as follows:

# File: pvc-def.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: "google-sc"
Here the PVC uses the storage class created in the step above.
kubectl create -f pvc-def.yaml
kubectl get pv,pvc 
Now we are all set to use the storage in a Deployment Pod.
# File: mysql-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-db
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-labels: mysql-pods
  template:
    metadata:
      labels:
        pod-labels: mysql-pods
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: myclaim
Create the MySQL database Deployment:
kubectl create -f mysql-deploy.yaml
kubectl get deploy,po
To get into the MySQL DB we need shell access inside the pod (replace mysql-podid with your actual pod name):
kubectl exec -it mysql-podid -- /bin/bash
Inside the container:
mysql
create database clients;
create table clients.project(type varchar(15));

insert into clients.project values ('evcars-stats');
insert into clients.project values ('electric-cars');
insert into clients.project values ('jagwar-cars');
Check the content of the database table:
select * from clients.project; 
Exit from the pod shell and now delete the pod; since it belongs to a Deployment, a new pod will replace it automatically. Get inside the new pod's shell and check the database table content again. If everything looks good, our test case is successful. Congratulations, you have learnt how to use Kubernetes dynamic provisioning!
Clean up the storage objects using the kubectl delete command.

The sequence goes like this (using the names from this post):
1. Delete the Deployment: kubectl delete deploy mysql
2. Delete the PVC: kubectl delete pvc myclaim
3. Delete the StorageClass: kubectl delete sc google-sc


Monday, July 12, 2021

Kubernetes Storage Volumes Part 2 - HostPath

 Hello DevOps Guys!!

This post is about the Kubernetes volume type hostPath. In it, I've tried multiple options for associating hostPath volumes with Pods.

Volume type - hostPath 

  • it persists data to a specific file or directory on the host machine's file-system
  • Pods running on the same node can use the same path in their volumes
  • a hostPath volume is not deleted when the Pod crashes or is brought down intentionally
  • the specialty of the hostPath volume is retention: if a new Pod is started as a replacement, the files in the hostPath volume are reused and re-attached to the new Pod
Compared with emptyDir: if the pod dies, an emptyDir volume is reclaimed by the Kubernetes control plane, whereas hostPath data remains on the host path.

Pre-requisites

  • Docker Engine installed  
  • Kubernetes Cluster Up and Running (You can do a test on MiniKube as well)
  • Enough disk space to define in the PV manifestation

In this post we will do two experiments
  1. Bare pod using hostPath Volume
  2. Pod Deployment using hostPath Volume (PV, PVC)


# Manifestation of HostPath Volume type
# File: barepod-vol.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostpath
spec:
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - mountPath: /test-data
      name: test-vol
  volumes:
  - name: test-vol
    hostPath:
      path: /vagrant/data
Create the bare Pod from the manifest:
kubectl create -f barepod-vol.yaml
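For the second experiment promised above (a Deployment using hostPath via PV and PVC), a minimal sketch could look like this (the names, path, and size are illustrative assumptions):

```yaml
# Illustrative hostPath PV + PVC pair (experiment 2 sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv        # hypothetical name
spec:
  capacity:
    storage: 1Gi           # illustrative size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /vagrant/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc       # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A Deployment would then reference hostpath-pvc under its volumes section with persistentVolumeClaim, exactly as in the bare-pod example but indirected through the claim.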

Thursday, May 7, 2020

K8s Storage Volumes Part 1 - EmptyDir

Hello, Dear DevOps enthusiasts, In this post, we are going to explore the emptyDir Volume, which is going to work as local data share between containers in a Pod.

I had read the book titled 'Kubernetes in Action'; from that book I wanted to understand Persistent Volumes and Persistent Volume Claims in detail. We will run the following example, which uses the emptyDir volume type.

Every new learning is like a game! If you take each trouble as a game level, it becomes a wonderful game: once you reach the desired state, you've won. Why wait? Let's jump into this game.

Kubernetes emptyDir Volume

Assumptions

  • Docker installed
  • Kubernetes cluster installed and configured
  • AWS access to EC2 instances




We need to create a Tomcat container and a Logstash container in a Kubernetes pod. As shown in the diagram below, they will share the log file using a Kubernetes emptyDir volume.



Tomcat and Logstash can communicate over the network via localhost, and they share the filesystem through the volume.
Creating the YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
        run: tomcat
  template:
    metadata:
      labels:
        run: tomcat
    spec:
      containers:
      - image: tomcat
        name: tomcat
        ports:
          - containerPort: 8080
        env:
        - name: UMASK
          value: "0022"
        volumeMounts:
          - mountPath: /usr/local/tomcat/logs
            name: tomcat-log
      - image: docker.elastic.co/logstash/logstash:7.4.2
        name: logstash
        args: ["-e input { file { path => \"/mnt/localhost_access_log.*\" } } output { stdout { codec => rubydebug } elasticsearch { hosts => [\"http://elasticsearch-svc.default.svc.cluster.local:9200\"] } }"]
        volumeMounts:
          - mountPath: /mnt
            name: tomcat-log
      volumes:
        - name: tomcat-log
          emptyDir: {}
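Conceptually, emptyDir is just a scratch directory that both containers mount. The sharing pattern can be sketched locally (plain Python, illustrative only; the paths stand in for the two mount points):

```python
import os
import tempfile

# Illustrative sketch: two "containers" (writer/reader) sharing one scratch
# directory, like tomcat writing logs and logstash tailing them from the
# shared emptyDir volume.
shared_dir = tempfile.mkdtemp()  # stands in for the emptyDir volume

# "tomcat" writes an access log into its mount (/usr/local/tomcat/logs)
log_path = os.path.join(shared_dir, "localhost_access_log.txt")
with open(log_path, "w") as f:
    f.write('127.0.0.1 - - "GET / HTTP/1.1" 200\n')

# "logstash" reads the very same file through its own mount (/mnt)
with open(log_path) as f:
    print(f.read().strip())
```

Both processes see the same bytes because they address the same directory; in the pod, the kubelet provides that directory and wires it into each container at its declared mountPath.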

Now, create the tomcat and logstash pod.
Syntax :
kubectl create -f tomcat-logstash.yaml
Then check that the tomcat and logstash containers are ready.
Syntax :
kubectl get pods


Troubleshooting levels:
We faced two game-changer levels. When we typed out the YAML file content, there was a lot of indentation, which matters a lot; if you miss something, it throws an error with line-number hints. The most interesting part is that when you define the two containers one after the other, the 'spec' section must be handled properly.

Level 2 was another adventurous journey: the Logstash image is no longer available on Docker Hub. We tweaked here and there to find which repo provides it. With some googling we reached elastic.co, where the ELK-related images are published, and replaced the Logstash image name. Level 2 cleared, and we entered the next level!


Finally, the last stage of the game: connecting to the Logstash container, we are able to see the /mnt directory containing the logs generated by the Tomcat server. That concludes our experiment successfully.

Syntax : 
kubectl exec -it tomcat-646d5446d4-l5tlv -c logstash -- /bin/bash
ls /mnt

Hence we conclude that we can define a manifest for inter-container storage sharing.

Enjoy this fun-filled learning on Kubernetes! Contact us for support and help on infrastructure build and delivery.
