Saturday, October 1, 2022

K8s Storage Volumes Part 4 - Dynamic Provisioning

Hello guys! I am back with new learnings in the Kubernetes Storage Volumes series of posts. We have already seen how to create a PV, claim it with a PVC, and then use that PVC in a Pod manifest under the volumes section. In this post we will explore the options available for dynamic provisioning with a StorageClass.

StorageClass - PersistentVolumeClaim used in Pod



I wanted to understand Kubernetes StorageClasses in depth, so I visited many blog posts covering the different cloud platforms people work on. Initially I went through the Mumshadmohammad session and practice lab, and tried it out on the GCP platform.
Previous storage-related posts

Basically, Kubernetes maintains two types of StorageClasses:
  1. Default storage class (the standard storage class)
  2. User-defined storage class (additional classes created with kubectl)
A user-defined storage class depends on the underlying public cloud platform's storage. There are different provisioners:

  • On Azure - kubernetes.io/azure-disk
  • On AWS - kubernetes.io/aws-ebs
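As a sketch, a manifest for the Azure disk provisioner might look like the following (the class name and parameter values here are illustrative assumptions; check the Azure documentation for your cluster):

```
# storageclass-azure.yml (illustrative sketch)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk-storageclass
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS  # assumed value; depends on your subscription
  kind: Managed
```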
In this post, let's explore the AWS EBS option.

# storageclass-aws.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-ebs-storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
The key to dynamic provisioning is the storage class; thanks to Kubernetes for this great feature. The storage class manifest starts with the provisioner, which depends on the cloud platform; each platform offers different storage abilities, where access speed and size both matter.
kubernetes.io/gce-pd is the provisioner provided by Google. Its related parameters let us define the disk type (pd-standard), zone, and reclaim policy. A PV created through a storage class inherits that class's reclaim policy.
The Kubernetes cluster administrator sets up one or more storage provisioners and creates one or more storage classes for them. A user/developer then creates a claim (PVC) that references a storage class by name, and Kubernetes automatically creates a PV linked to the actual storage. The PV is provisioned dynamically based on the requested capacity and access mode, plus the reclaim policy and provisioner specified in the matching storage class. Finally, the user consumes that claim as a volume.
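Since dynamically provisioned PVs inherit the class's reclaim policy, a class that keeps its disks after the claim is deleted could be sketched like this (the class name is illustrative):

```
# sc-retain.yaml (illustrative sketch): PVs born from this class keep their
# backing disk after the PVC is deleted, instead of the default Delete behaviour
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-sc-retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
reclaimPolicy: Retain
```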
 
Specific to GCP users:
gcloud compute disks create pd-disk \
  --size 1GB --zone us-east1-a
You can use either a PV or a storage class. Just for your reference, here is the PV manifest file:
# File: pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcp-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 500M
  gcePersistentDisk:
    pdName: pd-disk
    fsType: ext4
In the PV definition you can specify the exact size and filesystem type, so both are under your control. We are going to run this on GKE:
gcloud container clusters get-credentials pd-cluster 
Define the storage class with the following YAML:
# File: sc-def.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-a 
reclaimPolicy: Delete

Now create it and validate its creation:
kubectl create -f sc-def.yaml
kubectl get sc 
Now let's create the claim (PVC) as follows:

# File: pvc-def.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: "google-sc"
Here the PVC uses the storage class created in the step above.
kubectl create -f pvc-def.yaml
kubectl get pv,pvc 
Now we are all set to use this storage in a Deployment's Pod.
# File: mysql-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-db
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-labels: mysql-pods
  template:
    metadata:
      labels:
        pod-labels: mysql-pods
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: myclaim
Create the MySQL database Deployment:
kubectl create -f mysql-deploy.yaml
kubectl get deploy,po 
To get into the MySQL database we need shell access inside the pod:
kubectl exec -it <mysql-pod-name> -- /bin/bash
Inside the container:
mysql
create database clients;
create table clients.project(type varchar(15));

insert into clients.project values ('evcars-stats');
insert into clients.project values ('electric-cars');
insert into clients.project values ('jagwar-cars');
Check the content of the database table:
select * from clients.project; 
Exit from the pod shell, and now try to delete the pod. Since it is managed by a Deployment, a replacement pod is created automatically. Get inside the new pod's shell and check the database table content again; if the rows are still there, our test case is successful. Congratulations, you have learnt how to use Kubernetes dynamic provisioning!
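The data-survival check above can be sketched as a short command sequence (run against your own cluster; the `pod-labels=mysql-pods` label comes from the Deployment's selector):

```shell
# Delete the running pod; the Deployment controller replaces it automatically
kubectl delete pod -l pod-labels=mysql-pods

# Wait until the replacement pod is Ready
kubectl rollout status deployment/mysql

# Re-check the table inside the new pod; the rows should still be there,
# because /var/lib/mysql lives on the PVC-backed disk, not in the container
kubectl exec -it deploy/mysql -- mysql -e "select * from clients.project;"
```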
Clean up the StorageClass-related objects using the kubectl delete command.

The sequence goes like this:
1. Delete the Deployment (and its pod): kubectl delete deploy mysql
2. Delete the PVC: kubectl delete pvc myclaim
3. Delete the StorageClass: kubectl delete sc google-sc
