StorageClass - PersistentVolumeClaim used in Pod
I wanted to learn Kubernetes StorageClasses in depth, so I went through many blog posts covering the different cloud choices people work with. Initially I went through the Mumshadmohammad session and practice lab, and tried it out on the GCP platform.
Previous Storage-related posts:
- Kubernetes Storage - EmptyDir
- Kubernetes HostPath
- Kubernetes NFS Volume as PV
Basically, Kubernetes maintains two types of StorageClasses:
- Default storage class (Standard Storage class)
- User-defined storage class (an additional class created with kubectl)
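For reference, the default StorageClass is marked with a well-known annotation, and kubectl get sc shows it with a (default) suffix next to its name. A minimal sketch (the class name is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard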
The key to dynamic provisioning is the StorageClass; thanks to Kubernetes for this great feature. The StorageClass manifest starts with a provisioner, which depends on the cloud platform; each provider offers different storage capabilities, and access speed and size matter as well. For example:
- On Azure - kubernetes.io/azure-disk
- On AWS - kubernetes.io/aws-ebs
In this post, let's explore the AWS EBS option first:
# storageclass-aws.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-ebs-storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
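A claim against this AWS class could look like the following. This is only a sketch; the claim name and requested size are illustrative:

# File: pvc-aws.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aws-ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: aws-ebs-storageclass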
kubernetes.io/gce-pd is the provisioner provided by Google. In its parameters we define the disk type (pd-standard), the zone, and the reclaim policy. A PV created through a StorageClass inherits the reclaim policy of that class.
The Kubernetes cluster administrator sets up one or more storage provisioners and creates one or more StorageClasses for them. A user/developer then creates a claim (PVC) whose storageClassName field references one of those classes, and Kubernetes automatically creates a PV linked to the actual storage. The volume is provisioned dynamically based on the requested capacity, access mode, reclaim policy, and the provisioner specified in the PVC and the matching StorageClass. Finally, the user consumes that claim as a volume in a Pod.
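To make that last step concrete, here is a minimal sketch of a Pod consuming a claim as a volume (the names are illustrative; the GCP walk-through below builds the same thing with a Deployment):

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myclaim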
Specific to GCP users:
gcloud beta compute disks create \
    --size 1GB --region us-east1 pd-disk

You can use either a PV or a StorageClass; just for your reference, here is the PV manifest file:
# File: pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcp-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 500M
  gcePersistentDisk:
    pdName: pd-disk
    fsType: ext4

In the PV definition you can specify the exact size and the filesystem type, so both remain in your control.
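If you go the manual-PV route instead of a StorageClass, a claim can bind statically to the PV above. A sketch, assuming an empty storageClassName so the default class is not used (the claim name is illustrative):

# File: pvc-static.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gcp-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 500M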
We are going to run this on GKE, so first fetch the cluster credentials:

gcloud container clusters get-credentials pd-cluster

Now define the storage class with the following YAML:
# File: sc-def.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-a
reclaimPolicy: Delete
kubectl create -f sc-def.yaml
kubectl get sc

Now let's create the claim (PVC) as follows:
# File: pvc-def.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: "google-sc"

Here the PVC uses the StorageClass created in the step above.
kubectl create -f pvc-def.yaml
kubectl get pv,pvc

Now we are all set to use the storage in a Deployment Pod.
# File: mysql-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql-db
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-labels: mysql-pods
  template:
    metadata:
      labels:
        pod-labels: mysql-pods
    spec:
      containers:
        - name: mysql
          image: mysql:alpine
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "true"
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
              subPath: mysql
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: myclaim

Create the MySQL database pod:
kubectl create -f mysql-deploy.yaml
kubectl get deploy,po

To get into the MySQL database we need shell access into the pod.
kubectl exec -it mysql-podid -- /bin/bash

(Replace mysql-podid with the actual pod name shown by kubectl get po.) Inside the container:
mysql
create database clients;
create table clients.project(type varchar(15));
insert into clients.project values ('evcars-stats');
insert into clients.project values ('electric-cars');
insert into clients.project values ('jagwar-cars');

Check the content of the database table:
select * from clients.project;

Exit from the pod shell, then delete the pod. Since it is managed by a Deployment, a new pod will replace it automatically. Get into the new pod's shell and check the database table content again; if everything is still there, our test case is successful. Congratulations, you have learnt how to use Kubernetes dynamic provisioning!
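The test case described above can be run with commands along these lines (the pod names are illustrative; use the ones shown by kubectl get po):

kubectl delete pod mysql-podid    # the Deployment replaces it with a new pod
kubectl get po                    # note the new pod's name
kubectl exec new-mysql-podid -- mysql -e "select * from clients.project;"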
Clean up the StorageClass-related objects using the kubectl delete command. The sequence goes like this:
1. Delete the Deployment (and its Pod): kubectl delete deploy mysql
2. Delete the PVC: kubectl delete pvc myclaim
3. Delete the StorageClass: kubectl delete sc google-sc
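If you also created the standalone GCE disk earlier, you may want to remove it as well (assuming the disk name and region from the earlier step):

gcloud compute disks delete pd-disk --region us-east1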