Wednesday, May 6, 2020

K8s Storage NFS Server on AWS EC2 Instance

Hello DevOps enthusiasts! In this post we would like to explore the options available for Kubernetes storage and volume configuration. Especially if you have provisioned a Kubernetes cluster in an AWS environment, you need to know all the options for using storage effectively. Continuing our 'Kubernetes Storage' series, we will experiment with creating an NFS server on an AWS EC2 instance and using it as a PersistentVolume. In the later part, we will use a PVC to claim the required space from the available PV, which in turn is used inside a Pod by specifying a volume.

Kubernetes Storage: NFS PV, PVC

Assumptions

  • Assuming that you have AWS Console access to create EC2 instances.
  • Basic awareness of Docker container volumes.
  • Understanding of the need for data persistence.

Log in to your AWS console.
Go to the EC2 Dashboard and click the 'Launch Instance' button.
Step 1: Choose an AMI: select "CentOS 7 (x86_64) - with updates HVM" from the AWS Marketplace and continue.

Step 2: Choose an instance type: the default (1 vCPU, 2.5 GHz Intel Xeon family, 1 GB memory) is enough for this exercise; click 'Next'.

Step 3: Configure instance details: accept the defaults and click 'Next'.

Step 4: Add storage: go with the default EBS volume and click 'Next'.


Step 5: Add Tags: Enter key as 'Name' and Value as 'NFS_Server'


Step 6: Configure Security Group: select an existing security group (ensure it allows NFS traffic on port 2049 from the cluster nodes).

Step 7: Review instance launch: click the 'Launch' button.

Select an existing key pair or create a new key pair as per your need, then launch.

Now let's log in with a PuTTY terminal as the 'centos' user and switch to the root user using 'sudo -i':
yum install nfs-utils -y
systemctl enable nfs-server
systemctl start nfs-server
mkdir -p /export/volume
chmod 777 /export/volume
vi /etc/exports
Add the following line:
  /export/volume *(no_root_squash,rw,sync)
Save and quit the vi editor, then run the following command to re-export the share:
  exportfs -r
  
Confirm the directory permissions by listing it (with mode 777 it shows up green in most terminals):
  ls -ld /export/
  
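For reference, the export line breaks down as follows (a sketch; '*' admits any client, so in a real setup restrict it to your VPC CIDR):

```
# /etc/exports format: <directory> <allowed-clients>(<options>)
#   rw             - clients may read and write
#   sync           - reply to writes only after data reaches disk (safer, slower)
#   no_root_squash - do not remap client root to 'nobody'; needed because the
#                    MySQL container writes to the share as root
/export/volume *(no_root_squash,rw,sync)
```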

Here the NFS volume creation steps are completed and the share is ready to use.

Kubernetes PersistentVolume, PersistentVolumeClaim

Create an 'nfs-pv.yaml' file as:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /export/volume
    server: 172.31.8.247
 
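Before creating the PV, it is worth confirming that the export is actually visible from a cluster node. A small sketch using showmount (shipped with nfs-utils/nfs-common; the IP and path are the ones used in this post):

```shell
# Succeeds and prints a message if the NFS server exports the given path.
check_export() {
  server="$1"; path="$2"
  # 'showmount -e' lists a server's export table, one "<path> <clients>" per line
  if showmount -e "$server" | grep -q "^$path "; then
    echo "export $path visible on $server"
  else
    echo "export $path NOT visible on $server" >&2
    return 1
  fi
}
```

Usage: `check_export 172.31.8.247 /export/volume`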
Let's create the PersistentVolume backed by the NFS export on the separate EC2 instance (the 'server' field is the NFS server's private IP):
  kubectl create -f nfs-pv.yaml
Check that the PV creation was successful with kubectl:
  kubectl get pv
 

Create a PersistentVolumeClaim

Now create a PersistentVolumeClaim in a file named 'my-nfs-claim.yaml' with:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
      
Now use the create subcommand:
  kubectl create -f my-nfs-claim.yaml
Let's validate that the PVC was created:
  kubectl get pvc
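Binding can take a moment; here is a small helper (a sketch using only standard kubectl flags) that polls until the claim reports Bound:

```shell
# Poll a PVC until its phase is Bound, or give up after about a minute.
wait_for_bound() {
  claim="$1"
  for _ in $(seq 1 30); do
    # jsonpath extracts just the phase field (Pending/Bound/Lost)
    phase=$(kubectl get pvc "$claim" -o jsonpath='{.status.phase}' 2>/dev/null)
    if [ "$phase" = "Bound" ]; then
      echo "PVC $claim is Bound"
      return 0
    fi
    sleep 2
  done
  echo "PVC $claim did not bind in time" >&2
  return 1
}
```

Usage: `wait_for_bound my-nfs-claim`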
Now all is set to run a database deployment inside a pod; let's choose MySQL.
Proceed by creating a manifest file named 'mysql-deployment.yaml':
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: welcome1
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: my-nfs-claim
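A plain-text MYSQL_ROOT_PASSWORD is fine for a demo, but for anything more, move it into a Secret and reference it from the container (a sketch; the Secret name here is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-pass        # illustrative name
type: Opaque
stringData:
  password: welcome1
# In the container spec, replace "value: welcome1" with:
#   valueFrom:
#     secretKeyRef:
#       name: mysql-root-pass
#       key: password
```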
  
Let's create the MySQL deployment, which includes the pod definition as well.
kubectl create -f mysql-deployment.yaml
# Check the pod list
kubectl get po -o wide -w
>> If the pod is stuck in 'ContainerCreating', take the pod name from the previous command and check its logs:
kubectl logs wordpress-mysql-xx
On the nfs-server EC2 instance, check that the NFS mount point is shared:
exportfs -r
exportfs -v

Validation of the NFS mount: using the nfs-server IP, mount it on the master node and worker nodes.
mount -t nfs 172.31.8.247:/export/volume /mnt
Example execution:
root@ip-172-31-35-204:/test
# mount -t nfs 172.31.8.247:/export/volume /mnt
mount: /mnt: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.type helper program.
The issue is a missing NFS client on the master and worker nodes; verify with:
ls -l /sbin/mount.nfs
Example check it...
root@ip-172-31-35-204:/test# ls -l /sbin/mount.nfs
ls: cannot access '/sbin/mount.nfs': No such file or directory
This confirms that nfs-common is not installed on the master node; the same package is required on the worker nodes as well. Fix: the client needs nfs-common:
sudo apt-get install nfs-common -y
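Since every master and worker node needs the client, a quick per-node check can save debugging time (a sketch; the package name differs between Debian/Ubuntu and CentOS/RHEL):

```shell
# 'mount -t nfs' delegates to the /sbin/mount.nfs helper, which is shipped in
# nfs-common (Debian/Ubuntu) or nfs-utils (CentOS/RHEL).
has_nfs_client() {
  [ -x /sbin/mount.nfs ]
}

if has_nfs_client; then
  echo "NFS client helper present"
else
  echo "NFS client helper missing - install nfs-common or nfs-utils"
fi
```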
Now check that the mount command works as expected. After confirming that the mount works, unmount it:
umount /mnt
Check the pod list; the pod STATUS should now be 'Running'.
kubectl get po
SUCCESSFUL!!! As the volume storage is mounted, we can proceed. Let's validate the NFS volume: enter the pod and see that the volume is mounted as per the deployment manifest.
kubectl exec -it wordpress-mysql-886ff5dfc-qxmvh -- /bin/bash

root@wordpress-mysql-886ff5dfc-qxmvh:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.48 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Create a test database:

mysql> create database test_vol;
Query OK, 1 row affected (0.01 sec)

show databases;
Test the persistent volume by deleting the pod:
kubectl get po
kubectl delete po wordpress-mysql-886ff5dfc-qxmvh
kubectl get po
As self-healing is applied, a new pod will be created; inside the new pod we expect to see the volume data:
  kubectl exec -it wordpress-mysql-886ff5dfc-tn284 -- /bin/bash
Inside the container now, connect to MySQL:
mysql -u root -p
Enter the password.
Surely you can see that the "test_vol" database is accessible and available from this newly created pod:
show databases;
Hence it is proved that the volume used by a pod remains reusable even when the pod is destroyed and recreated.
Go and check the NFS server's exported path /export/volume; the database files created by MySQL are visible there.

Reference: https://unix.stackexchange.com/questions/65376/nfs-does-not-work-mount-wrong-fs-type-bad-option-bad-superblockx

