
Sunday, May 15, 2022

Controlling an EC2 Instance from the CLI: AWS Automations

Once you have learned a few AWS CLI commands, you can plan to automate the process with a simple bash shell script that wraps a set of aws commands in basic scripting controls.

How to automate AWS EC2 stop/start using aws cli?

The objective of this post is to develop a simple control script that uses the AWS CLI commands start-instances, describe-instances, and stop-instances, with bash scripting logic tying them together. First we experiment with each aws ec2 command, then we collect the successful commands into an automation script. Let's explore.

How to automate AWS EC2 instance using aws-cli


How to start an AWS EC2 instance: the "start-instances" command


To start an AWS EC2 instance you pass the instance ID as an argument. Here is an example:
aws ec2 start-instances --instance-ids i-instancenumber
Replace i-instancenumber with your own instance ID.
Execution output looks like this:

aws ec2 start-instances execution initially in pending state


aws ec2 stop-instances command

To stop an AWS EC2 instance you likewise pass the instance ID as an argument. Here is an example:
aws ec2 stop-instances --instance-ids i-instancenumber
Replace i-instancenumber with your own instance ID.

Execution output looks like this:
aws ec2 stop-instances


Describe instance status


The describe-instance-status subcommand shows the InstanceState, the InstanceStatus, and also the SystemStatus. We can pick any of these as the automation needs.
aws ec2 describe-instance-status --instance-ids i-instancenumber
Replace i-instancenumber with your own instance ID.
To describe the instance status specific to InstanceState, the Name value can be extracted; the trick here is to set --output to text format:

aws ec2 describe-instance-status --instance-ids i-instancenumber \
 --query 'InstanceStatuses[*].InstanceState.Name' --output text
This output is nicely trimmed for testing whether an instance is in the 'running', 'stopped', or 'pending' state. Using this we can decide how to proceed: if it is running we can move on to the stop-instance logic; if the status comes back empty ([]) we can proceed to the start-instance logic (see the sketch after the execution output below).

The execution output looks as follows:

aws ec2 describe-instance-status execution output
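
Putting the state check to work, a minimal branching sketch in bash could look like this (i-instancenumber is a placeholder; note that describe-instance-status returns an empty result for stopped instances unless --include-all-instances is passed):

  #!/bin/bash
  # Sketch: branch on the current instance state
  INSTANCE_ID=i-instancenumber
  STATE=$(aws ec2 describe-instance-status --instance-ids $INSTANCE_ID \
    --query 'InstanceStatuses[*].InstanceState.Name' --output text)
  if [ "$STATE" = "running" ]; then
    echo "instance is running - stop logic goes here"
  else
    echo "instance is not running - start logic goes here"
  fi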

How to get the EC2 Instance Public IP address?

The describe-instances subcommand helps us retrieve all details of our instances, so we use it to pick the private or public IP address of a given EC2 instance. You need to provide the instance ID to fetch the EC2 instance's public IP address.

aws ec2 describe-instances --instance-ids i-instancenumber \
 --query "Reservations[*].Instances[*].PublicIpAddress" --output text
This returns the public IP address of the given EC2 instance.

Using these commands we can prepare a nice shell script that automates starting and stopping the instance and checking its status.
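
As a sketch (assuming the placeholder instance ID from the examples above and an AWS CLI configured with suitable credentials), such a script could look like this:

  #!/bin/bash
  # ec2-control.sh - toggle an EC2 instance between running and stopped
  INSTANCE_ID=i-instancenumber

  # Empty output here means the instance is not running
  STATE=$(aws ec2 describe-instance-status --instance-ids $INSTANCE_ID \
    --query 'InstanceStatuses[*].InstanceState.Name' --output text)

  if [ "$STATE" = "running" ]; then
    echo "Instance $INSTANCE_ID is running, stopping it..."
    aws ec2 stop-instances --instance-ids $INSTANCE_ID
  else
    echo "Instance $INSTANCE_ID is not running, starting it..."
    aws ec2 start-instances --instance-ids $INSTANCE_ID
    # Wait until the instance reaches the running state
    aws ec2 wait instance-running --instance-ids $INSTANCE_ID
    INSTANCE_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID \
      --query "Reservations[*].Instances[*].PublicIpAddress" --output text)
    echo "Instance started with public IP: $INSTANCE_IP"
  fi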

Once you get the INSTANCE_IP, that is, the public IP, we can connect with the ssh command as shown below:
  ssh -o "StrictHostKeyChecking=no" -i aws-key.pem centos@$INSTANCE_IP
  
Here the -i option specifies the identity (key) file, and the
-o "StrictHostKeyChecking=no" option tells ssh not to prompt for confirmation of the host's SSH fingerprint. You can see the use of this option by running the command without it.
The output of the running automation script looks like this:

How to modify the security group for a running EC2 instance?
There was a problem when I ran the aws ec2 run-instances command: the instance was created and I could see it was in the Running state, but the SSH connection failed with the error message "Port 22: Connection refused". The solution is to associate the proper security group with the EC2 instance.

We can use an AWS CLI command to apply a security group that already exists on my other EC2 instance, which is in the Running state with normal connectivity. From the AWS Console we can get the security-group ID of the healthy instance (node1) and use it on the problem instance (node2). Two inputs are required here: the node2 instance ID and the node1 security group ID.

 aws ec2 modify-instance-attribute --instance-id i-instancenumber --groups sg-securitygroupid
  
Example screenshot of execution:

 
AWS CLI to modify an attribute of a running EC2 instance
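
Instead of copying the security-group ID from the AWS Console, it can also be fetched from node1 with the CLI. A sketch, assuming placeholder instance IDs for node1 and node2:

  # Read the security group ID attached to the healthy instance (node1)
  SG_ID=$(aws ec2 describe-instances --instance-ids i-node1instancenumber \
    --query "Reservations[*].Instances[*].SecurityGroups[*].GroupId" --output text)
  # Apply it to the problem instance (node2)
  aws ec2 modify-instance-attribute --instance-id i-node2instancenumber --groups $SG_ID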



Wednesday, May 6, 2020

K8s Storage NFS Server on AWS EC2 Instance

Hello DevOps enthusiasts! In this post we explore the options available for Kubernetes storage and volume configuration. In particular, if we have provisioned a Kubernetes cluster in an AWS environment, we need to know all the options for using storage effectively. Continuing the learning sequence on 'Kubernetes Storage', we experiment with creating an NFS server on an AWS EC2 instance and using it as a PersistentVolume. In the later part, we use a PVC to claim the required space from the available PV, which in turn is used inside a Pod by specifying a volume.

Kubernetes Storage: NFS PV, PVC

Assumptions

  • You have AWS Console access to create EC2 instances.
  • Basic awareness of Docker container volumes.
  • Understanding of why persistent storage is needed.

Log in to your AWS Console.
Go to the EC2 Dashboard and click on the 'Launch instance' button.
Step 1: Choose an AMI: select "CentOS 7 (x86_64) - with updates HVM" from the Marketplace and click 'Continue'.

Step 2: Choose an instance type: go with the default (1 vCPU, 2.5 GHz, Intel Xeon Family, 1 GB memory, EBS only), then click 'Next'.

Step 3: Configure instance details: keep the defaults and click 'Next'.

Step 4: Add storage: keep the default EBS volume and click 'Next'.

Step 5: Add Tags: Enter key as 'Name' and Value as 'NFS_Server'


Step 6: Configure Security Group: select an existing security group



Step 7: Review instance launch: click the 'Launch' button



Select an existing key pair or create a new one as per your need.



Now let's log in through a PuTTY terminal as the 'centos' user and switch to the root user with 'sudo -i'. Then install and enable the NFS server, and create the export directory:
  yum install nfs-utils -y
  systemctl enable nfs-server
  systemctl start nfs-server
  mkdir -p /export/volume
  chmod 777 /export/volume
vi /etc/exports
Write the following line:
  /export/volume *(no_root_squash,rw,sync)
Save and quit the vi editor, then run the following command to re-export the share list:
  exportfs -r
  
Confirm the directory by listing it; with mode 777 it appears highlighted in green in ls color output:
  ls -ld /export/
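Optionally, confirm the directory is actually exported (showmount comes with the nfs-utils package installed above):
  showmount -e localhost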
  

The NFS volume creation steps are complete and the share is ready to use.

Kubernetes PersistentVolume and PersistentVolumeClaim

Create an 'nfs-pv.yaml' file as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /export/volume
    server: 172.31.8.247
 
Let's create the PersistentVolume backed by the NFS share on the separate EC2 instance.
  kubectl create -f nfs-pv.yaml
Check that the PV creation succeeded with the kubectl command:
  kubectl get pv
 

Create a PersistentVolumeClaim

Now create a PersistentVolumeClaim in a file named 'my-nfs-claim.yaml':
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
      
Now use the create subcommand:
  kubectl create -f my-nfs-claim.yaml
Let's validate that the PVC was created:
  kubectl get pvc
Now we are all set to run a database deployment inside a pod; let's choose MySQL.
Proceed by creating a manifest file named mysql-deployment.yaml:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: welcome1
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: my-nfs-claim
  
Let's create the MySQL deployment, which includes the pod definition as well.
kubectl create -f mysql-deployment.yaml
# Check the pod list
kubectl get po -o wide -w
>> If the pod is stuck in ContainerCreating, take the pod name from the previous command and check its logs:
kubectl logs wordpress-mysql-xx
Check that the NFS mount point is shared. On the nfs-server EC2 instance:
exportfs -r
exportfs -v

Validation of the NFS mount: using the nfs-server IP, mount it on the master node and on the worker nodes.
mount -t nfs 172.31.8.247:/export/volume /mnt
Example execution:
root@ip-172-31-35-204:/test# mount -t nfs 172.31.8.247:/export/volume /mnt
mount: /mnt: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
The issue is that the NFS client is missing on the master (and worker) nodes, which can be verified with:
ls -l /sbin/mount.nfs
Example check:
root@ip-172-31-35-204:/test# ls -l /sbin/mount.nfs
ls: cannot access '/sbin/mount.nfs': No such file or directory
This confirms that nfs-common is not installed on the master node; the same package is required on the worker nodes as well. Fix: the client needs nfs-common:
sudo apt-get install nfs-common -y
Now check that the mount command works as expected. After confirming the mount works, unmount it:
umount /mnt
Check the pod list; the pod STATUS should now be 'Running'.
kubectl get po
SUCCESSFUL!!! As the volume storage is mounted we can proceed. Let's validate the NFS volume: enter the pod and see that the volume is mounted as per the deployment manifest.
kubectl exec -it wordpress-mysql-newpod-xy -- /bin/bash

root@wordpress-mysql-886ff5dfc-qxmvh:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.48 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Create a test database:
create database test_vol;


mysql> create database test_vol;
Query OK, 1 row affected (0.01 sec)

Confirm with:
show databases;
Test the persistent volume by deleting the pod:
kubectl get po
kubectl delete po wordpress-mysql-886ff5dfc-qxmvh
kubectl get po
As self-healing is applied, a new pod will be created; inside the new pod we expect to see the same volume:
  kubectl exec -it wordpress-mysql-886ff5dfc-tn284 -- /bin/bash
Now inside the container, connect to MySQL:
mysql -u root -p
Enter the password when prompted.
You will surely see that the "test_vol" database is accessible and available from this newly created pod:
show databases;
This proves that the volume used by a pod is reusable even when the pod is destroyed and recreated.
Go and check the NFS server's exported path /export/volume; the databases created by MySQL will be visible there.
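For example, on the NFS server (a quick check; the exact listing depends on the databases you created):

  # MySQL creates one directory per database under its data directory,
  # which is backed by the NFS export here
  ls -l /export/volume
  # expect to see a 'test_vol' directory among the MySQL data files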

Reference: https://unix.stackexchange.com/questions/65376/nfs-does-not-work-mount-wrong-fs-type-bad-option-bad-superblockx
