
Friday, August 5, 2022

Switching storage drivers for Docker on Ubuntu


This blog post documents an experiment: how to switch to a different storage driver on an existing Docker installation on Ubuntu.


Prerequisites

  • Ubuntu Linux VM instance on any Cloud or Local VM
  • Basic knowledge of how filesystems work

Step 1: Check which filesystems your OS provides

cat /proc/filesystems
Does your Docker version support the filesystem you plan to use? Check for btrfs:
grep btrfs /proc/filesystems
Step 2: Take a backup of the Docker folder
cp -au /var/lib/docker /var/lib/docker.bk
Create two disk images with the dd command and attach them as loop devices:
  dd if=/dev/zero of=test1 bs=1 count=1 seek=4294967295
  dd if=/dev/zero of=test2 bs=1 count=1 seek=4294969343
Here /dev/zero (the if input) provides an endless stream of zero bytes when read; this is provided by the kernel and does not require allocating memory. The seek value skips to an offset of about 4 GiB before writing a single byte to the output file (of), so the filesystem records a ~4 GiB file without actually allocating the zeroed blocks: the files are sparse, and the dd commands return immediately.
ls -lh
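A quick way to confirm the files are sparse (GNU coreutils assumed): compare the apparent size with the disk blocks actually allocated.
  # apparent size vs. real disk usage of the sparse images
  ls -lh test1 test2     # reports ~4.0G apparent size
  du -h test1 test2      # reports only a few KB actually allocated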
Attach the test1 and test2 files as loop devices using the losetup command.
 losetup -f test1
 losetup -f test2
Check for the loop devices used:
 losetup | grep test
Creating a Btrfs filesystem pool: let us now create the Btrfs filesystem on the loop devices created above.
 mkfs.btrfs -f /dev/loop4 /dev/loop5
 lsblk
Now everything is set for the Docker storage driver switch, so mount the Btrfs filesystem at the /var/lib/docker path.
 
mkdir -p /var/lib/docker #if already exists ignore this
mount -t btrfs /dev/loop4 /var/lib/docker

# Validate the device mappings
df -h
btrfs filesystem mounting


Restore the Docker engine configuration
cp -au /var/lib/docker.bk/* /var/lib/docker/
Next, create the file /etc/docker/daemon.json if it does not exist (vi /etc/docker/daemon.json). By default it contains the storage driver 'overlay2':
{
	"storage-driver": "overlay2"
}
Change overlay2 to the btrfs driver. For the storage-driver change to take effect, restart the Docker daemon.
# content in the file  /etc/docker/daemon.json
{
  "storage-driver": "btrfs"
}
Start the Docker daemon:
systemctl start docker
systemctl status docker
In this experiment, after changing the storage driver from overlay2 to btrfs, everything previously stored under overlay2 becomes inaccessible. Compare with the image list you saw under the overlay2 driver:
docker images
Here you can see the difference when checking the docker images list under btrfs: it is a fresh filesystem, so no images are visible.

Revert from btrfs back to the overlay2 driver

This part of the experiment confirms that all the images saved under overlay2 become accessible again when the storage driver is switched back from btrfs to overlay2.
cd /var/lib/docker/overlay2; ls -lrt
Validate that the storage driver is btrfs with `docker info`, then stop the Docker daemon:
systemctl stop docker
Change back to overlay2 in the daemon.json file (vi /etc/docker/daemon.json), then start the Docker daemon and check the storage driver:
systemctl start docker; systemctl status docker; docker info

#Check docker info and filter out the 'Storage Driver' value
docker info -f 'Storage Driver: {{.Driver}}'
#Check the docker images list, which will show all the previous images
docker images

Tuesday, July 26, 2022

Docker Restart policies

Hello DevOps/DevSecOps team, welcome back for another interesting post on Docker. In this post we will discuss and experiment with Docker restart policies.
 

What are Docker restart policies?

Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker daemon restarts.
 
A Docker container's restart behaviour can be controlled by providing a restart policy when the container is instantiated with the docker run sub-command.

 
Docker container restart policies

Docker restart policies are there to keep containers in the Up status through all possible failures. We can leverage them in multiple ways: for example, if we have a Tomcat/Nginx web server running in a container that must stay up and running even after a crash or bad request, a restart policy flag will keep the server "Up" and running until we stop it manually.
Syntax:
docker run --restart [policy-type] IMAGE [COMMAND] [ARG...]
  
Below are the policy types for running a container with the --restart option:

1. no (default): Do not restart the container automatically when it exits.
2. on-failure[:max-retries]: Restart the container only if it exits due to an error (for example a crash), which manifests as a non-zero exit code (typically codes such as 127, 137, or 143). See the example below.
3. unless-stopped: Restart the container unless it was stopped manually. If stopped manually, the container will not restart even after the Docker daemon restarts.
4. always: Always restart the container in any given situation; however, if it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted.
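A minimal sketch of on-failure with a retry cap (the container name is my own choice): the command exits non-zero, Docker retries up to three times, and the restart count is visible via inspect.
docker run -d --name fail-c1 --restart on-failure:3 busybox sh -c "exit 1"
sleep 10   # give the retries time to happen
docker inspect -f '{{.RestartCount}} {{.State.Status}}' fail-c1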



How to enable restart of a container which is already running
First, ensure the Docker daemon itself restarts on system/VM reboot.
To make the Docker service start on boot on RHEL or Ubuntu Linux:
sudo systemctl enable docker.service
[optional] On Play with Docker you can run the following command to restart the Docker daemon:
kill -9 `pgrep dockerd`; dockerd > /docker.log 2>&1 &

1. Default policy no

What will happen to a running container when the daemon restarts? Does it start by itself? No: its restart policy defaults to 'no', so it won't start.
docker container run --name no-c1 --restart no  -d nginx
docker ps
kill -9 `pgrep dockerd`; dockerd > /docker.log 2>&1 &
docker ps
docker ps -a
Even after the Docker daemon restarts, there is no effect on the container: it stays exited. That is the default behaviour of containers.
always: the name itself tells us Docker will always try to restart the container when the Docker daemon restarts.
# Usecase 1: a container with restart policy using always after the docker daemon restart will restart container
docker container run --name always-c1 --restart always -d nginx
docker ps 
kill -9 `pgrep dockerd`; dockerd > /docker.log 2>&1 &
docker ps
# Usecase 2: When you stop the container and restart the docker daemon
$ docker stop always-c1 
kill -9 `pgrep dockerd`; dockerd > /docker.log 2>&1 &
docker ps
You can set a restart policy on an already-running container with the update command:
docker update --restart=always [container id or container name]
The regular (default) restart policy is no: if such a container dies for some reason, it will not start by itself.
   docker run --name nginxc0 \
 -d nginx
unless-stopped
For this policy we will look at two use cases. First use case: this restart policy works the same as always; with the definition below, the container nginxc1 starts automatically after a regular restart of the Docker engine.
   docker run --name nginxc1 \
 --restart unless-stopped -d nginx
 
  sudo systemctl restart docker

Use case 2 
The second use case: the container is stopped manually; when the Docker engine restart completes, the container is not restarted.

always
   docker run --name nginxc2 \
 --restart always -d nginx
 
  sudo systemctl restart docker

on-failure
   docker run --name nginxc3 \
 --restart on-failure -d nginx
 
  sudo systemctl restart docker

If we understand how this setup works, it can save us in production. The same policies can be used in docker-compose and docker stack YAML files as well, as sketched below.
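A hedged sketch of the equivalent settings in a Compose/stack file (the service and image names are illustrative): restart applies to plain docker-compose runs, while deploy.restart_policy applies to swarm stack deployments and is ignored by classic docker-compose.
version: "3.7"
services:
  web:
    image: nginx
    restart: unless-stopped      # honoured by docker-compose (non-swarm)
    deploy:
      restart_policy:            # honoured by docker stack deploy (swarm)
        condition: on-failure
        max_attempts: 3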

Wednesday, July 20, 2022

Docker Command Tricks & Tips

 

Docker container command Tips & Tricks


My idea here is to use the Unix/Linux 'alias' command to form shortcuts for the most common docker container, network, and volume commands. This trick works in the bash shell.

First, examine the docker container listing with the --format option as follows.

docker container ps -s \
  --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"
docker ps command

To get the logs of any application that runs in a container:
alias dkrlogs='docker logs'
alias dkrlogsf='docker logs -f '

docker logs with alias trick

List of the images
alias dkri='docker image ls'
docker image list alias trick

The container list
alias dkrcs='docker container ls'
docker container list alias trick

Remove a container in 'Exited' status
alias dkrrm='docker rm'
docker rm alias trick

Docker top to see the process IDs inside a container

alias dkrtop='docker top'
docker top alias trick

All the above commands will be helpful in pipelines; containerization makes CI/CD seamless.

We can add the following alias lines to our .bashrc or .bash_profile on any machine where the docker command is available:
alias dkrlogs='docker logs'
alias dkrlogsf='docker logs -f '
alias dkri='docker image ls'
alias dkrcs='docker container ls'
alias dkrrm='docker rm'
alias dkrtop='docker top'
alias cleanall='docker container rm $(docker ps -a -q)'
alias dkrps='docker ps --all --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Status}}"'

How do you know which Docker volume is connected to your container?


Let's create a container named myweb with a Docker volume named web-volume attached.
docker run -d --name myweb --mount source=web-volume,destination=/usr/share/nginx/html nginx
Using the inspect sub-command on the container, the --format string with a JSON template returns a JSON block:
docker container inspect --format '{{json .HostConfig.Mounts}}' myweb |jq

alias knv2c="docker container inspect --format '{{json .HostConfig.Mounts}}'"
knv2c myweb|jq


alias k2c="docker container inspect -f '{{json .HostConfig.Mounts }}'"
docker container inspect with -f option json format


How to get a container IP from Docker Host?

Let's create a network as vybhava-net and add couple of containers to it as shown:
docker network create vybhava-net
docker run --net vybhava-net --name nginx -d nginx
# same way create n1, n2, n3 etc.

docker network inspect vybhava-net -f '{{json .Containers}}'|jq
  
Image here shows you the filtered JSON output for containers which are attached to the network 'vybhava-net'

Docker network inspect filtered for containers



How to get the network block and IPAddress of a given container? 

Getting the IPaddress of a container is as follows: 

Let's create an example container name as urweb.
  
docker run -d --name urweb \
--mount source=zing,destination=/usr/share/nginx/html nginx:alpine
Command execution, and the same command wrapped in an alias (the container name is appended as the last argument):
docker container \
  inspect  -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' urweb

alias getip="docker container inspect  -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}'"
getip urweb 
The following is the execution of the alias for a container on the default network.
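Since plain aliases cannot take an argument in the middle of a command, a small bash function is a cleaner alternative; a sketch (the function name is my own choice):
# put in ~/.bashrc: print the IP address(es) of a given container
getip() {
  docker container inspect \
    -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$1"
}
getip urweb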

How to get the IP address on a user-defined bridge network?


Let's create a user-defined network named mynet.
  
docker network create mynet; docker network ls

#create a mongo container
docker run -d --rm --name mongodb01 --network mynet  \
-e MONGO_INITDB_ROOT_USERNAME=admin  \
-e MONGO_INITDB_ROOT_PASSWORD="vybhavatechnologies" mongo

#Create a mongo-express container 
docker run -it --rm \
    --network mynet \
    --name mongo-express \
    -p 8081:8081 \
    -e ME_CONFIG_OPTIONS_EDITORTHEME="ambiance" \
    -e ME_CONFIG_MONGODB_SERVER="mongodb01" \
    -e ME_CONFIG_MONGODB_ADMINUSERNAME="admin" \
    -e ME_CONFIG_MONGODB_ADMINPASSWORD="vybhavatechnologies" \
    mongo-express
After adding the two containers, mongo and mongo-express, to mynet, we can inspect the network settings mapped to this user-defined bridge network.
	
$ docker inspect -f '{{.Name}}-{{.NetworkSettings.Networks.mynet.IPAddress }}' 96bbda65987a
/mongo-express-172.19.0.3
[node1] (local) root@192.168.0.8 ~
$ docker inspect -f '{{.Name}}-{{.NetworkSettings.Networks.mynet.IPAddress }}' e2ba199e8307
/mongodb01-172.19.0.2
 

Saturday, November 6, 2021

HEALTHCHECK Instructions in Dockerfile

Hello guys, in this post I wish to explore one more interesting instruction, HEALTHCHECK, which can be used in a Dockerfile.

The most common requirement in any real-time project is monitoring: a sidecar container could run in parallel and check a process, an application URL, or container reachability using the ping command, etc.

healthcheck docker container monitoring
Dockerfile instruction HEALTHCHECK


In a Dockerfile we can have a HEALTHCHECK instruction that lets us know whether an application test runs as expected. When the container runs, it reports a status of healthy or unhealthy based on the HEALTHCHECK command's exit code: exit code 0 returns healthy, anything else unhealthy.


HEALTHCHECK [options] CMD [health check command]

Example:

HEALTHCHECK --interval=3s CMD ping -c1 172.17.0.2

Here are the HEALTHCHECK options:

  1.  --interval=DURATION (default 30s)
  2.  --timeout=DURATION (default 30s)
  3.  --start-period=DURATION (default 0s)
  4.  --retries=N (default 3)
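A sketch combining all four options in one instruction (the values are just examples):
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD curl --fail http://localhost/ || exit 1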


Let's jump into experiment mode:
docker run -dt --name main-target busybox sh; docker ps 
docker inspect main-target

More specific

To get only the IP address of a container, use the following format option:
alias dcip="docker inspect \
 -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "
dcip main-target
Note: when defining the alias, an important point: don't miss the space at the end of the definition.
Guys, here is a glimpse of the 'ping' command exit codes:
ping -c1 google.com ; echo $?
ping -c1 shekhar.com ; echo $?
Observe the exit code values: if a website exists (resolves and replies), ping returns 0; if not, non-zero.


Get the main-target container's IP address from the docker inspect output. Now we will create a Dockerfile with the following code:


#File: Dockerfile
FROM busybox 
LABEL AUTHOR Pavan Deverakonda
HEALTHCHECK --interval=5s CMD ping -c 1 172.17.0.2


Note: You could find the IP from previous docker inspect command.


Let's call this image monitor:
docker build -t monitor .
docker run -dt --name monping monitor; docker ps





Observe the STATUS column for corresponding containers.
alias s="docker inspect -f '{{json .State}}' "
s monping |jq


docker inspect output filter with jq for HEALTHCHECK


USECASE 2: HEALTHCHECK for web applications

Let's see the 'curl' command usage in general

The following command returns HTML content, which may span multiple lines:
curl http://devopshunter.blog.com/test
Let's make the output minimal using the -f or --fail option of curl:
# Fail silently: no HTML error page, just a non-zero exit code
curl http://devopshunter.blog.com/test.txt -f
curl command with --fail or -f option


Run a container with a health-check command: a Linux command that checks an HTTP URI using `curl`, which returns HTML content or an HTTP error code depending on the web application's availability.
docker run -dt --name health-con \
 --health-cmd "curl --fail http://localhost" busybox 
Here we have not used any HEALTHCHECK options, so the defaults apply: the health-cmd runs at a 30s interval with 3 retries and a 30s timeout each. That means after about 2 minutes you will see the health status become 'unhealthy', because busybox does not run any web server inside the container (and does not even ship curl).
We can check the health of the container itself or of any other container that is reachable, for example one that shares a network with a container running a web server.
#File: Dockerfile
FROM busybox 
LABEL AUTHOR Pavan Deverakonda
HEALTHCHECK --interval=5s CMD curl --fail http://localhost

Build the monitoring image that contains HEALTHCHECK instruction.
  
docker build -t moncurl .
docker images
For now we will test with the same busybox-based image, expecting an unhealthy status:
docker run -dt --name moncurl-con moncurl sh
# Check the container health 
watch docker ps
  

#cleanup
docker rm -v -f health-con 
Now let's see how the interval option impacts a container's health:
docker run -dt --name health-con  --health-cmd "curl --fail http://localhost" --health-interval=3s busybox
watch docker ps 
My observation: the health check starts when the container starts, tests after every 3s, and retries 3 times; that means after 3 times 3s = 9s you will see the health status change.

USECASE 3: HEALTHCHECK with Interval and Retries options

We can run a container to check the health with the interval and retries options together:

 UNHEALTHY
docker run -dt --name health-con3 --health-cmd "curl -f http://localhost" --health-interval=3s --health-retries=1 busybox 
watch docker ps

HEALTHY
docker run -dt --name health-con3 --health-cmd "curl -f http://localhost" --health-interval=3s --health-retries=1 nginx 
watch docker ps



healthy status


Let's build a Healthcheck image
  
#File: Dockerfile
FROM nginx
LABEL AUTHOR Pavan Deverakonda
HEALTHCHECK --interval=5s --timeout=3s CMD curl --fail http://localhost || exit 1
EXPOSE 80
Now build the image
docker build -t moncurl:2 .
docker images
Create the container from that image:
docker run -dt --name health-con2 moncurl:2
Please comment and share with your friends!

Sunday, October 10, 2021

HAProxy on Docker load balance Swarm Nginx web services

What does HAProxy do?


HAProxy is a free and open-source load balancer that enables DevOps/SRE professionals to distribute TCP-based traffic across many backend servers. It also works as a Layer 7 load balancer for HTTP traffic.

HAProxy runs on Docker

The goal of this post is to learn more about HAProxy and how to configure it with a set of web servers as the backend. It will accept requests over the HTTP protocol, generally using the default port 80 to serve real-time application requirements. HAProxy supports multiple balancing methods; in this experiment we use round-robin.
HAProxy on Docker traffic routing to Nginx web
Nginx web app load balance with Swarm Ingress to HA proxy

This post has two phases: first, prepare the web application for high availability using multiple Docker machines to form a Swarm cluster; then deploy the web application using the 'docker service' command.

Step 1: Create a Swarm cluster on 3 machines using the following commands

docker swarm init --advertise-addr=192.168.0.10

Join the other nodes using the docker swarm join command shown in the output above, then list the nodes:

docker node ls

Deploy web service 

Step 2: Deploy a special Nginx-based image designed to run as a web service on a Swarm cluster

docker service create --name webapp1 --replicas=4 --publish 8090:80 fsoppelsa/swarm-nginx:latest


Now, check the service running on the multiple Swarm nodes 

docker service ps webapp1

Load balancer Configuration

Step 3: Create a new node (node 5) dedicated to HAProxy load balancing.


Step 4: Create the configuration file for HAProxy to load-balance webapp1 running on the Swarm nodes. In this experiment I've modified this haproxy.cfg file a couple of times to get the port binding right and to get the health stats of the backend webapp1 servers.

vi /opt/haproxy/haproxy.cfg
global
    daemon
    log 127.0.0.1 local0 debug
    maxconn 50000

defaults
  log global
  mode http
  timeout client  5000
  timeout server  5000
  timeout connect 5000

listen health
  bind :8000
  http-request return status 200
  option httpchk

listen stats 
  bind :7000
  mode http
  stats uri /

frontend main
  bind *:7780
  default_backend webapp1  
  
backend webapp1
  balance roundrobin
  mode http
  server node1 192.168.0.13:8090 
  server node2 192.168.0.12:8090 
  server node3 192.168.0.11:8090 
  server node4 192.168.0.10:8090    

  

Save the file and we are good to proceed


Running HAProxy on Docker 

The following docker run command runs HAProxy in detached mode, publishes ports per the HAProxy configuration, and bind-mounts the configuration directory from the host into the container in read-only mode:

docker run -d --name my-haproxy \
    -p 8880:7780 -p 7000:7000 \
    -v /opt/haproxy:/usr/local/etc/haproxy:ro \
    haproxy


You can see webapp1 responding when you hit the published port 8880 on the HAProxy node in your browser (mapped to HAProxy's bind port 7780); requests are routed to the backend web applications. You can also view the HAProxy stats on port 7000.

Troubleshooting hints

1. Investigate what is in the HAProxy container logs:

docker logs my-haproxy


alias drm='docker rm -v -f $(docker ps -aq)'

There are many changes in HAProxy from 1.7 to the latest version; these are the ones I've encountered and resolved:

  • The 'mode health' doesn't exist anymore. Please use 'http-request return status 200' instead.
  • please use the 'bind' keyword for listening addresses.

    Sunday, May 30, 2021

    Docker Service Stack deployment

    To work with Docker service stack deployment we need an orchestrator: either Swarm or Kubernetes. In this post I will experiment with a Docker Swarm cluster.

    Prerequisites

  • Docker installed machine
  • Docker swarm initialized and active

  • Create the manifest file that describes the tasks that make up the service definition. We can use declarative definitions for each service in a YAML file. 

    Docker Swarm Service Stack Deployment



    I read the book Learn Docker - Fundamentals of Docker 19.x, where I found a nice explanation of this topic.


     


    Let's run the Docker Swarm visualizer container, which shows the Swarm cluster node status and container service status; it makes very clear how the orchestrator capabilities work for Docker containers.
    docker run -it -d \
      -p 8080:8080 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      dockersamples/visualizer
    
    Alternatively, you can use the docker-compose file as a replacement for the container run command:
    version: '3'
    
    services:
      viz:
        image:  dockersamples/visualizer
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock"
        ports:
          - "8080:8080"
    
    You can enhance its usability with your own parameters to run the containers.

    Before any deployments, the Swarm nodes appear empty:

    Initial Swarm Visualization 


    Visualizer shows us the number of master and worker swarm nodes participating in the cluster. The green glow-filled circle indicates the active Node of the Docker Swarm cluster. 

    Docker stack service deployment

    The Docker Swarm service logs can be viewed in two ways: by service ID or by service NAME (shown after the deployment below). First, the stack definition:
     
    version: "3.7"
    services:
      whoami:
        image: training/whoami:latest
        networks:
          - test-net
        ports:
          - 81:8000
        deploy:
          replicas: 6
          update_config:
            parallelism: 2
            delay: 10s
          labels:
            app: sample-app
            environment: prod-south
    
    networks:
      test-net:
        driver: overlay
    
    The manifest describes the desired state of the application: we declare the port we want on the host and the corresponding container port (here 81 is the host port, 8000 the container port). The service uses the test-net network resource. The deploy section specifies replicas: 6, which means 6 containers will be created on the Docker Swarm cluster, and multiple containers may run in parallel on the same node. We can use labels to identify and remove services at runtime, which helps in monitoring the application; it is also useful for terminating services discovered by these labels.

    Run the following docker Stack deployment command as:
    docker stack deploy whoami --compose-file docker-service-whoami.yml
    
    docker stack ls
    
    Stack deployment automatically creates overlay network
    docker network ls
    docker service ls
    docker container ls
    curl http://192.168.33.250:81
    
    The execution will produce output as follows:

    Curl command output from the Docker swarm cluster


    Now let's see the deployment of whoami sample:
    docker service logs whoami_whoami
    docker service logs kiw9ckp68hu5
    
    After deploying the 'whoami' service, see how it is distributed among multiple machines, a built-in capability of Docker Swarm.

    Docker Swarm Visualization shows replicas=6 on multi-host deployment

    Docker Cluster Resilience Test case

    Here we can test resilience at two levels:
    • container level
    • node level

    Use case 1: Container level fail-over 

    Step 1: Determine the running containers list
    Step 2: Remove a container from the stack 
    Step 3: The service deployment automatically creates a new container to maintain the DESIRED state of 6; when a container is removed, the ACTUAL state drops to 5, and Docker Swarm works to bring the service back up.
    docker container ls
      docker container rm -f whoami_whoami.3.selectyourown-container
    docker ps
    


    Use case 1: Remove a container; the Swarm service recreates a replacement container

    This concludes container-level high availability using stack-based service deployment on a Swarm cluster. The real use case, node-level HA, comes next.

    Use case 2: When the node fails - Service Migration

    Bring down a machine and see the outcome in the Swarm visualizer:

    On your Vagrant machines you can stop one node to see the impact and how the cluster recovers and maintains the DESIRED state of replicas=6. In our example, the service that was running on node1 continues running on the available nodes (mstr, node2) because Swarm migrates the tasks to them automatically. We can also define placement choices in the YAML file. 
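    With Vagrant this is a one-liner (assuming the node is named node1 in your Vagrantfile):
    vagrant halt node1    # simulate a node failure
    # watch the visualizer: the 6 replicas get rescheduled on mstr and node2
    vagrant up node1      # bring the node back when done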

    Swarm Visualization of migration of services to other worker: Vagrant node1 halt

    The visualization tool gives a clear picture of how Docker Swarm migrates a service when there is an issue or failure on a node.
     
    It clearly shows how high availability can be achieved with the Docker Swarm orchestrator.
    Jai Hind!!

    Saturday, May 22, 2021

    Creating your Custom Image using Dockerfile

    How do you create docker image?

    There are multiple ways to create new docker images

    • docker commit: creates a new layer (and a new image) from a running container.
    • docker build: performs a repeatable build sequence using a Dockerfile.
    • docker import: loads a tarball into Docker engine, as a standalone base layer.

    We don't prefer docker import: the import option can be used for various hacks, but its main purpose is to bootstrap the creation of base images.

    Working on Docker Custom Image

    Docker images can be pulled from Docker registries. Docker owns the Docker Hub and Docker Cloud public repositories, where you can find free images. Docker Store is a place where Docker images can be sold: you can create an image and sell it to whoever needs it. The Docker client is the CLI that lets us deal with Docker objects: images, containers, networks, volumes, and services. An image is a read-only template that allows us to create runtime instances, that is, containers.

    Dockerfile structure



    We can create an image from a base image by adding customizations such as installing new libraries or software: for example, using oraclelinux as the base image and installing httpd on top.
    Here base image could be available in the public registry or custom image build for your project.

    The easiest way to customize our Docker image is a Dockerfile, whose instructions create the layers in your image. Let's explore how to build our own image with a Dockerfile, starting with the syntax.

    How to create an efficient image via a Dockerfile?


    • Start with an appropriate base image
    • Do NOT Combine multiple applications into a single container
    • Avoid installing unnecessary packages
    • Use multi-stage builds 


    Dockerfile format

    A Dockerfile is a set of instructions to build the Docker image. It starts with comments; as in most scripting languages, '#' (hash symbol) comments a line. Next comes the base image, FROM, telling where the image can be pulled from. Here we have two options: a Docker private registry for your organization, or the Docker Hub public cloud, which can be used for initial learning or for proofs of concept.
    Create a Dockerfile in any editor; my preferred editor is Visual Studio Code, which gives nice syntax highlighting for Dockerfile instructions.
    # This is first sample Dockerfile
    FROM busybox
    LABEL AUTHOR Pavan Devarakonda
    RUN echo "Building sample docker image"
    CMD echo "Hello Vybhava Container!"
    

    To create the Docker image, run the following command:
    docker build -t hello .
    
    The important thing is that a Dockerfile does not imply any root directory: you must specify the build context at the end of the build command. Above we used dot (.), the current directory; you can pass another PATH to suit your project team's collaboration needs.
    Useful docker build command options
    docker build -f /path/to/a/Dockerfile . #absolute path
    docker build -t vybhava/myapp . #relative path of Dockerfile
    docker build -t vybhava/myapp:1.0 . # with tag label value
    

    Usage of the .dockerignore file


    To increase the build performance, exclude files and directories by adding a .dockerignore file to the context directory.
     *.md
     !README.md
     *.tmp
    

    Escape Parser Directive
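    A parser directive is a special comment at the very top of a Dockerfile. The escape directive changes the line-continuation character from the default backslash, which is mainly useful on Windows where backslash is the path separator. A minimal sketch (the Windows base image shown is illustrative):
    # escape=`
    FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
    COPY testfile.txt c:\
    RUN dir c:\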


    Exploring Docker instructions 

    Let's examine each command in detail how it works.
     

    FROM Instruction

    The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions. Let's see an example
    # This is first sample Dockerfile
    FROM ubuntu
    CMD ["/usr/bin/wc","--help"]
    
    Ref: https://docs.docker.com/engine/reference/builder/#from
     

    COPY and ADD instructions

    copies all files and folders from current directory to /app folder in container-Image
    COPY . /app 
    

    copies everything from the web folder to container-Image /app/web
    COPY ./web /app/web
    

    copies specific file to /data/my-sample.txt file inside container-Image
    COPY sample.txt /data/my-sample.txt
    

    When copying local compressed tar archives (tar, tar.gz, and similar formats) into the container image, ADD automatically unpacks them. Note that zip files and remote URLs are copied as-is, not extracted.
    ADD jdk-8u241-linux-x64.tar.gz /u01/app
    ADD APEX_21.1.zip /u01/
    

    You can also add a file from a URL to the container image:
    ADD http://github.com/sample.txt /data/
    

    EXPOSE

    The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. https://docs.docker.com/engine/reference/builder/#expose

    The LABEL instruction adds metadata to an image. A LABEL is a key-value pair. To include spaces within a LABEL value, use quotes and backslashes as you would in command-line parsing. 

    Reference : https://docs.docker.com/engine/reference/builder/#label
     

    RUN instruction

    RUN has 2 forms:
    RUN <command> (shell form; the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows)
    RUN ["executable", "param1", "param2"] (exec form)
    
    The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile. 

    Reference: https://docs.docker.com/engine/reference/builder/#run

     

    ENV instruction


    The ENV instruction sets the environment variable 'key' to the value "value". Example:
    ENV JDK_DIR /tmp
    ENV myvar=/home/tomcat
    ENV myvar=/home/apache2 var2=$myvar
    ENV var3=$myvar
    

    The environment variables set using ENV will persist when a container is run from the resulting image.
     

    WORKDIR instruction

    Once the container is running, it starts in the directory you set with the WORKDIR instruction. The value passed to WORKDIR is a PATH inside the container; if the directory does not exist, it will be created.
     
    Example: WORKDIR /tmp

    ENTRYPOINT and CMD instructions

    ENTRYPOINT and CMD are both executed at run time, that is, container startup time. If a Dockerfile has multiple ENTRYPOINT or CMD instructions, only the last one takes effect.

    An ENTRYPOINT allows configuring a container that will run as an executable.

    ENTRYPOINT has two forms:

       ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
       ENTRYPOINT command param1 param2 (shell form)
    
    

    Example1

    ENTRYPOINT with CMD params

       
    # This is Entrypoint sample dockerfile
    FROM alpine
    LABEL MAINTAINER BHAVANISHEKHAR@GMAIL.COM
    ENTRYPOINT ["ping"]
    CMD ["-c","4","www.docker.com"]
    
    Build the image from the above instructions:
     docker build -t entrycp:1.0 -f entrycmdparm.Dockerfile .
    
    #Run the container
    
    docker run entrycp:1.0
    

    Example 2:

    ENTRYPOINT param with CMD param

       
    # This is Entrypoint sample dockerfile
    FROM ubuntu
    LABEL USER Pavan
    ENTRYPOINT ["top", "-b"]
    CMD ["-c"]
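
    Building and running it mirrors Example 1 (the file name entrytop.Dockerfile is my own choice); the CMD parameter -c can be overridden at run time:
     docker build -t entrytop:1.0 -f entrytop.Dockerfile .
     docker run --rm entrytop:1.0         # runs: top -b -c
     docker run --rm entrytop:1.0 -n 2    # CMD replaced: runs top -b -n 2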
    

    A Dockerfile is a blueprint for a Docker image, created with instructions.

    Monday, May 10, 2021

    Docker Private Registry with Frontend UI

    Hey! In this DevOps learning post, I've done my experiment on connecting a Docker registry server with a registry UI as a frontend. To support this frontend service, we have an Apache HTTP server container built from the docker-registry-frontend image.


    Docker Registry Server connect with Frontend UI

    Step 1: Go to Docker Hub, search for 'docker-registry-frontend' with the given version, and pull the images to your local system.

    docker pull registry 
    docker pull konradkleine/docker-registry-frontend:v2 
    docker images 
    
    docker network create registry 
    docker network ls
    
    Step 2: Create the registry server with a docker container run command, exposing port 5000 and using the network created in step 1. This separate network makes it easy to isolate these containers from others running on the same host.
    docker run --rm -d -p 5000:5000 --name registry-server \
    --network registry registry
    To verify the access of registry-server
    docker logs registry-server -f
    
    Step 3: You can find many other web UI options on Docker Hub; here I've selected the web-ui image konradkleine/docker-registry-frontend:v2. Two environment variables need to be set, the registry HOST and PORT, and to access the registry web page we open a port with the -p option. 

    Create the registry-ui container as follows:
    docker run \
      -d --rm --name registry-ui --network registry \
      -e ENV_DOCKER_REGISTRY_HOST=192.168.33.250 \
      -e ENV_DOCKER_REGISTRY_PORT=5000 \
      -p 8080:80 \
      konradkleine/docker-registry-frontend:v2
                 
    Step 4: To verify the access logs of registry-ui web server run the following command:
    docker logs registry-ui -f
    
    Step 5: Open a duplicate terminal and tag an image in the host:port/name form so it points at our private registry, then push it:
    docker tag nginx localhost:5000/nginx
    docker images 
    
    docker push localhost:5000/nginx
    
    Now let's open the browser, where we can access the registry-ui, which actually runs on an Apache HTTPD 2 web server.


    Docker Registry UI Frontend
    Docker Private Registry Web 



    Step 6:
    After the push you can see the images in the registry-ui; you can check the same with a curl command as well, like this:
     curl -X GET http://localhost:5000/v2/_catalog
    {"repositories":["nginx"]}
    

    Please write your comments and share if you like it and think it will be helpful for others.

    Tuesday, May 4, 2021

    Backup and Recovery, Migrate Persisted Volumes of Docker

    Namaste!! Dear DevOps enthusiasts, in this article I would like to share an experiment with data persistence, which we learned about in an earlier blog post.

    In the real world these data volumes need to be backed up regularly so that, in case of any disaster, we can restore the data from those backups.

    Backup and Recovery Docker Volumes
    Docker Volumes Backup and Restore

    This story can happen in any real-time project: data migrating from an on-premises data center to a cloud platform, or database servers being backed up and restored across different private networks on a cloud platform.

    Here I've tested this use case on two Vagrant VirtualBoxes: one used to back up and the other to restore the MySQL container.

    Setting up the Docker MySQL container with Volume

    Step 1: Let's create a volume with a name as first_vol:

        docker volume create first_vol #create 
        docker volume ls #Confirm 
    
    Step 2: Create a container, with the volume attached, from the MySQL database image to back up:
    docker run -dti --name c4bkp --env MYSQL_ROOT_PASSWORD=welcome1 \
     --volume first_vol:/var/lib/mysql mysql:5.7
    
    Step 3: Enter into the c4bkp container
    docker exec -it c4bkp bash
    

    Create data in MySQL database


    Step 4: Log in to the MySQL database within the MySQL container:
     mysql -u root -p
     password: 
    
    Enter the password defined by the MYSQL_ROOT_PASSWORD environment variable at container-creation time.

     
  • Create a database named 'trainingdb' at the mysql prompt in the same container:
    CREATE DATABASE trainingdb; use trainingdb;
    
  • Now check that the database was created using the SHOW command, then create the table trainings_tbl with the required fields and their respective datatypes:
      SHOW CREATE DATABASE trainingdb;
      
      create table trainings_tbl(
       training_id INT NOT NULL AUTO_INCREMENT,
       training_title VARCHAR(100) NOT NULL,
       training_author VARCHAR(40) NOT NULL,
       submission_date DATE,
       PRIMARY KEY ( training_id )
    );
    
      
  • Now insert rows into the trainings_tbl table. In a real project the data would come from a UI; as this is an experiment we are entering records manually:
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Docker Foundation", "Pavan Devarakonda", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("AWS", "Viswasri", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("DevOps", "Jyotna", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Kubernetes", "Ismail", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Jenkins", "sanjeev", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Ansible", "Pranavsai", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("AWS DevOps", "Shammi", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("AWS DevOps", "Srinivas", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Azure DevOps", "Rajshekhar", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Middleware DevOps", "Melivin", NOW());
    INSERT INTO trainings_tbl(training_title, training_author, submission_date) VALUES("Automation experiment", "Vignesh", NOW());
    commit;
    
    After storing a couple of records in the table, we are good to go for testing the persistent data. Check the table data using a SELECT query:
        select * from trainings_tbl;
        
    Now let's exit from the container. 

  • Step 5:
    Taking the backup
         docker run --rm --volumes-from c4bkp  -v $(pwd):/backup alpine tar czvf /backup/training_bkp1.tar.gz /var/lib/mysql/trainingdb
         or
         docker run --rm --volumes-from c4bkp  -v $(pwd):/backup ubuntu tar czvf /backup/training_bkp.tar.gz /var/lib/mysql/trainingdb
         ls -l *gz
        
    Here the most important option is --volumes-from; it must point to the MySQL container. We can check the backup file's contents, where the t flag tells tar to list (test) the .gz archive:
        tar tzvf training_bkp.tar.gz

    Migrate Data

    Step 6: The backup tar.gz file can be migrated/moved to another Docker host machine, where you restore the data into the same mount location of a container created from the same image (the source machine used mysql:5.7, so the destination should too). 

    Let me copy it to the shared folder of the Vagrant box so that it is accessible on the other Vagrant box:
    cp training_bkp1.tar.gz /vagrant/
    

    Restore from the Backup

    Step 7: Create a new container and restore the old container's volume database. Open two terminals and use a new volume named 'restore' to run the 'for_restore' container:
    docker run -dti --name for_restore --env MYSQL_ROOT_PASSWORD=welcome1 \
     --volume restore:/var/lib/mysql mysql:5.7
    #check inside container volume location
    docker exec -it for_restore bash -c "ls -l /var/lib/mysql"
    
    To restore the data from the backup tar file we already took from the first_vol volume:
    docker run --rm --volumes-from for_restore \
     -v $(pwd):/restore ubuntu bash -c "cd /var/lib/mysql/ && tar xzvf /restore/training_bkp.tar.gz --strip-components=3"
    
    I've googled to understand how --strip-components helps in extracting only the specified sub-directory content; an illustration follows.
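    A hedged illustration of what --strip-components=3 does here: the archive stores paths like var/lib/mysql/trainingdb/..., so stripping the first three components (var/lib/mysql) leaves just trainingdb/... to be extracted under the current directory.
    tar tzvf /restore/training_bkp.tar.gz | head -n 2
    # var/lib/mysql/trainingdb/         <- the 3 leading components get stripped
    # var/lib/mysql/trainingdb/db.opt   <- lands as trainingdb/db.opt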
    Now the exciting part of this experiment is here...
  • Recovered Data in MySQL database container

     

    Check the other terminal: the /var/lib/mysql folder will be filled with the extracted database files.
    Once you see the data restored, the same approach can be applied to a remote machine; that is the migration strategy for Docker volumes.
  • Step 8: Cleanup of volumes (optional). You can use the following command to delete a single data volume:
    docker volume rm datavol 
    

    To delete all unused data volumes using the following command
        docker volume prune
    

    References used for this post

  • Create db on My SQL
  • Create table and insert
  • MySQL database password recovery or restart
    Sunday, May 2, 2021

    Docker SSHFS plugin external storage as Docker Volume

    Namaste! In this exciting Docker storage volume story, the experiment uses two Vagrant VirtualBoxes; you can also use any two cloud instances (GCP, AWS, Azure, etc.). DockerHost will run the Docker containers and DockerClient will run the SSH daemon. 

    SSHFS Volume in docker


    Docker Volume with External Storage using SSHFS

    Docker allows us to use external storage, with constraints; these constraints apply on cloud platforms as well. External or remote volume sharing is possible using NFS or, as in this post, SSHFS. 

    How to install SSHFS volume?


    Step-by-step procedure for using external storage as given below :
    1. Install the Docker plugin for SSHFS (granting all permissions is recommended):
      docker plugin install \
      --grant-all-permissions vieux/sshfs
      
    2. Create a Docker Volume
        docker volume create -d vieux/sshfs \
      -o sshcmd=vagrant@192.168.33.251:/tmp \
      -o password=vagrant -o allow_other sshvolume3
      
    3. Mount the shared folder on the remote host:
      mkdir /opt/remote-volume # In a real-time project you must have a shared volume across ECS instances
      
    4. The remote box in my example is 192.168.33.251. Check the PasswordAuthentication value in its sshd_config; the default may be no, but since this volume authenticates to the remote box with an SSH login password, it must be yes:
      vagrant@dockerclient1:/tmp$ sudo cat /etc/ssh/sshd_config |grep Pass
      PasswordAuthentication yes
    5. Check the sshvolume3 configuration and which remote VM it is connected to:
           docker volume inspect sshvolume3
        

      docker inspect sshfs volume
      Docker Volume inspect sshfs 

    6. Now create an alpine container using the above-created sshvolume3:
      docker container run --rm -it -v sshvolume3:/data alpine sh
        
    7. Validate that data created inside the container lands on the attached volume, which is mapped to the remote VirtualBox.
    8. Enter the following commands to create a file and store a line of text using the echo command (see the sketch below).
      sshfs using container
      Docker container using sshfs volume
    9. On the remote box, check that the file created inside the container is available on the remote box 192.168.33.251:
    Remote box /tmp containers file
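      A minimal sketch of steps 7-9 (the file name and message are my own choices):
      / # echo "Hello from the sshfs volume" > /data/hello.txt    # inside the alpine container
      / # cat /data/hello.txt
      Hello from the sshfs volume
      vagrant@dockerclient1:~$ cat /tmp/hello.txt                 # on the remote box 192.168.33.251
      Hello from the sshfs volume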


      You can use this remote volume inside docker-compose and also stack deployment YAML files.
    1. Create a service that uses external storage (here shown with an NFS-backed volume):
      docker service create -d \
       --name nfs-service \
       --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/,"volume-opt=o=10.0.0.10,rw,nfsvers=4,async"' \
       nginx:latest
      
    2. List the service and validate it by accessing it.
      docker service ls
          

    Hope you enjoyed this learning post. Please share it with your friends and colleagues.

    Saturday, April 17, 2021

    Docker Expose Port Understanding Port Mapping and Port Forwarding

    In this post we will discuss an experiment with Docker container network ports, exposed and published. The netcat utility will be used to make an echo server, which reads a message on one socket and sends it out on another.


    Docker expose ports
    Understanding Port forwarding in Docker Containers


    Background on Docker Port

    Docker container ports are not reachable from the host by default; they must be mapped (published) to host ports.

    The -P option will bind the container exposed ports (EXPOSE command in Dockerfile) to random available ports of the host.

    We can bind any port of the container, even one not pre-defined with EXPOSE. For this, use -p (lower case) with the host port, followed by a colon (:) and the container port. 

    Note: This experiment works on the ubuntu:14.04 image, because newer Ubuntu images do not ship the nc utility needed here to look into the Docker network.

    Here we have four use cases:

    1. Two ports open to run the echo server 
    2. Container access host network
    3. Dynamically port mapping
    4. Expose Port using TCP/UDP protocol

    To understand more about Docker network isolation with namespaces, open 3 terminals arranged to view all 3 on a single screen.

    Docker expose ports
    Examples of Docker container port 



    USE CASE 1: TWO PORTS EXPOSED - USED AS ECHO-SERVER 

    In this case, let's use three terminals  on the same screen

    Terminal 1: 

    # Create a container as echo-server with expose of 2 different ports 
    docker run --rm -ti --name echoserver \
     -p 5000:5000 -p 5001:5001  \
     ubuntu:14.04 /bin/bash
    # Inside the container pipe between two ports
    nc -lp 5000 | nc -lp 5001
    
    To validate this experiment: the netcat commands above pipe one port into another inside the container, so together they work as an echo server.

    Now open the Terminal 2 window and send a text message to localhost port 5000; it will be forwarded to localhost port 5001.

    Terminal 2: Run the nc command as shown

    nc localhost 5000
    Vybhava Technologies gives knowledge on Docker

    Terminal 3: Now open the third terminal and run the nc command with port 5001; it will sit in a waiting state:

    nc localhost 5001

    Here you can observe that Terminal 3 automatically displays the same message that you entered and sent in Terminal 2; the Docker container acted as an echo server.

    Docker Expose port forwarding
    Docker port forwarding within a container


    USE CASE 2: Containers using host network - host.docker.internal/host IP 

    Terminal 1 remain the same as we have done it earlier in this Blog post 

    Terminal 2: 

    Note: Docker version 20.x supports the following --add-host option as a way to communicate with the host network from a container.
    docker run -it --name echoclient1 \
    --add-host=host.docker.internal:host-gateway \
    ubuntu:14.04 bash

    #inside container 

    nc host.docker.internal 5000
    echo message here

    Terminal 3: Now run the ubuntu container

    docker run -it --name echoclient2 \
    --add-host=host.docker.internal:host-gateway \
    ubuntu:14.04 bash

    #inside container 

    nc host.docker.internal 5001
    container Communicate with Host
    Expose port used by Sender and Receiver container


    Observe that a message written in terminal 2 is displayed in terminal 3. 

    USE CASE 3: Dynamically port mapping to exposed container ports 

    The port inside the container is fixed.

    The port on the host machine or VM is chosen from the available unused ports. 

    This allows many containers to run programs with fixed ports. 

    This is often used with service discovery programs. 


    Terminal 1: 

    docker run --rm -ti \
    -p 5000 -p 5001 --name echoserver \
    ubuntu:14.04 bash

    #inside container 

    nc -lp 5000|nc -lp 5001


    Terminal 2:

    docker port echoserver


    #shows port mappings 

    nc localhost hostport1


    #replace hostport1 with the first mapped host port shown above

    Enter a message to echo

    Terminal 3:

    nc localhost hostport2

    #replace hostport2 with the second mapped host port

    Now in this case you can observe that the text is automatically sent to Terminal 3, where netcat displays it on your terminal via the open port. 
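    A hedged one-liner to grab the first mapped host port instead of reading it manually (assumes the IPv4 mapping is listed first):
    HOSTPORT1=$(docker port echoserver 5000 | head -n1 | cut -d: -f2)
    nc localhost "$HOSTPORT1"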


    USE CASE 4: USING AN EXPOSED PORT WITH A PROTOCOL, TCP OR UDP

    docker run -p host-port:container-port/protocol 

    # protocol can be tcp or udp 

    Terminal 1:

    docker run --rm -ti -p 8888/udp \
    --name echoserver ubuntu:14.04 bash

    Here we have not specified a host port to forward to, so the Docker engine selects a random port from the host machine's available ports.


    Terminal 2:

    First check the host port to which the container port is bound:
    docker port echoserver


    Use the random port to send the message from localhost.
    nc -u localhost mapport1-from-above
    hello from udp
    

    Now observe that the message is echoed back to the terminal running the container.

    Docker container echoserver run netcat with UDP
    Expose UDP Port for docker container


    Reference: 

    1. Docker official document port link
    2. Discuss on host docker internal 
    3. Play with NetCat on ubuntu  

