
Friday, December 13, 2019

Docker Networking

Hello, dear DevOps enthusiast, welcome back to the DevOpsHunter learners site! In this post, we will explore the Docker networking model, the different types of network drivers, and their benefits.

What is Docker Networking?

Understanding docker networking

Docker provides multiple network driver plugins, installed as part of the library along with the Docker installation, so you have choice and flexibility. The basic classification is by the number of participating hosts: single-host (local scope) drivers such as bridge, host, and none, and multi-host drivers such as overlay.

Let's look at the types of network drivers in the Docker ecosystem.

docker network overview

Docker containers aim to be tiny in size, so some of the regular network commands may not be available inside them. We need to install those tools inside the container.

Issue #1: Inside my container, 'ip' or 'ifconfig' is not working. How do I resolve the 'ip' command not being found?

Solution:
  apt update;  apt install -y iproute2  
  


Issue #2: The 'ping' command is not working. How do I resolve this? Solution: Go inside your container and install the ping utility:
  apt update;  apt install -y iputils-ping  
  

What is the purpose of Docker Network?


  • Containers need to communicate with the external world (network)
  • Containers must be reachable from the external world to provide services (web applications must be accessible)
  • Allow containers to talk to the host machine (container to host)
  • Inter-container connectivity on the same host and across multiple hosts (container to container)
  • Discover services provided by other containers automatically (service discovery)
  • Load balance network traffic between many containers in a service (a single entry point to many containers)
  • Secure multi-tenant services (cloud-specific network capability)
Features of Docker network
  • Network type - the driver chosen for a network determines how its containers get connectivity
  • Publishing ports - opens access to services running in a container
  • DNS - custom DNS settings
  • Load balancing - allow/deny rules implemented with iptables
  • Traffic flow - controls how application traffic is routed
  • Logging - where the network logs should go

To explore the docker network object-related commands, you can use --help:
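docker network --help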

docker network help output

List the default Docker networks; this also shows the three network driver options ready to use:
docker network ls 

# A fresh Docker CE installation ships with three local-scoped (single-host) network drivers ready to use: bridge, host, and null (none). Each driver type has its own significance.
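Illustrative 'docker network ls' output on a fresh installation (the network IDs will differ on your machine):
NETWORK ID     NAME      DRIVER    SCOPE
9f6ae26ccb82   bridge    bridge    local
17e324f45964   host      host      local
098520f7fce0   none      null      local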

docker network create --driver bridge app-net
docker network ls
# To remove a docker network use rm
docker network rm app-net
Let's experiment
docker run -d -it --name con1 alpine
docker exec -it con1 sh
# the container should be able to reach the outside world
ping google.com

To check the network route table entries inside a container, and on the Linux box where the Docker Engine is hosted:
ip route
or 
ip r
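Typical output inside a container on the default bridge (addresses are illustrative; Docker normally assigns 172.17.0.0/16 to docker0):
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.2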

Now we can validate that the con1 container is able to communicate with another container, con2:
ping con2
# if the bridge utility is not found, install it
yum -y install bridge-utils
# check the bridge and its associated virtual eth cards
brctl show

The 'brctl' show command output

The output shows the mapping of the bridge to the virtual interfaces attached to each container.

Default Bridge Network in Docker

The following experiment will help you understand the default bridge network that is already available whenever the Docker daemon is running. (If con1 from the earlier experiment is still around, remove it first with 'docker rm -f con1'.)


docker run -d -it --name con1 alpine
docker run -d -it --name con2 alpine

docker ps
docker inspect bridge

default bridge network connected with con1, con2
After looking into the inspect command output, you will get a complete picture of how the default bridge works.
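To pull just the container-to-IP mapping out of that output, you can use the inspect format option (a convenience, not a requirement):
docker network inspect -f '{{json .Containers}}' bridge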

Now let's test connectivity from inside con1:
docker exec -it con1 sh
ip addr show
ping -c 4 google.com
ping -c 4 ipaddr-con2 # use con2's IP address from the inspect output
ping -c 4 con2        # should NOT work: DNS name resolution is not available on the default bridge
exit
# remove the containers which we tested
docker rm -v -f con1 con2

User-defined Bridge Network


docker network create --driver bridge mynetwork
docker network ls
docker inspect mynetwork

docker run -d -it --name con1 --network mynetwork alpine
docker run -d -it --name con2 --network mynetwork alpine

docker ps
docker inspect mynetwork

Now let's see inside the container with the following:
docker exec -it con1 sh
ip addr show
ping -c 4 google.com
ping -c 4 ipaddr-con2 # use con2's IP address from the inspect output
ping -c 4 con2        # should work: automatic DNS resolution is available in mynetwork
exit 

Clean up all the containers after the above experiment:
docker rm -v -f con1 con2 #cleanup
Now let's check the connectivity between two user-defined bridge networks.

docker network create --driver bridge mynetwork1
docker network create --driver bridge mynetwork2

docker run -d -it --name con1 --network mynetwork1 alpine
docker run -d -it --name con2 --network mynetwork1 alpine

docker run -d -it --name con3 --network mynetwork2 alpine
docker run -d -it --name con4 --network mynetwork2 alpine

docker inspect mynetwork1
docker inspect mynetwork2

Now we can get the IP addresses of the containers on both networks using the 'docker inspect' command:
docker exec -it con1 sh
ping -c 4 ipaddr-con3 # get the IP address using 'docker container inspect con3'
ping -c 4 con3 # this will NOT work because the two bridge networks are isolated
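If you do need con1 to reach con3, attach con1 to mynetwork2 as well; this is exactly what the 'docker network connect' command covered in the next section does:
docker network connect mynetwork2 con1
docker exec -it con1 ping -c 4 con3   # now works: both containers share mynetwork2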

Docker Host Networking Driver

Now let's experiment with the host network driver:


 docker run --rm -d --network host --name my_nginx nginx
 curl localhost
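A quick way to confirm host networking is in effect: no ports are published, because the container shares the host's network stack directly:
docker port my_nginx   # prints nothing for a host-network container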
Now you know how to run a container attached to a network. But can a container that was already created on the default bridge network be attached to a custom bridge network? And does it keep the same IP address?

How to connect a user-defined network to a running container?

It is possible to move a Docker container from the default bridge network to a user-defined bridge network. When this change happens, the container's IP address changes dynamically.

The 'docker network connect' command is used to connect a running container to an existing user-defined bridge. The syntax is as follows:
 docker network connect [OPTIONS] NETWORK CONTAINER

Example: create a running container, ng2, which starts on the default bridge network.
docker run -dit --name ng2 \
  --publish 8080:80 \
  nginx:latest 

docker network connect mynetwork ng2

After connecting it to the user-defined bridge network, check the IP addresses of the container; it is now attached to both networks.
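A convenient way to see every network the container is attached to, along with its IP address on each, is the inspect format option:
docker container inspect -f '{{json .NetworkSettings.Networks}}' ng2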

To disconnect use the following:
docker network disconnect mynetwork ng2
Once disconnected, check the IP address of the container ng2 using 'docker container inspect ng2'. This demonstrates that a container can be migrated from one network to another without stopping it, and without any errors.

Overlay Network for Multi-host


The Docker overlay network comes into play with Docker Swarm (or Kubernetes), where you have a multi-host Docker ecosystem.
docker network create --driver overlay net-overlay
# the above command fails with an error when Swarm mode is not active
docker network ls
The overlay networks become visible in the network list once you activate Swarm on your Docker Engine, and for overlay driver plugins that support it, you can create multiple subnetworks.
# if the swarm is not initialized, use the following command with the IP address of your Docker host
docker swarm init --advertise-addr 10.0.2.15
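With Swarm active, the overlay network creation attempted above now succeeds:
docker network create --driver overlay net-overlay
docker network ls   # net-overlay is now listed with scope 'swarm'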
An overlay network serves containers through the Docker 'service' object instead of the plain 'container' object.
docker service create --help
# Create service of nginx 
docker service create --network=net-overlay --name=app1 --replicas=4 nginx
docker service ls|grep app1 
docker service inspect app1 | more # look for the VirtualIPs in the output
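To extract just the virtual IPs (one per attached network) instead of paging through the full output:
docker service inspect -f '{{json .Endpoint.VirtualIPs}}' app1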

The Docker overlay driver allows more than multi-host communication: with overlay driver plugins that support it, you can create multiple subnetworks in a single network.
docker network create -d overlay \
                --subnet=192.168.0.0/16 \
                --subnet=192.170.0.0/16 \
                --gateway=192.168.0.100 \
                --gateway=192.170.0.100 \
                --ip-range=192.168.1.0/24 \
                --aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
                --aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
                my-multihost-network
docker network ls
docker network inspect my-multihost-network # Check the subnets listed out of this command
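To list only the subnet configuration from that inspect output:
docker network inspect -f '{{json .IPAM.Config}}' my-multihost-network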
The output lists both subnets with their gateways and the configured IP range.

The Docker documentation has a useful network summary, which says:

  • User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host.
  • Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
  • Overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
  • Macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.

Reference:


  1. Docker Networking: https://success.docker.com/article/networking
  2. User-defined bridge networks: https://docs.docker.com/network/bridge/
  3. Docker expose ports externally: https://bobcares.com/blog/docker-port-expose/

Friday, October 4, 2019

Docker Swarm - Master and Workers: Docker Cluster Architecture

Docker Swarm

Docker Swarm is one of the crucial components of the Docker ecosystem. Native Docker clustering with Swarm gives us scheduling, high availability, security, and platform scalability.
Kubernetes, Fleet, and Mesos work similarly to achieve the same goal: they provide a layer of abstraction over system resources and expose interfaces to the cluster manager.

Docker Swarm is NOT a plugin; it is built into the Docker Engine. A basic Docker installation can run a Swarm cluster; it does not require any plugin to be installed.

What is Docker Swarm Orchestration?

Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, DevOps operators and developers can establish and manage a cluster of Docker nodes as a single virtual system.

A swarm is a group of nodes running the Docker daemon (Docker Engine), with manager and worker nodes joined to form a container cluster that provides HA in production environments.


Docker Swarm features



  • Cluster management: Use the Docker Engine CLI to create a swarm of Docker Engines where you can deploy application services 
  • Declarative service model: Uses a declarative approach to let you define the desired state of the various services in your application stack 
  • Scaling: When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state
  • Desired state reconciliation: Swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state 
  • Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balances running containers 
  • Rolling updates: At rollout time you can apply service updates to nodes incrementally. If anything goes wrong, you can roll back a task to a previous version of the service

  • To work with Docker Swarm, we first need to enable it. For a proper swarm, run at least three VMs with Docker already installed. Make sure all the VMs are in the same timezone and have the same date and time. The following Vagrantfile provisions one manager and two worker nodes accordingly:
    $setup_docker = <<-SCRIPT
    apt-get update;
    DEBIAN_FRONTEND=noninteractive apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common;
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -;
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    apt-get update;
    mkdir -p /etc/systemd/system/docker.service.d
    echo "[Service]" >> /etc/systemd/system/docker.service.d/http-proxy.conf
    echo "EnvironmentFile=-/etc/sysconfig/docker" >> /etc/systemd/system/docker.service.d/http-proxy.conf
    apt-get install -y docker-ce docker-ce-cli containerd.io;
    sed -i 's|-H fd://|-H fd:// -H tcp://0.0.0.0:2375|g' /lib/systemd/system/docker.service;
    systemctl daemon-reload && systemctl restart docker.service;
    SCRIPT

    Vagrant.configure(2) do |config|
      config.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"
      end
      config.vm.define "master" do |config|
        config.vm.box = "ubuntu/bionic64"
        config.vm.hostname = "master.devopshunter.localdomain"
        config.vm.network "private_network", ip: "192.168.33.110"
        config.vm.provision "shell", inline: $setup_docker
      end
      config.vm.define "swarmnode1" do |config|
        config.vm.box = "ubuntu/bionic64"
        config.vm.hostname = "swarmnode1.devopshunter.localdomain"
        config.vm.network "private_network", ip: "192.168.33.111"
        config.vm.provision "shell", inline: $setup_docker
      end
      config.vm.define "swarmnode2" do |config|
        config.vm.box = "ubuntu/bionic64"
        config.vm.hostname = "swarmnode2.devopshunter.localdomain"
        config.vm.network "private_network", ip: "192.168.33.112"
        config.vm.provision "shell", inline: $setup_docker
      end
    end



    # ON THE MANAGER NODE: the 'docker swarm init' subcommand creates a new swarm cluster. The node on which 'init' runs becomes a manager by default.
     docker swarm init --advertise-addr=192.168.33.110
    

    docker swarm init command execution
    # Confirm the manager node in the list
     docker node ls
    

    # In the 192.168.33.111 box
     # To add a worker to this swarm, run the following command:
     docker swarm join --token SWMTKN-1-5th0jl9mqq7fthcpabioyinoc1x109z3b6viahdilm2v841rwc-c8c17dj7stkld1zpwkxibt4zl 192.168.33.110:2377
    docker swarm cluster joining commands execution

    Note: If you no longer have the manager node's output, you can get the token for a worker to join as follows:
    docker swarm join-token worker
    

    On the manager node, check the node list after the worker joins; it should show information for 2 nodes.
     docker node ls
    

    docker Node list to check the swarm cluster

    Note that a node that has been made a 'Manager' cannot also join the same swarm as a worker. The key Swarm objects are nodes, services, containers, and tasks.

    Service Deployment in Swarm



    • Services are really just running “containers in production” 
    • A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on 
    • Scaling a service changes the number of container instances running that piece of software 

    Service Creation

     Create a service across the Docker Swarm cluster:

     docker service create --name webapp1 --replicas=4 --publish 8090:80 nginx
    


    # List of services
     docker service ls
     
    docker service with publish option command execution

    # List of tasks running for the service
     docker service ps webapp1
    

    You can view the Nginx default page from any node's host IP address on port 8090; the swarm routing mesh publishes the port on every node:
    Running service in multiple replicas on Swarm cluster nodes

    Scaling service 

    Scale the service up to 8 tasks (the service we created is named webapp1):
    docker service scale webapp1=8
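    To confirm the new task count:
     docker service ps webapp1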
    

    Global mode service creation
    docker service create --name test-redis --mode global redis
    

    This will run one task of the service on every node in the Docker Swarm cluster; check it with:
    docker service ls
    

    Look at the changes in the network list after the swarm is initiated:
    docker network ls
    

    # Observe that 2 new networks were added: docker_gwbridge and ingress

    docker swarm - overlay network


     Docker autolock can be enabled by two commands: 'swarm init' and 'swarm update'. The following is the Docker command to enable autolock on an existing swarm:
    root@dockerhost:~# docker swarm update --autolock=true
    Swarm updated.
    To unlock a swarm manager after it restarts, run the `docker swarm unlock`
    command and provide the following key:
    
        SWMKEY-1-iWhW+8iF17n1C/2aPKcULTAwpi9pcmEsl2GKHtUwzhU
    
    Please remember to store this key in a password manager, since without it you
    will not be able to restart the manager.
    root@dockerhost:~# vi swarmkey.txt
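    After the Docker daemon on a locked manager restarts, the swarm stays locked until you supply that key. A minimal sketch of the unlock flow (the prompt expects the SWMKEY printed above):
    systemctl restart docker
    docker node ls            # fails while the swarm is locked
    docker swarm unlock       # paste the SWMKEY-1-... key when prompted
    docker swarm unlock-key   # prints the current unlock key on an unlocked manager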
    

    Default Overlay network

    Remember this: 10.0.0.0/8 is the default address pool used by Docker Swarm for global-scope overlay networks.
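    If that default pool conflicts with your existing network, you can choose a different pool when initializing the swarm (available since Docker 18.09; the pool below is just an example):
    docker swarm init --advertise-addr 192.168.33.110 --default-addr-pool 10.20.0.0/16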


    GCP Cloud users note: 

    1. On Google Cloud, if you are joining the swarm cluster, use the private IP address of the master in the 'docker swarm join' command.
    2. You can take a node out of the swarm cluster using the 'docker swarm leave' command on the worker node.
    3. When you try to access the new service port on GCP, open the corresponding TCP port in the firewall.

