Sunday, October 10, 2021

HAProxy on Docker: load balancing Swarm Nginx web services

What does HAProxy do?


HAProxy is a free and open-source load balancer that enables DevOps/SRE professionals to distribute TCP-based traffic across many backend servers. It also works as a Layer 7 load balancer for HTTP traffic.

HAProxy runs on Docker

The goal of this post is to learn more about HAProxy and how to configure it with a set of web servers as its backend. It accepts requests over the HTTP protocol, generally on the default port 80, to serve real-time application traffic. HAProxy supports multiple balancing methods; in this experiment the backend uses round robin.
HAProxy on Docker routing traffic to Nginx web apps, load balanced via Swarm ingress

In this post, we have two phases: first, prepare the web application to run with high availability by using multiple Docker machines to form a Swarm cluster; then deploy the web application using the 'docker service' command.

Step 1: Create a Swarm cluster on 3 machines using the following commands

docker swarm init --advertise-addr=192.168.0.10

Join the other nodes to the cluster using the 'docker swarm join' command shown in the output of the command above.
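The join command printed by 'docker swarm init' looks roughly like the one below; the token here is a placeholder, so copy the exact command from your own output and run it on each of the other machines:

docker swarm join --token SWMTKN-1-<token-from-init-output> 192.168.0.10:2377

Back on the manager, list the nodes to confirm they joined: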

docker node ls

Deploy web service 

Step 2: Deploy a special Nginx-based web service image that is designed to run on a Swarm cluster

docker service create --name webapp1 --replicas=4 --publish 8090:80 fsoppelsa/swarm-nginx:latest


Now, check that the service is running across the Swarm nodes:

docker service ps webapp1
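
As a quick sanity check, you can also hit the published port directly on any node, and the Swarm routing mesh will forward the request to one of the replicas. The addresses below are the node IPs used later in the HAProxy backend; adjust them to your own:

curl -I http://192.168.0.10:8090
curl -I http://192.168.0.13:8090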

Load balancer Configuration

Step 3: Create a new node (node 5) dedicated to HAProxy load balancing.


Step 4: Create the configuration file for HAProxy to load balance webapp1 running on the Swarm nodes. In this experiment, I modified this haproxy.cfg file a couple of times to get the port bindings right and to expose the health and stats of the backend webapp1 servers.

vi /opt/haproxy/haproxy.cfg
global
    daemon
    log 127.0.0.1 local0 debug
    maxconn 50000

defaults
  log global
  mode http
  timeout client  5000
  timeout server  5000
  timeout connect 5000

# simple endpoint that lets external monitors confirm HAProxy itself is up
listen health
  bind :8000
  http-request return status 200
  option httpchk

# built-in statistics page
listen stats
  bind :7000
  mode http
  stats uri /

# frontend that accepts client traffic and hands it to the webapp1 backend
frontend main
  bind *:7780
  default_backend webapp1

# Swarm nodes publishing webapp1 on port 8090
backend webapp1
  balance roundrobin
  mode http
  server node1 192.168.0.13:8090
  server node2 192.168.0.12:8090
  server node3 192.168.0.11:8090
  server node4 192.168.0.10:8090

  

Save the file and we are good to proceed.
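
Before running the load balancer, it is worth validating the file with HAProxy's check-only mode; the official haproxy image supports this, and the paths below match the volume mount used in the next step:

docker run --rm -v /opt/haproxy:/usr/local/etc/haproxy:ro haproxy \
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg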


Running HAProxy on Docker 

The following docker run command starts HAProxy in detached mode, publishes the ports used in the HAProxy configuration, and uses a Docker volume so the configuration file on the host machine is accessible to the container in read-only mode:

docker run -d --name my-haproxy \
    -p 8880:7780 -p 7000:7000 \
    -v /opt/haproxy:/usr/local/etc/haproxy:ro \
    haproxy
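
Once the container starts, confirm it is up and that the port mappings took effect:

docker ps --filter name=my-haproxy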


You can see webapp1 responding when you hit the HAProxy node in the browser on the published port 8880, which maps to the frontend bind port 7780 inside the container and routes requests to the backend web applications. You can also view the HAProxy stats page on port 7000.
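
From the command line, a few repeated requests against the published port should all return HTTP 200 while HAProxy round robins across the replicas; the address below is a placeholder for the HAProxy node's IP, so substitute your own:

for i in 1 2 3 4; do curl -s -o /dev/null -w "%{http_code}\n" http://<haproxy-node-ip>:8880; done

The stats page is reachable at http://<haproxy-node-ip>:7000/ with the same substitution.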

Troubleshooting hints

1. Investigate what is in the HAProxy container logs:

docker logs my-haproxy


2. While iterating on the configuration, it helps to remove the container and recreate it. This alias force-removes all containers along with their anonymous volumes:

alias drm='docker rm -v -f $(docker ps -aq)'

3. There are many changes in HAProxy from version 1.7 to the latest release, which I encountered and resolved with the following:

  • The 'mode health' doesn't exist anymore. Please use 'http-request return status 200' instead.
  • please use the 'bind' keyword for listening addresses.
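
As an illustration of both points, the old 1.x-style health listener had to be rewritten roughly like this for a current HAProxy release (a sketch, not the exact diff from this experiment):

# old 1.x syntax: address on the 'listen' line and 'mode health'
# listen health 0.0.0.0:8000
#   mode health

# current syntax: explicit 'bind' and an http-request rule
listen health
  bind :8000
  http-request return status 200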