Monday, April 14, 2025

Docker Command Tricks & Tips

 

Docker container command Tips & Tricks


The idea here is to use the Unix/Linux 'alias' command to shorten the most common docker container, network, and volume sub-commands, so you gain productivity while building Docker images and experimenting with newly created containers. This trick works on both bash and zsh shells.

Improve productivity with aliases for the Docker CLI
 
First, examine the docker container listing using the powerful '--format' option:
docker container ps -s \
  --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"

To get the logs of any application that runs in a container, we can use the following:
alias dkrlogs='docker logs'
alias dkrlogsf='docker logs -f '


To list the images:
alias dkri='docker image ls'

To list the containers:
alias dkrcs='docker container ls'

To remove a container in 'Exited' status:
alias dkrrm='docker rm'

Use docker top to see the IDs of the processes running inside a container:

alias dkrtop='docker top'

All the above commands are also helpful in pipelines; containerization makes CI/CD seamless.

We can add the following alias lines to our .bashrc, .bash_profile, or .bash_aliases, where all the simplified docker commands can be defined as aliases:
alias dkrlogs='docker logs'
alias dkrlogsf='docker logs -f '
alias dkri='docker image ls'
alias dkrcs='docker container ls'
alias dkrrm='docker rm'
alias dkrtop='docker top'
alias cleanall='docker container rm $(docker ps -a -q)'
alias dkrps='docker ps --all --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Status}}"'
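After adding these lines, reload the file in the current shell (or open a new terminal) so the aliases take effect, for example:
source ~/.bashrc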

Improved version of Bash aliases for Docker commands

After a deep search on the internet for Docker alias examples, I prepared the following set to improve productivity.
## Basic Docker Aliases
alias dps="docker ps"		# List running containers.
alias dpa="docker ps -a"	# List all containers, including stopped ones.
alias di="docker images"	# List all Docker images.
alias dip="docker inspect --format '{{ .NetworkSettings.IPAddress }}'"	# Get the IP address of a container.
alias dl="docker ps -l -q"	# Get the ID of the most recently created container.

## Container Management
alias dstop='docker stop $(docker ps -a -q)'	# Stop all containers.
alias drm='docker rm $(docker ps -a -q)'	# Remove all containers.
alias drmf='docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)'		# Stop and remove all containers.

## Image Management
alias dri='docker rmi $(docker images -q)'	# Remove all Docker images.

# Docker Volume management
alias dvls='docker volume ls'
alias dbclean='docker volume rm $(docker volume ls -q)'

## Docker log management 
alias dkrlogs="docker logs "
alias dkrlogsf="docker logs -f "

## Docker compose commands simplified 
dcup() {
 cd /path/to/your/local/project
 docker-compose up -d
}

dcdown() {
 cd /path/to/your/local/project
 docker-compose down
}

dcbuild() {
 cd /path/to/your/local/project
 docker-compose up --build
}
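A more flexible variant, if you juggle more than one Compose project, is to pass the project directory as an argument instead of hard-coding the path; the function name dc below is just an illustration, and on newer Docker releases the 'docker compose' plugin can be substituted for docker-compose:

dc() {
 # usage: dc /path/to/project [up -d|down|up --build ...]
 local dir="$1"
 shift
 (cd "$dir" && docker-compose "$@")
}

# example: dc ~/projects/myapp up -d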

   

How to know which Docker volume is connected to your container?


Let's create a container named myweb with a Docker volume named web-volume attached.
docker run -d --name myweb \
 --mount source=web-volume,destination=/usr/share/nginx/html nginx
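If the named volume does not exist yet, Docker creates it for you; you can confirm that with:
docker volume ls
docker volume inspect web-volume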
Using the inspect sub-command on the container with a --format string that emits JSON returns a JSON block:
  docker container inspect \
  --format '{{json .HostConfig.Mounts}}' myweb |jq

# simplified as a shell function, since an alias cannot take positional parameters
knv2c() { docker container inspect --format '{{json .HostConfig.Mounts}}' "$1"; }
knv2c myweb | jq

# or as a plain alias, with the container name appended at the end
alias k2c="docker container inspect -f '{{json .HostConfig.Mounts }}'"
k2c myweb | jq
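Note that .HostConfig.Mounts only reflects what was requested with the --mount flag; if you also want to see volumes declared by the image itself (anonymous volumes), the top-level .Mounts field lists everything the container actually has mounted:
docker container inspect --format '{{json .Mounts}}' myweb | jq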

Wednesday, April 9, 2025

Ansible Automations Designing & Implementation | Best Practices | Tricks and Tips

Hey DevOps and DevSecOps engineers and SRE newbies, here I am going to share the learnings from my day-to-day work: the best tips I have found to improve the design and performance of Ansible playbook executions, sorted and listed below.


Planning and designing automation with Ansible

  • The most common DevOps tool used for planning and design is a Confluence page
  • The design document must contain a clear "Objective" describing why you wish to automate and which area it covers
  • For tracking purposes, always create a ticket; the preferred tool is Jira
  • The design can be broken down into two levels
    • High-level design, where we detail what each task needs to cover
    • Low-level design, where we discuss each task in depth along with the possible constraints
    • Usage of global variables (in the AWX UI: extra vars, host_vars, group_vars, etc.) and a discussion of their necessity
    • AWX/Tower Job Template constructs with possible options as input to meet the overall objective; if they are not sufficient, chain them with Workflow constructs
    • Every valid option for executing the Job Template should be treated as a test case
    • References: each design document may have research requirements, which can include internal company Confluence pages or external Ansible/AWX technical knowledge articles that support the overall objective.

Playbook Tricks and tips

  1. Playbook directory layout: always keep the playbooks in the project directory, preferably at the top level, because Ansible/AWX Tower can then read variables from parallel directories such as group_vars, vars, etc.
  2. Playbook description: first things first, always add a description comment that explains how the playbook works. Ensure that another SRE/DevOps engineer is able to run it without asking you how.
    • In the description also include the extra variables required to execute the playbook on the happy path, and make clear which variables are mandatory and which are optional.
    • While documenting the extra variables, first list all mandatory variables and then the optional ones.
    • It is better to include the tags used in the playbook and their purpose, so that the end user can easily select tasks for execution or skip tasks under certain conditions.
    • Add a comment before every newly introduced task, highlighting in detail what the task will do.
  3. Writing your tasks: always name the task, and the name should give a brief understanding of what the task does.
    • Task names should be title-cased.
    • If you copy a task from another playbook, re-check that the name is still appropriate and not a duplicate.
    • If a task contains critical logic, add a comment on top of the task describing its purpose.
  4. Manage facts: Ansible's gather_facts directive implies time-consuming operations, so it is good practice to disable it when it is not specifically needed in your play (see the sketch after this list).
  5. Don'ts during execution: avoid the LIMIT option as much as possible; some plays may be skipped if you use LIMIT on AWX templates. A better option is to drive the hosts: value from a defined variable such as targets, with a default of 'all' or 'localhost' as your play needs. When you use hostvars in a play, a limit value can cause issues collecting facts from hostvars.
  6. I also recommend explicitly setting "any_errors_fatal: true" in all plays where we expect to catch errors during execution.
  7. If you have already defined a play with any_errors_fatal: false, then DON'T add ignore_errors to the same play.
  8. When you build email notification logic in a play for the success flow, always ensure there is a failure email as well. You can limit the target audience in case of failure.
  9. Encrypt sensitive data inside a playbook with the ansible-vault command.
  10. Always double-check your inventory with the ansible-inventory command using the --list and --graph options.
  11. In production, if all possible use cases were already tested in non-production, it is better to reduce the verbosity level or suppress the logs. This can be done at the task level by adding `no_log: "{{ log_suppression | default(true) }}"` to the task definition. Here log_suppression is an Ansible variable that can be changed at execution time; its value can be either 'true' or 'false'.
  12. When validating numbers in a when condition, use the int filter and compare numeric values ( x | int == 0 ), not character values ( x == '0' ).
  13. While working with the lineinfile module, if the same playbook is triggered from multiple AWX consoles there can be a race condition; it can be fixed with the throttle option. Set Ansible's "throttle" keyword to 1 on the task where lineinfile runs: "The throttle keyword limits the number of workers for a particular task. It can be set at the block and task level. Use throttle to restrict tasks that may be CPU-intensive or interact with a rate-limiting API." Example:
    - name: Updating hostname and uptime in days in file CSV
      lineinfile:
        path: /tmp/uptime-report.csv
        line: "{{ inventory_hostname }},{{ box_uptime }}"
      throttle: 1
      delegate_to: localhost
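Here is a minimal play sketch pulling tips 4, 5, 6, 11 and 12 together; the variable names targets, retry_count and log_suppression are only illustrative:

- name: Example play applying the tips above
  hosts: "{{ targets | default('all') }}"              # tip 5: hosts driven by a variable with a safe default
  gather_facts: false                                  # tip 4: skip fact gathering when it is not needed
  any_errors_fatal: true                               # tip 6: stop on the first error
  vars:
    retry_count: 0                                     # illustrative variable for the numeric check below
  tasks:
    - name: Capture Uptime
      command: uptime
      register: box_uptime
      changed_when: false
      when: retry_count | int == 0                     # tip 12: numeric comparison via the int filter
      no_log: "{{ log_suppression | default(true) }}"  # tip 11: suppress output unless overridden

At run time the behaviour can be flipped with extra vars, for example: ansible-playbook site.yml -e "targets=webservers log_suppression=false" (site.yml is a placeholder playbook name).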

RedHat Tower or AWX Admin Tricks

  • When you install AWX, always prefer the latest-minus-one version; it tends to be more stable and consistent during the installation process.
  • If you are using existing AWX/Ansible job templates to test the various automation requirements/enhancements that we work on:
  • We usually need to alter the Job Templates temporarily to perform testing (i.e. we change the Projects, and we may also add some Limits or Skip tags, or use some extra variables like 'block_reboot' or 'log_suppression').
  • AWX Smart Inventory creation is simple to use; the only thing you need to know is how to write the regex (regular expression) host filter that matches against the existing host list, and the new inventory is then built from those existing inventory hosts. To get all hosts:
    name.regex:.*
    To get hosts that start with ec2:
    name.regex:^ec2
    To get hosts that contain 'prod':
    name.regex:prod
  • Problem: there is a good chance that we forget to undo some or all of these temporary changes, resulting in broken jobs or incomplete runs.
  • Solution (best practice): always create a copy of the job you want to use for testing, alter that copy as needed, and delete the temporary copy once testing is finished.
  • Making a copy of a job is trivial and ensures we are not breaking anything in the existing system.
  • The latest AWX versions have some odd behavior: AWX 17.1.0 allows inventory creation from source control only when the inventory YAML file has the executable permission bit set.
    ## To set the executable bit in git
    git ls-files --stage                                  # shows the current file mode
    git update-index --chmod=+x 'name-of-inventory-file'
    git commit -m "made a file executable"
    git push
        
For AWX Workflow flow control, it is better to use "On Success" links, because we usually want to stop the remaining actions (in my scenario, reboots) when a failure is encountered; otherwise the failure can affect multiple or all remaining servers.


