Tuesday, December 28, 2021

Jenkins integration with GitHub and build with Maven

The main objective of this Git integration with Jenkins post is to show how GitHub connects with Jenkins, how the Maven build tool works, and how the generated Java artifact (.war file) becomes ready for deploying the application.

This post is a 2-minute read.

Jenkins integration with GitHub code repo and build with Maven


Prerequisites:

You should have signed up for either GitHub or Bitbucket.

GitHub repo url: https://github.com/BhavaniShekhar/my-app

Global Tool configuration

The following tools must be installed on the Jenkins master (here I've used a CentOS box). While configuring each of these we need to provide its installed location.

  1. Java - defined name such as LocalJDK8, or JDK8/JDK11/JDK18
  2. Maven - defined name can be LocalMaven or maven3
  3. Git - defined name such as LocalGit, or the default

How to configure JDK as Global tool in Jenkins?

Navigate to the Jenkins Dashboard, select Manage Jenkins, and from the options select Global Tool Configuration. On the Global Tool Configuration page, go to the JDK section, where we have two choices: use an existing JDK by providing the JAVA_HOME path, or install a JDK automatically by selecting the JDK version your project needs. On CentOS you can locate the existing installation with:

update-alternatives --display java

LocalJDK8 configuration
You can also install the latest JDK as per your project requirements and use that installed path.
Either OpenJDK or Oracle JDK with the desired version can be used.

How to Install and Configure Maven as a tool in Jenkins?

Maven must be installed on the target build server, which could be the Jenkins controller machine or a dedicated build machine.

Install Maven on CentOS
This is a simple process: we can install with yum (or dnf on the latest releases). Before that, switch to the root user and run the following commands:

java -version # to confirm Java installed
yum install -y maven
mvn --version # To confirm that Maven installed successfully
Note that the JDK is a prerequisite for the Maven installation; that is, you must have JAVA_HOME defined.
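As a minimal sketch of defining it system-wide (the exact JDK path is an assumption; confirm yours with the update-alternatives command shown earlier):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # hypothetical path, adjust to your install
export PATH=$JAVA_HOME/bin:$PATH                   # make the JDK tools available on PATH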
Configuring LocalMaven


The latest Maven version can be used here.

How to configure Git as global tool on Jenkins?
Git is installed by default on most Linux VMs. On free cloud instances, however, it is often missing, so we need to install it with the package manager utility appropriate to the operating system.
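For example, on the common distributions:
yum install -y git       # RHEL/CentOS
apt-get install -y git   # Debian/Ubuntu
git --version            # confirm the installation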

LocalGit configuration on Jenkins Global Tool Configuration

Once all prerequisites are installed and configured, we are ready to integrate Jenkins with the GitHub project and use Maven to build and deliver the artifacts.

How to Setup a Jenkins Maven Project?

Step 1: Configure a new item as a Freestyle project

Let's create a Freestyle project with the following settings:

Name: github-integration

In the General section enter

  • Description: This is the first Java project; it will be built with Maven.
  • Discard old builds checked
  • Max # of builds to keep: 1


Step 2: Source Code Management section select Git 

  • a. Repository URL: enter the URL used for git clone over the HTTPS protocol.
  • b. Credentials: GitHub projects are mostly free, public repos, so there is no need to create credentials; go with the 'none' option. If it's an organization project it will be a private repo, and you need to create credentials.
  • Branch: specify the branch name; master (or main) will be used by default. For testing purposes change it to a feature branch or one matching the environment (dev, QA, prod, etc.).

Jenkins integration with GitHub - Source Control Management tab setup

When you work on a real-time project you may need to work on a test/feature branch instead of the master branch.

Step 3: Delete workspace before build

In the 'Build Environment' section, select the 'Delete workspace before build starts' checkbox. There are more advanced options available, but for now we can go with the defaults.

Jenkins Build Environment - delete workspace before build starts


Step 4: Build using Invoke top-level Maven targets

In the 'Build' section, add build step -> invoke top-level Maven targets.

  • a. Maven version: LocalMaven
  • b. Goals: test install or clean package
  • c. POM: if the pom.xml is in the root directory there is nothing to mention; if it is in some other location, specify it, for example: maven-samples/single-module/pom.xml
  • d. Now save the project; it is all set to run.

Jenkins Build - invoke top-level Maven targets

All configurations are complete. Go to the top of the Jenkins menu, trigger "Build Now", and observe the console output.



* If the build executed on the Jenkins controller, you can see the package created in the workspace directory:
 /var/lib/jenkins/workspace/github-integration/target/myweb-0.0.1.war

* If the build executed remotely, you can see the workspace location followed by the target SNAPSHOT file location.


Friday, December 24, 2021

Jenkins Active choices parameter - Dynamic input

Hello DevOps team!! Today I've revisited an experiment with the Jenkins Active Choices parameter to get a dynamic effect on build job parameters.

Installing the Active Choices parameter - Groovy script


Prerequisite:

Jenkins installed, up and running on your target master machine, with the Jenkins URL accessible.

Step 1: Install Active Choice plugin


On the Jenkins Dashboard, select Manage Jenkins, then Plugin Manager. In the Available tab search for the word 'Active'; you will see the Active Choices plugin. Choose the installation option, which enables three different parameters in the "Add Parameter" list. They are:
1. Active Choices Parameter
2. Active Choices Reactive Parameter
3. Active Choices Reactive Reference Parameter

In my example I will use two of them. First, an Active Choices Parameter for "environment".
Create a new item with the name active_project, select Freestyle project, and click the OK button.
In the General tab, select the checkbox for 'This project is parameterized'.

Step 2: Add Parameter - Active Choice Parameter

Enter the following as per your project needs. Of course, we are starting with Groovy code snippets which don't require any expert-level coding.

Name: environment 
Script: Groovy Script

Add the following groovy code:
return [
  'Live',
  'QA',
  'Dev'
]
Screenshot:

Adding an Active Choices parameter with a Groovy list


Step 3: Add Parameter - Active Choice Reactive Parameter


Let's define the "Active Choice Reactive Parameter" as sub_env, Here 'sub_env' parameter is depends on the 'environment' parameter which is defined in the previous step.
 
Name: sub_env
Script: Select Groovy Script:
Add the following groovy code:
if (environment.equals("Live")){
	return ["Prod","DR"]
}
else if (environment.equals("QA")){
	return ["FT","UAT","Stage"]
}
else if (environment.equals("Dev")){
	return ["Dev-Feature","Dev-Release"]
}
else{
	return ["Please select environment"]
}
In the same parameter block, choose "Single Select" from the Choice Type dropdown.
Groovy fallback code:
return ["Select proper environment"]
Now enter environment as the value for the 'Referenced parameters' field.

Adding Active choice reactive parameter

 

Step 4: Add Parameter list - Active Choice Reactive Reference parameter


Select Active Choices Reactive Reference Parameter and enter the following values:
Name: datacenter 
Script: Select Groovy script:
Add the following groovy code:
if(sub_env.equals("Prod")){
	return ["Prod environment at HYD region"]
	}
else if(sub_env.equals("DR")){
	return["DR environment at GG region"]
	}
else if(sub_env.equals("FT")){
	return ["FT environment at HYD region"]
	}
else if(sub_env.equals("UAT")){
	return["UAT environment at GG region"]
	}
else if(sub_env.equals("Stage")){
	return["Stage environment at GG region"]
	}
else if(sub_env.equals("Dev-Feature")){
	return["Dev-Feature environment at GG region"]
	}
else if(sub_env.equals("Dev-Release")){
	return["Dev-Release environment at GG region"]
	}	
else{
	return ["dont miss sub-env"]
}	
Groovy fallback code: return ["Select proper sub-env"]
Reference parameter: sub_env
Finally save the configuration of the project.
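To confirm the chained parameters end to end, you can echo the selected values from a build step, since Jenkins exposes build parameters as environment variables. A minimal sketch for an 'Execute shell' build step (the echo text is illustrative):
echo "Deploying to $environment / $sub_env"
echo "Target: $datacenter"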

Final step
Verify the saved configuration in the Jenkins UI by clicking "Build with Parameters".


Document reference: https://plugins.jenkins.io/uno-choice/

Wednesday, December 22, 2021

Ansible variables, Lists, Dictionaries

There are many boring tasks in your daily job which can be automated easily if you know a tool like Ansible. Let's explore how to use variables in playbooks.

In this post we will be covering :

  1. Basic datatypes
  2. List variables and using them
  3. Dictionary variable and accessing them

Variables and Datatypes in Ansible

In Ansible, variables can be defined globally for a play or locally at the task level. They support all the Python-supported datatypes.
---
# Filename: variables_datatypes.yml
 - name: variables in ansible
   hosts: localhost
   gather_facts: false
   vars:
     a: "Vybhava Technologies"
     b: yes
     n: 100
     m: 500.99
   tasks:
     - debug:
         msg:
           - "a= {{ a }} a type: {{ a |type_debug }}"
           - "b= {{ b }} b type: {{ b |type_debug }}"
           - "n= {{ n }} n type: {{ n |type_debug }}"
           - "m= {{ m }} m type: {{ m |type_debug }}"
The execution output is :
ansible-playbook variables_datatypes.yml
 
Screenshot


Ansible Lists

The Ansible list object is similar to the Python list. A list variable can be assigned on a single line, or it can be represented in column form where each element starts with "-". Here I've experimented with both options.
# File: hello.yml
 - name: List variables from ansible playbook
   hosts: localhost
   gather_facts: no
   vars:
     mylearning_list: ['Linux','git','Jenkins','Docker','Kubernetes','Ansible']
   tasks:
     - name: printing list
       debug:
         msg:
         - "mylearning_list:"
         - "{{  mylearning_list  }}"

     - name: Concatenate a list to string
       set_fact:
         my_string: "{{ mylearning_list | join(',') }}"
     - name: Print the String
       debug:
         msg: "{{ my_string }}"

     - name: printing list element
       debug:
         msg: "mylearning_list: {{  mylearning_list[1] }}"
     - name: printing list range of elements
       debug:
         msg:
         - "mylearning_list[3:5]:"
         - "{{  myle
Ansible list of elements usage
Ansible list example 02

 - name: List variables example 02
   hosts: localhost
   gather_facts: no
   vars:
     devops_team:
       - srinu
       - rajshekhar
       - arun
       - charan
       - suresh
       - elavarsi

   tasks:
   - name: Display all elements of the list
     debug:
       msg: "{{ devops_team }}"

   - name: Display one element of the list
     debug:
       msg: "{{ devops_team[3] }}"

   - name: Display a range of elements from the list
     debug:
       msg: "{{ devops_team[3:6] }}"

Ansible Dictionaries


Python dictionaries can be used in Ansible plays. The inline representation is within {} when we have a few key:value pairs.
Each data item is stored with a key and a value.

We can define a dictionary variable in two forms: 1. single line
osfam_web: {"el": "httpd", "ubuntu": "apache2"}

2. multiline form
osfam_web:
  el: httpd 
  ubuntu: apache2
Example Execution
[ansible@master qa]$ cat mydict.yml
---
# Filename: mydict.yml
 - name: Dictionaries in ansible
   hosts: localhost
   gather_facts: false
   vars:
     osfam: {"el":"httpd","ubuntu":"apache2"}
   tasks:
     - debug:
         msg:
           - "osfam.keys {{ osfam.keys() }}"
           - "osfam {{ osfam }}"
           - "osfam type {{ osfam |type_debug }}"
           - "osfam[el] {{ osfam['el'] }}"
Execution output
ansible-playbook mydict.yml
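As a related sketch (this task is an addition, not part of the playbook above), the same dictionary can be iterated with the dict2items filter, available since Ansible 2.6:
     - name: loop over the osfam dictionary
       debug:
         msg: "OS family {{ item.key }} uses {{ item.value }}"
       loop: "{{ osfam | dict2items }}"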

More variable stories on Ansible automations are shared in other posts on this blog.

Monday, December 20, 2021

Ansible packages and service modules

In this post I would like to take you through the most important Linux administration tasks, performed regularly in daily activities, that can be automated with Ansible.

How do Linux package managers work?

Every Linux operating system allows us to install software using a package manager such as yum, dnf, apt/dpkg, or apk.

Here I've explored more details about how these package managers work. RedHat-flavored Linux systems such as CentOS, SUSE, and RHEL actually use RPM as the package format, with CLI clients on top such as yum (Yellowdog Updater Modified) and, in the latest versions, the improved yum known as dnf ("Dandified Yum").

The service or systemctl commands

After installation we need to start, stop, restart, or check the status of the service using the systemctl or service command, depending on which the system provides.
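For example, the usual lifecycle commands on a systemd-based box:
systemctl start nginx    # start the service
systemctl status nginx   # check its status
systemctl enable nginx   # start automatically at boot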

Ansible package manager modules connection with front-backend utilities


 
First we will experiment with package manager usage in Ansible. We can target two simple playbooks, for which you should have inventory groups defined for the web server and database.

Prerequisite:

The inventory file content with the webserver and database groups as following
[ansible@master qa]$ cat hostqa.yml

all:
  children:
    qa:
      children:
        qawebserver:
          hosts:
            node[1:2]-vt:
        qadbserver:
          hosts:
            node3-vt:
        qalb:
          hosts:
            node4-vt:


How to install packages using the Ansible yum module?

The Ansible yum module allows us to install packages on the target hosts, where you specify the desired action using the state parameter.


---
# File: nginx_yum_installation.yml

- name: install and start nginx
  hosts: "{{ targets | default ('webserver') }}"
  become: yes
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: present
        update_cache: true

    - name: start nginx
      service:
        name: nginx
        state: started

  
The execution output of the above playbook is:
ansible-playbook nginx_yum_installation.yml
  

Ansible yum module for install nginx and start the service

How to uninstall a package using the Ansible yum module?

The following ansible playbook code will stop the service and remove the package from the target box.
  ---
# Filename: nginx_stop_yumremove.yml
- name: stop and remove nginx
  hosts: "{{ targets | default('localhost') }}"
  become: yes
  gather_facts: no
  tasks:
    - name: stop nginx server
      service:
        name: nginx
        state: stopped

    - name: remove nginx
      yum:
        name: nginx
        state: absent

  
Execution outcome
   ansible-playbook -e targets=qawebserver nginx_stop_yumremove.yml --check
   ansible-playbook -e targets=qawebserver nginx_stop_yumremove.yml 
  
Ansible yum module to remove nginx package


How to install packages using the Ansible apt module?

The Ansible apt module allows us to install packages on the target hosts, where you specify the desired action using the state parameter.


  ---
# Filename: nginx_apt_installation.yml

- name: install and start nginx
  hosts: "{{ targets | default ('loadbalancer') }}"
  become: yes
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present
        update_cache: false

    - name: start nginx
      service:
        name: nginx
        state: started
  
The execution output of the above playbook is:
ansible-playbook nginx_apt_installation.yml
  

How to install a package with the Ansible dnf module?

If you are working on CentOS 8, Oracle Linux 8, or RHEL 8, then you can use the dnf module. The web group is targeted to install the nginx web server, and the database group to install the MySQL database (see the sketch after the playbook below).
---
- hosts: webserver
  tasks:
    - name: install nginx
      dnf: name=nginx state=present update_cache=true
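A matching sketch for the database group mentioned above; the group name qadbserver comes from the earlier inventory, while the mysql-server package name is an assumption that varies by distribution:
---
- hosts: qadbserver
  tasks:
    - name: install mysql
      dnf: name=mysql-server state=present  # package name may differ per distro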
  
Package manager modules are generally executed on the target machine as the ansible user, but they require sudo access, so we need to set the become parameter to 'yes'. In ad hoc command execution we can use the -b or --become option.
ansible webserver -m yum -a "name=httpd state=latest" -b

How to check whether a package is installed?

The yum module can be used to determine if a package is available and installed on the managed node (e.g. the target VM). This module execution is similar to the `yum info` command in the CLI. Let's examine whether "nginx" is installed on the web boxes with the following playbook.
- name: List out the yum installed packages
  hosts: "{{ targets | default ('loadbalancer') }}"
  gather_facts: false
  #remote_user: root
  become: yes
  tasks:
    - name: determine if a package is installed
      yum:
        list: "{{ package }}"
      register: out

    - debug:
        msg:
          - "package: {{ package }}"
          - "yumstate: {{ out.results[0].yumstate }}"
          - "yumstate: {{ out.results[1].yumstate }}"
          - "version: {{ out.results[1].version }}"

		
Executed with the following command:
ansible-playbook -e targets=node1-vt -e package=nginx yum_list.yml
The screen will look like this:
Ansible yum module listing out about a package


To check whether httpd is already installed on a machine:
rpm -qa|grep httpd 

Important Note:

The very first name you defined starts with a hyphen; the hyphen begins a new list entry, and the task's name gives general information to the playbook reader about what the task does. The module and its attributes under that name must not start with a hyphen.
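A minimal YAML illustration of that rule:
tasks:
  - name: install httpd        # the hyphen starts a new task entry; name describes it
    yum:                       # the module and its attributes carry no hyphen
      name: httpd
      state: latest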

References (Ansible documentation):

1. Package manager - dnf

Monday, December 6, 2021

Ansible Configuration and inventory

In this post I would like to explain what I explored about Ansible configuration changes at different scopes, and the impact of different parameter customizations related to Ansible host inventories.

Working with Ansible Configuration - ansible.cfg 


The ansible.cfg file is available in the default location (ANSIBLE_HOME/ansible.cfg) when you install with yum. It is not created when you install with pip.

To get a copy of ansible.cfg, you can find an 'rpmsave' file in the default ANSIBLE_HOME location, /etc/ansible.

ANSIBLE_HOME can be changed as per requirements; we can define it in the configuration file.

Ansible inventory  

Learning about the inventory setup: the Ansible controller first looks into ansible.cfg for the defined inventory location. If no such line is present in the configuration file, the default inventory location /etc/ansible/hosts is used. If you wish to keep the configuration separate per project environment, such as dev, test/qa, stage, and prod, you can define the host list for each environment in an individual inventory file in the project.

Ansible inventory and interconnection with ansible.cfg



Ansible inventories can be created in multiple file formats, but the two common formats Ansible understands are:
  • INI
  • YAML

Ansible inventory in INI format

You can create an INI-based inventory where sections are groups, or group relations with special :modifiers. The host entries in a section form a group, and the group name should be relevant to what will run on those hosts.


How do we set up an Ansible inventory in INI format?

Simple inventory creation: we just include the host list in the example inventory file.
mkdir test-project; cd test-project; vi inventory 

node01
node02


Here is an interesting experiment: an inventory file can contain hostnames, IP addresses, or a combination of both, and it works.

Update the inventory file created above with an IPv4 address entry:

node01
192.168.1.210
node02
  

Grouping in inventory

We can group hosts that run some service or specific software. As shown below, all the VMs running the httpd service are grouped as 'web-server':

[web-server]
node01
node02
 

Sub-groups in Ansible inventory

We can make an inventory with groups of sub-groups. Below you can see 3 groups defined, web_nodes, db_nodes, and lb_node, which all become sub-groups of the hyd group. This kind of representation is a common need where we have different categories of nodes that all run under different regions or availability zones on your cloud platform.

[web_nodes]
node01
node02

[db_nodes]
192.168.1.210

[lb_node]
loadbalancer

[hyd:children]
web_nodes
db_nodes
lb_node 
The execution output is as follows:
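Assuming the file above was saved as inventory (as in the earlier mkdir test-project step), the parent group's hosts can be listed with:
ansible -i inventory --list-hosts hyd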

Default groups in Ansible inventory

Ansible also creates some built-in groups once you create an inventory; these groups are as follows:
  • all
  • ungrouped
Here is the interesting logic: every host defined in a group belongs to the 'all' group, and a host not defined in any group belongs to the default 'ungrouped' group. In our example, 'mailserver.hyd.in' falls into the 'ungrouped' group!
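A quick check, using the dev inventory file defined later in this post:
ansible -i dev --list-hosts ungrouped   # shows mailserver.hyd.in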

Ansible inventory in YAML format


When defining the Ansible inventory in YAML format, take care of the following:
1. The top or root of the inventory is the "all" keyword
2. Each next level is defined with the "children" keyword
3. We can define a number of groups under a common group (observe that qa is the example common group)
4. Hosts are defined under the "hosts" keyword
5. We can define a range of host names with [:] (check qawebserver)
6. Every such line should end with a colon

We can define the inventory file in YAML format as follows:

echo "
all:
  children:
    qa:
      children:
        qawebserver:
          hosts:
            node[1:2]:
        qadbserver:
          hosts:
            localhost:
            
">qa-inventory.yml

#Validate file created
cat qa-inventory.yml   
Update the ansible.cfg file with the following configuration:
    [defaults]
    inventory = ./qa-inventory.yml
To get the list of hosts from all groups using the qa-inventory.yml file created above:
ansible --list-hosts all
  
ansible-inventory --graph
ansible-inventory --list
Ansible inventory using YAML file

Here too we can apply all the host-list filters discussed above for the INI file.

Ansible inventory parameters

You can define the inventory file in INI format with aliases for the host VMs. This is similar to the Linux configuration file /etc/hosts but more readable, and we can add more ansible_* variables on a host's line for related information such as username, password, etc.
# Sample inventory with host aliases  

web1 ansible_host=web1.hyd.in
web2 ansible_host=web2.cmb.in
db1 ansible_host=db1.dli.in 
We can use the following common Ansible inventory parameters:
  • ansible_host: the IP address or DNS name of a VM
  • ansible_connection: specifies how to connect to the remote host
  • ansible_user: a dedicated user like 'ansibleuser', or else 'root' for Linux machines
  • ansible_ssh_pass: used for Linux remote machines
  • ansible_password: used for Windows remote machines

Usually the Ansible controller connects to Linux remote hosts using the SSH protocol, on port 22. When files are stored on the Ansible controller itself, we can skip connecting over SSH and use the local connection option instead; the ansible_connection inventory parameter can be set to establish a local connection instead of SSH.
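For instance, an inventory entry for the controller itself would look like this (the alias 'ctl' is illustrative):
ctl ansible_host=localhost ansible_connection=local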

In a project you may have a combination of Linux and Windows remote machines. If we want to connect to a Windows remote host, then the 'ansible_connection' parameter must be set to 'winrm'.

# Sample Inventory File with Linux, Windows VMs

# Web Servers
web1 ansible_host=node01.devopshunter.com ansible_connection=ssh ansible_user=root ansible_ssh_pass=Secre7@in
web2 ansible_host=node02.devopshunter.com ansible_connection=ssh ansible_user=root ansible_ssh_pass=Secre7@in
web3 ansible_host=node03.devopshunter.com ansible_connection=ssh ansible_user=root ansible_ssh_pass=Secre7@in

# db servers
db1 ansible_host=sqldb01.devopshunter.com ansible_connection=winrm ansible_user=administrator ansible_password=WinVM@09!
A custom inventory file can be defined per project or environment type. Generally these custom inventories are used when a single Ansible controller serves multiple projects or non-prod environments; as a best practice, they are pushed to an SCM tool like Git/Bitbucket.
Let's explore the inventory access experiments for a development environment, with a dedicated dev directory:
mkdir dev; cd dev
Create the following inventory file named dev; it is in an alternative location other than the default path:
echo "
mailserver.hyd.in

[lb]
lb01

[web]
web01
web02

[db]
db01
db02
">dev
#confirm the dev file content
cat dev

Understanding the inventory accessing filter options

To list 'all' hosts from the dev inventory file:
ansible -i dev --list-hosts all

We can display the desired group to list the hosts in each of the given group such as db or web from the above created dev inventory file.
ansible -i dev --list-hosts db
ansible -i dev --list-hosts web 
The ansible host list with different options



Having created the local inventory for the dev project, we create the ansible.cfg file as:
echo "
[defaults]
inventory = ./dev 
">ansible.cfg
#validate
cat ansible.cfg
Now we can run the commands without the -i flag. That is:
ansible --list-hosts db 
There is also the option to use wildcard patterns: "*" is the same as "all".
ansible --list-hosts "*"
ansible --list-hosts "web0*"
To list hosts from multiple groups, select them with colon separation as shown here.
ansible --list-hosts web:db
Index a host out of a group using square brackets [] with a number after the group name:
ansible --list-hosts web[1]
We can also un-select hosts using the exclusion indicator, the "!" symbol, before a host or group name:
ansible --list-hosts \!web #except web servers
ansible list of hosts with different options as input



FAQ on Ansible Inventory files

1. Can I pass multiple Ansible inventories to run a playbook? Yes, it is possible to run a playbook with multiple inventories.
ansible-playbook get_logs.yml -i dev -i qa

2. Is it possible to have a host in multiple groups? Yes, this use case is possible; a host can be present in the dbservers group as well as in webservers.

Saturday, November 20, 2021

Comparing file changes with git diff command

Hello guys, in this post we will explore git diff command options: when to use which option, etc. The basic Git workflow initially starts in the work area. When you add files, they move to the stage area. Once you feel everything is good, we commit those changes and they go to the local repository area. In simple words: work area -> stage area -> repo area.
 
The pictorial representation of my understanding about the git workflow:

Workflow for git diff command execution


 

How to compare files in git to get the latest changes?

Git has a nice feature to compare a file: we can track what has been changed, and where, with the inputs we pass as options to the 'git diff' command.
Syntax: git diff [code-file]
Examples: You can compare and see all the files which are changed with the last commit.
git diff
You can compare a specific file; for example, consider the index.html file that was recently modified:
git diff index.html
Hint: git diff compares the work area with the stage area, so changes already staged with git add will not show here (use --staged for those).


 

How can I compare a work-area file with the local repo?

We can compare a file that has changes against the local repo by running git diff with the HEAD pointer:
Syntax: git diff HEAD [codefile]
Examples: When we use the HEAD pointer, git shows the changes by comparing the work-area file(s) with the file(s) in the local repository.
git diff HEAD
For a specific file, comparing the work-area file with the local repository:
 
git diff HEAD ./mist.yml
Hint: This will compare after git commit

Comparing of work area and local repo
git diff with HEAD

 

Compare between stage area and repo area

git diff with the --staged or --cached option compares a staged file with the last committed file.
 
Syntax: 
git diff [--staged] [code.file]
git diff [--cached] [code.file]
Here is the execution of the --staged flag, which displays the changes made between the stage area and the repo area. Examples:
 
git diff --staged
One more example when we choose a specific file:
 
git diff --staged myweb.html
Hint: this compares what was staged with git add against the last git commit.

Compare between stage area and repo area
git diff with staged flag

How to get the comparison between two commits?

We can compare a file at two different commit levels, where we provide two different commit IDs as input. Comparing two different commits:
 
Syntax: git diff [commit-SHA-ID] [commit-SHA-ID] [code-file]
Examples: Here I'm using the short form of the commit ID, taking the first few characters of the SHA:
 
git diff e220bb005 e5aa90d79
Now focusing on my specific file, here mist.yml, that changed between the given two commit IDs:
 
git diff e220bb005 e5aa90d79 mist.yml 
Hint: after multiple git commits, find the commit IDs with git log --pretty=oneline.

The git diff command with commit IDs
git diff with commit IDs

 

How to find the list of files that changed between last two commits?

Here we want only the names of the files modified between the last two commits, which can be referenced with HEAD and, for the last-but-one, HEAD^.
 
  git diff --name-only HEAD HEAD^
  

Can I compare two branches?

Yes, it is possible. Moving beyond a single branch, we can compare two branches together; this is generally done when a merge request comes in and you need to observe the changes.
 
Syntax: git diff branch1..branch2 [codefile]
Examples: Here I've created two branches with different levels of changes in each. First, let's compare the master branch with the vt-dev development branch.
 
git diff vt-dev..master
It shows many changes in the output. Now let's filter to a specific file by providing the file name.
 
 git diff vt-dev vt-test mist.yml

The git diff with branches
And for a specific file:
git diff option branches on a file



 

Git diff Output Simplified with flags

Simplifying the output format with options: --stat is a special option which tells us which files changed and how many changes were made to each file, with the count shown as plus symbols. The flag options we have here are:
  1. --stat
  2. --name-only
  3. --name-status
 
 
git diff --stat master...vt-dev mist.yml

--name-only is helpful to filter down to just the file names when those are your main concern:
 
 git diff --name-only vt-dev...feature
 
We can also get each file's status, whether it was Modified (M), Added (A), or Deleted (D), with the first letter as the indicator, for example M for Modified:
 
git diff  --name-status vt-dev...feature

The git diff command with special flags

Saturday, November 6, 2021

HEALTHCHECK Instructions in Dockerfile

Hello guys, in this post I wish to explore the interesting HEALTHCHECK instruction, which can be used in a Dockerfile.

A common requirement in real-time projects is monitoring using a sidecar container which runs in parallel and checks the process, an application URL, or container reachability using the ping command, etc.

Healthcheck Docker container monitoring
Dockerfile instruction HEALTHCHECK


In a Dockerfile we can have a HEALTHCHECK instruction that lets us know whether an application runs as expected. When a container runs, it reports a status of healthy or unhealthy based on the HEALTHCHECK command's exit code: exit code 0 returns healthy, otherwise unhealthy.


HEALTHCHECK [options] CMD [health check command]

Example:

HEALTHCHECK --interval=3s CMD ping -c1 172.17.0.2

Here are the HEALTHCHECK options:

  1.  --interval=DURATION (default 30s)
  2.  --timeout=DURATION (default 30s)
  3.  --start-period=DURATION (default 0s)
  4.  --retries=N (default 3)


Let's jump into experiment mode:
docker run -dt --name main-target busybox sh; docker ps 
docker inspect main-target

More specifically:

To get only the IP address of a container, use the following format option:
alias dcip="docker inspect \
 -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "
dcip main-target
Note: when you define the dcip alias, don't miss the space at the end of the line.
Guys, here is a glimpse of the 'ping' command's exit code:
ping -c1 google.com ; echo $?
ping -c1 shekhar.com ; echo $?
Observe the exit code values: if a website exists, ping returns 0; if not, non-zero.


Get the main-target container's IP address from the docker inspect command output. Now we will create a Dockerfile with the following code:


#File: Dockerfile
FROM busybox 
LABEL AUTHOR Pavan Deverakonda
HEALTHCHECK --interval=5s CMD ping -c 1 172.17.0.2


Note: You can find the IP in the previous docker inspect command output.


Let's call this image monitor:
docker build -t monitor .
docker run -dt --name monping monitor; docker ps





Observe the STATUS column for corresponding containers.
alias s="docker inspect -f '{{json .State}}' "
s monping |jq


docker inspect output filter with jq for HEALTHCHECK


USECASE 2: HEALTHCHECK for web applications

Let's see the 'curl' command usage in general

The following command returns HTML content which may be multiple lines
curl http://devopshunter.blog.com/test
Let's make this command minimal using the -f or --fail option of curl:
# Fail in silently single liner 
curl http://devopshunter.blog.com/test.txt -f
curl command with --fail or -f option


Run a container with a healthcheck command: a Linux command that checks an HTTP URI using `curl`, succeeding or failing according to the web application's availability.
docker run -dt --name health-con \
 --health-cmd "curl --fail http://localhost" busybox 
Here we have not used any HEALTHCHECK options, so it checks by running the health-cmd at a 30-second interval, with 3 retries and a 30-second timeout for each. That means after about 2 minutes the health status becomes 'unhealthy', because busybox doesn't run any web server inside the container.
We can check the health of the container itself, or of another container that is accessible, for example one sharing a network that runs a web server.
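A quick way to read just the health status is the standard inspect template:
docker inspect --format '{{.State.Health.Status}}' health-con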
#File: Dockerfile
FROM busybox 
LABEL AUTHOR Pavan Deverakonda
HEALTHCHECK --interval=5s CMD curl --fail http://localhost

Build the monitoring image that contains HEALTHCHECK instruction.
  
docker build -t moncurl .
docker images
For now we will test the same busybox-based container, expecting an unhealthy status.
docker run -dt --name moncurl-con moncurl sh
# Check the container health 
watch docker ps
  

#cleanup
docker rm -v -f health-con 
Now let's see how the interval option impacts a container's health:
docker run -dt --name health-con  --health-cmd "curl --fail http://localhost" --health-interval=3s busybox
watch docker ps 
My observation: at 0s (when the container starts) the healthcheck begins; it tests after each 3s interval and retries 3 times, which means 3 x 3s = 9s before the health status changes.

USECASE 3: HEALTHCHECK with Interval and Retries options

We can run a container to check health with the interval and retries options together:

 UNHEALTHY
docker run -dt --name health-con3 --health-cmd "curl -f http://localhost" --health-interval=3s --health-retries=1 busybox 
watch docker ps

HEALTHY
docker run -dt --name health-con3 --health-cmd "curl -f http://localhost" --health-interval=3s --health-retries=1 nginx 
watch docker ps



healthy status


Let's build a Healthcheck image
  
#File: Dockerfile
FROM nginx
LABEL AUTHOR Pavan Deverakonda
HEALTHCHECK --interval=5s --timeout=3s CMD curl --fail http://localhost || exit 1
EXPOSE 80
Now build the image
docker build -t moncurl:2 .
docker images
Create the container with that image:
docker run -dt --name health-con2 moncurl:2 sh 
Please comment and share with your friends!

Sunday, October 10, 2021

HAProxy on Docker load balance Swarm Nginx web services

What does HAProxy do?


HAProxy is a free and open-source load balancer that enables DevOps/SRE professionals to distribute TCP-based traffic across many backend servers. It also works as a Layer 7 load balancer for HTTP loads.

HAProxy runs on Docker

The goal of this post is to learn more about HAProxy and how to configure it with a set of web servers as the backend. It will accept requests over the HTTP protocol; generally it uses the default port 80 to serve real-time application requirements. HAProxy supports multiple balancing methods; in this experiment the round-robin method is used.
HAProxy on Docker traffic routing to Nginx web
Nginx web app load balance with Swarm Ingress to HA proxy

This post has two phases: first, prepare the web applications for high availability, using multiple Docker machines to form a Swarm cluster; then deploy a web application using the 'docker service' command.

Step 1: Create a Swarm cluster on 3 machines using the following commands

docker swarm init --advertise-addr=192.168.0.10

Join the nodes as per the above command's output, where you can find the docker swarm join command.

docker node ls

Deploy web service 

Step 2: Deploy a special Nginx-based web service image designed to run on a Swarm cluster

docker service create --name webapp1 --replicas=4 --publish 8090:80 fsoppelsa/swarm-nginx:latest


Now, check the service running on the multiple Swarm nodes:

docker service ps webapp1
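Before wiring up the load balancer, you can verify the service answers on the published port from any Swarm node (the ingress routing mesh publishes it cluster-wide); the IP below is one of this setup's nodes:
curl http://192.168.0.10:8090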

Load balancer Configuration

Step 3: Create a new node 5 dedicated to HAProxy load balancing:


Step 4: Create the configuration file for HAProxy to load balance webapp1 running on the Swarm nodes. In this experiment, I modified this haproxy.cfg file a couple of times to get the port binding right and to get the health stats of the backend webapp1 servers.

vi /opt/haproxy/haproxy.cfg
global
    daemon
    log 127.0.0.1 local0 debug
    maxconn 50000

defaults
  log global
  mode http
  timeout client  5000
  timeout server  5000
  timeout connect 5000

listen health
  bind :8000
  http-request return status 200
  option httpchk

listen stats 
  bind :7000
  mode http
  stats uri /

frontend main
  bind *:7780
  default_backend webapp1  
  
backend webapp1
  balance roundrobin
  mode http
  server node1 192.168.0.13:8090 
  server node2 192.168.0.12:8090 
  server node3 192.168.0.11:8090 
  server node4 192.168.0.10:8090    

  

Save the file and we are good to proceed.


Running HAProxy on Docker 

The following docker run command runs in detached mode, publishes ports as per the HAProxy configuration, and uses a Docker volume to make the configuration file on the host machine accessible to the container in read-only mode:

docker run -d --name my-haproxy \
    -p 8880:7780 -p 7000:7000 \
    -v /opt/haproxy:/usr/local/etc/haproxy:ro \
    haproxy


You can see webapp1 responding when you hit the published port in the browser on the HAProxy node (8880 on the host, mapped to HAProxy's 7780 bind port); requests are routed to the backend web applications. You can also view the HAProxy stats on port 7000.

Troubleshooting hints

1. Investigate what is in the haproxy container logs.

docker logs my-haproxy


alias drm='docker rm -v -f $(docker ps -aq)'
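2. Validate the configuration file syntax before restarting; a sketch reusing the same volume mount (haproxy's -c flag checks the configuration):

docker run --rm -v /opt/haproxy:/usr/local/etc/haproxy:ro haproxy \
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg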

There are many changes in HAProxy from 1.7 to the latest version, which I encountered and resolved with the following:

  • The 'mode health' doesn't exist anymore. Please use 'http-request return status 200' instead.
  • please use the 'bind' keyword for listening addresses.

Sunday, September 26, 2021

Step by step installation of Ansible Tower opensource version AWX

Hello guys, in this post I would like to experiment with the installation of AWX on a new variety of Linux: Alpine Linux.

Prerequisites

  • Here I will go with Alpine Linux, which is the default operating system on Play with Docker
  • Alternatively, you must have at least 3 boxes on either Vagrant or any cloud instances (AWS): 1 for the Ansible engine and the remaining 2 as remote nodes
  • AWX is a GUI/web tool which is currently broken down into AWX-operator and AWX Task

AWX installations up to version 18 used Docker-based environments, with the following images from Docker Hub:

  1. PostgreSQL
  2. RabbitMQ
  3. Memcached

Steps to install AWX

1. Update the repo on Alpine Linux

apk update

2. Install with the apk package manager; the 'add' subcommand does the installation.

apk add ansible

This installs Ansible on Alpine Linux.

Installing Ansible on Alpine Linux
Ansible installation on Play with Docker (PWD) instance


     

3. Validate the Ansible installation using the version option.
ansible --version

This confirms that Ansible is installed on Alpine Linux.

4. Get the latest, or a specific, version of AWX hosted on GitHub:

  git clone -b 17.1.0 https://github.com/ansible/awx.git

This clones the AWX repository.
git clone awx 17.1.0

Step 5: Generate a secret_key using the openssl command.
openssl rand -base64 30

Step 6: The configuration values for AWX are as follows:

admin_password=p@ssw0rd123
secret_key=5r9L1GqfR/GjJh5+JyBBQLd5bv/7B8/BsNF3z0N6
pg_database=postgres
pg_password=p@ssw0rd123
awx_alternate_dns_servers="8.8.8.8,8.8.4.4"
postgres_data_dir="/var/lib/awx/pgdocker"
docker_compose_dir="/var/lib/awx/awxcompose"
project_data_dir="/var/lib/awx/projects"
    
Step 7: Now run the installer playbook:

ansible-playbook -i ~/awx/installer/inventory ~/awx/installer/install.yml -v

Execution might take several minutes; at the end you can see the following:

Installing AWX using the Ansible playbook completed!!!

Step 8: Log in to the Ansible AWX console with the following:
AWX username: admin
Ansible AWX password: {someSTR0NGpassword}

AWX Login page

The AWX Dashboard showing the Hosts, Inventory, and Jobs looks like this:

Step 9: Select Templates in the left pane; you will see the Demo Job Template. Launch the template using the rocket icon opposite it.

AWX Job Template

Step 10: When the AWX job has executed, its output window shows the msg: Hello World!

That is all for now. In this post you can check links to other Ansible-related posts on the right side of this blog!

HAPPY AWX automation!!

Reference

https://github.com/ansible/awx/releases

Saturday, September 11, 2021

Ansible the lineinfile, blockinfile and replace modules

Hello !!

This post explores the "lineinfile" vs 'blockinfile' and "replace" modules. Both replace and lineinfile use the path parameter to mark the file to modify. The lineinfile module is used to ensure a particular line is in a file, or to replace an existing line using a back-referenced (backref) regular expression (regex).

Use the replace module if you want to change multiple, similar lines.

The "dest" parameter is used by modules that create new files, like the template or copy modules. Just replace dest with path and both of the provided examples should work as expected.

Adding lines in the file

Adding a line in a file; if the file does not exist, it will be created.

---
# Filename: adding_line.yml
 - name: testing LineInFile module
   hosts: localhost
   tasks:
   - name: Add a line to a file if the file does not exist
     lineinfile:
       path: /tmp/hosts
       line: 192.168.1.99 ansiblectl.devopshunter.com ctl
       create: yes

The execution of the above playbook gives us the update: the /tmp/hosts file is created with the line value. Check the content of the file.

Screenshot 1
Create the file if it does not exist and add a line to it
Adding a line in a file, creating it if absent


Insert before

A line can be inserted before a matched pattern:
---
 - name: testing inline
   hosts: localhost
   gather_facts: no
   tasks:
   - name: insert before
     lineinfile:
       dest: /tmp/hosts
       line: 192.168.0.19 bhagathsing.devops.com bhagathsing
       insertbefore: (192.168.0.18*)

Execution gives the following:

The lineinfile module using insertbefore
Ansible lineinfile module using the insertbefore parameter


Insert After

Insert after the pattern line:
---
 - name: testing inline
   hosts: localhost
   gather_facts: no
   tasks:
   - name: insert after
     lineinfile:
       dest: /tmp/hosts
       line: 192.168.0.18 freedome.devops.com freedome
       insertafter: (192.*)

Insert after a pattern with lineinfile
lineinfile module with the 'insertafter' parameter


Removing lines from a file: httpd.conf

---
 - name: testing inline
   hosts: localhost
   gather_facts: no
   tasks:
   - name: remove commented lines
     lineinfile:
      dest: /tmp/sample
      regex: "(^#)"
      state: absent
      backup: yes
   - name: remove tabbed commented lines
     lineinfile:
      dest: /tmp/sample
      regex: "(#)"
      state: absent
   - name: remove blank lines
     lineinfile:
      dest: /tmp/sample
      regex: "(^\n)"
      state: absent

The execution removes the matching lines from the httpd.conf file.
Remove lines per a regex pattern in a file
The lineinfile module used to remove comment lines from the httpd.conf file

Replace module

This module helps you find and replace text in files on the remote/target server.

Step 1: Create a test file, /tmp/mytest.txt, containing the text to be replaced (here, 'ofmw').
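A one-liner sketch to create that file (the sentence content is illustrative):
echo "We manage the ofmw stack" > /tmp/mytest.txt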
    
Step 2: Create a playbook named test-replace.yml with the following content:

---
# Ansible replace module example
 - name: Ansible replace module
   gather_facts: no
   hosts: localhost
   become: yes
   tasks:
     - name: mytest.txt replace
       replace:
         path: /tmp/mytest.txt
         regexp: "ofmw"
         replace: "Oracle Fusion Middleware"

Assume that you have an inventory file with a 'webserver' group containing node1 and node2.
Step 3: The execution output is as follows:
ansible-playbook test-replace.yml
    
Execution screenshots:

'blockinfile' module

If you have multiple lines that need to be inserted into a file on a remote box, use the Ansible blockinfile module. It works similarly to the lineinfile module, but multiple lines can be processed.
---
# Filename: file-blocking.yaml
# targets [optional] if you pass extra vars ok, otherwise localhost
- name: Creating File with blockinfile
  gather_facts: no
  hosts: "{{ targets | default('localhost') }}"

  tasks:
  - name: Create new file
    file:
      path: /tmp/ansible-slogun.txt
      state: touch

  - name: Block of text adds to file
    blockinfile:
      path: /tmp/ansible-slogun.txt
      block: Ansible has a large collection of inbuilt modules to manage various cloud resources. The book begins with the concepts needed to safeguard your credentials and explain how you interact with cloud providers to manage resources. Each chapter begins with an introduction and prerequisites to use the right modules to manage a given cloud provider. Learn about Amazon Web Services, Google Cloud, Microsoft Azure, and other providers.

The playbook can be run with no extra vars, in which case localhost is the target host. It is executed as follows:
ansible-playbook file-blocking.yaml


