Sunday, November 27, 2022

Ansible real-time project - Installing and configuring Tomcat 10

 Hey DevOps or DevSecOps or SRE Guys!!

What's up? Here is one more wonderful learning on the automation journey! 


In this post we will combine the Ansible modules we have learned, one after another, to build a complete solution: installing a Java-based application server and running it with a single Ansible playbook.

At the time of writing, the latest Tomcat release is 10.0.27, so that is the version used here.

Pre-requisites: 

  • Apache Tomcat has specific JDK/JRE compatibility requirements that we need to validate before we proceed (see the quick check after this list)
  • Create a dedicated user account 'tomcat' with bash as its shell to manage the Tomcat application server
  • Create a separate directory where the Tomcat server will be installed
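Tomcat 10.0.x runs on Java 8 or later, which is why the playbook below installs java-1.8.0-openjdk. A quick ad-hoc check of the Java version already present on the managed nodes (assuming an inventory group named webservers) could be:
ansible webservers -m command -a 'java -version'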
Execution of multiple tasks in the Playbook will be as follows:
  • Download the Tomcat software (apache-tomcat-10.0.27.tar.gz)
  • Uncompress the tar.gz file
  • Change the file permissions and ownership
  • Clean up the tar.gz file after unarchiving
  • Start the Tomcat server
  • Have a task to stop the Tomcat server

---
- name: Installation and setup Apache Tomcat 10
  hosts: "{{ targets | default('localhost') }}"
  become: yes
  gather_facts: no
  tasks:
  
    - name: Install openjdk
      yum:
        name: java-1.8.0-openjdk
        state: present
      tags: install_openjdk

    - name: Create user as tomcat
      user:
        name: tomcat
        state: present
        home: /home/tomcat
        shell: /bin/bash
      tags: pre-install

    - name: Create a directory - /opt/tomcat
      file:
        path: /opt/tomcat
        state: directory
        mode: 0755
        owner: tomcat
        group: tomcat
      tags: pre-install

    - name: Download tomcat - apache-tomcat-10.0.27.tar.gz
      get_url:
        url:  https://dlcdn.apache.org/tomcat/tomcat-10/v10.0.27/bin/apache-tomcat-10.0.27.tar.gz
        dest: /opt/tomcat
      tags: pre-install 

    - name: Extract tomcat inside directory and set ownership, permissions
      unarchive:
        src: /opt/tomcat/apache-tomcat-10.0.27.tar.gz
        dest: /opt/tomcat
        extra_opts: [--strip-components=1]
        remote_src: yes
        owner: tomcat
        group: tomcat
        mode: "u+rwx,g+rx,o=rx"
      tags: tomcat-install

    - name: Remove gz file apache-tomcat-10.0.27.tar.gz
      file:
        path: /opt/tomcat/apache-tomcat-10.0.27.tar.gz
        state: absent
      tags: post-install

    - name: Start the Tomcat server
      become_user: tomcat
      shell:
        cmd: nohup ./startup.sh
        chdir: /opt/tomcat/bin
      tags: start-tomcat

    - name: Stop the Tomcat server
      become_user: tomcat
      shell:
        cmd: nohup ./shutdown.sh
        chdir: /opt/tomcat/bin
      tags: stop-tomcat, never
      
You can run the playbook in multiple ways:
1. Use targets as the webservers group to install Tomcat only on those hosts.
ansible-playbook tomcat-solution.yaml -e targets=webservers
2. Use the tags to stop the Tomcat servers.
ansible-playbook tomcat-solution.yaml -t stop-tomcat -e targets=webservers
3. Use the tags to start the Tomcat server.
ansible-playbook tomcat-solution.yaml -t start-tomcat -e targets=webservers
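Once the start task has run, you can quickly confirm Tomcat is answering on its default HTTP port 8080 (a simple check, assuming the stock server.xml from the tarball and that the port is reachable; replace <managed-node> with one of your webserver hosts):
curl -I http://<managed-node>:8080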

Finding & Terminating a process on remote VM

The play has two tasks. First, find the running process by its process name or by the command used to start it, ignoring any errors, and register the result into a variable so it can be passed to the next task.
Example - find the process that was started with java.
The second task terminates the process identified in the first task. Here we can use the 'shell' module to 'kill' the running process.
---
 - name: Find Java process and Terminate
   hosts: "{{ targets | default ('localhost') }}"
   gather_facts: false

   tasks:
     - name: Get the running Java process
       shell: "ps -ef | grep tomcat | grep -v grep | awk '{print $2}'"
       register: JavaProcessID
       ignore_errors: yes

     - name: Print Java Process
       debug:
         msg: "{{ JavaProcessID.stdout }}"

     - name: Terminating Java Process
       become: yes
       become_user: tomcat
       shell: "kill -9 {{ JavaProcessID.stdout }}"

Execute the playbook as follows:
ansible-playbook find_killtomcat.yaml -e targets=webservers
The output image: using the shell module to find and kill the Tomcat process


Friday, November 25, 2022

Undoing changes - git reset

Hello Guys!!


HEAD pointer movement

HEAD points to a specific commit in the local repository branch; as new commits are made, the pointer moves.

HEAD always points to the "tip" of the currently checked-out branch in the repo (not the working directory or the staging index).

In other words, HEAD reflects the last state of the repo (what was checked out), and it is the parent of the next commit you write.
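You can check where HEAD currently points from inside any local repository, for example:
cat .git/HEAD               (shows the symbolic ref, e.g. ref: refs/heads/main)
git rev-parse HEAD          (resolves HEAD to the commit id at the branch tip)
git log --oneline -1 HEAD   (shows that tip commit)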


HEAD Movement in Git branches

Git Reset movements

Undoing changes is one of the most common needs in every development phase. git reset gives us three options, although two of them are used most often.

Git reset movements at three tree levels


  1. soft
  2. mixed
  3. hard

Using --soft reset

A soft reset moves HEAD but leaves the staging area and working directory untouched; it is commonly used to combine many commits into a single one.


git reset --soft HEAD       (going back to HEAD)
git reset --soft HEAD^    (going back to the commit before HEAD)
git reset --soft HEAD~1     (equivalent to "^")
git reset --soft HEAD~2     (going back to 2 commits before HEAD)
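For example, to combine the last three commits into one (a sketch, assuming those three commits are local and not yet pushed):
git reset --soft HEAD~3     (moves HEAD back 3 commits; the changes stay staged)
git commit -m "Combine the last 3 commits"     (writes them as a single new commit)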

Using --hard reset

A hard reset does three things: it moves the HEAD pointer, updates the staging area with the content HEAD now points to, and updates the working directory to match the staging area.
git reset --hard HEAD       (going back to HEAD)
git reset --hard HEAD^      (going back to the commit before HEAD)
git reset --hard HEAD~1     (equivalent to "^")
git reset --hard HEAD~2     (going back to 2 commits before HEAD)
You can also back out changes to a specific commit id; use the git log and git show commands in between to identify the change you wish to reset to.
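For example (the commit id is only a placeholder; pick the real one from your own git log output):
git log --oneline           (identify the commit id you want to go back to)
git show <commit-id>        (optionally review what that commit changed)
git reset --hard <commit-id>     (HEAD, staging area and working directory all move to it)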

 
Observe the git reset with soft and hard example


Sunday, November 20, 2022

GitHub Personal Access Token (PAT) for Linux Users

Hey Greetings of the day!!

GitHub now provides Personal Access Tokens (PAT) instead of username/password authentication for repositories hosted on GitHub. Git subcommands such as git pull, git push, git fetch and any other remote operations depend on this PAT. 

There are two choices for PAT:
1. Fine-grained Personal Access Token (PAT)
2. Personal Access Token (Classic)

With the fine-grained option it is easy to change permissions and scope the token to only specific target repositories. 

If you want to know how a PAT works and where to start with it, then this post is absolutely for you! Welcome to this 2-minute read!


  • Fine-grained PATs were newly introduced on GitHub in October 2022 and, at the time of this post, are still marked as [Beta].
  • Personal Access Tokens work with the commonly defined GitHub APIs, so any integration is simplified with this method.

How to create PAT on GitHub? 

  1. Log in to your GitHub account, click on the profile picture in the upper-right corner, and select the 'Settings' menu item. 
  2. At the bottom left, under 'Developer Settings', you will find 'Personal Access Tokens'; in the work area select your choice. Each row describes the granular choices that GitHub provides. 
  3. On the right side, hit the 'Generate Token' button. It will prompt you to enter your login credentials again, and then the new personal access token parameters need to be filled in. For 'Note' (an alias for this authentication method), enter something like "vt_demo_token".
  4. Select the scope - 'repo' is enough; it contains repo:status, repo_deployment, public_repo, repo:invite, security_events.
  5. Click on the "Generate Token" button and the new page will show the token. Make sure to copy your new personal access token now; you won't be able to see it again even though you created it. As soon as you click away from this page, the value is gone.
Preserving the TOKEN on your Linux system: create an environment variable and store it inside .bash_profile.
MY_GITHUB_TOKEN=<GENERATED-TOKEN-VALUE-COPIED-FROM-GitHub-PAT-page>
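A minimal way to persist and load it, assuming a bash login shell that reads ~/.bash_profile:
echo 'export MY_GITHUB_TOKEN=<paste-the-copied-token-here>' >> ~/.bash_profile
source ~/.bash_profile
echo $MY_GITHUB_TOKEN     # confirm the variable is now set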
How do I access a remote GitHub repository?
Use the following command to set the remote repository:
git remote add origin https://github-remote-repo-url 
Use the following command to check the remote repository:
git remote -v 
Here the -v option shows the verbose form of the remote: the URLs that fetch and push operations will use. Now push the changes to the remote GitHub repository:
git push -u origin <branch-name> 
It will prompt for the Password; paste the TOKEN value that you copied earlier. You don't want to see this prompt on every push and pull activity. To avoid that, you can set the remote url with the token embedded as follows:
git remote set-url origin https://api:$MY_GITHUB_TOKEN@github.com/REMOTE-USER/REMOTE-REPO.git 
Example:
git remote set-url origin https://api: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@github.com/SanjayDevOps2022/football.git

After setting this, remote operations such as push and pull no longer prompt for user credentials.
If you find this post is useful please share this to your technical communities :)

Saturday, November 19, 2022

Ansible Tags - Controls Tasks

An Ansible playbook can be a construct of multiple plays, and each play may contain multiple tasks. There are situations where you need to add a new task to an existing play or playbook, and you need to test that newly added task many times. 

While testing repeatedly, we may not want to execute certain tasks, such as a 'Send email notification' task, when preparing a 'Reboot of server', 'Restart of service' or 'Deployment of a service'. During testing you may want to exclude these notification tasks. 

There are also situations where we want to run a particular task based on input given at the run time of a playbook; from the AWX/Tower UI you can select these as well.

Ansible tags - to control the tasks of a Playbook


In this post I will explain how to run, or skip, a particular task in a given playbook. 

Important concepts about Ansible tags

  • Ansible tags are keys to identify and control tasks, either selecting them for execution or excluding them from a playbook that has multiple tasks
  • Ansible tags are references or aliases to a task inside a play; selection can be done with the -t or --tags options on the command line 
  • Selecting tags to exclude is done with the --skip-tags option
  • A task may have multiple tags; for example, a send-email task could have the tags email_notify and notifications
  • The same tag can be associated with multiple tasks; for example, notifications can be associated with sending email and sending Slack notices as well

Prerequisites

I have an Ansible controller machine and want to install the Apache webserver on the managed nodes. I have a sample playbook for it. 

How to be specific during the execution of an Ansible playbook?

Now let me create a playbook to install and control the Apache webserver. All my managed nodes run CentOS, so I'm using the yum package module here.

The logic of the playbook below is built with two modules: yum and service.

state attribute of the yum package module

state parameter of the service module and its values

Here is the YAML using tags, which is the main objective of this post:

---
 - name: Install and control Apache Webservers 
   hosts: "{{ targets | default('localhost') }}" 
   become: yes
   tasks: 
     - name: Install httpd server
       yum:
         name: httpd
         state: present 
       tags: install_httpd, boot_web

     - name: start httpd server
       service:
         name: httpd
         state: started 
       tags: start_httpd, boot_web

     - name: stop httpd server
       service: 
         name: httpd
         state: stopped
       tags: stop_httpd, destroy_web

     - name: uninstall httpd server
       yum:
         name: httpd 
         state: absent
       tags: uninstall_httpd, destroy_web
The package state parameter values can be found in the Ansible documentation. The execution can be specific, for example starting the webserver: the Apache webserver must already be installed, and only then can the service be started. To install the webserver on two nodes:

ansible-playbook httpd-tags.yml -t install_httpd -e targets=ansible-node-1,ansible-nodes-2
To start the webserver on two remote hosts
ansible-playbook httpd-tags.yml -t start_httpd -e targets=ansible-node-1,ansible-nodes-2
Ansible tags passed in the command line with -t option


To install and start the webserver when no targets are mentioned (which means localhost, where the Ansible controller runs), use the following command:
ansible-playbook httpd-tags.yml --tags boot_web 
To stop the webserver on two remote hosts
ansible-playbook httpd-tags.yml -t stop_httpd -e targets=ansible-node-1,ansible-nodes-2
To uninstall the webserver on two remote hosts
ansible-playbook httpd-tags.yml -t uninstall_httpd -e targets=ansible-node-1,ansible-nodes-2
We can also select multiple tags when running the playbook. To stop and uninstall the webserver on two remote hosts:
ansible-playbook httpd-tags.yml -t stop_httpd,uninstall_httpd -e targets=ansible-node-1,ansible-nodes-2
Ansible tags using multiple tags


The same thing can be achieved by excluding tasks, which we express with the --skip-tags option for install_httpd and start_httpd; the remaining tasks are then executed.
ansible-playbook httpd-tags.yml --skip-tags install_httpd,start_httpd -e targets=ansible-node-1,ansible-nodes-2
We can also use a common tag instead of the two separate tags used to stop and uninstall the webserver: the single destroy_web tag.
ansible-playbook httpd-tags.yml --tags destroy_web 
Hey, is there an alternative way, without tags, to say 'execute the playbook from here onwards'?


Yes. We can also use a task name as input to the special --start-at-task option of the ansible-playbook command. The example below starts from 'stop httpd server' onwards, which means two tasks will be executed: the stop_httpd and uninstall_httpd tasks.

ansible-playbook httpd-tags.yml --start-at-task "stop httpd server"
From Ansible 2.5 onwards there are two newly introduced, important and very special tags in Ansible playbooks: always and never.

Always: If you assign the always tag to a task or play, then the Ansible controller will always run that task or play, unless you specifically skip it using --skip-tags always.
Never: If you assign the never tag to a task or play, then the Ansible controller will skip that particular task or play, unless you specifically request it with '--tags never' (or another tag on that task). It looks odd, but it works exactly as it is meant to.


 
---
- name: Understanding always, never tags
  hosts: localhost
  become: true
  tasks:
    - name: Tasks execute always 
      command: echo "this is from always tag"
      tags:
        - always

    - name: Install nginx server
      apt:
        name: "nginx"
        state: "present"
      tags:
        - install
        - never

    - name: Deploy webpage to website
      copy:
        src: index.html
        dest: /var/www/html/
      tags:
        - deploy
  
When you run this playbook, the always task will be executed every time, while the install task will never be executed unless its tag is requested. What is the command to list all tags present in a playbook?
ansible-playbook mytags.yml --list-tags
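Closely related, the --list-tasks option prints every task together with its tags without executing anything, which is handy to review before a real run:
ansible-playbook mytags.yml --list-tasks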
  
Terminology: AWX is a free, open-source project that enables us to manage Ansible from a web interface.

All the examples were executed thanks to Krishna.
Courtesy of Krishna Tatipally


Monday, November 14, 2022

Ansible Facts - Customizations

Hey DevOps Team, in this post I would like to share knowledge about the special feature of Ansible facts, and how we can also customize these facts as variables to use across multiple playbooks.

What are Ansible facts? 

Ansible facts are simply variables that are automatically discovered by Ansible on the managed nodes: system information such as disk info, OS info, package info, IP/network details and many more...

Why do we manage facts?

By default these facts are collected automatically; sometimes we need to disable this explicitly, for example when a playbook has multiple plays and gathering facts for every play is unnecessary.

How can we use facts?

Suppose we want to install a package only when enough memory is available on the target machine. That is the smartest way to do the automation!
---
# File : hello.yaml
- name: Facts example 
  hosts: "{{targets|default('localhost')}}"
  tasks:
    - name: prints details
      debug:
        msg: "Hello this machine have {{ ansible_memory_mb['real'] }}"

When you run the above playbook, it automatically does the "Gathering Facts" step. This is done by a special built-in module called the 'setup' module. What does this setup module do? Let's examine it by executing the following ad-hoc commands:
ansible node1 -m setup # To know about node1
ansible localhost -m setup # To know about Ansible controller machine
The above prints a lot of information in JSON format to your terminal; notice that it is all under "ansible_facts".

Collecting all of these facts and displaying them on stdout is a time-consuming process. We can skip it by saying we don't want it: to disable fact collection, declare 'gather_facts: false' in the playbook. Automatic collection is then skipped and the playbook executes faster.
Examples of facts are ansible_distribution, ansible_hostname and ansible_default_ipv4['address'].
---
# File : facts_ex.yaml
- name: Facts example
  hosts: nodes
  gather_facts: false
  tasks:
    - name: collect facts
      setup:

    - name: prints details
      debug:
        msg:
          - "{{ ansible_distribution }}"
          - "{{ ansible_hostname }}"
          - "{{ ansible_default_ipv4['address'] }}"

Custom Facts

Let's go to node1 and set the facts manually; of course we can also push them with Ansible file operations (a sample play for that is shown after the fact file below).
sudo mkdir -p /etc/ansible/facts.d 
cd /etc/ansible/facts.d
Let's create our facts file under the directory created above, where the Ansible controller can look for and fetch the data: `sudo vi local.fact`
[db_server]
db_installer= Oracle 21c
db_port= 1521

[bu]
project=Banking
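As mentioned earlier, the same fact file can be pushed from the controller with a small play instead of editing it by hand on each node (a sketch, assuming local.fact sits next to the playbook):
---
# File : push_localfacts.yaml
- name: Distribute custom facts file
  hosts: node1
  become: yes
  tasks:
    - name: Ensure the facts.d directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
        mode: 0755

    - name: Copy local.fact to the managed node
      copy:
        src: local.fact
        dest: /etc/ansible/facts.d/local.fact
        mode: 0644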
Now let's run the fact collection from the Ansible controller machine with the following command:
ansible node1 -m setup -a 'filter=ansible_local'
Use Case 2: Accessing the local facts from the playbook as follows:
---
# File : localfacts_ex.yaml
- name: Custom Facts example
  hosts: node1
  tasks:
    - name: prints local facts details
      debug:
        msg:
          - "{{ ansible_local }}"
Execute the above example and observe the output.
ansible-playbook localfacts_ex.yaml
Now let's try to fetch the internal elements of the custom facts, which are stored on the remote managed node 'node1' as a dictionary of dictionaries. 
---
# File : localfacts_ex1.yaml
- name: Custom Facts in details example
  hosts: node1
  tasks:
    - name: prints local facts details
      debug:
        msg:
          - "{{ ansible_facts['ansible_local']['local']['db_server'] }}"
          - "{{ ansible_facts['ansible_local']['local']['db_server']['db_installer'] }}"
        
        
Run the above playbook. Retrieving customized facts is a little complex because it requires some knowledge of accessing JSON-style data, but the example used in the playbook keeps it simple.
ansible-playbook localfacts_ex1.yaml
We can also control custom-facts usage based on availability. These facts are defined only on the managed node node1, so when we pass the hosts value as nodes, the task would fail on hosts where the facts are not defined (for example localhost). To suppress such cases we can use a 'when' conditional check as shown below:
---
# File : localfacts_ex2.yaml
- name: Custom Facts example
  hosts: nodes
  tasks:
    - name: prints local facts details
      debug:
        msg:
          - "{{ ansible_facts['ansible_local']['local']['db_server']['db_installer'] }}"
      when: ansible_local.local.db_server.db_installer is defined
The execution takes place as follows:
ansible-playbook localfacts_ex2.yaml
Keep smiling 🤣 with rocking automation ideas you are learning with me!!
