Unreachable Host: port unreachable


I do have SSH access to the destination machine, and it works, but whenever I run this playbook I get this error output:

sudo ansible-playbook test.yml

PLAY [web] *************************************************************************

TASK [Gathering Facts] *************************************************************
fatal: [machine]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive).\r\n", "unreachable": true}
        to retry, use: --limit @/ansible-play/test.retry

PLAY RECAP *************************************************************************
machine : ok=0 changed=0 unreachable=1 failed=0

Solution 1:

Check the SSH arguments. I used the command below, and it sometimes resolves the issue:

#ansible-playbook --user=brines -vvv test.yml

Solution 2:

An invalid SSH configuration can also lead to this issue. So we have to fix the SSH configuration, or copy the SSH keys to the hosts concerned.

#cd /root/.ssh 
#ssh-keygen -t rsa

Save the key under the name id_rsa.

#cat id_rsa.pub

Copy the entire public key (from the master node, located under ~/.ssh/ or /root/.ssh/) and paste it into the authorized_keys file on the target host:

#vi authorized_keys
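Alternatively, ssh-copy-id performs the copy-and-append in one step (the hostname below is a placeholder for your target host):

#ssh-copy-id brines@target-host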

Then run this to check:

#ansible all -m ping -u brines

Output should be like this:

master-node | SUCCESS => { "changed": false, "ping": "pong" }

 

How to create Incident in Service Now using ansible?

Overview

Faster delivery and improved productivity are the most important goals when automating any service, and they in turn bring improved support and stakeholder satisfaction; automating ServiceNow with Ansible satisfies both.

We can do the below operations in ServiceNow using Ansible:

  • Updating incidents, problems, and change requests
  • Updating the ServiceNow configuration management database (CMDB)
  • Using the CMDB as an inventory source

In this post we will demonstrate how to manage incidents.

First, we need to install a collection to handle the service; here we need to install the servicenow.itsm collection to manage ServiceNow through Ansible.

Install the ServiceNow collection using the below command:

$ ansible-galaxy collection install servicenow.itsm

Once the collection is installed, we have access to the below modules:

  1. servicenow.itsm.incident for managing incident tickets
  2. servicenow.itsm.problem for interacting with problems
  3. servicenow.itsm.change_request for handling changes
  4. servicenow.itsm.configuration_item for managing the CMDB
  5. servicenow.itsm.now inventory plugin, which allows us to use the CMDB as an inventory source

To display the documentation of each module, use the below command:

$ ansible-doc servicenow.itsm.incident
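To list all the modules the collection ships, the -l flag of ansible-doc can be combined with the collection name (supported on recent ansible-core versions):

$ ansible-doc -l servicenow.itsm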

Credentials and Service Now declaration:

Before managing incidents, we should tell Ansible where our ServiceNow instance is available and what credentials to use.

Create an inc_vars.yml file and define the instance and credentials as variables like below:

---
#snow_record variables
sn_username: admin
sn_password: mypassword@123
sn_instance: snow_host

#data variables
sn_severity: 2
sn_priority: 2
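
Since inc_vars.yml stores the password in plain text, one optional hardening step is to encrypt the file with Ansible Vault and supply the vault password at run time with --ask-vault-pass:

$ ansible-vault encrypt inc_vars.yml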

Now that our credential variables are ready to use, we need to create a playbook that creates a new incident.

Create inc_new.yml, add the below code, and save and exit:

---
- hosts: localhost
  gather_facts: false
  vars_files:
    - inc_vars.yml
  tasks:
    - name: create new incident
      servicenow.itsm.incident:
        instance:
          host: "{{ sn_instance }}"
          username: "{{ sn_username }}"
          password: "{{ sn_password }}"
        state: new
        short_description: demo incident
        other:
          severity: "{{ sn_severity }}"
          priority: "{{ sn_priority }}"
      register: new_incident

    - debug:
        var: new_incident.record

Note that the playbook loads inc_vars.yml through vars_files, the credentials sit under the module's instance dictionary, and sn_instance should hold the full URL of your ServiceNow instance. Extra fields such as severity and priority are passed through the other parameter.

Now run this playbook using the below command, and it will create a new incident:

#ansible-playbook inc_new.yml
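
As a follow-up sketch, the same module can update the ticket it just created, reusing the incident number returned in the registered result (parameter names as per the collection's documentation):

- name: move the new incident to in progress
  servicenow.itsm.incident:
    instance:
      host: "{{ sn_instance }}"
      username: "{{ sn_username }}"
      password: "{{ sn_password }}"
    state: in_progress
    number: "{{ new_incident.record.number }}"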

Configuration Management in puppet


We will see how configuration management works in Puppet in this post.

Let us take an example: creating a user in a complex environment with different Linux distributions. The user-creation command differs slightly across distributions such as Red Hat, Ubuntu, CentOS, etc.

We have two methods to create the user without Puppet's help.

  1. We can log in to each server directly and create the user when the number of servers is small. But when the server count goes past 100, it is very difficult to create the user manually on every machine.
  2. We can write a script to manage users on all servers. For that, we need scripting knowledge plus awareness of the command and flag differences (-u, -U) for each distribution. Once the script is created, we also need a common server that has access to all the other Linux servers.

But using Puppet we can do any type of user/group management, package installation, service start/stop/restart, etc. Puppet's built-in resources achieve the same operation across different distributions without worrying about the underlying operating system and commands.

Simple code performs the necessary configuration management, such as user/group management, package installation, and service start/stop/restart.

Example: to create a user, we write the below code, which performs the task on all the Linux machines.

# cat user.pp
user { "lbcuser1":
  ensure => "present",
}
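
To test a manifest like this on a single node, it can be applied locally; puppet apply evaluates a manifest without needing a master:

# puppet apply user.pp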

Similarly, if you want to delete a user, install a package, etc., the solution is writing simple, robust, idempotent, extendable Puppet code to apply the necessary configuration on remote servers.

In the same way, below is the code to install the ntp package, which is used for network time, and to start its service.

# cat ntp.pp
package { "ntp":
  ensure => "present",
}

service { "ntpd":
  ensure => "running",
}
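
One refinement worth noting: Puppet does not guarantee ordering between unrelated resources, so the service can declare its dependency on the package explicitly with the require metaparameter:

service { "ntpd":
  ensure  => "running",
  require => Package["ntp"],
}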

Like this, we manage the environment using Puppet code. In other words, managing an environment using code is called IaC (Infrastructure-as-Code).
This code is applied over all the client machines to perform the operation, reducing manual effort and time.

It is also very easy to change the code for any configuration-management modification across all client machines.

Idempotency:
Puppet code is idempotent by nature, which means the result remains the same irrespective of the number of times we perform a Puppet run on the nodes. Puppet always ensures the resources are kept in the desired state.
For example, in user creation it first checks whether the user already exists.
If the user already exists, Puppet skips the creation and reports that the user is already present. These checks are built into Puppet's resources.
And if you have many lines of code performing actions on remote machines, any action whose result already exists on a server is simply skipped, and Puppet proceeds with the rest of the configuration.
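
For example, applying the same manifest twice is safe; the second run simply reports that nothing needed to change:

# puppet apply user.pp
# puppet apply user.pp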

These are the main reasons why we use Puppet in our environment for configuration management.

Thanks for your support and for reading this post. The next Puppet lecture will follow in the next post.

Reference: Puppet Docs

How to patch linux servers using ansible


Ansible is an open-source automation tool, and we will see how to patch Linux servers using Ansible in this post.

We are going to use RedHat Linux 7.3 Operating System in this practical.

Requirements:
1. A Linux host installed with Ansible, and a yum repository configured and served over httpd.
2. A Linux host installed with RHEL 7.4 -> node machine.
3. Ansible is agentless (no node-side package is required), but it does need SSH between the Ansible master and the node, so make sure an SSH connection is established between master and node.

Configuring the yum repository for patching:
  1. Browse https://access.redhat.com/ and log in with valid credentials.
  2. Click Security -> Security Advisories and download the necessary packages.
  3. Copy those packages to the yum repository on the Linux host where all the existing packages are available. I downloaded a kernel update and copied it into my repository.
 
# yum list all | grep 3.10.0-1062.el7
kernel.x86_64 3.10.0-1062.el7 @yum_repo
kernel-headers.x86_64 3.10.0-1062.el7 yum_repo
kernel-devel.x86_64 3.10.0-1062.el7 yum_repo
kernel-tools.x86_64 3.10.0-1062.el7 yum_repo
kernel-tools-libs.x86_64 3.10.0-1062.el7 yum_repo

4. Run the createrepo, “yum clean all” and “yum makecache” commands to update the repository with the new RPMs, as shown below.
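
For example (the repository path below is a placeholder for your actual repo directory):

# createrepo /path/to/yum_repo
# yum clean all
# yum makecache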

Now the repository is ready for patching.

Ansible playbook for Linux patching:
  1. Log in to the Ansible host and change directory to /etc/ansible:
#cd /etc/ansible

2. Create a playbook called “patching.yml” with the below content:

# vi patching.yml
---
- name: Patch Linux system
  hosts: Linux_Servers
  become: true
  ignore_errors: yes
  tasks:
    - name: Copy the Kernel Patch Repo File
      copy:
        src: /etc/yum.repos.d/yum.repo
        dest: /etc/yum.repos.d/
    - name: Apply patches
      yum:
        name: kernel
        state: latest

3. Edit the /etc/ansible/hosts file, add the Linux hosts which need to be patched, and group them as “Linux_Servers”. This host group name is referenced in the playbook's “hosts: Linux_Servers” section.

# cat /etc/ansible/hosts
[Linux_Servers]
client.lbc.com

4. Now run the playbook from the Ansible host, with the SSH connection already established between master and client.

# ansible-playbook patching.yml
Before kernel patching:

# uname -a
Linux client.lbc.com 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

After kernel Patching:

# uname -a
Linux client.lbc.com 3.10.0-1062.el7.x86_64 #1 SMP Thu Jul 18 20:25:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
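
To check the running kernel on all the patched hosts at once, an ad-hoc command can be used:

# ansible Linux_Servers -m command -a "uname -r"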

We successfully completed the kernel patching.

About Puppet

We will see about Puppet in this post. Puppet is an open-source configuration management tool, which helps us reduce our working time by automating most day-to-day and other tasks in an IT environment.

Puppet is declarative (it uses a Puppet domain-specific language).
Puppet takes care of all our regular repetitive tasks, along with application deployment, configuration changes, etc.
Puppet is written in Ruby.
Puppet is scalable and can be used in any physical/virtual environment.
Code written in Puppet is idempotent by nature.
It easily creates, updates, and maintains OS configuration files using its own declarative methods.

We can do the below things using Puppet on our OS without any human intervention:

* Installing applications on various machines
* Managing firewall ports
* Modifying configuration files
* Managing services, etc.

Puppet provides any number of resources and classes to easily build a complex environment on VMware or any cloud environment.

How Puppet Works?
  • We have a master and agent concept in the Puppet environment.
  • The master should be installed and configured on Linux machines only; there is no support for a Windows master. Agents can be Linux or Windows machines.
  • We have two deployment models:
    • Master-agent deployment: master and agents are different machines, and the master manages the agent machines. This is used for production environments.
    • Standalone deployment: the master and agent packages are both installed on one server. This is used for dev/test environments.
  • The Puppet master is a Linux-based machine where we install and configure the “puppetserver” package; it is responsible for creating and maintaining the Puppet code that manages the agent machines.
  • Agent machines are the other servers in our environment which we would like to manage using Puppet.
  • The “puppet-agent” package should be installed on agent machines.
  • Agent machines check in with the master every 1800 seconds (30 minutes) to see whether anything needs to be updated on the agent machine (see the snippet after this list).
  • If anything needs to be updated, the agent pulls it from the master machine through Puppet code; this is called the “pull mechanism”, and the agent performs the required updates described in the Puppet code.
  • We have both push- and pull-based deployment models.
  • In the push-based model, the master pushes configuration updates to its agent machines.
  • In the pull-based model, agents establish a connection with the master and pull updates from the master at a periodic interval.
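
As a reference sketch, the check-in interval is the runinterval setting in the agent's puppet.conf (assuming the standard install path), and a run can also be triggered manually with puppet agent -t instead of waiting for the interval:

# cat /etc/puppetlabs/puppet/puppet.conf
[agent]
runinterval = 1800

# puppet agent -t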
Workflow:
  • An administrator logs in on the Puppet master to create/update Puppet code; this machine is responsible for Puppet code management and contains the different configurations in the environment.
  • We have multiple agents in the environment, with the puppet-agent package installed on each agent machine.
  • Communication between master and agent is established through secured certificates (see the commands after this list).
  • The Puppet master allows agent machines through port 8140.
  • We make sure port 8140 is enabled on the firewall.
  • Communication between master and agent happens in three steps.
  • Once communication is established, agents send data to the master; this data includes hostname, IP address, and MAC address, and these are called facts.
  • The master uses these facts to compile a list of configurations that need to be applied to the agent; this is called a catalog.
  • The catalog contains data such as the packages to be installed, services to be managed, etc. on the agent machines, based on the Puppet code we wrote.
  • The agent uses the catalog to apply the required changes on the node.
  • Once the agent has applied the catalog, the node reports back to the master that the configuration has been applied successfully.
  • Puppet provides compatibility to collect these reports using third-party tools.
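
For reference, on Puppet 6 and later the certificate requests are handled on the master with the puppetserver ca commands (the agent certname below is a placeholder):

# puppetserver ca list
# puppetserver ca sign --certname agent1.example.com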

Reference: Puppet official Docs

Installing Docker on RHEL/ CentOS 8

Docker is a tool that uses kernel features such as namespaces and cgroups to run containers on a single OS instance.

It provides a lightweight and efficient environment to deploy and manage applications by creating containers.

We are going to see how to install Docker on RHEL/CentOS 8 in this post.


Docker is available in the below two editions:

Docker EE(Enterprise Edition)
Docker CE(Community Edition)

Prerequisites:

Uninstall any old version of Docker using the below command:

yum  -y remove  docker-common docker container-selinux docker-selinux docker-engine

Your existing containers will remain under /var/lib/docker.

Installing dependent packages:

# yum -y install lvm2 device-mapper device-mapper-persistent-data device-mapper-event device-mapper-libs device-mapper-event-libs
Adding Docker Repository:

Docker Inc. has not yet released Docker packages for RHEL 8/CentOS 8, so we can use the alternative repository intended for RHEL 7/CentOS 7.

# curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2424 100 2424 0 0 22238 0 --:--:-- --:--:-- --:--:-- 22238

Docker Community Edition requires containerd.io >= 1.2.2-3, but that is not available for RHEL/CentOS 8. So we need to skip it and proceed with the Docker installation at our own risk.

# yum install docker-ce
Docker CE Stable - x86_64 16 kB/s | 21 kB 00:01
Error:
Problem: package docker-ce-3:19.03.5-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed

cannot install the best candidate for the job
package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
package containerd.io-1.2.2-3.el7.x86_64 is excluded
package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Installing docker by skipping unavailable packages
[root@ip-172-31-44-32 ~]# yum -y install docker-ce --nobest
Output:
Installed:
docker-ce-3:18.09.1-3.el7.x86_64 containerd.io-1.2.0-3.el7.x86_64 docker-ce-cli-1:19.03.5-3.el7.x86_64
container-selinux-2:2.94-1.git1e99f1d.module+el8.0.0+4017+bbba319f.noarch libnftnl-1.1.1-4.el8.x86_64 libcgroup-0.41-19.el8.x86_64
policycoreutils-python-utils-2.8-16.1.el8.noarch libnfnetlink-1.0.1-13.el8.x86_64 libnetfilter_conntrack-1.0.6-5.el8.x86_64
iptables-1.8.2-9.el8_0.1.x86_64


Skipped:
docker-ce-3:19.03.5-3.el7.x86_64


Complete!

Now Docker version “3:18.09.1-3.el7.x86_64” has been installed.

Start and enable the Docker service using the below commands:
# systemctl start docker

# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

Check the Docker service status:
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-01-17 05:37:17 UTC; 2min 4s ago
Docs: https://docs.docker.com
Main PID: 15635 (dockerd)
Tasks: 18
Memory: 53.5M
CGroup: /system.slice/docker.service
├─15635 /usr/bin/dockerd -H fd://
└─15649 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.341886251Z" level=info msg="Graph migration to content-addressabil>
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.342289173Z" level=warning msg="Your kernel does not support cgroup>
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.342309354Z" level=warning msg="Your kernel does not support cgroup>
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.342708097Z" level=info msg="Loading containers: start."
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.556082824Z" level=info msg="Default bridge (docker0) is assigned w>
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.654816733Z" level=info msg="Loading containers: done."
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.681089736Z" level=info msg="Docker daemon" commit=4c52b90 graphdri>
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.681241065Z" level=info msg="Daemon has completed initialization"
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal dockerd[15635]: time="2020-01-17T05:37:17.717122644Z" level=info msg="API listen on /var/run/docker.sock"
Jan 17 05:37:17 ip-172-31-44-32.us-east-2.compute.internal systemd[1]: Started Docker Application Container Engine.

Now check the Docker installation by running a container from any one of the base images:

# docker run -it hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:9572f7cdcee8591948c2963463447a53466950b3fc15a247fcad1917ca215a2f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
The Docker client contacted the Docker daemon.
The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

Allowing non-root users:

Check whether a group called “docker” is available or not:

# cat /etc/group | grep docker
docker:x:989:

Since the group already exists, now create a new user:

# useradd abu

Check the created user's details, like the default UID, GID, and groups:

# id abu
uid=1001(abu) gid=1001(abu) groups=1001(abu)

Now add the “abu” user to the “docker” group as a supplementary group:

# usermod -aG docker abu

# id abu
uid=1001(abu) gid=1001(abu) groups=1001(abu),989(docker)

Now we can use this user to run Docker instead of using the root user.
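
To verify, switch to the new user and run a Docker command; the group membership takes effect on a fresh login:

# su - abu
$ docker ps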

Setup Docker Repository

Before installing Docker Engine on your host, you need to set up the repository first. So we will see how to set up the Docker repository in this post.
After that, you can install/update Docker from the repository.


Setup Docker Repository:

  1. Yum should be configured on your host. Please use this post to learn how to configure a yum repository.
  2. Then, the packages yum-utils (which provides yum-config-manager), device-mapper-persistent-data, and lvm2 are required for the devicemapper storage driver.
  3. Use the below command to install the above-mentioned packages using yum:
#yum install -y yum-utils device-mapper-persistent-data lvm2

Since the packages were already installed on my host, only an update was performed:

Updated:
device-mapper-persistent-data.x86_64 0:0.8.5-1.el7 lvm2.x86_64 7:2.02.185-2.el7_7.2

Use the below command to set up the Docker repository:

#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

Check whether the repository was added by issuing the below command:

#yum repolist | grep Docker
docker-ce-stable/x86_64 Docker CE Stable - x86_64 63
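
With the repository in place, the available Docker Engine versions can also be listed before picking one to install:

#yum list docker-ce --showduplicates | sort -r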

Installing Docker Engine:

To confirm the repository setup completed successfully, we will now try to install the latest version of Docker Engine using the below command:

#yum install docker-ce docker-ce-cli containerd.io 

Installed:
containerd.io.x86_64 0:1.2.10-3.2.el7 docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-cli.x86_64 1:19.03.5-3.el7

Dependency Installed:
container-selinux.noarch 2:2.107-3.el7 libseccomp.x86_64 0:2.3.1-3.el7


Now start the Docker Engine:

# systemctl start docker

# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2020-01-02 02:14:11 EST; 8s ago
Docs: https://docs.docker.com
Main PID: 60692 (dockerd)
Memory: 37.6M
CGroup: /system.slice/docker.service
└─60692 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Jan 02 02:14:10 localhost dockerd[60692]: time="2020-01-02T02:14:10.667134175-05:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:/…odule=grpc
Jan 02 02:14:10 localhost dockerd[60692]: time="2020-01-02T02:14:10.667153441-05:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 02 02:14:10 localhost dockerd[60692]: time="2020-01-02T02:14:10.695465002-05:00" level=info msg="Loading containers: start."
Jan 02 02:14:10 localhost dockerd[60692]: time="2020-01-02T02:14:10.952900918-05:00" level=info msg="Default bridge (docker0) is assigned with an IP ad…P address"
Jan 02 02:14:11 localhost dockerd[60692]: time="2020-01-02T02:14:11.018716067-05:00" level=info msg="Loading containers: done."
Jan 02 02:14:11 localhost dockerd[60692]: time="2020-01-02T02:14:11.040693143-05:00" level=warning msg="Not using native diff for overlay2, this may ca…r=overlay2
Jan 02 02:14:11 localhost dockerd[60692]: time="2020-01-02T02:14:11.041056334-05:00" level=info msg="Docker daemon" commit=633a0ea graphdriver(s)=overl…on=19.03.5
Jan 02 02:14:11 localhost dockerd[60692]: time="2020-01-02T02:14:11.041178502-05:00" level=info msg="Daemon has completed initialization"
Jan 02 02:14:11 localhost dockerd[60692]: time="2020-01-02T02:14:11.072808771-05:00" level=info msg="API listen on /var/run/docker.sock"
Jan 02 02:14:11 localhost systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

Now verify the Docker using below command

# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:4fe721ccc2e8dc7362278a29dc660d833570ec2682f4e4194f4ee23e415e1064
Status: Downloaded newer image for hello-world:latest


Hello from Docker!
This message shows that your installation appears to be working correctly.


To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.


To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash


Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/


For more examples and ideas, visit:
https://docs.docker.com/get-started/

Thanks for reading this post; going forward, we will talk about Docker Engine in more detail…

Reference: Docker Docs

How to install Ansible on RHEL7/ CentOS7

We are going to see how to install Ansible on RHEL7/ CentOS7 in this post.

The control node needs Python 2.6 or a later version installed, and Windows is not supported as a control node.

Since Ansible is an agentless tool, there is no need to install any specific agent/client on managed hosts. Managed hosts only need Python 2.4 or a later version.


Installing Ansible on RHEL7/ CentOS7:

To install Ansible, the EPEL repository should already be enabled on our server.

Once the EPEL repo is enabled, we can start installing Ansible using yum.

[root@localhost ~]# yum install ansible -y

After installation, check the Ansible version using the below command:

[root@localhost ~]# ansible --version
ansible 2.7.9
 config file = /etc/ansible/ansible.cfg
 configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
 ansible python module location = /usr/lib/python2.7/site-packages/ansible
 executable location = /usr/bin/ansible
 python version = 2.7.5 (default, Aug 2 2016, 04:20:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
[root@localhost ~]#

Finally, we have installed Ansible on the machine we are going to use as the control node.

Hereafter, if we want to deploy to or manage any remote hosts (managed hosts) from the control node, SSH authentication is mandatory. So we should copy the control node's SSH key to the remote hosts to enable communication between the control and managed nodes.
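
A minimal sketch of that key setup (the user and hostname below are placeholders):

[root@localhost ~]# ssh-keygen -t rsa
[root@localhost ~]# ssh-copy-id user@managed-host
[root@localhost ~]# ansible all -m ping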

 

Reference: Ansible documentation site


Architecture of Ansible

We are going to see the architecture of Ansible in this post.

Communication:

[Image: Architecture of Ansible]

Communication is established between the control node (server) and the managed hosts (client machines) using the SSH protocol.

A normal user is sufficient for communication between the control node and managed hosts.

A normal user can perform a few tasks, but for other tasks we need an administrator account, or another user with sudo access, to perform them.

Complete architecture details of Ansible:

[Image: Architecture of Ansible]

 

This explains how Ansible works and what the architecture contains.

As we can see in the above diagram, the Ansible automation engine interacts directly with the person who writes playbooks to perform tasks.

It also interacts directly with public/private clouds and with the CMDB (Configuration Management Database).

Also, it contains the below components:

  1. Inventory
  2. Modules
  3. API
  4. Plugins

 

Inventory:

The inventory contains the list of hosts, host IP addresses, or wildcard patterns for the hosts where we are going to run automation tasks using Ansible.

The default Ansible inventory path is /etc/ansible/hosts.

We can specify a different inventory path using the -i option, as shown below.
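
A minimal sketch of an inventory file and how to point Ansible at it (the hostnames and path below are placeholders):

[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com

#ansible webservers -i /path/to/inventory -m ping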

Modules:

Ansible ships with more than 1,000 ready-made modules, and we use those modules in playbooks to perform automation tasks. Modules are copied from the control node to the managed hosts while executing tasks; they run the program described by the playbook and module, and then return the output to us.

Users can also create custom modules based on their needs.

We mention the modules in playbooks; the modules are executed directly on the remote hosts through the playbooks, and we get back the output.
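
For instance, a minimal playbook that calls the yum module could look like the sketch below (the webservers group is a placeholder from the inventory):

---
- hosts: webservers
  become: true
  tasks:
    - name: ensure httpd is installed
      yum:
        name: httpd
        state: present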

API:

Ansible uses APIs as the transport for cloud services.

Plugins:

Plugins enhance the features of Ansible.

A plugin is a piece of code that extends Ansible's core functionality and hooks into task execution.

Using Ansible, we can automate tasks across different types of networks.


Introduction of Ansible automation tool

We are going to see an introduction to the Ansible automation tool in this post. By reading the future posts you can learn Ansible automation fully; it is purely based on Red Hat Linux.

Ansible was created by Michael DeHaan.

What is Ansible?

It’s a simple IT automation and powerful configuration management tool which is written in python.

It’s an open source configuration management tool.

Using Ansible, we can standardize the environment configuration from one server across all other remote servers by creating playbooks for that task.

Mainly it’s agentless automation tool. Work is pushed to the remote host when the ansible executed.

What we can do:

  • Configuration of Servers
  • Application Deployments
  • Continuous testing of existing application
  • Provisioning
  • Orchestration
  • Automating our administration tasks

 

What we cannot do:

  • It cannot perform the initial minimum installation of a system.
  • It cannot monitor servers.
  • It will not track what changes are made to files on the system.

How Ansible works:

 

[Image: how Ansible works]

Ansible syntax (or) Ansible ad-hoc command:

Ex:

#ansible -m command -a "uptime" Test

 

ansible:- Keyword (the command itself)

-m:- Module option

command:- Module Name

uptime:- OS Command

Test:- Target Server Group
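
A couple more ad-hoc examples in the same pattern (Test is the host group from the inventory):

#ansible Test -m ping
#ansible Test -m shell -a "df -h"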

 

Ansible Features:

  • Easy to learn
  • Written in python
  • Agentless
  • YAML based playbooks
  • Ansible Galaxy

Ansible Modules:

It’s having 1375 modules. For each and every operation we need to use modules to run the commands.

So we should understand the modules in order to do automation.