Finding files in Linux



“Find” command:

The find command lets you search a directory tree, recursively, for files and folders that match a given search pattern.

#find .

The above command searches in the current directory.

-name: This option matches files whose name fits a specific pattern.

We can use metacharacters such as “*”, enclosed in double quotes so the shell does not expand them (see the wildcard example below).

Example:

[root@linux etc]# find . -name mtab
./mtab

[root@linux /]# find /etc -name passwd
/etc/passwd
/etc/pam.d/passwd

The above command finds files named passwd under the /etc directory.
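As a hedged sketch of the wildcard matching mentioned above (the matching paths are only examples and will differ on your system):

#find /etc -name "pass*"
/etc/passwd
/etc/passwd-
/etc/pam.d/passwd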

Using the “Locate” command:

The locate command is faster than find because it searches a prebuilt database instead of scanning the filesystem at run time. It lists every path name that contains your search pattern.

The database is updated periodically by a cron job, or we can update it manually using the command below.

#sudo updatedb

Example:

#locate mydata

Application deployment on Kubernetes


Deployment on Kubernetes using Kubectl.

We can deploy containerized applications on a Kubernetes cluster. To do that, we create a Kubernetes Deployment, which is responsible for creating and updating application instances. Once the Deployment has been created, the Kubernetes master schedules the application instances it creates onto individual nodes in the cluster.

Once the instances are created, the Kubernetes Deployment controller continuously monitors them. If an instance goes down or is deleted, the controller replaces it. This is called self-healing, and it helps address machine failure and maintenance.
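A small, hedged sketch of self-healing in action (this assumes a Deployment such as the kubernetes-bootcamp example later in this post already exists; the pod names are illustrative):

>kubectl get pods
NAME                                  READY     STATUS    RESTARTS   AGE
kubernetes-bootcamp-390780338-c7k2q   1/1       Running   0          1m
>kubectl delete pod kubernetes-bootcamp-390780338-c7k2q
pod "kubernetes-bootcamp-390780338-c7k2q" deleted
>kubectl get pods
NAME                                  READY     STATUS    RESTARTS   AGE
kubernetes-bootcamp-390780338-xd5m1   1/1       Running   0          5s

The Deployment controller notices the missing pod and starts a replacement automatically.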

In a pre-orchestration world, installation scripts were often used to start applications, but they did not help recover from machine failure. Kubernetes takes a different approach: it keeps application instances running by creating and rescheduling them across the nodes in the cluster.

We can create and manage Deployments using the Kubernetes command-line interface, kubectl. Kubectl uses the Kubernetes API to interact with the cluster.

Kubectl basics:

Downloading Kubectl:

#wget https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl

Copying kubectl into your PATH:
#chmod +x kubectl
#mv kubectl /usr/local/bin/

Downloading client credentials and CA cert:


gcloud compute copy-files node0:~/admin-key.pem .


gcloud compute copy-files node0:~/admin.pem .


gcloud compute copy-files node0:~/ca.pem .


Getting Kubernetes controller external IP:


EXTERNAL_IP=$(gcloud compute ssh node0 --command \
  "curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")


Creating cluster config:


kubectl config set-cluster workshop \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${EXTERNAL_IP}:6443


Adding admin credentials:


kubectl config set-credentials admin \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --embed-certs=true


Configuring the cluster context:


kubectl config set-context workshop \
  --cluster=workshop \
  --user=admin


kubectl config use-context workshop


kubectl config view


Explore the kubectl CLI


Checking health status of the cluster components:


kubectl get cs


List pods:


kubectl get pods


List nodes:


kubectl get nodes


List services:


kubectl get services


"Kubectl run" command will create new deployment and have to provide deployment name,


application image location and port number.




Example:
#kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080


Command to display the deployment:
> kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1         1         1            1           2m

By default, deployed applications are visible only inside the Kubernetes cluster until they are exposed. To view the application without exposing it outside, we can create a route between our terminal and the Kubernetes cluster using a proxy.
>kubectl proxy
The proxy gives access to the API; the application itself runs inside a pod.
Get the pod name and store it in the POD_NAME environment variable.


>export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
>echo Name of the Pod: $POD_NAME


Command to see the application output:

>curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/



Note: We will cover more application-related topics in a future post.

About Amazon EC2

Amazon EC2 provides scalable computing capacity and reduces upfront hardware investment, so you can develop and deploy applications faster. You can configure security, networking, and storage as needed.

Features of AWS EC2:

Virtual computing environments (instances)
Preconfigured templates for your instances (Amazon Machine Images, AMIs), which are used when launching a machine
Secure login information for your instances (AWS stores the public key and you store the private key)
Multiple instance and purchasing options: high-memory instances, On-Demand Instances, Reserved Instances for a particular term, and low-cost Spot Instances

Instance types:

General purpose      – t  (software development)
Compute optimised    – c  (high-performance computing)
Memory optimised     – r  (databases)
Storage optimised    – i or d
GPU instances        – g  (video encoding)

Launch Instance -> AMIs -> types (VCPUs) -> Storage -> NW Performance -> Additional Info (Tags)
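The same launch flow can also be driven from the AWS CLI. The following is only a hedged sketch; the AMI ID, key pair name, and security group ID are placeholders that you would replace with your own values:

#aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-keypair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-instance}]'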




Cluster in Kubernetes



Cluster creation using Minikube:

Kubernetes is an open-source platform that coordinates a highly available cluster of computers connected to work as a single unit. It lets you deploy containerized applications to the cluster without tying them to individual machines. Containerized applications are more flexible and available than in older deployment models.
Kubernetes automates the scheduling and distribution of application containers across the cluster in an efficient way.

A Kubernetes cluster consists of the resources below:

1. Master – Cluster coordinator
2. Nodes – Where the applications run

Master:

The master coordinates the cluster: scheduling applications, maintaining their desired state, scaling them, and rolling out updates.
A Kubernetes cluster that handles production traffic should have a minimum of three nodes.

Nodes:

A node may be a physical or virtual machine where the applications run. Each node runs a kubelet agent, which manages the node and communicates with the Kubernetes master, along with tools for handling containers, such as Docker or rkt.

When applications are deployed, the master schedules and starts the application containers on the nodes. The nodes communicate with the master using the Kubernetes API, and end users can also use the Kubernetes API to interact with the cluster.

A Kubernetes cluster can be deployed on physical or virtual machines. We can use Minikube, which creates a VM and runs a single-node cluster on your local machine. The Minikube CLI provides basic operations on your cluster such as start, stop, status, and delete.
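A quick, hedged sketch of those basic Minikube operations (output omitted; the exact messages vary by version):

#minikube start
#minikube status
#minikube stop
#minikube delete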

To install Minikube on a Linux box, we need VirtualBox or KVM.

Minikube Installation

# curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.17.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

If you are totally new to Kubernetes, you also need to download the kubectl client, which communicates with the Kubernetes API to manage containers.

Kubectl Installation:

#curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Note: Here I’m using the Kubernetes interactive tutorials, so the required packages are already installed.

Now you are ready to use Minikube and Kubernetes. First, enter the command below to start the VM and the Kubernetes cluster:

#minikube start
Starting local Kubernetes cluster…
Kubectl is now configured to use the cluster.


To check the status, enter the command below:

#minikube status
Running

Use the command below to check whether kubectl is configured to communicate with the Minikube VM.

 # kubectl get nodes
NAME      STATUS    AGE
host01    Ready     33s


Command to check the kubectl command-line interface version:

#kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}


Use the below command to view the cluster details:

#kubectl cluster-info
Kubernetes master is running at http://host01:8080
heapster is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/heapster
kubernetes-dashboard is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To view the nodes in the cluster:

#kubectl get nodes
NAME      STATUS    AGE
host01    Ready     2m



           

Introduction to Kubernetes

Kubernetes automates the deployment, scaling, and management of containerized applications.
             
It groups containers into logical units, which makes applications easier to manage and discover.

It supports on-premises, hybrid, and public cloud infrastructure, so applications can run and be managed from anywhere, and it reduces operational overhead.

Features:

Automatic binpacking
Self-healing
Horizontal scaling
Service discovery and load balancing
Automated rollouts and rollbacks
Secret and configuration management
Storage orchestration
Batch execution

This series will help you learn the basics of Kubernetes: the background and concepts, deploying a containerized application, scaling the deployment, and updating and debugging applications.

Modules of Kubernetes:

1. Create a Kubernetes cluster
2. Deploy an application
3. Explore your application
4. Expose your application publicly
5. Scale up your application
6. Update your application

Configuring syslogd in Linux

We will see how to configure syslogd in Linux in this post.

Follow the steps below on the server:

Required services for syslogd

1. portmap
2. xinetd
3. syslog

Run the commands below so that the services above keep running after a server reboot:

#chkconfig portmap on
#chkconfig xinetd on
#chkconfig syslog on

Start portmap and xinetd services

#service portmap start
#service xinetd start

Check the service status

#service portmap status
#service xinetd status

Now edit the “/etc/sysconfig/syslog” file using the vi editor:

#vi /etc/sysconfig/syslog

Find the line SYSLOGD_OPTIONS="-m 0"

and add the -r option so that the server accepts logs from remote systems:

SYSLOGD_OPTIONS="-r -m 0"

Save and quit the file with :wq

-r enables remote logging (accepting messages from the network)
-x disables DNS lookups on messages received with -r
-m 0 disables MARK messages

Once the file is edited, restart the syslog service:

#service syslog restart

Follow the steps below on the client:

Edit the /etc/syslog.conf file and add an entry for the server at the end of the file, as shown below.

Let's assume the server IP is 192.168.1.10.

#vi /etc/syslog.conf
user.* @192.168.1.10

Save and quit the file with :wq

Now restart the syslog service on the client:

#service syslog restart

Now reboot the client and check the log entries on the server; the reboot will generate log messages on the server. A quicker test is sketched below.
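As a hedged way to test without rebooting (assuming the logger utility is available on the client), send a message that matches the user.* rule above and look for it on the server.

On the client:
#logger -p user.info "remote syslog test from client"

On the server:
#grep "remote syslog test" /var/log/messages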

Checking logs on server

#less /var/log/messages

At the end of this log file you can see the most recent entries.

Log analysis in Linux

Linux has many default and third-party tools and commands for log analysis.
We will look at some of the default commands used for log analysis in Linux.
 
awk
cut
grep
tail
syslogd
 
Usage of the awk command
 
With awk we can find and replace text, extract fields, and sort output. It searches its input for a given pattern, and whenever a line matches that pattern it performs the action specified in the command.
 
For example, if we need the second field from the output of a command, we can use the format below:
 
#ls -l | awk '{print $2}'

12
5
13
 
Likewise, we can use the awk command wherever we need a specific value from a log file, the output of a command, or a text file. A pattern-plus-action example follows below.
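A hedged sketch of the pattern-plus-action behaviour described above (/var/log/messages is a typical log file; the pattern and the count shown are only examples):

#awk '/error/ {count++} END {print count " error lines found"}' /var/log/messages
42 error lines found

Here awk increments a counter for every line matching the pattern /error/ and prints the total at the end.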
 
 
 
 

YUM Configuration in RHEL7/ CentOS 7


We are going to see YUM Configuration in RHEL7/ CentOS 7 in this post.

In Linux we mostly use RPM and YUM for package management.

YUM- Yellowdog Updater Modified

YUM is mostly used to install packages reliably, because it resolves software dependencies automatically.

We can configure YUM with a local or a network repository.

Required RPM:
1. yum
2. createrepo
3. deltarpm
4. python-deltarpm

Configuring YUM in RHEL7

1. Insert RHEL7 media and mount it under /mnt

#mount /dev/cdrom /mnt

2. All the RPMs are available under the Packages directory. Change to the Packages directory and copy all the RPMs into a local directory that you have already created.

3. Create a local directory to use as the repository:

#mkdir /yumpkg

4. Copy the RPMs from the media to the local directory:

#cd /mnt/Packages
#cp *.* /yumpkg

5. Ensure the required RPMs are installed by using the commands below:

#rpm -qa yum
#rpm -qa createrepo

6. Generate the local repository metadata from /yumpkg using the comps XML file:


#cp /mnt/repodata/59eXXXXXX.xml /yumpkg/comps.xml
#cd /yumpkg
#createrepo -g /yumpkg/comps.xml .

7. Create the repo file:

#vi /etc/yum.repos.d/yum.repo
[server_name]
name=YUM_Server
baseurl=file:///yumpkg
enabled=1
gpgcheck=0

Save and exit the yum.repo file with :wq

Note: To use an FTP or HTTP repository, enter the corresponding URL in the baseurl field.
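A hedged example of an HTTP-based repo file (the server name and path are placeholders for your own repository):

#vi /etc/yum.repos.d/yum.repo
[server_name]
name=YUM_Server
baseurl=http://yumserver.example.com/yumpkg
enabled=1
gpgcheck=0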

#yum clean all
#yum makecache

8. Now check YUM by listing all available RPMs:

#yum list all

9. Then try to install a package using YUM:

# yum install vsftpd

Other useful YUM commands:

#yum remove <package_name>
#yum update <package_name>
#yum search <package_name>
#yum info <package_name>
#yum list installed
#yum check-update
#yum update
#yum groupinstall <group_name>

Two private IPs not responding to each other in multicasting – Linux

We will fix the issue of two private IPs not responding to each other in multicasting on Linux in this post.

Follow the steps below to fix it:

1) Ping the private IP of node2 from node1
ping <PRV_IP_Node2>
2) Check whether the priv IP is added in /etc/hosts
cat /etc/hosts
3) Check whether the configuration is correct for the NIC device
cat /etc/sysconfig/network-scripts/ifcfg-eth?
4) Check whether the routing is correct
route -n; netstat -nr
5) traceroute to the priv IP
traceroute <PRV_IP_Node2>
6) Check whether any Firewall is running
/etc/init.d/iptables status
7) Add routing for the host of PRIV_IP_Node2 & check response
8) Check the kernel parameter value of net.ipv4.conf.default.rp_filter
If it is 1, change it to 2 (see the sysctl sketch after this list), e.g. in /etc/sysctl.conf:
#net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
Do the above steps on node 2 as well, so that we get replies in both directions.
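A hedged sketch of checking and changing the rp_filter values with sysctl, matching the change described in step 8 (run as root):

#sysctl net.ipv4.conf.default.rp_filter
net.ipv4.conf.default.rp_filter = 1
#sysctl -w net.ipv4.conf.default.rp_filter=2
#sysctl -w net.ipv4.conf.all.rp_filter=2

To make the change persistent, add the two lines to /etc/sysctl.conf and reload with:

#sysctl -p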

Cloud – OpenStack


We will talk about cloud computing and OpenStack in this post.
What is Cloud ?

Cloud is a loaded term. Cloud computing is convenient, on-demand network access to a shared pool of configurable computing resources, which include applications and services.

Characteristics:

Self-service
Multitenancy
Elasticity
Telemetry

Cloud types:

Private cloud
Public cloud
Hybrid cloud

What is private cloud?

A private cloud provides all the basic benefits of a public cloud:
self-service and scalability, multi-tenancy, the ability to provision machines, changing computing resources on demand, and creating multiple machines for completing jobs.
In this cloud type, only a limited set of people are able to access the web-based applications and websites.

Disadvantage:

It requires staffing and systems of its own, or it has to be handled and managed by a third-party service.

Advantage:

Deploying a private cloud can reduce the effort of implementing solutions such as Rackspace or VMware.

What is Public cloud?

Public cloud is the standard cloud computing model. In this model the service provider supplies all the resources, such as applications and hardware, and makes them available to the public over the internet.
The service may be free or paid.

Advantages:

Expense is low because the provider pays for the hardware, applications, and bandwidth.
Easy to access.
Scalability.
No wasted resources, since you pay only for what you use.

Examples: Amazon Elastic Compute Cloud (EC2), Sun Cloud, Google App Engine, etc.

What is Hybrid Cloud?

In a hybrid cloud, organizations deploy their applications in both private and public clouds,
so the environment is maintained by both internal and external providers.

Dynamic or highly changeable applications suit this cloud model. An application might be deployed in the private cloud and use public cloud resources when computing demand is high. A hybrid cloud is what connects the private and public cloud resources.

Other topics will be covered in the next post.