Application deployment on Kubernetes


Deployment on Kubernetes using Kubectl.

We can deploy containerized applications on Kubernetes. To do that, we need to create a Kubernetes Deployment. The Deployment is responsible for creating and updating instances of our application. Once a Deployment has been created, the Kubernetes master schedules the application instances it creates onto individual nodes in the cluster.

Once the instances are created, the Kubernetes Deployment controller continuously monitors them. If an instance goes down or is deleted, the Deployment controller replaces it. This is called self healing, and it helps the cluster cope with machine failure or maintenance.
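As a rough illustration of this self healing (assuming the kubernetes-bootcamp Deployment created later in this post), deleting a pod by hand should make the Deployment controller start a replacement with a new name:

#kubectl get pods
#kubectl delete pod <pod-name>
#kubectl get pods

The second "kubectl get pods" should show a fresh pod taking the place of the one that was deleted.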

In some other technologies, installation scripts are used to start the applications, but they do not help with recovery from system failure. Kubernetes takes a different approach by creating and running application instances across the nodes in the cluster.

We can create and manage Deployments using the Kubernetes command line interface, kubectl. Kubectl uses the Kubernetes API to interact with the cluster.
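Since kubectl is just a client of that API, you can watch the HTTP calls it makes by raising its log verbosity with the standard -v flag; for example:

#kubectl get pods -v=8

At this verbosity level kubectl should print the request URLs and response codes it exchanges with the API server.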

Kubectl basics:

Downloading Kubectl:

#wget https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl

Making kubectl executable and copying it to your PATH:
#chmod +x kubectl
#mv kubectl /usr/local/bin/

Downloading client credentials and the CA cert:


gcloud compute copy-files node0:~/admin-key.pem .


gcloud compute copy-files node0:~/admin.pem .


gcloud compute copy-files node0:~/ca.pem .


Getting Kubernetes controller external IP:


EXTERNAL_IP=$(gcloud compute ssh node0 --command \
  "curl -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")
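A quick sanity check that the variable was populated (it should print the controller's public IP address):

echo ${EXTERNAL_IP}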


Creating cluster config:


kubectl config set-cluster workshop \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${EXTERNAL_IP}:6443


Adding admin credentials:


kubectl config set-credentials admin \
  --client-key=admin-key.pem \
  --client-certificate=admin.pem \
  --embed-certs=true


Configuring the cluster context:


kubectl config set-context workshop \
  --cluster=workshop \
  --user=admin


kubectl config use-context workshop


kubectl config view


Explore the kubectl CLI


Checking health status of the cluster components:


kubectl get cs


List pods:


kubectl get pods


List nodes:


kubectl get nodes


List services:


kubectl get services
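Each of these get commands also accepts -o wide for extra columns, and kubectl describe prints full details for a single object; for example (the node name is a placeholder):

kubectl get nodes -o wide
kubectl describe node <node-name>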


"Kubectl run" command will create new deployment and have to provide deployment name,


application image location and port number.




Example:
#kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080


Command to display the deployment:
> kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1         1         1            1           2m

By default, deployed applications are visible only inside the Kubernetes cluster until they are exposed.
To view the application without exposing it outside, we can create a route between our
terminal and the Kubernetes cluster using a proxy.
>kubectl proxy
The proxy enables access to the API from our terminal. The application itself runs inside a pod.
Get the pod name and store it in the POD_NAME environment variable.


>export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
>echo Name of the Pod: $POD_NAME


Command to see the application output:

>curl http://localhost:8001/api/v1/proxy/namespaces/default/pods/$POD_NAME/



Note: We will look at more application-related topics in a future post.

About Amazon EC2

Amazon EC2 provides scalable computing capacity and reduces hardware investment, so you can develop and deploy applications faster. You can also configure security, networking, and storage.

Features of AWS EC2:

Virtual computing environments (instances)
Preconfigured templates for your instances (Amazon Machine Images, AMIs), which are used when launching a machine
Secure login for your instances (Amazon stores the public key and you store the private key)
Multiple instance and purchasing options, such as high-memory instances, On-Demand Instances, Reserved Instances for a particular term, and Spot Instances

Instance types:

General purpose    – t  (software development)
Compute optimised  – c  (high performance)
Memory optimised   – r  (DBs)
Storage optimised  – i or d
GPU instances      – g  (video encoding)

Launch Instance -> AMIs -> types (VCPUs) -> Storage -> NW Performance -> Additional Info (Tags)
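The same Launch Instance flow can also be scripted. Below is a minimal, illustrative sketch using the AWS CLI (assuming it is installed and configured); the AMI ID, key pair name, and tag value are placeholders to replace with your own:

#aws ec2 run-instances --image-id ami-0abcdef1234567890 --count 1 --instance-type t2.micro --key-name my-key-pair --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo}]'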




Cluster in Kubernetes



Cluster creation using Minikube:

Kubernetes is open source and coordinates a highly available cluster of systems that are connected to work together as a single unit. It allows you to deploy containerized applications onto the cluster without tying them to individual systems. Containerized applications are more flexible and available than in older deployment methods.
Kubernetes automates the scheduling and distribution of applications across the cluster in an efficient way.

A Kubernetes cluster consists of the resources below:

1. Master – Cluster coordinator
2. Nodes – Where the applications run

Master:

The master coordinates the cluster: scheduling, managing, scaling, updating, and rolling out applications.
A Kubernetes cluster that handles production traffic should have a minimum of three nodes.
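On a running cluster you can check the health of these master components (the scheduler, the controller manager and etcd) with the same command used earlier:

#kubectl get cs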

Nodes:

A node can be a physical machine or a VM where the applications run. Each node runs the kubelet agent, which manages the node and communicates with the Kubernetes master, and a container runtime such as Docker or rkt, which handles the containers.

When applications are deployed, the master schedules and starts the application containers on the nodes. The nodes communicate with the master using the Kubernetes API, and users also use the Kubernetes API to interact with the cluster.

A Kubernetes cluster can be deployed on either physical or virtual machines. We can use Minikube, which creates a VM and runs a one-node cluster on your local machine. The Minikube CLI provides basic operations on your cluster, such as start, stop, status, and delete.

To install Minikube on a Linux box, we need VirtualBox or KVM.

Minikube Installation

# curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.17.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

If you are totally new to Kubernetes, you also need to download the kubectl client, which communicates with the Kubernetes API to manage containers.

Kubectl Installation:

#curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Note: Here I'm using the Kubernetes interactive tutorials, so the required packages are already installed.

Now you are ready to use Minikube/Kubernetes. First, enter the command below to start the VM and the Kubernetes cluster:

#minikube start
Starting local Kubernetes cluster…
Kubectl is now configured to use the cluster.


To check the status, enter the command below:

#minikube status
Running

Use the command below to check whether kubectl is configured to communicate with the Minikube VM.

 # kubectl get nodes
NAME      STATUS    AGE
host01    Ready     33s


Command to check the kubectl command line interface version:

  #kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}


Use the below command to view the cluster details:

#kubectl cluster-info
Kubernetes master is running at http://host01:8080
heapster is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/heapster
kubernetes-dashboard is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at http://host01:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To view the nodes in the cluster:

#kubectl get nodes
NAME      STATUS    AGE
host01    Ready     2m
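When you are finished, the same Minikube CLI that started the cluster can also stop it or remove the VM entirely, matching the start/stop/status/delete operations mentioned above:

#minikube stop
#minikube delete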



           

Introduction to Kubernetes

Kubernetes automates the deployment, scaling, and management of containerized applications.
             
It groups containers into logical units, which makes applications easy to manage and discover.

Since it is a cloud-oriented system, it can be accessed from anywhere. It supports on-premise, hybrid, and public cloud infrastructure, and it reduces the operational effort needed to run applications.

Features:

Automatic binpacking
Self-healing
Horizontal scaling
Service discovery and load balancing
Automated rollouts and rollbacks
Secret and configuration management
Storage orchestration
Batch execution

This series will help you learn the basics of the Kubernetes cluster system: the background and concepts of Kubernetes, deploying a containerized application, scaling the deployment, and updating and debugging applications.

Modules of Kubernetes:

1. Create a Kubernetes cluster
2. Deploy an application
3. Explore your application
4. Expose your application publicly
5. Scale up your application
6. Update your application
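Most of these modules map onto a handful of kubectl commands. As a rough, illustrative preview (assuming an example image such as the kubernetes-bootcamp image used in the deployment post above):

Deploy an application:
#kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080

Expose it publicly:
#kubectl expose deployment/kubernetes-bootcamp --type=NodePort --port=8080

Scale it up:
#kubectl scale deployment/kubernetes-bootcamp --replicas=4

Update it to a new image version:
#kubectl set image deployment/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2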

Cloud – OpenStack


We will talk about Cloud and OpenStack in this post.
What is Cloud?

Cloud is a loaded term. Cloud computing is convenient, on-demand network access to a shared pool of configurable computing resources, including applications and services.

Characteristics:

Self-service
Multitenancy
Elasticity
Telemetry

Cloud types:

Private cloud
Public cloud
Hybrid cloud

What is private cloud?

A private cloud provides all the basic benefits of a public cloud, such as
service and scalability, multi-tenancy, the ability to provision machines, changing computing resources on demand, and creating multiple machines to complete jobs.
In this cloud type, only a limited set of people can access the web-based apps and websites.

Disadvantage:

You need staff to build and manage the system, or it has to be handled and managed by a third-party service.

Advantage:

Vendors such as Rackspace and VMware reduce the cost and effort of deploying a private cloud.

What is Public cloud?

Public cloud is the standard cloud computing model. In this model, the service provider provides all the resources, such as applications and hardware, and
they are available to the public over the internet.
The service can be free or paid.

Advantages:

Expense is low, because the provider pays for the hardware, applications, and bandwidth.
Easy to access.
Scalability.
Efficient resource usage, since you pay only for what you use.

Example: Amazon Elastic Compute Cloud (EC2), Sun Cloud, Google AppEngine, etc…

What is Hybrid Cloud?

In a hybrid cloud, organizations deploy their applications in both the private and the public cloud.
So it is maintained by both internal and external providers.

Dynamic or highly changeable applications suit this cloud model. An application might be deployed in the private cloud and access public cloud resources when computing demand is high. A hybrid cloud is required to connect private and public cloud resources.

Other topics will be covered in the next post.