
 

  • Cluster: A group of servers (nodes) working together is known as a cluster.
  • A container is a lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. Containers are encapsulated units that run on a container runtime (e.g., Docker, containerd)
  • A Pod is the smallest deployable unit in a Kubernetes cluster, and the unit in which applications run.
  • It is a logical group of one or more containers that share the same network namespace and storage, and have an IP address in a shared network space.
  • kube-scheduler: The kube-scheduler in Kubernetes is responsible for determining where to run Pods within the cluster. It considers factors like resource requirements, affinity rules, and constraints to make informed decisions on node placement. Its role includes node selection, resource management, affinity considerations, spreading Pods across nodes, respecting constraints and priorities, and ensuring resilience in case of failures.
  • The Kubernetes API server is the central component that manages and exposes the cluster's state and handles all API communication.
  • etcd is a distributed key-value store that serves as the primary data store for Kubernetes, storing configuration and cluster state
  • The controller manager in Kubernetes is responsible for running controller processes that regulate the state of the cluster to meet desired configurations.
  • The kubelet is an agent running on each node, responsible for communication between the control plane and the node. It manages container lifecycles and ensures that the containers described in Pod specs are running and healthy.
  • The kube-proxy is a Kubernetes component that maintains network rules on each node, acting as a network proxy and load balancer to enable communication between Pods and Services.
  • CNI, or Container Network Interface, is a standard for network plugins in container runtimes, facilitating communication between containers in a cluster.
  • Container Runtime: The software responsible for running containers, such as Docker or containerd 
  • kubectl is a command-line tool for interacting with Kubernetes clusters, allowing users to deploy, manage, and troubleshoot applications and resources within the cluster.
  • kubeadm is a command-line tool for bootstrapping and managing Kubernetes clusters.
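Since kubectl is the day-to-day entry point to a cluster, here is a sketch of a few basic commands; the manifest filename and Pod name are placeholders:

```shell
kubectl get nodes                  # list the nodes in the cluster
kubectl get pods --all-namespaces  # list Pods across all namespaces
kubectl apply -f manifest.yml      # create/update resources from a manifest (placeholder filename)
kubectl describe pod my-pod        # inspect a Pod's events for troubleshooting (placeholder name)
kubectl logs my-pod                # view a Pod's container logs (placeholder name)
```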

https://github.com/LondheShubham153/kubestarter/blob/main/kubeadm_installation.md 


 

 

In simple terms, a Kubernetes Deployment is like a manager for your application in a Kubernetes cluster. It makes sure that the right number of copies of your application (called replicas) are always running, and it helps with updating your application without causing downtime. It's a way to tell Kubernetes how you want your application to run and then let Kubernetes take care of making it happen.
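For example, a minimal Deployment manifest for a Flask app might look like this; the image name, labels, and port are illustrative assumptions, not taken from the repo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 2                # keep two copies of the app running at all times
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: myrepo/flask-app:latest   # placeholder image name
          ports:
            - containerPort: 5000          # assumed Flask port
```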

 In Kubernetes, a Service provides a stable endpoint for accessing dynamic Pods. ClusterIP, the default type, enables internal communication. NodePort exposes the Service on each node's IP at a static port, facilitating external access. LoadBalancer integrates cloud provider load balancers for external access, distributing traffic and ensuring application reliability in dynamic clusters.
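A NodePort Service for the same app could be sketched as follows; the port numbers and labels are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: flask-app        # matches the Pods' label
  ports:
    - port: 80            # Service port inside the cluster
      targetPort: 5000    # container port
      nodePort: 30007     # static port opened on every node (must be in 30000-32767)
```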

 


 

In the diagram:

  • The "Kubernetes Master" manages and controls the entire cluster.
  • Each "Node" represents a machine (physical or virtual) that contributes resources to the cluster.
  • "Kubelet" is an agent on the Node, interacting with the Master and managing containers.
  • "Container Runtime" is responsible for running containers on the Node.
  • "Kube Proxy" maintains network rules, enabling communication between Pods and services.

Nodes collectively form the infrastructure on which your applications (in the form of containers) run. The Master orchestrates these nodes, ensuring the desired state of your applications is maintained across the cluster.

 

 Two-Tier Deployment (K8s cluster set up with kubeadm)

 https://github.com/LondheShubham153/two-tier-flask-app/tree/master


Git Clone
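Cloning the repository linked above, as a sketch:

```shell
git clone https://github.com/LondheShubham153/two-tier-flask-app.git
cd two-tier-flask-app
```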


Create a k8s folder to keep all the manifest files, and create the Pods inside the k8s directory.


After creating pod.yml, deployment.yml, and service.yml for the Flask app and MySQL, run kubectl apply -f pod.yml, and similarly for the other manifest files.
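The apply step, assuming the manifests live in the k8s directory:

```shell
cd k8s
kubectl apply -f pod.yml
kubectl apply -f deployment.yml
kubectl apply -f service.yml
kubectl get pods -o wide   # verify the Pods are running and see which node they landed on
```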


Now, when we browse to the worker node's IP on port 3007, we get an error like the one in the image below.


The error says there is no MySQL manifest file, so we need to create the MySQL manifests.

We created the mysql-deployment.yml file and applied it with kubectl. If we check the Pods, the MySQL Pod shows a CrashLoopBackOff status. To solve it, we will create a volume YAML file, and first delete the MySQL deployment with the command kubectl delete -f mysql-deployment.yml.


We will create a persistent-volume.yml file and define a path on the host to store the data; here we have created the mysqldata path.
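A persistent-volume.yml along these lines would work; the PV name, capacity, and exact host path are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi                    # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/ubuntu/mysqldata    # host directory created to store MySQL data (assumed path)
```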


After creating persistent-volume.yml, we will create a PVC YAML file to claim the volume.
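A matching PVC, using the mysql-pvc claim name that the MySQL Deployment references; the requested size is an assumption:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc          # claim name referenced by the Deployment
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # assumed size; must fit within the PV's capacity
```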

Then, in the deployment YAML file, we will add the volume mount path and the PVC name:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:latest
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "admin"
            - name: MYSQL_DATABASE
              value: "mydb"
            - name: MYSQL_USER
              value: "admin"
            - name: MYSQL_PASSWORD
              value: "admin"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysqldata
              mountPath: /var/lib/mysql         # container path where MySQL stores its data

      volumes:
        - name: mysqldata
          persistentVolumeClaim:
            claimName: mysql-pvc    # PVC claim name


Now we will create the mysql-service.yml file.
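A mysql-service.yml could be sketched as follows; the Service name is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql          # matches the MySQL Deployment's Pod label
  ports:
    - port: 3306        # default MySQL port
      targetPort: 3306
```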

 

image 1

 

image 2

In image 2, the error is "Unknown server Myhost".

So, from image 1, we will copy the MySQL Service's ClusterIP and add it to the mysql-deployment.yml file under env: the mysql-host value will be the ClusterIP.

Then run kubectl apply -f mysql-deployment.yml.

After creating and applying all the YAML files, if we run the application we get the error shown in image 3: table mydb.messages doesn't exist.

Image 3

To solve the image 3 error, we will go to the worker node and run the command sudo docker ps to get the MySQL container ID; the output will look like the image below.

We will run the below command to enter the MySQL container:


and now we will see the bash# prompt.


To enter MySQL
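The shell steps above, as a sketch — the container ID placeholder and the messages table schema are assumptions:

```shell
# On the worker node: find the MySQL container ID
sudo docker ps | grep mysql

# Enter the container's shell (replace <container-id> with the real ID)
sudo docker exec -it <container-id> bash

# Inside the container: log in with the root password from the Deployment env
# and create the missing table (column layout is an assumption about the app's schema)
mysql -u root -padmin -e "USE mydb; CREATE TABLE messages (id INT AUTO_INCREMENT PRIMARY KEY, message TEXT);"
```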




After creating the table, if we run the application, it won't show the error.
















Notes:

 To scale the deployment
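Scaling can be done imperatively with kubectl scale; the deployment name here is an assumption:

```shell
kubectl scale deployment flask-app --replicas=3   # deployment name assumed
kubectl get pods                                  # verify the new replica count
```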





 

 
