Kubernetes

Kubernetes: A Container Orchestration Platform

Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. It provides a declarative way to define the desired state of your application, and Kubernetes takes care of the underlying infrastructure to make it happen.

Key differences between Docker and Kubernetes:

  • Purpose: Docker is a container runtime that allows you to create and run individual containers. Kubernetes, on the other hand, is a platform for managing multiple containers at scale.

  • Scope: Docker focuses on the creation and management of individual containers, while Kubernetes orchestrates entire clusters of containers.

  • Features: Kubernetes provides advanced features like service discovery, load balancing, secrets management, and self-healing, making it suitable for complex applications.

Common Kubernetes Commands

  • kubectl get pods: Lists all running pods.

  • kubectl run <name> --image=<image_name>: Deploys a new pod.

  • kubectl get services: Lists all services.

  • kubectl expose deployment <deployment_name> --type=NodePort: Exposes a deployment as a NodePort service.

  • kubectl apply -f <manifest_file>: Applies a Kubernetes manifest file.

  • kubectl delete <resource_type> <name>: Deletes a Kubernetes resource.
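
As a quick illustration of how these commands fit together, here is a minimal end-to-end sequence (the deployment name nginx-demo is just an example):

kubectl create deployment nginx-demo --image=nginx
kubectl expose deployment nginx-demo --type=NodePort --port=80   # make it reachable outside the cluster
kubectl get pods
kubectl get services
kubectl delete service nginx-demo                                # clean up
kubectl delete deployment nginx-demo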

Use Cases for Kubernetes

  • Microservices Architecture: Kubernetes is ideal for managing complex microservices-based applications.

  • Cloud-Native Applications: It provides the infrastructure for building and deploying cloud-native applications.

  • Large-Scale Applications: Kubernetes can handle large-scale applications with thousands of containers.

  • Continuous Delivery: Kubernetes can be integrated with CI/CD pipelines for automated deployments.

Starting Docker

Ensure Docker Desktop is running, then verify Docker and set up a local cluster with kind (Kubernetes in Docker):

docker run hello-world            # confirm Docker can run containers
choco install kind                # install kind via Chocolatey (Windows); on macOS use brew install kind
kind --version
docker info
kind create cluster --name local  # create a local cluster named "local"
docker ps                         # the cluster node runs as a Docker container

After these commands you should see a local-control-plane container running; this acts as the master (control plane) node of your cluster.

To run more than one node, describe the control-plane and worker nodes you want in a .yml config file, as shown in the multi-node setup below.

Single node setup

  • Create a 1 node cluster

kind create cluster --name local
  • Check the docker containers you have running

docker ps
  • For comparison, without Kubernetes you would start the container directly with Docker: docker run -p 3000:80 nginx

  • You will notice a single container running (control-plane)

  • Delete the cluster

kind delete cluster -n local

Multi node setup

  • Create a clusters.yml file

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
  • Create the node setup

 kind create cluster --config clusters.yml --name local
  • Check docker containers

docker ps

Now you have a 3-node cluster (one control plane, two workers) running locally.
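
You can confirm this with kubectl. With a kind cluster named local, the node names typically look like the following (exact ages and versions will differ):

kubectl get nodes
# NAME                  STATUS   ROLES           AGE   VERSION
# local-control-plane   Ready    control-plane   ...   ...
# local-worker          Ready    <none>          ...   ...
# local-worker2         Ready    <none>          ...   ...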

Using minikube

  • Install minikube - https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Fx86-64%2Fstable%2Fbinary+download

  • Start a k8s cluster locally

minikube start
  • Run docker ps to see the single node setup

Kubernetes API

The master node (control plane) exposes an API that a developer can use to start pods.

Try the API

  • Run docker ps to find where the control plane is running

  • Try hitting various endpoints on the API server - https://127.0.0.1:50949/api/v1/namespaces/default/pods

The Kubernetes API server performs authentication checks and prevents unauthenticated requests from getting in. All of your authorization credentials are stored by kind in ~/.kube/config.
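
To quickly check which cluster and credentials kubectl is currently using, you can inspect the kubeconfig (a minimal sketch, assuming the kind setup above):

kubectl config current-context          # should show kind-local
kubectl config view --minify            # API server address and (redacted) credentials for that context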

kubectl

kubectl is a command-line tool for interacting with Kubernetes clusters. It provides a way to communicate with the Kubernetes API server and manage Kubernetes resources.

Install kubectl

https://kubernetes.io/docs/tasks/tools/#kubectl

Ensure kubectl works fine

 kubectl get nodes
 kubectl get pods

If you want to see the exact HTTP requests that go out to the API server, you can add the --v=8 flag:

kubectl get nodes --v=8

Creating a Pod

There are five terms we have learned about so far:

  1. Cluster

  2. Nodes

  3. Images

  4. Containers

  5. Pods

We have created a cluster of 3 nodes. How can we deploy a single container from an image inside a pod?

Finding a good image

Let’s try to start this image locally - https://hub.docker.com/_/nginx

Starting using docker

docker run -p 3005:80 nginx

Try visiting localhost:3005

Starting a pod using k8s

  • Start a pod

kubectl run nginx --image=nginx --port=80
  • Check the status of the pod

kubectl get pods
  • Check the logs

kubectl logs nginx
  • Describe the pod to see more details
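
A minimal example, using the pod name from the run command above:

kubectl describe pod nginx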

What our system looks like right now

Stop the pod

Stop the pod by running

 kubectl delete pod nginx

Check the current state of pods

kubectl get pods

Kubernetes manifest

A manifest defines the desired state for Kubernetes resources, such as Pods, Deployments, Services, etc., in a declarative manner.

Original command

kubectl run nginx --image=nginx --port=80

Manifest

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Breaking down the manifest
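
Field by field, the manifest maps directly onto the kubectl run command (a brief annotation, not an exhaustive reference):

apiVersion: v1          # API version for core objects such as Pod
kind: Pod               # the type of resource being declared
metadata:
  name: nginx           # the pod's name (the <name> passed to kubectl run)
spec:
  containers:           # one or more containers that run inside the pod
  - name: nginx         # container name
    image: nginx        # image pulled from Docker Hub (--image=nginx)
    ports:
    - containerPort: 80 # port the container listens on (--port=80)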

Applying the manifest

kubectl apply -f manifest.yml

Delete the pod

 kubectl delete pod nginx

Deployment

A Deployment in Kubernetes is a higher-level abstraction that manages a set of Pods and provides declarative updates to them. It offers features like scaling, rolling updates, and rollback capabilities, making it easier to manage the lifecycle of applications.

Key Differences Between Deployment and Pod:

  1. Abstraction Level:

  • Pod: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster, typically containing one or more containers.

  • Deployment: A Deployment is a higher-level controller that manages a set of identical Pods. It ensures the desired number of Pods are running and provides declarative updates to the Pods it manages.

  2. Management:

  • Pod: They are ephemeral, meaning they can be created and destroyed frequently.

  • Deployment: Deployments manage Pods by ensuring the specified number of replicas are running at any given time. If a Pod fails, the Deployment controller replaces it automatically.

  3. Updates:

  • Pod: Directly updating a Pod requires manual intervention and can lead to downtime.

  • Deployment: Supports rolling updates, allowing you to update the Pod template (e.g., new container image) and roll out changes gradually. If something goes wrong, you can roll back to a previous version.

  4. Scaling:

  • Pod: Scaling Pods manually involves creating or deleting individual Pods.

  • Deployment: Allows easy scaling by specifying the desired number of replicas. The Deployment controller adjusts the number of Pods automatically.

  5. Self-Healing:

  • Pod: If a Pod crashes, it needs to be restarted manually unless managed by a higher-level controller like a Deployment.

  • Deployment: Automatically replaces failed Pods, ensuring the desired state is maintained.

Series of events

When you run the following command, a bunch of things happen

kubectl create deployment nginx-deployment --image=nginx --port=80 --replicas=3

Step-by-Step Breakdown:

  1. Command Execution:

  • You execute the command on a machine with kubectl installed and configured to interact with your Kubernetes cluster.

  2. API Request:

  • kubectl sends a request to the Kubernetes API server to create a Deployment resource with the specified parameters.

  3. API Server Processing:

  • The API server receives the request, validates it, and then processes it. If the request is valid, the API server updates the desired state of the cluster stored in etcd. The desired state now includes the new Deployment resource.

  4. Storage in etcd:

  • The Deployment definition is stored in etcd, the distributed key-value store used by Kubernetes to store all its configuration data and cluster state. etcd is the source of truth for the cluster's desired state.

  5. Deployment Controller Monitoring:

  • The Deployment controller, which is part of the kube-controller-manager, continuously watches the API server for changes to Deployments. It detects the new Deployment you created.

  6. ReplicaSet Creation:

  • The Deployment controller creates a ReplicaSet based on the Deployment's specification. The ReplicaSet is responsible for maintaining a stable set of replica Pods running at any given time.

  7. Pod Creation:

  • The ReplicaSet controller (another part of the kube-controller-manager) ensures that the desired number of Pods (in this case, 3) are created and running. It sends requests to the API server to create these Pods.

  8. Scheduler Assignment:

  • The Kubernetes scheduler watches for new Pods that are in the "Pending" state. It assigns these Pods to suitable nodes in the cluster based on available resources and scheduling policies.

  9. Node and Kubelet:

  • The kubelet on the selected nodes receives the Pod specifications from the API server. It then pulls the necessary container images (nginx in this case) and starts the containers.
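
You can watch this chain of objects yourself after running the create command above (a quick sketch):

kubectl get deployment nginx-deployment
kubectl get rs                     # the ReplicaSet created by the Deployment
kubectl get pods                   # the 3 Pods created by the ReplicaSet
kubectl get events --sort-by=.metadata.creationTimestamp   # scheduling and image-pull events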

Hierarchical Relationship

Deployment:

  • High-Level Manager: A Deployment is a higher-level controller that manages the entire lifecycle of an application, including updates, scaling, and rollbacks.

  • Creates and Manages ReplicaSets: When you create or update a Deployment, it creates or updates ReplicaSets to reflect the desired state of your application.

  • Handles Rolling Updates and Rollbacks: Deployments handle the complexity of updating applications by managing the creation of new ReplicaSets and scaling down old ones.

ReplicaSet:

  • Mid-Level Manager: A ReplicaSet ensures that a specified number of identical Pods are running at any given time.

  • Maintains Desired State of Pods: It creates and deletes Pods as needed to maintain the desired number of replicas.

  • Label Selector: Uses label selectors to identify and manage Pods.

Pods:

  • Lowest-Level Unit: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster and typically contains one or more containers.

💡 A good question to ask at this point is: why do you need a Deployment when a ReplicaSet is good enough to bring up and heal Pods?

Create a replicaset

Let's not worry about Deployments for now; let's just create a ReplicaSet that starts 3 pods.

  • Create rs.yml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
  • Apply the manifest

kubectl apply -f rs.yml
  • Get the rs details

kubectl get rs

NAME               DESIRED   CURRENT   READY   AGE
nginx-replicaset   3         3         3       23s
  • Check the pods

kubectl get pods

NAME                     READY   STATUS    RESTARTS   AGE
nginx-replicaset-7zp2v   1/1     Running   0          35s
nginx-replicaset-q264f   1/1     Running   0          35s
nginx-replicaset-vj42z   1/1     Running   0          35s
  • Try deleting a pod and check if it self heals

kubectl delete pod nginx-replicaset-7zp2v
kubectl get pods
  • Try adding a pod with the app=nginx

kubectl run nginx-pod --image=nginx --labels="app=nginx"
  • Ensure it gets terminated immediately because the rs already has 3 pods

  • Delete the replicaset

 kubectl delete rs nginx-replicaset

💡 Note the naming convention of the pods. The pods are named after the ReplicaSet followed by a unique id (e.g. nginx-replicaset-vj42z).

Create a deployment

Let's create a deployment that starts 3 pods.

  • Create deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

  • Apply the deployment

 kubectl apply -f deployment.yml
  • Get the deployment

kubectl get deployment

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           18s
  • Get the rs

kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
nginx-deployment-576c6b7b6   3         3         3       34s
  • Get the pod

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-576c6b7b6-b6kgk   1/1     Running   0          46s
nginx-deployment-576c6b7b6-m8ttl   1/1     Running   0          46s
nginx-deployment-576c6b7b6-n9cx4   1/1     Running   0          46s
  • Try deleting a pod

kubectl delete pod nginx-deployment-576c6b7b6-b6kgk
  • Ensure the pods are still up

kubectl get pods
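
With the Deployment running you can also try the rolling update, rollback, and scaling behaviour described earlier. A minimal sketch (the nginx:1.27 tag is just an example of a newer image):

kubectl set image deployment/nginx-deployment nginx=nginx:1.27   # rolling update via a new ReplicaSet
kubectl rollout status deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment                 # roll back to the previous ReplicaSet
kubectl scale deployment nginx-deployment --replicas=5           # adjust the desired replica count
kubectl get pods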

Installing Kubernetes on Windows and macOS

Windows:

  1. Install WSL2: Ensure you have Windows Subsystem for Linux 2 (WSL2) enabled. You can enable it by running wsl --install from an elevated PowerShell prompt.

  2. Install Docker Desktop: Download and install Docker Desktop for Windows. It ships with an optional single-node Kubernetes cluster that you can enable under Settings → Kubernetes.

  3. Verify Installation: Open a PowerShell or Command Prompt window and run the following command:

    kubectl get pods

    If Kubernetes is installed and running correctly, the command succeeds and lists your pods (or reports "No resources found" on a fresh cluster).

macOS:

  1. Install Homebrew (if not already installed): Open Terminal and run:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Install kubectl: Run the following command in Terminal:

    brew install kubectl

  3. Verify Installation: Run the following command in Terminal:

    kubectl get pods

    If kubectl is installed and pointed at a running cluster, the command succeeds and lists your pods (or reports "No resources found" on a fresh cluster).

Additional Notes:

  • You may need to restart your computer after installing WSL2 or Docker Desktop for the changes to take effect.

  • If you encounter any issues, refer to the official Kubernetes documentation or community forums for troubleshooting.

  • For more advanced Kubernetes setups, consider using tools like Minikube or kind to create local Kubernetes clusters.

By following these steps, you should be able to successfully install Kubernetes on your Windows or macOS system and start using it to manage your containerized applications.

A pod in Kubernetes is a group of containers that share a common network namespace and volume. It's the smallest deployable unit in Kubernetes.

Key characteristics of a pod:

  • Containers: A pod can contain one or more containers. These containers share the same network namespace, which means they can communicate with each other directly without needing to go through a network service.

  • Volumes: Pods can mount volumes, which are persistent storage units that can be shared between containers within the pod or across multiple pods.

  • Lifecycle: Kubernetes manages the lifecycle of pods, including creation, deletion, and restarting if necessary.

  • Labels: Pods can be labeled to group and identify them.

Why use pods?

  • Co-locating containers: Pods are used to co-locate containers that need to work closely together. For example, a web server and a helper (sidecar) container, such as a log collector, might be placed in the same pod, as sketched below.

  • Sharing resources: Pods can share resources like volumes and network interfaces, which can improve efficiency and reduce overhead.

  • Isolation: Each pod is isolated from other pods, providing a level of security and preventing conflicts.
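
A minimal sketch of a pod with two containers sharing the same network namespace (the busybox sidecar is purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # reaches nginx over localhost because both containers share the pod's network namespace
    command: ["sh", "-c", "while true; do wget -qO- http://localhost > /dev/null; sleep 5; done"]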

In summary, a pod is a fundamental building block in Kubernetes, providing a way to group and manage containers and their associated resources.
