Kubernetes
Kubernetes: A Container Orchestration Platform
Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. It provides a declarative way to define the desired state of your application, and Kubernetes takes care of the underlying infrastructure to make it happen.
Key differences between Docker and Kubernetes:
Purpose: Docker is a container runtime that allows you to create and run individual containers. Kubernetes, on the other hand, is a platform for managing multiple containers at scale.
Scope: Docker focuses on the creation and management of individual containers, while Kubernetes orchestrates entire clusters of containers.
Features: Kubernetes provides advanced features like service discovery, load balancing, secrets management, and self-healing, making it suitable for complex applications.
Common Kubernetes Commands
kubectl get pods: Lists all running pods.
kubectl run <name> --image=<image_name>: Deploys a new pod.
kubectl get services: Lists all services.
kubectl expose deployment <deployment_name> --type=NodePort: Exposes a deployment as a NodePort service.
kubectl apply -f <manifest_file>: Applies a Kubernetes manifest file.
kubectl delete <resource_type> <name>: Deletes a Kubernetes resource.
Use Cases for Kubernetes
Microservices Architecture: Kubernetes is ideal for managing complex microservices-based applications.
Cloud-Native Applications: It provides the infrastructure for building and deploying cloud-native applications.
Large-Scale Applications: Kubernetes can handle large-scale applications with thousands of containers.
Continuous Delivery: Kubernetes can be integrated with CI/CD pipelines for automated deployments.
Starting Docker
Ensure Docker Desktop is running.
After that, the commands below give you a local-control-plane container, which acts as the cluster's master node.
For a multi-node setup, you then add a .yml file listing the master (control-plane) and worker nodes you want.
Single node setup
Create a 1 node cluster
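A minimal sketch using kind; the cluster name local is an assumption:

```bash
# create a single-node cluster (name "local" is just an example)
kind create cluster --name local
```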
Check the docker containers you have running
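kind runs each node as a Docker container, so listing containers should show the node:

```bash
docker ps
```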
For comparison, without Kubernetes you would run a container directly with Docker:
docker run -p 3000:80 nginx
You will notice a single container running (the control plane)
Delete the cluster
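Assuming the cluster was named local as above:

```bash
kind delete cluster --name local
```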
Multi node setup
Create a clusters.yml file
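A sketch of what clusters.yml might contain, assuming one control plane and two workers (three nodes in total):

```yaml
# clusters.yml — kind cluster config: 1 control plane + 2 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```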
Create the node setup
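Assuming the clusters.yml above and the cluster name local:

```bash
kind create cluster --config clusters.yml --name local
```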
Check docker containers
Now you have a multi-node cluster running locally
Using minikube
Start a k8s cluster locally
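Assuming minikube is installed:

```bash
minikube start
```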
Run docker ps to see the single node setup
Kubernetes API
The master node (control plane) exposes an API that a developer can use to start pods.
Try the API
Run docker ps to find where the control plane is running
Try hitting various endpoints on the API server - https://127.0.0.1:50949/api/v1/namespaces/default/pods
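For example, with curl (port 50949 comes from the URL above; substitute whatever port docker ps shows for your control plane, and -k skips verification of the self-signed certificate):

```bash
curl -k https://127.0.0.1:50949/api/v1/namespaces/default/pods
```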
The Kubernetes API server does authentication checks and prevents you from getting in. All of your authorization credentials are stored by kind in ~/.kube/config
kubectl
kubectl is a command-line tool for interacting with Kubernetes clusters. It provides a way to communicate with the Kubernetes API server and manage Kubernetes resources.
Install kubectl
https://kubernetes.io/docs/tasks/tools/#kubectl
Ensure kubectl works fine
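For example, listing the nodes of the cluster:

```bash
kubectl get nodes
```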
If you want to see the exact HTTP request that goes out to the API server, you can add the --v=8 flag
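For example:

```bash
kubectl get nodes --v=8
```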
Creating a Pod
So far we have learnt about five pieces of jargon:
Cluster
Nodes
Images
Containers
Pods
We have created a cluster of 3 nodes. How can we deploy a single container from an image inside a pod?
Finding a good image
Let’s try to start this image locally - https://hub.docker.com/_/nginx
Starting using docker
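A sketch, mapping host port 3005 (used in the next step) to nginx's port 80:

```bash
docker run -p 3005:80 nginx
```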
Try visiting localhost:3005
Starting a pod using k8s
Start a pod
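A minimal sketch; the pod name nginx is an assumption:

```bash
kubectl run nginx --image=nginx --port=80
```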
Check the status of the pod
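For example:

```bash
kubectl get pods
```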
Check the logs
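Assuming the pod is named nginx as above:

```bash
kubectl logs nginx
```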
Describe the pod to see more details
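Again assuming the pod name nginx:

```bash
kubectl describe pod nginx
```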
What our system looks like right now
Stop the pod
Stop the pod by running
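Assuming the pod name nginx:

```bash
kubectl delete pod nginx
```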
Check the current state of pods
Kubernetes manifest
A manifest defines the desired state for Kubernetes resources, such as Pods, Deployments, Services, etc., in a declarative manner.
Original command
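Presumably the imperative command from the "Start a pod" step, shown here for contrast with the manifest below (the pod name nginx is an assumption):

```bash
kubectl run nginx --image=nginx --port=80
```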
Manifest
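A sketch of the equivalent declarative manifest, assuming the same pod name and image:

```yaml
# manifest.yml — declarative equivalent of the kubectl run command above
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```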
Breaking down the manifest
Applying the manifest
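Assuming the manifest above is saved as manifest.yml:

```bash
kubectl apply -f manifest.yml
```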
Delete the pod
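Either delete it by name or via the manifest file, assuming the names above:

```bash
kubectl delete pod nginx
# or, equivalently:
kubectl delete -f manifest.yml
```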
Deployment
A Deployment in Kubernetes is a higher-level abstraction that manages a set of Pods and provides declarative updates to them. It offers features like scaling, rolling updates, and rollback capabilities, making it easier to manage the lifecycle of applications.
Key Differences Between Deployment and Pod:
Abstraction Level:
Pod: A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster, typically containing one or more containers.
Deployment: A Deployment is a higher-level controller that manages a set of identical Pods. It ensures the desired number of Pods are running and provides declarative updates to the Pods it manages.
Management:
Pod: Pods are ephemeral, meaning they can be created and destroyed frequently.
Deployment: Deployments manage Pods by ensuring the specified number of replicas are running at any given time. If a Pod fails, the Deployment controller replaces it automatically.
Updates:
Pod: Directly updating a Pod requires manual intervention and can lead to downtime.
Deployment: Supports rolling updates, allowing you to update the Pod template (e.g., new container image) and roll out changes gradually. If something goes wrong, you can roll back to a previous version.
Scaling:
Pod: Scaling Pods manually involves creating or deleting individual Pods.
Deployment: Allows easy scaling by specifying the desired number of replicas. The Deployment controller adjusts the number of Pods automatically.
Self-Healing:
Pod: If a Pod crashes, it needs to be restarted manually unless managed by a higher-level controller like a Deployment.
Deployment: Automatically replaces failed Pods, ensuring the desired state is maintained.
Series of events
When you run the following command, a bunch of things happen
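The command itself was not preserved in these notes; a plausible reconstruction, given that the breakdown below mentions 3 replicas of nginx, is:

```bash
kubectl create deployment nginx-deployment --image=nginx --replicas=3
```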
Step-by-Step Breakdown:
Command Execution:
You execute the command on a machine with kubectl installed and configured to interact with your Kubernetes cluster.
API Request:
kubectl sends a request to the Kubernetes API server to create a Deployment resource with the specified parameters.
API Server Processing:
The API server receives the request, validates it, and then processes it. If the request is valid, the API server updates the desired state of the cluster stored in etcd. The desired state now includes the new Deployment resource.
Storage in etcd:
The Deployment definition is stored in etcd, the distributed key-value store used by Kubernetes to store all its configuration data and cluster state. etcd is the source of truth for the cluster's desired state.
Deployment Controller Monitoring:
The Deployment controller, which is part of the kube-controller-manager, continuously watches the API server for changes to Deployments. It detects the new Deployment you created.
ReplicaSet Creation:
The Deployment controller creates a ReplicaSet based on the Deployment's specification. The ReplicaSet is responsible for maintaining a stable set of replica Pods running at any given time.
Pod Creation:
The ReplicaSet controller (another part of the kube-controller-manager) ensures that the desired number of Pods (in this case, 3) are created and running. It sends requests to the API server to create these Pods.
Scheduler Assignment:
The Kubernetes scheduler watches for new Pods that are in the "Pending" state. It assigns these Pods to suitable nodes in the cluster based on available resources and scheduling policies.
Node and Kubelet:
The kubelet on the selected nodes receives the Pod specifications from the API server. It then pulls the necessary container images (nginx in this case) and starts the containers.
💡 A good question to ask at this point is: why do you need a deployment when a replicaset is good enough to bring up and heal pods?
Create a replicaset
Let’s not worry about deployments; let’s just create a replicaset that starts 3 pods.
Create rs.yml
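A sketch of rs.yml, assuming the nginx image, 3 replicas, and the app=nginx label used later in this section:

```yaml
# rs.yml — a replicaset that keeps 3 nginx pods running
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```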
Apply the manifest
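Assuming the file name rs.yml:

```bash
kubectl apply -f rs.yml
```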
Get the rs details
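For example:

```bash
kubectl get rs
```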
Check the pods
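For example:

```bash
kubectl get pods
```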
Try deleting a pod and check if it self heals
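A sketch, substituting one of the pod names from the previous output (the exact suffix will differ in your cluster):

```bash
# delete one pod managed by the replicaset
kubectl delete pod nginx-replicaset-vj42z
# the replicaset should immediately create a replacement
kubectl get pods
```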
Try adding a pod with the app=nginx label.
Ensure it gets terminated immediately because the rs already has 3 pods.
Delete the replicaset.
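Sketches for the steps above; the standalone pod name nginx-pod is an assumption, and the label matches the replicaset's selector:

```bash
# start a pod carrying the app=nginx label; the rs now sees 4 matching pods
# and terminates one to get back to 3
kubectl run nginx-pod --image=nginx --labels="app=nginx"
kubectl get pods
# finally, delete the replicaset
kubectl delete rs nginx-replicaset
```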
💡 Note the naming convention of the pods. The pods are named after the replicaset, followed by a unique id (e.g. nginx-replicaset-vj42z).
Create a deployment
Let's create a deployment that starts 3 pods
Create deployment.yml
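A sketch of deployment.yml, assuming the nginx image and 3 replicas:

```yaml
# deployment.yml — a deployment that manages 3 nginx pods via a replicaset
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```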
Apply the deployment
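Assuming the file name deployment.yml:

```bash
kubectl apply -f deployment.yml
```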
Get the deployment
Get the rs
Get the pod
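Sketches for the three checks above:

```bash
kubectl get deployment
kubectl get rs
kubectl get pods
```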
Try deleting a pod
Ensure the pods are still up
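A sketch, substituting one of the pod names from the previous output:

```bash
kubectl delete pod <pod-name>
kubectl get pods   # the deployment's replicaset should have replaced it
```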
Installing Kubernetes on Windows and macOS
Windows:
Install WSL2: Ensure you have Windows Subsystem for Linux 2 (WSL2) enabled. You can enable it by running wsl --install in an elevated PowerShell window.
Install Docker Desktop: Download and install Docker Desktop for Windows, then enable Kubernetes from Docker Desktop's Settings.
Verify Installation: Open a PowerShell or Command Prompt window and run the following command:
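The command itself was stripped from these notes; given that the next line expects a list of pods, it is presumably:

```bash
kubectl get pods
```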
If Kubernetes is installed correctly, you should see a list of pods.
macOS:
Install Homebrew (if not already installed): Open Terminal and run the install script (first command in the sketch below).
Install Kubernetes tooling: Install kubectl via Homebrew (second command below).
Verify Installation: Run the verification command in Terminal (third command below).
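A sketch of the three commands referenced above; the first line is the standard Homebrew installer one-liner from brew.sh, and kubectl is installed through Homebrew's kubernetes-cli formula:

```bash
# 1. install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# 2. install kubectl
brew install kubectl
# 3. verify against a running cluster
kubectl get pods
```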
If Kubernetes is installed correctly, you should see a list of pods.
Additional Notes:
You may need to restart your computer after installing WSL2 or Docker Desktop for the changes to take effect.
If you encounter any issues, refer to the official Kubernetes documentation or community forums for troubleshooting.
For more advanced Kubernetes setups, consider using tools like Minikube or kind to create local Kubernetes clusters.
By following these steps, you should be able to successfully install Kubernetes on your Windows or macOS system and start using it to manage your containerized applications.
A pod in Kubernetes is a group of containers that share a common network namespace and can share volumes. It's the smallest deployable unit in Kubernetes.
Key characteristics of a pod:
Containers: A pod can contain one or more containers. These containers share the same network namespace, which means they can communicate with each other directly without needing to go through a network service.
Volumes: Pods can mount volumes, which are persistent storage units that can be shared between containers within the pod or across multiple pods.
Lifecycle: Kubernetes manages the lifecycle of pods, including creation, deletion, and restarting if necessary.
Labels: Pods can be labeled to group and identify them.
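A hypothetical two-container pod illustrating these characteristics: both containers share the pod's network namespace and an emptyDir volume, and the pod carries a label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod        # hypothetical name
  labels:
    app: demo             # label used to group/identify the pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}        # volume shared by the two containers below
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: web
      image: nginx        # serves the file written by the other container
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
```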
Why use pods?
Co-locating containers: Pods are used to co-locate containers that need to work closely together. For example, a web server and a database container might be placed in the same pod.
Sharing resources: Pods can share resources like volumes and network interfaces, which can improve efficiency and reduce overhead.
Isolation: Each pod is isolated from other pods, providing a level of security and preventing conflicts.
In summary, a pod is a fundamental building block in Kubernetes, providing a way to group and manage containers and their associated resources.