Getting Started with Containerization and Kubernetes on Linux


In today’s fast-paced world of software development, two essential technologies are leading the charge for efficient, scalable, and flexible infrastructure: containerization and Kubernetes. Together, they are revolutionizing the way applications are developed, deployed, and managed, providing agility and consistency in cloud environments.

What is Containerization?

Containerization is the process of packaging an application and all its dependencies (libraries, binaries, configuration files) into a container. A container is a lightweight, standalone, and executable package that includes everything needed to run the application, ensuring consistency across different environments—whether it’s a developer’s laptop, a testing server, or a production environment.

Containers solve the common problem of “it works on my machine” by creating isolated environments. This allows developers to run their applications without worrying about system differences, enabling greater portability and efficiency. Docker is one of the most popular platforms for containerization.

Benefits of Containerization:
  • Portability: Containers can run across any platform that supports container engines (Linux, Windows, etc.).
  • Consistency: Ensures the same environment in development, testing, and production.
  • Resource Efficiency: Containers are lightweight and share the host system’s kernel, consuming fewer resources than traditional virtual machines.
  • Isolation: Each container runs in its own isolated environment, preventing interference between applications.

What is Kubernetes?

While containers offer a robust solution for deploying applications, managing a large number of containers across multiple environments becomes complex. This is where Kubernetes (often abbreviated as K8s) comes in. Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.

Originally developed by Google, Kubernetes has become the industry standard for managing containerized workloads and services. It abstracts away the underlying infrastructure, allowing developers to focus on their applications while Kubernetes handles scheduling, scaling, and maintenance.

Key Features of Kubernetes:
  • Automated Scaling: Kubernetes can automatically adjust the number of running containers based on demand, ensuring optimal resource usage.
  • Self-Healing: If a container fails, Kubernetes automatically restarts it or replaces it, ensuring high availability.
  • Load Balancing: Distributes traffic across multiple containers, optimizing performance and preventing overload.
  • Rolling Updates: Kubernetes allows seamless updates to your applications without downtime, ensuring continuous availability.
  • Declarative Configuration: Kubernetes uses YAML or JSON files for defining the desired state of applications and infrastructure, enabling automated management.
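To make the declarative model concrete, the desired state of an application can be expressed as a plain data structure and serialized to a manifest (Kubernetes accepts JSON as well as YAML). The sketch below builds a minimal Deployment description in Python; the app name, image, and replica count are illustrative placeholders, not part of this guide’s later deployment:

```python
import json

# Desired state for a hypothetical Deployment, expressed as a plain dict.
# Kubernetes compares this declared state against the cluster's actual
# state and reconciles any difference (e.g., starting missing Pods).
desired_state = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},          # illustrative name
    "spec": {
        "replicas": 3,                          # how many Pods we *want*
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "demo-app",
                        "image": "demo-app:1.0",  # illustrative image tag
                        "ports": [{"containerPort": 5000}],
                    }
                ]
            },
        },
    },
}

# Serialize to a JSON manifest, which could be applied with kubectl.
manifest = json.dumps(desired_state, indent=2)
print(manifest)
```

The key point is that the manifest describes *what* should exist, not *how* to get there; the cluster’s control loops do the rest.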

How Kubernetes and Containers Work Together

Kubernetes acts as the “brains” behind a fleet of containers. Once you package your application into a container, Kubernetes takes over the orchestration tasks—deciding where and when to run containers, how to connect them, and how to scale them based on real-time conditions.

Containers run inside Kubernetes Pods, which are the smallest deployable units in the Kubernetes ecosystem. A Pod can contain one or more containers that share the same resources, such as network and storage. Kubernetes also offers persistent storage, networking configurations, and security policies for managing containers at scale.
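As an illustration, a two-container Pod might be declared like this (the names and images are hypothetical examples, not part of this guide’s deployment); because both containers share the Pod’s network namespace, they can reach each other over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # illustrative name
spec:
  containers:
    - name: web                 # main application container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-agent           # sidecar container sharing the Pod's network and lifecycle
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
```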

The Power of Combining Kubernetes and Containers

Combining the flexibility of containers with the automation and management capabilities of Kubernetes brings several advantages:

  • Efficient Resource Utilization: Kubernetes ensures your applications are using resources efficiently, with automated scaling and resource allocation.
  • Fault Tolerance: With self-healing capabilities, Kubernetes keeps applications running smoothly, even in the face of hardware or software failures.
  • Simplified DevOps: Kubernetes integrates well with CI/CD pipelines, facilitating automated deployment and updates.
  • Cross-Platform Compatibility: Kubernetes works with any container runtime (e.g., Docker, containerd), making it platform-agnostic.

Step-by-Step Guide: Containerization and Kubernetes Deployment (Linux)

Prerequisites:

  • A Linux machine (the commands below assume Ubuntu or another Debian-based distribution)
  • Basic understanding of Linux commands
  • Docker installed
  • Minikube for Kubernetes

Step 1: Install Docker

  1. Update your package index:

sudo apt update

  2. Install the packages needed to add Docker’s repository:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

  3. Add Docker’s official GPG key and repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

  4. Install Docker:

sudo apt update
sudo apt install docker-ce

Step 2: Install Minikube and Kubernetes (K8s)

Minikube is a lightweight Kubernetes implementation that creates a local Kubernetes cluster on your machine. It is designed to help developers and teams learn, experiment with, and develop applications in a Kubernetes environment without needing access to a full-fledged cloud infrastructure.

  1. Install Minikube:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

  2. Verify the Minikube installation:

minikube version

  3. Start Minikube:

minikube start

  4. Install kubectl (the Kubernetes command-line tool):

kubectl is the command-line tool used to interact with Kubernetes clusters. It provides a way to deploy applications, manage cluster resources, inspect and view logs, and perform various administrative tasks within the Kubernetes environment.

sudo apt-get update
sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

  5. Verify the kubectl installation:

kubectl version --client

Step 3: Write the Python Flask Application

Create a simple Python Flask app that will be containerized and deployed to Kubernetes.

  1. Create a project directory:
    mkdir flask-app
    cd flask-app
  2. Create app.py:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def home():
        return "Hello, Kubernetes!"

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)

  3. Create requirements.txt:
    Flask==2.0.2
  4. Test the Flask app:
    • Install Flask and dependencies:
      pip install -r requirements.txt
    • Run the app locally:
      python app.py

Step 4: Containerize the Flask Application with Docker

  1. Create a Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Run app.py when the container launches
CMD ["python", "app.py"]

 

  2. Build the Docker image:

docker build -t flask-app .

  3. Run the Docker container:

docker run -d -p 5000:5000 flask-app

Step 5: Deploy to Kubernetes using Minikube

  1. Create a Kubernetes Deployment YAML file (flask-deployment.yaml). Note that the image referenced in the manifest must be reachable from the cluster: either push it to a registry such as Docker Hub (as done here with thejask46/flask-app) or build it inside Minikube’s Docker daemon with eval $(minikube docker-env).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: thejask46/flask-app:latest
          ports:
            - containerPort: 5000

  2. Apply the Deployment configuration:

kubectl apply -f flask-deployment.yaml

  3. Check the status of the Deployment:

kubectl get deployments

  4. Check the Pods:

kubectl get pods
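Note that the Deployment alone is not reachable from outside the cluster; to access the Flask app you also need a Service that exposes the Pods. A minimal NodePort Service matching the labels above might look like this (the file and Service names here are assumed for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service       # assumed name
spec:
  type: NodePort
  selector:
    app: flask-app              # must match the Deployment's Pod labels
  ports:
    - port: 5000
      targetPort: 5000
```

Apply it with kubectl apply -f flask-service.yaml, then open the app in a browser with minikube service flask-app-service.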

Step 6: Scaling the Application

To scale the number of Pods for the Flask app:

  1. Scale up the Deployment:
    kubectl scale deployment flask-app-deployment --replicas=5
  2. Verify the scaled Pods:
    kubectl get pods

Conclusion

This demo showed how to containerize a Python Flask application and deploy it to a Kubernetes cluster using Minikube. We also saw how to scale the app by increasing the number of replicas. Containerization and Kubernetes are essential tools for modern application development, providing flexibility, scalability, and ease of management in cloud environments.

Happy reading!!

 


Thejas K
