In today’s fast-paced world of software development, two essential technologies are leading the charge for efficient, scalable, and flexible infrastructure: containerization and Kubernetes. Together, they are revolutionizing the way applications are developed, deployed, and managed, providing agility and consistency in cloud environments.
Containerization is the process of packaging an application and all its dependencies (libraries, binaries, configuration files) into a container. A container is a lightweight, standalone, and executable package that includes everything needed to run the application, ensuring consistency across different environments—whether it’s a developer’s laptop, a testing server, or a production environment.
Containers solve the common problem of “it works on my machine” by creating isolated environments. This allows developers to run their applications without worrying about system differences, enabling greater portability and efficiency. Docker is one of the most popular platforms for containerization.
While containers offer a robust solution for deploying applications, managing a large number of containers across multiple environments becomes complex. This is where Kubernetes (often abbreviated as K8s) comes in. Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.
Originally developed by Google, Kubernetes has become the industry standard for managing containerized workloads and services. It abstracts away the underlying infrastructure, allowing developers to focus on their applications while Kubernetes handles scheduling, scaling, and maintenance.
Kubernetes acts as the “brains” behind a fleet of containers. Once you package your application into a container, Kubernetes takes over the orchestration tasks—deciding where and when to run containers, how to connect them, and how to scale them based on real-time conditions.
Containers run inside Kubernetes Pods, which are the smallest deployable units in the Kubernetes ecosystem. A Pod can contain one or more containers that share the same resources, such as network and storage. Kubernetes also offers persistent storage, networking configurations, and security policies for managing containers at scale.
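As a small illustration, a single-container Pod manifest looks like the sketch below (the Pod name and image here are hypothetical and for illustration only; later in this post, Pods are created indirectly through a Deployment rather than written by hand):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical name, for illustration only
spec:
  containers:
  - name: web
    image: nginx:1.25     # any container image works here
    ports:
    - containerPort: 80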
Combining the flexibility of containers with the automation and management capabilities of Kubernetes brings several advantages: consistent behavior from a developer's laptop through to production, automated deployment and scaling, and far simpler management of large numbers of containers in cloud environments.
To see this in practice, let's install Docker on Ubuntu, set up a local Kubernetes cluster with Minikube, and deploy a sample application.
1. Update the package index:
sudo apt update
2. Install the necessary packages:
sudo apt install apt-transport-https ca-certificates curl software-properties-common
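Note that docker-ce is not in the stock Ubuntu repositories, so Docker's own apt repository usually has to be added first. That step is not shown in the original list; a common sequence (a sketch, using the same apt-key style as the kubectl steps later) is:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"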
3. Install Docker:
sudo apt update
sudo apt install docker-ce
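To confirm the installation worked (an optional check, assuming the packages above installed cleanly):
docker --version
sudo docker run hello-world   # pulls and runs a tiny test image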
Minikube is a lightweight Kubernetes implementation that creates a local Kubernetes cluster on your machine. It is designed to help developers and teams learn, experiment with, and develop applications in a Kubernetes environment without needing access to a full-fledged cloud infrastructure.
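1. Install Minikube (this step is implied by the numbering below but not shown; one common approach on Linux, using the official release binary, is sketched here):
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube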
2. Verify Minikube installation:
minikube version
3. Start Minikube:
minikube start
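Before moving on, it is worth confirming that the local cluster is actually up:
minikube status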
4. Install kubectl (Kubernetes command-line tool):
kubectl is the command-line tool used to interact with Kubernetes clusters. It provides a way to deploy applications, manage cluster resources, inspect and view logs, and perform various administrative tasks within the Kubernetes environment.
sudo apt-get update
sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
5. Verify kubectl installation:
kubectl version --client
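Assuming minikube start completed earlier, kubectl should now be able to talk to the local cluster:
kubectl cluster-info
kubectl get nodes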
Create a simple Python Flask app that will be containerized and deployed to Kubernetes.
mkdir flask-app
cd flask-app
Create a file named app.py with the following content:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Hello, Kubernetes!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Create a requirements.txt file listing the app's dependency:
Flask==2.0.2
Install the dependency and run the app locally to confirm it works:
pip install -r requirements.txt
python app.py
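With the development server running, a quick request from another terminal confirms the endpoint responds (assuming the default port 5000 from app.py):
curl http://localhost:5000
# Expected output: Hello, Kubernetes!
Once the app works locally, the next step is to containerize it with Docker.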
1. Create a Dockerfile in the flask-app directory:
# Use an official Python runtime as a parent image
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Run app.py when the container launches
CMD ["python", "app.py"]
2. Build the Docker image:
docker build -t flask-app .
3. Run the Docker container:
docker run -d -p 5000:5000 flask-app
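One detail to keep in mind: the Deployment manifest below references the image thejask46/flask-app:latest, so the locally built image has to be reachable from the Minikube cluster. A sketch of two common options (the Docker Hub repository name is the one used in the manifest; substitute your own account):
docker tag flask-app thejask46/flask-app:latest
docker push thejask46/flask-app:latest   # requires docker login first
Alternatively, the local image can be loaded straight into Minikube (in that case the Deployment's image field would reference flask-app:latest instead):
minikube image load flask-app:latest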
With the image built, the app can be deployed to the Minikube cluster.
1. Create a file named flask-deployment.yaml with the following Deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask-app
        image: thejask46/flask-app:latest
        ports:
        - containerPort: 5000
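The original Service manifest is not reproduced here; a minimal NodePort Service that exposes the Flask Pods could look like the following sketch (the name flask-app-service is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service   # assumed name, adjust as needed
spec:
  type: NodePort
  selector:
    app: flask-app          # must match the Deployment's Pod labels
  ports:
  - port: 5000
    targetPort: 5000
It can be appended to flask-deployment.yaml (separated by a line containing only ---) so that the single apply command below creates both objects.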
2. Apply the Deployment and Service configurations:
kubectl apply -f flask-deployment.yaml
3. Check the status of the Deployment:
kubectl get deployments
4. Check the Pods:
kubectl get pods
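To reach the app from the host machine, Minikube can open the NodePort Service sketched above (using the assumed name flask-app-service), or kubectl can forward a local port directly to the Deployment:
minikube service flask-app-service
kubectl port-forward deployment/flask-app-deployment 8080:5000   # then browse http://localhost:8080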
To scale the number of Pods for the Flask app:
kubectl scale deployment flask-app --replicas=5
kubectl get pods
This walkthrough showed how to containerize a Python Flask application, deploy it to a local Kubernetes cluster with Minikube, and scale it by increasing the number of replicas. Containerization and Kubernetes are essential tools for modern application development, providing flexibility, scalability, and ease of management in cloud environments.
Happy reading!!
Thejas K