Scaling Applications with Kubernetes

Tutorial 4 of 5

1. Introduction

Goal

This tutorial shows you how to scale applications with Kubernetes. By the end, you will be able to adjust the number of Pod replicas and use Kubernetes Services for load balancing, letting you manage the load on your applications effectively.

Learning Outcomes

  • Understanding Kubernetes and its scaling capabilities.
  • Adjusting the number of Pod replicas.
  • Using Kubernetes Services for load balancing.

Prerequisites

  • Basic understanding of Kubernetes.
  • A Kubernetes cluster up and running.
  • Familiarity with the command line interface.

2. Step-by-Step Guide

Understanding Kubernetes Scaling

Kubernetes supports two types of scaling: horizontal and vertical. Horizontal scaling increases or decreases the number of Pods, while vertical scaling increases (or decreases) the CPU or memory allocated to existing Pods.
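The simplest form of horizontal scaling is changing a Deployment's replica count directly. As a minimal sketch (the name my-app and the nginx image are illustrative assumptions), the replica count lives in the manifest's spec:

```yaml
# deployment.yaml -- minimal Deployment sketch; the name "my-app"
# and the image "nginx" are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4          # horizontal scale: number of Pod copies
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
```

The same adjustment can be made imperatively with kubectl scale deployment my-app --replicas=4, but the declarative form keeps the desired count in version control.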

Horizontal Pod Autoscaling

The Horizontal Pod Autoscaler (HPA) automatically scales the number of Pods in a replication controller, Deployment, ReplicaSet, or StatefulSet based on observed CPU utilization (and, with the autoscaling/v2 API, memory or custom metrics).

To create a Horizontal Pod Autoscaler, you can use the kubectl autoscale command:

kubectl autoscale deployment <deployment-name> --min=2 --max=5 --cpu-percent=80

This command creates an autoscaler that maintains between 2 and 5 replicas, adding or removing Pods to keep average CPU utilization at around 80%.
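The autoscale command can also be expressed declaratively. A sketch of the equivalent manifest, assuming the target Deployment is named my-app and the cluster supports the autoscaling/v2 API:

```yaml
# hpa.yaml -- declarative equivalent of the kubectl autoscale command;
# the Deployment name "my-app" is an assumption for illustration.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # target average CPU utilization (%)
```

Apply it with kubectl apply -f hpa.yaml. Note that the HPA needs the metrics server running and CPU requests set on the Pods to compute utilization.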

Load Balancing with Kubernetes Services

A Kubernetes Service abstracts access to a set of Pods. It provides a single, stable IP address and distributes network traffic across all Pods matching the Service's selector.

To create a LoadBalancer Service:

kubectl expose deployment <deployment-name> --type=LoadBalancer --name=<service-name>
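Equivalently, the Service can be written as a manifest. A sketch, assuming the Pods carry the label app: my-app and listen on port 80:

```yaml
# service.yaml -- LoadBalancer Service sketch; the names "my-service"
# and label "app: my-app" are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app        # traffic goes to Pods matching this label
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 80     # container port traffic is forwarded to
```

On clusters without a cloud load balancer integration, the external IP will stay in a pending state; a Service of type NodePort is a common fallback for local testing.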

3. Code Examples

Example 1: Creating a Horizontal Pod Autoscaler

# Create a Deployment (kubectl run no longer creates Deployments in current versions)
kubectl create deployment my-app --image=nginx

# Set a CPU request so the HPA can compute utilization
kubectl set resources deployment my-app --requests=cpu=200m

# Create a Horizontal Pod Autoscaler
kubectl autoscale deployment my-app --min=2 --max=5 --cpu-percent=80

Example 2: Creating a Kubernetes Service for Load Balancing

# Expose the Deployment as a Service
kubectl expose deployment my-app --type=LoadBalancer --name=my-service

4. Summary

  • Learned about Kubernetes and its scaling capabilities.
  • Learned how to adjust the number of Pod replicas in Kubernetes.
  • Learned how to use Kubernetes Services for load balancing.

Next Steps

  • Try to deploy your own application on Kubernetes and scale it.
  • Learn more about Kubernetes Services and their types.

Additional Resources

  • Kubernetes Official Documentation link
  • Kubernetes Scaling Documentation link

5. Practice Exercises

Exercise 1

Create a Deployment with an image of your choice and expose it as a Service of type LoadBalancer.

Exercise 2

Create a Horizontal Pod Autoscaler for the Deployment you created in Exercise 1. Set the minimum number of Pods to 3 and the maximum to 10. Set the target CPU utilization to 50%.

Exercise 3

Check the status of the Horizontal Pod Autoscaler you created in Exercise 2. Try to generate some load on your application and observe how Kubernetes automatically scales the number of Pods.