Introduction
Suppose that you’ve started a new job as a software developer at the world’s most popular pizza joint, Contoso Pizza. Business is booming, and Contoso Pizza’s website, which shows whether pizzas are in stock, has recently been refactored into microservices hosted in Docker containers.
In a microservice-based development approach, each microservice owns its own model and data, so it can be developed and deployed independently of the other microservices. Hosting each microservice inside a container is a common way to achieve that. These kinds of systems are complex to scale out and manage: you need to consider the process of organizing, adding, removing, and updating many containers. This process is referred to as container management.
For example, you may find that during specific times of the day you need to scale up the number of container instances that handle caching. Or you may need to roll out an update to the container that checks pizza inventory.
To help with container management tasks, you can use a container orchestrator. Kubernetes is one such orchestrator. It is an extensible open-source platform for managing and orchestrating containerized workloads.
This post will teach you what Kubernetes is, which problems it solves, and how to deploy a .NET web API and web app into a Kubernetes cluster.
The decoupled design of microservices, combined with the atomicity of containers, makes it possible to scale apps out in response to increased demand by deploying more container instances, and to scale back when demand decreases. In complex solutions, however, the process of deploying, updating, monitoring, and removing containers introduces challenges.
Container management
Container management is the process of organizing, adding, removing, or updating a significant number of containers.
The Contoso Pizza Company website consists of multiple microservices responsible for tasks like caching, data processing, and a shopping cart. Each of these services is hosted in a container and can be deployed, updated, and scaled independently from one another.
If you increase the number of shopping cart container instances and then need to deploy a new version, you’ll have to update every single instance of that container.
Container management helps with these tasks.
Container orchestration
A container orchestrator is a system that automatically deploys and manages containerized apps. For example, the orchestrator can dynamically respond to changes in the environment to increase or decrease the deployed instances of the managed app. Or, it can ensure all deployed container instances get updated if a new version of a service is released.
Kubernetes
Kubernetes is a portable, extensible open-source platform for managing and orchestrating containerized workloads. Kubernetes abstracts away complex container management tasks, and provides you with declarative configuration to orchestrate containers in different computing environments. This orchestration platform gives you the same ease of use and flexibility you may already know from platform as a service (PaaS) or infrastructure as a service (IaaS) offerings.
Benefits
The benefits of using Kubernetes are based on the abstraction of tasks.
These tasks include:
- Self-healing of containers. An example would be restarting containers that fail or replacing containers.
- Scaling deployed container count up or down dynamically, based on demand.
- Automating rolling updates and rollbacks of containers.
- Managing storage.
- Managing network traffic.
- Storing and managing sensitive information, such as usernames and passwords.
Because Kubernetes is a tool to orchestrate containerized workloads, and you can deploy .NET microservices into containers, you can use Kubernetes to orchestrate your .NET microservices. That’s what the rest of this post will teach you.
Push a microservice image to Docker Hub
In order for Kubernetes to create a container, it needs an image to create it from. Docker Hub is a central place to upload Docker images. Many products, including Kubernetes, can create containers based on images in Docker Hub.
Retrieve the Contoso Pizza Shop microservice container images
The code for the Contoso Pizza Shop and the Dockerfiles to build the container images have already been created for you. Clone the repository from GitHub to retrieve the code.
- Open a command prompt or terminal window.
- Navigate to the directory where you want the code downloaded. The code will be cloned into a new folder at that location.
- Run the following command to download, or clone, the sample repository.
git clone https://github.com/microsoftdocs/mslearn-dotnet-kubernetes
The code is downloaded into a new folder called mslearn-dotnet-kubernetes.
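To confirm the clone succeeded, switch into the new folder and list its contents (use dir instead of ls on a Windows command prompt):
cd mslearn-dotnet-kubernetes
ls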
Verify the Docker images by creating containers locally
There are two containers in the Contoso Pizza Shop project. Before pushing the images to Docker Hub, let’s use them to create the containers locally. After the containers are created and running, we’ll be able to browse the Contoso Pizza Company website and verify the microservices are running correctly.
Follow these steps to create and run Docker containers from the Dockerfiles you downloaded.
- Make sure Docker Desktop is running.
- Open a command prompt and move to the mslearn-dotnet-kubernetes directory.
- Run the following command to build the containers.
docker-compose build
It may take a while to build the container images.
- Run the following command to start the app and attach to the containers.
docker-compose up
- When the operation has finished, browse to http://localhost:5902 in a browser tab to view the Contoso Pizza Shop menu.
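For context, the docker-compose build and docker-compose up commands work because the repository’s root contains a docker-compose.yml that describes both services. As a rough sketch only (the actual service names, build contexts, and port mappings in the repository may differ), such a file wires the two containers together along these lines:

# Simplified compose sketch; paths and host ports here are assumptions.
services:
  pizzabackend:
    build: ./backend          # assumed location of the backend Dockerfile
  pizzafrontend:
    build: ./frontend         # assumed location of the frontend Dockerfile
    ports:
      - "5902:80"             # matches the http://localhost:5902 URL used above
    depends_on:
      - pizzabackend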
Sign in to Docker Hub
Before you can upload the images, you need to sign in to Docker Hub. From the command prompt, enter the following:
docker login
Important: Use the same username and password from when you created your Docker account. You can visit the Docker Hub website to reset your password, if needed.
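If you prefer, you can pass your username directly and let Docker prompt only for the password:
docker login -u [YOUR DOCKER USER NAME]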
Upload the images to Docker Hub
- Enter the following commands to retag, or rename, the Docker images you created under your Docker username.
docker tag pizzafrontend [YOUR DOCKER USER NAME]/pizzafrontend
docker tag pizzabackend [YOUR DOCKER USER NAME]/pizzabackend
- Then finally upload, or push, the Docker images to Docker Hub.
docker push [YOUR DOCKER USER NAME]/pizzafrontend
docker push [YOUR DOCKER USER NAME]/pizzabackend
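Before or after pushing, you can confirm the retagged images exist locally with a standard Docker command:
docker image ls
In the output, you should see rows for [YOUR DOCKER USER NAME]/pizzafrontend and [YOUR DOCKER USER NAME]/pizzabackend alongside the original pizzafrontend and pizzabackend tags.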
In this exercise, you cloned Contoso Pizza Shop code from GitHub, used Dockerfiles contained within that code to create two Docker images and containers, and then pushed those images to Docker Hub.
Now you’re ready to use Kubernetes to manage the deployment of Contoso Pizza Company’s microservices.
Deploy a microservice container to Kubernetes
Kubernetes runs containers for you. You describe what you want Kubernetes to do through a YAML file. This exercise will walk you through the creation of the file so you can deploy and run the backend service on Kubernetes.
Important: Before proceeding, make sure you have a Kubernetes implementation installed. We’ll be using the implementation included with Docker Desktop. Follow these directions from Docker to enable it.
Create a deployment file for the backend service
You can manage the deployment of a container into Kubernetes with a YAML file. Let’s create one to deploy the backend service.
- Open a text editor, such as Visual Studio Code, and switch to the directory where you cloned the project files earlier.
- Create a new file in the root of the project called backend-deploy.yml.
- Copy the following text into the file and then save it.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pizzabackend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: pizzabackend
    spec:
      containers:
        - name: pizzabackend
          image: [YOUR DOCKER USER NAME]/pizzabackend:latest
          ports:
            - containerPort: 80
          env:
            - name: ASPNETCORE_URLS
              value: http://*:80
  selector:
    matchLabels:
      app: pizzabackend
---
apiVersion: v1
kind: Service
metadata:
  name: pizzabackend
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: pizzabackend
- Replace the placeholder [YOUR DOCKER USER NAME] with your actual Docker username.
This file does a couple of things.
The first portion defines a deployment spec for the container that will be deployed into Kubernetes. It specifies that there will be one replica, where to find the container image, which ports to open on the container, and which environment variables to set. It also defines the labels and names by which the container and spec can be referenced.
The second portion defines a Kubernetes Service of type ClusterIP for the container. For this post, you don’t need to understand all of the specifics of ClusterIPs, but do know that this type of service doesn’t expose an external IP address. It’s only accessible from other services running within the same Kubernetes cluster.
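After you deploy the service in the next step, you can confirm this with a standard kubectl inspection command; for a ClusterIP service, the EXTERNAL-IP column shows <none>:
kubectl get service pizzabackend
This command only reads the cluster state; it doesn’t change the deployment.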
Deploy and run the backend microservice
Next let’s deploy and run the microservice.
- Open a command prompt in the same directory where you created the backend-deploy.yml file.
- Run the following command.
kubectl apply -f backend-deploy.yml
This command tells Kubernetes to apply the configuration in the file we just created. Kubernetes will download the image from Docker Hub and create the container.
- The kubectl apply command will return quickly, but the container creation may take a while. To view the progress, use the following.
kubectl get pods
In the resulting output, you’ll see a row with pizzabackend followed by a string of random characters under the NAME column. When everything is ready, there’ll be a 1/1 under the READY column and Running under the STATUS column.
- Browse to http://localhost/pizzainfo. It will return an HTTP 404 Not Found message. This error is because the pizza backend service is not accessible from the outside world.
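Although the ClusterIP service isn’t reachable from outside the cluster, kubectl can temporarily forward a local port to it if you want to check the backend directly. The local port 5200 below is an arbitrary choice; press Ctrl+C to stop forwarding:
kubectl port-forward service/pizzabackend 5200:80
While the forward is running, http://localhost:5200/pizzainfo should reach the backend’s pizza endpoint.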
Create a deployment file and run the frontend service
Much like the backend service, we will need a deployment file for the front end as well.
- Create a new file named frontend-deploy.yml
- Paste the following in.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pizzafrontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: pizzafrontend
    spec:
      containers:
        - name: pizzafrontend
          image: [YOUR DOCKER USER NAME]/pizzafrontend
          ports:
            - containerPort: 80
          env:
            - name: ASPNETCORE_URLS
              value: http://*:80
            - name: backendUrl
              value: http://pizzabackend
  selector:
    matchLabels:
      app: pizzafrontend
---
apiVersion: v1
kind: Service
metadata:
  name: pizzafrontend
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: pizzafrontend
- Replace the placeholder [YOUR DOCKER USER NAME] with your actual Docker username.

You’ll notice this file is similar to the one we created for the backend microservice. There are three differences:

- We’re specifying a different container to run under the deployment’s spec.template.spec.containers.image value.
- There’s a new environment variable under the spec.template.spec.containers.env section. The code in the pizzafrontend application calls the backend, but because we haven’t specified a fully qualified domain name, and we won’t know the IP address of the backend microservice, we use the name we gave the backend’s Service under its metadata.name node (the same name as its Deployment). Kubernetes takes care of the rest.
- In the service section, we’re specifying a value of LoadBalancer for spec.type, with port 80 open. This means we’ll be able to browse the pizza frontend by navigating to http://localhost.
- Deploy the container to Kubernetes with the following command.
kubectl apply -f frontend-deploy.yml
Again you can use kubectl get pods to see the status of the deployment. Once the row for pizzafrontend displays Running under the STATUS column, everything is ready to go.
- When the container has been successfully deployed, browse to http://localhost to see both microservices running.
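Before moving on, you can compare the two service types with a standard kubectl command. On Docker Desktop, the pizzafrontend LoadBalancer service typically reports localhost under EXTERNAL-IP, while the pizzabackend ClusterIP service reports <none>:
kubectl get services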
In this exercise, you created a deployment file that described exactly how you wanted the containers to run within Kubernetes. Then you had Kubernetes download the image from Docker Hub and start up the containers.
Scale a container instance in Kubernetes
Your microservice may come under heavy load during certain times of the day. Kubernetes makes it easy to scale your microservice by adding additional instances for you.
Run the following command to scale the backend microservice to five instances.
kubectl scale --replicas=5 deployment/pizzabackend
The reason we specify deployment/pizzabackend instead of just pizzabackend is that we’re scaling the entire Kubernetes deployment of the pizza backend service, which scales the individual pod instances correctly.
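This imperative command is handy for experiments, but note that it drifts from what backend-deploy.yml declares. The declarative alternative is to change the replica count in the file and re-apply it:

# In backend-deploy.yml, change the Deployment's replica count:
#   spec:
#     replicas: 5
# Then re-apply the file:
kubectl apply -f backend-deploy.yml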
To verify five instances are up and running, run this command.
kubectl get pods
Once all the instances are spun up, you should see five pod instances (represented as individual rows) in the output. Each row will start with pizzabackend and then be followed by a random string.
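The output will look something like this; the random suffixes and AGE values on your machine will differ, but all five pizzabackend pods share the same ReplicaSet hash:
username@computer-name % kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
pizzabackend-7445bdb5c9-2lzkm    1/1     Running   0          15s
pizzabackend-7445bdb5c9-8xjqd    1/1     Running   0          15s
pizzabackend-7445bdb5c9-pnpk6    1/1     Running   0          31m
pizzabackend-7445bdb5c9-tq2vn    1/1     Running   0          15s
pizzabackend-7445bdb5c9-wl4cd    1/1     Running   0          15s
pizzafrontend-5b6cc765c4-hjpx4   1/1     Running   0          63m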
To scale back down to one instance, run the following command.
kubectl scale --replicas=1 deployment/pizzabackend
Prove microservice resilience in Kubernetes
One of the benefits of Kubernetes is its support for declarative configuration management: Kubernetes continually works to keep the cluster’s actual state matching the state declared in your configuration files.
This means that if there’s a failure, Kubernetes will automatically restart the services that were running before the failure.
Let’s see this resilience in action by deleting the pizza frontend pod and then verifying that Kubernetes has restarted it.
- First run kubectl get pods and note the name, including the random string, of the pizza frontend pod. Here’s an example output:
username@computer-name % kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
pizzabackend-7445bdb5c9-pnpk6    1/1     Running   0          31m
pizzafrontend-5b6cc765c4-hjpx4   1/1     Running   0          63m
- Now delete the pizza frontend pod by using the kubectl delete command. You’ll need to specify the full name of the pod, including the random string.
kubectl delete pod pizzafrontend-5b6cc765c4-hjpx4
You’ll receive a message immediately stating the pod has been deleted.
- Because Kubernetes maintains the system state as declared in the configuration files, it will immediately start up another pod instance. You can verify that by running kubectl get pods.
username@computer-name % kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
pizzabackend-7445bdb5c9-pnpk6    1/1     Running   0          31m
pizzafrontend-5b6cc765c4-vwmv8   1/1     Running   0          7s
Notice that the random string following the pizzafrontend name has changed, indicating the pod is a new instance. The AGE value is also considerably smaller.
In this exercise, you learned how Kubernetes automatically maintains the declared system state, even when there’s a failure.
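If you want to clean up the cluster when you’re done experimenting, you can remove everything the two deployment files created using the same declarative approach:
kubectl delete -f backend-deploy.yml
kubectl delete -f frontend-deploy.yml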
Summary
Hosting microservices in their own containers is a common pattern for microservice-based development. It’s not uncommon for many different microservices to compose a single application. Trying to coordinate and maintain all of those microservices and their containers manually can quickly become overwhelming.
A container orchestrator is a system that automatically deploys and manages containerized apps. Kubernetes is a portable, extensible open-source platform for managing and orchestrating containerized workloads. Kubernetes abstracts away complex container management tasks, and provides you with declarative configuration to orchestrate containers in different computing environments.
In this post, you learned how to take a .NET application that was already partitioned into containerized microservices and deploy it into a Kubernetes environment. You first pushed the Docker images to Docker Hub to make them available for the Kubernetes instance to download, and then created deployment files to declaratively describe what Kubernetes should do with each microservice. You also learned how straightforward it is to scale a containerized microservice using Kubernetes.