
Running EdgeX Foundry on Kubernetes

Guest blog post by Rohit P Sardesai, System Architect, Huawei Technologies India Pvt Ltd.

Why Kubernetes for Edge?

An edge computing platform comprises many management and application services deployed on tens of thousands of edge gateways managing millions of devices. Ensuring the reliability and scalability of these services at such a large scale calls for a platform like Kubernetes, which orchestrates containers in a scalable and highly available way.

Kubernetes Pods and Deployments

Pods are the smallest deployable units of computing that can be created and managed in Kubernetes. A pod is a group of one or more containers that are logically co-located and share the same network namespace and volumes.

A Deployment is a controller which manages pods and ensures the desired number of pods are always running.

An EdgeX core-metadata deployment yaml could be defined as below:

Figure 1. Metadata-deployment.yaml


The spec defines the containers to run, the Docker image to use, and the ports to expose.
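As a rough, minimal sketch (omitting the volume definitions discussed in the next section), such a Deployment might look like the following; the apiVersion, image name, tag, and port 48081 are assumptions based on the default EdgeX Docker images and may need adjusting for your cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-core-metadata
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edgex-core-metadata
  template:
    metadata:
      labels:
        app: edgex-core-metadata
    spec:
      containers:
      - name: edgex-core-metadata
        image: edgexfoundry/docker-core-metadata:0.2.1   # assumed image and tag
        ports:
        - containerPort: 48081                           # assumed core-metadata port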

Kubernetes volumes

A Kubernetes volume is essentially a directory for storing data. To use a volume, a pod specifies which volumes to provide and where to mount them into its containers. In Figure 1 above, volumes and volumeMounts are specified for the MongoDB data, logs, and the consul-config and consul-data directories.
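For illustration, a hostPath volume for the metadata logs could be declared and mounted roughly as below; the volume name and both paths are purely illustrative:

spec:
  containers:
  - name: edgex-core-metadata
    image: edgexfoundry/docker-core-metadata:0.2.1   # assumed image and tag
    volumeMounts:
    - name: log-data
      mountPath: /edgex/logs          # path inside the container (illustrative)
  volumes:
  - name: log-data
    hostPath:
      path: /data/edgex/logs          # directory on the host node (illustrative)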

Kubernetes Services

A Kubernetes Service acts as an intermediary for pods to talk to each other, providing features such as load balancing and service discovery. It routes requests to backing pods based on matching labels.

Figure 2: EdgeX Services Communication


An EdgeX core-metadata Service yaml could be described as below:

Figure 3: Metadata-service.yaml

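A minimal sketch of such a Service, reusing the labels and port assumed in the Deployment sketch above:

apiVersion: v1
kind: Service
metadata:
  name: edgex-core-metadata        # other services reach core-metadata by this DNS name
spec:
  selector:
    app: edgex-core-metadata       # must match the labels on the Deployment's pods
  ports:
  - port: 48081                    # port exposed by the Service
    targetPort: 48081              # container port the traffic is forwarded to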

The config-seed service in EdgeX currently provides service discovery for all other services. Kubernetes Services serve the same purpose, so we don’t need to run the config-seed service in Kubernetes.

All other EdgeX micro-services have a dependency on the config-seed service, which can be removed by setting spring.consul.enabled to false in bootstrap.properties and rebuilding the Docker image.

Figure 4: Disable consul dependency
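In bootstrap.properties this amounts to a single line, roughly as below (the exact property key may differ depending on the Spring Cloud Consul version used by the service):

# bootstrap.properties
spring.consul.enabled=false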

Setting up a single node Kubernetes cluster

The kubeadm tool can be used to set up a single-node Kubernetes cluster: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Kubernetes supports different pod network add-ons implementing the Container Network Interface (CNI). I used Weave as the pod network add-on.

If using Weave, make sure you start kubeadm with the correct --pod-network-cidr range:

$kubeadm init --pod-network-cidr=10.244.0.0/16

Next, install the Weave pod network:

$kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
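On a single-node cluster you will typically also want to allow workloads to be scheduled on the master node and check that it reports Ready, for example:

$kubectl taint nodes --all node-role.kubernetes.io/master-
$kubectl get nodes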

Creating Services and Deployments

Create Service objects for all the EdgeX micro-services using the kubectl command line tool.

$kubectl create -f <service.yaml>

Next, create the Deployment objects for all the services.

$kubectl create -f <deployment.yaml>

The order in which the Deployment objects are created is similar to the Docker Compose approach; the only difference is that we don't bring up the volume and config-seed services.
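As a rough sketch, and assuming Deployment files named after each service, the startup order could look something like the following (mirroring the Docker Compose ordering without volume and config-seed):

$kubectl create -f mongo-deployment.yaml
$kubectl create -f logging-deployment.yaml
$kubectl create -f notifications-deployment.yaml
$kubectl create -f metadata-deployment.yaml
$kubectl create -f data-deployment.yaml
$kubectl create -f command-deployment.yaml
$kubectl create -f scheduler-deployment.yaml
$kubectl create -f export-client-deployment.yaml
$kubectl create -f export-distro-deployment.yaml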

Deployment and Service yamls and scripts to create the EdgeX Services and Deployments can be found here: https://github.com/rohitsardesai83/edgex-on-kubernetes

Figure 5: Project structure


The scripts in the hack folder can be used to bring the EdgeX services up and down.

If you have questions or comments, visit the EdgeX Rocket.Chat and share your thoughts in the #community channel.