EdgeX Foundry Kubernetes Installation

Written by Jason Bonafide, EdgeX Foundry Contributor and Principal Software Engineer at Dell Technologies

In an ever-growing world of connected devices, there is plenty of opportunity in edge computing. While devices are getting smaller and smarter, there is always a need to share data. With that, I have just the platform for you!

EdgeX Foundry is a vendor-neutral open source project hosted by LF Edge building a common open-framework for IoT edge computing. EdgeX Foundry offers a set of interoperable plug-and-play components which aim to satisfy IoT solutions of all variations.

The goal of this blog is to walk through techniques which can be used in deploying EdgeX Foundry to a Kubernetes cluster. Establishing a foundation for deploying EdgeX in Kubernetes is the main takeaway from this tutorial.

Why Deploy EdgeX to a Kubernetes cluster?

Kubernetes provides the following feature set:

  • Service Discovery and load balancing
  • Storage orchestration
  • Automated roll-outs and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management

EdgeX Foundry is built on micro-service architecture. The micro-service architecture is powerful, but it can make deploying and managing an application more complex because of all the components. Kubernetes makes deploying micro-service applications more manageable.

Glossary

Kubernetes

  • Affinity: Act of constraining a Pod to a particular Node.
  • ConfigMap: An API object used to store non-confidential data in key-value pairs.
  • Deployment: Provides declarative updates for Pods and ReplicaSets.
  • Ingress: Exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster.
  • kubelet: Primary “node agent” that runs on each Node. It works in terms of a PodSpec.
  • LivenessProbe: The means used in a Kubernetes Deployment to check that a Container is still working and to determine whether or not it needs to be restarted.
  • Node: Virtual or physical machine in which Pods are run.
  • PersistentVolume: A piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using StorageClasses.
  • PersistentVolumeClaim: A request for storage by a user.
  • Pod: The smallest deployable unit of computing that can be created and managed in Kubernetes.
  • PodSpec: A YAML or JSON object that describes a Pod.
  • ReadinessProbe: The means used in a Kubernetes Deployment to check when a Container is ready to start accepting traffic.
  • Secret: An object that contains a small amount of sensitive data such as a password, a token or a key.
  • Service: An abstract way to expose an application running on a set of Pods as a network service.
  • StartupProbe: The means used in a Kubernetes Deployment to check whether or not the Container’s application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don’t interfere with the application startup.
  • StorageClass: Provides a way for administrators to describe the “classes” of storage they offer.
  • Volume: A directory, possibly containing some data which can be accessed by a Container.
  • Volume Mount: A property of a Container by which a Volume is bound to the Container.

Helm

  • Chart: An artifact which contains a collection of Kubernetes resource files.
  • Named Template: A template which has an identifying name. Named-templates are also referred to as "partials". A named-template is similar to a programming language's function in that it can be used to re-use code (or, in this case, YAML configuration).
  • Template: A file that can hold placeholders ({{ }}) which are interpolated with specified values.
  • values.yaml: A configuration file within Helm which enables abstraction of configurable items that can be applied to templates.

Setting up manifests directory

Create the directory structure below on your machine. The folders will be used and populated throughout this tutorial.

project-root/
  edgex/
    templates/
      edgex-redis/
      edgex-core-metadata/
      edgex-core-data/
      edgex-core-command/

Before we create definition files

Kubernetes provides a recommended set of labels. These labels provide a grouping mechanism which facilitate management of Kubernetes resources which are bound to an application. Below is Kubernetes’ recommended set of labels:

app.kubernetes.io/name: <application name>
app.kubernetes.io/instance: <installation>
app.kubernetes.io/version: <application version>
app.kubernetes.io/component: <application component>
app.kubernetes.io/part-of: <organization>
app.kubernetes.io/managed-by: <orchestrator>

A concrete example of this would be:

app.kubernetes.io/name: edgex-core-metadata
app.kubernetes.io/instance: edgex
app.kubernetes.io/version: 1.2.1
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: Kubernetes

With the above label structure, and kubectl's support for filtering resources by labels, we have plenty of flexibility when searching for specific resources. An example of this would look like:

$ kubectl get pod -l app.kubernetes.io/name=edgex-core-metadata

In a cluster with many applications, the above command will result in selecting only edgex-core-metadata pods.

On to our first application – edgex-redis

EdgeX Foundry supports Redis as a persistent datastore. Because Containers are ephemeral, data created within a Container will live only as long as the Container. In order to keep data around past the life of its Container, we will need to use a PersistentVolume.

Let's say that we want to create a PersistentVolume with the following in mind:

  • Our cluster contains a StorageClass named hostpath.
  • Our use-case requires 10Gi of storage (not to be confused with GB – Kubernetes has its own Resource Model).
  • Redis data needs to be stored on a cluster node's file-system at /mnt/redis-volume. It is worth noting that the hostPath storage plugin is not recommended in production. If your cluster contains multiple nodes, you will need to ensure that the edgex-redis Pod is always scheduled on the same node, so that it references the same hostPath storage each time it is scheduled. Please refer to the Assigning Pods to Nodes documentation for more information.
  • We anticipate many nodes having Read and Write access to the PersistentVolume.

With the above requirements, our PersistentVolume definition in templates/edgex-redis/pv.yaml should be updated to look like this.
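
A minimal sketch satisfying those requirements might look like the following (the resource name edgex-redis and the label shown are assumptions consistent with the naming used elsewhere in this tutorial):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: edgex-redis
  labels:
    app.kubernetes.io/name: edgex-redis
spec:
  storageClassName: hostpath     # StorageClass available in our cluster
  capacity:
    storage: 10Gi                # Kubernetes resource model: Gi, not GB
  accessModes:
    - ReadWriteMany              # many nodes may read and write
  hostPath:
    path: /mnt/redis-volume      # node-local path; not recommended in production
```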

Although the PersistentVolume has not been created in the cluster yet, we've defined a resource that will make the storage described in its definition available for binding. Next, we will need a resource that assigns a PersistentVolume to an application. This is accomplished using a PersistentVolumeClaim.

The pattern used in this tutorial establishes a one-to-one relationship between application and a PersistentVolume. That means, for each PersistentVolume, there will exist a PersistentVolumeClaim for an application. With that in mind, our claim for storage capacity, access modes, and StorageClass should match the PersistentVolume we defined earlier.

Let's go over the requirements for our storage use-case:

  • 10Gi storage.
  • Read and Write access by many nodes.
  • A StorageClass named hostpath exists in our cluster.

Given the above requirements, the PersistentVolumeClaim definition file at templates/edgex-redis/pvc.yaml should be updated to look like this.
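
A sketch of templates/edgex-redis/pvc.yaml matching those requirements could look like this (the claim name edgex-redis is an assumption, chosen to match how the Deployment will reference it later):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edgex-redis
spec:
  storageClassName: hostpath   # must match the PersistentVolume's StorageClass
  accessModes:
    - ReadWriteMany            # must match the PersistentVolume's access modes
  resources:
    requests:
      storage: 10Gi            # must not exceed the PersistentVolume's capacity
```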

Now that the PersistentVolume and PersistentVolumeClaim have been defined, let's move on to the Deployment.

Similar to what was done with the PersistentVolume and PersistentVolumeClaim, let's list the things that describe the Redis application:

  • For simplicity's sake, let's say we only create 1 instance (replica).
  • When a new version is rolled out, we want to kill all of the old pods before bringing up new pods. This is referred to as a Recreate deployment strategy.
  • The Deployment consists of a single process (Container).
  • The Container image will come from Docker Hub's redis image with the latest tag.
  • The Container will listen to connections on port 6379.
  • The application will write data to /var/lib/redis. The Container's data directory will be mounted to the PersistentVolume via the PersistentVolumeClaim named edgex-redis which we created earlier. The Volume mapped to the PersistentVolumeClaim will be named redis-data and can be mounted as a Volume for the Container.
  • The application is in a Started state when a connection can be successfully established over TCP on port 6379. This check will be tried every 10 seconds. On the first success, the Container will be in a Started state; however, on the 5th failure to obtain a connection, the Container will be killed. When this StartupProbe is enabled, the LivenessProbe and ReadinessProbe are disabled until the StartupProbe succeeds.
  • Every 30 seconds, only after a 15 second delay, the Deployment will ensure that the Container is alive by establishing a connection over TCP on port 6379. On the first success, the Container will be in a Running state, however, on 3 failures, the Container will be restarted.
  • Every 30 seconds, only after a 15 second delay, the Deployment will try to determine if the Container is ready to accept traffic. Over TCP, the Deployment will attempt to establish a connection on port 6379. On 3 failures, the pod is removed from Service load balancers.

Given the above requirements, our Deployment definition file at templates/edgex-redis/deployment.yaml should be updated to look like this.
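
The following sketch translates each of those requirements into a Deployment (label values are assumptions matching the naming used in this tutorial):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-redis
  labels:
    app.kubernetes.io/name: edgex-redis
spec:
  replicas: 1
  strategy:
    type: Recreate                 # kill old pods before bringing up new ones
  selector:
    matchLabels:
      app.kubernetes.io/name: edgex-redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgex-redis
    spec:
      containers:
        - name: edgex-redis
          image: redis:latest
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /var/lib/redis
          startupProbe:
            tcpSocket:
              port: 6379
            periodSeconds: 10      # tried every 10 seconds
            failureThreshold: 5    # killed on the 5th failure
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3    # restarted after 3 failures
          readinessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3    # removed from Service endpoints after 3 failures
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: edgex-redis   # the claim defined earlier
```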

Lastly, for edgex-redis, we will need to establish network access to our Pod within the Kubernetes cluster. This is accomplished by defining a Service.

As for our Service requirements:

  • selector should only match the labels defined in the edgex-redis Deployment's PodSpec.
  • Map external port 6379 to Container port 6379 with a name of port-6379. Each port requires a name.
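
A raw (pre-Helm) sketch of templates/edgex-redis/service.yaml meeting those two requirements; the selector shown is a simplified subset of the recommended labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edgex-redis
spec:
  selector:
    app.kubernetes.io/name: edgex-redis   # must match the Deployment's PodSpec labels
  ports:
    - port: 6379
      targetPort: 6379
      name: port-6379                     # each port requires a name
```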

On to the next application – edgex-core-metadata

EdgeX applications support the concept of externalized configuration, a neat feature enabling the platform's portability. For our Deployment, we will define our configuration as a Secret. Normally configuration would be defined in a ConfigMap; however, since Redis credentials reside in our configuration file, we will stash the entire configuration file in a Secret.

We are going to leverage the default application configuration and make just a few modifications:

  • The Core Metadata Service Host property has been updated to 0.0.0.0, which ensures that the Container listens on all network interfaces.
  • The Core Data Service Host property has been updated to edgex-core-data. Later on, when edgex-core-data is installed, a Service will expose it under the hostname edgex-core-data.
  • The Database Host property has been updated to the edgex-redis Service name.

Feel free to refer to the reference project configuration files here.

Let's create the secret by performing the following steps:

  • Download https://github.com/jbonafide623/edgex-lf-k8s/blob/master/secrets/edgex-core-metadata-secret or create your own.
  • Execute $ kubectl create secret generic edgex-core-metadata --from-file=configuration.toml=edgex-core-metadata-secret (where edgex-core-metadata-secret points to the file downloaded in the previous step).

Now for the edgex-core-metadata Deployment. Let's list out some of the requirements:

  • We will create only 1 instance (replica).
  • When a new version is rolled out, we want to kill all of the old pods before bringing up new pods. This is referred to as a Recreate deployment strategy.
  • The Deployment consists of a single process (Container).
  • The Container image will come from Docker Hub's edgexfoundry/docker-core-metadata-go image with the 1.2.1 tag.
  • Override the Dockerfile's Entrypoint so that --confdir points to /config.
  • Disable security via the EDGEX_SECURITY_SECRET_STORE environment variable. This can be done by setting the flag to "false".
  • The Container will listen to HTTP requests on port 48081.
  • The application is in a Started state when an HTTP GET request to /api/v1/ping results in a 200 response. This check will be tried every 10 seconds. On the first success, the Container will be in a Started state; however, on the 5th failure, the Container will be killed. When this StartupProbe is enabled, the LivenessProbe and ReadinessProbe are disabled until the StartupProbe succeeds.
  • Every 30 seconds, only after a 15 second delay, the Deployment will ensure that the Container is alive by sending an HTTP GET request to /api/v1/ping. On the first success, the Container will be in a Running state, however, on 3 failures, the Container will be restarted.
  • Every 30 seconds, only after a 15 second delay, the Deployment will try to determine if the Container is ready to accept traffic by sending an HTTP GET request to /api/v1/ping. On 3 failures, the pod is removed from Service load balancers.
  • Establish a limit on cpu usage of 1 and request a starting allocation of 0.5 cpus.
  • Establish a limit on memory usage of 512Mi and request a starting allocation of 256Mi memory.
  • Mount the configuration Secret with name edgex-core-metadata to the Container's path /config. Recall /config is the path that we supply as a --confdir override to the application's image.

Given the above requirements, we can define our Deployment definition file at templates/edgex-core-metadata/deployment.yaml and it should look like this.
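
A sketch covering those requirements follows; the entrypoint binary path /core-metadata is an assumption about the image's layout, and the label values match the naming used in this tutorial:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-core-metadata
  labels:
    app.kubernetes.io/name: edgex-core-metadata
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: edgex-core-metadata
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgex-core-metadata
    spec:
      containers:
        - name: edgex-core-metadata
          image: edgexfoundry/docker-core-metadata-go:1.2.1
          command: ["/core-metadata", "--confdir=/config"]  # entrypoint override; binary path assumed
          env:
            - name: EDGEX_SECURITY_SECRET_STORE
              value: "false"
          ports:
            - containerPort: 48081
          startupProbe:
            httpGet:
              path: /api/v1/ping
              port: 48081
            periodSeconds: 10
            failureThreshold: 5
          livenessProbe:
            httpGet:
              path: /api/v1/ping
              port: 48081
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /api/v1/ping
              port: 48081
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          resources:
            limits:
              cpu: 1
              memory: 512Mi
            requests:
              cpu: 0.5
              memory: 256Mi
          volumeMounts:
            - name: config
              mountPath: /config          # matches the --confdir override
      volumes:
        - name: config
          secret:
            secretName: edgex-core-metadata   # the Secret created earlier
```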

Lastly, let's expose the application so that it can be accessed from within and outside of the cluster. For our Service, let's define some requirements:

  • selector should only match the labels defined in the edgex-core-metadata Deployment's PodSpec.
  • Map external port 48081 to Container port 48081 with a name of port-48081. Each port requires a name.
  • Expose the application outside of the cluster via the NodePort type on port 30801.

Given the above requirements, we can define our Service definition file at templates/edgex-core-metadata/service.yaml and it should look like this.
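
A sketch of such a Service (again with a simplified label selector assumed to match the Deployment's PodSpec):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edgex-core-metadata
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: edgex-core-metadata
  ports:
    - port: 48081
      targetPort: 48081
      nodePort: 30801      # reachable outside the cluster at [node IP]:30801
      name: port-48081
```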

Incorporating Helm

The decision to include Helm in the container-orchestration stack is based on the following Helm features:

  • Facilitates a single responsibility, which is to manage Kubernetes resources.
  • Enables easy application installation and rollback.
  • Supports ordered creation of resources via hooks.
  • Provides access to installations of popular applications via Helm Hub.
  • Encourages code-reuse.

Each Helm chart contains values.yaml and Chart.yaml files. In the root of the project directory, let's go ahead and create these files by executing:

touch values.yaml
touch Chart.yaml

Chart.yaml contains information about the chart. With that in mind, let's add the following content to Chart.yaml:

apiVersion: v2
name: edgex
description: A Helm chart for Kubernetes

# A chart can be either an ‘application’ or a ‘library’ chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They’re included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.2.1

values.yaml allows you to create a YAML object which can be referenced in your chart’s template files.

Let's make use of what values.yaml can offer us.

For edgex-redis, let's list out some properties that might be beneficial to abstract out as configurable options:

  • Application name: if you noticed, almost every resource uses the name edgex-redis. These resources can be accessed by other applications. For instance, edgex-core-metadata references the Service defined for edgex-redis. If we abstract the name out to values.yaml, the application name will only need to change in a single place.
  • Deployment strategy: If there is a particular environment where you are concerned with high availability, you may want to leverage the RollingUpdate strategy. In smaller environments, you may not care about a RollingUpdate strategy and would choose Recreate in an effort to conserve computing resources.
  • Image name and tag: There exist scenarios where artifacts/dependencies are vetted and kept “in-house”. Scenarios like these include artifact/image repositories of their own. With that in mind, having the ability to switch the image registry during installation can be a huge benefit.
  • Port: The port on which the Container listens is something that can easily be referenced in many places within a single chart. As you define various network components such as Ingresses, Services, and even the Container port, it would be nice to refer back to a single place.
  • Replicas: Depending on the environment’s resources, the number of replicas may change.
  • StorageClassName: There may exist a scenario where the StorageClass may completely vary from cluster to cluster or even within a single cluster.

Considering the above, we can define our edgex-redis configuration object like this:

edgex:
  redis:
    name: edgex-redis
    deployment:
      strategy: Recreate
    image:
      name: redis
      tag: latest
    port: 6379
    replicas: 1
    storageClassName: hostpath

We can apply the same pattern with a few adjustments for edgex-core-metadata:

edgex:
  metadata:
    name: edgex-core-metadata
    deployment:
      strategy: Recreate
    image:
      name: edgexfoundry/docker-core-metadata-go
      tag: 1.2.1
    port: 48081
    replicas: 1
    resources:
      limits:
        cpu: 1
        memory: 512Mi
      requests:
        cpu: 0.5
        memory: 256Mi

In the configuration object for edgex-core-metadata we added a resources object which allows us to adjust limits and requests during installation.

Remember earlier when we talked about recommended labels and how these labels apply to each of our resources? With Helm, we can create a named-template and include the named-template in each place where the labels are referenced.

Here, in the file templates/_labels.tpl, we are creating a named-template named edgex.labels. This named-template takes in two arguments: ctx (the chart context) and AppName (the application's name).

{{/*
Define a standard set of resource labels.

params:
(context) ctx - Chart context (scope).
(string) AppName - Name of the application.
*/}}
{{ define "edgex.labels" }}
app.kubernetes.io/name: {{ .AppName }}
app.kubernetes.io/instance: {{ .ctx.Release.Name }}
app.kubernetes.io/version: {{ .ctx.Chart.AppVersion }}
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: {{ .ctx.Release.Service }}
helm.sh/chart: {{ .ctx.Chart.Name }}-{{ .ctx.Chart.Version | replace "+" "_" }}
{{ end }}
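
For a rough sense of what this produces: rendered for the edgex-redis application in a release named edgex, under Helm 3 (where Release.Service renders as Helm), the emitted labels would look along these lines, with the version and chart values taken from Chart.yaml:

```yaml
app.kubernetes.io/name: edgex-redis
app.kubernetes.io/instance: edgex
app.kubernetes.io/version: 1.2.1
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: Helm
helm.sh/chart: edgex-0.1.0
```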

Now that we've defined configuration in values.yaml and created the edgex.labels named-template, let's apply them to a simple definition file in templates/edgex-redis/service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.edgex.redis.name }}
  labels:
{{- include "edgex.labels" (dict "ctx" . "AppName" $.Values.edgex.redis.name) | indent 4 }}
spec:
  selector:
{{- include "edgex.labels" (dict "ctx" . "AppName" $.Values.edgex.redis.name) | indent 4 }}
  ports:
    - port: {{ .Values.edgex.redis.port }}
      name: "port-{{ .Values.edgex.redis.port }}"

When the template is rendered, placeholders will be interpolated with properties specified either in the values file or via the --set flag. When helm install is executed with no -f flag, the values.yaml in the root of the chart is used by default. If we want to override values during installation, a property can be overridden using the --set flag.
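
For instance, a hypothetical override file (the name custom-values.yaml is arbitrary) could pin a different Redis image tag for one environment:

```yaml
# custom-values.yaml -- only the overridden keys need to appear;
# everything else falls back to the chart's values.yaml
edgex:
  redis:
    image:
      tag: "6.0"   # example tag; any valid redis image tag works
```

This file would be supplied at install time with -f custom-values.yaml; equivalently, the single value could be set inline with --set edgex.redis.image.tag=6.0.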

For an application as portable as EdgeX is, tying in configuration overrides can tremendously speed up deployments.

Finalizing the chart using a reference project

Up to this point, we have:

  • Created raw Kubernetes YAML files for edgex-redis and edgex-core-metadata.
  • Explained usage of values.yaml in a Helm chart.
  • Adjusted edgex-redis Service definition file to reference Helm values and named-templates.

This is by no means an end-to-end deployment, but the patterns described above give you enough to apply to the remaining pieces you wish to include in your deployment. Here you can refer to the reference project, which contains a finalized Helm chart responsible for deploying:

  • edgex-redis
  • edgex-core-metadata
  • edgex-core-data
  • edgex-core-command

Following the pattern above, edgex-core-data and edgex-core-command will also require configuration files mounted as Secrets. You can create them by executing:

$ kubectl create secret generic edgex-core-data --from-file=configuration.toml=[path to edgex-core-data's configuration.toml]

$ kubectl create secret generic edgex-core-command --from-file=configuration.toml=[path to edgex-core-command's configuration.toml]

All of the Secrets can be accessed here in the reference project.

When your Helm chart is finalized, you can install it by executing:

# From the root of the project

$ helm install edgex --name-template edgex

In a sense, you get a free verification from the Container probes for each application. If your applications successfully start up and remain in a Running state, that is a good sign!

Each application can be accessed at [cluster IP]:[application's node port]. For example, let's say the cluster IP is 127.0.0.1 and we want to access the /api/v1/ping endpoint of edgex-core-metadata; we can invoke the following curl request:

$ curl -i 127.0.0.1:30801/api/v1/ping

where nodePort 30801 is defined in edgex-core-metadata Service definition.

In closing

In this tutorial, we’ve only scratched the surface with respect to the potential that EdgeX Foundry has to offer. With the components we’ve deployed, the foundation is in place to continuously deploy EdgeX in Kubernetes clusters.

As a member of this amazing community I highly recommend checking out EdgeX Foundry Official documentation. From there you will have access to much more details about the platform.

As for next steps, I highly recommend connecting a Device Service to your EdgeX Core Services environment within Kubernetes. With the plug-and-play nature of the Device Service's configuration, you can certainly have a device up and running within EdgeX in short order.

This tutorial was built on a foundational project worked on at Dell. I would like to acknowledge Trevor Conn, Jeremy Phelps, Eric Cotter, and Michael Estrin at Dell for their contributions to the original project. I would also like to acknowledge the EdgeX Foundry community as a whole. With so many talented and amazing members, the EdgeX Foundry project is a great representation of the community that keeps it growing!

Visit the EdgeX Foundry website for more information or join Slack to ask questions and engage with community members. If you are not already a member of the community, it is really easy to join. Simply visit the wiki page and/or check out the EdgeX Foundry GitHub.