By Glen Darling | March 3, 2021

How do you manage applications on thousands of Linux hosts?

Written by Glen Darling, contributor to Open Horizon and Advisory Software Engineer at IBM

If they are Debian/Ubuntu-style distros with Docker installed, or Red Hat-style distros with Docker or Podman installed, Open Horizon can make this easy for you. Open Horizon is an LF Edge open source project, originally contributed by IBM, designed to manage application containers on very large numbers of edge machines. IBM’s commercial distribution of Open Horizon, for example, supports 30,000 Linux hosts from a single Management Hub!
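Since a container runtime is the only hard prerequisite mentioned here, a quick check on each candidate host can save time before you install the Agent. The small Python sketch below only illustrates that check; the Agent installation itself is done with the install script and `hzn` CLI documented by the Open Horizon project.

```python
# Prerequisite check sketch: confirm Docker or Podman is on PATH before
# installing the Open Horizon Agent on this host. Illustrative only.
import shutil

def find_container_runtime():
    """Return the first available container runtime binary name, or None."""
    for runtime in ("docker", "podman"):
        if shutil.which(runtime):
            return runtime
    return None

if __name__ == "__main__":
    runtime = find_container_runtime()
    if runtime:
        print(f"Found container runtime '{runtime}'; this host can become a node.")
    else:
        print("No Docker or Podman found; install one before adding this host.")
```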

How can this scale be achieved? On each Linux host (called a “node” in Open Horizon) a small, autonomous Open Horizon Agent must be installed. Because these Agents act autonomously, the need for connections to the central Open Horizon Management Hub is minimized, and so is the network traffic required. In fact, the local Agent can continue to perform many of its monitoring and management functions for your applications even when completely disconnected from the Management Hub! Also, the Management Hub never initiates communications with any Agents, so no firewall ports need to be opened on your edge nodes. Each Agent is responsible for its own node, and it reaches out to contact the Management Hub to receive new information as appropriate based on its configuration.
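Because the Agent owns its node, you can inspect its state locally at any time without involving the Management Hub. The sketch below shells out to the Agent’s `hzn` command-line tool to check whether the node is registered; the specific JSON fields shown (such as `configstate`) are assumptions based on typical Agent output and may differ between Agent versions.

```python
# Sketch: query the local Open Horizon Agent for its node status via the hzn CLI.
# Assumes the Agent (and its hzn tool) is installed on this host; JSON field
# names such as "configstate" are assumptions and may vary between versions.
import json
import subprocess

def local_node_status():
    """Return the parsed output of 'hzn node list' for the local Agent."""
    result = subprocess.run(
        ["hzn", "node", "list"],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    node = local_node_status()
    state = node.get("configstate", {}).get("state", "unknown")
    print(f"Node {node.get('id', '<unregistered>')} is currently: {state}")
```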

Open Horizon’s design also simplifies operations at scale. You no longer need to maintain hard-coded lists of nodes and the software that is appropriate for each of them. When a fleet of devices is diverse, maintaining complex, overlapping software requirements can quickly become unmanageable even at relatively small scale. Instead, Open Horizon lets you specify your intent in “policies”, and then the Agents, in collaboration with the Agreement Robots (AgBots for short) in the Management Hub, work to achieve that intent across your whole fleet. You can specify policies for individual nodes, for your services (collections of one or more containers deployed as a unit), and/or for the deployment of your applications. Policies also enable your large data files, such as neural network models (e.g., large edge weight files), to have lifecycles independent of your service containers.
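To make the policy idea concrete, here is a small sketch that writes a node policy file expressing intent as properties (facts about the node) and constraints (requirements on what may run there). The property names and the constraint are made up for illustration; consult the Open Horizon documentation for the exact policy schema and the registration commands for your version.

```python
# Sketch: express intent for a node as an Open Horizon-style policy document.
# The property names and constraint below are hypothetical examples; the exact
# schema and registration workflow are defined by the Open Horizon documentation.
import json

node_policy = {
    # Facts about this node that deployment policies can match against.
    "properties": [
        {"name": "gpu", "value": True},            # hypothetical property
        {"name": "location", "value": "store-42"}, # hypothetical property
    ],
    # Requirements that any service deployed here must satisfy.
    "constraints": [
        "purpose == computer-vision",              # hypothetical constraint
    ],
}

with open("node_policy.json", "w") as f:
    json.dump(node_policy, f, indent=2)

print("Wrote node_policy.json; register the node with this policy using the hzn CLI.")
```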

I recently presented a hands-on Open Horizon deployment example at Cisco Systems’ “DevNet 2020” developer conference. In this example I showed how to take an existing container, publish it to DockerHub, define the Open Horizon artifacts that describe it as a service, and create a simple deployment pattern to deploy it. The example application detects the presence or absence of a face mask in an image submitted over an HTTP REST API. Cisco developers, and most developers working with Linux machines, can use Open Horizon as I did in this example to deploy their software across small or large fleets of machines.
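To give a feel for how such a deployed service is used, here is a client sketch that submits an image to the mask-detection service over HTTP. The host, port, path, form field, and response fields are assumptions made for illustration; the actual API contract is defined in the DevNet 2020 example materials.

```python
# Sketch: client for a mask-detection service like the example described above.
# The URL, endpoint path, form field, and response shape are assumptions made
# for illustration; check the published example for the real API contract.
import requests

SERVICE_URL = "http://localhost:8080/detect"  # hypothetical endpoint

def detect_mask(image_path):
    """POST an image to the service and return its JSON verdict."""
    with open(image_path, "rb") as f:
        response = requests.post(SERVICE_URL, files={"image": f}, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = detect_mask("person.jpg")
    print("Mask detected:", result.get("mask_detected"))
```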

You can view my Cisco DevNet 2020 presentation at the link below:

Additional Resources: