Introduction to Project EVE, an open-source edge node architecture
This article originally ran as a LinkedIn article last month.
At the inaugural August ZEDEDA Transform conference, I participated in a panel discussion titled “Edge OSS Landscape, Intro to Project EVE and Bridging to Kubernetes”. Project EVE, an LF Edge project, delivers an open-source edge node on which applications are deployed as either containers or VMs. Some audience members noted they were pleasantly surprised that we didn’t spend an hour talking explicitly about the Project EVE architecture. Instead, we considered many aspects of a container-first edge, community open-source investments, and whether technologies like Kubernetes can be useful for IoT applications.
The Edge Buzz
There’s a lot of buzz around the edge, and many see it as the next big thing since the adoption of cloud computing. Just as the working definition of the cloud has morphed over time, coming to be conceptualized as highly scalable, micro-service-based applications hosted on computing platforms around the globe, so has the edge come to represent a computing style. To embrace the edge means to place your computing power as close as possible to the data it processes, balancing the cost and latency of moving high volumes of data across the network. Many have also used the transition to edge computing to adopt the same “cloud-native” technologies and processes to manage their compute workloads in a similar fashion, regardless of where the compute is deployed, be it cloud, data center, or some remote edge environment.
Turning the Problem on its Head
That last part is what enthuses me about the shift to edge computing: we move away from highly curated enterprise IT devices (whose management tends to prevent change and modification) and move towards cloud-like, dynamic, scalable assets (whose management technologies are designed for innovation and responding to ever-changing circumstances). This is the classic example of “pets vs. cattle,” sprinkled with IoT challenges like systems distributed at extreme ends of the world, where managing them with an on-site technician incurs costly trip charges. The solution turns the problem on its head. It requires organizations, from IT to line of business, to adopt practices of agility and innovation, so that they can manage and deploy solutions to the edge as nimbly as they experiment and innovate in the cloud.
Next up, we discussed the benefits of using open-source software for projects like Project EVE. Making an edge node easy to deploy, secure to boot, and remotely manageable is not trivial, and it is not worth competing over. The community, including competitors, can create a solid, open-source edge node architecture, such as Project EVE, and all parties can benefit from the group investment. With a well-accepted, common edge architecture, innovators can focus instead on the applications, the orchestration, and the usability of the edge. Even using the same basic edge node architecture, there is more than ample surface area left to compete on service value, elegance, and solution innovation. Just as the collective investment in Kubernetes allows masses of projects and companies to improve nearly every aspect of orchestration without each having to re-invent its basics, we don’t need tens of companies re-inventing secure device deployment, secure boot, and container protection. Get the basics done (the “boring stuff,” as I called it) and focus on the specialization around it.
Can container-first concepts, and projects like Kubernetes, be effective at the edge and in solving IoT problems? Yes. No doubt, there are differences between using Kubernetes in the cloud and using it to manage edge nodes; some of those challenges include limited power infrastructure, communications and connectivity, and cost considerations. However, these technologies are very adaptable. A container will run as happily on an IoT device, an IoT gateway, or an edge server. How you connect to and orchestrate the containers on those devices will vary. Most edges won’t need a local Kubernetes cluster; a distant Kubernetes infrastructure could remotely orchestrate a local edge node or local edge cluster.
Infrastructure as Code
A common theme in edge orchestration architectures is local orchestration, which gives a small orchestrator running at the edge enough power to make decisions while offline from a central orchestrator. Open Horizon, a project IBM recently open-sourced to LF Edge, is designed to bridge traditional cloud orchestration to edge devices with a novel distributed policy engine, which continues to execute and respond to changing conditions even when disconnected. Adopting an “infrastructure as code” mentality gives administrators a high degree of configuration and control over the target environment, even across remote networks. There is high confidence in the resulting infrastructure configurations, though bandwidth constraints introduce variability as to when the changes are received. Could this be used on oil and gas infrastructure in the jungles of Ecuador? Yes. However, the challenge is in deciding which of the philosophies and projects are best suited to the needs of the situation.
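The pattern described above can be sketched in a few lines of Python. This is purely illustrative, not Open Horizon’s or Project EVE’s actual API: the class and field names are my own, and real systems add signing, scheduling, and much more. The point is the shape of the idea: desired state is declared as data, a small local reconciler keeps converging toward it on every tick whether or not the hub is reachable, and status reports queue up until connectivity returns.

```python
# Minimal sketch of a local edge reconciler (all names hypothetical).
def diff_state(desired, actual):
    """Compute which workloads to (re)start and which to stop."""
    start = {name: spec for name, spec in desired.items()
             if actual.get(name) != spec}
    stop = [name for name in actual if name not in desired]
    return start, stop

class EdgeReconciler:
    def __init__(self, desired):
        self.desired = dict(desired)   # declared configuration ("the code")
        self.actual = {}               # what is currently running locally
        self.pending_reports = []      # status queued while disconnected

    def reconcile(self, online=False):
        """One control-loop tick; runs the same way offline or online."""
        start, stop = diff_state(self.desired, self.actual)
        for name in stop:              # remove workloads no longer declared
            del self.actual[name]
        for name, spec in start.items():  # converge toward desired state
            self.actual[name] = spec
        self.pending_reports.append({"running": sorted(self.actual)})
        if online:                     # flush queued status when connected
            flushed, self.pending_reports = self.pending_reports, []
            return flushed
        return []                      # offline: keep reports queued
```

For example, an offline tick still starts a newly declared `sensor-agent` workload locally; when the node later reconciles online, it flushes every queued status report to the central orchestrator at once. That delayed-but-certain convergence is the “high confidence, variable when” property described above.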
If you find yourself architecting your own edge node or using a simple Linux installation while kicking the security can down the road, or are otherwise bogged down by remote manageability challenges when you’d rather be innovating and solving domain-specific problems, look to the open-source community, and specifically to projects like Project EVE, for a leg up on your edge architecture.
Please feel free to connect with me and start a conversation about this article. Here are some additional resources: