
Where the Edges Meet, Apps Land and Infra Forms: Akraino Release 5 Public Cloud Edge Interface

Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix

Introduction

In the PCEI R4 blog we described the initial implementation of the blueprint. This blog focuses on the new features and capabilities implemented in PCEI in Akraino Release 5. Before discussing the specifics of the implementation, it is useful to go over the motivation for PCEI. Among the main drivers behind PCEI are the following:

  • Public Cloud Driven Edge Computing. Edge computing infrastructure and resources are increasingly provided by public clouds (e.g., AWS Outposts, IBM Cloud Satellite, Google Anthos). In the PCEI R4 blog we described various relationships between PCC (Public Cloud Core) and PCE (Public Cloud Edge), ranging from PCE being Fully-Coupled to PCC at the hardware, virtualization, application and services layers to PCE being Fully-Decoupled from PCC at all of these layers. This “degree of coupling” between PCE and PCC dictates the choice of orchestration entry points as well as the behavior of the edge infrastructure and the applications running on it.
  • Hybrid Infrastructure. Most practical deployments of edge infrastructure and applications are hybrid in nature: an application deployed at the edge needs services residing in the core cloud in order to function (the coupled model). In addition, a PCE application deployed at the edge may need to communicate with, and consume resources from, multiple PCC environments.
  • Multi-Domain Interworking. Individual infrastructure domains (e.g., edge, cloud, network fabric) present their own APIs and/or other provisioning methods (e.g., CLI), making end-to-end deployment challenging in both complexity and time. A multi-domain orchestration solution is needed to handle edge, cloud, and interconnection in a uniform and consistent manner.
  • Interconnection and Federation. There is a need for efficient and performant interconnection and resource distribution between edge and cloud, as well as between distributed edges proximal to end users. A common assumption in many infrastructure orchestration solutions is that the fundamental L1/L2/L3 interconnection between edge clouds and core clouds, and between the edges themselves, is already available for overlay technologies such as SD-WAN or Service Mesh. We specifically see the need for the orchestration solution to be able to enable L2/L3 connectivity between the domains being orchestrated.
  • Bare Metal Orchestration. As with interconnection, many orchestration solutions assume that bare metal compute/storage hardware and basic operating system resources are already available for the deployment of the virtualization and application/services layers. In many scenarios this is not the case.
  • Developer-Centric Capabilities. Capabilities such as Infrastructure-as-Code are becoming critical for the activation and configuration of public cloud infrastructure components and interconnection, as well as for end-to-end application deployment integrated with CI/CD environments.

PCEI in Akraino R5

Public Cloud Edge Interface (PCEI) is a set of open APIs, orchestration functionalities and edge capabilities for enabling Multi-Domain Interworking across the Operator Network Edge, the Public Cloud Core and Edge, and the 3rd-Party Edge, as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks.

Terraform-based Orchestration

One of the biggest challenges with multi-domain infrastructure orchestration is finding a common and uniform method of describing the required resources and parameters in different domains, especially in public clouds (PCC). Every public cloud provides a range of service categories, each containing a variety of services; each service has several components, and each component has multiple features with their own parameters.

Terraform has emerged as a common Infrastructure-as-Code tool that abstracts the diverse provisioning methods (API, CLI, etc.) used in the individual domains and provisions infrastructure components using a high-level language, provided a Terraform Provider is available for the domain in question.

The notable innovation in PCEI R5 is the integration of Terraform as a microservice within the PCEI orchestrator (CDS, see below). This enables several important orchestration properties:

  • Uniformity – use of the same infrastructure orchestration methods across public clouds, edge clouds and interconnection domains.
  • Transparency (model-free) – the orchestrator does not need to understand the details of the individual infrastructure domains (i.e., implement their models). It only needs to know where to retrieve the Terraform plans (programs) for the domain in question and execute the plans using the specified provider.
  • DevOps driven – the Terraform plans can be developed and evolved using DevOps tools and processes.

Examples of Terraform plans are shown below.
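
For instance, a minimal Terraform plan for the Equinix Metal (bare metal edge) domain, of the kind CDS retrieves from GitLab and executes, might look like the following sketch. It follows the Equinix Metal Terraform provider; the plan, metro and OS values are illustrative, and argument names may differ slightly between provider versions.

terraform {
  required_providers {
    metal = {
      source = "equinix/metal"
    }
  }
}

# The Metal API token is expected in the METAL_AUTH_TOKEN environment variable
provider "metal" {}

variable "project_id" {
  description = "Equinix Metal project that will own the edge server"
  type        = string
}

# Bare metal server in Dallas, TX that will host the PCE Kubernetes cluster
resource "metal_device" "pcei_edge" {
  hostname         = "pcei-edge-01"
  plan             = "c3.small.x86"
  metro            = "da"            # Dallas
  operating_system = "ubuntu_20_04"
  billing_cycle    = "hourly"
  project_id       = var.project_id
}

output "edge_public_ip" {
  value = metal_device.pcei_edge.access_public_ipv4
}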

Open-Source Technologies in PCEI 

The PCEI blueprint makes use of the following open-source technologies and tools:

  • EMCO – Edge Multi-Cluster Orchestrator. EMCO is used as a multi-tenant service and application deployment orchestrator.
  • CDS – Controller Design Studio. CDS is used as the API Handler, Terraform Executor, Helm Chart Processor, Ansible Executor, and GitLab Interface Handler.
  • Terraform. In PCEI R5, Terraform has been integrated with CDS to enable programmatic execution of Terraform plans by the PCEI Enabler to orchestrate PCC, PCE and interconnection infrastructure.
  • Kubernetes. Kubernetes is the underlying software stack for EMCO/CDS. Kubernetes is also used as the virtualization layer for PCE, on which edge applications are deployed using Helm.
  • Helm. Helm is used by EMCO for deployment of applications across multiple Kubernetes edge clusters.
  • GitLab. GitLab is used to store Terraform plans and state files, Helm charts, Ansible playbooks, and cluster configurations for retrieval and processing by CDS using API calls.
  • Ansible. Ansible can be used by CDS to deploy Kubernetes clusters on top of bare metal and Linux.
  • OpenStack. PCEI R5 can use Terraform to deploy IaaS infrastructure and applications on OpenStack edge clouds, as sketched below.
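
As an illustration of the OpenStack case above, a minimal Terraform plan that CDS could execute against a 3rd-party edge (3PE) OpenStack cloud might look like the sketch below. It uses the community OpenStack Terraform provider; the image, flavor and network names are hypothetical and shown only to convey the shape of such a plan.

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

# Credentials come from the standard OS_* environment variables
# (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME, ...)
provider "openstack" {}

# A single IaaS VM on the OpenStack edge cloud hosting a PCEI application
resource "openstack_compute_instance_v2" "pcei_3pe_app" {
  name        = "pcei-3pe-app"
  image_name  = "ubuntu-20.04"   # hypothetical image name
  flavor_name = "m1.medium"      # hypothetical flavor

  network {
    name = "edge-net"            # hypothetical tenant network
  }
}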

Functional Roles and Components in the PCEI R5 Architecture

Key features and implementations in Akraino Release 5

  • Software Architecture Components
    • Edge Multi-Cloud Orchestrator (EMCO)
    • Controller Design Studio (CDS) and Controller Blueprint Adapters (CBA)
    • Helm
    • Kubernetes
    • Terraform
  • Features and Capabilities
    • NBI APIs
      • GitLab Integration
      • Dynamic Edge Cluster Registration
      • Dynamic App Helm Chart Onboarding
      • Automatic creation of Service Instances in EMCO and deployment of Apps
      • Automatic Terraform Plan Execution
    • Integrated Terraform Plan Executor
      • Azure (PCC), AWS (PCC)
      • Equinix Fabric (Interconnect)
      • Equinix Metal (Bare Metal Cloud for PCE)
      • OpenStack (3PE)
    • Equinix Fabric Interconnect
    • Equinix Bare Metal Orchestration
    • Multi-Public Cloud Core (PCC) Orchestration (Azure, AWS)
    • Kubernetes Edge
    • OpenStack Edge
    • Cloud Native 5G UPF Deployment
    • Deployment of Azure IoT Edge PCE App
    • Deployment of PCEI Location API App 
    • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge 
    • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge

DevOps driven Multi-domain INfrastructure Orchestration (DOMINO) 

In PCEI R5 we demonstrated the use of the PCEI Enabler, based on EMCO/CDS with an integrated programmatic Terraform executor, to orchestrate infrastructure across multiple domains and to deploy an edge application. The DevOps driven Multi-domain Infrastructure Orchestration demo consisted of the following:

  • Deploy EMCO 2.0, CDS and CBAs.
  • Design Infrastructure using a SaaS Infrastructure Design Studio.
      1. Edge Cloud (Equinix Metal in Dallas, TX)
      2. Public Cloud (Azure West US)
      3. Interconnect (Equinix Fabric)
  • Push to GitLab.
      1. Cluster Info
      2. Application Helm Charts (Azure IoT Edge, kube-router)
      3. Terraform Plans
        1. Azure Cloud
        2. Equinix Interconnect
        3. Equinix Metal
  • Provision Infrastructure using CDS/Terraform.
      1. Bare Metal server in Equinix Metal Cloud in Dallas, TX
      2. Deploy K8S on Bare Metal
      3. Azure Cloud in West US (ExpressRoute, Private BGP Peering, ExpressRoute GW, VNET, VM, IoT Hub)
      4. Interconnect Edge Cloud with Public Cloud using Equinix Fabric L2 (see the sketch after this list)
  • Deploy Edge Application (PCE).
      1. Dynamic K8S Cluster Registration to EMCO
      2. Dynamic onboarding of App Helm Charts to EMCO
      3. Composite cloud native app deployment and end-to-end operation
        1. Azure IoT Edge
        2. Custom Resource Definition for Azure IoT Edge
        3. Kube-router for BGP peering with Azure over ExpressRoute
  • Verify end-to-end IoT traffic flow.
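
To give a feel for the interconnect step (item 4 under “Provision Infrastructure”), the sketch below shows the general shape of the Terraform plans used to order the Azure ExpressRoute circuit and the Equinix Fabric L2 connection toward it. The azurerm resources are standard; the Equinix Fabric resource and its arguments are given as an assumption and may differ by provider version, and all names, metro codes and speeds are illustrative.

provider "azurerm" {
  features {}
}

# Azure side: an ExpressRoute circuit delivered over Equinix in Dallas
resource "azurerm_resource_group" "pcei" {
  name     = "pcei-r5"
  location = "West US"
}

resource "azurerm_express_route_circuit" "pcei" {
  name                  = "pcei-expressroute"
  resource_group_name   = azurerm_resource_group.pcei.name
  location              = azurerm_resource_group.pcei.location
  service_provider_name = "Equinix"
  peering_location      = "Dallas"
  bandwidth_in_mbps     = 50

  sku {
    tier   = "Standard"
    family = "MeteredData"
  }
}

variable "azure_profile_uuid" {
  description = "UUID of the Azure ExpressRoute service profile on Equinix Fabric"
  type        = string
}

variable "edge_port_uuid" {
  description = "UUID of the Equinix Fabric port facing the edge site"
  type        = string
}

# Equinix Fabric side: an L2 connection from the edge port to the circuit,
# authorized with the ExpressRoute service key generated above.
# (The Equinix provider configuration, i.e., API client credentials, is omitted.)
resource "equinix_ecx_l2_connection" "pcei" {
  name              = "pcei-edge-to-azure"
  profile_uuid      = var.azure_profile_uuid
  port_uuid         = var.edge_port_uuid
  vlan_stag         = 1010
  speed             = 50
  speed_unit        = "MB"
  seller_metro_code = "DA"
  authorization_key = azurerm_express_route_circuit.pcei.service_key
  notifications     = ["ops@example.com"]
}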

The video recording of the PCEI R5 presentation and demonstration can be found at this link.

For more information on PCEI R5: 

Acknowledgements

Project Technical Lead:
Oleg Berzin, Equinix

Committers: 
Kavitha Papanna, Aarna Networks
Vivek Muthukrishnan, Aarna Networks
Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Contributors: 
Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks