By Cristina Pauna, Bin Lu, Howard Zhang
Due to the large scale of its projects and the variety of application scenarios, Akraino Edge Stack has complex testing logistics; ordinary integration or unit tests are not enough to validate its projects. To ensure that a full deployment is both functional and reliable, it needs to be tested at every layer of the stack, from hardware to application. At the platform layer, K8s conformance testing is a good way to verify deployments based on Kubernetes.
The k8s conformance tests are the same tests used in the CNCF Certified Kubernetes vendor program. The suite builds on Kubernetes end-to-end (e2e) testing and contains a wide range of test cases; passing them demonstrates a high level of common functionality in the platform.
K8s Conformance Tests in Akraino
General testing in Akraino is done using the Blueprint Validation Framework. The framework provides tests at different layers of the stack, such as hardware, operating system, cloud infrastructure, and security. Each layer has its own container image built by the validation project, and the full list of images can be found in the project’s DockerHub repo. The validation project takes a multi-arch approach: all images currently support both x86_64 and aarch64 architectures and can easily be extended to other platform architectures in the future. This is implemented using Docker manifest lists, so the same image name is used for pulling regardless of the architecture of the System Under Test (SUT). The steps described in this article therefore apply regardless of the architecture of the SUT.
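To see which architecture-specific images sit behind a single tag, the manifest list can be inspected directly; a quick check against the k8s validation image (on older Docker versions the manifest subcommand may need the experimental CLI enabled):

```
# list the architecture-specific images behind the single multi-arch tag
docker manifest inspect akraino/validation:k8s-latest
```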
The Kubernetes conformance tests are included in the k8s layer. The test suite can be run in a couple of ways: either by calling the e2e tool directly or through sonobuoy, a diagnostic tool that gathers the logs and stats into a single archive. Regardless of how the tests are run, it is possible to customize which test cases are included, which versions are used, and even the container images within the tooling.
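For reference, outside of the Akraino tooling the same suite can be launched with the sonobuoy CLI alone; a minimal sketch, assuming sonobuoy is installed and kubectl already has access to the cluster:

```
# launch the full conformance suite against the cluster in the current kubeconfig context
sonobuoy run --mode=certified-conformance
# poll until the run is complete
sonobuoy status
# download the archive with all logs and results
sonobuoy retrieve
```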
In Akraino, the conformance suite is run from a separate container, using the validation image built for the k8s layer: akraino/validation:k8s-latest. This image contains all of the tooling necessary for running tests on the k8s layer in Akraino, as well as the tests themselves. The tools that go into the image can be found in its Dockerfile. The container is designed to be started on a remote host that has access to the k8s cluster. Check the validation project User guide for more information on the setup.
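The image can be pulled ahead of time on that host; the sketch below also opens a shell to look at the bundled tooling (overriding the entrypoint and the presence of /bin/sh are assumptions about the image):

```
# pull the k8s validation image on the host that will drive the tests
docker pull akraino/validation:k8s-latest
# optional: look around at the bundled tools (sonobuoy, kubectl, robot files, ...)
docker run --rm -it --entrypoint /bin/sh akraino/validation:k8s-latest
```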
The conformance suite is customized using the sonobuoy.yaml file. Some of the images used by sonobuoy in Akraino are built by the validation project in order to customize them to fit the community’s needs (e.g. akraino/validation:sonobuoy-plugin-systemd-logs-latest is based on the official gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest but with added aarch64 support).
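As an illustration, a plugin entry in sonobuoy.yaml can be pointed at the multi-arch build; the excerpt below is only a sketch of a systemd-logs plugin definition, and the exact fields depend on the sonobuoy version used by the project:

```
# illustrative excerpt of a sonobuoy plugin definition; only the image line differs
# from the upstream default, pointing at the validation project's multi-arch build
sonobuoy-config:
  driver: DaemonSet
  plugin-name: systemd-logs
  result-type: systemd_logs
spec:
  name: systemd-logs
  image: akraino/validation:sonobuoy-plugin-systemd-logs-latest
  command: ["sh", "-c", "/get_systemd_logs.sh && sleep 3600"]
```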
Prerequisites
Before running the image, copy the folder ~/.kube from your Kubernetes master node to a local folder (e.g. /home/ubuntu/kube) that will later be mounted in the container. Optionally, the results folder can be mounted in order to have the logs stored on the local server.
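For example, assuming SSH access from the local server to the master node (the host name and paths are placeholders):

```
# copy the cluster access files from the master node to the local server
mkdir -p /home/ubuntu/kube /home/ubuntu/results
scp -r ubuntu@<k8s-master>:~/.kube/* /home/ubuntu/kube/
```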
Run the tests
The Akraino validation project has a variety of tests for the k8s layer. To run just the conformance tests, follow the steps below:
1. Clone the validation repo.
2. Create a customized blueprint that will be passed to the container. The blueprint will contain just the conformance suite, like in the example below.
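A minimal sketch of such a blueprint file; the exact schema is defined by the validation project’s bluval tooling, so treat the file name and field layout below as illustrative:

```
# bluval-k8s-conformance.yaml - a blueprint containing only the conformance suite
blueprint:
    name: k8s-conformance
    layers:
        - k8s
    k8s:
        -
            name: conformance
            what: conformance
            optional: "False"
```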
3. Fill in the volumes.yaml file with the data that applies to your setup. Volumes that don’t have a local value set will be ignored. Do not modify the target part of each volume: these paths are created automatically inside the container when it starts, and are fixed paths that the tools inside the container expect to exist as is.
In the example below, only the following volumes are set:
- Location of the kube config files needed to access the cluster
- Location of the customized blueprint file
- Location where the results will be stored
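A sketch of the corresponding volumes.yaml entries, using the example paths from this article; the entry names and target paths are illustrative, since the real file in the validation repo already defines them:

```
# location of the kube config files copied from the master node
kube_config_dir:
    local: '/home/ubuntu/kube'
    target: '/root/.kube/'
# location of the customized blueprint file
blueprint_dir:
    local: '/home/ubuntu/blueprints'
    target: '/opt/akraino/validation/bluval'
# location where the results are stored on the local server
results_dir:
    local: '/home/ubuntu/results'
    target: '/opt/akraino/results'
```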
4. Run the tests
The tests are run with the blucon.py command, which takes the name of the blueprint as input. The full suite takes around one hour to run, depending on the setup.
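A hedged example of the invocation, run from the root of the cloned validation repo; the exact arguments and any wrapper scripts are described in the user guide, and the blueprint name matches the illustrative file from step 2:

```
# run only the k8s layer of the customized blueprint
python3 bluval/blucon.py -l k8s k8s-conformance
```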
The sonobuoy containers can be seen running on the cluster, in the heptio-sonobuoy namespace.
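For example, while the suite is running:

```
# watch the conformance and plugin pods created by sonobuoy
kubectl get pods -n heptio-sonobuoy
```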
Get the results
The results are stored inside the container in the /opt/akraino/results folder. If a volume was mounted to this folder (e.g. /home/ubuntu/results), the results can also be viewed from the host where the test was run.
The results can be viewed in a browser, using the report.html file that the Robot Framework provides. In case of failures, the sonobuoy archive can be used to inspect all the logs.
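The archive is a plain tarball, so it can be unpacked and browsed directly; the internal layout depends on the sonobuoy version, but the e2e plugin output is the usual place to start (the paths and archive name below are placeholders):

```
# unpack the sonobuoy archive collected during the run
mkdir -p /tmp/sonobuoy-results
tar xzf /home/ubuntu/results/<sonobuoy-archive>.tar.gz -C /tmp/sonobuoy-results
# look for failed e2e cases in the plugin output
grep -ri "fail" /tmp/sonobuoy-results/plugins/e2e/
```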