Executing End-to-End Tests in Kubernetes

The shift toward microservice architectures and cloud-native platforms has changed how the xMatters development team builds and tests software. Each component of the application is packaged in its own container, an approach commonly called containerization.

To run and manage these containers at scale, organizations are turning to orchestration platforms like Kubernetes, a well-known open-source engine for automating the deployment, scaling, and management of containerized applications, whether they run in a private, public, or hybrid cloud.

With this increased infrastructure complexity and the need to deliver quality features to customers on time, automated end-to-end tests play an important role in our continuous integration and delivery process. Let’s look at how we can execute these tests in a container within a Kubernetes cluster on Google Kubernetes Engine.

Building the Testing Container Image
We start by packaging our tests, which are written with TestNG and Selenium WebDriver, into a container image. The image includes all the test files, libraries, drivers, and a properties file, as well as the shell script that starts the tests.

Below you’ll find some sample code that should give you a sense of how you can structure and configure your testing, including snippets from the following files:

Dockerfile

FROM centos:7.3.1611

RUN yum install -y \
    java-1.8.0-openjdk \
    java-1.8.0-openjdk-devel
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/

# Set locale to UTF-8
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
ENV LC_ALL en_US.UTF-8

# Install automation tests. test_dir must be defined for the COPY targets
# below; it matches the WORKDIR set at the end of the file.
ARG test_dir=/opt/automation
COPY ./build/libs ${test_dir}/bin
COPY ./build/lib ${test_dir}/lib
COPY ./testfiles ${test_dir}/testfiles
COPY ./build/resources ${test_dir}/resources
COPY ./build/drivers ${test_dir}/drivers
RUN chmod -R 755 ${test_dir}/drivers
COPY ./*.properties ${test_dir}/

WORKDIR /opt/automation
USER root

CMD [ "./run-suite.sh", "TestSuite" ]
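Before wiring the image into CI, it can help to build and smoke-test it locally. Here’s a minimal sketch, assuming the image is tagged automation-tests:local and a SmokeSuite.xml suite file exists in the image (both names are placeholders); the run-suite.sh entry point is shown next:

# Build the image from the directory containing the Dockerfile
docker build -t automation-tests:local .

# Run a single suite, passing the grid and browser as environment variables
docker run --rm \
    -e SELENIUM_GRID=zalenium.example.com:4444 \
    -e BROWSER_TYPE=chrome \
    automation-tests:local ./run-suite.sh SmokeSuite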

run-suite.sh

#!/bin/bash

# Resolve a comma-separated list of suite names (e.g. "SuiteA,SuiteB")
# into the full paths of the matching TestNG XML files.
fullList=""
function listOfSuites() {
    for i in $(echo "$1" | sed "s/,/ /g")
    do
        echo "$i"
        fullPath=$(find . -type f -name "$i.xml")
        fullList="$fullList$fullPath "
    done
}
listOfSuites "$1"
echo "$fullList"

# Run the resolved suites with TestNG ($fullList is intentionally unquoted
# so each XML file is passed as a separate argument)
java -Dlog4j.configuration=resources/main/log4j.properties -cp "./lib/*:./bin/*:." \
    org.testng.TestNG $fullList

# Upload the test results to Google Cloud Storage
java -cp "./lib/*:./bin/*:." com.gcp.UploadTestData
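The script accepts a comma-separated list of suite names and resolves each name to its TestNG XML file before launching TestNG. For example (suite names are illustrative):

./run-suite.sh "SmokeSuite,CheckoutSuite"
# resolves to ./testfiles/SmokeSuite.xml and ./testfiles/CheckoutSuite.xml,
# then passes both files to org.testng.TestNG in a single run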

build.gradle

buildscript {
    repositories { maven { url "${nexus}" } }
    dependencies {
        classpath 'com.bmuschko:gradle-docker-plugin:3.6.2'
    }
}

apply plugin: com.bmuschko.gradle.docker.DockerRemoteApiPlugin

import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage

def gcpLocation = 'gcr.io'
def gcpProject = 'automation'
def dockerRegistryAndProject = "${gcpLocation}/${gcpProject}"

// Provide a lazily resolved $project.version to get the execution-time value,
// which may include -SNAPSHOT.
def projectVersionRuntime = "${-> project.version}-${buildNumber}"
def projectVersionConfigurationTime = "${project.version}-${buildNumber}"
def projectRpm = "${projectName}-${projectVersionConfigurationTime}.${arch}.rpm"

// We want to use the branch name as part of the GCR tag. However, we don't want
// the raw branch name, so we strip out symbols and non-alphanumerics. We also
// strip out git branch text that contains remotes/origin or origin/, since we
// don't care about that.
def sanitize = { input ->
    return input.replaceAll("[^A-Za-z0-9.]", "_").toLowerCase().replaceAll("remotes_origin_", "").replaceAll("origin_", "")
}

def gitbranchNameRev = 'git name-rev --name-only HEAD'.execute().text.trim()
def gcpGitbranch = System.env.GIT_BRANCH ?: (project.hasProperty('gitbranch')) ? "${gitbranch}" : "${gitbranchNameRev}"
def gitbranchTag = sanitize(gcpGitbranch)

def projectVersionRuntimeTag = sanitize("${-> project.version}")
def dockerTag = "${dockerRegistryAndProject}/${projectName}:${projectVersionRuntimeTag}-${buildNumber}-${gitbranchTag}-${githash}"
def buildType = System.env.BUILD_NUMBER ? "JENKINS" : "LOCAL"

// Create a gcpBuildVersion.properties file containing build information. This is
// for the build environment to pass on to upstream callers that are unable to
// figure out this information on their own.
task versionProp() {
    onlyIf { true }
    doLast {
        new File("$project.buildDir/gcpBuildVersion.properties").text = """APPLICATION=${projectName}
VERSION=${-> project.version}
BUILD=${buildNumber}
BRANCH=${gcpGitbranch}
GIT_HASH=${githash}
TAG_FULL=${dockerTag}
TAG=${projectVersionRuntimeTag}-${buildNumber}-${gitbranchTag}-${githash}
TIMESTAMP=${new Date().format('yyyy-MM-dd HH:mm:ss')}
BUILD_TYPE=${buildType}
"""
    }
}

// Make sure the version file generation is always run after build.
build.finalizedBy versionProp

task dockerPrune(type: Exec) {
    description 'Run docker system prune --force'
    group 'Docker'

    commandLine 'docker', 'system', 'prune', '--force'
}

task buildDockerImage(type: DockerBuildImage) {
    description 'Build docker image locally'
    group 'Docker'
    dependsOn buildRpm
    inputDir project.buildDir

    buildArgs = [
        'rpm': "${projectRpm}",
        'version': "${projectVersionConfigurationTime}"
    ]

    doFirst {
        // Copy the Dockerfile to the build directory so we can limit the
        // context provided to the docker daemon.
        copy {
            from 'Dockerfile'
            into "${project.buildDir}"
        }

        copy {
            from 'docker'
            into "${project.buildDir}/docker"
            include '**/*jar'
        }

        println "Using the following build args: ${buildArgs}"

        // This gets the execution-time value of $project.version, which may
        // include -SNAPSHOT.
        tag = "${dockerTag}"
    }
}

task publishContainerGcp(type: Exec) {
    description 'Publish docker image to GCP container registry'
    group 'Google Cloud Platform'
    dependsOn buildDockerImage

    commandLine 'docker', 'push', "${dockerTag}"
}

Selenium Grid Infrastructure Setup
Our end-to-end tests use Selenium WebDriver to execute browser-based tests against a scalable, container-based Zalenium Selenium grid deployed in a Kubernetes cluster (you can see setup details here).
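Before pointing tests at the grid, you can confirm it’s reachable; the Selenium hub exposes a status endpoint. A quick check (the grid host name is illustrative):

# Returns JSON describing the hub's readiness
curl -s http://zalenium.automation.svc.cluster.local/wd/hub/status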

You can configure the grid URL and browser in the test_common.properties file included in the test container image:

targetUrl=http://www.mywebsite.com
# webDriver settings
webdriver_gridURL=http://${SELENIUM_GRID}/wd/hub
webdriver_browserType=${BROWSER_TYPE}
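The ${SELENIUM_GRID} and ${BROWSER_TYPE} placeholders are resolved from environment variables passed to the container (see the job manifest below). If your test harness doesn’t expand them itself, one approach is to substitute them when the container starts, a sketch that could run at the top of run-suite.sh (assumes gettext’s envsubst is installed in the image):

# Expand environment variable placeholders in the properties file in place
envsubst < test_common.properties > test_common.resolved.properties
mv test_common.resolved.properties test_common.properties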

Deploying Test Containers in Kubernetes Cluster

Now that we have the container image built and pushed to the container registry on Google Cloud Platform, let’s deploy the container in Kubernetes. Here’s a snippet of the template for the manifest file that executes the tests as a Kubernetes job:

apiVersion: batch/v1
kind: Job
metadata:
  name: run-suite-${JENKINS_JOB_INFO}-${TEST_SUITE_LOWER}
  labels:
    jobgroup: runtest
spec:
  template:
    metadata:
      name: runtest
      namespace: automation
      labels:
        jobgroup: runtest
    spec:
      containers:
        - name: testcontainer
          image: gcr.io/automation/${IMAGE_NAME}:${IMAGE_TAG}
          command: ["./run-suite.sh", "${TEST_SUITE}"]
          env:
            - name: SELENIUM_GRID
              value: "${SELENIUM_GRID}"
            - name: BROWSER_TYPE
              value: "${BROWSER_TYPE}"
            - name: TARGET_URL
              value: "${TARGET_URL}"
            - name: GCP_CREDENTIALS
              value: "${GCP_CREDENTIALS}"
            - name: GCP_BUCKET_NAME
              value: "${GCP_BUCKET_NAME}"
            - name: BUCKET_FOLDER
              value: "${BUCKET_FOLDER}"
      restartPolicy: Never
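Since this is a template, the ${...} placeholders must be rendered before the manifest can be applied. A minimal sketch using envsubst (the template file name and values are assumptions):

# Render the template into a concrete manifest
# (SELENIUM_GRID, BROWSER_TYPE, and the other variables are exported the same way)
TEST_SUITE=SmokeSuite TEST_SUITE_LOWER=smokesuite JENKINS_JOB_INFO=123 \
IMAGE_NAME=automation-tests IMAGE_TAG=latest \
envsubst < manifest-template.yaml > manifest.yaml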

And, here’s the command to create the job:

kubectl apply -f ./manifest.yaml --namespace automation
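Because the tests run as a Kubernetes Job, you can block until the suite finishes and then pull the test output with standard kubectl commands, for example (the job name matches the template above):

# Wait for the job to finish (up to 30 minutes)
kubectl wait --for=condition=complete --timeout=30m \
    job/run-suite-123-smokesuite -n automation

# Stream the test output from the job's pod
kubectl logs -f job/run-suite-123-smokesuite -n automation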

Publishing Test Results

Once test execution is complete, you can upload test results to your Google Cloud storage bucket using a code snippet similar to this:

public static void main(String... args) throws Exception {

    // readEnvVariable, authExplicit, GcpBucket, and the constants used here
    // are helpers defined elsewhere in our test framework
    String gcp_credentials = readEnvVariable(GCP_CREDENTIALS);
    String gcp_bucket = readEnvVariable(GCP_BUCKET_NAME);
    String bucket_folder_name = readEnvVariable(BUCKET_FOLDER);

    // authenticate on gcloud (also initializes the storage client used below)
    authExplicit(gcp_credentials);

    // define source folder and destination folder
    String source_folder = String.format("%s/%s", System.getProperty("user.dir"), TEST_RESULTS_FOLDER_NAME);
    String destination_folder = "";
    if (!bucket_folder_name.isEmpty()) {
        destination_folder = bucket_folder_name;
    } else {
        // fall back to a timestamped folder name
        String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(Calendar.getInstance().getTime());
        destination_folder = TEST_RESULTS_FOLDER_NAME + "-" + timeStamp;
        System.out.println("destination folder: " + destination_folder);
    }

    // get the GCP bucket for automation data
    Bucket myBucket = null;
    if (!gcp_bucket.isEmpty()) {
        myBucket = storage.get(gcp_bucket);
    }

    // upload files
    List<File> files = new ArrayList<File>();
    GcpBucket qaBucket = new GcpBucket(myBucket, TEST_RESULTS_FOLDER_NAME);
    if (qaBucket.exists()) {
        qaBucket.createBlobFromDirectory(destination_folder, source_folder, files);
        System.out.println(files.size() + " files are uploaded to " + destination_folder);
    }
}
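If you’d rather not maintain upload code, a similar result can be achieved from the shell with gsutil, assuming the Cloud SDK is installed in the image and the service account has been activated (the local results folder name is an assumption):

# Recursively copy the results folder into the bucket, in parallel
gsutil -m cp -r ./test-results \
    "gs://${GCP_BUCKET_NAME}/${BUCKET_FOLDER:-test-results-$(date +%Y%m%d_%H%M%S)}"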

Next Steps
We’re looking into publishing test results to a centralized results database fronted by an API service, which will let users easily post test result data for monitoring and analytics. I’ll cover that in a future blog post about building a centralized test results dashboard. Until then, I hope this post has helped you put all the pieces together for executing automated end-to-end tests in Kubernetes.

Have you run tests in Kubernetes to ensure the quality of your software? Let us know on our social channels. To try xMatters for yourself, race to xMatters Free, and you can use it free forever.
