Low-cost Hardware System Test using Jenkins in a CI pipeline.

By Nulltek

Continuous integration is a practice common in software development where a server builds and tests a project whenever changes are made. Historically these builds ran overnight (‘nightly builds’), but with modern infrastructure that is no longer necessary: CI servers now build when triggered by VCS push hooks or pull requests. This gives developers near-instant feedback on how their code interacts with the project as a whole.

Most large open-source projects make some use of a CI server, usually an independently hosted cloud service. The difficulty arises with hardware-based open-source projects, where testing the hardware and its interfaces is crucial to the goal of the project. Obviously you can’t deploy your own hardware directly to a cloud service, and stubbing / simulating the hardware limits your coverage of the most vital sections of the code base. So why not self-host a Jenkins hardware test-rack, using a $30 Raspberry Pi, that integrates with your regular cloud-based CI pipeline?

So I did just that and set up a Jenkins test-rack, loaded for my UOS projects, to test both the high level interfaces and low level embedded code. These are my notes on prepping the Raspberry Pi.

It’s worth going through the Raspberry Pi’s configuration, enabling what you need and disabling everything else. Since all my use cases for the RPi are either via SSH CLI or through a web-server interface, it makes most sense to run a headless, low-power configuration.

To enable SSH, use the raspi-config CLI tool. You can also use it to disable a lot of unneeded interfaces, reducing power consumption and, perhaps more importantly, heat. Additional interfaces can be disabled by modifying their boot values in /boot/config.txt, the file that serves as the closest analogue to a BIOS on a RPi.

For my use-case I can disable WiFi, Bluetooth and the ACT LED, as well as switch off and blank the HDMI output.

dtoverlay=disable-wifi
dtoverlay=disable-bt
hdmi_blanking=1
dtparam=act_led_trigger=none
dtparam=act_led_activelow=on
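For anyone scripting the setup, the overlay edits above can be applied idempotently, so re-running the provisioning script never duplicates lines. A minimal sketch, demonstrated here against a scratch copy rather than the live /boot/config.txt:

```shell
# Demo: apply the overlay edits idempotently to a scratch copy of config.txt.
CONFIG="$(mktemp)"          # stand-in for /boot/config.txt

add_line() {
    # Append only if the exact line is not already present.
    grep -qxF "$1" "$CONFIG" || echo "$1" >> "$CONFIG"
}

for line in \
    "dtoverlay=disable-wifi" \
    "dtoverlay=disable-bt" \
    "hdmi_blanking=1" \
    "dtparam=act_led_trigger=none" \
    "dtparam=act_led_activelow=on"
do
    add_line "$line"
    add_line "$line"   # second call is a no-op, so re-runs stay safe
done

grep -c . "$CONFIG"   # -> 5
```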

I also like to disable the BCM2835 audio, as I have absolutely no need for it on a headless system. This just means toggling the existing on value to off.

dtparam=audio=off

Now that the RPI is prepped, we need to install a valid version of the JRE, as Jenkins depends on it; JRE 11 is supported and should be installed from official sources. Then install and configure Jenkins. I prefer not to run the Jenkins workspace on the boot SD, instead using a USB hard drive as a scratch disk. That way I reduce writes on my boot disk at the cost of marginally decreased performance. This USB drive should be formatted as ext4 and configured by creating an /etc/fstab entry from the partition ID. The exec argument is optional, but you may run into privilege issues when running builds if it’s not present.

PARTUUID=enter-id-here    /mnt/scratch    ext4    user,exec,nofail     0       0
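The partition ID itself comes from blkid. A sketch of pulling the PARTUUID field out of its output, shown here against a sample line (the sample values are made up; on the Pi you would pipe the real output of sudo blkid /dev/sda1):

```shell
# On the Pi: `sudo blkid /dev/sda1` prints a line like this sample.
sample='/dev/sda1: UUID="c0ffee00-aaaa-bbbb-cccc-000000000001" TYPE="ext4" PARTUUID="5f8c2a01-01"'

# Extract just the PARTUUID value for the fstab entry.
partuuid=$(echo "$sample" | sed -n 's/.*PARTUUID="\([^"]*\)".*/\1/p')
echo "PARTUUID=$partuuid    /mnt/scratch    ext4    user,exec,nofail    0    0"
```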

You can then copy the contents of the default Jenkins home directory (on Debian-based systems this is /var/lib/jenkins) over to the drive, and recursively set ownership of that folder to the jenkins user and group.

sudo service jenkins stop
sudo cp -R /var/lib/jenkins /mnt/scratch/
sudo chown -R jenkins:jenkins /mnt/scratch/jenkins/

Next, update the Jenkins home directory. First change the home dir for the jenkins user, then update the JENKINS_HOME value in the configuration file to `/mnt/scratch/$NAME`.

sudo usermod -d /mnt/scratch/jenkins/ jenkins
sudo nano /etc/default/jenkins
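If you prefer not to edit the file by hand, the JENKINS_HOME line can also be changed non-interactively with sed. A sketch, demonstrated on a scratch copy of the file (on the Pi you would run the sed against /etc/default/jenkins with sudo):

```shell
# Scratch copy standing in for /etc/default/jenkins.
conf="$(mktemp)"
echo 'JENKINS_HOME=/var/lib/$NAME' > "$conf"

# Point JENKINS_HOME at the scratch disk; $NAME is left literal, as the
# Debian default file expands it to the service name itself.
sed -i 's|^JENKINS_HOME=.*|JENKINS_HOME=/mnt/scratch/$NAME|' "$conf"

cat "$conf"   # -> JENKINS_HOME=/mnt/scratch/$NAME
```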

Once that’s done you literally just need to restart the server and it’s ready to rock and roll without consuming boot SD write cycles. You should confirm the new home directory by checking the web dashboard settings.

sudo service jenkins start

Now, this server is specifically for testing hardware, and accessing hardware ports during builds is not something the jenkins user is allowed to do on Linux by default. So you need to add the jenkins user to the dialout group, which has permission to access the external ports.

sudo usermod -a -G dialout jenkins
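It’s worth verifying the membership took effect (note that group changes only apply to new sessions, so restart Jenkins afterwards). A small sketch that scans the output of id; the in_group helper is just an illustration, not a standard command:

```shell
# Check whether a user belongs to a group by scanning `id -nG` output.
in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if in_group jenkins dialout; then
    echo "jenkins can access serial ports"
else
    echo "jenkins is NOT in dialout (or the user does not exist yet)"
fi
```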

Since this test rack is for coverage and hardware interface analysis only, I am going to hook it off my LAN git remote. It would be trivial to configure a web relay for forwarding GitHub web-hooks without directly exposing a port on your public IP if it were to serve as the primary CI server.

Connecting to a git server over SSH is fairly straightforward. First, on the Raspberry Pi, create an RSA key-pair using ssh-keygen (remember we moved the jenkins home, and by extension its .ssh/ dir). This should create both public and private keys in the .ssh/ subdir. The private key should be added as a git user credential in Jenkins. The public key should then be appended as a new line to the git server’s authorized_keys list, e.g. /home/git/.ssh/authorized_keys.
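A sketch of the key-generation step. It is demonstrated here in a temp directory so it is self-contained; on the Pi you would run ssh-keygen as the jenkins user (e.g. via sudo -u jenkins) so the keys land in the relocated /mnt/scratch/jenkins/.ssh:

```shell
# Generate a passphrase-less RSA keypair for the Jenkins git credential.
ssh_dir="$(mktemp -d)"      # stand-in for /mnt/scratch/jenkins/.ssh
ssh-keygen -t rsa -b 4096 -N "" -f "$ssh_dir/id_rsa" -C "jenkins@test-rack" -q

# The private key (id_rsa) becomes a Jenkins credential; the public key is
# appended to the git server's authorized_keys as a new line.
cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"
ls "$ssh_dir"
```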

With the authorization configured, setting up the source-control remote should be as easy as ssh://git@my-git-server.com/this/is/my/repo.git. Fortunately Jenkins will check that it can authenticate and see the branches, so you shouldn’t have to mess around testing with builds.

In my repository I include shell files intended to be executed by the systems-test CI server, which instruct it how to run the hardware-level tests. For example, a Jenkins ‘execute shell’ build step is added with:

bash ./resources/jenkins_systems_test.sh
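The contents of that script are entirely project-specific, but the contract with Jenkins is simple: exit non-zero on failure so the build is marked failed, and leave the coverage XML where later steps expect it. A hypothetical skeleton (the test-runner invocation in the comment is an assumption, not the project’s actual script):

```shell
#!/bin/sh
# Hypothetical skeleton of a systems-test script for Jenkins. `set -e` makes
# any failing command abort the script, which Jenkins reports as a failed build.
set -e

OUT_DIR="logs/coverage"
mkdir -p "$OUT_DIR"

# The real script would flash the target board and drive the hardware tests
# here, e.g. something along the lines of:
#   python -m pytest tests/system --cov --cov-report="xml:$OUT_DIR/coverage.xml"
echo "running hardware system tests..."

# Placeholder so this demo is self-contained; the test runner above would
# normally produce this report.
: > "$OUT_DIR/coverage.xml"

echo "coverage report written to $OUT_DIR/coverage.xml"
```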

This will produce the XML coverage report, which can then be uploaded automatically to Codacy, the static repository-analysis platform I use for my open-source projects. The key is to add your Codacy API token using the EnvInject plugin for Jenkins. Then, with a build step that exposes this token, it’s trivial to download and run the Codacy coverage uploader against the relevant commit ID. (Note: it’s best practice to enable EnvInject’s option for ‘do not show injected variables’ so that they don’t get captured in the console log.)

Uploading the results to Codacy is a little tricky, as their default binary is not built for ARM. You could build your own binary from source using Scala’s sbt assembly; the solution I chose was to just run the jar manually on the JVM already set up for Jenkins. To fetch the latest jar as part of your build you’ll need to sudo apt install jq, a JSON parser, to easily filter the response from the GitHub API. You can then run the following command, which automatically obtains the latest released version of the uploader jar.

curl -LS -o codacy-coverage-reporter-assembly.jar "$(curl -LSs https://api.github.com/repos/codacy/codacy-coverage-reporter/releases/latest | jq -r '.assets | map({name, browser_download_url} | select(.name | endswith(".jar"))) | .[0].browser_download_url')"

Once the jar has been downloaded into the build workspace it’s trivial to run it using the following command and the GIT_COMMIT environment variable that the Jenkins git plugin provides. Provided your XML report was generated correctly and you injected your API key, this should work smoothly.

java -jar codacy-coverage-reporter-assembly.jar report --commit-uuid "$GIT_COMMIT" -r ../logs/coverage/coverage.xml

That’s pretty much it: I now have a low-cost CI server testing my project more rigorously and analysing hardware-level coverage, automating away another layer of manual work before the push hits the Travis CI deployment server.