Porting ProL2TP to a new platform: Part 3 - Qualification

Wed 14 August 2013
By Chris Elston

Crash Test Dummy

In the first and second posts in this series, I described the process of porting ProL2TP and the L2TPv3 kernel drivers to the Ubiquiti EdgeRouter LITE. This final post concludes the series with an overview of our testing process. Testing a package as configurable as ProL2TP would be a very tedious process without a fair amount of automation, and it's this automation which enables us to support ProL2TP on such a wide variety of platforms.

In the case of the EdgeRouter, the test harness enabled us to very quickly discover and fix some endian issues within ProL2TP's netlink control interface.


The harness fits into a classic master/servant pattern, with the harness itself running on a development workstation, and exercising control over remote machines running instances of ProL2TP:

Test Network Diagram

It is based on the python unittest module, which we've found to be a flexible and convenient framework to build on. In total we have over 1400 separate tests for ProL2TP, and a full test run takes 4 hours. For this reason we maintain subsets of tests which give broad but shallow coverage for regular testing, and reserve the full regression suite for when substantial changes are made or in preparation for a release.
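To give a flavour of what a unittest-based harness test looks like, here's a minimal sketch. The class and method names (`FakePeer`, `connect_to`, `tunnel_up`) are purely illustrative stand-ins for the harness's real peer abstraction, which wraps remote machines running ProL2TP:

```python
import unittest

class FakePeer:
    """Illustrative stand-in for a remote machine running prol2tpd.

    The real harness drives actual machines over ssh; this stub just
    records tunnel state so the test structure can be shown.
    """
    def __init__(self, name):
        self.name = name
        self.tunnels = set()

    def connect_to(self, other):
        # In the real harness this would configure and bring up an
        # L2TP tunnel to the named peer.
        self.tunnels.add(other.name)

    def tunnel_up(self, peer_name):
        return peer_name in self.tunnels

class TunnelSetupTest(unittest.TestCase):
    def setUp(self):
        # One peer acts as LNS, the other as LAC.
        self.lns = FakePeer("lns")
        self.lac = FakePeer("lac")

    def test_tunnel_establishes(self):
        self.lac.connect_to(self.lns)
        self.assertTrue(self.lac.tunnel_up("lns"))

if __name__ == "__main__":
    unittest.main()
```

Because each scenario is just a TestCase, unittest's loader makes it easy to select the broad-but-shallow subsets mentioned above, or the full suite, from the same codebase.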

Design Goals

In common with any software development we've had some design goals in mind while we built the test harness:

  • Simple to run. Starting a test run should be as simple as possible, to reduce friction and encourage frequent testing.
  • Repeatable. It goes without saying that the harness itself should produce dependable results.
  • Minimal target requirements. ProL2TP is portable, so our test harness needs to be able to operate across a wide range of devices.
  • Dynamic analysis. The dynamic behavior of any moderately sized software package becomes very difficult to predict, so our harness needs to help us identify classic problem areas like memory allocation and initialization.


We knew that we wanted to make testing as painless as possible. Since ProL2TP is networking software, testing involves multiple machines networked together. To be able to test with minimal setup overhead we would need to have test machines ready to go. This would be cumbersome to do with physical machines, so we quickly decided on a network of virtual machines.

We make heavy use of virtualization, and to manage virtual machines we've found that vagrant suits our needs pretty well. To avoid having to manually create and set up machines with vagrant for each test run, we've written a python binding which interacts with vagrant to give our test harness control over virtual machines. This allows the test harness to create, provision, start and stop virtual machines. The result is that we don't need to keep test machines around, which means that starting a test run is as simple as a single shell command.
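Our binding isn't public, but conceptually it just wraps the vagrant command-line tool. A minimal sketch, with an illustrative class and method names of our choosing (the real binding's API differs):

```python
import subprocess

class VagrantBox:
    """Minimal sketch of a python binding around the vagrant CLI.

    Class and method names here are illustrative only; the point is
    that each operation shells out to vagrant in the box's directory.
    """
    def __init__(self, directory):
        self.directory = directory

    def _vagrant(self, *args):
        # Run a vagrant subcommand for this box and capture its output.
        return subprocess.run(
            ["vagrant", *args],
            cwd=self.directory,
            capture_output=True, text=True, check=True)

    def up(self):
        # Creates, provisions and boots the VM in one step.
        self._vagrant("up")

    def halt(self):
        self._vagrant("halt")

    def destroy(self):
        self._vagrant("destroy", "-f")

    def ssh_config(self):
        # 'vagrant ssh-config' output can be parsed to seed the
        # harness's ssh connections to the box.
        return self._vagrant("ssh-config").stdout
```

With something like this, a test run can bring up a fresh network of boxes, exercise them, and tear everything down again without any manual VM management.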


Repeatability is key for testing, and the key to repeatability is controlling the variables. This is another place where virtualization makes sense - we use vagrant to create and provision our test machines, which means that we can start from a clean virtual machine every time. What's more - we share common vagrant baseboxes, so we can be sure that we are all using the same machine configuration for testing.


Our test harness uses ssh to run commands on each machine under test, and sftp to send and receive files. We use the paramiko SSHv2 library to enable our harness to talk directly to the ssh server on the client machines - this saves us the overhead of calling out to an external ssh tool.

In fact, sshd is one of the few requirements we have of a client machine in the test network. This means that we can easily replace one or more of the virtual machines in the testing network with a real machine. And this is precisely what I did for the EdgeRouter port, and have previously done for OpenWRT builds we produced for a customer deployment.


We recently added the ability to run any test or subset of tests with the ProL2TP daemon running under the dynamic analysis tool valgrind. This allows us to proactively check for things like memory leaks and uninitialized variables. Bugs of this kind can be tricky to find by inspection alone, so the ability to dynamically sniff them out whilst ProL2TP is being exercised is very handy.

Checking for memory leaks is one of those jobs best done immediately after a change is made, while the operation of the code is still fresh in the mind - so it's good to have that ability as part of our test harness.
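In practice this amounts to prepending a valgrind invocation to the daemon's command line before the harness launches it. A sketch of that wrapping, using typical memcheck options (the exact flags our harness passes may differ):

```python
def valgrind_wrap(daemon_cmd, log_path="/tmp/valgrind.log"):
    """Wrap a daemon command line so it runs under valgrind memcheck.

    Illustrative only: these are common leak-checking options, not
    necessarily the precise set the harness uses.
    """
    return [
        "valgrind",
        "--tool=memcheck",
        "--leak-check=full",      # report each leak with a backtrace
        "--track-origins=yes",    # trace uninitialized values to source
        "--error-exitcode=42",    # non-zero exit makes errors visible
        "--log-file=" + log_path,
    ] + list(daemon_cmd)

# e.g. valgrind_wrap(["prol2tpd", "-f"]) yields the full wrapped
# command line, ready to be run on the machine under test.
```

Because valgrind's exit code and log file are both under the harness's control, a leak or uninitialized read surfaces as an ordinary test failure rather than something to be spotted by eye.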


As ProL2TP has grown, so has the test harness. Each time we add new code or fix a bug, we add test coverage for it. It's much easier to add a test case at the same time as the functionality is added.

The path to a robust test harness has not always been a smooth one. Sometimes test cases fail unexpectedly, and then it's a case of investigating the failure to see whether the harness or the software under test is to blame. Experience has shown us that it's about 50/50, so sometimes we've fixed the harness and sometimes we've fixed ProL2TP. In either case you gain a deeper understanding of how the software under test behaves, and this can only be an advantage.


We've put considerable effort into ensuring that we can build ProL2TP for almost any Linux system, and to make it easy to qualify new platforms. Our testing infrastructure allows us to perform blackbox testing which scales from single test cases right up to a full suite run. Kicking off a test run, including provisioning new virtual machines, configuring a test network and running the tests is done with a single command. We have the flexibility to substitute any machine in our test network for a physical machine, or a virtual machine running a different distribution. This enables us to support and test ProL2TP across a wide range of architectures, kernels and distributions.