Mauro Bringolf

Currently geeking out as a WordPress developer at WebKinder and a computer science student at ETH.

Learning about software testing from the paper behind the Karma test runner

April 4, 2017

How many times have you checked out your favorite JavaScript library’s website and discovered something like this:

Karma itself originates from a university thesis, which goes into detail about the design and implementation.

Not very often, right? Well, the quote above is from Karma’s website [1]. Karma is a test runner for client-side JavaScript apps: it exercises your app’s modules inside real browsers instead of mocking the environment in Node or something similar. I started reading the paper right away. It explains the ideas and goals behind the design and architecture of Karma, which is interesting in itself, but it also makes a few points about software testing from a more general perspective that really stuck with me. Here are three things that I think the paper describes very well:

1. This project does not need tests? It probably does.

The first thing you need to start testing a software project is a set of solid reasons to do so. It sounds simple, but automated testing often seems like overkill. At first. Then the project grows and other things take priority. Finally it reaches a point where it becomes obvious that automated tests would really have helped along the way, but now it is too late. So what we need are good reasons to motivate automated testing in the first place, because tests primarily pay off in the long term. Here are some of my favorites from the paper:

2. Individual test suites and cases must run in isolation

Different test suites must not share any state or data. Karma runs the actual tests inside an iframe and achieves this isolation by completely refreshing the iframe for each test suite. A failing unit test should strongly hint at a bug within the module it tests, otherwise it is not really useful. Shared state, functions and abstractions within tests make it harder to provide this guarantee. Most testing frameworks ship with some API for running code before and after each test: Mocha [2] has beforeEach and afterEach hooks, and PHPUnit [3] calls them setUp and tearDown. I never really thought about these before, but now it seems that they have a lot of potential for violating the isolation of individual tests. So I will definitely think twice before putting something in them next time!
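
To make this concrete, here is a minimal sketch using Mocha’s hooks. The user-store fixture is hypothetical and not from the paper; the point is that the hooks rebuild it for every test, so no case depends on what a previous one did:

    // Hypothetical fixture: a plain Map standing in for whatever a real test needs.
    const assert = require('assert');

    describe('user store', function () {
      let store;

      beforeEach(function () {
        // Rebuild the fixture for every single test so no state leaks between cases.
        store = new Map([['alice', { admin: false }]]);
      });

      afterEach(function () {
        // Throw everything away; the next test starts from scratch.
        store = null;
      });

      it('can promote a user to admin', function () {
        store.get('alice').admin = true;
        assert.strictEqual(store.get('alice').admin, true);
      });

      it('still sees the original fixture', function () {
        // Passes only because beforeEach recreated the store after the previous test mutated it.
        assert.strictEqual(store.get('alice').admin, false);
      });
    });

The risk is the opposite use: once objects created in beforeEach are shared and only partially reset, tests start depending on each other without anyone noticing.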

3. How can or should tests access internal functionality?

In a modular code base, different parts communicate with each other through their specified public APIs. Each module hides its dirty parts from the others so they do not have to deal with them. Unit tests can then call these API methods and check that they do what is expected. This also suggests that modules should be really small, because anything hidden inside a module will not be tested directly.
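
As a toy illustration (not from the paper), here is a hypothetical slugify module: a test can only reach the exported function, while the helper stays invisible and is only ever tested indirectly:

    // slugify.js -- hypothetical module; only slugify is part of the public API.
    function stripAccents(text) {
      // Internal helper: decompose accented characters and drop the combining marks.
      return text.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
    }

    function slugify(title) {
      return stripAccents(title).toLowerCase().trim().replace(/\s+/g, '-');
    }

    module.exports = { slugify };

    // slugify.test.js -- a unit test can only go through the public function.
    const assert = require('assert');
    const { slugify } = require('./slugify');
    assert.strictEqual(slugify('Héllo World'), 'hello-world');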

Now Karma had a different problem with its own tests: how do you test client-side JavaScript that is hidden from the global scope? We are talking about testing the testing framework here, but the issue can come up in other contexts as well. The paper describes a very simple solution: put all code on the global scope during development and testing, then wrap everything in an immediately invoked function expression (IIFE) for production. It does not sound very maintainable to me, but I guess you could automate the wrapping with a script or do it during the production build to make it less manual. Anyway, it is not really the solution that I find interesting, but the problem, because it forces you to think about the public and private parts of modules from a new perspective. When designing modules you have to decide which parts are internal implementation that will not be tested directly, and which parts should be tested.
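
Sticking with the hypothetical slugify example, a rough sketch of that approach: during development and testing both functions simply live on the global scope, and a build step wraps the same file in an IIFE for production so that only the public function escapes.

    // Development / test build: everything is global, so a test can call
    // stripAccents('Héllo') directly if it wants to.
    function stripAccents(text) {
      return text.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
    }
    function slugify(title) {
      return stripAccents(title).toLowerCase().trim().replace(/\s+/g, '-');
    }

    // Production build: the same code wrapped in an IIFE by a build script,
    // exposing only the chosen public surface.
    (function (global) {
      function stripAccents(text) {
        return text.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
      }
      function slugify(title) {
        return stripAccents(title).toLowerCase().trim().replace(/\s+/g, '-');
      }

      global.slugify = slugify; // public; stripAccents is no longer reachable from outside
    })(typeof window !== 'undefined' ? window : globalThis);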

References

  1. http://karma-runner.github.io/1.0/intro/how-it-works.html
  2. https://mochajs.org/#root-level-hooks
  3. https://phpunit.de/