In the Onebip team at Neomobile, we take testing seriously and make sure it accompanies developers from the first line of code to the time a new feature hits the website. Since manual tests are boring to execute and difficult to repeat consistently, we have embraced automated tests and introduced them into the lifecycle of the project. Automation plays a great role in scaling the approaches described here to hundreds of different tests: all the code we write for testing is designed to run unattended and produce a binary result (a green or red light for deploying).
The most basic automated tests you can write in a project act at the unit level – extracting a class or a function from the codebase and exercising it in isolation. Technically speaking, you give the unit under test a certain input and check that the output and the messages it sends to other parts of the system conform to the specification.
The metaphor for this kind of *unit* testing is performing engine bench tests instead of running the engine in a real vehicle. Such tests are easy to design and cheap to run; they are a tool available to engineers, and they can change quickly along with the code they exercise. Furthermore, the focus on the single part rather than on the system points developers in the direction of producing reusable components, instead of classes that are strongly coupled together.
For example, we have lots of components that perform calculations – such as checking the signatures of messages that the operators send us, and transforming their heterogeneous formats into a standard one. These operations are easy to unit test – there is a clear demarcation of an input (an HTTP request from an operator) and an output (a Boolean or a more complex result), with little side effect on the rest of the system. Yet these tests provide great value because they make sure that even as we change the handling code every day to make room for new use cases, messages are still understood and we do not break the integration with that operator. Unit tests are then a category of *regression* tests, and not only a design tool.
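As a sketch of what such a unit test might look like, here is a hypothetical signature check: the function names, parameters and signing scheme (HMAC-SHA1 over the sorted query string) are illustrative assumptions, not the real operator protocol.

```python
import hashlib
import hmac
import unittest

def signature_is_valid(params, secret):
    """Hypothetical check: the operator signs the sorted query string
    with HMAC-SHA1 and sends the digest in the 'sig' parameter."""
    params = dict(params)  # work on a copy, don't mutate the caller's dict
    received = params.pop("sig", "")
    payload = "&".join(f"{key}={params[key]}" for key in sorted(params))
    expected = hmac.new(secret.encode(), payload.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(received, expected)

class SignatureTest(unittest.TestCase):
    """Exercises the unit in isolation: a request in, a Boolean out."""

    def _signed_params(self, params, secret):
        payload = "&".join(f"{key}={params[key]}" for key in sorted(params))
        digest = hmac.new(secret.encode(), payload.encode(), hashlib.sha1).hexdigest()
        return dict(params, sig=digest)

    def test_accepts_a_correctly_signed_request(self):
        params = self._signed_params({"msisdn": "393401234567", "amount": "2.00"}, "s3cret")
        self.assertTrue(signature_is_valid(params, "s3cret"))

    def test_rejects_a_tampered_request(self):
        params = {"msisdn": "393401234567", "amount": "2.00", "sig": "bogus"}
        self.assertFalse(signature_is_valid(params, "s3cret"))
```

The test never touches HTTP or the database: the unit is lifted onto the bench, fed an input, and its output is asserted against the specification.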
Objects are tested in isolation, by providing an input and checking an output, substituting interactions with the rest of the system with Test Doubles where necessary.
Typically at a higher level in the system, acceptance tests exercise large components, or even the full system, from the point of view of an end user or of another application. The goal of acceptance tests is not to check the technical correctness of the code, but to find out whether the system matches its business specification.
For example, it’s important for us to take the appropriate steps for billing a user, depending on their country, operator and payment amount. However, no single piece of code can start from this information and generate the full behaviour alone – the behaviour is the product of all the objects in our system.
So acceptance tests start by generating a purchase instance and following it through all the steps that the user and the involved systems have to take to complete it. For some cases, this requires sending messages; for others, entering PINs received on a mobile phone; for yet others, visiting an operator’s website.
The acceptance tests check that all the units in the system interact correctly, and that the behaviour of these different payment flows is consistent. The kinds of assertions you can introduce relate to:
- the internal state of the system: a purchase is marked as failed if the user has clicked Cancel on the operator’s website
- the external interactions: the system shall send a free courtesy message after a successful billing with this operator
- the output to the user: they should see a success message describing the payment and containing the price
Specification in code – after a subscription is terminated the operator receives a notification
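A minimal sketch of the kind of specification the caption describes might read as follows; the `Subscription` class, the gateway interface and every name in it are hypothetical stand-ins, not Onebip's real API.

```python
import unittest
from unittest.mock import Mock

class Subscription:
    """Minimal production-side sketch so the specification below is runnable."""

    def __init__(self, user, operator):
        self.user = user
        self.operator = operator
        self.active = True

    def terminate(self):
        self.active = False
        # The business rule under test: termination notifies the operator.
        self.operator.send_notification(
            event="subscription_terminated", user=self.user
        )

class TerminationNotificationTest(unittest.TestCase):
    def test_operator_is_notified_when_a_subscription_is_terminated(self):
        # The operator gateway is replaced with a Test Double, so the
        # assertion is on the outgoing message, not on a real operator.
        operator_gateway = Mock()
        subscription = Subscription(user="393401234567", operator=operator_gateway)

        subscription.terminate()

        operator_gateway.send_notification.assert_called_once_with(
            event="subscription_terminated", user="393401234567"
        )
```

The assertion belongs to the second category above – an external interaction – expressed as executable code rather than as a sentence in a requirements document.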
Unit tests typically find correctness faults, such as incomplete or wrong URL generation, and prevent these bugs from reaching deployment. Acceptance tests find validation faults: the requirements agreed with the business – Neomobile departments, operators and merchants – are checked while hiding details of the system such as its internal classes and files.
It’s not enough to integrate with business partners – users are heterogeneous too in the web world. Each user loads Onebip’s payment page from a different operating system (Windows, Mac OS, Linux), a different browser (from Internet Explorer to Firefox), in different versions (IE 7 to IE 10), or even from a different device altogether, such as an Android phone or an iPhone.
To automate these checking tasks, we sought out a browser-driving tool: Selenium. By using Selenium as a driver, test code can start up browsers on a set of machines, make them load our pages, and check their appearance as well as their functionality.
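The basic checks could be sketched as a function over the WebDriver interface; the URL, the expected title and the wording of the checks are made-up examples, and in the real suite the `driver` argument would be a `selenium.webdriver.Remote` connected to one of the grid machines.

```python
def check_payment_page(driver, url):
    """Load a payment page and run the basic checks described above.

    `driver` is any object exposing the Selenium WebDriver interface
    (get, title, page_source, save_screenshot); passing a Remote driver
    runs the same checks on each browser/OS combination of the grid.
    Returns a list of problems: empty means the green light.
    """
    driver.get(url)
    problems = []
    if "Onebip" not in driver.title:  # illustrative title check
        problems.append("missing or wrong page title")
    if "Terms & Conditions" not in driver.page_source:
        problems.append("Terms & Conditions not present")
    # Capture the rendering for the screenshot comparison step.
    driver.save_screenshot("payment_page.png")
    return problems
```

Keeping the driver as a plain parameter also lets the checks themselves be unit tested with a fake driver, without booting a single browser.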
A screenshot taken automatically on Android by our test suite.
In short, we now have a focused Selenium test suite that loads some sample payment pages and, after basic checks such as the presence of a correct title and of the Terms & Conditions, takes a screenshot of them. Each screenshot can then be compared to a previous version, to find out whether a fault has been introduced in the rendering. The suite produces a binary result – the page is either still displayed correctly or it is not – and, in case of failure, generates a difference between the expected screenshot and the actual current page.
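The comparison step reduces to a pixel-by-pixel diff. This is a self-contained sketch assuming the two screenshots have already been decoded into equal-layout grids of RGB tuples (in practice an image library would do the decoding and the diff):

```python
def compare_screenshots(expected, actual):
    """Binary check sketch: each screenshot is a grid (list of rows) of
    (r, g, b) tuples. Returns (ok, diff): ok is the green/red light,
    diff marks every changed pixel white on black, like the failure
    image the suite saves for inspection.
    """
    if [len(row) for row in expected] != [len(row) for row in actual]:
        return False, None  # dimensions differ: certainly a rendering change
    ok, diff = True, []
    for expected_row, actual_row in zip(expected, actual):
        diff_row = []
        for e, a in zip(expected_row, actual_row):
            changed = e != a
            ok = ok and not changed
            diff_row.append((255, 255, 255) if changed else (0, 0, 0))
        diff.append(diff_row)
    return ok, diff
```

A single changed pixel is enough to turn the light red, so in practice a tolerance threshold (for anti-aliasing and font rendering differences between machines) is a natural refinement of this sketch.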