
On test automation and management, part I

13 Apr 2007

This is the first in a series of articles on test automation, test management and test automation management in the context of software development.

In this first part I’ll lay out the problem that’s bothering me; in subsequent articles I’ll talk about a concept and an implementation for a solution.

Automating Tests in a Heterogeneous Environment

Test Automation is the process by which the execution of tests is undertaken by the computer without user intervention.

Let’s not go into details as to why test automation is essential under most circumstances; suffice it to say that it frees resources and saves time. There are several tools on the market that automate tests. In almost all cases these tools concentrate on solving one of the following:

  • Tests for programs written in a specific language (all unit test frameworks)
  • Tests for a specific platform (be it OS, framework etc.)
  • Tests for UIs (Web or GUI)
  • Special tools for testing e.g. embedded platforms

Usually the solution consists of a language to write the test script (be it Python, Java, Ruby, Delphi or VBA) and an environment that will run one or more scripts, log the output and return a success or failure result.
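
As a rough sketch of that pattern, a script for such an environment might look like the following in Python (the test body is a stand-in, not any real tool’s API):

    import unittest

    class ConnectionTest(unittest.TestCase):
        """A hypothetical test case the environment would discover and run."""

        def test_handshake_succeeds(self):
            # A real suite would talk to the application or the device;
            # a canned response stands in for the system under test here.
            response = {"status": "ok"}
            self.assertEqual(response["status"], "ok")

    if __name__ == "__main__":
        # The runner logs each result and the process exits non-zero on
        # failure -- exactly the success/failure contract described above.
        unittest.main()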

Let’s state the problem by studying a contrived example, where we are testing a product that controls an embedded device. The development artifacts consist of a Windows-based application, which presents the GUI we want to test, and the embedded device’s firmware.

To make things a bit more complex, the application and the device can communicate over different media, say a USB/serial interface and a TCP/IP-based network interface.

We have several tools for testing the different aspects of the system:

  • Unit tests for the application that run on Windows
  • Unit tests for the embedded device functionality that run on Windows and on the hardware itself
  • A GUI test automation tool
  • A USB/serial traffic analyzer
  • A simulator against which functional tests are written
  • etc.

Testing the product would include running unit tests, running functional tests in simulation mode, using a test rig that runs integration tests for all combinations of communication media, and using a setup of multiple test servers for system tests in real-world conditions. There will also be stress testing and long-running tests.

Going past the unit test stage, tests are usually specified by giving a short description, stating the preconditions for the test and the expected results. Most test tools do not explicitly provide for the setup of the test environment (e.g. the USB analyzer script will assume the correct firmware to be present, the GUI tester will expect the application to be running etc.), although most test automation tools will provide ways of running external scripts, where one can implement such preparation steps.
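
To make that shape concrete, here is a minimal sketch of such a specification expressed as data, with the external preparation steps attached explicitly (every name and field below is illustrative, not taken from any particular tool):

    from dataclasses import dataclass, field

    @dataclass
    class TestCaseSpec:
        """Illustrative record mirroring how test cases are specified on paper."""
        description: str
        preconditions: list        # e.g. "firmware 1.0.3 flashed"
        expected_results: list
        setup_scripts: list = field(default_factory=list)  # external preparation

    usb_connect = TestCaseSpec(
        description="Device connects over USB after a fresh flash",
        preconditions=["firmware 1.0.3 flashed", "application running"],
        expected_results=["connection established within 5 seconds"],
        setup_scripts=["flash_firmware.py", "start_application.py"],
    )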

So extra scripts are needed for tasks like starting and stopping the application, flashing the device, generating test data etc.
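
In Python such glue could be as plain as the following sketch (the flashtool command, the COM port and the file paths are made up for illustration):

    import subprocess

    def start_application(exe_path):
        """Launch the application under test and hand back the process handle."""
        return subprocess.Popen([exe_path])

    def flash_device(firmware_image, port="COM3"):
        """Flash the device through a vendor command-line tool (name made up)."""
        subprocess.run(["flashtool", "--port", port, firmware_image], check=True)

    def generate_test_data(path, size=1024):
        """Write a deterministic dummy payload for the communication tests."""
        with open(path, "wb") as f:
            f.write(bytes(i % 256 for i in range(size)))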

In our highly heterogeneous project environment we end up with at least five different ways of implementing test cases. Managing these test cases becomes a case of “Excel Magic”, where the appointed test manager tries to keep tabs on all scripts, descriptions and test conditions through the ingenious use of spreadsheets and hard work. Possibly (and with luck) a good test management tool can help ease the complexity, but experience points to the Excel solution most of the time.

But there is no single entry point to all the tests! In the best of cases, elaborate batch/shell scripts will be written to provide the glue that integrates the tools.
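
Such a glue script might boil down to something like this sketch (written in Python for readability; the individual tool command lines are invented, only the exit-code convention matters):

    import subprocess
    import sys

    # One entry per tool invocation; the commands are placeholders.
    TOOL_RUNS = [
        ["python", "run_unit_tests.py"],
        ["guitester", "/suite", "smoke_tests"],
        ["usb_analyzer", "--script", "traffic_check"],
    ]

    def main():
        failed = 0
        for cmd in TOOL_RUNS:
            print("running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                failed += 1
        # Exit non-zero if any tool reported failure, so callers can chain us.
        sys.exit(1 if failed else 0)

    if __name__ == "__main__":
        main()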

Integrating the tools

In our scenario Excel becomes the integrator on the test management side, while on the test execution side, batch/shell scripts take over that role.

It is a race against time that becomes very difficult to win the closer a team comes to the deadline: every change in the testing environment (test case versions, descriptions, software version numbers) needs to be documented and tracked, and broken shell scripts must be found and corrected.

It becomes obvious that we need a unifying layer for all the different tools: a system that will allow the integration of several tools and provide that single entry point.

Ideally it will also provide us with the following (a rough sketch comes after the list):

  • A way to define test cases whose execution requires several different tools in a consistent and simple way.
  • A way to automate the execution of said test cases.
  • History and reporting on test execution that can be directly traced to specific builds of the tested system, using known versions of the test case set and the tools.
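
One way to picture such a unifying layer (a sketch of the idea only, not the solution the next installment will present) is a common adapter interface that every tool gets wrapped in, so the management and execution sides only ever see one contract:

    from abc import ABC, abstractmethod

    class ToolAdapter(ABC):
        """Illustrative contract a unifying layer could impose on every tool."""

        @abstractmethod
        def setup(self, test_case):
            """Prepare the environment: flash firmware, start the application..."""

        @abstractmethod
        def execute(self, test_case):
            """Run the test and return True for success, False for failure."""

        @abstractmethod
        def collect_logs(self, test_case):
            """Return the tool's output for the history and reporting store."""

With every tool behind the same few calls, the single entry point, the automated execution and the reporting stop depending on which tool happens to run a given test case.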

It turns out that if you try to build software that does all of the above, you end up creating a framework for building such a system, which is then customized according to each project’s requirements.

I’ll talk about one such solution in the next installment of the series.