Software testing

The future of testing lies in automation. To be effective, large numbers of test cases must somehow be generated, executed and their results evaluated. Even where this is possible "by hand", the process is tedious, time-consuming and error-prone. It follows that every opportunity to automate parts of the process must be carefully examined.

One very promising approach is to write a (fairly) formal test specification in a special-purpose language. This test specification is based on the system requirements and/or high-level system specification (and can be augmented and improved as the development process continues). Once the test specification is written, it can be processed to produce test cases or, more usually, detailed descriptions of test cases.
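As a rough illustration of what "processing a test specification" might mean, the following sketch (in Python, with invented names; it is not the notation used in this research) expands a small declarative specification into concrete test-case descriptions by combining representative input partitions with an expected-outcome rule:

    from itertools import product

    # Hypothetical test specification for a 'withdraw' operation: each
    # parameter lists representative input partitions, and 'expect' states
    # the outcome that should hold for any generated test case.
    spec = {
        "operation": "withdraw",
        "partitions": {
            "balance": [0, 50, 1000],   # empty, typical, large account
            "amount":  [0, 50, 51],     # nothing, exact balance, overdraft
        },
        "expect": lambda balance, amount:
            "reject" if amount > balance else "dispense",
    }

    def generate_test_cases(spec):
        """Expand the specification into test-case descriptions by taking
        the cross product of the input partitions."""
        names = list(spec["partitions"])
        for values in product(*(spec["partitions"][n] for n in names)):
            inputs = dict(zip(names, values))
            yield {
                "operation": spec["operation"],
                "inputs": inputs,
                "expected": spec["expect"](**inputs),
            }

    for case in generate_test_cases(spec):
        print(case)

A real test-specification language would of course carry far more structure (pre- and post-conditions, coverage criteria, traceability back to requirements), but the principle is the same: the specification is data, and test cases are computed from it.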

There are interesting parallels between writing test specifications as above and writing formal specifications. In both cases, information from the system requirements or project goals is abstracted into a more formal and precise notation. Some of the abstraction is almost identical in both cases, but some aspects are unique to each. I am interested in these similarities and differences and am researching the area in several ways, including how testing information can be extracted from formal specifications and vice versa. My goal is a more complete form of specification, useful for both testing and conventional development. One particular form of specification involves the X-machine model of computation.
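To give a flavour of the X-machine idea (the turnstile machine and function names below are invented for this sketch, not drawn from the research described here), an X-machine can be viewed as a finite control structure whose transitions are labelled with processing functions acting on a memory value; candidate test sequences can then be read off as paths through the control states:

    from collections import deque

    # Hypothetical X-machine for a turnstile: memory counts unused credits.
    # (state, input) -> (next state, processing function on the memory)
    transitions = {
        ("locked",   "coin"): ("unlocked", lambda m: m + 1),
        ("unlocked", "coin"): ("unlocked", lambda m: m + 1),
        ("unlocked", "push"): ("locked",   lambda m: m - 1),
    }

    def input_sequences(start, max_length):
        """Breadth-first enumeration of input sequences that follow valid
        paths through the control states -- candidate test sequences."""
        queue = deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if path:
                yield path
            if len(path) < max_length:
                for (s, inp), (nxt, _) in transitions.items():
                    if s == state:
                        queue.append((nxt, path + [inp]))

    def run(start_state, memory, inputs):
        """Execute an input sequence, applying each processing function."""
        state = start_state
        for inp in inputs:
            state, fn = transitions[(state, inp)]
            memory = fn(memory)
        return state, memory

    for seq in input_sequences("locked", 3):
        print(seq, "->", run("locked", 0, seq))

The attraction for testing is that the control structure supports systematic test generation (in the spirit of finite-state-machine testing methods), while the processing functions carry the data-oriented detail that a conventional formal specification would describe.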

Papers and other material


Author: Gilbert Laycock (g.laycock@mcs.le.ac.uk), T: +44 (0)116 252 3902.