London software testing news UK

Reasons for bad software testing

Posted in Software testing by testing in London on June 28, 2007

From Comp Lanc ICSE 2007

Despite advances in formal and automated fault discovery and their increasing adoption in industry, it appears that testing, whereby software is ‘shown to be good enough’, will continue as the principal approach for software verification and validation. The strengths and limitations of testing are well known and there is healthy debate over automation. Case studies have proved valuable, and following in this programme of ‘empirical studies of testing’ we seek to better describe the practical issues in testing for a small software company.

Best practice in testing has been largely uncontroversial, it being to adopt a phase-based approach. The earlier phases in these models have increasingly been automated, whereas innovations focused on the later stages have been more human-centric (for example, risk-based testing). Agile methods, such as extreme programming (XP), disrupt such models with test-driven development and often a rejection of any testing that cannot be fully automated. The agile approach has been successful, but there remains a lack of empirical evidence about such testing, and we are concerned as to whether it solves or merely displaces certain issues. Our experience is also that many companies who have adopted XP practices do not, in fact, automate all tests.
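The fully automated, test-driven style described above can be sketched as follows. This is a minimal illustration, not taken from the paper: the function `normalise_postcode` and its tests are hypothetical, chosen only to show what ‘a test that cannot be fully automated’ is being contrasted with — each test here passes or fails with no human judgement required.

```python
import unittest

# Hypothetical function under test. In test-driven development the tests
# below would be written first, then this implementation added to make them pass.
def normalise_postcode(raw):
    """Strip surrounding whitespace and upper-case a UK postcode string."""
    return raw.strip().upper()

class TestNormalisePostcode(unittest.TestCase):
    # Fully automated checks: a machine can run them and decide pass/fail.
    def test_lowercase_input_is_uppercased(self):
        self.assertEqual(normalise_postcode("sw1a 1aa"), "SW1A 1AA")

    def test_surrounding_whitespace_is_stripped(self):
        self.assertEqual(normalise_postcode("  EC1A 1BB "), "EC1A 1BB")
```

Run with `python -m unittest` to execute the suite. By contrast, the exploratory or acceptance judgements discussed later in the paper resist being reduced to assertions of this kind, which is why their automation is contested.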

Alongside the ‘best practice’ approaches there continue to be more pragmatic guides to testing. For example, Whittaker [26] argues that “there is enough on testing theory” and looks at “how good testers actually do software testing”. Kaner provides wider “lessons” based upon his experiences in testing. From such guides it seems that drawing and learning from ‘experience’ is in some sense as important as following a rational approach to testing.

The empirical study in this paper confirms what Whittaker calls for elsewhere: for theory-based and practice-based approaches to communicate and converge. In this paper we discuss the pragmatics of software testing for a small software company. The company, which we shall refer to as W1REsys, follows a programme of automated unit testing and a semi-automated programme of integration and acceptance testing. We focus on systems integration and acceptance testing and find that the notion of ‘rigorous’ testing is defined organisationally rather than in accordance with some technical criteria. We discuss why it is important for software engineering researchers to understand that testing is a socio-technical rather than a technical process and that, for product companies, there will inevitably be ambiguity related to integration and acceptance testing.
