London software testing news UK

IBM Cloud software testing

Posted in Software testing by testing in London on September 30, 2009

From Information Week

IBM’s CloudBurst appliances are based on Intel, not IBM Power, chips and will make use of eight-core Nehalem-generation Xeon 5500 chips. The CloudBurst appliance “brings a lot of piece parts together into a coherent package,” according to IBM. Customers will be able to consult a catalogue of software combinations for CloudBurst, such as DB2 combined with the WebSphere Application Server.

“We create the templates and load them with VMware licenses. The customer can then select the software from a self-service approach,” and buy the above combination, or, say, Rational development tooling that fits a project about to get underway.

Software testing and quality assurance (QA) is another area where a CloudBurst appliance, either inside or outside the enterprise, could be used to advantage, he said. In effect, IBM is going to offer its expertise in constructing enterprise software environments as part of a self-service catalogue.

Methodical Design and Test of Software

Posted in Software testing by testing in London on September 29, 2009


The single most important factor promoting software development agility is the ability to functionally test new or changed software. The tests must be on demand, immediate, thorough, and rapid – as well as separate from the hardware.

Untested software will always have defects or issues when first brought up on target hardware. This is true regardless of how skilled the coders are and how rigorous the rest of the development process may be. Manual testing of software on the hardware is feasible during early development, but it takes longer (and costs more) as the product approaches full function. Schedule and budget pressures frequently cut into the time required for thorough manual software testing, without consideration of the consequences. But taking shortcuts with manual testing only leads to more overlooked bugs that disrupt smooth product introductions, or worse, result in field actions and possible recalls.
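The point about testing being separate from the hardware can be sketched in code. This is an illustrative example only, not from the article: `SimulatedSensor` is a hypothetical stand-in for a hardware device, so the logic under test can run on demand with no target hardware attached.

```python
# Hypothetical sketch: test control logic against a simulated device
# instead of target hardware, so tests are on-demand and immediate.

class SimulatedSensor:
    """Stand-in for a hardware sensor; returns scripted readings."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        return next(self._readings)

def average_reading(sensor, samples=3):
    """Logic under test: average several sensor samples."""
    return sum(sensor.read() for _ in range(samples)) / samples

# Runs instantly, with no hardware attached.
sensor = SimulatedSensor([10.0, 12.0, 14.0])
assert average_reading(sensor) == 12.0
```

The same logic can later be exercised on real hardware; the simulation simply removes the scheduling bottleneck during development.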

Managing the testing process

Posted in Software testing,Software testing book by testing in London on September 28, 2009

A new edition of Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing is out (the third edition, to be more precise).

According to the review:

The book covers core testing concepts and thoroughly examines the best test management practices and tools of leading hardware and software vendors. Step-by-step guidelines and real-world scenarios help you follow all necessary processes and avoid mistakes.

Producing high-quality computer hardware and software requires careful, professional testing; Managing the Testing Process, Third Edition explains how to achieve that by following a disciplined set of carefully managed and monitored practices and processes. The book covers all the standards, methods, and tools you need for projects large and small, presents the business case for testing products, and reviews the author's latest test assessments.

Topics include agile testing methods, risk-based testing, IEEE standards, ISTQB certification, distributed and outsourced testing, and more.

ALM and agile testing

Posted in Software testing by testing in London on September 27, 2009

From Dr Dobbs

In an Agile environment, it’s helpful to think of the ALM process not as sequential phases or steps, but rather as a series of short work increments, each including just enough analysis, design, development, testing, and deployment to deliver a bit of business value. The right processes and best practices are continuously discovered and improved.

In an Agile world, the traditional “phases” of ALM are much shorter and much more tightly dependent upon each other. Applying good engineering practice to all aspects of development, testing, build, deployment and release management is critical, as this is what makes projects and systems reliable and predictable regardless of the stage in the life cycle. It is also critical that your chosen ALM tool holistically supports good engineering practice across each of these areas.

3 useful test cases for ensuring a consistent user interface

Posted in Acceptance testing by testing in London on September 26, 2009

From Software Planner newsletter

  1. Abbreviation inconsistencies: If your screens contain abbreviations (e.g. Nbr for number or Amt for amount), the abbreviations should be consistent across all screens in your application. A style guide is key for ensuring this.
  2. Delete confirmations: It is a good practice to ask the user to confirm before deleting an item. Create test cases to ensure that all delete operations require the confirmation. Taking this a step further, it would also be great to allow clients to turn off specific confirmations if they decide to do this.
  3. Save confirmations: It is good practice to ask the user to confirm an update if updates are made and they navigate to another item before explicitly saving. Create test cases to ensure that all record movement operations require the confirmation when updates are made. Taking this a step further, it would also be great to allow clients to turn off specific confirmations if they decide to do this.
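The confirmation cases above can be expressed as automated checks. The sketch below is illustrative only: `App` is a hypothetical in-memory model of the application, used to show the shape of a "delete requires confirmation" test case rather than any real UI framework.

```python
# Hypothetical sketch: a delete-confirmation test case against a
# simplified in-memory model of the application.

class App:
    def __init__(self, items):
        self.items = list(items)
        self.pending_delete = None

    def request_delete(self, item):
        # The delete must not take effect until the user confirms.
        self.pending_delete = item

    def confirm(self):
        if self.pending_delete in self.items:
            self.items.remove(self.pending_delete)
        self.pending_delete = None

def test_delete_requires_confirmation():
    app = App(["a", "b"])
    app.request_delete("a")
    assert "a" in app.items        # nothing deleted yet
    app.confirm()
    assert "a" not in app.items    # deleted only after confirming

test_delete_requires_confirmation()
```

A save-confirmation test follows the same pattern: make an edit, navigate away without saving, and assert that a confirmation is pending before the record changes.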

New Oracle Application Testing Suite

Posted in Software testing,testing tool by testing in London on September 25, 2009

From Arcweb

Oracle announced the availability of Oracle Application Testing Suite 9.0, a complete, open and integrated application testing solution for Oracle applications, Web and SOA applications. Oracle Application Testing Suite introduces OpenScript, a new Java-based, integrated test scripting platform for automated functional testing and load testing.

Additionally, the new offering delivers enhanced capabilities for testing of Oracle Applications through new test accelerators for the Oracle E-Business Suite and enhanced accelerators for Oracle’s Siebel CRM.

A key component of Oracle Enterprise Manager’s suite of Application Quality Management products, Oracle Application Testing Suite provides an integrated solution for load testing, functional testing and test management, enabling customers to thoroughly test applications and their underlying infrastructure and helping to ensure optimum quality, scalability and availability prior to deployment.

5 Useful test cases for testing User Interfaces

Posted in Software testing by testing in London on September 24, 2009

From Software Planner Newsletter

  1. Error messages: Ensure that error messages are informative, grammatically correct, and not condescending.
  2. Shortcuts: If your application allows shortcut keys (like CTRL+S to save), test each shortcut to ensure it works in all supported browsers (if the application is web based).
  3. Invalid choices: Do not include instructions for choices not available at the time. For example, if a screen cannot be printed due to the state of the data, the screen should not have a Print button.
  4. Invalid menu items: Do not show menu items that are not available for the context you are currently in.
  5. Dialog box consistency: Use a style guide to document what choices are available for dialog boxes. You should not have a Save/Cancel dialog on one screen and an OK/Cancel dialog on another; this is inconsistent.
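The dialog-consistency case lends itself to a data-driven check. This is an illustrative sketch only: the style guide is treated as data (a set of approved button combinations, invented here for the example), and every screen's dialog is asserted against it.

```python
# Hypothetical sketch: treat the style guide as data and assert that
# every screen's dialog uses an approved button combination.

STYLE_GUIDE_BUTTONS = {("Save", "Cancel"), ("OK",)}  # approved sets

screens = {
    "edit_customer": ("Save", "Cancel"),
    "edit_order": ("Save", "Cancel"),
    "about_box": ("OK",),
}

def check_dialog_consistency(screens):
    for name, buttons in screens.items():
        assert buttons in STYLE_GUIDE_BUTTONS, (
            f"{name} uses non-standard dialog buttons {buttons}")

check_dialog_consistency(screens)
```

Keeping the approved combinations in one place means a style-guide change updates every test at once, instead of being scattered across individual test cases.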

SOA Testing and HP

Posted in Software testing,testing tool by testing in London on September 23, 2009

From Tech Target

Service-oriented architectures involve distributed applications with pieces that can be spread out across an enterprise’s infrastructure. This presents a challenge to QA teams who must deal with multiple access points, dependencies and degrees of availability when testing changes to service functions.

Virtualizing system configurations is becoming a popular way to deal with this issue. A developer can remove the real-world system dependencies while simulating their influence.

HP has put faith in this approach by signing a deal with software testing vendor iTKO to resell its LISA Virtualize testing suite. The company plans to package LISA with its HP Quality Management, Functional and Performance Testing products.

LISA gives developers the ability to simulate the behaviour and performance of system configurations that a piece of software would need to be tested on. The performance of a simulated system can be dialled up or down to mimic various performance issues.
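The idea of simulating a dependency and dialling its performance up or down can be sketched in a few lines. This is a generic illustration of service virtualization, not the LISA product: `VirtualService` is a hypothetical stub whose responses and latency are configured by the test.

```python
import time

# Generic sketch of service virtualization (not iTKO LISA itself):
# a stub that mimics a downstream service, with a latency setting
# that can be dialled up or down to simulate performance issues.

class VirtualService:
    def __init__(self, responses, latency_s=0.0):
        self.responses = responses
        self.latency_s = latency_s  # raise this to simulate slowness

    def call(self, request):
        time.sleep(self.latency_s)
        return self.responses.get(request, "404")

# Fast, deterministic stand-in for a real dependency.
svc = VirtualService({"/price": "42.00"}, latency_s=0.0)
assert svc.call("/price") == "42.00"
assert svc.call("/missing") == "404"
```

Because the stub is configuration, the same functional tests can be rerun against a "slow" or "flaky" version of the dependency without touching the real system.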

Outsourced Testing Webinar

Posted in Software testing by testing in London on September 22, 2009

From Free Webinar
Thursday 24 September: 14.00 BST

There are now many organisations offering managed testing and QA services, with the promise of in-depth technical and testing skills, cost savings, flexibility, accelerated delivery, and better quality.

While these contracts are entered into with the best intentions, all too often these seemingly great deals soon begin to feel expensive, inflexible, and no longer targeted to the needs of the organisation. Control over how well solutions are being tested, and continuity of value delivery, must be maintained.

Anyone considering or using outsourced testing services will want to avoid the pitfalls experienced by others by adopting proven ways of managing and working. This webinar explores practical management techniques for setting up, or rejuvenating, managed testing service arrangements, including supplier management and measurement, and tools that make outsourced testing a success for the customer.

The rise of Agile Acceptance Testing

Posted in Acceptance testing by testing in London on September 21, 2009


It was very encouraging to see the rising interest in agile acceptance testing, and we ended up almost running a full track on the topic with 4 presentations on lots of different aspects. I learned about Thoughtworks Twist from Andy Yates and narrative testing ideas from Antony Marcano and Andy Palmer. A key take-away from the conference for me was probably Marcano’s idea of “saving the game”, building systems so that you can easily store a snapshot of the current state and go back to that later. I can see how this can significantly aid in all kinds of testing, including acceptance and exploratory, so I have to look for ways to implement that in my future projects. Exchanging experiences of how people do acceptance testing was also very valuable and, judging by the comments at the closing session, it looks as if quite a few people went away with some new ideas around it.
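Marcano's "saving the game" idea can be sketched concretely. The example below is an assumption-laden illustration, not anything presented at the conference: `AppState` is a hypothetical stand-in for real system state, snapshotted with a deep copy so a test can rewind to a known point.

```python
import copy

# Illustrative sketch of "saving the game": snapshot application
# state so acceptance or exploratory tests can rewind to it later.
# AppState is a hypothetical stand-in for real system state.

class AppState:
    def __init__(self):
        self.data = {}
        self._snapshots = []

    def snapshot(self):
        self._snapshots.append(copy.deepcopy(self.data))

    def restore(self):
        self.data = self._snapshots.pop()

state = AppState()
state.data["user"] = "alice"
state.snapshot()                 # save the game here
state.data["user"] = "bob"       # exploratory changes
state.restore()                  # rewind to the snapshot
assert state.data["user"] == "alice"
```

In a real system the snapshot might be a database dump or a virtual machine image rather than an in-memory copy, but the testing workflow is the same: explore freely, then rewind.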
