London software testing news UK


Call for Speakers on Software Testing

Posted in Events and improvement by testing in London on May 31, 2008

From QAI Asia

QAI’s Annual International Software Testing Conferences in India are highly popular events that bring together practitioners and thought leaders from the software industry, academia and government to share and exchange experiences, ideas and learning.

The Conference’s real-world approach delivers the latest state of the art and state of the practice in software testing, and the strategies used by leading software organisations to test product software. The Conference will comprise two days of tutorials, two days of conference sessions, a gala dinner, vendor presentations, experience sharing and networking opportunities.

We bring you the Call for Speakers (for tutorials as well as papers) for the Conference.

The Conference will emphasise software testing as an important aspect of the software development life cycle (SDLC). It targets software developers, software testers, project managers, software customers and other testing practitioners.

Who Should Attend:

  • Software Testing Managers
  • Test centre heads and practitioners
  • Software Engineers and Developers
  • Project Managers and team leads

Selling unit testing

Posted in testing tool by testing in London on May 30, 2008

From Java World

For instance, Agitar’s product was phenomenal (and their team outstanding); nevertheless, their superior mousetrap cost a heck of a lot of money. In the end, however, the Agitator produced JUnit tests: the same tests that one could arguably have written for free (yes, I know that in reality many of those tests would never have been written by hand).

In essence, Agitar took a commodity (JUnit, in this case) and made it better. Indeed, they have (or is it had now?) a good customer base (meaning they actually had customers, man). There were people who saw the benefit and had the discipline and honesty to admit they had a problem that the Agitator could solve in a superior manner.

Don’t forget basic economics, though: the cost to produce their mousetrap was significant. There are only so many people who will buy a $500 mousetrap when sitting right next to it, on the same shelf, is the dollar variety (which, by the way, works fine). So too, there are only a few companies likely to spend roughly $50,000 to $100,000 on a tool when any Google search will turn up a link to another tool that costs nothing (i.e. junit.org).

Indeed, there are other smokin’ companies selling superior unit testing tools that are still operating today; however, if you look closely at the big names, they have a diversified portfolio of products, and it wouldn’t surprise me if their economics work out more favourably (but keep in mind the earlier point about profit and cost, and how the difference between the two relates to length of presence in a market).


A testing hierarchy?

Posted in Software testing by testing in London on May 29, 2008

From Javaworld

The next level is populated by performance testers.

Performance testing has a great deal to do with concurrency: achieving good performance is hard and requires intimate knowledge of shared variables, thread locals, monitors, java.util.concurrent.*, and so on. A performance tester or performance engineer may even be responsible for tuning the JVM garbage collector, which is really a profession in its own right.
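The shared-variable hazard this kind of tester deals with is easy to see in a minimal sketch. The class name and counts below are purely illustrative: a plain `count++` is a read-modify-write that can silently lose updates under contention, whereas `AtomicInteger` from `java.util.concurrent` performs the increment atomically, so the final total is deterministic:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    // A plain "int count; count++" shared across threads can lose updates,
    // because the increment is three steps (read, add, write).
    // AtomicInteger performs the whole read-modify-write as one atomic step.
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    safeCount.incrementAndGet(); // atomic increment
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for all workers before reading the total
        }
        // With atomic increments, 4 threads x 100,000 is always exactly 400000.
        System.out.println("final count = " + safeCount.get());
    }
}
```

Swapping the `AtomicInteger` for a plain `int` field usually produces a smaller, nondeterministic total, which is exactly the kind of bug a concurrency-aware performance tester is hired to find.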

At the bottom of the pyramid we have functional (system) testers and software quality assurance people.

Functional testers are expected only to drive GUI-based test tools, so their job amounts to little more than performing a few clicks and comparing the results. QA people carry the smell of “controlling real programmers”, and that is not a good reputation in any case. QA people run, for example, static code analyzers, and real developers then have to fight all the “false positives” that such a “big brother is watching you” procedure can produce.

Moreover, all these QA people who want to tell real developers what’s wrong with their code: have they ever implemented a large application while being pressed for time by the project leader? Or are they only “quality administrators”, able merely to run FindBugs or Checkstyle over a couple of JAR files (or PC-Lint in the C world), producing endless reports nobody is really interested in?

By the way: when I talk about “testers” I mean people who execute tests, not famous people who talk about testing at so many test conferences that you wonder when they find time to do any real testing.

Software testing conference in August

Posted in Events and improvement by testing in London on May 28, 2008

From TAIC PART

Among computer science and software engineering activities, software testing is a perfect candidate for the union of academic and industrial minds. TAIC PART is a unique event that strives to combine the important aspects of a software testing conference, workshop, and retreat.

TAIC PART brings together industrialists and academics in an environment that promotes meaningful collaboration on the challenges of software testing. The conference brings together software developers, end users, and academic researchers who work on both the theory and practice of software testing.

The conference will be held at Cumberland Lodge, Windsor, UK, August 29-31, 2008. Set in the heart of Windsor Great Park, yet only 27 miles from London and a short distance from Heathrow, it is perfectly placed for visitors from home and abroad. Food and domestic standards are very high and the atmosphere is that of a friendly country house.


Test-Driven Development (TDD) and software quality

Posted in Software testing by testing in London on May 27, 2008

From Java World

As any Test-Driven Development practitioner will tell you, Test-Driven Development (TDD) is a design strategy, not a unit-testing technique. Writing unit tests is a means, not an end. The goal is to write better-quality, more reliable and more accurate code.

I recently read a study in IEEE Software (“Does Test-Driven Development Really Improve Software Design Quality?”) that tried to establish measurable benefits of Test-Driven Development for software design quality. As any TDD practitioner will tell you, using TDD properly tends to produce better-quality code: not just code that is better tested, but well-designed code.

This study looked at TDD as an isolated practice, independent of other agile practices. Indeed, TDD isn’t limited to Agile development – you can also do TDD in more traditional development processes. Just replace the “Detailed Design -> Code -> Unit Test” cycle with “Unit Test -> Code -> Refactor”.
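The “Unit Test -> Code -> Refactor” cycle can be sketched in miniature. The class, method and prices below are hypothetical, and plain assertions stand in for JUnit to keep the sketch self-contained; the point is only the ordering of the steps:

```java
public class TddSketch {
    // Step 2 of the cycle: the simplest implementation that makes the
    // already-written test pass. In real TDD this method would not exist
    // until the test below had been written and seen to fail.
    static int total(int unitPrice, int quantity) {
        return unitPrice * quantity;
    }

    public static void main(String[] args) {
        // Step 1: the test, written first. It pins down the behaviour
        // before any production code exists.
        check(total(5, 3) == 15, "5 x 3 should be 15");
        check(total(0, 9) == 0, "zero unit price should give 0");
        // Step 3 (refactor) is not shown: you would now restructure
        // total() freely, re-running these checks as a safety net.
        System.out.println("all tests pass");
    }

    static void check(boolean ok, String message) {
        if (!ok) throw new AssertionError(message);
    }
}
```

In a traditional process the same assertions would be written after detailed design and coding; TDD simply moves them to the front of the cycle.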

Not surprisingly, teams that practised TDD obtained much better code coverage statistics. More interestingly, the study also found that teams practising TDD tended to implement smaller, simpler solutions, with fewer lines of code overall and fewer lines of code per method. Cyclomatic complexity was also lower, a side effect of simpler, more testable code. My own experience, like that of many TDD practitioners, tends to bear these findings out. However, it is hard to come up with objective evidence, since you rarely solve exactly the same coding problem under the same conditions twice.


Testing times for Browsers

Posted in Software testing by testing in London on May 26, 2008

From NY Times

The browser, that porthole onto the broad horizon of the Web, is about to get some fancy new window dressing. Next month, after three years of development and six months of public testing, Mozilla, the insurgent browser developer that rose from the ashes of Netscape, will release Firefox 3.0. It will feature a few tricks that could change the way people organize and find the sites they visit most frequently.

Not to be outdone, Microsoft recently took the wraps off the first public test version of the latest edition of Internet Explorer, which is used by about 75 percent of all computer owners, according to Net Applications, a market share tracking firm. The finished version of Internet Explorer 8 could be released by the end of the year and is expected to have additional features.

Even Apple, which once politely kept its Safari browser within the confines of its own devices, is making a somewhat controversial push to get it onto the computers of people who use Windows PCs.

In other words, the browser war — the skirmish that landed Microsoft in antitrust trouble in the ’90s — is heating up again.

America Online, which acquired Netscape, spun off the nonprofit Mozilla Foundation in 2003. Its Firefox browser soon inspired an open-source movement backed by computer enthusiasts. Early versions of Firefox introduced features like a built-in pop-up blocker to kill ads, and tabbed browsing, which lets users toggle between Web windows.

Firefox now has 170 million users around the world and an 18 percent share of the browser market, according to Net Applications. That is especially impressive given that most of its users have made the active choice to download the software, while Internet Explorer is installed on most PCs at the factory.


Developing talent

Posted in Acceptance testing by testing in London on May 25, 2008

From Business Standard

Software education firm karROX Technologies is partnering with global IT majors such as IBM, Oracle and CompTIA to develop a talent pool for India’s infrastructure management and software services sectors.

“As part of the agreement, karROX will offer training on IBM’s latest software technologies, such as Tivoli, Cognos, WebSphere, Lotus Notes and IBM software testing, across the software and infrastructure management space. The training will be provided through the company’s select IT education centres in India,” karROX Technologies Chief Executive Officer Jitendra Nair said.

KarROX will provide training on IBM’s authorised curriculum and enable IBM’s certification for students aspiring to build careers as software professionals and system engineers, he added.


New test lab for Macworld

Posted in Software testing by testing in London on May 24, 2008

From Macworld

We’re pleased to announce that, as of today, you’ll find all product and performance reviews in one place: our new blog, From the Lab.

Here, you’ll find reviews containing Lab testing and buying advice on Mac-compatible hardware from printers to camcorders to hard drives. Of course, we often run into peculiar issues when testing products, so From the Lab will also serve as an outlet for cooling our engines while keeping you well informed.

Many Macworld readers are either looking to switch from a Windows PC to a Mac, or to upgrade from a dusty old G4 to a shiny Intel machine. To help you choose which Mac is best for you, our Speedmark results will break down, in numbers, the performance of each Apple computer in every area you can imagine. Speedmark results will be published in this space just days after an Apple computer’s release; we get these posted as soon as we possibly can.

To introduce you to the members of the cast—Jim Galbraith is Macworld’s lab director, who devises and oversees our Lab testing. Working under Jim’s direction as hardware testers are contributing lab analyst Jerry Jung and myself. Roman Loyola, senior editor of reviews, will be editing all the hardware reviews that are posted here. And I’ll be the main editor and coordinator of this blog.

With that said, welcome to From the Lab. Take a tour; click around. We’re in the early stages of getting this blog up and running, so feel free to comment with suggestions on improving this section in terms of presentation of the site, our test procedures, and whatever else comes to mind. We promise that we’ll be able to better respond to your questions and comments now that we can see them all in one place.

Testing and SOA

Posted in Software testing by testing in London on May 23, 2008

From Sys Con

The key to an agile SOA is to look into the various change activities that combine in a service life cycle. These change activities include change inception, change elaboration and impact analysis, construction, and finally, transition into production.

There are three important points here. First, you can’t afford to spend time and resources repeating the same manual activities associated with each of these phases (such as environment changes, testing and impact analysis) every time a change is needed. Second, the change processes are increasingly iterative; the days of the waterfall model, starting with requirements and design and ending with testing, are gone. Third, the phases I listed, which are mostly based on RUP (Rational Unified Process), don’t include an isolated SDLC testing phase. In fact, you should never have an explicit, sequential testing or validation phase if you want an agile, quality process that produces quality results.

The more time you spend doing tests on changes made in your underlying systems, the more you are likely to compromise the agility of your SOA, and risk quality and continuity.

Quality needs to be baked in. You don’t test it out of an application. I’m by no means suggesting that you shouldn’t test and validate, but the process of testing and validating against established policies needs to be continuously applied throughout the SDLC process, and information pertaining to these policies must also ubiquitously exist, on-demand, in the underlying environment.

French companies show way in SAP Quality Awards

Posted in Acceptance testing by testing in London on May 22, 2008

From Computer World

French companies led the way in the SAP Quality Awards that were presented today (20 May) at the Sapphire event. Bouygues Construction won the global implementation award. It is building a global reporting tool covering 6,000 users in 60 countries, as part of a major business process transformation project. It combined exemplary methodology with a real collaborative approach, Lembke said.

It made strong use of tools to audit quality, and tracked quality through external audit reviews and a user acceptance test of the full solution. In the first pilots, more than 10% of the final user population was involved in testing.

