London software testing news UK


Network Testing Labs Rates Mi5 Webgate

Posted in Acceptance testing by testing in London on September 30, 2007

From Dark Reading

Mi5 Networks, the web security gateway company, today announced that its Webgate 009 has earned a World Class Award for Best Anti-Malware Gateway Device from Network Testing Labs. In comparative tests against a combination of Blue Coat Proxy SG8000 and AV2000 appliances, Webgate delivered superior accuracy in detecting web malware and up to 8 times the performance. In addition, Webgate provides infection detection, automated spyware removal, and botnet protection capabilities not available from any other vendor. A complete copy of the report is available at http://www.mi5networks.com/products/literature.htm.

According to Barry Nance of Network Testing Labs, “Webgate 009 is a superior gateway-architecture security solution for thwarting web malware. It’s more accurate, easier to use and more scalable than the SG8000/AV2000 combination of devices. Webgate offers better reports, is better-designed and – the icing on the cake – costs less. We recommend you look closely at MI5’s family of Webgate appliances.”

“Network Testing Labs confirms what we have known all along, proxy-based products are no match for the performance of the proprietary S2 streaming architecture that we have developed in Webgate,” said Doug Camplejohn, founder and CEO of Mi5 Networks. “These test results demonstrate that Webgate provides superior protection against web threats, including botnets, without introducing noticeable latency. As enterprises search for ways to consolidate the number of web security devices in their network, Webgate has the horsepower to support its current as well as future security processing capabilities.”


Making manual testing of software less

Posted in Software testing,testing tool by testing in London on September 29, 2007

From Sys Con

Original Software has released an enhanced version of TestDrive-Assist, offering powerful link- and spell-checking features as well as a markup function that allows annotated comments to be attached to any area of the program.

Checking links is a time-consuming and frustrating proposition, especially for complex websites with constantly changing content, updates and external links. TestDrive-Assist not only checks and flags potential problems, but also creates an annotated list of many different types of link error, from “server not found” and “timed out” to “redirect”, which helps developers pinpoint and fix problems.
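
TestDrive-Assist’s internals are proprietary, but the kind of classification such a checker performs can be pictured with a short sketch. The following Python snippet (illustrative only, standard library, not Original Software’s implementation) sorts a URL into the error categories mentioned above:

```python
# Minimal link-classification sketch covering the error categories
# named above; a real checker also crawls pages and follows site maps.
import socket
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Surface 3xx responses as errors instead of silently following them."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

OPENER = urllib.request.build_opener(NoRedirect)

def classify_link(url, timeout=5):
    """Return a coarse status label for one URL."""
    try:
        OPENER.open(url, timeout=timeout)
        return "ok"
    except urllib.error.HTTPError as err:
        # The server answered; distinguish redirects from hard errors.
        return "redirect" if 300 <= err.code < 400 else f"http {err.code}"
    except socket.timeout:
        return "timed out"
    except urllib.error.URLError as err:
        if isinstance(err.reason, socket.timeout):
            return "timed out"
        return "server not found"   # DNS failure, refused connection, etc.

if __name__ == "__main__":
    print(classify_link("http://example.com/"))
```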

Manual testing involves many steps and inputs from QA staff, and these are rarely captured with a clear record of the problems flagged or the corrections made. TestDrive-Assist now includes a powerful but easy-to-use markup function that records and creates an annotated list of comments and details of the corrective actions. Text, images and formatting changes can appear as markups or highlights, or can be annotated in comment boxes, creating a clear picture not only of the changes made, but of the rationale for these edits and corrections.
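
The release doesn’t describe the underlying data model, but the sort of record such a markup feature captures might look like the hypothetical sketch below (every field name is invented for illustration, not TestDrive-Assist’s schema):

```python
# Hypothetical shape of one markup record; the field names are
# invented for illustration and are not TestDrive-Assist's schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MarkupRecord:
    screen_area: str              # UI region the comment is attached to
    change_type: str              # e.g. "text", "image", "formatting"
    comment: str                  # the tester's annotation
    corrective_action: str = ""   # what was done about the flagged problem
    created: datetime = field(default_factory=datetime.now)

audit_trail = [
    MarkupRecord(
        screen_area="checkout/address form",
        change_type="text",
        comment="Postcode field rejects valid alphanumeric codes",
        corrective_action="Validation rule widened to letters and digits",
    )
]
print(len(audit_trail), "markup(s) recorded")
```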

TestDrive-Assist is available as a dedicated manual testing solution or can be incorporated with Original Software’s TestDrive-Gold solution, allowing the user to benefit from automated regression testing, code-free scripting and self-healing scripts.

Risks to BMUF (Big Modelling Up Front)

Posted in Software testing by testing in London on September 28, 2007

From Dr Dobbs

There are two risks to be concerned about with “big modelling up front (BMUF)” approaches in general. The most obvious one is that overall project cost is driven up with the creation, review, and maintenance of overly detailed models and documents. This is something which the Agile community refers to as “traveling heavy”.

Second, and what isn’t so obvious, is that there are few opportunities for process improvement pertaining to traditional approaches to modelling and documentation. The problem is that the feedback cycle between creating the models up front and validating the models with working software is too long, often months or even years in length. It’s common practice for up-front modellers, particularly business analysts and data analysts, to swoop in at the beginning of a project to do their work, get it accepted, and then move on to the next project. By the time you determine, through system integration testing (SIT) and user acceptance testing (UAT), whether the models actually reflect reality, the modellers are often long gone. Worse yet, so many other people have been involved with the actual construction that it’s incredibly easy for the modellers to blame them for any problems — the models were reviewed and accepted after all, so the problem must be in the implementation. This lack of concrete accountability may explain why the traditional modelling community has been stuck in their BMUF rut for almost 30 years.

There are clearly serious problems with traditional approaches to modelling. However, a fair question is whether traditional approaches address risks which agile approaches do not. The answer is a resounding “no”, to the consternation of many traditionalists. Traditional modelling approaches provide the opportunity to think things through early in the project, thereby reducing the risk of starting off on the wrong track. However, initial agile requirements and architectural envisioning also enable you to do that.

In short, traditional approaches to modelling increase the risk that you’ll build software people don’t want, that you’ll work under a false sense of security which prevents you from questioning what you’re doing, that you’ll make architectural decisions too early, that you’ll overbuild the system, that you’ll be unwilling to deviate from a questionable strategy, that your overall costs will be higher, and that you’ll have few easy opportunities for process improvement. Worse yet, the few risks that traditional modelling approaches do seem to deal with are better addressed through Agile Modeling strategies.

User acceptance test completed for Telecom company

Posted in Acceptance testing by testing in London on September 27, 2007

From PRWeb

Mauritius Telecom, the largest publisher of phone directories in Mauritius, has started live production of nxDSMP, nxPageSmart and nxAdSmart.

User Acceptance Testing was completed last month, clearing the way for final data migration and live use. Mauritius Telecom is now positioned to start benefiting from the enhanced functionality and new media advertising capabilities.

Like those of many publishing companies globally, Mauritius Telecom’s advertising market has undergone changes, and the net-linx suite introduces the flexibility needed to adjust business models and advertising offerings in a more timely fashion, making Mauritius Telecom a more nimble organization.

Dev Hurkoo, General Manager of Teleservices, subsidiary of Mauritius Telecom which produces the directories, says “the new directory system from net-linx is central to our business development strategies and will help us greatly improve our productivity while providing key capabilities to create new business opportunities. net-linx has also constantly shown high commitment and support to Teleservices and we are really pleased with the business relationship”.

Website Performance and Load Testing

Posted in Load testing,Software testing by testing in London on September 26, 2007

From PR-GB

To understand the behaviour of the loaded web application, it is also important for the load testing tool to enable the tester to track the performance characteristics of external components such as operating systems, web servers and databases. This shows how the performance of the application correlates with the performance characteristics of the external components. This kind of analysis allows the tester to pinpoint the root cause of performance bottlenecks fairly easily.
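
These external counters can come from any monitoring agent. As a minimal sketch of this side-by-side tracking, assuming the third-party psutil package is available, the snippet below samples local operating-system counters once a second to a CSV file so they can later be correlated with the application’s transaction timings:

```python
# Sample external-component counters (here, the local OS) during a
# test run, so application metrics can be correlated with system load.
# Requires the third-party psutil package; counters are illustrative.
import csv
import time

import psutil

with open("system_counters.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["epoch_seconds", "cpu_percent", "disk_write_bytes"])
    for _ in range(60):                       # one sample per second for a minute
        cpu = psutil.cpu_percent(interval=1)  # % processor time
        disk = psutil.disk_io_counters().write_bytes
        writer.writerow([time.time(), cpu, disk])
```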

During test execution the tester should be able to view real-time graphs of performance metrics such as transaction response time, HTTP responses per second grouped by HTTP code (e.g. 200, 404, 500), passed transactions per second, failed transactions per second, total transactions per second, hits per second and pages downloaded per second. The tester should also be able to simultaneously view the performance characteristics of the external components described above: for an operating system this could be something like the % processor time; for a database it could be the number of writes per second. At the end of the test, the tester would typically be able to view and save this data as a report for further analysis.
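
No commercial tool’s engine is shown here, but the core bookkeeping behind these metrics is simple. The sketch below (standard-library Python; the target URL, request count and worker count are placeholders) drives one URL with concurrent workers and tallies several of the metrics just listed:

```python
# Minimal load-generation sketch: hit one URL with N workers and
# tally responses by HTTP code, pass/fail rates and latency.
# Illustrative only; real tools add scripting, ramp-up, think time, etc.
import time
import urllib.error
import urllib.request
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # assumed test target
REQUESTS = 200
WORKERS = 20

def hit(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            code = resp.status
    except urllib.error.HTTPError as err:
        code = err.code
    except Exception:
        code = "error"           # DNS failure, refused connection, timeout
    return code, time.perf_counter() - start

began = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))
elapsed = time.perf_counter() - began

codes = Counter(code for code, _ in results)
latencies = sorted(t for _, t in results)
passed = codes.get(200, 0)

print("responses by HTTP code:", dict(codes))
print(f"total transactions/sec: {REQUESTS / elapsed:.1f}")
print(f"passed/sec: {passed / elapsed:.1f}, failed/sec: {(REQUESTS - passed) / elapsed:.1f}")
print(f"median response time: {latencies[len(latencies) // 2] * 1000:.0f} ms")
```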

Load and performance testing allow you to simulate the behaviour of your application under a typical production environment. This will allow you to plan your hardware deployment strategy effectively and ensure that your application will deliver the expected performance characteristics. Rolling out a web application without testing its performance characteristics under expected production loads would resemble crossing a road blindfolded. Load testing is an essential part of the development cycle of a web application and should never be overlooked.

Addressing the problem of testing a complex environment

Posted in Acceptance testing by testing in London on September 25, 2007

From Java Sys Con

Software development organizations are also reaping the benefits of virtualization, leveraging virtual lab environments to reduce test equipment costs, slash test cycle times, and increase the quality of the applications and systems they deliver.

The highly dynamic nature of software development often requires operation on multiple technology platforms, operating systems, Web and application servers, and databases. Add to this the many different builds, patches, or regional versions that are delivered by development and you get an idea of the immense challenge test teams face providing adequate coverage.

This article explores new ways of integrating virtual environments seamlessly into the software testing process, and demonstrates how modern test management technologies can align with them to enable a quantum leap in a test team’s ability to test more efficiently, across a wider range of environments, and with greater coverage of critical requirements, ultimately improving software quality across the board.


SAP and IBM Rational Functional Tester

Posted in testing tool by testing in London on September 24, 2007

From Verivox

IBM has announced new enhancements to its software portfolio to help customers deliver higher quality, more secure applications and to help them manage the requirements and change associated with application deployments. The new capabilities make it easier for IBM and SAP customers to modify, test and deploy business applications before they go live.

As organizations rapidly evolve their businesses, they have to deploy applications quickly. IBM is helping customers manage the software lifecycle for both bespoke and packaged applications. The new features provide support for customers that are using SAP applications within the context of a broader IT investment.

IBM Rational Functional Tester has achieved Certified for SAP NetWeaver® status, making it easier for IBM and SAP customers to test a myriad of business applications before they go live. The SAP® Integration and Certification Center has certified that IBM Rational Functional Tester properly integrates with the SAP NetWeaver platform to exchange critical data with instances of the SAP Business Suite family of applications. The solution was tested with the BC-TEST-GUI 6.40 functional test tool for Windows integration scenario. The certified integration of IBM Rational Functional Tester gives teams the ability to automatically test the quality of implementations of SAP applications at customer sites – helping to reduce the time and cost of deployment. The new integration allows customers using SAP solutions to better support high-quality business implementations by enabling the creation, execution and analysis of regression testing to validate the quality of customization of SAP solutions.

IBM is also announcing enhancements to existing products to help SAP customers accelerate testing and deployment of SAP applications and updates. Among them, IBM has added an array of new security vulnerability components to IBM Rational AppScan for use with SAP NetWeaver, improving business integrity by helping to ensure that SAP applications are free of security vulnerabilities. The automated security testing detects and helps remediate security issues before and after SAP applications go live.

Testing SQL server 2000 clustering

Posted in Acceptance testing by testing in London on September 23, 2007

From Tech Republic

SQL Server 2000 is a robust and complicated product, especially when you are using it in a clustered setup. Properly deploying and managing it requires a reliable testing environment, but that can be costly. An economical alternative is to build a clustered SQL 2000 testing environment using VMware.

I’m going to walk you through the process of installing SQL Server in a VMware configuration. I’ll begin by explaining how SQL Server works with clustering. Then, I’ll give you a tutorial on the installation process.

Please read this related article on setting up a testing and clustering environment with VMware: “Creating virtual machines for testing systems in VMware”.

Designing a SQL Server cluster

When you design a SQL Server cluster, you can choose one of the following configurations:

  • Active/active configuration
  • Active/passive configuration
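
Whichever configuration you choose, a test harness needs a way to confirm which physical node is serving the instance before and after a failover. One minimal sketch, assuming Python with the third-party pyodbc package and a SQL Server ODBC driver (server name and driver are placeholders for your environment), uses the documented SERVERPROPERTY function:

```python
# Quick check of which physical node currently owns a clustered
# SQL Server instance. Run it before and after a failover to watch
# the instance move between nodes. Connection details are placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=VIRTUALSQL;"          # the cluster's virtual server name
    "Trusted_Connection=yes;"
)

with pyodbc.connect(CONN_STR, timeout=5) as conn:
    row = conn.cursor().execute(
        "SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS'), "
        "       SERVERPROPERTY('IsClustered')"
    ).fetchone()
    print(f"active node: {row[0]}, clustered: {bool(row[1])}")
```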

Traders test the limits of information technology

Posted in Acceptance testing by testing in London on September 22, 2007

From Financial News Online

Algorithmic trading (aka program or black-box trading) is a well-established technique on the buy-side and accounts for about half of electronic trades in the mature stock markets.

It involves the routing of orders from fund managers’ trading blotters to specific computer-based algorithms that automatically manage the execution of these trades based on criteria such as timing, price or size of the order.

The first generation of trading algorithms, including volume-weighted average price and implementation shortfall algorithms, has been around for about three years but has come in for criticism this year for contributing to market volatility in periods of frenzied trading activity.
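
For readers meeting the first of these for the first time: VWAP is total traded value divided by total traded volume, and a VWAP algorithm slices a parent order so that its executions track that benchmark. A minimal illustration of the benchmark calculation itself (not a trading algorithm):

```python
# Volume-weighted average price over a series of (price, volume) trades.
def vwap(trades):
    """VWAP = sum(price * volume) / sum(volume)."""
    total_value = sum(price * volume for price, volume in trades)
    total_volume = sum(volume for _, volume in trades)
    return total_value / total_volume

# e.g. three trades during the measurement window
print(vwap([(101.0, 500), (100.5, 1200), (100.8, 300)]))  # ~100.67
```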

Algorithmic trading was blamed for escalating volatility spikes last month as the world’s leading equity markets reacted to sub-prime mortgage losses in the US.

Event-driven algorithms are marketed as the next generation of smart algorithms, differing from the established tools in that they adapt in real time to market changes. They emulate sell-side traders by reacting immediately to the market and trading opportunistically to minimize execution costs and maximize returns, their vendors claim.

Customers can define the actions of the algorithms by setting parameters that optimize their performance while ensuring the underlying funds are not exposed to undue risk in times of extreme volatility.

Ary Khatchikian, chief technology officer of Portware, a supplier of automated portfolio trading software, said: “Some quants say they can handle it all electronically but you never know. That’s like saying there are no bugs in software.”

One benefit of a sophisticated algorithmic trading platform is the ability to back-test market data to see what went wrong and try different strategies that might work better in similar conditions.

Martins said: “Back testing is key. Funds can capture the volatile market data of early August and back test it in new strategies to see how they would react.”
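
As a toy illustration of what such back-testing means (invented data and rule, not any vendor’s engine), the sketch below replays a captured price series through a naive moving-average rule and reports the hypothetical final value; real engines also model transaction costs, slippage and order-book depth:

```python
# Toy back-test: replay a captured price series through a naive
# moving-average rule and report the strategy's hypothetical outcome.
def backtest(prices, window=5):
    cash, position = 1000.0, 0.0
    for i in range(window, len(prices)):
        avg = sum(prices[i - window:i]) / window
        price = prices[i]
        if price > avg and position == 0:        # momentum entry
            position, cash = cash / price, 0.0
        elif price < avg and position > 0:       # exit on weakness
            cash, position = position * price, 0.0
    return cash + position * prices[-1]

# e.g. replay a volatile captured series against two window sizes
captured = [100, 99, 97, 96, 98, 101, 104, 103, 105, 102, 99, 101, 106]
for w in (3, 5):
    print(f"window={w}: final value {backtest(captured, w):.2f}")
```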

But Rosen said predicting the future based on the past is not reliable. He said: “Algorithms are not prophets. This is not a critique of algorithms, it is a critique of the naivete of the market. They are ascribing powers of prophecy to algorithms, giving them an aura of power they were not built to have.”

Algorithmic trading is here to stay, despite the recent scares. The ability of vendors to tweak and improve their products should make quant traders feel better about using them.


Exploratory testing can uncover critical issues that scripted routines miss

Posted in Software testing by testing in London on September 21, 2007

From Redmond Developer

The testing team would open with three straight days of exploratory testing, find all the bugs we could as rapidly as possible, then start scripted testing while we waited for the new build. Once the new build arrived, we would execute the scripts one more time and pray everything passed.

The team had never tested this way before, but they were eager to try it. They were skilled business users and several of them had complained about the constraints that scripted testing put on them. They would rather “just poke around in the application” than follow written scripts like witless rats in a maze. Unfortunately, our business partners required written testing records, and I knew that “three days poking around” wouldn’t satisfy their compliance needs.

Three days into exploratory testing, we had already found several bugs that never would have emerged using the planned scripts, plus we found a few more bugs that would have caused the scripts to fail. The development team set about fixing them and a new build was delivered on day five. Three days later, the scripted testing was successfully completed, and we were authorised to move the application into production.

Everyone was satisfied with the end result, and our test team credited exploratory testing for making the production release date.

How did exploratory testing save my project? By removing unnecessary constraints on the team, it allowed testers to follow their well-honed instincts to find the most critical problems first. The approach reflects the fact that effective testing is more about investigation than it is about simple, scripted procedure.

Removing the constraints of scripted testing had another effect as well — one I’d heard about but had never experienced before: My testers felt more energized and engaged in testing. They enjoyed their work more than ever before, and the resulting energy boost propelled them through testing at great speed.

Exploratory Testing Benefits:

  • Reduced need for detailed test planning
  • Increased emphasis on application quality
  • More effective use of knowledgeable testers
  • Reduced fatigue and boredom of testers
  • Increased agility in testing as understanding of the problem space evolved
