
Thursday, 11 July 2013

What is Performance Testing?

In simple terms, Performance testing is carried out to ensure that your website or application can do what you expect under a given workload. It is used to discover bottlenecks and to establish a baseline for future Performance testing. Performance testing also helps to evaluate other characteristics such as Scalability and Reliability, and can be used to test Disaster Recovery scenarios.

To help manage how performance testing is delivered, there needs to be a clearly defined set of steps that ensure a consistent approach is taken on each project. This helps provide a framework upon which relevant activities, practices and approaches can be built.




As such, the Performance testing approach takes place on a number of levels and encompasses a range of performance test types. As a minimum, the test types used include Load Testing, Stress Testing and Soak Testing. We should also undertake other forms of Performance testing activity, such as Spike, Configuration and Isolation testing.
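
The test types differ mainly in the shape of the workload applied. As a rough sketch in Ruby (the figures below are invented placeholders, not recommendations), the same test driver can often be reused and simply pointed at a different profile:

    # Hypothetical workload profiles - the numbers are illustrative only.
    TEST_PROFILES = {
      load:   { users: 100, ramp_up_secs: 300, duration_secs: 3_600  },  # expected peak workload
      stress: { users: 500, ramp_up_secs: 300, duration_secs: 3_600  },  # push beyond peak to find the break point
      soak:   { users: 100, ramp_up_secs: 300, duration_secs: 43_200 },  # normal load held for hours to expose leaks
      spike:  { users: 500, ramp_up_secs: 10,  duration_secs: 600    }   # sudden surge, then back off
    }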

Before conducting Performance testing you need to define Performance Objectives for your website or application. These ensure you can recognise when the system performs to your expectations, and they can form part of the business case to support undertaking the testing. Typical attributes that are assessed include Concurrency / Throughput, Server Response and Resource Usage.
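
To make those objectives concrete, here is a minimal load-test sketch in Ruby that measures Server Response under a fixed level of Concurrency. The endpoint, user counts and the 2-second 95th-percentile target are invented for illustration; a real test would use a dedicated load tool and your own objectives.

    require 'net/http'
    require 'uri'

    TARGET_URL        = URI('http://example.com/')   # placeholder endpoint
    CONCURRENT_USERS  = 10
    REQUESTS_PER_USER = 20
    MAX_P95_SECONDS   = 2.0                          # assumed objective, not a standard

    # Each "user" is a thread that times its own requests.
    threads = CONCURRENT_USERS.times.map do
      Thread.new do
        REQUESTS_PER_USER.times.map do
          started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
          Net::HTTP.get_response(TARGET_URL)
          Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
        end
      end
    end

    # Collect every timing and compare against the response objective.
    timings = threads.flat_map(&:value).sort
    average = timings.sum / timings.size
    p95     = timings[(timings.size * 0.95).ceil - 1]

    puts format('requests: %d  avg: %.3fs  95th percentile: %.3fs',
                timings.size, average, p95)
    puts p95 <= MAX_P95_SECONDS ? 'Response objective met' : 'Response objective MISSED'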

Alongside the actual Performance testing of the website or application, there are a number of other activities that can be carried out once Performance testing is complete, including Benchmarking, Configuration and Component Isolation. Performance is not something that can be bolted on at the end of a project. It is an emergent quality, achieved through planning, development and testing from the start of the project to the end.

Thoughts? Send a message!

Mark



Tuesday, 9 July 2013

Defects associated with Failed test cases


When a Test Case or test condition fails, the general practice is to raise a defect.

I’ve worked in some agile teams where no defects were raised; we just showed the developer, who fixed the issue. Great environment, especially if the focus is on ‘shipping working software’, which it should be. More typically though, whatever defect management software or issue tracker you’re using, you’re going to have to raise a defect report in some form.

That’s fine: go ahead and do it, provide all the detail needed to support a Triage process if you run one, and give the developer what they need to fix the issue. But… when you do raise the defect report, make sure you connect/link/assign/associate it with the Test Case that failed!

I’ve seen it so many times: defects are raised and the failed cases get forgotten, and as the project progresses a growing body of failed cases builds up. What should happen, of course, is that when a defect fix comes through you check the defect record and see which case it related to. You then retest both the defect fix against the record and the Test Case, to ensure it now passes.
  • Link failed cases to defects
  • Don’t forget to retest the failed cases when you get a new deployment!
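
As a toy illustration of the linking idea, here is a sketch in Ruby; the IDs and the hash are made up, and in practice the link lives in your test management or defect tracking tool rather than a script.

    # Each failed Test Case is linked to the defect raised against it (IDs are invented).
    failed_case_links = {
      'TC-101' => 'DEF-2001',
      'TC-117' => 'DEF-2001',   # one defect can cover several failed cases
      'TC-230' => 'DEF-2042'
    }

    # A defect fix arrives in the new deployment...
    fixed_defect = 'DEF-2001'

    # ...so find every failed case linked to it and queue it for retest.
    cases_to_retest = failed_case_links.select { |_tc, defect| defect == fixed_defect }.keys

    puts "#{fixed_defect} fixed - retest: #{cases_to_retest.join(', ')}"
    # => DEF-2001 fixed - retest: TC-101, TC-117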

Mark.



Tuesday, 2 July 2013

A core set of regression tests


Software is complex, and it isn’t possible to know at all times the state of every class, method and function, the accuracy of deployment, or the interactions with other systems, to name just a few things. When a piece of software is improved over time, through the release of new and enhanced code, the chance that we really don’t know whether the code is good enough only increases.

That’s why we do regression testing: testing that the code has not regressed, gone backwards, become worse than it was before. Regression testing is often ignored or paid lip service to; do so at your own peril. I do understand why, though: it’s costly. It takes time to execute and people to do it, and regression test packs need maintaining. So a minimum is done, just around the area of change, with a quick look at key functionality.

Therein lies the risk. We take our eyes off the software and get to a point where we really don’t know what the level of quality is. Get that? Without a core set of regression tests, run on every build, we do not KNOW the software has not regressed, broken or become unstable. These days software is there to make money; unstable and broken software doesn’t make money, it costs money.

So how do you make sure you have a good set of regression tests in place? A good heuristic is simply: “What absolutely must work, and can never be allowed to fail?”

That’s your starting point for deciding what needs a regression set around it; then scale up from there. What live issues have been reported recently? Get some regression tests in place. Found some odd, semi-reproducible defects during testing? Get those cases into the Regression test pack. Run the set each and every release, at least once, no excuses.
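
Tagging is one simple way to keep that core set runnable on every release. Here is a sketch assuming RSpec (which this post doesn’t prescribe; any framework with tags or suites will do), with made-up example names and helper methods:

    # spec/checkout_spec.rb - the examples and helper methods are hypothetical.
    RSpec.describe 'Checkout' do
      # "What absolutely must work, can never be allowed to fail?"
      it 'takes payment for a basket', :regression do
        expect(place_order(sample_basket)).to be_successful
      end

      # A recently reported live issue gets a test and goes straight into the core set.
      it 'does not double-charge on payment retry', :regression do
        expect(charges_for(retry_payment(sample_basket)).count).to eq(1)
      end
    end

Running the tagged set on every build (rspec --tag regression) then gives you the core checks, every time.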

You may try to look good by cutting testing time down, but you’re sure to look stupid when the software breaks and you had a regression test you didn’t run. The customer is defiantly going to ask you (see what I did there..?) why your testing is out of control, when that live issue they complained about, the one that got fixed, has now reappeared.


A standing, useful, maintained set of regression tests should be a foundation stone of the testing regime, assuring the quality and stability of your applications.

Thoughts? Send a message!

Mark
