
Monday, 13 December 2010

TWIST Podcast with Matt Heusser

A few weeks ago I did a 'This Week in Software Testing' (TWIST) interview with Matt Heusser. We had a great chat about the wider context of software testing: the influence of manufacturing thinking, the impact of standards such as CMMi and ISO, organisations such as ISTQB, and so on.

http://www.softwaretestpro.com/Item/5022/TWiST-23---With-Mark-Crowther/

The site requires a free sign-up to get the podcast, but with this and the other podcasts on offer it's well worth it! Have a listen and share your thoughts here or on the STP website.

Mark.

Wednesday, 8 December 2010

Skills Matter - Experience Report on Specification by Example

Thanks to those who were able to attend last night and listen to my Experience Report on Specification by Example (http://manning.com/adzic/).

I understand the video will be posted to the URL below in the next few days:
http://skillsmatter.com/podcast/agile-testing/specification-by-example-an-experience-report

I've asked for the slides and paper to be posted on the BJSS website and will share the URL when they're published. On a side note, as I was asked on the night: the BJSS Enterprise Agile book I handed out copies of is available as a PDF at http://www.bjss.com/BJSS%20Approach (the download link is at the bottom of the page).

Finally, some good questions came up and I thought I'd just touch on some of them again to give slightly clearer, more complete answers.

Thanks again, it was good fun!

....................

In Concordion we have an orange 'to do' status; I was asked where that had come from.

I understood it to be a patch from the internet / Concordion community, but asking around today it appears one of the test team implemented it. I've mailed the heads of development and test to ask if we can publish this tweak somewhere (GitHub perhaps) so it's available to the testing community. I'll let you know how this goes.

One Example had 'THEN: Function returns Historic Rates and Projection correctly'

The question here was how we were defining 'correctly'. Checking today, I'm told that correctness for this Example is defined in an external customer document that describes the business processes in detail. Comments referencing that document sit in the code of the test, where an example of what 'correct' looks like is also provided. 'Correct', 'as expected' and other phrases like these are things to watch out for!
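To make that concrete, here's a minimal sketch of how an Example can pin 'correct' to an explicit expected value rather than a phrase. It assumes a standard Concordion/JUnit 4 setup; the fixture name, date and rate value are all invented for illustration, not taken from the project:

    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;

    /*
     * Sketch only: names and values are hypothetical. The matching spec
     * (HistoricRates.html) would state the expectation explicitly, e.g.:
     *
     *   The historic rate for
     *   <span concordion:set="#date">2010-11-30</span>
     *   is <span concordion:assertEquals="historicRateFor(#date)">0.8567</span>.
     *
     * so the expected figure lives in the Example itself, with a comment
     * pointing back at the customer document that defines it.
     */
    @RunWith(ConcordionRunner.class)
    public class HistoricRatesTest {

        public String historicRateFor(String date) {
            // Stand-in for the real system-under-test call; the hypothetical
            // expected value 0.8567 would be traced to the external business
            // process document mentioned above.
            return "2010-11-30".equals(date) ? "0.8567" : "unknown";
        }
    }

The point of the shape is that 'correct' becomes a concrete value the test asserts against, not a word a reader has to interpret.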

I mentioned I expect we won't run all the Illustrative Examples every time; if so, how can we make sure they haven't become deprecated?

Currently, tests against a new build take 13 minutes to complete and we have 2 or 3 new builds a day. The next work package is twice as big as the current one, so rough maths suggests the suite could take around 40 minutes once its tests are added (the current 13 minutes, plus roughly twice that again for the new package: 13 + 26 ≈ 40). Doing that three times a day could get painful, and there could be up to 5 work packages!

As a minimum we'll always run the Key Examples. I suggest we look for Illustrative Examples (regression checks by that point) that have never failed or raised bugs and stop running them on every build. Then at some agreed cadence (every third build overnight, once a week at the weekend, etc.) we'll run the full suite again. It'll be a call based on how much change has happened, whether we're approaching a drop date, and so on, but I see the answer being as simple as that. A sketch of how the runs might be split is below.
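One way to split the runs, assuming the fixtures execute under JUnit 4.8+ (Concordion fixtures run as JUnit tests), is JUnit's experimental categories. The category names, test class and methods below are all hypothetical, and in a real project each type would be public and live in its own file:

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Marker interfaces used as JUnit category tags (hypothetical names).
    interface KeyExample {}
    interface IllustrativeExample {}

    // Example test class with tests tagged by category; bodies elided.
    class RatesTests {
        @Test
        @Category(KeyExample.class)
        public void historicRatesAreReturned() { /* ... */ }

        @Test
        @Category(IllustrativeExample.class)
        public void projectionHandlesYearBoundaries() { /* ... */ }
    }

    // Fast suite for every build: Key Examples only.
    @RunWith(Categories.class)
    @IncludeCategory(KeyExample.class)
    @SuiteClasses(RatesTests.class)
    class KeyExamplesSuite {}

    // The full suite (Key plus Illustrative) would simply omit the
    // category filter and run at the agreed cadence, e.g. overnight
    // or at weekends.

Wired into the build server, the filtered suite runs on every build while the unfiltered one runs on the agreed schedule, which is all the 'answer' really amounts to.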

A point about how automated tests might be affected by requirements change, and whether adding Illustrative Examples would increase the admin/maintenance burden

I re-counted the number of tests today and looked at how many had been run. On the night I stated there were 600 tests in place; for accuracy, we have 581 automated tests for Key and Illustrative Examples, of which 473 have been run to date. The total number of Examples that have required a re-write due to requirements change after Collaborative Specification is estimated at 8 so far, and the number of automated scripts affected is 2, which is under 2% and under 0.5% of the suite respectively. We'll see what the final figure is when the work package is complete, but this is insanely low compared to traditional approaches.

Mark.