
Monday, 13 December 2010

TWIST Podcast with Matt Heusser

A few weeks ago I did a 'This Week in Software Testing' (TWIST) interview with Matt Heusser. We had a great chat about the wider context of software testing: the influence of manufacturing thinking, the impact of standards such as CMMi and ISO, organisations such as the ISTQB, and so on.

http://www.softwaretestpro.com/Item/5022/TWiST-23---With-Mark-Crowther/

The site requires a free sign-up to get the podcast, but with this and the other podcasts it's well worth it! Have a listen and share your thoughts here or on the STP website.

Mark.

Wednesday, 8 December 2010

Skills Matter - Experience Report on Specification by Example

Thanks to those who were able to attend last night and listen to my Experience Report on Specification by Example (http://manning.com/adzic/).

I understand the video will be posted to the URL below in the next few days:
http://skillsmatter.com/podcast/agile-testing/specification-by-example-an-experience-report

I've asked for the slides and paper to be posted on the BJSS website and will send the URL when they're published. On a side note, and as I was asked: the BJSS Enterprise Agile book I handed out copies of IS available as a PDF at http://www.bjss.com/BJSS%20Approach (the link is at the BOTTOM of the page).

Finally, some good questions came up and I thought I'd just touch on some of them again to give slightly clearer, more complete answers.

Thanks again, it was good fun!

....................

In Concordion we have an orange 'to do' status; I was asked where that had come from.

I had understood it was a patch from the internet / Concordion community, but asking around today it appears one of the test team implemented it. I've mailed the heads of development and test to ask if we can publish this tweak somewhere (GitHub perhaps) so it can be available to the testing community. I'll let you know how this goes.

One Example had 'THEN: Function returns Historic Rates and Projection correctly'

The question here was how we are defining 'correctly'. Checking today, I'm told that correctness for this Example is defined in an external customer document that describes the business processes in detail. Comments referencing it are in the code of the test, where an example of what 'correct' looks like is also provided. 'Correct', 'As Expected' and other phrases like these are things to watch out for!
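
To illustrate, here's a minimal, hypothetical Ruby sketch of what pinning 'correct' down to concrete figures can look like in a test. The function name and all the values are invented, standing in for the figures in the customer's document:

  require 'minitest/autorun'

  class HistoricRatesTest < Minitest::Test
    # 'Correct' is defined by concrete figures from the customer's
    # business-process document (referenced in comments like this one),
    # not by a vague 'as expected'. All values below are invented.
    EXPECTED_HISTORIC   = [1.21, 1.19, 1.23]
    EXPECTED_PROJECTION = 1.25

    def test_returns_historic_rates_and_projection_correctly
      historic, projection = rates_function
      assert_equal EXPECTED_HISTORIC, historic
      assert_in_delta EXPECTED_PROJECTION, projection, 0.001
    end

    # Stub for the real function under test, so the sketch runs standalone.
    def rates_function
      [[1.21, 1.19, 1.23], 1.25]
    end
  end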

I mentioned that I'm expecting we won't run all the Illustrative Examples every time; if so, how can we make sure they haven't become deprecated?

Currently, tests against a new build take 13 minutes to complete and we have about 2 or 3 new builds a day. The next work package is twice as big as the current one, so rough maths suggests the full run could take around 40 minutes once its tests are added. Doing that three times a day could get painful, and there could be up to 5 work packages!

As a minimum we'll always run the Key Examples. I suggest we look for Illustrative Examples (regression checks by that point) that have never failed or raised bugs and stop running them routinely. Then at some agreed cadence (every third build, overnight, once a week at the weekend, etc.) we'll run the full suite again. It'll be a judgement call based on how much change has happened, whether we're approaching a drop date, and so on, but I see the answer being as simple as this.
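
As a rough Ruby sketch of that selection rule (the Example names and the ever_failed flag are hypothetical, not our actual framework):

  Example = Struct.new(:name, :key, :ever_failed)

  # Key Examples always run; Illustrative Examples run in a fast build
  # only if they have failed or raised bugs before. The full suite runs
  # at the agreed cadence.
  def examples_to_run(examples, full_run = false)
    return examples if full_run
    examples.select { |e| e.key || e.ever_failed }
  end

  suite = [
    Example.new("Key: historic rates and projection", true,  false),
    Example.new("Illustrative: leap-year dates",      false, true),
    Example.new("Illustrative: rounding of rates",    false, false)
  ]

  per_build = examples_to_run(suite)        # runs the first two
  weekend   = examples_to_run(suite, true)  # runs all three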

A point was raised about how automated tests might be affected by requirements change, and whether adding Illustrative Examples would increase the admin/maintenance burden.

I re-counted the number of tests today and looked at how many had been run. I had stated there were 600 tests in place; for accuracy, we have 581 automated tests for Key and Illustrative Examples, of which 473 have been run to date. The total number of Examples that have required a re-write due to requirements change after Collaborative Specification is estimated at 8, and the number of automated scripts affected is 2. We'll see what the final figures are when the work package is complete, but this is insanely low compared to traditional approaches.

Mark.

Friday, 19 November 2010

Burndown - are we tracking the right things?

I saw a tweet this morning by Mohinder (@mpkhosla) that pointed to an article that made my blood boil.



http://www.infoq.com/news/2010/09/Sprint_Burndown

I must have missed the tweet, blog post or wherever else it was decided that Burndown should be tracked in hours. Why do I think that idea is influenced by traditional project management? I've always tracked Burndown by tasks. Tracking Burndown by hours is pointless: it tells us nothing about the actual delivery of usable functionality within a given sprint. How do we deliver usable functionality? One bit at a time. So we care about the delivered bits, those bits being delivered by completing tasks, not by working a number of hours.

Hours are just a sizing tool to see how many bits can be done in a sprint of a certain length, say 2 weeks. That would give us roughly 80 hours per person, or 64 hours if you want to plan at 80% efficiency; work it as you will. But if I log 80 or 64 hours, how much work have I delivered? If you ask me in those terms you'll really want other details, such as how many test cases I managed to prepare, automate or execute. As that's what we actually care about, let's track that. Whether I'll be here this week, ill or on holiday is a project-management-type activity, an admin task handled away from the Burndown, so don't even think about that aspect in relation to your Burndown.

When planning testing tasks I have the team estimate the testing effort required to test a given item. We estimate in hours. When possible (when the Client ‘gets’ it) I add our test estimation to the task card next to everyone else in the development team*.

On the Burndown's horizontal (x) axis we plot the number of days in the Sprint; on the vertical (y) axis we plot the number of tasks we're looking to complete. We know how many tasks fit in the given Sprint days because we can divide the total hours in the Sprint by the hours we've estimated per task. Math is great.
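
Here's that sizing maths as a quick Ruby sketch; the sprint figures are the ones from this post, and the 4-hour task estimate is purely illustrative:

  sprint_days        = 10                       # a 2-week sprint
  hours_per_person   = sprint_days * 8          # roughly 80 hours
  usable_hours       = hours_per_person * 0.8   # 64 hours at 80% efficiency
  est_hours_per_task = 4.0                      # hypothetical estimate

  tasks_in_sprint = (usable_hours / est_hours_per_task).floor  # 16 tasks

  # Ideal burndown line: tasks remaining at the end of each day
  ideal = (0..sprint_days).map do |day|
    (tasks_in_sprint * (1 - day / sprint_days.to_f)).round(1)
  end
  puts ideal.inspect   # [16.0, 14.4, 12.8, ... 0.0]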

Now each day we can agree what tasks to work on and deliver them... or not. We track tasks completed and items delivered, not "I came into the office and did some work, therefore that should automagically represent progress" nonsense.

I'm sure you've seen a Burndown, but just for context I mean one like this, which I create in Excel (if I'm needed to create one, that is).

[Image: example Burndown chart created in Excel]

* The Development Team is not just developers; it's everyone involved, such as: Client, Business Analysts, Technical Authors, Developers, Testers, Build, Operations, Support. All are involved in the development of working software that solves, and continues to solve, the Client's (customer's) business problems. Therefore they all have a say in how long their tasks will take, the tasks that when completed get a bit of working functionality done-done(done{done}).

Sunday, 17 October 2010

Linux Intro - a rough and ready guide

It may come as a surprise, heck perhaps even a shock, but I confess to being a complete Microsoft addict. I decided to crack my shell (turn over a new leaf, etc.) and get a bit more into Linux. My work and home PCs now sport Ubuntu, and to get to know it I spent the weekend learning-some-stuff.

For anyone reading who has been thinking of having a look at Linux, here's my Rough and Ready Guide to Cracking your Linux Shell! (ahh... witty tech references ;)

ENJOY!
---------------------------------

The instructions assume you're on a Windows PC.

1) Get Linux
The first thing I did was install Ubuntu (http://www.ubuntu.com/desktop), a free Linux-based operating system. After installation your PC will have a dual-boot menu where you pick Windows or Ubuntu.

2) Learn what Linux is

With that installed I spent a few hours going through this Linux tutorial via a terminal window (DOS box) in Ubuntu.
http://info.ee.surrey.ac.uk/Teaching/Unix/

If you've used DOS commands in a terminal window before you'll get what this tutorial is going on about.

3) Do another tutorial

Similar to the one above but from another perspective; the repetition helps the learning. Skip the first few lessons, review lesson 4 for context, then bounce straight to Lesson 5.
http://www.linux.org/lessons/beginner/l4/lesson4c.html

4) Learn Some Bash Scripting

In summary, take the above basic commands and rock onto something a bit more useful.
http://www.linuxconfig.org/Bash_scripting_Tutorial

Let me know how you get on!

Mark.

Sunday, 26 September 2010

Free Testing Workshop, Málaga - 7th-9th Oct.

I'll be in southern Spain between 7th and 9th of October visiting a client in the Málaga area. It would be great to have a meet up with some testers and do a few hours workshop on something of interest.

If you're in the area or nearby and can get to Málaga, or you're a team/group no more than an hour away, give me a shout.

You can set the topic and we'll split the time 50/50 between tuition and hands-on practice. Afterward we can co-author an experience report for the test community.

Here are some ideas of what we could go through in 2-3 hours.

[Exploratory Testing]
* What it is and what it's not
* Test Charters, Diaries and Sessions
* Test design techniques, Heuristics and Memes
* Reporting; Bugs and Progress
* Deploying Exploratory testers in development teams
* How to use it on its own and in combination with other approaches.
-- I'll leave you either copies of template documents or software to use

[Live Documentation / Active Specification]
* Why documents are dead, literally!
* Live documentation; what is it and why you should use it
* Active Specification; what that is and how it's the link to automation
* FitNesse and Concordion; working example frameworks using Ruby and Java
-- I'll leave you the example FitNesse and Concordion frameworks

[Selenium (Ruby) Automation]
* What are the Selenium tools and why we use each
* Getting set up with IDE and RC^
* Using the IDE for assisted testing and rapid prototyping
* Converting IDE tests to Ruby for use with RC, to test across multiple browsers
-- I'll leave you with a working Selenium-Ruby framework and some sample scripts (a minimal RC example follows below)

^ I don't think we'll have time to get Grid installed and running but I'll leave a workbook of instructions I have so you can follow up.
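
To give a flavour of the IDE-to-Ruby conversion, here's a minimal sketch using the selenium-client gem against a local RC server; the site URL and element locators are purely illustrative:

  require "rubygems"
  require "selenium/client"

  # Connect to a Selenium RC server running locally on the default port
  browser = Selenium::Client::Driver.new(
    :host    => "localhost",
    :port    => 4444,
    :browser => "*firefox",     # swap for "*iexplore", "*googlechrome", etc.
    :url     => "http://example.com",
    :timeout_in_second => 60)

  browser.start_new_browser_session
  browser.open "/"                          # illustrative page and locators
  browser.type "id=search", "selenium"
  browser.click "id=go", :wait_for => :page
  puts browser.title
  browser.close_current_browser_session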

After we discuss each we'll get hands on and do some testing activities. You should end the session being 'able', not just having theory!

Contact me!
Email: mark@cyreath.co.uk
Linked-in: http://uk.linkedin.com/in/markcrowther/
Twitter: http://twitter.com/MarkCTest

Mark.

Saturday, 25 September 2010

One Measure to Rule them all

Measures and Metrics are back in discussion again in the Mark camp of testing. I recently met with Mohinder Khosla around St Paul's after we'd exchanged emails on the topic and shared some material with each other.

I recently wrote a Test Approach where I included five of my favourite measures:

Test Preparation
• Number of Acceptance Criteria v. Number of Test Cases per functional area (Bar Chart)
• Number of Test Cases Planned v. Written & Ready for Execution (Burndown)

Test Execution and Progress
• Number of Tests Cases Executed v. Test Cases Planned per functional area (Burndown)
• Number of Test Cases Passed, Failed and Blocked (Line Chart)

Bug Analysis
• Total Number of Bugs Raised and Closed per period by Severity (Line Chart)

What struck me as I was writing the approach was that we really need to know one thing:

“The total number of Acceptance Criteria moving into a pass state – this is PROGRESS”

Sure, we need ways to assess the backlog, complexity, test execution progress, etc., but as an ATDD project what we want to know is which acceptance criteria have gone green.
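
As a throwaway Ruby sketch of that one measure (the criteria and their statuses here are invented):

  criteria = {
    "AC-01 Log-in with valid credentials"   => :pass,
    "AC-02 Lock account after failed tries" => :fail,
    "AC-03 Password reset by email"         => :todo
  }

  passed = criteria.values.count(:pass)
  puts format("Progress: %d of %d acceptance criteria passing (%.0f%%)",
              passed, criteria.size, 100.0 * passed / criteria.size)
  # => Progress: 1 of 3 acceptance criteria passing (33%)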

Mohinder is working on a paper that includes a number of sources and views; I look forward to seeing it soon.

Friday, 17 September 2010

Test Case Workshops

Hidden deep within Requirements and Functional Specification documents are the test cases we're looking for. Like any hidden treasure, they need to be discovered through effective exploration and deduction about the nature of the environment they sit in, who put them there, and why.

When we (testers) review these documents, or other sources that provide this information, we have one question at the front of our minds: 'what test cases does this need?' We need to be thinking about how we'd know that a requirement or acceptance criterion has been delivered via the software functionality that's been developed.

How are we going to quickly and efficiently find those needed test cases? The easiest way is to brainstorm them in a workshop with testers.

Get one or two other testers together with a collection of requirements or acceptance criteria you want to find the test cases for. Assign one to write on the whiteboard, another to write up the test cases, and another to just freely participate and shout out ideas. Everyone is equal, all ideas are to be explored, and there are no negative comments or limiting of creativity.

As each item is talked through, map it up on the whiteboard; use flow diagrams, spray diagrams, UML-type layouts, mind mapping, whatever allows a free-form diagram of linked ideas to be created. Each idea is a possible test case. Let's see what we might draw up and talk about if the requirement was for a user to log in to a system.

EXAMPLE
The user logs in, so they must be able to log out. If they log in there must be some log-in information, such as a username and password. Q) What's the structure of the username and password? Where are they stored? What characters are and aren't allowed? How many? Where are unsuccessful log-ins recorded? How many failed log-ins are allowed? Etc., etc.

After more questioning and reading around the requirements, Functional Spec and acceptance criteria, a diagram might look something like this:

[Diagram: mind map of log-in test ideas]

Test Cases
From here we can start to capture test cases based on what we’re thinking. In the format I prefer these days they might include:

  GIVEN a user has an active account
  WHEN they log-in with a valid Username and Password
  THEN they are authenticated and an active session is started

Some might not be so tidy as they require multiple outcomes such as:

  GIVEN a user has an active account and the failed log-in count is 0
  WHEN they attempt to log-in with an invalid Username and valid Password
  THEN they are notified of a failed attempt
  AND the log-in failure is logged
  AND the failed log-in count is incremented to 1

This is fine, it’s still one test condition it just has multiple outcomes which is more like reality in complex software.

Have a go at writing some more test cases against the above diagram. Have we missed any other test conditions?

We’ll leave the idea of Concrete Examples, Analysis Techniques, etc. for another post.

For now reflect on the complexity of the above Log-In example, even if it seemed simple at first. Think about how much easier it is to brainstorm the test conditions with your testing colleagues and get the test cases written up in the same session.

Take a break, share the test cases with others (Dev, PM, etc.) and get their take on them. We also raised a lot of questions, so share those out too and get clarification. This approach is much more efficient than one person sitting at their desk alone; it means knowledge is shared on equal terms and progress comes readily.

Tuesday, 17 August 2010

Bugs, why are we raising them?

It's troubled me for a while that we're not clear on why we're raising bugs. Sure, 'to get them fixed', we say, but there's a whole bunch of ulterior motives.

One is to show how good we are as testers; we need to demonstrate how clever and effective our testing is, right? I've pondered before whether a test case that doesn't find bugs during the development/testing phase was a useful test case. That's another discussion for another post.

A second reason to raise bugs is to beat developers over the head with them. Isn't that how we often say it too, half in jest, when we really mean it's to show the developers they're not as good as they thought, and to make sure they see we're as good at what we do as they are at what they do? Pathetic point-scoring nonsense.

When we raise a bug we're identifying a task for a developer to work on. They work on it, the quality issue is resolved and proven to be so by our re-testing; rinse and repeat, and we all help deliver working software. That's why we raise bugs: to record tasks that need to be done and to help deliver working software.

Now, we can do other clever things, such as creating bug taxonomies and carrying out root cause analysis to address systemic quality issues, improve productivity and reduce costs. However, in my experience that's never done; memories are too short to care. Find the bugs, fix them, and deliver working software in collaboration with your project colleagues, not in confrontation with them.

Mark

Monday, 5 July 2010

Specification by Example and Agile Acceptance Testing

I attended a great workshop with Gojko Adzic the other week covering 'live documentation'. The workshop was called "Specification by Example and Agile Acceptance Testing"; check it out on his site (http://gojko.net/).

At the testing consultancy where I work we already routinely use a couple of tools for live documentation as part of the 'Behaviour Driven Testing' (BDT) approach I developed around BDD.

Live documentation tools of this kind include Cucumber, Concordion, FitNesse, SpecFlow, etc. (links below) and provide a way to help ensure the documentation stays current. They get away from the 30-page Word document that's out of date and a nightmare to keep under version control.
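
For anyone who hasn't seen these tools, here's a minimal, hypothetical Cucumber sketch: the feature text sits in a plain-text file and the Ruby step definitions underneath make it executable. The scenario and all the names are invented:

  # features/login.feature would contain:
  #
  #   Feature: Log-in
  #     Scenario: Valid credentials start a session
  #       Given a user has an active account
  #       When they log in with a valid username and password
  #       Then an active session is started

  # features/step_definitions/login_steps.rb
  Given /^a user has an active account$/ do
    @account = { :username => "mark", :password => "secret", :active => true }
  end

  When /^they log in with a valid username and password$/ do
    # Stand-in for the real call to the system under test
    @session_started = @account[:active]
  end

  Then /^an active session is started$/ do
    raise "expected an active session" unless @session_started
  end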

I'm a big fan, but I wonder how widely used these tools are compared to traditional approaches. So, my questions:

* Are you using any of these?
* If not, why not? If so, what drove you to use them?
* If you are using them have you taken the next step and used them to drive automation?

Mark.

http://cukes.info/
http://www.concordion.org/
http://fitnesse.org/
http://www.specflow.org/

Sunday, 14 February 2010

Update for 2010 or Oh, there you are.

It's been a few months since my last blog post because I've been seriously distracted by 'other things'.

One of those things was the Selenium-Ruby book and its associated framework and code examples. The process of writing it and deciding which content to include has been fraught with trip wires. It's been the subject of a number of conversations with my employer and with those in the industry who've helped me understand Selenium and Ruby. The main, and valid, concern of these various groups was whether I was going mercenary on them and heading off to claim the glory with no particular forethought for their claim to, or contribution over, the material. That was a slightly unexpected response which derailed my effort and motivation somewhat, to say the least. The writing progresses, but has changed direction more than once.

Including Ruby has been fun and interesting, and the great thing is it has pushed the boundaries of the 'S**t I don't know' and the 'don't know I don't know'. Remember that wiring Ruby in there is about getting away from the IDE as fast as possible; not that the IDE is crap, just that it's limiting. That leads to another interesting point. The excitement around my employer's (NMQA) growing number of engagements using Selenium and Ruby has brought about another change: they're about to announce some super clever things around Selenium, namely Vienna Studio and VSL. Watch the NMQA website. I'm testing it now and it'll be in Beta within weeks; I expect the Firefox Selenium IDE to become a rarely visited friend. When you see the new NMQA website, you'll see what Vienna Studio and VSL are all about.

Meanwhile, back on home turf, I've been studiously avoiding all forms of blogging and forum posting, just tweeting occasionally. Why have I not been forum posting? Two reasons: a) the same old basic testing questions coming up again and again from different people; b) pseudo-philosophical threads of conversation that roll on for weeks on end and could be finished in two paragraphs. I've little to no interest in b) because I don't have the time for, or see any real practical value in, a three-week conversation that amounts to a definition of two words. It's been going on for months, in more than one forum, by more than one person, and it must have some of the newcomers to the profession wondering what we're on. While we're still not able to resolve the issues of a), you'll have to excuse me while I ignore the b) threads. This isn't a dig at individuals; it's a dig at a) not being resolved and at where we focus our energies to get it resolved.

Why is a) so annoying to me? You wouldn't find a Surgeon, Aerospace Engineer or Solicitor asking basic questions, and that, as you probably know, is exactly the level where I think our profession should be. There have been attempts before to resolve a), and I wish I had the time to start some big-idea-to-fix-it, but I doubt I do and I doubt we can en masse. In fact, I know I don't / we can't, but I do have a cunning plan. A cunning plan that will perhaps allow me to stop feeling massively demotivated at the thought of trawling websites and seeing the same old tired questions: a personal mentoring programme, with a record of study behind it, evidenced by published papers. Details to follow.