
Sunday, 26 September 2010

Free Testing Workshop, Málaga - 7th-9th Oct.

I'll be in southern Spain between the 7th and 9th of October visiting a client in the Málaga area. It would be great to meet up with some testers and do a few hours' workshop on something of interest.

If you're in the area or nearby and can get to Málaga, or you're a team/group no more than an hour away, give me a shout.

You can set the topic and we'll split the time 50/50 between tuition and hands-on practice. Afterward we can co-author an experience report for the test community.

Here are some ideas for what we could go through in 2-3 hours.

[Exploratory Testing]
* What it is and what it's not
* Test Charters, Diaries and Sessions
* Test design techniques, Heuristics and Memes
* Reporting; Bugs and Progress
* Deploying Exploratory testers in development teams
* How to use it on its own and in combination with other approaches
-- I'll leave you either copies of template documents or software to use

[Live Documentation / Active Specification]
* Why documents are dead, literally!
* Live documentation; what is it and why you should use it
* Active Specification; what that is and how it's the link to automation
* FitNesse and Concordion; working example frameworks using Ruby and Java
-- I'll leave you the example FitNesse and Concordion frameworks

[Selenium (Ruby) Automation]
* What the Selenium tools are and why we use each
* Getting set up with IDE and RC^
* Using the IDE for assisted testing and rapid prototyping
* Converting IDE tests to Ruby for use with RC, to test across multiple browsers (see the sketch below)
-- I'll leave you with a working Selenium-Ruby framework and some sample scripts

^ I don't think we'll have time to get Grid installed and running but I'll leave a workbook of instructions I have so you can follow up.
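
To give a flavour of that last conversion step, here's a minimal sketch of what an IDE test turned into Ruby for RC might look like. It assumes the selenium-client gem and an RC server running on localhost:4444; the URL, locators and expected text are placeholders, not from a real application.

  # Minimal sketch only - assumes the selenium-client gem and an RC server
  # on localhost:4444; the URL, locators and expected text are placeholders.
  require 'rubygems'
  require 'selenium/client'

  browser = Selenium::Client::Driver.new(
    :host    => 'localhost',
    :port    => 4444,
    :browser => '*firefox',        # swap for '*iexplore', '*googlechrome', etc.
    :url     => 'http://www.example.com',
    :timeout_in_second => 60
  )

  browser.start_new_browser_session
  browser.open '/login'
  browser.type 'id=username', 'testuser'
  browser.type 'id=password', 'secret'
  browser.click 'id=submit', :wait_for => :page
  puts(browser.is_text_present('Welcome') ? 'Log-in test passed' : 'Log-in test failed')
  browser.close_current_browser_session

Run the same script with a different :browser value to cover multiple browsers from one set of tests.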

After we discuss each we'll get hands on and do some testing activities. You should end the session being 'able', not just having theory!

Contact me!
Email: mark@cyreath.co.uk
LinkedIn: http://uk.linkedin.com/in/markcrowther/
Twitter: http://twitter.com/MarkCTest

Mark.

Saturday, 25 September 2010

One Measure to Rule them all

Measures and Metrics are back under discussion in the Mark camp of testing. I recently met with Mohinder Khosla near St Paul's after we'd exchanged emails on the topic and shared some material with each other.

I recently wrote a Test Approach where I included five of my favourite measures:

Test Preparation
• Number of Acceptance Criteria v. Number of Test Cases per functional area (Bar Chart)
• Number of Test Cases Planned v. Written & Ready for Execution (Burndown)

Test Execution and Progress
• Number of Test Cases Executed v. Test Cases Planned per functional area (Burndown)
• Number of Test Cases Passed, Failed and Blocked (Line Chart)

Bug Analysis
• Total Number of Bugs Raised and Closed per period by Severity (Line Chart)

What struck me as I was writing the approach was that we really needed to know one thing:

“The total number of Acceptance Criteria moving into a pass state – this is PROGRESS”

Sure, we need ways to assess the backlog, complexity, test execution progress and so on, but on an ATDD project what we really want to know is which acceptance criteria have gone green.
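
Purely as an illustration (this isn't from the Test Approach itself), here's a tiny Ruby sketch of that one measure, assuming you can pull each acceptance criterion and its current state out of whatever tool you track them in. The criteria and states below are made up.

  # A tiny sketch of the 'one measure': the proportion of acceptance
  # criteria in a pass state. The criteria and states are invented.
  criteria = {
    'AC-01 Valid log-in starts a session'  => :pass,
    'AC-02 Invalid log-in is rejected'     => :pass,
    'AC-03 Failed log-ins are counted'     => :fail,
    'AC-04 Account locks after 3 failures' => :not_run
  }

  passed = criteria.values.count(:pass)
  puts "Progress: #{passed} of #{criteria.size} acceptance criteria green " +
       "(#{100 * passed / criteria.size}%)"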

Mohinder is working on a paper that brings together a number of sources and views; I look forward to seeing it soon.

Friday, 17 September 2010

Test Case Workshops

Hidden deep within Requirements and Functional Specification documents are the test cases we're looking for. Like any hidden treasure, they need to be discovered through effective exploration and deduction about the nature of the environment they exist in, who put them there, and why.

When we (testers) review these documents, or other sources that provide this information, we have one question at the front of our mind: 'what test cases does that need?' We need to be thinking about how we'd know that a requirement or acceptance criterion has been delivered on via the software functionality that's been developed.

How are we going to quickly and efficiently find those needed test cases? The easiest way is to brainstorm them in a workshop with testers.

Get one or two other testers with you and a collection of requirements or acceptance criteria you want to find the test cases for. Assign one to write on the whiteboard, another to write up the test cases, and another to simply participate freely and shout out ideas. Everyone is equal, all ideas are to be explored, and there should be no negative comments or limiting of creativity.

As each item is talked through, map it up on the whiteboard; use flow diagrams, spray diagrams, UML-type layouts, mind mapping – whatever allows a free-form diagram of linked ideas to be created. Each idea is a possible test case. Let's see what we might draw up and talk about if the requirement was for a user to log in to a system.

EXAMPLE
The user logs in, so they must be able to log out. If they log in there must be some log-in information, such as a user name or password. Q) What's the structure of the user name and password? Where are they stored? What characters are / aren't allowed? How many? Where are unsuccessful log-ins recorded? How many failed log-ins are allowed? Etc, etc.

After more questioning and reading around the requirements, Functional Spec and acceptance criteria, a diagram might look something like this:



Test Cases
From here we can start to capture test cases based on what we’re thinking. In the format I prefer these days they might include:

  GIVEN a user has an active account
  WHEN they log in with a valid Username and Password
  THEN they are authenticated and an active session is started

Some might not be so tidy as they require multiple outcomes such as:

  GIVEN a user has an active account and the failed log-in count is 0
  WHEN they attempt to log in with an invalid Username and valid Password
  THEN they are notified of a failed attempt
  AND the log-in failure is logged
  AND the failed log-in count is incremented to 1

This is fine; it's still one test condition, it just has multiple outcomes, which is more like the reality of complex software.
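
One nice side effect of this format is that the same wording can later drive automation, in the Given/When/Then style used by tools such as Cucumber. As a minimal sketch only, here's how the first scenario above might be wired up in Ruby; the LoginPage class and create_active_user helper are hypothetical, just to show the shape of the step definitions.

  # Cucumber-style step definitions - a sketch only. LoginPage and
  # create_active_user are hypothetical helpers, not a real framework.
  require 'rspec/expectations'

  Given /^a user has an active account$/ do
    @user = create_active_user              # hypothetical test-data helper
  end

  When /^they log in with a valid Username and Password$/ do
    @result = LoginPage.new.log_in(@user.username, @user.password)
  end

  Then /^they are authenticated and an active session is started$/ do
    @result.should be_authenticated          # maps to result.authenticated?
    @result.session_active?.should == true
  end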

Have a go at writing some more test cases against the above diagram. Have we missed any other test conditions?

We’ll leave the idea of Concrete Examples, Analysis Techniques, etc. for another post.

For now reflect on the complexity of the above Log-In example, even if it seemed simple at first. Think about how much easier it is to brainstorm the test conditions with your testing colleagues and get the test cases written up in the same session.

Take a break, share the test cases with others (Dev, PM, etc.) and get their take on them. We also had a lot of questions, so share those out and get clarification. This approach is much more efficient than one person sitting at their desk alone. It means knowledge sharing on equal terms and ready progress.