
Monday, 28 February 2011

London Tester Gathering - May 2011 - Sign-up now!

I just looked over the growing list of talks and workshops being given at the London Tester Gathering in May and, bloody hell, it's looking good. I'm not sure how Tony manages it, but these gatherings just go from strength to strength.

http://skillsmatter.com/event/agile-testing/london-tester-gathering-2011

Given I have a lot to say about testing, I fancied requesting a slot to talk. But... I'm going selfish on this one, as I want to be in the audience and learn from some of the folks giving the presentations and workshops.

There's already too much cool stuff happening across the two days to see it all!

If you do nothing else today, go check the site out and sign up. Book holiday, coerce your boss into giving you the time to attend - just do what you have to do to be there!

http://skillsmatter.com/event/agile-testing/london-tester-gathering-2011

When you've read it, please copy and send the tweet below:

London Tester Gathering, May. Sign-up now! http://skillsmatter.com/event/agile-testing/london-tester-gathering-2011 @tonybruce77 #LTGMay RT!

Mark 'is it May yet' Crowther

Tuesday, 15 February 2011

Usability testing - a little checklist to use

Looking at the UI, it wasn’t clear to me where the buttons were. That might sound odd, but the graphics of the website were very ‘arty’ and it wasn’t immediately obvious where I was meant to click.

I remember another website I visited where I added items to the basket but wasn’t sure whether they had been added. I then wondered if in fact the ‘add to basket’ button was inactive and maybe I needed to log in first.

I’m sure we’ve all been on websites where the layout changes and our tour of the site is suddenly halted while we work out how to use it.

These are all examples of usability issues, and though we as testers often know almost instinctively that they make sites and applications ‘buggy’, we might hesitate to raise them as bugs. The closest we come is probably during functional testing, with an uncertainty about whether these types of issues go beyond what we should be testing.

This is, I think, in part because as testers we don’t have a clear idea of how to classify Usability issues, and in part because we might be stuck in a ‘requirements validation’ mind-set.

Getting out of just validating requirements is already being talked about a lot in the testing community, so we can take that one as read. Having a clearer view of how to classify Usability issues is something we can talk about here.

When I write Test Plans I include the usual system and functional test types, but also Accessibility & Usability. I put the two together because many Accessibility considerations are affected by the Usability of the site.

When doing Accessibility testing we’re thinking about people with various levels of visual, physical and cognitive impairment who would benefit from a little ‘reasonable adjustment’ to the way we create software.

Usability is connected to this but extends beyond it, affecting every user of the website or application. Therefore this under-addressed form of testing needs to be factored into our test planning.

My guidance has come from the simple list below, courtesy of the Open University. When you experience a Usability issue in the software you’re testing, but you think it’s stretching the definition of a requirement, just label it as one of the below. Get the bug tracking software set up with a Usability classification too (there's a small sketch of the idea after the list). Oh, and go chat with your company’s User Experience person and talk about this. One more team on our side :)


HCI Design Principles

Visibility in the context of UI design means making it clear what a UI element is used for.
Feedback is related to the concept of visibility. In the context of UI design it means making it clear what action has been achieved through the use of the UI element.
Affordance in the context of UI design means making it clear how a UI element should be used. Everyday examples include a cupboard handle to be pulled and a door plate to be pushed. When the affordance is wrong, a written sign is often needed.
Simplicity in the context of UI design means keeping things as simple as possible.
Structure: A UI will be more usable if it is structured in a way that is meaningful and useful to the user.
Consistency in appearance, positioning and behaviour within the UI makes a system easy to learn and remember.
Tolerance refers to the ability of a UI to prevent errors if possible, or to make them easy to recover from, if not.

Source: The Open University, M150 Unit 12, ISBN 0749257695
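
A practical way to use the list is to get the principle names into your bug tracker, or even a small helper script, as a Usability classification. Here's a rough Ruby sketch of the idea; the principle names come from the OU list above, but the descriptions, method name and title format are just mine for illustration rather than any particular tracker's convention.

# The seven HCI principles as a Usability classification.
# Everything except the principle names is made up for illustration.
USABILITY_PRINCIPLES = {
  'Visibility'  => 'Not clear what the UI element is used for',
  'Feedback'    => 'Not clear what action has been achieved',
  'Affordance'  => 'Not clear how the UI element should be used',
  'Simplicity'  => 'More complicated than it needs to be',
  'Structure'   => 'Layout not meaningful or useful to the user',
  'Consistency' => 'Appearance, position or behaviour varies',
  'Tolerance'   => 'Errors not prevented, or hard to recover from'
}

def usability_bug_title(principle, summary)
  unless USABILITY_PRINCIPLES.key?(principle)
    raise ArgumentError, "Unknown usability principle: #{principle}"
  end
  "[Usability/#{principle}] #{summary}"
end

puts usability_bug_title('Feedback', 'No confirmation that the item was added to the basket')
# => [Usability/Feedback] No confirmation that the item was added to the basket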

More information:
This book makes interesting reading: http://www.amazon.co.uk/Design-Everyday-Things-Donald-Norman/dp/0262640376
Website Design Considerations for Older People: http://www.usabilitynews.com/news/article2911.asp

Tools:
Web Accessibility Toolbar: http://www.visionaustralia.org.au/info.aspx?page=614
Colour Contrast Analyser: http://www.visionaustralia.org.au/info.aspx?page=628
Colour Blind Web Page Filter: http://colorfilter.wickline.org/
Colour Scheme Generator: http://wellstyled.com/tools/colorscheme2/index-en.html
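
As a flavour of what a tool like the Colour Contrast Analyser is checking, here's a rough Ruby sketch of the WCAG 2.0 relative luminance and contrast ratio calculation. The hex colours at the end are just example values, not taken from any particular site.

# Sketch of a WCAG 2.0 contrast check: convert each colour to relative
# luminance, then take the ratio of the lighter to the darker luminance.
def relative_luminance(hex)
  rgb = hex.delete('#').scan(/../).map { |c| c.to_i(16) / 255.0 }
  r, g, b = rgb.map { |c| c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055)**2.4 }
  0.2126 * r + 0.7152 * g + 0.0722 * b
end

def contrast_ratio(foreground, background)
  lighter, darker = [relative_luminance(foreground), relative_luminance(background)].sort.reverse
  (lighter + 0.05) / (darker + 0.05)
end

ratio = contrast_ratio('#777777', '#FFFFFF')
puts format('Contrast ratio %.2f:1 (WCAG AA asks for 4.5:1 for normal text)', ratio)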

Wednesday, 2 February 2011

Test Early and Often

On a recent project it was agreed that testing would mainly be executed in the UAT phase; I was to build a set of test cases that could be executed by the customer. While some testing might get done informally before UAT, that was to be the main test phase. When UAT commenced there was a delay while the test cases were proven and understood, before being run in anger.

Some test assumptions were wrong and test cases had to be updated. Some functionality wasn’t implemented as described in the requirements documents, meaning more clarification and correction. Server issues further delayed test progression, and overall the finish of UAT slipped by around two weeks. Sign-off was achieved, but a little grudgingly, based on contractual terms being met.

In an earlier project, testing began before we even had the code. Test cases were used as a way to validate the requirements, and gaps were found, not only in the thinking around the requirements but also in the third-party applications that were part of the overall integration. Test cases revealed conditions that those applications wouldn’t be able to meet, and a mitigation approach was agreed.

When new development was released, tests were run straight away, in part because automation was in place to assist. When UAT came around and the customer expected a final test run from us, we didn’t bother - there was no need for one. We handed the code and tests over and, less than a week later, got sign-off for the delivery.
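
For context, 'automation in place' simply meant scripted checks we could point at each new release as soon as it landed. This isn't that project's suite; it's just a minimal Ruby selenium-webdriver sketch of the sort of check involved, with a made-up URL and element ids.

# Minimal smoke check, run against each new release.
# The URL and element ids below are invented for illustration.
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
begin
  driver.get 'http://test-env.example.com/shop'
  driver.find_element(:id, 'add-to-basket').click
  basket_count = driver.find_element(:id, 'basket-count').text
  raise 'Item was not added to the basket' unless basket_count == '1'
  puts 'Smoke check passed'
ensure
  driver.quit
end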

These two very different approaches highlight the importance of testing often and testing early.

It’s not just about ironing out the issues early on, though of course we want to do that. We want to find the issues in the requirements, in our test cases and in our assumptions; we want clear information in developers' hands so they can code once and code quickly. (Developers are artisans, and artisans hate reworking what they’ve produced; they’re proud of their work, like we are.)

Testing early and often saves time (and money), cuts frustration and keeps people motivated, because it gives feedback, insight and confirmation of a job being done well.

Customer confidence, belief in the quality of the work we produce, and trust in our ability as a competent and professional team all come across when we test early and often. In consultancy this is the holy grail of keeping customers and winning new ones. Too many consultancies just do the job and follow the contract; frankly it's immoral - think of those that have been sued in recent years.

Build customer confidence by testing early and often, so that quality and delivery become a foregone conclusion.

Mark
