
Friday, 28 December 2007

The joy of YouTube

It's taken me a while to realise that YouTube is in fact a great resource for the study of software testing. Using the Google Groups I'm part of, reading Wikipedia and listening to software testing podcasts are a given, but for some reason I'd never sat down and spent time on YouTube. Curious.

Well, that's the great thing about Christmas breaks. Time to eat too much, watch TV and spend time in front of YouTube.

If you're reading the blog looking at getting into testing then have a watch of the Portnov School's six-video set that introduces the course they offer. They're in the US but the overview of the basics a new tester should master is the same the world over. I've encountered a fair number of people who might have ISEB or similar but don't know some of the basics this course seems to cover.

A good video by the well-renowned James Bach is called Becoming a Software Testing Expert. Many comments about James paint him as arrogant and self-promoting. To a degree I think he is, but to think that's all he is would be to miss the essential message he tries to convey: that there are no 'experts', there are no 'best practices' and maybe no spoons either. Keep this in mind when watching and it's a great hour with an interesting guy.

Another interesting video is on Agile Testing by Elisabeth Hendrickson. She starts by discussing the more traditional test approaches to put Agile in context; the main discussion picks up from around the 13-minute mark. During the video she discusses Agile and XP and how they sit together, which was a useful perspective, in addition to how Scrum fits in.

Elisabeth mentions a couple of things that aren't quite 'right' though. One is the independence of the testing function, where she states it's not correct to have the function independent of the Agile team. In fact, Agile teams support that independence through the equality of the function within the team.

As part of the Google Tech Talks series there's Scrum et al with Ken Schwaber, which is a great follow-up to the previous video. He's surprising in that he doesn't really evangelise the use of Scrum, despite being its main creator. As a further follow-up, Jeff Sutherland has a talk about Scrum implementation.

The interesting thing from both the Agile and Scrum videos is that both talk about the need for documentation, something that is often assumed can be dropped as a mark of going Agile...

There's five hours of viewing there alone and many, many more videos to watch. So enough blogging, time for a cuppa and some more videos.

Tuesday, 20 November 2007

The Dreaded Walkthroughs and Reviews

It's always been a source of bafflement (technical term) that within the testing domain Walkthroughs and Peer Reviews are not more widely practised. I recall being 'invited' to a code review where a developer went through their code line by line in Dolby Pro Logic monotone. It was almost as painful as the many walkthroughs of Test Plans I've subjected folks to.

What I took away from the meeting was how incredibly useful and interesting it 'could' have been. Here was the opportunity to have code explained line by line, to be provided a live narration of the thinking, logic and reasoning behind what had been created. What's more, our own views and opinions could be incorporated into what had been made.

If this was some contemporary artist or author giving a narration of one of their works, we'd be fascinated and consider ourselves fortunate that we might influence the work. But it's just some bloke we work with and it's code, so we fall asleep.

The problem is it can be hard to get excited about code, and even harder to get excited about someone talking about it! The reality is most Walkthroughs and Code Reviews are brain-numbingly boring.

You've probably heard of, and been subjected to, Walkthroughs and Code Reviews of various types: the idea that the author of something (code, plans, schedules, etc.) will sit down with an interested group and walk them through it, line by line, clause by clause, page by page, explaining what was written and why. Occasionally asking for input, "All OK?", "Seem to make sense?", so that after say 30 minutes, or maybe an hour, everyone has found they're not interested anymore and is half asleep. Actually, probably asleep. It makes me feel tired just describing it!

Peer Code Reviews
Peer Code Reviews on the other hand are meant to be snappier, more active and energetic. Think in terms of having produced a usable piece of code, say a set of core functions or an interface onto those APIs your buddy wrote. Then, with this small chunk in hand, get it reviewed. The review is say 10 minutes long; you're doing a Walkthrough, but it's a lively narrative.

To put it directly, some Best Practices:
- Small, manageable review items: not weeks of coding
- 5 or 10 minutes of review time: not 1 or 2 hour brain-numbing marathons
- Grab reviewers: don't book out meetings, keep it proactive (but don't break folks' concentration if they're 'in the zone'!)
- Actively seek review from different colleagues: cross-mentoring and knowledge sharing
- Stay responsive to new perspectives: review isn't just bug finding, it's learning tips and tricks too.

Peer Test Reviews
The interesting twist for us in the test domain is that we can apply this approach to our Test Artefacts. When we're in an Agile environment it serves us well to work in the same spirit as our developer colleagues.

Why not get a Peer Test Review of those 20 or so Test Cases you just put together for that module? Incrementally delivering Ready-for-Execution Test Cases is a great way to help the likes of Project Managers feel relaxed that we're making progress on our planning and prep.

Doing the same with Test Plans, Designs, Breakdowns or whatever other artefacts you produce is also a win. This lightweight approach achieves our objectives but stops us getting bogged down in heavyweight process.

Follow the above Best Practices and keep the event lively. If you really must book out a meeting to coordinate review with several people at the same time, that's OK, just go a little overboard with your presentation. Print copies in colour if you can, or let folks access it on their laptops to save trees. Use the projector to make viewing easier, create slides that are already noted up or can be 'written on', whatever keeps the energy levels up.

Mark Crowther
Peering at you!


The idea of applying Peer Code Review to the test domain came from: http://www.jaredrichardson.net/blog/2007/03/14#peer_reviews, thanks Jared.

Wednesday, 3 October 2007

The Future of Testing?

A message over at the QA & QC Group made my blood boil slightly. Here's the question and the response I gave. Grrr... etc.

"I am working in MNC in India, Recently one of my team mates (PM level) told me that the future of testing is no scope, and he gave reasons."

To answer your question in succinct form, the future for software testing has never been better, the future is brighter than perhaps we suppose, it's brighter than we can suppose. Why?

1) In any software downturn, the companies fire testers first.
Wrong. This assumes the situation as it is now, or was in the past, but you're talking about what the future holds. The ones at risk are in development, or more likely Technical Authors or the Support Desk. The value the business derives from testing only improves as the economic situation becomes more difficult. The logic is that there's a need to maintain a share of a shrinking market, where the differentiator is price and quality, yes, in the singular, because they are so linked. Testers are generally far cheaper than developers and, in safeguarding quality, are essential. Safeguarding quality reduces the total cost of ownership, meaning a well-tested product is a better investment. A higher-quality product means a reduced need for Support Staff, thereby bringing a greater saving to the business.

2) Developers can do our job as well as coding.
Wrong. A simple reflection on the Division of Labour and Specialisation of Work theories will tell you that at some point there is enough work, and enough need for focused effort by individuals particularly experienced in testing, to need people whose specialisation is testing. Turn this around: can testers also code as well as developers? We would always say no to this; we understand that developers' specialisation and experience lend themselves to being developers who can test, just as ours lend themselves to being testers who can develop. But the two professions are not fully inclusive of each other.

3) In future, more capable tools are coming.
Wrong. This has been spoken of for many years, and even the best of the Record and Playback tools fall over on the simplest of issues. It's the same logic as for the AI-driven robots that should be cleaning my house and making me a cup of tea as I type... I still don't have one. Even if we accept the suggestion that these tools become so all-powerful that they act like a tester, who's going to configure, run, maintain and mature what they do? Is that person not, by definition, a tester?

4) There'll be no more investment in testing.
Wrong. The very act of investing in the above, in a desire to eliminate the tester, is by its nature investing in testing. If we accept that the paradigm of how we currently define 'tester' will shift, you'll still not get rid of testers. Again, what will happen is that the boundary between tester and developer will blur. I maintain it's the developers who are at risk in many areas.

All of the above statements your team mate made are based on the current paradigm of what a tester is. They are looking at what they understand the tester of today to be, and in doing so are really making statements about the tester of yesterday. Being in the profession, we're aware that the days of just hitting keys and clicking the mouse are over; they've been over for all but the most junior members of a test team for perhaps more than 10 years!

Today's and tomorrow's tester is a much more technically savvy professional. They can develop test harnesses, stubs and drivers written in, and interacting with, a variety of programming languages; author complex data sets; work with Agile teams in an integrated manner, on a level that blurs the boundary between tester and developer; use highly complex tool sets to test across the many components of the global system architecture; and much more.

Today's and tomorrow's tester is professionally educated, examined and accredited, with qualifications that include but go beyond the general computer science degree and courses a developer may typically have, and soon potentially membership of a Chartered Institute, putting them on the same level as Architects, Lawyers or HR professionals.

The key-hitting monkey of yesteryear now uses techniques grounded in complex statistical and analytical mathematics, cognitive psychology and some of the best scientific research, covering everything from computer technology to human logic.

As I said, the future for software testing has never been better, the future is brighter than perhaps we suppose, it's brighter than we can suppose.

Mark Crowther

Monday, 1 October 2007

Time Spend Analysis

I've just posted a new paper on my website which reports the findings of a six-week study into how a QA team spends its time.

At the start of the study we agreed a set of Key Tasks, such as 'Reading Test Documents' or 'Waiting on others', getting those involved in the study to agree the tasks and their definitions.
Then, for the next six weeks, six staff recorded their activity every 15 minutes. At the end of each week the stats were collated and charted to see where the department, teams and individuals were spending their time.
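
For anyone wanting to try the same, the collation itself needs nothing fancy. Below is a minimal sketch in Python (the CSV layout and column names are my own illustration, not from the study) that totals hours per Key Task from one row per 15-minute entry:

    # collate_time.py - sum 15-minute activity entries into hours per Key Task.
    # Assumes a CSV with columns: name, date, task (one row per 15-minute slot).
    import csv
    from collections import Counter

    def hours_per_task(path):
        totals = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["task"]] += 0.25  # each entry is a quarter of an hour
        return totals

    if __name__ == "__main__":
        for task, hours in hours_per_task("tracking.csv").most_common():
            print(f"{task}: {hours:.2f} hours")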


The results were certainly interesting and gave some valuable insights, such as:
  • Team members in work for 7.5 hours a day don't test for that amount of time.
  • Test Managers are available for testing under half of their working day.

No, really. I bet these two insights are obvious to those who read this blog, but on presenting the data to the client there was slight disbelief. I had to show them the Daily Tracking Sheets to prove I wasn't making the figures up.

The study participants also kept an Issue Log to track non-bug issues and start to get a broader insight into what else was impacting their testing.

Have a look at the Papers section of the website for the paper and hit the Templates button for the 'Daily Tracking Sheet' and 'Issue Log' templates I mention.

Let me know your thoughts. There's plenty of scope to make this study more scientific; what would you do, or is this good enough for what we were looking to prove? Does it apply outside of the study group and make sense in your environment, perhaps?

Mark Crowther, Head of QA Time Tracking

Sunday, 9 September 2007

Configuring Test Environments

Read the paper on Configuring Test Environments

"It works on my machine!", I'm guessing there are many test professionals who've heard this too many times for it to be funny. The problem we usually encounter though is that we wander over to the developers desk only to discover that ... it really does work on their machine.

The other version of this is 'the customers are finding bugs, how did you miss them?'. Now, having overcome your desire to beat the crap out of them for asking that ill-informed question, you might run their test case and find the bug. At which point you'd be well advised to review the way your test cases are written, the steps they include or the functionality they cover. But what if you can't reproduce the bug?

Deep joy. It happens, but why do these situations arise? Simply put, there's something different. That something might be test data, the software version, the hardware, the testing algorithm your steps present; who's to say. The main thing is you need to know what that something is.

The easiest way to do that is to have as many of the elements affecting testing as possible defined in advance of encountering issues. Easier said than done? Actually, reasonably easily done; you just need a sensible and considered approach. The approach I use when setting up a Test Function is outlined in my paper, Configuring Test Environments, an approach that brings many benefits.

It's abstracted from ITIL and has proven very valuable in helping to be assured about the status of the environment in which software is tested. Assuming there's good software configuration management happening too, you're walking in the park...
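
As a flavour of what 'defined in advance' can look like in practice, here's a minimal sketch in Python (entirely my own illustration, not taken from the paper) that snapshots a few environment elements into a manifest you can attach to a test run or a bug report:

    # env_manifest.py - capture a snapshot of the test environment so that
    # "works on my machine" disputes can be settled by comparing manifests.
    import datetime
    import json
    import platform

    def build_manifest(build_version, test_data_set):
        # The fields are illustrative; record whatever elements affect your testing.
        return {
            "captured": datetime.datetime.now().isoformat(),
            "os": platform.platform(),
            "python": platform.python_version(),
            "build_under_test": build_version,
            "test_data_set": test_data_set,
        }

    if __name__ == "__main__":
        print(json.dumps(build_manifest("2.1.0-rc3", "orders_regression_v4"), indent=2))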

Mark Crowther - Optimus Configuratus

Friday, 17 August 2007

Cross Site Scripting and other merriment

I got asked today about security testing and what tests I would recommend should always be run as a minimum. It was interesting to be asked, as no one has bothered before!
I'm guessing that, like me, you've heard of Cross Site Scripting (XSS). However, perhaps similarly to me, you've never had a good play with it. Sure, all test teams will run a standard battery of UI validation checks, but security-focused checks? I bet most companies that should insist on them don't, and I know from experience it's a luxury to be running these, especially where there's no automation in place.
I wasn't surprised to find entire sites dedicated just to this area of testing. There's a great site over at http://www.xssed.com/ for the study of this.
Just for fun I then went and had a look over some of the sites I do / have done work for and hey presto, issues.
The most common issue I was able to invoke was through testing with a bit of JavaScript, the alert-box XSS approach. It seemed one form field or another would always do something untoward.
The Cheat Sheet I used is over at http://ha.ckers.org/xss.html, and just dropping down to using "< pulled out some interesting issues too. My second non-surprise: how common it seems. I'm such a cynic, aren't I.
I'm not sure how vulnerable being able to pop these alert boxes makes the sites; I need to gen up on my JavaScript more. However, one thing I do know is SQL, and just popping ' and 1=1-- into form fields produced far too many errors for comfort. Playing with a few SELECT statements after seeing table data, and malforming URLs using JOINs, was even more entertaining.
In the past I've pretty much used a standard battery of UI validation entries, but I'll be adding to the set over the coming weeks for sure!
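
If you fancy building the same habit, the battery can start very small. Here's a minimal sketch in Python (it assumes the third-party requests library; the URL and field name are invented, and obviously only probe sites you have permission to test) that posts the classic payloads into a form field and flags suspicious responses:

    # probe_fields.py - post a few classic XSS/SQLi probe strings into a form
    # field and flag suspicious responses. Only run against sites you may test.
    import requests

    PAYLOADS = [
        "<script>alert(1)</script>",  # the alert-box XSS probe
        '"<',                         # the cheap malformed-markup probe
        "' and 1=1--",                # the classic SQL injection probe
    ]

    def probe(url, field):
        for payload in PAYLOADS:
            response = requests.post(url, data={field: payload})
            reflected = payload in response.text  # payload echoed back unescaped?
            print(f"{payload!r}: status {response.status_code}, reflected={reflected}")

    if __name__ == "__main__":
        probe("http://test.example.com/search", "q")  # hypothetical target
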
Mark Crowther, Head of teh haxzor team

Sunday, 1 July 2007

The Importance of Documentation

Documents and other artefacts

The QA team can often be heard asking for documentation, much to the annoyance of those being asked. Those being asked are usually Developers or Project Managers of course. The documents being asked for are variously titled and might be Story Cards, Requirements, Functional Specifications, Technical Designs, Preliminary Specifications, a collection of these or something slightly different.

Either way, from the QA perspective they provide an overview of the requirements the development will try to deliver on. They'll describe what the intended design looks like, what the developers are coding against, and the explicitly stated requirements from which implementation specifics can be gleaned. By explicit specifics I mean, for example, not just a high-level schema but details of tables, rows, columns, fields, mappings, allowable data, the format of that data and so on.

Why? Because if requirements are explicit then the Test Cases we write are going to be specific, in which case they won't be general. See that not-so-subtle difference? If requirements are not explicit, or documents are incomplete or ambiguous, then the tests we can write will be general in nature. For example, a test of this type would be "The correct data should be output from the order capture system": you'll see data, it'll look correct, nothing bad will appear to have happened, and the test will pass. Tests like this only prove the system doesn't do something we obviously wouldn't want it to do. That is, we'll only test that the system does something that, when observed, could be generally assumed to be correct.

Not really a very useful test though, is it? Writing good Test Cases is a bit like writing SMART goals: you don't have to guess whether the goal was achieved, because you already know what the planned, predictable, intended outcome looks like. Without that, we'll be observing, not testing.
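
To make the difference concrete, here's a minimal sketch in Python (the order, the VAT rule and the class are all invented for illustration) of the same check written both ways:

    # specific_vs_general.py - the same check written generally and specifically.
    from dataclasses import dataclass

    VAT_RATE = 0.175  # imagined explicit requirement: totals are gross of 17.5% VAT

    @dataclass
    class Order:
        net: float

        @property
        def total(self):
            return round(self.net * (1 + VAT_RATE), 2)

    def test_order_output_general():
        order = Order(net=100.00)
        assert order.total is not None  # "correct data should be output" - proves little

    def test_order_total_specific():
        # Written from the explicit requirement, so the outcome is predictable.
        assert Order(net=100.00).total == 117.50
        assert f"{Order(net=100.00).total:.2f}" == "117.50"  # format is part of it

The first test can barely fail; the second fails the moment the rule or the formatting drifts.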

However, if no one actually states what the requirement is - clearly, explicitly, unambiguously - we'll have no idea how to test that the intended outcomes of the development work really did produce the product that was wanted, in the way it was wanted. We can never write SMART, useful tests. We'll just write a whole raft of superficial, not very useful tests that'll probably pass, because the outcomes don't look wrong, and that are very unlikely to find important bugs.

Mark Crowther - QA Manager and Documentation Clerk

Tuesday, 12 June 2007

Thoughts on Recruiting

I've just completed a cycle of recruitment and it's reaffirmed that there are some people out there 'swinging the lead'. I remain convinced that 80% of candidates in our profession are just not that good, or not interested, or are overstating their experience, or haven't internalised their material, or some combination of these and other elements.

So, how to make sure the recruitment process goes well? Here's my 2p worth.

The first step is to get yourself CVs from an agency you trust. Ensure they understand you expect pre-filtering; you expect them to understand the testing domain and, with that knowledge, to pre-screen the candidates they present. Refuse to work with body-shop agents and feel free to ask YOUR AGENT what experience they have of the testing domain. At the very least you want to be getting CVs good enough that you'd be interested in them, not just interested in trashing them. Good enough meaning they are pre-screened and aligned to the needs of your role.

Secondly, do telephone interviews. I never used to, but I've learnt that for the sake of 20 minutes I can screen out the many candidates who aren't quite good enough. Get them to discuss their experience, bring the CV 'to life' for you and touch on what they understand of the role. I'd suggest you can get a sense of the character of the person, and a sense of whether you believe their CV.

Thirdly, interview face to face with a colleague. I've never run tests, by the way. My interview consists of the candidate re-running their walkthrough of the CV "for the benefit of my colleague"; the difference can be very revealing second time around, as in highly inconsistent. Assuming they seem consistent, I then bounce them around question-wise: ask about practice, process, theory and experience, then jump back to a comment they made 10 minutes ago and ask how they think it relates to, affects or supports another comment they just made. Look for them bringing thoughts together in a way that demonstrates they KNOW what they're talking about, that they really have lived the content of their CV.

Finally, make sure you go in having read the CV twice. Highlight sections in different colours, write out some questions beforehand and put together some form of Interview Pack that has updated org charts, the job spec, a copy of the CV, etc. For ideas, have a look at the templates section of the website for some of the Team Pack documents.

If you hire them and they're rubbish, pull them up straight away, find out why, and be direct, even aggressive, but supportive. Make sure they know you consider them to be underperforming and define your expectations of performance. If they don't meet those expectations then cut them without hesitation; do not let it linger on in the hope they will improve.


Mark Crowther
Holder of the Magical Appraisal Dust

Wednesday, 23 May 2007

Books, Papers, Forums

Reading, it's a fantastic thing. But how many people actually do it? Other than blog trawling I mean.

I ask because folks always seem surprised that I have a number of Software Testing, QA and general Management books on my shelf. Around 30 at the last count. How many have I read? Maybe half, cover to cover. I've also half-read many papers and articles on the interweb too.

That's just the thing, so few people read, and keep reading. Read books, blogs, other websites, papers, discussion documents, theses, dissertations. When was the last time you went over to Google Groups or SQA Forums and did some reading and replying? Is it a worry about exposing your possible ignorance? If you go to those forums you'll see I'm comfortable exposing myself to strangers.

I wonder if it's the same with buying technical books? The possibility of not understanding them and making yourself feel stupid, or the sense that, God, it's boring. But there's a couple of bits of good news here.

Firstly, if most people in our profession aren't reading then it won't take much effort to read your way above them. A few concerted reading sessions and you'll be steps ahead of most folks.

Secondly, even if you have a hard drive full of things you'll read later (or finish reading later), along with a bookshelf of half-read or never-opened books, well, at least you're building a reference library for when you need them.

If you'd like to see some of my collection then wander over to the Books section of the website. I'll be adding more there as I write my reviews. That's to make sure I read them... well, it's worth a try.

Mark Crowther the Librarian

Saturday, 28 April 2007

A Documentation Model

Read the Documentation Model discussion document.

What's the next most groan-inducing thing you can say to developers and management after "Let's introduce a Quality Management System!"? How about "... with a really good set of formal documents!". Yep, developing a Documentation Model, that's how to win friends and influence people all right.

OK, so as quality-oriented professionals we know a rational approach to the documents we use is a good idea. We also know that some external groups, such as those troublesome (kidding, we love you) customers and certification bodies, keep wanting to see our documentation.

It sure is good to show off a little by providing documents that are clearly part of an organised and well managed set. But where to begin?

Well, leaving aside dry talk of such things as documentation audits, let's just focus on getting some documents in place. Don't worry, I've developed a way for the software test team to put in place a structured system for test documents that's easy to understand and maintain.

It can even be scaled into a full Quality Documentation Management System with a little help from a few tools and willing participants. But that's another dry part we can discuss later.

A document about documents, all in three pages or less. Wonders never cease.

Mark Crowther, Head of QA and Documents about documents

Monday, 26 March 2007

Security Testing

These last weeks I've been drawn into the study of Software Security Testing. By that I mean the testing of security features, not of the infrastructure.

As if building the website and playing video games while trying to learn Python wasn't enough. It's been a busy few weeks while I'm 'in between jobs'.

It seems that the area of software security testing has yet to reach the level of maturity that Software Testing is beginning to enjoy. Not that software testing, even your basic black box functional testing, is as universally accepted as software development.

Soapboxing that software security testing should be seen as a separate area of delivery, one that needs trained and experienced professionals, sounds like the evangelical standpoint that was adopted for testing itself a few years ago.

Yet, reading around, you discover there are some luminaries in the field pushing for this to change. The likes of Gary McGraw, the CTO of Cigital, stand out as security testing's own Cem Kaner. Visiting his software security website over at http://www.cigital.com/ and listening to the Silver Bullet Security Podcasts reinforces that.

A sister company of Cigital is Fortify, and they've developed an interesting approach to the actual delivery of software security testing. Combine that with the need to build security into the product right at the design phase.

Over at the Cyreath website I've written a discussion document around software security testing, read it here: http://www.cyreath.co.uk/papers/Cyreath_Testing_Software_Security.pdf then come back to the blog and share your thoughts.

Mark Crowther - Head of SWT (South West Trains...)

Monday, 5 March 2007

Q-bit anyone?

If you're off to the Q-Bit Test Expo on the 22nd of March, drop me a mail!

No Testhouse this year; I hear a rumour they're attending every other year. Vizuri will be there along with their partners Q-Bit, who they get training services from.

Chris Ambler, QA Director of EA and ex-rugby player (judging by the ear), will be giving an expo talk on testing video games. £10 says he tells us you can't really write Test Cases for games ;p If he gets his black ball out I'm going to storm the stage...

Please God make the presentations interesting this year. I fear for the testing n00bs and their hopeful scribblings as they listen to product- and company-keyed presentations in the hope of true testing enlightenment.

Mail me (Mark Crowther) if you want enlightenment and confustication, don't listen to tool vendors!

OK, see you there!

Mark Crowther - Testing and QA Gnu

Friday, 23 February 2007

Developing the Team

Read the Developing the Team paper

One of the things I do in any role is Team Mentoring, which in part can involve setting up an appraisal system for the Test Team. That may seem unusual for a manager who focuses on delivering solutions around Software Testing and Quality Assurance. I'd agree it is unusual, but it's often insisted on, especially where:

  • There's no system in place in the business or at least not one that's being followed to any great extent.
  • The business wants the performance of their shiny new Test Function managed effectively and measurably.

Setting up a new Test Function or hiring in a Test Team is an expensive and somewhat risky undertaking. Executive management must be assured that the Test Function delivers not only from a technical standpoint but also from a personnel one.

Good testing isn't enough; the business wants good teams made up of good corporate citizens. Are you living the brand, living proof of the corporate values, evangelical in your delivery? The Test Team, especially a new Test Team, will be under incredible scrutiny and must be 'seen' to perform in all aspects.

See, now we bring some of the pieces together, I bet it doesn't seem unusual at all. Why not read the paper and share your thoughts here?

Mark Crowther, Head of QA and Team Flowering.

Sunday, 21 January 2007

Notes for Emerging Leaders

Read the Paper - Notes for Emerging Leaders

How many times have you learnt something that seems so simple and common-sense, yet it's taken years to discover? We've all been there, perhaps knowing something but having no name for it.

My good friend and Test Manager Stuart Crocker and I had a brainiac confab and decided to codify some of our key learnings.

Rest assured that this is in no way TEH answer (4 teh w1n) to your managerial education, but it IS a good start! Let's face it, you'd be an idiot to ignore our wisdom, and you're not an idiot, are you?

Enjoy and comment, you slacktard.

Mark Crowther - Test Manager and Head of Queue Hay