
Wednesday, 24 June 2009

Recording bugs in development, don’t bother.

There was an interesting thread of tweets on Twitter (no less), started by TestingGeek I believe, asking about the raising of bugs on agile projects.

In a traditional / heavyweight project I guess it’s a done deal. We find bugs in the testing phase, or indeed in any phase after the development phase, and they go through the bug lifecycle of log/triage/assign/fix/re-assign/re-test/loop-de-loop/close. No worries about tracking the bugs, building bug taxonomies, doing root cause analysis and corrective action planning and of course generating those all-important measures and metrics.

I think I first heard Elisabeth Hendrickson suggest that bugs in an agile project should just be fixed and not logged/tracked. On first hearing, this seemed a heretical suggestion. What about all that lovely stuff I can do when tracking bugs, for one, and aren’t I letting Developers ‘get away with it’, for another? Hmm.... Then I recalled that when I was at CAPS-Solutions working for Martyn Arbon about 4 years ago he’d suggested roughly the same thing: test stuff early and get the Developers to fix it while they’re still on the project. The latest person I’ve heard subscribe to this perspective was Tien Hua at NMQA.

It struck me today that I’ve never really considered what my view is on this. In recent years I’ve definitely been letting the Developers get away with it. I distinctly remember a project at CAPS that went out earlier than previous projects, with more in, with more testing done and of a quality that surprised the customers. Full life cycle testing too, right up to compatibility and UAT, and we finished about 2 days early if I recall correctly. The problem was we recorded next to no bugs, but we were finding them, that much I do know.

What felt new, as I remember it, was that we were talking to the Developers as we found bugs, encountered issues, got stuck. Essentially the test team was paired with a Developer, reporting issues to the Developer as they were encountered. A quick fix and retest later and we were off testing new cases. Testing progress was fast, test cases were moving to a passed state and the relationship with Development was positive. I hadn’t heard of Scrum, agile, et al at that time.

These days I’m inclined to follow roughly the same approach, but any bugs that are non-trivial are recorded. Trivial means the Developer can literally change it there and then because it was a brain-glitch moment (variable not initialised, variable scope context, incorrect typing, wrong table or file name referenced, etc.). Sure, we might need a rebuild to get the change, but the change is done in the time it takes to shout across the desk. For non-trivial bugs found at this point I’ve (once) had the team post a Bug Card on the wall so it becomes part of the backlog of tasks to be worked on that sprint/iteration: a blue sticker gets attached when it’s accepted/estimated/assigned, orange when fixed, green when closed. I recall some getting accepted, de-prioritised and resolved in later releases.
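
As an aside, that sticker workflow is really just a tiny state machine. Here’s a minimal Ruby sketch of the idea, purely illustrative; the class and method names are my own invention, not anything the team actually ran:

```ruby
# Minimal sketch of the Bug Card sticker workflow described above.
# Hypothetical names, for illustration only.
class BugCard
  # Sticker colours in the order a card moves through them.
  STATES = [:on_wall, :blue_accepted, :orange_fixed, :green_closed]

  attr_reader :title, :state

  def initialize(title)
    @title = title
    @state = :on_wall # a new card starts as a plain card on the wall
  end

  # Move the card to its next sticker colour, e.g. :on_wall -> :blue_accepted.
  def advance!
    idx = STATES.index(@state)
    raise "#{title} is already closed" if idx == STATES.length - 1
    @state = STATES[idx + 1]
  end
end

card = BugCard.new("Totals wrong on summary page")
card.advance! # accepted/estimated/assigned: blue sticker
card.advance! # fixed: orange sticker
puts card.state # => orange_fixed
```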

If the Developer is still coding, still progressing that bit of code towards being ‘done’ and ready for integration, then the more I can do to help that, rather than being a tester ‘getting in the way of progress’, the more I feel I’m fulfilling my role in the team. On a personal level I’m not interested anymore in catching out the Developer while they’re still working on something that’s being crafted. It isn’t fair and it doesn’t help. I wouldn’t want the same done to me while I was writing tests, but insight that helped me complete them would be, and is, very welcome. If we think about this for a moment it’s easy to see why Developers have got so peeved with testers in the past.

The change comes when the Developer declares they’re ‘done’ and (usually) moves on to the next item to code up, having integrated the current item so the test team can run the next stages of testing. Now a slight distance starts to grow between the Developer and the code they wrote, just like in a traditional approach where a Developer might complete development for one project and move on to the next. From here on, from integration to the end of the product’s life in production/live, I see the value of applying a bug life cycle that supports any tracking, taxonomies, RCA, etc. that might be found useful.

What are your views on this? Do we ‘miss out’ by not tracking ALL bugs? Where should tracking be done, and where should it not?
How have you balanced the desire to measure/manage versus make progress/collaborate?

Elisabeth Hendrickson gets it, Tien Hua gets it, I think I get it, how about you folks?

Monday, 22 June 2009

Feedback to PMs - weekly or daily?

Driving to work today I started thinking about ‘feedback’ on projects and how often we give feedback to a Project Manager on a traditional project. My experience has been that feedback is generally given once per week, at the Monday morning Project Management meeting.

I’d almost forgotten that this had been the way I used to provide feedback on how testing was going. Feedback once per week would see me having kittens now. I’m so used to the agile style of running a daily stand-up that the thought of weekly updates just seems so alien.

I imagined going to my online banking service and after logging in seeing a message that said ‘No updates, please come back Monday’! A moment after thinking this I realised in this situation I would immediately panic and be ringing the bank. A realisation which came a moment before I realised I’d missed my turn off! Doh!

Thursday, 18 June 2009

June SIGIST Talk - 'Pragmatic Testing - the Middle Way'

The following text is the transcript of the talk given by Mark Crowther at the June SIGIST in London.

What is Agile/Heavyweight/Testing anyway?
On forums, blogs and so on, we often see discussions asking for, and responses aiming to provide, a definition of what Agile testing is in practical terms.

In the same way we might use, say, Waterfall as a definition of what Heavyweight is, it sometimes seems that we're looking for a diagram or model as a way to define Agile.

Or perhaps by saying that Heavyweight is all about documents and process we can compare and say Agile must therefore be about avoiding documents and process.

The issue is that these simple comparisons will always fail to help us clearly define Agile and Heavyweight. The fact is that right now Agile and Heavyweight are, at best, metaphors or perspectives on how to approach testing practice.

They're not clearly defined paradigms, i.e. there's no robust and complete definition of the practices that make up Heavyweight or Agile test approaches. So, in my view, we as a profession don't clearly and collectively know what being in the Agile testing mode of practice is, yet we expect, and are expected, to deliver as if we do.

We don't have a clear set of practices and approaches that we can point to and say 'that's agile or that's heavyweight'. We have a few random things that sit in each area but mainly a subjective idea of what it means to be Heavyweight or Agile.

Who's read or contributed to discussions on forums where the question is about the tester’s role in an agile team?

It comes up every 4 or 5 weeks on the forums. The other one being 'I've been told we're going agile, what does that mean'?!?! You can almost hear the tester’s frightened voice behind the question.

The very fact we're at a SIGIST dedicated to testing 'in practice' tells us there's still a lot of thinking and discussion to do, not just around Agile either. If it was all signed, sealed and delivered we'd be talking about something else, here and within the testing community generally.

I previously worked at AOL and when I was there I had the good fortune to work with Thoughtworks. If you're familiar with them you'll know they could readily be considered thought and practice leaders in the agile development domain.

In fact around 8 years ago, on a project they were in to help deliver, they introduced us to crazy new ideas such as Daily Stand-ups, backlog items and Wikis. They were doing agile development and testing, 8 years ago... and here we are today as a profession still discussing what it means to do Agile testing in practice.

I believe there's a fundamental reason for this, and it's that our current way of thinking about what Heavyweight and Agile testing practice is, is deeply flawed.

It's flawed to such a degree that if we continue in the current mindset we'll carry on re-asking the same questions for years and still not arrive at the answers we're looking for.

False Expectations of the Testing Future
My first contention is that we hold a false expectation that the testing profession is undergoing an evolution from a Heavyweight view of the testing world to a more Agile-centric one.

In the same way, we might consider ourselves to have matured from an ad-hoc / unstructured testing world into the heavyweight one.

That in some way we'll become progressively more enlightened about how to realise testing process and practice that moves us more towards Agile, and in so doing moves us beyond the Heavyweight.

It's my view that this destination doesn't exist and that in fact we're already seeing what the future of the testing profession will look like. Consider what happens in practice when you're on projects and planning or actually delivering testing.

You may consider a project to be an essentially heavyweight one, perhaps it's testing in a regulated industry. But how often in those projects are you asked to create all the documents you proposed:
- but could you maybe not write out the test cases with all the steps?

Perhaps the project is lightweight, requiring a more agile approach, but how often are you then asked to work from iteration to iteration...
- but provide details and durations of all your tasks for the entire project just for budgeting and planning purposes?

- or could you provide a more complete Gantt chart, as that's what the customer is expecting to see, instead of just the dashboards you were planning to provide?

Even in Ken Schwaber's book, Agile Project Management with Scrum, he mentions at the end of almost every chapter that he has to step away from 'pure' Scrum and compromise on the way he would like to run things, in the ways just described.

The evidence from my experience, and from what I've heard and read from others, is that the reality of testing practice is neither purely traditional/Heavyweight nor Lightweight/Agile, but is more of a Hybrid of the two.

I'd suggest that the future of testing practice isn't going to be defined in terms of Heavyweight or Agile as we think about those perspectives at this time; in the future, testing will be defined in ways closer to the Hybrid approach we're experiencing on most projects today.

I believe that this hybrid approach, driven by the project's constraints and practicalities of the delivery, is more representative of the 'normal' state of testing practice that we experience.

However, while we continue not to realise that 'hybrid is actually normal testing', we'll continue to hold the mistaken belief that we must be purely heavyweight or purely agile, or it's not quite right.

What's more, in so doing we'll continue to try and evolve into Agile from Heavyweight, to the exclusion of practices and techniques we keep experiencing the need for.

Meaning and Impact of a Testing Paradigm
Adopting this perspective of hybrid testing as normal, we can more easily move towards defining a central, stable paradigm that more accurately describes the core of our profession.

In saying this I define a paradigm as 'a set of exemplary practices that define the core principles of a discipline'. This collection of practices comes together to give us a definition of normal testing, in the same way we have 'normal science'.

Normal science is often referred to as 'thinking inside the box' and represents the day-to-day accepted ways of approaching a scientific discipline.

The idea of a Normal Testing paradigm can be considered in the same way, as representing the exemplary set of practices we'd use in the day-to-day testing situations we'd expect to find ourselves in.

I'm not talking about defining immutable best practices here by the way, but I do believe it would be acceptable to refer to them as good practices. In many professions this set of practices is collated into a peer reviewed Body of Knowledge (BoK) by perhaps the governing, chartered institute of that profession.

We have the start of that in the ISTQB but it doesn't go far enough as the ISTQB doesn't publish the actual knowledge, just the syllabi for various courses.

Defining the testing knowledge relies instead on accredited commercial organisations designing courses or authors writing books that provide material that can be used to deliver the syllabi.

Not everyone follows ISTQB of course; as such, the testing profession's approach to defining the actual knowledge that would deliver our 'normal testing paradigm' is at best commercially biased, at worst fragmented and un-coordinated.

This situation therefore perpetuates the issue of us never being able to 'finally' define what Agile, Heavyweight or a normal testing paradigm might look like. At least, not in a way that is consistent, agreed and accessible to everyone in the profession.

It's this situation that I suggest causes us to remain confused about topics we knew about 8 years ago or to ask the same questions on forums every 4 weeks.

It's also what causes us to perceive the work of testing luminaries, such as James Bach, Elisabeth Hendrickson, Michael Bolton, etc., as the new, emerging paradigm we must follow or get left behind.

It's also what causes us to feel exasperated about what we should be learning and the approaches we should be taking, and why there seems to be so much disagreement about what should by now be done-and-dusted fundamentals.

It's in attempting to make sense of this that, as most of us go through our careers, we inevitably sway from Heavyweight to Agile to somewhere else, often feeling very confused along the way.

It’s also a barrier for newcomers to the profession and limits our standing as a serious, bona fide profession in the eyes of the rest of the world.

So what to do given the current situation?

The Middle Way – a Toolbox for Testing
About 12 months or so ago NMQA ran a series of in-house Workshops that essentially touched on the points I've made so far.

We realised that even when called in to work on what were suggested to be Heavyweight or Agile projects, they were always ‘Agile, but...’ and ‘Heavyweight, but...’, or something entirely different.

Now, as a consultancy we’d responded to these situations and delivered, but drawing on our previous experiences as managers and members of test teams we recognised that getting into this situation isn’t limited to consultancies.

During our careers as test professionals we’re going to encounter differing environments. Maybe we’ll work in:
- digital media or the online space where a more ‘agile’ approach is needed
- pharma or military where regulation needs a more ‘heavyweight’ approach

This is the ‘normal testing paradigm’ that’s at the heart of our profession, and being entirely focused on, schooled in, or beguiled by the Agile or Heavyweight perspective is going to limit us.

The practical outcome of this perspective at NMQA was to develop a ‘Testing Toolbox’ that contains all of the core models, practices, processes, documents, etc. that we would reasonably expect to need for most projects.

We used this to define, in real and objective terms, the ‘normal testing paradigm’ as NMQA experiences it: a collation of everything stable and accepted that we felt we could place ‘inside the box’. This is the same idea as the scientific paradigm I mentioned earlier on.

But what about testing practices and the work of people such as James Bach, Elisabeth Hendrickson, James Lyndsay, Matt Heusser, Lisa Crispin, Michael Bolton, and many others I could mention?

It’s important that they and the practices they promote, and agile in general, are not mistaken for the sum of the testing profession, even if we often fall into talking about them as if they are.

Continuing the analogy from earlier, a lot of their work, I’d suggest, provides great examples of ‘thinking outside the box’. These people are thought leaders, practice leaders, and they challenge and stretch us by what they think and do.

Like experimental science, the work they do pokes, prods, stresses, refines, re-defines, replaces, improves and tests the accepted body of knowledge. Every now and then from this we get a new or more powerful tool in our testing toolbox.

Exploratory testing is probably the most recent example that springs to mind.

We need to realise that 80% of ‘us’ need what makes up the normal testing paradigm, as we have day jobs where we can’t experiment; we don’t have the time or opportunity to dally with interesting and amusing experimental practices.

This is how I see these luminaries, these thought and practice leaders: as a vital part of the energy and vibrancy the profession needs to overcome the inevitable entropy that would otherwise occur in maturing the profession in the way I’m discussing.

Getting Involved
There are two things I don’t have time for in this talk today: Questions, and the opportunity to show you the tools in the toolbox.

What’s more, this is the first time this perspective has been presented to the testing community. We know what it looks like within NMQA but I’m interested in seeing what it means to the wider testing community.

Please do something for me - what I want to ask you to do is participate and contribute to this discussion. Here’s how:

• Visit www.softwaretestingclub.com and go to the forums. There’s already a post there waiting for your feedback, thoughts, disagreements, alternate perspectives.

• You’ll find a link in that post to a survey about this talk, click the link and complete the survey.

In return - if this talk has got you thinking, if the idea of moving beyond the ‘agile v heavyweight’ discussion or the ‘normal testing paradigm’ resonates with you, and if building a ‘Testing Toolbox’ for your organisation is of interest...

Drop me an email or call me and I’ll come and visit you and your team for a Workshop-type meet of an hour or two, where we can run through some of this again, I can show you some of the practices, template documents and test techniques, and we can think about what it means in the context of your organisation.

It’s my view that we need to fundamentally rethink the way we view our profession. The old perspectives have served us well so far but I don’t see them serving us so well in the future.

I’ve enjoyed talking to you this morning and I hope this has been of interest to you.

Thank you.

Mark Crowther

Thoughts after June BCS SIGIST

I enjoyed SIGIST this time. Last time I went I had the overwhelming urge to scream something like "What are you people thinking!?" or maybe just "STFU!", which may have ruined what little reputation I have as a thoughtful individual. Thankfully Michael Bolton was there this time so the bar was raised.

I was also seriously pleased to have a few beers with him and other folks last night, and today to just spend time absorbing what he had to say. James Lyndsay was there today too, I think he sneaked (snuck?) in. Having never met him before, it was a bit of a double-take when I saw him. My colleague Ian has; he went on James' Rapid Test Course and is a changed man. Michael suggested a line of study to me; it's little things like this that mean a lot.

It's not wrong to say that the thinking Ian is coming up with now, combined with my own stuff, is rocking our clients' world. 18 months ago when I joined NMQA I finally got time to really 'think' (write, post, ask, rethink) and study the work of folks like James B, James L and Michael (crap loads still to learn of course). Within a few months of being taught by James L, my colleague and I are as good as on a par with each other as regards the really kick-ass test approaches. (Yes, we still have different experience, unique perspectives, etc., we're not twins.)

I think that having been absorbed in this way of thinking is why I'm writing this blog when I should be in bed. It was the presentation after mine that has me twitching; I couldn't f*&^%&ng believe it, frankly. I doubt he'll ever read this blog, hence my uncharacteristic spleen venting. Dood, are you stuck in a f*&^%&ng timewarp? Listen to me.... "No, no, no, no, no" to infinity.

I can't remember the last time I wrote a 'real' Test Strategy, in the sense of the 47-page monster example I have on my laptop that I copied when I left a previous employer about 7 years ago. I remember liberating it because I thought it would come in handy; it didn't. I just checked my website too and see I have a Test Strategy Template there.

I have a confession to make: since posting it up there maybe two years or so ago, I've never used it. I hate them, I don't write them; I contribute to a Project Strategy if there's one. If I'm creating a document like this I write Test Plans that are never over two pages.

Strategies like this are as bad as the typical test case suites that make me want to slit my wrists, because that would be quicker than watching my life being wasted on them. If I'd intended to spend 2/3 of my time writing documents, updating them, explaining them and version controlling them, I'd have become a bloody documentation clerk or administrator. I wasn't hired because I can use Word.

By the way, I really don't care if I never see a formally reviewed, 40-page, version 2.0, signed and sealed Specification. It won't stop me testing, I don't need it to test; in a twisted way I have more fun when I don't see them. Anyway, it makes me cringe when the organisation I'm working in churns these out.

As for 'why am I testing if there are no requirements?' There shouldn't even be any code! So it won't be a problem; even if the document is missing I've got this crazy-ass idea: I could talk to the developer! (Conference, Reference, Inference - Lessons Learned)

We're testers, so ffs focus on testing. If you want to look after all that crap become a project manager or something. It kills me to see what you presented, no one cares, just focus on testing.

At NMQA we have a couple of phrases that get reused often:
* "Just enough definition, just enough control - nothing superfluous to needs" and
* "Test stuff - a lot!"

Test, test, test, that's why we're here; be militant about testing, the doing of it. Not being able to test because you're waiting for documents when there's software available is an excuse, an excuse incompetent testers use.

Go look at how you can use Bug Reports as test cases and post-it notes on a whiteboard as a requirements catalogue; shake your thinking up and read, read, read, read, read, then go read Lessons Learned in Software Testing.
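
In case 'Bug Reports as test cases' sounds abstract, here's one hedged sketch in Ruby of what I mean; the field names are invented for illustration, not any real template. The point is simply that a well-written bug report already contains the steps and the expected result a regression test case needs:

```ruby
# Hedged sketch: a bug report that doubles as a regression test case.
# All field names here are invented for illustration.
BugReport = Struct.new(:id, :summary, :steps, :expected, :actual) do
  # Viewed as a test case, the repro steps become the test steps and
  # the expected behaviour becomes the pass criterion.
  def as_test_case
    { name: "Regression: #{summary}", steps: steps, pass_when: expected }
  end
end

bug = BugReport.new(
  42,
  "Login fails for emails containing a + sign",
  ["Go to the login page", "Enter name+test@example.com", "Submit"],
  "User is logged in",
  "Error 500 page shown"
)

p bug.as_test_case
```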

It's hard, it's uncomfortable, it's work to change, but the change in perspective that's possible will blow you away and make testing the most exciting, challenging, engaging, fulfilling job you can stay up late and blog about!

I feel better now, time for bed.