
Tuesday, 1 September 2009

An Introduction to Web Test Automation with Selenium and Ruby

As readers of this blog and other random ramblings of mine across the internet will know, for the last eight months or so I've been working a lot with Selenium and Ruby. It's been a joy and a pain at the same time. Beguiled by how easy the Selenium tools appeared to be, I got started only to discover they weren't as accessible as they first seemed. A definite case of 'easy when you know how'.

It seems there are a good number of folks out there using the tools to varying degrees and in different ways, which unfortunately adds to the confusion. It confuses because most folks taking their first steps with the Selenium toolset soon hit the same initial issues: how to set the toolset up, which language to employ, and how to get past the many gotchas Selenium has waiting for us.

Trawling over the forums - not just the smaller community forums but the 'official' ones too - was not that helpful overall. Perhaps that's because I was intent on asking very specific questions and/or my focus was on using Ruby. Maybe if I'd just wanted to use Java, or to buy someone's course, I'd have had better luck. In any case, the lack of answers to my and others' less-than-trivial questions was frankly disappointing. Disappointing in a way that makes me feel folks are purposefully withholding the knowledge they have. There's a possibility that no one really knows, but across five of the most trafficked forums? I doubt that. As if to confirm my deluded suspicions, I even found someone I'd have considered a 'Selenium expert' posting that they'd stopped posting - because the forums were full of 'problem posters'. Well, I hate to spring a shock fest on you, but that's kind of the idea of public forums most of the time. FFS, can you say 'testing community'?

So where do you turn if you're a Selenium newbie wanting a clear idea of how to set up your Selenium web test framework? Hoping for other people's success spelt out in such detail that you can follow it step by step or translate it into your environment? There are the official docs at Selenium HQ but, as good as they're getting as they become more complete, they're hardly an idiot's guide. Where's the guided, instructional, friendly tutorial and walkthrough with helpful pictures and handy hints? They're rhetorical questions, and they're the reason I wrote "An Introduction to Web Test Automation with Selenium and Ruby".

Is the book an answer to every problem and perspective you might want answers for? Not a chance. This book is an introduction, not a definitive guide, and that's the point. It's written so that within a week the reader has a fully working Selenium-Ruby automation framework up and running, including key process techniques for managing how it's used. The book covers the Selenium IDE, Remote Control and Grid, provides numerous examples of Selenium-Client keywords and commands in use, introduces the Ruby language and techniques for designing automated test scripts, walks through the creation of 10 key test script templates, and discusses ideas on managing test suites and test environments.

An Introduction to Web Test Automation with Selenium and Ruby is the book I was looking for eight months ago, written to capture what I've learned bringing the framework to life across its now multiple live deployments. It's my attempt to help others avoid wasting time they don't have and which needs to be focused on delivering great testing. There's a lot to learn, and it's insane to think every tester who wants to learn it is scrabbling around on forums and sites trying to piece the bits together. This book is a vehicle to kick-start a more open and robust form of discussion and sharing by others in the community. That discussion will be of critical value and importance, as there are myriad ways to implement the Selenium tools and the testing community needs to hear what they are.

Book Proposal
Having got just two-thirds of the way through writing the book, I decided it was time to start sending out the draft manuscript and see what publishers make of it. The first to get sight of the manuscript is www.pragprog.com and it should hopefully be reviewed this week. Let's see what the feedback is. Once I know where things are headed I'll check in and see what I'm 'allowed' to share in advance of any publication.

So please do me and the would-be Selenium test community a favour and put the word out: An Introduction to Web Test Automation with Selenium and Ruby is out looking for a publisher.

Mark.

Sunday, 9 August 2009

Behaviour Driven Development (BDD) and the Testing Profession

I posted earlier in the year about BDD. If you've not encountered BDD, have a wander over to http://behaviour-driven.org/ and take a quick read, or hit YouTube and watch Dave Astels' (http://techblog.daveastels.com/) Google TechTalk http://www.youtube.com/watch?v=oOFfHzrIDPk. BDD isn't some flavour-of-the-week new thing; it's been around a good few years, so hitting Google or Bing will turn up even more material.

I've also read about approaches such as Agile Acceptance Testing (http://snipurl.com/pio2q), and they offer good ideas about focusing on what's important to the customer. What really struck home for me about BDD is the focus on Behaviour. As Michael Bolton said, "... our clients don't value the code as such; they value the things the code does for them."

Behaviour Driven Testing (BDT)

There is a slight 'gap' in all this BDD goodness though, and that's the integration of the testing world's perspective and practices - especially those practices that could be applied, as-is or modified, to fit alongside a team practising BDD. As much as I have the warm fuzzies for BDD, it comes across as yet another far too developer-centric perspective. Maxim No. 27: 'Software development is more than just the writing of code'. What's more, I'm a tester; it'd be odd to talk about me doing 'development' and having to say 'obviously I mean testing...' every time I mention the word.

Hence why I don’t talk about BDD as such, I talk about Behaviour Driven Testing 'BDT', a methodology I developed to articulate how a test team could work in a BDD environment and consider what complimentary techniques/approaches might look like. If we’re enlightened enough to realise that ‘development’ is more than writing code and is all the tasks needed (BA, PM, Dev, Test, Support, ...) to get code out of the door then perhaps we could think just BDD but the world is a more fragmented and messy place so unfortunately we talk as if development just means writing code.

Replacing, renaming BDD?
I recently exchanged messages with David Chelimsky (http://www.davidchelimsky.net/) (and Bret Pettichord (http://www.pettichord.com/)) about BDT, and a question that came up was: why call BDD by the name BDT? A valid question, where the short answer is "I'm not". To be clear, BDT is spawned from BDD and uses many BDD practices, but the purpose, the objective, is to deliver testing of the application code, not the development of the code. BDT sits next to BDD; I'm not renaming or replacing it. BDD is the wellspring that provides the framework and thinking for what BDT is.

When I read about BDD and then read the RSpec book (http://snipurl.com/pipiz), in a stroke I found the answer to many tester pains. BDD:BDT focuses testers back on Behaviour and away from the erroneous belief that they are testing code (what do you think testers think they're doing when they use Boundary Value Analysis?). It eliminates friction between developers and testers because, firstly, we're not testing code (directly) and, secondly, we both use Cucumber Features, so testers can now write more tests (focused on Behaviour) that align with the work Development is doing. Cucumber is a catalyst that gets customers, BAs and Test working together in ways they've been dreaming of, and it's the first tool that lets Testers help Developers increase velocity.
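The shape of a Cucumber Feature is what makes it such a catalyst - plain language that a customer, BA or tester can read and write. A hypothetical fragment, invented for illustration rather than taken from any real project:

```gherkin
Feature: Cash withdrawal
  As an account holder
  I want to withdraw cash from an ATM
  So that I can get money when the bank is closed

  Scenario: Withdrawal within the available balance
    Given my account balance is 100
    When I withdraw 40
    Then I should receive 40
    And my remaining balance should be 60
```

Each step is then backed by a step definition - in a Selenium-Ruby set-up, typically Ruby code that drives the browser and makes RSpec-style assertions.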

When testers then start writing edge and fail cases, end-to-end scenarios, etc., they, like Developers, can define those in Cucumber and implement them in the RSpec format. The RSpec approach 'makes sense' to testers because it reads so easily. If we're not just doing manual testing, then combined with Selenium/Ruby it gives us a really accessible route into automation that focuses on Behaviour. With Selenium/Ruby we also get to use RSpec to structure our automation tests and kick out neat reports too. Hang on - we've just found a way for Test to get really involved in (and in some aspects lead) the project, with all players, aligned with and supporting Development instead of clashing with them, and we've got a kick-ass way to write automated tests that align with Development's tests yet keep us focused on testing Behaviour, not code. Is the paradigm shift this represents hitting home? ;)

With RSpec we have a way for testers to write scripted tests that mean something to them and that speak to what they should focus on: Behaviour, not code. I suggest RSpec is as significant to the testing profession as Exploratory Testing (http://snipurl.com/pipjv). RSpec is the Aha! that smacks you in the face as hard as Exploratory Testing did, and it provides the paradigm shift in your tester thinking that both excites you about how you'll work in the future and makes you lament the pratting about you've been doing.

I could go on and on; the point is that all of the above is not (BD) development of application code. It's an absolute focus on testing, but testing with an equally militant focus on Behaviour. That's what BDT is about. It's the testing methodology spawned from BDD that focuses testers back onto Behaviour and aligns them with the rest of the development effort. A BDD|BDT project helps testers fulfil the roles the business hopes they will, but until now didn't have a way to. Are there other ways to approach the issues of development and test team integration, engaging the customer across all teams in a congruent way, enabling all stakeholders to contribute to each other's success, and so on? Maybe, but I've yet to work with them.

A final thought: every time I present my BDT methodology to testers, it's great to see the impact of what it enables them to do sink in. When customers hear about BDD:BDT, Cucumber and RSpec, they 'get it'.

Please email me for a copy of this presentation. I had to take it down as it was branded for another consultancy ;(

Mark.

What test technology and tools are you working with?

I've been working a lot lately on elaborating a Selenium and Ruby web test automation framework. It's going well in itself, and I've used the framework successfully with a number of clients over the last three months or so. As I always say - lots more still to learn!

It came as little surprise that the use of Selenium and Ruby was not what most expected, as the likes of QTP and Java are more commonplace. This prompted me to think about:

What technologies and tools are you using now in the testing field?

From my side, the tech and tools sat on my desktop are:

* Selenium: IDE, RC and Grid
-- http://seleniumhq.org/
* Ruby: for creating Selenium-Ruby based test scripts
-- http://www.ruby-lang.org/en/
* Rake: Build programme
-- http://rake.rubyforge.org/
* NMQA Vienna: as a (free) test management tool
-- www.freetestmanagementtool.com
* MS Virtual PC: To run various OSs to use with Grid
-- http://snipurl.com/pij0i
* SciTE: Text editor
-- http://www.scintilla.org/SciTE.html
* SQL Server Management Studio (free Express edition): for working with SQL DBs
-- http://snipurl.com/pij7
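Of these, Rake is the glue that runs everything else. A minimal, hypothetical sketch of the idea (the task names are my invention, not from any real framework): tasks declare prerequisites, so suites chain together automatically.

```ruby
require 'rake'
include Rake::DSL # make the Rakefile-style `task` method available in a plain script

log = []

# Hypothetical suite task; in a real framework this would shell out to
# run a set of Selenium-Ruby test scripts.
task :smoke do
  log << "smoke"
end

# Declaring :smoke as a prerequisite means it always runs first.
task :regression => :smoke do
  log << "regression"
end

Rake::Task[:regression].invoke
puts log.join(" -> ")   # prints "smoke -> regression"
```

In a Rakefile proper you'd drop the `require`/`include` lines and just run `rake regression` from the command line.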

So, if you had to rebuild your laptop tomorrow, what technologies and tools would you put back on there? If you went to a client site tomorrow to deliver some testing, what would you expect to be working on?

Mark.

Friday, 31 July 2009

9000 hours of billing with no single bug found yet

In a discussion on Test Republic at http://www.testrepublic.com/forum/topics/code-coverage-with-test-cases Pradeep Soundararajan summarised the evil, immoral, corrupt dark heart of the commercial side of the testing profession so well that I wanted to capture his words here so they won't be lost. The commercial reality is just that, a reality we can't escape, but I hope we can generate amazing revenue while adding value and avoiding fleecing our clients. Too many consultancies have been sued because they didn't - you know who you are.


Reply by Pradeep Soundararajan on July 28, 2009 at 6:17pm
Mark,

Thanks for not posting it earlier. It gave me an opportunity to get you post it.

I do exercises in my exploratory testing workshop that demonstrates that those who seem to care so much about RBT aren't actually caring about it. As Michael Bolton pointed out somewhere in Test Republic that those who profess so much about documentation themselves don't bother to read and write good documents.

A good RBT requires skills that people appear to be reluctant to build. By saying that I don't mean, "Yeah, I have built it". I have been trying to develop it as much as possible and constantly practicing it so that I am prepared for a war anytime.

My perhaps more militant stand against RBT (the misuse of terms and vocabulary for ‘coverage’ aside) is because the Behaviour Driven Testing approach I advocate more and more these days focuses on behaviour relevant to the user. Ticking off Requirements against test cases is of no use to the user. Answering the questions above and formulating testing around them AND the requirements is. "

Exactly. You would call it misuse and businessmen would call it fair-use. I am starting to realize that the scripted approach survives because there is more money out there for business people through that.

Mark, outsource work to me:

I would spend a couple of days analyzing your requirement document and bill you for X hours per person involved in my team.

I would spend a couple of days writing a test plan document ( and not refer to it ) and bill you for 2X hours per person involved in my team for preparing it.

I would spend a month or two writing test case document ( and refer only to it ) and bill you for 10X hours per person involved in my team for preparing it.

I would then again create a traceability matrix ( just to fool you and your boss about our coverage ) and bill you for 5X hours per person involved in my team for preparing it.

18X hours per person involved in the team. Assuming X is 50 hours and there are about 10 members in my team, that's 18 * 50 * 10 = 9000 hours of billing with no single bug found yet. If you are paying $20 an hour per person on an average, you would have actually given me a business of $1,80,000 without me or my team finding any bug yet.

Then comes the test case execution cycles and more billing. Why wouldn't a businessman be glad about the traditional approaches to test software?

Lets bother about the users of your product later during our maintenance billing phase ;-)

Tuesday, 28 July 2009

Lack of vendor support for Open Source

The lack of vendor support is a real issue for Open Source and free tools. It may seem logical that paid-for tools will get superior support from the folks who actually created them and have a commercial interest in promoting them.

This can certainly be true. NMQA (who I work for) created the Vienna test management tool, which we support both through paid service contracts and through queries raised by the test community. However, the matter isn't as simple as proprietary tools getting superior support over Open Source or free ones.

For example, NMQA also offer a Selenium-Ruby automation framework (in various forms) that we wrote fully ourselves and support as aggressively as Vienna. The reason is that we see no difference between the two in terms of the support a customer needs: there's no difference between supporting a proprietary solution we've developed in proprietary code and supporting an Open Source / free framework, constructed from open source code, that we've set up for a customer.

It’s when a customer tries to hit the internet and read online documents and forum postings to do it themselves the trouble starts. Think about that for a second, inexperienced staff trawling through spurious sources of information as the way to learn and implement a key technology, what a ridiculous strategy. Yet it’s the one often taken. Open source tools are not an easy solution to adopt unless there is expertise available, in-house or via a consultancy. The learning curve that inexperienced internal staff will take on is usually too great a burden for organisations to support and won’t deliver anywhere near as fast as is needed. Add to that the lack of trusted sources of information and we begin to see why organisations are shying away from Open Source.

There’s the issue – organisations trying to wing it on their own with Open Source solutions will mean they suffer more pain than if they buy proprietary tools and a service agreement. The best way is to engage a consultancy or specialist individual who can provide the same level of support you’d get buying a support contract for a proprietary tool and that way there’s no difference between proprietary or Open Source solutions. Later on the difference is saving tens of thousands in service contracts as insurance in case something goes wrong, also ridiculous.

Mark Crowther.

Wednesday, 22 July 2009

Code Coverage with Test Cases?

It hasn't really struck me until now - why do testers think of coverage in terms of code? Why aren't we thinking of coverage in terms of what the system does, or should do - i.e. Behaviour?

Thinking in terms of what the system should do is why we're testing, isn't it? Isn't that what the customer / user wants us to be making sure of before they get the software? Isn't focusing on behaviour how we assure that UAT is a success?

Never in my career have I really done 'code coverage'. I've played with it, talked about it with developers, even helped define acceptable coverage levels, but I've never run code coverage tools and declared that my tests cover xx% of the code. The developers I've worked with have - I remember this being the way when I was at EA. They put together Unit Tests and Component Integration Tests, ran the tools or did the math, and declared coverage at a certain percentage.

What I have done, however, is declare coverage of Requirements. I've analysed what Test Scenarios exist - things I would do with the software to demonstrate the Requirements had been delivered on (coded the right thing) - then written Test Cases to exercise the Scenarios (coded the thing right) and find those lovely bugs.
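That kind of requirements coverage is simple to compute mechanically. A minimal sketch, with the requirement IDs and the test case mapping invented for illustration:

```ruby
# Map each requirement to the test cases that exercise it (data invented).
req_coverage = {
  "REQ-001" => ["TC-01", "TC-02"],
  "REQ-002" => ["TC-03"],
  "REQ-003" => []               # no test case yet - a coverage gap
}

# A requirement counts as covered if at least one test case exercises it.
covered = req_coverage.count { |_req, cases| !cases.empty? }
percent = (100.0 * covered / req_coverage.size).round(1)

puts "Requirements covered: #{covered}/#{req_coverage.size} (#{percent}%)"
req_coverage.each { |req, cases| puts "#{req}: no coverage" if cases.empty? }
```

The interesting output isn't the percentage so much as the list of gaps - requirements nothing demonstrates.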

Hmmm.....

It's testing with a focus on Behaviour

Mark.

What are your Selenium challenges?

It seems that when using Selenium natively there are a number of common challenges people encounter. Here’s my list of things I encountered and thought “Hmm... how do I do that then?”
What would you add? What have you struggled with or are struggling with now?

• Dealing with pop-up windows
• Testing dynamic text or content
• How to go about testing Flash
• Capturing screen shots, either to file or in some form of report
• Iteration of the test case, running repeatedly with minor changes
• Data Driven Testing, pre-cooked data or generating on the fly
• Generating useful test status reports
• Setting up Remote Control
• Setting up Grid
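Several of these - iteration and data-driven testing in particular - reduce to looping one parameterised script over a data table. A stdlib-only sketch of the pattern, where both the data and the check_login step are invented stand-ins for real Selenium calls:

```ruby
require 'csv'

# Pre-cooked test data, one row per test iteration (values invented).
csv_text = <<~EOS
  username,password,expected
  alice,secret1,welcome
  bob,wrongpw,error
EOS

# Hypothetical stand-in for the Selenium steps: a real script would type
# these values into the login form and read the outcome off the page.
def check_login(username, password)
  password == "secret1" ? "welcome" : "error"
end

# Run the same scripted steps once per data row and collect verdicts.
results = CSV.parse(csv_text, headers: true).map do |row|
  actual = check_login(row["username"], row["password"])
  [row["username"], actual == row["expected"] ? "PASS" : "FAIL"]
end

results.each { |name, verdict| puts "#{name}: #{verdict}" }
```

Generating data on the fly is the same loop with the CSV swapped for a generator; either way the test script itself stays untouched as the data grows.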

There is a way around all this: use an outsourced software testing partner, such as my own company Test Hats. However, with a little work you CAN fix these issues yourself, and now that Selenium 2 is out, some of them have gone away.

Thoughts? Leave a message!

Mark.

Wednesday, 24 June 2009

Recording bugs in development, don’t bother.

There was an interesting thread of tweets on Twitter (no less), started by TestingGeek I believe, asking about the raising of bugs in agile projects.

In a traditional / heavyweight project I guess it's a done deal. We find bugs in the testing phase - in any phase after the development phase, in fact - and they go through the bug lifecycle of log/triage/assign/fix/re-assign/re-test/loop-de-loop/close. No worries about tracking the bugs, building bug taxonomies, doing root cause analysis and corrective action planning, and of course generating those all-important measures and metrics.

I think I first heard Elisabeth Hendrickson suggest that bugs in an agile project should just be fixed and not logged/tracked. On first hearing, this seemed a heretical suggestion. What about all that lovely stuff I can do when tracking bugs, for one, and aren't I letting Developers 'get away with it', for another? Hmm.... Then I recalled that when I was at CAPS-Solutions working for Martyn Arbon about four years ago he'd suggested roughly the same thing: test stuff early and get the Developers to fix it while they're still on the project. The latest person to subscribe to this perspective was Tien Hua at NMQA.

It struck me today that I've never really considered what my view is on this. In recent years I've definitely been letting the Developers get away with it. I distinctly remember a project at CAPS that went out earlier than previous projects, with more in, with more testing done, and of a quality that surprised the customers. Full life cycle testing too, right up to compatibility and UAT, and we finished about two days early if I recall correctly. Problem was, we recorded next to no bugs - but we were finding them, that I do know.

What I remember feeling new was that we were talking to the Developers as we found bugs, encountered issues, or got stuck. Essentially the test team were paired with Developers, reporting issues to the Developer as they were found. A quick fix and retest later and we were off testing new cases. Testing progress was fast, test cases were moving to a passed state, and the relationship with Development was positive. I hadn't heard of Scrum, agile, et al. at that time.

These days I’m inclined to follow roughly the same approach but any that are non-trivial are recorded. Trivial being the Developer can literally change it there and then as it was a brain glitch moment (variable not initialised, variable scope context, incorrect typing, wrong table or file name referenced, etc.). Sure we might need a rebuild to get the change but the change is done in the time it takes to shout across the desk. For non-trivial bugs found at this point I’ve (once) had the team post a Bug Card on the wall so it becomes part of the backlog of tasks to be worked on that sprint/iteration, a blue sticker get’s attached when it’s accepted/estimated/assigned, orange when fixed, green when closed. I recall some getting accepted and de-prioritised and resolved in later releases.

If the Developer is still coding and progressing that bit of code towards being 'done' and ready for integration, then the more I can do to help - and not be a tester 'getting in the way of progress' - the more I feel I'm fulfilling my role in the team. On a personal level I'm no longer interested in catching out the Developer while they're still working on something that's being crafted. It isn't fair and doesn't help. I wouldn't want the same done to me while I was writing tests, but insight that helped me complete them would be (and is) very welcome. If we think about this for a moment it's easy to see why Developers have got so peeved with testers in the past.

The change comes when the Developer declares they're 'done' and (usually) moves on to the next item to code up, having integrated the current item so the test team can run the next stages of testing. Now a slight distance starts to grow between the Developer and the code they wrote, just as in a traditional approach, where a Developer might complete development for one project and move on to the next. From here on - from integration to the end of the product's life in production/live - I see the value of applying a bug life cycle that supports whatever tracking, taxonomies, RCA, etc. might be found useful.

What are your views on this? Do we 'miss out' by not tracking ALL bugs? Where should tracking be done, and where not?
How have you balanced the desire to measure/manage versus make progress/collaborate?

Elisabeth Hendrickson gets it, Tien Hua gets it, I think I get it - how about you folks?

Monday, 22 June 2009

Feedback to PM's - weekly or daily?

Driving on my way to work today I started thinking about ‘feedback’ on projects and how often we’ll give feedback to a Project Manager on a traditional project. My experience has been that the feedback is generally given once per week, at the Monday morning Project Management meeting.

I’d almost forgotten that this had been the way I used to provide feedback on how testing was going. Feedback once per week would see me having kittens now. I’m so used to the agile style of running a daily stand-up that the thought of weekly updates just seems so alien.

I imagined going to my online banking service and after logging in seeing a message that said ‘No updates, please come back Monday’! A moment after thinking this I realised in this situation I would immediately panic and be ringing the bank. A realisation which came a moment before I realised I’d missed my turn off! Doh!

Thursday, 18 June 2009

June SIGIST Talk - 'Pragmatic Testing - the Middle Way'

The following text is the transcript of the talk given by Mark Crowther at the June SIGIST in London.

What is Agile/Heavyweight/Testing anyway?
Often we see, on forums, blogs and so on, discussions that ask for - and responses that aim to provide - a definition of what Agile testing is in practical terms.

In the same way we might use say Waterfall as a definition of what Heavyweight is, it sometimes seems that we're looking for a diagram or model as a way to define Agile.

Or perhaps by saying that Heavyweight is all about documents and process we can compare and say Agile must therefore be about avoiding documents and process.

The issue is these approaches of simple comparison will always fail to help us clearly define Agile and Heavyweight. The fact is that right now Agile and Heavyweight are at best metaphors or perspectives on how to approach testing practice.

They're not clearly defined paradigms; that is, there's no robust and complete definition of the practices that make up Heavyweight or Agile test approaches. So, in my view, we as a profession don't clearly and collectively know what being in the Agile testing mode of practice is, but we expect - and are expected - to deliver as if we do.

We don't have a clear set of practices and approaches that we can point to and say 'that's agile or that's heavyweight'. We have a few random things that sit in each area but mainly a subjective idea of what it means to be Heavyweight or Agile.

Who's read or contributed to discussions on forums where the question is about the tester’s role in an agile team?

It comes up every four or five weeks on the forums. The other one being 'I've been told we're going agile, what does that mean?!' You can almost hear the tester's frightened voice behind the question.

The very fact we're at a SIGIST dedicated to testing 'in practice' tells us there's still a lot of thinking and discussion to do, not just around Agile either. If it was all signed, sealed and delivered we'd be talking about something else, here and within the testing community generally.

I previously worked at AOL and when I was there I had the good fortune to work with Thoughtworks. If you're familiar with them you'll know they could readily be considered thought and practice leaders in the agile development domain.

In fact around 8 years ago, on a project they were in to help deliver, they introduced us to crazy new ideas such as Daily Stand-ups, backlog items and Wikis. They were doing agile development and testing, 8 years ago... and here we are today as a profession still discussing what it means to do Agile testing in practice.

I believe there's a fundamental reason for this and it's that our current way of thinking about what Heavyweight and Agile testing practice is - is deeply flawed.

It's flawed to such a degree that if we continue in the current mindset we'll carry on re-asking the same questions for years and still not arrive at the answers we're looking for.

False Expectations of the Testing Future
My first contention is that we hold a false expectation that the testing profession is undergoing an evolution from a Heavyweight view of the testing world to a more Agile centric one.

In the same way we might consider ourselves to have matured from an ad-hoc / unstructured testing world into the heavyweight.

That in some way we'll become progressively more enlightened about how to realise testing process and practice that moves us towards Agile, and in so doing moves us beyond the Heavyweight.

It's my view that this destination doesn't exist and that in fact we're already seeing what the future of the testing profession will look like. Consider what happens in practice when you're on projects and planning or actually delivering testing.

You may consider a project to be an essentially heavyweight one, perhaps testing in a regulated industry. But how often in those projects are you asked to create all the documents you proposed:
- but could you maybe not write out the test cases with all the steps?

Perhaps the project is lightweight, requiring a more agile approach, but how often are you then asked to work from iteration to iteration...
- but provide details and durations of all your tasks for the entire project just for budgeting and planning purposes?

- or could you provide a more complete Gantt chart, as that's what the customer is expecting to see, instead of just the dashboards you were planning to provide?

Even in Ken Schwaber's book, Agile Project Management with Scrum, he mentions at the end of almost every chapter that he has to step away from 'pure' Scrum and compromise on the way he would like to run things, in the ways just described.

The evidence from my experience, and from what I've heard and read from others, is that the reality of testing practice is neither purely traditional/Heavyweight nor Lightweight/Agile, but more of a Hybrid of the two.

I'd suggest that the future of testing practice isn't going to be defined in terms of Heavyweight or Agile as we think about those perspectives today; instead, testing will be defined in ways more similar to the Hybrid approach that we're experiencing on most projects now.

I believe that this hybrid approach, driven by the project's constraints and practicalities of the delivery, is more representative of the 'normal' state of testing practice that we experience.

However, while we continue not to realise that 'hybrid is actually normal testing', we'll continue to hold the mistaken belief that we must be purely heavyweight or purely agile, or it's not quite right.

What's more, in so doing we'll continue to try to evolve into Agile from Heavyweight, to the exclusion of practices and techniques we keep experiencing the need for.

Meaning and Impact of a Testing Paradigm
Adopting this perspective of hybrid testing as normal, we can more easily move towards defining a central, stable paradigm that more accurately describes the core of our profession.

In saying this I define a paradigm as 'a set of exemplary practices that define the core principles of a discipline'. This collection of practices comes together to give us a definition of normal testing, in the same way we have 'normal science'.

Normal science is often referred to as 'thinking inside the box' and represents the day-to-day accepted ways of approaching a scientific discipline.

The idea of a Normal Testing paradigm can be considered in the same way, as representing the exemplary set of practices we'd use in the day to day testing situations we'd expect to find ourselves in.

I'm not talking about defining immutable best practices here by the way, but I do believe it would be acceptable to refer to them as good practices. In many professions this set of practices is collated into a peer reviewed Body of Knowledge (BoK) by perhaps the governing, chartered institute of that profession.

We have the start of that in the ISTQB but it doesn't go far enough as the ISTQB doesn't publish the actual knowledge, just the syllabi for various courses.

Defining the testing knowledge relies instead on accredited commercial organisations designing courses or authors writing books that provide material that can be used to deliver the syllabi.

Not everyone follows ISTQB of course. As such, the testing profession's approach to defining the actual knowledge that would deliver our 'normal testing paradigm' is at best commercially biased, at worst fragmented and uncoordinated.

This situation therefore perpetuates the issue of us never being able to 'finally' define what Agile, Heavyweight or a normal testing paradigm might look like - at least not in a way that is consistent, agreed and accessible to everyone in the profession.

It's this situation that I suggest causes us to remain confused about topics we knew about 8 years ago or to ask the same questions on forums every 4 weeks.

It's also what causes us to perceive the work of testing luminaries, such as James Bach, Elizabeth Hendrickson, Michael Bolton, etc. as the new, emerging paradigm we must follow or get left behind.

It's also what causes us to feel exasperated about what we should be learning and the approaches we should be taking, and why there seems to be so much disagreement about what should by now be done-and-dusted fundamentals.

Attempting to make sense of this is why I say that, as most of us go through our careers, we inevitably sway from Heavyweight to Agile to somewhere else, and often feel very confused along the way.

It's also a barrier for newcomers to the profession and limits our standing as a serious, bona fide profession in the eyes of the rest of the world.

So what to do given the current situation?

The Middle Way – a Toolbox for Testing
About 12 months or so ago NMQA had a series of in-house Workshops that essentially touched on the points I've made so far.

We realised that even when called in to work on what were suggested to be Heavyweight or Agile projects, they were always 'Agile, but...' and 'Heavyweight, but...' or something entirely different.

Now, as a consultancy we'd responded to these situations and delivered - but drawing on our previous experiences as managers and members of test teams, we recognised that getting into this situation isn't limited to consultancies.

During our careers as test professionals we're going to encounter differing environments. Maybe we'll work in:
- digital media or the online space where a more ‘agile’ approach is needed
- pharma or military where regulation needs a more ‘heavyweight’ approach

This is the 'normal testing paradigm' that's at the heart of our profession, and being entirely focused on, schooled in, or beguiled by either the Agile or the Heavyweight perspective is going to limit us.

The practical outcome of this perspective at NMQA was to develop a ‘Testing Toolbox’ that contains all of the core models, practices, processes, documents, etc. that we would reasonably expect to need for most projects.

We used this to define, in real and objective terms, the 'normal testing paradigm' as NMQA experiences it: a collation of everything stable and accepted that we felt we could place 'inside the box'. This is the same idea as I mentioned for the scientific paradigm earlier on.

But what about testing practices and the work of people such as James Bach, Elizabeth Hendrickson, James Lyndsay, Matt Heusser, Lisa Crispin, Michael Bolton, and many others I could mention?

It's important that they and the practices they promote, and agile in general, are not mistaken for the sum of the testing profession, even if we often fall into talking about them as if they are.

Continuing the analogy from earlier, I'd suggest a lot of their work is a great example of 'thinking outside the box'. These people are thought leaders and practice leaders, and they challenge and stretch us by what they think and do.

Like experimental science, the work they do pokes, prods, stresses, refines, re-defines, replaces, improves and tests the accepted body of knowledge. Every now and then from this we get a new or more powerful tool in our testing toolbox.

Exploratory testing is probably the most recent example that springs to mind.

We need to realise that 80% of 'us' need what makes up the normal testing paradigm, as we have day jobs where we can't experiment; we don't have the time or opportunity to dally with interesting and amusing experimental practices.

This is how I see these luminaries, these thought and practice leaders: as a vital part of the energy and vibrancy the profession needs to overcome the inevitable entropy that would occur in maturing the profession in the way I'm discussing.

Getting Involved
There are two things I don't have time for in this talk today: questions, and the opportunity to show you the tools in the toolbox.

What’s more this is the first time this perspective has been presented to the testing community. We know what it looks like within NMQA but I’m interested in seeing what it means to the wider testing community.

Please do something for me - what I want to ask you to do is participate and contribute to this discussion. Here's how:

• Visit www.softwaretestingclub.com and go to the forums. There’s already a post there waiting for your feedback, thoughts, disagreements, alternate perspectives.

• You’ll find a link in that post to a survey about this talk, click the link and complete the survey.

In return - if this talk has got you thinking, if the idea of moving beyond the 'agile v heavyweight' discussion or the 'normal testing paradigm' resonates with you, and if building a 'Testing Toolbox' for your organisation is of interest...

Drop me an email or call me and I'll come and visit you and your team for a Workshop-type meet of an hour or two, where we can run through some of this again. I can show you some of the practices, template documents and test techniques, and we can think about what it means in the context of your organisation.

It’s my view that we need to fundamentally rethink the way we view our profession. The old perspectives have served us well so far but I don’t see them doing so well in the future.

I’ve enjoyed talking to you this morning and I hope this has been of interest to you.

Thank you.

Mark Crowther

Thoughts after June BCS SIGIST

I enjoyed SIGIST this time. Last time I went I had the overwhelming urge to scream something like "What are you people thinking!?" or maybe just "STFU!", which may have ruined what little reputation I have as a thoughtful individual. Thankfully Michael Bolton was there this time so the bar was raised.

I was also seriously pleased to have a few beers with him and other folks last night, and today I just spent time absorbing what he had to say. James Lyndsay was there today too; I think he sneaked (snuck?) in. It was a bit of a double-take when I saw him, never having met him before either. My colleague Ian has - he went on James' Rapid Test Course and is a changed man. Michael suggested a line of study to me; it's little things like this that mean a lot.

It's not wrong to say that the thinking Ian is coming up with now, combined with my own stuff, is rocking our clients' world. 18 months ago when I joined NMQA I finally got time to really 'think' (write, post, ask, rethink) and study the work of folks like James B, James L and Michael (crap loads still to learn of course). In the few months since being taught by James L, my colleague and I are as good as on a par with each other as regards the really kick-ass test approaches. (Yes, we still have different experience, unique perspectives, etc. - we're not twins.)

I think that having been absorbed in this way of thinking is why I'm writing this blog when I should be in bed. It was the presentation after mine that had me twitching; I couldn't f*&^%&ng believe it frankly. I doubt he'll ever read this blog, hence my uncharacteristic spleen venting. Dude, are you stuck in a f*&^%&ng timewarp? Listen to me... "No, no, no, no, no" to infinity.

I can't remember the last time I wrote a 'real' Test Strategy in the sense of the 47 page monster example I have on my laptop, which I copied when I left a previous employment about 7 years ago. I remember liberating it because I thought it would come in handy; it didn't. I just checked my website too and see I have a Test Strategy Template there.

I have a confession to make: since posting it up there maybe two years or so ago, I've never used it. I hate them, I don't write them; I contribute to a Project Strategy if there's one. If I'm creating a document like this I write Test Plans that are never over two pages.

Strategies like this are as bad as the typical test case suites that make me want to slit my wrists, because that would be quicker than watching my life being wasted on them. If I'd intended to spend 2/3 of my time writing documents, updating them, explaining them and version controlling them, I'd have become a bloody documentation clerk or administrator. I wasn't hired because I can use Word.

By the way, I really don't care if I never see a formally reviewed, 40 page, version 2.0, signed and sealed Specification. It won't stop me testing, I don't need it to test; in a twisted way I have more fun when I don't see them. Anyway, it makes me cringe when the organisation I'm working in churns these out.

As for 'why am I testing if there are no requirements?' - there shouldn't even be any code! So it won't be a problem. Even if the document is missing I've got this crazy-ass idea: I could talk to the developer! (Conference, Reference, Inference - Lessons Learned)

We're testers, so ffs focus on testing. If you want to look after all that crap, become a project manager or something. It kills me to see what you presented - no one cares, just focus on testing.

At NMQA we have a couple of phrases that get reused often:
* "Just enough definition, just enough control - nothing superfluous to needs" and
* "Test stuff - a lot!"

Test, test, test - that's why we're here. Be militant about testing, the doing of it. Not being able to test because you're waiting for documents, when there's software available, is an excuse - an excuse incompetent testers use.

Go look at how you can use Bug Reports as test cases, or post-it notes on a white board as a requirements catalogue. Shake your thinking up and read, read, read, read, read - then go read Lessons Learned in Software Testing.

It's hard, it's uncomfortable, it's work to change, but the change in perspective that's possible will blow you away and make testing the most exciting, challenging, engaging, fulfilling job you can stay up late and blog about!

I feel better now, time for bed.

Wednesday, 13 May 2009

Behaviour Driven Development

Behaviour Driven Development (BDD) - The future of testing.

My view is that BDD is the bridge between development and testing, representing the paradigm shift in thinking that's needed to ensure closer integration between the development and testing professions. BDD provides a logical interlink between what I define as 'Test Requirements', which are drawn from functional requirements, and the Test Cases executed to prove those Test Requirements have been achieved.

The BDD approach allows the Test Requirements to emerge more naturally and completely during development. The tester can then relate functional requirements to implemented behaviour and write effective test cases that demonstrate the customer has what they wanted, in the way they wanted it. Using solutions such as Selenium/RSpec, a tester can create automated tests that help move functionality more rapidly to a 'done' state and keep in the spirit of focusing on behaviour.
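To make that concrete, here's a minimal sketch of the behaviour-first shape of such a test - purely illustrative, with a toy Session class and invented names, not code from any real project. I've used plain Ruby so it runs standalone; in practice you'd express the same Given/When/Then behaviour with RSpec's describe/it blocks and drive the real application through Selenium.

```ruby
# A toy 'system under test': a hypothetical login component invented
# purely for illustration. It is not any real application's API.
class Session
  attr_reader :user

  def initialize
    @user = nil
  end

  # Hypothetical rule for the sketch: any non-empty credentials succeed.
  def log_in(username, password)
    @user = username unless username.empty? || password.empty?
    !@user.nil?
  end

  def landing_page
    @user ? "dashboard" : "login"
  end
end

# Behaviour (the Test Requirement, phrased in the customer's terms):
# "Given a registered user, when they log in with valid credentials,
#  then they land on their dashboard."
session = Session.new                           # Given
logged_in = session.log_in("tester", "secret")  # When
puts logged_in                                  # Then: true
puts session.landing_page                       # Then: "dashboard"
```

The point is that the test reads as a statement of behaviour the customer asked for, rather than as a walk through the implementation.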

Shifting thinking from TDD to BDD also eliminates the (curious) mental block development can suffer around the testing they do and the testing testers do. The BDD mindset re-informs testers as to their valuable role within teams using the approach, and eliminates the (also curious) identity crisis testers can suffer when part of an agile team.

Sunday, 26 April 2009

Why there are bugs in software

Why are bugs present in software?

The development of software is a complex discipline, the output of which is never exactly the same as at any previous time, nor approached in exactly the same way using exactly the same techniques.

It's obvious that the exact actions used to develop software cannot be repeated precisely. This is not manufacturing: there are no poka-yoke failsafes in the software development system, no fixed gauges to ensure precise quality is achieved, and no robotic systems to reduce human input.

The development of software involves large amounts of human involvement, ambiguous statements of requirements, a myriad of possible solutions and uncountable potential failures.

This means two things are guaranteed:
• Bugs will be found at every stage of the SDLC.
• Every application released will always have bugs present within it.

That's my thought for the day, lying here on the sofa like a cat in the sun. Lush green sunny Spring days - nice.

Tuesday, 21 April 2009

The Irrational Tester

Just read a great paper by James Lyndsay of Workroom Productions.

In it he talks about the various types of irrational bias that we can suffer from as testers. Read the paper here.

Confirmation bias = Test Cases?
Process Imperialist, Agile Evangelists = Congruence Bias?
Clustering Illusion = Bugs are where bugs are? How to avoid false clusters?

Under Illusion of Control, James states "When testing... we seek reproducible experiments". So, testing is an experimental activity? James Bach thinks it's a science. Testing is an experimental science, then?

In the same section he discounts testers (unlike traders) being "vi) Goal focused". Testers aren't goal focused, and so this isn't part of the Illusion of Control? That doesn't seem right, unless I've not understood (likely). Aren't testers very goal focused?

Broken Windows, paragraph 4. Yes, it will lead people to become sloppy, because that's what happens. People are a) lazy or b) suffering some form of bias mentioned in the paper.

Mark.

Thursday, 9 April 2009

How can China become the world leader in Software Testing?

There are two aspects to this: firstly the individual test professionals, and secondly the testing companies and organisations in China.

Assuming there's an active testing profession in China - lots of skilled, well educated testers who are all doing great work and assuring the quality of world beating software - then the first step is for these skilled testers to say 'hello' to the rest of the world, just like you're all doing by being part of the Software Testing Club.

I could give you a list, 20 people long, of the test professionals I would say are the most prominent or active in the worldwide testing community - but they'd be British, American, Australian, European and Indian. Not one would be Chinese. There must be Chinese software testing forums, but what / where are they? People from all of the above countries/regions can be found on this site as well as on other websites, so where are all our Chinese friends talking about software testing? Why aren't we seeing them on English language forums?

At this point we hit possibly the major issue: English is the lingua franca of the testing profession in the above countries/regions. So if there's to be wider global collaboration for the profession, it's going to have to be done in English. I know that's a bit one sided, but that's the reality - the world isn't going to learn Mandarin.

That means testers in China need to speak and write English if they’re to fully interact with the countries/regions that already actively collaborate. I know many China based offshore development centres and large consultancies already have this as part of their business approach (E.g. CSC, Bleum, Microsoft, IBM). It’s an obvious thing to do if these companies want to work outside of the local Chinese market with their international customers and partners.

So far I've suggested that individual Chinese test professionals need to become more visible to the global software testing community and more collaborative with the worldwide testing profession, learning and sharing with a sense of equality and shared purpose, and utilising English as the language of the global profession.

There’s one further aspect to China becoming the leading light in software testing and that’s the companies and organisations involved in software testing.

I've said before that I've contacted organisations in China, and continue to do so, to discuss how to collaborate with them. The response has been stunningly poor. Examples include proposing the writing of a test training course that could be used freely by the Chinese organisation - I never heard back from them. Another was the offer to an individual of a collaboration on Papers and Podcasts; again nothing happened.

I’ve asked you guys here and on other forums about who leads the testing profession, who’s been on CSTQB courses, authors of Chinese software testing books, magazines, trade shows like the SIGIST meetings we have here, etc. and everything I’ve been told is on the China Testing Club here. It’s about 5 lines of material. The Chinese software testing profession seems invisible, almost insular.

The worst response to date was the most recent, where I offered to run my Test Practice training course for free when I visit family in Shanghai in September “honestly speaking, hires professional testers, so there is fat chance to have the training; also we have our in-house trainer”. Wow, so much for collaboration with the global testing community. That put me in my place! ;]

What are the steps, in summary, then? Encourage Chinese testing professionals to:
• Get involved with the global testing community. Signing up here is a great first step.
• Write papers and essays on software testing and share them here and on other forums, collaborating with non-Chinese test professionals such as me.
• Regularly write a blog here so we can see what they’re thinking and get insight into testing in China.
• Develop relationships that allow sharing of forums, papers, podcasts, etc. across websites.
• Pair up for cross-mentoring between Chinese and non-Chinese test professionals.

My final thought is that a Chinese test professional making the effort to become known and collaborate in this way can easily make themselves known globally as a notable figure within the Chinese testing community.

Mark.

Wednesday, 18 March 2009

Legacy Industrial Strength Test Software

QTP & QC.

That's what prompted the phrase above in discussion with my colleague: the idea that these tools were 'industrial' scale heavyweight tools and, due to the amount of time they've been around, they're becoming legacy. In my experience I still hear about these tools (obviously) but on the ground see them less and less.

Many times I get called in to help organisations with using the tools - because they bought them but only use, say, the Test Case management module, or they no longer use QTP because the automation guy left.

What I'm seeing is more and more interest in lightweight, open source tools for test management and automation. Selenium, Watir, FIT/FitNesse being the usual candidates.

So the question is: are the "Legacy Industrial Strength Test Software" tools' days numbered? What are the next 1, 2, 3 years going to look like for tools?

Wednesday, 11 March 2009

Teaching the ‘Product Life Cycle’ in my Test Practice course?

Why I teach the ‘Product Life Cycle’ in my Test Practice training course.

For those that have attended the Test Practice Course*, you may recall we open up by introducing the Product Life Cycle. Most folks in software development and testing start thinking from the Development Life Cycle onwards. So, why do we teach the Product Life Cycle?

From the course you'll know that we put the Software Development and Testing Life Cycles in context with each other and with the Product Life Cycle. In this way we learn how the Test Life Cycle supports the Development Life Cycle, which in turn supports the Product Life Cycle. What we save for later, in the Test Management Course, is how awareness of the Product Life Cycle allows more effective management of the test function. How so?

When a product is released from the development phase and goes into maintenance (live), the project is usually considered to be over. The test and development teams are done with it and they move onto the next project. This is a fallacy for most organisations, because it simply doesn't work this way. More often than not, the reality is that developers and testers are dragged back to fix and test issues with live products. Some organisations are a little more aware of this need and have a process of bug-bash days for just this type of activity. The problem here is that it's approached as something that needs brute force to address and make go away.

The Test Manager who wants a more complete understanding of how effective the test function is will be keen to know what happens when a product goes live. Thinking back to the Product Life Cycle, the Test Manager will be mindful of how long the life span of the product is in the market. They will consider the Product Life Cycle Phases and take an interest in measures such as the no. of bugs found over time, across Life Cycle Phases. How many of us know that? How many bugs, by severity, per functional area get reported by your customers?

With this insight a Test Manager can assess the testing resource needed for the duration of the product's life, analyse what bugs are found when, and start to effect a quality improvement plan. This allows assessment of effort and costs, and a way to show reductions in the cost of quality and the cost of ownership in real terms.
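As a sketch of the kind of measure I mean - using entirely made-up bug records, since the field names and phases will vary by tracker and organisation - counting bugs by Life Cycle Phase and severity is only a few lines of Ruby:

```ruby
# Hypothetical bug records, as might be exported from a tracker.
# The :phase and :severity fields are invented for illustration.
bugs = [
  { phase: "Development", severity: "High" },
  { phase: "Live",        severity: "High" },
  { phase: "Live",        severity: "Low"  },
  { phase: "Development", severity: "Low"  },
  { phase: "Live",        severity: "High" },
]

# Count bugs per Life Cycle Phase, then by severity within each phase.
counts = bugs.group_by { |b| b[:phase] }.transform_values do |list|
  list.group_by { |b| b[:severity] }.transform_values(&:size)
end

counts.each { |phase, sevs| puts "#{phase}: #{sevs}" }
# Here "Live" shows 2 High severity bugs found after release - exactly
# the kind of figure a quality improvement plan would aim to drive down.
```

Trivial as it is, a table like this per functional area over the product's life is what lets a Test Manager talk about cost of quality in real terms rather than anecdotes.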

That’s two reasons why we teach the ‘Product Life Cycle’ in our Test Practice training course.

.....................................................
Head over to Software Testing Club to book your place:
Read about the course here
Use the enrolment forms to book your place

Sunday, 11 January 2009

Move Test Teams to Agile?

In the LinkedIn group "Senior Testing Professionals" a question was asked about how to move testers who are used to Heavyweight approaches to a more agile test approach.

It rather depends on how agile is being implemented in your organisation. Moving to agile is likely to mean an approach of either: agile = SCRUM; agile = SCRUM + XP; or agile = SCRUM + XP + TDD. How is it happening in your organisation? Process and technical changes? Have you already begun and, if so, what's the current situation?

Obviously this is another topic we could write a book on, so I had a think about what my top three changes would be that the team would have to take on board.

Irrespective of which of the above agile means, here are three key considerations I'd suggest for moving a team that's been delivering in a heavyweight development environment to a lightweight (agile) one.

Firstly, they need to adjust from focusing on the bulk up-front analysis and test case authoring they may be used to. Remember, the documents they've been expecting may simply not be ready ahead of each Sprint / development phase. They need to plan, analyse and design based on what they know now, and in a way that's flexible enough to accommodate what's to come.

Secondly, test execution has to be effective. Effective means finding bugs early and quickly, then proving stability near the end of the development phase. This may mean they need to learn new approaches such as Exploratory testing and be much stronger on analytical techniques, domain knowledge, test case design, bug reporting and near-cause analysis. There's no room for tic-toc testing.

Thirdly, they must maintain and develop frequent, high-bandwidth communication and relationships with the other project team members, focusing on interaction and not communication by proxy (documents).

That's only scratching the surface, but I've seen the above ignored and it's painful to see!