
Friday, 23 November 2012

Near Cause and Root Cause Analysis

Let’s look back to the Deductive Analysis section and consider the issue of “…settlement amount on a transaction, given its base price and a fee that gets charged on top, is not adding up to what’s expected.”

Thinking this through as a worked example, how might we have found the issue? It appears we had some kind of trade or purchase to make; there's a spot or base rate for the item, but we have to pay a fee of some kind on top too. So if the item was £100 plus a 10% fee, we'd expect the total to be £110 – a simple case for ease of illustration.

Now the issue appears to be that our expected outcome of £110 wasn't achieved; let's say it came out at £120 when we ran an end of day report. What would you do? Well, you wouldn't just report "settlement amount on a transaction is not adding up to what's expected." That isn't Near or Root Cause analysis, it's just issue reporting. It means the developer has to do all the investigation, and as a Test Analyst you should be doing some of it.

So, we'd do some checking… check the figures that were input and make sure the base was £100. OK, entry of the base figure is not an issue. Run different figures and see if they come out at amount + 10%; that's OK too, it appears on our summary page as a 10% fee and £110. At this point we could get 'under' the UI elements, nearer to the scripts that submit transactions, closer to the transaction or reporting database, etc.

This is a very simple example of Near Cause analysis and is the MINIMUM we should do. We’re nearer to the Root Cause and it will help the team member who has to fix it get to a fix more quickly. Let me say again, Near Cause analysis is the MINIMUM you should do when reporting any issue.

If we're skilled enough or have appropriate access, etc. we might then look at the underlying scripts and maybe take a look at the databases or other systems. We might now inject data straight into the database with a bit of SQL and run the transactions report. Let's say when we do, the report now shows our £100 as £120. Aha, so we're getting warmer. We realise that what's presented on the UI is merely the output of browser-side JavaScript, but the actual final, chargeable amount is calculated by some clever server-side code, as you'd expect given the security/hacking considerations if that calculation sat on the user-facing side.

Now we have our Root Cause: script ABC is calculating the 10% fee twice. A developer can quickly crack open the script, correct an erroneous if-then, variable or whatever the cause was, and deliver a fix.
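To make that concrete, here's a minimal Ruby sketch of the kind of defect we've just described; the method names and the exact shape of the bug are hypothetical, invented purely for illustration:

FEE_RATE = 0.10

# Buggy server-side calculation: the fee is added once by the pricing
# step and then again by the settlement step, so £100 settles at £120.
def settle_buggy(base)
  priced = base + (base * FEE_RATE)  # first application of the fee
  priced + (base * FEE_RATE)         # erroneous second application
end

# Corrected calculation: the fee is applied exactly once.
def settle_fixed(base)
  base + (base * FEE_RATE)
end

puts settle_buggy(100.0)  # => 120.0, the figure the report showed
puts settle_fixed(100.0)  # => 110.0, the figure we expected

A developer's real fix will depend on the actual code, of course, but pinning down the 'fee applied twice' pattern is exactly the insight that Root Cause analysis hands over.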

In summary: Near Cause as a minimum; never, ever just report an issue and leave it there. Root Cause if you have the time, understanding, etc. This doesn't mean developers blindly correct what you suggest as Near or Root Cause; we always expect things to be peer reviewed. But in this way we cut down the find-to-fix time and get working software delivered. That's what we're here for, right?

Mark.



Inductive and Deductive Analysis

More Advanced Analysis techniques

Once we have the basic analysis techniques of Reference, Inference and Conference understood we can look at some more advanced techniques. Many moons ago when I worked in manufacturing QA these two other techniques were my Katana and Wakizashi, slicing through the planning and root cause analysis problems. As often happens in a career with changes in focus, I forgot them for a while and then rediscovered them a few years back. These techniques are Inductive Analysis and Deductive Analysis.

These two are my favourites and to keep repeating the point, I’m guessing you already use them. If you’ve ever written any Test Cases or thought through why a bug might be occurring – you’ve already applied Inductive and Deductive Analysis, at least at a basic level. I consider these to be advanced techniques as they are relied on, albeit supported by some analysis tools, by industries where quality is paramount and any failure must be absolutely understood. Industries that use these techniques include gas and oil exploration, civil and military aerospace, the nuclear power industry, medical equipment manufacture and pharmaceuticals.

The test group will naturally apply Inductive and Deductive analysis as they carry out their testing activities. For example:

- When a bug is found, thought will be given to what other functionality may be affected and these areas of functionality will be checked
- When errors are observed that seem similar in nature, connected paths of the user journey may be tested to see if these lead back to a single cause

In other words we could use:

- Inductive Analysis: to identify meaningful Test Cases, before any test execution takes place
- Deductive Analysis: to find Root Cause of issues and add additional Test Cases once bugs are found

For those who have followed the Testing Vs Checking debate, it should be noted that we’re not using these two techniques to check for requirements being implemented. I don’t believe the skills of a clever, experienced Test Analyst are best employed in marking the work of developers. In some ways I don’t feel this testing (really, checking) should even be done by professional testers, but that’s a topic for a post on UAT. Just assume for the sake of this post, your developers can code just fine and you need to find the tricky issues.

Inductive Analysis
When we apply Inductive Analysis we work from the perspective that a bug is present in the application under test, then try to evaluate how it would manifest itself in the form of erroneous behaviours and characteristics of the application's features or functionality. One way to remember the purpose of Inductive Analysis is to remember that we move from the specific to the general.

At this point we could be applying Test Case Design Techniques and asking: if invalid data was input by the user, or if a component state was not as expected and the user tried to carry out some action – how would we know? What would the failure look like? How would the error show itself to the user?

For those who are versed in manufacturing QA we’re also moving close to Failure Modes and Effects Analysis (FMEA), yet another interesting topic for another post…

With Inductive Analysis we move from the idea of a specific bug (a specific failure mode) that we agree could be present in the system, and then ascertain its effect on potentially numerous areas of the application.

As I'm a fan of the Gherkin way of writing test conditions, here's a slightly contrived example in that style:

GIVEN connectivity to the trade database has failed (our bug/failure mode [that may not be obvious])
WHEN the user submits their trade data for fulfilment (expected functionality of the system)
THEN the order will not be placed (failure effect [we can’t assume the user will know])
AND end of day reconciliation reports will show an outstanding order (definite ‘how’ we will know)

Remember, with Inductive Analysis you're thinking about testing risks and designing a test case to cover each risk. Isn't that more like testing than just checking requirements were implemented?

Deductive Analysis
With Deductive Analysis we assume that erroneous behaviours and characteristics of features or functionality have already been observed and reported. That is, we have a bug that’s been reported and we now need to work backwards to the Near Cause or Root Cause of the bug.

An example might be where style and layout on multiple pages of a website are not as expected, the specific cause perhaps being a bug in a CSS file. Or perhaps a settlement amount on a transaction, given its base price and a fee that gets charged on top, is not adding up to what's expected. In this way we attempt to move from the general issue that's being observed to the specific Root Cause. It may be that a range of issues have a single root cause, and the Test Analyst assessing this will help development deliver fixes more efficiently.

Deductive Analysis is most often used when testing is under way and bugs have been raised. Similarly, when the application is live and a customer reports a bug, applying Deductive Analysis is a natural approach for test and development staff to get to the Near Cause and Root Cause of the bug. We’ll cover that in the next post.

Mark



Elementary Analysis techniques

In thinking about the process of test analysis I recalled what Kaner et al. (2002) wrote in their book 'Lessons Learned in Software Testing': that we could keep in mind the idea of three, what I'll call 'elementary', analysis techniques – namely Reference, Inference and Conference. The reason I consider these to be elementary techniques is that, in my experience, they are the most basic, rudimentary and uncomplicated techniques a test analyst will apply, with or without specific training.

As I'll often say, given they are so rudimentary they are most likely already being used. The issue is that in order to recognise them, we need to realise what activities we're doing and then give those activities a name.

Lessons Learned in Software Testing: A Context Driven Approach

Reference
The most accessible things a Test Analyst can look over are the existing application and/or project documents that have been created.

The first analytical activity therefore would be to refer to the documents or artefacts provided at the start of a project. These are usually documents such as the Requirements and Technical Design documents, but there may be others. Remember these may be in a different form, such as task cards or items in a backlog. Either way, there will be a definition of what the system to be tested should do, and it's a first point of reference.

In addition there may be other sources of information available such as UI Mock-Ups, perhaps early prototypes or even an existing application that's being enhanced. All of these can be referred to directly and in this way the Test Analyst will start to identify the explicitly stated aspects that will need testing.

When this analysis is carried out, it may be done as part of Verification testing of documents and other items mentioned above.

Inference
When the Test Analyst is in the process of identifying the most obvious and clearly stated testing needs, they will of course be thinking in broader terms.

As each new testing need is identified the Test Analyst will refine their understanding of how the application's functionality should behave, what characteristics it should exhibit and what business tasks it should support or enable. Each testing need identified will imply others – behaviours that should be present in the system and, just as importantly, behaviours that should not.

Inference is looking at whatever requirement statements have been identified and asking 'what' each time, for example (a small illustrative sketch follows this list):
- what requirement does a requirement that 'all data is validated' imply?
- what does 'data needs to be of a specific type and follow a defined syntax' actually mean in defined terms?
- what requirement is assumed to be obvious if 'only complete orders should be processed'?
- in what way are 'users alerted' when their order is incomplete?
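To make the inference step concrete, here's a small, hypothetical Ruby sketch; the Order model and its fields are invented for illustration, but it shows how one explicit requirement ('only complete orders should be processed') yields inferred test cases for what should not happen:

require 'minitest/autorun'

# Hypothetical Order model, invented purely for this illustration.
Order = Struct.new(:customer, :items) do
  def complete?
    !customer.nil? && !(items.nil? || items.empty?)
  end
end

class InferredOrderTests < Minitest::Test
  # Explicitly stated requirement: complete orders are processed.
  def test_complete_order_is_accepted
    assert Order.new('Alice', ['widget']).complete?
  end

  # Inferred requirement: an order with no items must not be processed.
  def test_order_without_items_is_rejected
    refute Order.new('Alice', []).complete?
  end

  # Inferred requirement: missing customer details also block processing.
  def test_order_without_customer_is_rejected
    refute Order.new(nil, ['widget']).complete?
  end
end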

In order to correctly infer what these implicit requirements might be, the Test Analyst will need to:
- apply their business and technical domain knowledge, drawn from previous experience with similar applications, or
- review earlier versions of the application under test, if they exist
- apply their understanding of customer needs in the context of the problems the software is meant to resolve

Conference
After referring to all the sources of information at hand and considering what additional requirements the identified testing needs infer, the Test Analyst now does what is often forgotten, or is left far too long to do.

They can go and speak to the rest of the implementation team about the issues around testing. This includes Product Managers, Project Managers, Developers, Business Owners, System Users, Support Teams, etc. and ask them about the testing needs of the system - directly. They can share with these stakeholders the testing needs that have been identified explicitly and inferred implicitly.

This form of analysis can also be considered a Verification activity where the Test Analyst essentially conducts a Peer Review or Walkthrough with the stakeholder.


Once we're aware of the elementary forms of analysis that are being applied by default, we can start to apply them with intention. By doing that we make our use of them, and the application of our 'personal methodology', more consistent and effective.

Bear in mind, these techniques apply even MORE in an Agile context, compared to a traditional testing context.

From here we can start to look at some more advanced techniques. Yet again, I'll say many Test Analysts will apply these without realising it. But others will realise, yet perhaps not have a name for them. So let's break the spell on two more techniques and get them in-mind and used.

Mark.



Monday, 19 November 2012

What is a Test Architect?

(Re-written Feb 2017)
This blog post was originally posted about 5 years ago and is consistently one of the most popular posts on the site. The interesting thing was that it mainly linked out to an article by John Morrison over at Oracle, as I felt he summarised the role of Test Architect very well. Time moves on, my thoughts have diverged sufficiently that this article needs refreshing and more recent experiences need sharing.
So, five years on, is Test Architect still a thing?
In short, maybe. A quick search for Test Architect pulls up the original blog posts by myself and John, along with what looks like 1000+ jobs advertised on Indeed, CWJobs, Total Jobs and LinkedIn. Or does it?

Closer inspection reveals that the search is mainly pulling back tester roles and not Test Architect roles. Looking in Feb 2017 I see only 2 references to Test Architect in the job search. It would appear that as in 2012 this remains a niche role/title. In fact I'd go as far as to say that as in 2012, the Test Architect isn't a standard role at all.

Alan Page blogged back in 2008 that at Microsoft they avoided reference to it, instead seeing it as a senior test role, and that there was no single definition of what a Test Architect actually was. That makes perfect sense to me and in fact it's a GOOD thing, because that was the whole point of branding myself as a Test Architect.

Background
In a previous life I worked with architects in electronics manufacturing. They had the challenge of understanding electronics design, manufacturing processes, testing and consumer/product expectations. The result was a role filled by a senior electronics professional with broad technical, process and business knowledge, alongside commercial awareness.

That to me is the essence of a Test Architect. It absolutely IS a senior role and it requires a depth of experience and breadth of knowledge applied in context of the business.

So, given there's no standard definition of what a Test Architect is, let's define some core characteristics and responsibilities.

What is a Test Architect?
A Test Architect is a senior testing professional whose primary function is to design solutions to the testing problems the business faces.

These problems are solved through the application of contextually relevant process and practice, the use of tools and technology, and the application of soft skills such as effective communication and mentoring of both team and client.


What does a Test Architect actually do?

One thing I'd lift out is that ideally they do NOT do Test Management. That is, the Test Architect is essentially a deep technical specialist focusing on the design, implementation, use and enablement of the testing function, and so of the overall development function as well.

The Test Manager remains responsible for the strategic direction and tactical use of the testing function, for the line management of the team, the mentoring and training, the hiring and firing, etc.

It may be the roles are combined, but it must be understood a Test Manager isn't automatically going to cut it as a Test Architect. Likewise, a Test Architect with no experience of team leadership will struggle to design and improve the process and practice applied to the team. That said, my view is a Test Architect is likely senior to a Test Manager professionally, even if operationally they report to the Test Manager. (I propose however they report to the Scrum Master or whoever heads up the overall development function.)

Responsibilities of a Test Architect may include:
  • Support the Test Manager in achieving their strategic goals for the Test Team by providing technical support to the Manager and the team

  • Possess broad awareness of testing approaches, practices and techniques in order to help design and deliver the overall testing methodology used by the team

  • Monitor the effectiveness of the testing function and bring about improvements through insights gained via analysis at all stages of the SDLC/STLC

  • Identify what tools and technologies can be implemented, aligning with those already used across the broader development function and in line with the skill-set of the team

  • Design and develop the test automation framework, harnesses and code libraries to enable the team to both use and enhance them across successive projects (see the sketch after this list)

  • Take responsibility for test infrastructure including environments and software, liaising with teams such as DevOps and Support in areas such as CI/CR and IT budgets

  • Provide technical know-how, documentation and training to test and other business functions

  • Stay up to speed on process, practice and technology developments to ensure they are brought in-house and enhance the solutions applied to the testing problems
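On the automation framework point, by way of a purely hypothetical sketch (assuming a Ruby and Selenium WebDriver stack, my own preference rather than anything prescribed), the kind of shared harness a Test Architect might hand the team could start as small as this:

require 'selenium-webdriver'

# A tiny, hypothetical harness module for the team to reuse and extend.
module TestHarness
  # One place to create and configure the browser, so individual
  # tests never repeat driver set-up code.
  def self.browser
    @browser ||= Selenium::WebDriver.for :firefox
  end

  # One shared wait helper, so timeouts stay consistent across tests.
  def self.wait_for(timeout: 10, &condition)
    Selenium::WebDriver::Wait.new(timeout: timeout).until(&condition)
  end

  def self.shutdown
    @browser&.quit
    @browser = nil
  end
end

# Example use in a test:
#   TestHarness.browser.navigate.to 'https://example.com'
#   TestHarness.wait_for { TestHarness.browser.title.include?('Example') }
#   TestHarness.shutdown

The value isn't in these dozen lines; it's that the team builds on one reviewed, documented library across successive projects instead of each tester rolling their own.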

In essence the Test Architect works to ensure that approaches, tools and techniques are built into a relevant methodology. They monitor, optimise, mentor, collaborate and continually improve the test team on behalf of both the Test Manager and the rest of the development function. To that end the role must be held by someone of good experience and seniority.

Mark.



Thursday, 15 November 2012

Making Connections and Critical Thinking


A key skill for any tester is the ability to 'make connections' between aspects of relevance, when thinking about the testing problem they have to address. This idea of making connections is closely related to the skill of ‘critical thinking’.

Making Connections
Making connections is about recognising how a certain aspect of the system under test – perhaps a particular testing need that's been identified, or maybe a risk that's been highlighted – relates to things of a similar type, or indeed of a different type. Like many skills employed by a tester, making connections often 'just happens'. But we need to recognise and understand the skill if we want to improve it and employ it meaningfully.

Examples of making connections between things of a similar type include:
  • relating several risks to each other and considering how one may affect another
  • associating testing needs and perhaps reducing the number of test cases while maintaining coverage
  • considering an aspect of the system and identifying a dependency on another aspect of it; maybe a UI needs the database in place, or vice versa
In addition to things of a similar type we need to connect things of a different type; examples of doing that might include:
  • relating a risk to a testing need; does every risk highlight a testing need and, if not, can we identify one that will mitigate the risk?
  • assessing whether a certain aspect of the application under test introduces a risk which requires coverage
  • identifying where there's a gap between planned aspects of the application, such as specific functionality, and the stated testing needs that tests are being planned for
Making connections relies in part on the knowledge and experience of the tester, in order to know how one thing relates to another. We could reflect and ask '…how does this thing I'm considering affect [x]?' when trying to make connections. Developing the argument for, and possibly against, an aspect is the key to effective test analysis.


Critical Thinking
The skill of making connections is closely related to critical thinking, because critical thinking is about thinking past the initial details and information presented about the aspects, needs, risks, etc. and critically evaluating them. When thinking critically we don't just accept what's presented at face value and assume there's no further meaning. We are evaluating, analysing, synthesising and keeping our minds open to the possibility of new perspectives on the information presented to us.

We might choose to bear in mind the phrases '…what does that mean?' and '…why is it this way and can it be another way?', in the context of the testing problem we are trying to address. For any tester it's essential to develop reflective thinking skills and to keep improving their critical analysis.


Make sure that next time you're presented with a piece of information about a system, a test or another item – stop and think.

You could always attend a free course too... https://www.coursera.org/course/thinkagain

Mark.


Monday, 12 November 2012

Learning and Teaching - by asking questions

One thing that's often said is there's no such thing as a stupid question. It's something that's close to my heart as throughout my career I've had to ask a lot of questions, some of which have made me feel a bit less enlightened than others around me. The thing is we really have no choice but to take a deep breath and just ask these questions, stupid or not.

Think about it: how else are we going to learn? If we try to avoid asking what may appear to be daft questions, then where will our information come from? The options are things like meetings or conversations with others, perhaps documentation that's been provided, or the application we may be testing. Now, what's the likelihood of these sources answering all our questions and providing us with complete knowledge? Highly unlikely, and in the main we know it; we expect to, and do, come to a point where we have questions to ask.

I'd suggest, however, that questions can be put into two rough groups. The first is simple gaps in knowledge, often about technical or business aspects that are beyond our experience. For example you might ask: could you tell me exactly what the difference is in testing needs when something moves to a Solaris container? You could ask this, or you may already think this is a daft question.

The second group of questions are those which relate to things you're sure everyone else knows and understands. How many times have you heard the phrase "…but everybody knows that", while you're thinking, "…well I don't know it!"? Ever been in a meeting feeling confused, yet everyone else seems perfectly clear on something about the slicing of a cube and how it gives a view on data, or some such? You understand the words but the meaning is lost.

In other words, and to follow the popular pattern: stuff you know you don't know, and stuff you think you're supposed to know but know you don't. Both call for you to ask questions, but do you always do it?


One thing I've often found is that if you're not clear, others probably aren't that clear either. But guess what, they're afraid to ask daft questions! Remember, there is so much technology, a lot of it customised, that you can never know everything. What's more, you can only be where you are right now in terms of your knowledge and experience, so don't beat yourself up over it.

When you're not clear, go right ahead and ask for something to be clarified. State that you're not sure how it affects testing. Just ask, "Just so I'm clear, can you run through some ways that affects testing?". If you're in a meeting and everyone else is rolling on with the conversation, try: "Just to say, I'm not completely up to speed on this topic. If someone can give me the 2 minute rundown now then great, or, so as not to slow the meeting, who can give me 5 minutes afterwards?". It's easy enough and there's no embarrassment.

While I'd encourage you to ask questions openly, I realise there are some caveats. There are situations when others, perhaps clients, will expect you 'to just know', and not doing so may cause your company or the wider project team embarrassment, or at least raise some eyebrows. In this case you still need to ask, just more discreetly. Make notes of any points or phrases, ideas or concepts that you're not totally clear on, and note who seems to be talking confidently about them. If they're on your side, wander over to them afterwards and say "Hey, in the meeting before you mentioned (insert topic here). I'm not sure I'm as clear as I need to be about how this affects testing; could you give me a back-to-basics run down of it, just so I clarify my understanding?". Most people will be flattered that you asked.


There will come a point on any project, or in any employment, when you really should be up to speed. Don't leave it so long to ask your questions that it becomes a problem when you hit that point. There's always a grace period at the start when it's OK not to know, but it doesn't last forever. You cannot hide and hope knowledge will just come your way. Go and seek it, use it to enhance what you do, then help others in the same way!

Mark.



Saturday, 12 May 2012

Crowther on Writing


Some books change your life. Once read, they leave a permanent and lasting impression that, barring some shocking change, will likely stay with you for the rest of your life. For these books you can remember the title and author, even the parts of the book that affected you.

Some books effect a less dramatic change; they shift your view of the world and yourself in a more subtle way. Perhaps the authors and titles of these books are harder to remember, though you will likely recall some idea or concept you took from them.

I can recall many books that I've read, relating to the varied subjects that at one time or other have been of interest to me. As a youngster I had a small, illustrated pocket book on trees. It was the first time I experienced the power of books to enlighten and reveal the world around me. I'd always done hiking and camping and so knew some of the plants and trees I encountered. But I remember thinking that trees were at best just a wall of green: different shapes and heights, different forms of leaves and flowers, but beyond that indistinguishable.

After learning more from my book on trees the world became a more meaningful place. I could recognise trees as adults and as seedlings, spot the male and female versions and understand the soil and environment they preferred. I even began to know what type of tree a twig came from due to the shape the missing leaf had left on the twig. It's hard to explain how enlightening the experience was.

Years later I picked up a book on rocks and had the same enriching experience. To look at rocks and stones and know what they are made of, how they formed, to recognise their crystalline structures, and to have insight into the history they had experienced.

Another more recent book that literally changed my life was Buddhism Plain and Simple by Steve Hagen. I read it with fascination and closed it with a sense of having had my eyes opened and my way of thinking shifted forever. I've given away probably 10 copies of that book to date.

More recently still, I changed my professional perspective fundamentally after reading Gojko Adzic's Specification by Example. As some of you will know, I did a little road-tour of presentations about how I'd used it. I remain convinced this work represents a paradigm shift in how we can think about what we do, like Context Driven was for testing. It may need refining but it's a pivotal moment in the progressive growth of our profession.

Just the other day, in a manner of speaking, I read Jerry's book on the Fieldstone method of writing. I've always liked writing, but like most people I struggled with two concerns. Firstly, that I was not very good at writing, in that what I write is not very entertaining, instructive, well structured, etc. Secondly, that I am somehow lacking the intellect and ability needed to recognise, capture and write up appropriate material.

Right now, I have 12 unfinished writing projects. They range from essays and papers to a huge project that's been on the go for over 2 years and is about 30% complete. I spoke to a testing friend on email the other day and he recounted his own set of in-progress writing work.

Before I read Jerry's book I saw this as a problem, proof of my limited intellect and ability. After all, if I was really 'that clever' I'd breeze through the writing and publish material en masse, right? Well, no, because it doesn't work that way. A real gem of a revelation was Jerry talking about his own mass of unfinished projects. I'm not sure why, but I was surprised by this. He mentioned how he doesn't just sit and write a piece of work but builds it up, one energy-stimulating fieldstone idea at a time. What's more, he collects these fieldstones gradually, for a range of projects.

It seems like such a simple idea, and yet consider how utterly contrary it is to what we were taught in school. Again, as mentioned in his book, the usual way is to conceive a topic to write about, build an outline, ensure a start-middle-end and then get writing. Just merrily writing away until it's finished. This might work for some forms of writing, say help-file or guidebook type writing. But for what I'll call creative technical writing it's a hindrance.

In creative technical writing you of course want to provide instructive material, so the reader can grasp the topic, but you also want to provide insight that connects subjects, topics, ideas and experience in ways that mean your writing provides unique value to the reader. To do this you need to have the insights in the first place; you need to have the aha moments that make the connections, then wire them together into your writing. If you don't, you can't share them and your work will lack that unique value.

Is writing an outline and then writing out the words against that outline, or fieldstoning, the better way to write something that's technical but creative too? In my view writing from an outline is like testing from a test script. You plod through each heading / test condition and prescriptively fill in the blanks. It's OK for getting words on paper, but it's not the best process for sparking creativity, for prompting aha moments of sudden, valuable insight, or for going beyond the scope of the (test) plan. Actually, just thinking about that, is getting words on paper really what we want to do when writing? I imagine, like me, you aspire to something more meaningful and valuable. To achieve that, fieldstoning is the way.

Fieldstoning is like ad-hoc exploratory testing. You initiate an ad-hoc session of fieldstone collection when the appropriate moment arises. That moment may be because you have some time and sit down to write, or better still because you've been journeying through your day and encountered something that connects with the writing project you're working on. It's like exploratory testing: you observe something interesting along the testing path you were taking, then head off to investigate. What you uncover in this way, what you end up writing about and how you write about it, is usually far more interesting and valuable than the planned items you had to guide you.

So, what more about Jerry's technique? Well, for one I no longer feel guilty about having many projects on the go at once; in fact it's the best way. I now gather up fieldstones apace – they're literally everywhere, and writing them down is no harder than taking notes. Fieldstones come in all shapes and sizes. Some are 200-word paragraphs, others 20-word ideas that need to be found a little nook to fit into. My writing projects are not all technical either. The Human Empire, Aranath Awakes, First Weavings and Mines of Ar'tir are some of the more creative works I'm gathering a very different bag of stones for.

So if you’re struggling to write but want to get better at it and write more, take heart. Get a copy of Jerry’s book and read it through a couple of times.

Weinberg on Writing: The Fieldstone Method

Then set out your mixed array of fieldstone collection bags and start gathering stones as you go about your day. Importantly, get writing and reading, and keep writing and reading. Write, write, write, then publish, publish, publish. Create, Read, Update, but don't Delete; instead, Publish.

It's not about being the best writer or winning prizes. It's about the wonderful process of writing: the time spent gathering, analysing, thinking, reshaping, writing and sharing. Don't worry about what others will think. Some of my first pieces from many years ago I now consider pretty weak, but I'm still proud of them as works. If you want to write, you will, one fieldstone at a time.

Thoughts? Leave a message!

Mark.

Friday, 3 February 2012

Get your code up


For testers like me, coding beyond batch-style routine scripting is, in the main, a bit exotic. In my experience most testers are doing VBScript on you-know-which-tool, maybe some Java on pretty much every tool, as it seems to be the de facto language. There's a smattering of Python in use, but its ardent fans seem few and far between. Then there are fanboys like me doing some Ruby, all mixed with a smattering of other languages depending on the environment the tester works in.

I guess it's more like doing testing which involves a bit of coding, rather than coding which involves a bit of testing. It's great if your role involves you touching code a few days a week; that way you can get your coding skills up and keep them up too. A lot of testers, however, are looking to start coding so they can develop professionally and of course utilise it for automation, testing, etc.
One problem I found was deciding which language to pick. So, simply put:

Step 1: Pick a language
There's a multitude to pick from. The TIOBE index (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html) is said in some circles to be worthless, but if you don't know what's out there it's a good starting point for reference.
What are you using at your current workplace? What are they developing in? Well, there's your starter for ten. Is there a particular tool you want to get good at which involves some coding? Bingo, go and learn that then.
There are languages that are flipping everywhere: Java, PHP, VBScript, JavaScript, C, C#, C++, and of course the handy ones such as SQL and Perl that have been in utility use forever.
If you can, pick one that’s going to see you able to use it day to day, ideally one that those around you are competent in too. It’s easier to learn that way.
Step 2: Plan your learning
In order to learn you need to find some trusted sources of tuition; I'm assuming you're doing self-directed study here and not going on a course. Have a look at www.codeyear.com for example, or http://www.sqlcourse.com, or just search for "Learn [language] online" and see what comes up. There are too many resources to list, but plenty to get you started.
Create some form of learning plan and set yourself a goal for how many hours you're going to spend learning your chosen language. I picked 100 hours after a suggestion on the Software Testing Club. I have an overall goal but not a how-many-hours-a-week goal; I study when I can. If you can set time aside then great. Regarding a plan, simply list out what topics you want to learn, then tick some off or add some more as you go.
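If it helps to see how small a first exercise can be, here's the sort of thing I mean, in Ruby because that's my language of choice (substitute your own); the figures are made up for the exercise:

# A beginner-sized exercise: check that some transactions settle
# at base price plus a 10% fee.
transactions = [
  { base: 100.0, expected: 110.0 },
  { base: 250.0, expected: 275.0 },
]

transactions.each do |t|
  actual = (t[:base] * 1.10).round(2)  # round to 2 dp to avoid float noise
  status = actual == t[:expected] ? 'OK' : 'MISMATCH'
  puts "base #{t[:base]} -> #{actual} (#{status})"
end

A dozen lines, but it touches variables, collections, loops, conditionals and output, which is plenty for one study session.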

Step 3: Share your learning
There's nothing better than backing yourself up against a wall, putting yourself in a corner, etc. by telling the world you're going to learn to program. Head over to http://www.softwaretestingclub.com and start blogging about your plan and progress.
Also, have a look at www.codepad.org where you can paste snippets of code, then post the link in your blogs. Set up your free GitHub repository over at https://github.com; there's an easy-to-follow tutorial there, and day to day it's three commands (typically git add, git commit and git push) to maintain your library. Don't forget to use the Wiki you get, to document what you're adding.
Overall…
Just get started, whatever you learn will be of use!
Good luck and I look forward to reading your updates.