Monday, January 2, 2012

Exploratory Testing Hands-On - Part II

Heyhey!

In my previous post about Exploratory Testing Hands-On I wrote about thinking and how I set myself into a certain mental state that helps me do ET. It sparked some discussion and even a vision for business. Nice!

And now it's time to go deeper, namely to contemplate some of the actual methods and tricks that are often used when doing ET, even by me. Please note, however, that - as Cem Kaner and James Bach have said - exploratory testing is more a mindset or a way of thinking about testing than a methodology. So session-based testing, pair testing, Whittaker's touring analogy, etc. aren't ET per se, but just means that are often useful when doing ET.

So let's get at them then!

The most important thing for me when doing ET is my library of ideas. Each tester should have their own library of ideas, fitted to their way of doing testing, influenced by their experiences and expertise AND shared with everyone involved so that these libraries can evolve in collaboration. Many people sit on their libraries and show them only to impress people, but that's low and immoral.

Sharing is caring.

My muse
Mine is basically a mindmap. I try to put into it everything I know about testing. It has different branches for different kinds of applications (web, cloud, mobile, different business domains, etc.), generic input variations, test techniques, heuristics, quality criteria, product elements, project environments, etc. It's a collection of some of my own ideas and many ideas I've gotten from different individuals. Even our cat has contributed!

Quite recently I've started to develop my Shake'n Break set, i.e. a collection of ideas I apply when trying to break software quickly. It consists of inputs that are hard to guard against with checks in code, tricks to interrupt internal processes, etc. It's really designed to break stuff, not to follow some end-user behaviour or suitable demographics. It's basically a smoke test set that imitates a child with a new toy: shake it, and if it doesn't break from that, you can start playing with it, meaning you can prepare yourself for the next round while development fixes your handiwork... :)
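To make the idea concrete, a Shake'n Break library can live as plain data that you throw at any entry point. The sketch below is my own illustration in Python, not a canonical list; the input values and the `parse_age` target are made up for the example.

```python
# A minimal sketch of a "Shake'n Break" idea library: hostile inputs that are
# easy to forget when writing validation code. Values are illustrative only.
SHAKE_N_BREAK_INPUTS = [
    "",                        # empty input
    "   ",                     # whitespace only
    "A" * 10_000,              # absurdly long input
    "0",                       # boundary-ish numeric string
    "-1",                      # negative where positives are expected
    "null",                    # the literal word "null" as text
    "Röbert'); DROP TABLE--",  # non-ASCII mixed with injection-style noise
    "\t\n\r",                  # control characters
]

def shake(target, inputs=SHAKE_N_BREAK_INPUTS):
    """Throw every hostile input at `target` and collect what broke."""
    failures = []
    for value in inputs:
        try:
            target(value)
        except Exception as exc:  # any crash counts as "it broke"
            failures.append((value, type(exc).__name__))
    return failures

# Example target: a naive "parse an age" function with no input hardening.
def parse_age(text):
    age = int(text)
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

broken = shake(parse_age)
```

Here seven of the eight inputs crash the target in one pass; if nothing breaks, the software has survived the shaking and you can start playing with it properly.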

Great sources for these tips and tricks are James Whittaker's books How To Break Software and How To Break Software Security. Don't want to break stuff? Then remind yourself of what James Bach said:
“Testing is a quest within a vast, complex, changing space. We seek bugs. It is not the process of demonstrating that the product CAN work, but exploring if it WILL.”
Black hat, baby!

Our strength as testers comes from our differences. We all have our own set of ideas that we use when testing. It may change, quite rapidly even, when situations change, and it's always very personal. That's why I consider plans written for others to be taken with a grain of salt; at a higher level they work, but at the detailed-instruction level, no way. When planning at a detailed level it's quite impossible to know what kind of situation the plan will be executed in, and perhaps by whom.

In short; Detailed planning is personal. More about this in part III. Perhaps.

Ok, about the higher-level stuff then. Often when doing ET, people talk about charters. A charter is a goal or agenda for testing, and it can be updated at any time if the situation demands it. It holds the mission statement, the areas to be tested, and other important information, and it can be fueled by pretty much anything: requirements, specifications, use cases, test plans, previous defects, etc. The scope of a charter can vary greatly, but as with anything, use common sense. Don't put everything in it, only the things you're about to test. Sometimes even the requirements, use cases, etc. themselves work as charters, and can be updated as such.

Remember; Requirements, use-cases, specifications, etc. are to be tested also.

Session
Jonathan Bach describes one way of using charters in his article about session-based testing. Many think that the only way to bring structure to ET, and to separate it from the ugly world of ad hoc, is to do session-based testing. In short, it's just doing testing in uninterrupted periods of time, where each session is focused on a charter, but testers can also explore new opportunities or issues during that time. More info is available in Jonathan Bach's writings on session-based test management.

Please note, however, that you don't necessarily need to do ET in team sessions. Actually, session-based testing requires commitment beyond the capability of many (part-time presence, time zones, hurry, etc.) AND a certain type of person who can adapt to strict timeframes and wake creativity when needed, "make the rooster sing". You can, however, do ET in personal sessions, i.e. whenever it fits your schedule and your way of doing testing. Just reserve a session-long, uninterrupted time slot in your calendar, see that all prerequisites (environment, data, software, etc.) for testing are fulfilled, and do it. Ok, a team is preferred to ensure proper collaboration and flow of ideas, but ET can be done solo quite nicely. I do it all the time!
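A session boils down to a time box, a charter, and the notes, bugs and issues produced inside it. The record below is a minimal sketch in Python; the field names loosely paraphrase the headings in Bach's session sheets and are my own naming, not an official format.

```python
from dataclasses import dataclass, field

@dataclass
class SessionSheet:
    """Minimal session record, loosely following the idea of
    session-based test management sheets (field names paraphrased)."""
    charter: str                  # mission for this session
    tester: str
    duration_minutes: int         # the uninterrupted time box
    test_notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)
    issues: list = field(default_factory=list)  # questions, blockers

    def note(self, text):
        self.test_notes.append(text)

# Example: one personal session booked into the calendar.
session = SessionSheet(
    charter="Explore the login form with hostile inputs",
    tester="Sami",
    duration_minutes=90,
)
session.note("Empty password gives an unhandled 500 error")
session.bugs.append("LOGIN-1: 500 on empty password")
```

The point of keeping the record this small is that filling it never competes with testing itself; the charter can be rewritten mid-session if something more interesting turns up.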

Sometimes ET is done in pairs, where test ideas are created together: one performs them and the other documents. The biggest challenge in this approach is getting the pair's chemistry to fit. On many occasions all of the limited time is spent on fitting the pair together rather than on testing. The only exploration done is of the personalities of the people, not the qualities of the software. Actually, the software might not get touched at all while a pair finds its fit. And there's always the skill imbalance: one is always better than the other and thus drags the other along. Of course this can be used to transfer knowledge, train, etc., but it doesn't really contribute to the testing effort.

Then again, if the pair fits, testing can be quite marvellous. Given enough time, almost any two people can learn to work together, however unfit the pair might seem. Unfortunately that time is rarely granted in projects or other assignments. For us consultants that's everyday life, and that's why I wouldn't suggest pair testing to fast-moving consultants, but with internal staff it just might work. Never tried it; hopefully someday I will. And if it works for pairs, it can work for teams, units, organisations...

In part III of this series I will cover some of the dangers of scripted testing. To lay the ground for that, I want to talk about Whittaker's touring analogy...

Those who know anything about testing know who James Whittaker is. The rest can google him. He has written the first book about ET, which - even though criticized and nitpicked by many - I actually like. Whittaker has an unorthodox approach to testing and to ET, quite a controversial one even, mainly because of his touring analogy. He believes that ET can be covered with touring techniques, which are basically the same tricks a tourist might use when visiting a city.

Michael Bolton wrote that ET is not necessarily touring, and that touring is not necessarily ET. I think the problem is the word "touring". Touring is actually taking tours, which can be awfully scripted behaviour with guides, preplanned bus routes, etc. That can be a part of ET, just as scripted test cases can, but it does not describe ET in practice. So why do I like Whittaker's book even though it's about touring? Because by introducing touring it led me to think about the bigger picture...

Tourism!!

Test!
I think that - supported by everything I know about ET - tourism is the best way to describe ET in practice. I mean, imagine experiencing e.g. Helsinki in one week of holiday. Quite an unreasonable amount of work when looked at through the eyes of a tester, huh? But think of it as a tourist who is about to spend a week's holiday trip in Helsinki. Not so unreasonable after all, huh? Actually a week or two is quite a normal time when we take holiday trips anywhere. And it's always totally ok. We plan for the trip, we read reviews about the attractions, learn some of the language, book some trips, and when it's crunch time, we enjoy and we experience. We explore!

And we all have our own ways of doing it. Some may go totally free-form, just wandering around and spending the week that way. Experienced travellers often act this way, while the inexperienced just waste their week. Some might want to hang out in bars, some want to lie under the sun (what if it starts to snow?), some wander in the areas that contain more attractions than others, and one way to experience is to take tours. You can always hop off if your experience meter doesn't go up. Anything is possible while the goal is quite clear; Experience Helsinki in one week of holiday. You can split your plan into days, attractions, parts of town, etc., and while experiencing, your plan might change if and when something unexpected happens. Such is life.

Anything is possible.

Now turn that into software testing. One week's time to find bugs and report them. Bugs equal experiences, and the software equals Helsinki. Spend your time well. Use your knowledge. Use others' knowledge. Take tours if you like, but don't sit on a tour bus or get led by a guide the whole week. Experience! And it goes without saying that a longer time gives more possibilities to find more, and to dig deeper.

Anything is possible.

And then the report. Imagine reporting your holiday. How would you like people to experience what you've experienced? How can you tell what really happened? By introducing graphs, charts, numbers, yadi yadi yada? Or by showing them pictures, telling them stories, sharing your feelings about your week? Remember your audience and think about what they want to know instead of imitating what some line of text in a thick, grey book says. Take a risk and try. You'll be amazed how much respect you gain by showing this kind of initiative.

How would you like people to experience what you've experienced?

Think about it.

In part III I'll go deeper and try to come up with some actual implementation of how to manage this thought process and bring it to benefit testing, preferably via tool examples. Or not! :)

Ok, quote time. This post included some famous names like the Bach brothers, Cem Kaner and Michael Bolton, but I'd like to return to the breaking part, because we testers do break software; we spar with it, kick it and make it tough so it won't die when put in the ring. The quote is from the Google Testing Blog, and it might have a typo, because it feels like a slogan for American investment bankers or something... :)
"If it ain't broke, you're not trying hard enough." -James Whittaker
Yours truly,

Sami "Mickey" Söderblom
