In my previous post, Exploratory Testing Hands-On, I wrote about thinking and how I set myself into a certain mental state that helps me do ET. It sparked some discussion and even a vision for business. Nice!
And now it's time to go deeper, namely to contemplate some of the actual methods and tricks that are often used when doing ET, even by me. Please note, however, that - as Cem Kaner and James Bach have said - exploratory testing is more a mindset, or a way of thinking about testing, than a methodology. So session-based testing, pair testing, Whittaker's touring analogy, etc. aren't ET per se, but merely means that are often useful when doing ET.
So let's get at them then!
The most important thing for me when doing ET is my library of ideas. Each tester should have their own individual library of ideas, fitted to their own way of doing testing, influenced by their experiences and expertise AND shared with everyone involved so that these libraries can evolve in collaboration. Many people sit on their libraries and show them only to impress people, but that's low and immoral.
Sharing is caring.
A great source for these tips and tricks are James Whittaker's books How To Break Software and How To Break Software Security. Don't want to break stuff? Then remind yourself of what James Bach said:
“Testing is a quest within a vast, complex, changing space. We seek bugs. It is not the process of demonstrating that the product CAN work, but exploring if it WILL.”

Black hat, baby!
Our strength as testers comes from our differences. We all have our own set of ideas that we use when doing testing. It may change, quite rapidly even, when situations change, and it's always very personal. That's why I consider plans written for others to be taken with a grain of salt; at a higher level it works, but at the detailed-instruction level, no way. When planning at a detailed level it's quite impossible to know what kind of situation the plan will be executed in, and perhaps by whom.
In short: detailed planning is personal. More about this in part III. Perhaps.
Ok, about the higher level stuff then. Often when doing ET, people talk about charters. A charter is a goal or agenda for testing, and it can be updated at any time if the situation demands it. It holds the mission statement, areas to be tested, and other important information, and it can be fueled by pretty much anything: requirements, specifications, use-cases, test plans, previous defects, etc. The scope of a charter can vary greatly, but as with anything, use common sense. Don't put everything in it, only the things you're about to test. Sometimes even the requirements, use-cases, etc. themselves work as charters, and can be updated as such.
Remember: requirements, use-cases, specifications, etc. are to be tested too.
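To make the charter idea concrete, here's a minimal sketch of what one might look like. The product, areas, defect numbers, and timebox are all hypothetical examples, not a prescribed template - shape yours to your own context:

```
CHARTER: Explore the checkout flow for payment-related problems
Mission:  find bugs that could lose or corrupt a customer's order
Areas:    payment form validation, currency handling, order confirmation
Fuel:     requirements doc, use-cases, defects from the previous release
Timebox:  90 minutes
Out of scope for this session: performance, localisation
```

The point is that it's short, it states a mission rather than steps, and any line of it can be rewritten mid-session if what you find demands it.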
Sometimes ET is done in pairs, where test ideas are created together: one performs them and the other documents. The biggest challenge in this approach is making the pair's chemistry fit. On many occasions all of the limited time is spent on fitting the pair rather than testing. The only exploration done is of people's personalities, not the qualities of the software. Actually, the software might not get touched at all while a pair fits itself. And there's always the skill imbalance too: one is always better than the other and thus drags the other along with him/her. Of course this can be used to transfer knowledge, train, etc., but it doesn't really contribute to the testing effort.
Then again, if the pair fits, testing can be quite marvellous. Given enough time, any person can learn to work with another, however unfit the pair might seem at first. Unfortunately, that time is rarely granted in projects or other assignments. For us consultants it's everyday life, and that's why I wouldn't suggest pair testing to fast-moving consultants, but with internal staff it just might work. Never tried it; hopefully someday I will. And if it works for pairs, it can work for teams, units, organisations...
In part III of this series I will cover some of the dangers of doing scripted testing. To lay the ground for that, I want to talk about Whittaker's touring analogy...
Those who know anything about testing know who James Whittaker is. The rest can google him. He has written the first book about ET, which - even though criticized and nitpicked by many - I actually like. Whittaker has an unorthodox approach to testing and to ET, quite a controversial one even, mainly because of his touring analogy. He believes that ET can be covered with touring techniques, which are basically the same tricks a tourist might use when visiting a city.
Michael Bolton wrote that ET is not necessarily touring, and that touring is not necessarily ET. I think the problem is the word "touring". Touring is actually taking tours, which can be awfully scripted behaviour with guides, preplanned bus routes, etc. That can be a part of ET, just as scripted test cases can, but it does not describe ET in practice. So why do I like Whittaker's book even though it's about touring? Because by introducing touring it led me to think about the bigger picture...
Anything is possible.
Now turn that into software testing. One week's time to find bugs and report them. Bugs equal experiences and the software equals Helsinki. Spend your time well. Use your knowledge. Use others' knowledge. Take tours if you like, but don't sit on a tour bus or get led by a guide the whole week. Experience! And it goes without saying that a longer time gives more opportunities to find more, and to dig deeper.
Anything is possible.
And then the report. Imagine reporting your holiday. How would you like people to experience what you've experienced? How can you tell them what really happened? By introducing graphs, charts, numbers, yadi yadi yada? Or by showing them pictures, telling them stories, sharing your feelings about your week? Remember your audience and think about what they want to know instead of imitating what some line of text in a thick, grey book has said. Take a risk and try. You'll be amazed how much respect you gain by showing this kind of initiative.
How would you like people to experience what you've experienced?
Think about it.
In part III I'll go deeper and try to come up with some actual implementation ideas for how to manage this thought process and bring it to benefit testing, preferably with tool examples. Or not! :)
Ok, quote time. This post included some famous names like the Bach brothers, Cem Kaner and Michael Bolton, but I'd like to return to the breaking part, because we testers do break software; we spar with it, kick it and make it tough so it won't die when put in the ring. The quote is from the Google Testing Blog, and it might have a typo, because it feels like a slogan for American investment bankers or something... :)
"If it ain't broke, you're not trying hard enough." -James WhittakerYours truly,
Sami "Mickey" Söderblom