Wednesday, January 11, 2012

Exploratory Testing Hands-On - Part III

Heyhey!

In my previous post about Exploratory Testing Hands-On I dug even deeper into the thought process around ET and introduced some ways of testing that go very well with it. And now it's time to get down to brass tacks, namely to introduce the actual way I manage ET with the help of test management tools.

It's quite irrelevant which tool you use, but in almost every assignment I've been on, HP Quality Center has been the organization's tool of choice, and this of course has an influence on my output. In the first post of this series I stated that many consider QC to be the worst thing that ever happened to the field of software testing. Of course it's extremely bloated, heavy and cumbersome to use, but even it can be used in a lightweight fashion that suits ET. Those complaining that QC cannot be used in ET don't know how to use it. Period.

Ok, time to stand behind my words then.

Many of even the most influential people in software testing consider detailed planning to mean test cases, with detailed steps on how to run the test AND an expected result, so that anything that deviates from it can be reported as a bug. Even the majority of those who hire me think so. There are certifications that encourage this thinking. Courses, seminars, even whole companies embrace it. Even the best-selling test management tools have been designed to serve this purpose. So it's actually professional suicide to think otherwise. Or is it?

In the consulting business test cases are often considered deliverables, a currency of sorts. You are expected to deliver a certain number of test cases, and it can even be written into the contract between the customer and the company providing the testing consultancy, of course by those who don't know better. And when this mass of test cases is delivered, it's considered to cover the area under test 100%, or by the more "enlightened" 98%, because once in some random seminar they heard that everything cannot be tested...

I felt sick when writing that.

How many test cases?
So let's cure my sickness and, more importantly, yours. Pick any number and try to fit your set of test cases into it. Is 10 good? 100? 1000? 12765? Was that too much? How about a million? Does that cover everything? Also non-functional areas like performance, security, usability, testability, etc.? Did you consider tester mindsets? You certainly considered your domain, so was it some standalone software? A web app? SAP? Mainframe? Consider every possible combination, let your mind run free. Do you have doubts about your test coverage now? Could you come up with more test cases? How many do you need? Out of what larger number is your test case count taken? A million? A billion? A quintillion? How big is the universe? Ok, you could cover even the universe, but in what time?
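
To make the explosion concrete, here's a back-of-the-envelope calculation in Python. The numbers are purely hypothetical:

```python
# Made-up numbers: even a modest feature explodes. Say a form has
# 10 input fields and we pick only 6 "interesting" values for each:
fields = 10
values_per_field = 6
combinations = values_per_field ** fields
print(f"{combinations:,}")  # 60,466,176 combinations, before you even touch
                            # timing, platforms, data states or tester mindsets
```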

Do not count your test cases!!

I once stated that the more you know about testing, the more you understand that it's infinite. But what to do in the face of this overwhelming awesomeness? Writing test cases is totally ok, but don't consider them the only tool for testing. You have an endless amount of means to harness your creativity for the benefit of testing. You just have to find them. I like mindmaps. This way I can create a vast tree structure which visualizes everything at one glance but which can offer remarkable depth when taking one branch at a time. You can use colors to make it more visual. Some even use 3D models and sound to aid thinking and accessibility to ideas.

I use checklists too! Some things just have the kind of nature that requires me to use checklists. I remember when I once tested video surveillance software: on the user level I tested mainly from the gut, following the fluency of the video, color balance, audio sync, etc., and reported more verbally, while the more technical things, like inter and intra frame consistency, more precise audio sync, performance and automating repetitive tasks, were all based on detailed checklists and were reported in a more detailed fashion, stating specific metrics like delays in microseconds. This was about seven years ago, and I'm still doing things like this. It has never failed me or those who expect me to do good testing.
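
If you want to play with the tree idea in code, here's a minimal sketch; the structure and names are my own invention, not any mindmap tool's format:

```python
# A minimal sketch of a mindmap-style idea tree. Everything here is
# hypothetical; no real tool's format is implied.
from dataclasses import dataclass, field

@dataclass
class IdeaNode:
    """One branch of the mindmap: a test idea or a grouping of ideas."""
    title: str
    children: list["IdeaNode"] = field(default_factory=list)

    def add(self, title: str) -> "IdeaNode":
        child = IdeaNode(title)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        # The whole tree at one glance, one branch at a time in depth.
        print("  " * depth + self.title)
        for child in self.children:
            child.show(depth + 1)

root = IdeaNode("Video surveillance software")
user = root.add("User level (gut feel)")
user.add("Fluency of the video")
user.add("Color balance")
tech = root.add("Technical level (checklists)")
tech.add("Inter/intra frame consistency")
tech.add("Audio sync in microseconds")
root.show()
```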

The more I've come to know about testing, the more I've come to understand that it's totally ok not to cover everything. Yeah, it's the infinite part. :) Just make sure that you spend your time well and test the areas considered important by those who matter.

I mentioned ideas there. Don't think of your means of testing as test cases but as ideas. It's funny how those to whom you report leave your "test cases" alone when they are presented as ideas. Ideas are often considered something that cannot be measured. When you have a set of ideas, it's automatically understood as part of something larger, an infinite concept that cannot be contained: a universe. How would you report this? How would you require this to be reported?

Test architecture
Of course many will require you to produce test cases. Give them the higher level of planning as test cases. What I mean by this is that when you plan your testing, you'll probably come up with some kind of logical structure with different levels. In one of my previous posts I wrote about granularity: I live on Earth, in Europe, Finland, Helsinki, Lauttasaari. All different levels. Sense what is required of you and report at the level that is most suitable. Some want the "part of town" level, some the "country" level, but I guarantee that those who make decisions, i.e. those who matter, don't have time to follow what happens at the lowest level where your ideas lie. This is how you manage your test architecture.
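
As a toy illustration (entirely my own, hypothetical naming), picking the reporting level is just trimming a path to the depth your audience cares about:

```python
# A toy sketch of granularity levels; the path and depths are made up.
PATH = ["Earth", "Europe", "Finland", "Helsinki", "Lauttasaari"]

def report(path: list[str], depth: int) -> str:
    """Trim a full path to the level a given audience cares about."""
    return " > ".join(path[:depth])

print(report(PATH, 3))  # decision makers: Earth > Europe > Finland
print(report(PATH, 5))  # testers get the whole path down to Lauttasaari
```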

Long story short: don't use your tool's (QC or whatever) test step level. Create your "part of town" level, attach your ideas, i.e. mindmaps, Excel sheets, video/audio recordings (very popular in usability testing), schematics, session reports and even use cases, specs, requirements, etc., to it, and create an audit trail. Unfortunately the majority of tools offer only Pass/Fail selections. A session report or a similar story can give a lot more information than Pass or Fail, but you can spark discussion by setting the "part of town" level (or an even higher level) to Fail. Ok, developers, vendors, etc. may start crying when you do this, but make sure that everyone understands that when working this way, Fail is not there to decline anything but to grow discussion about a situation the testers found suspicious. To inform about the risks that may persist.
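
Here's a hedged sketch of what that rollup could look like; the node names and status logic are illustrative only, not QC's actual data model:

```python
# A sketch of rolling idea-level findings up to a reportable node.
# Names and logic are invented for illustration; this is not QC.
from dataclasses import dataclass, field

@dataclass
class TestNode:
    name: str
    attachments: list[str] = field(default_factory=list)  # mindmaps, session reports...
    suspicious: bool = False  # did testing raise a concern here?
    children: list["TestNode"] = field(default_factory=list)

    def status(self) -> str:
        # "Fail" here doesn't decline anything; it flags a discussion point.
        if self.suspicious or any(c.status() == "Fail" for c in self.children):
            return "Fail"
        return "Pass"

checkout = TestNode(
    "Checkout",  # a "part of town" level node
    attachments=["checkout-mindmap.mm", "session-2012-01-11.txt"],
)
checkout.children.append(TestNode("Payment retry", suspicious=True))
print(checkout.status())  # Fail -> time to talk about the risk
```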

Pretty
More statuses than just Pass and Fail would be wonderful. Colors and Facebook-like thumbs-ups are even better. And at this point stop talking about "part of town" levels and start talking about, say, features. All software/system features fall under categories that fall under higher categories that fall under higher categories... i.e. you create your test architecture, which gives structure and can be applied even to teams. Just recently I used this approach with my team of 20+ testers: 11 high-level nodes (the actual reporting level to management), under them tens of subnodes (internal reporting), under them even more subnodes (more detailed internal reporting if needed), etc., and eventually thousands of ideas on how to test the vast body of systems. There's a sketch of the reporting side below.
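
Here's what that reporting could look like in code, with invented names and counts:

```python
# A sketch (names and counts invented) of how a layered test architecture
# doubles as a reporting structure: management sees only the top nodes.
from collections import Counter

# feature path -> number of test ideas attached at the bottom level
ideas = {
    ("Billing", "Invoicing", "PDF export"): 42,
    ("Billing", "Invoicing", "Rounding"): 17,
    ("Reporting", "Dashboards", "Refresh rate"): 63,
}

top_level: Counter[str] = Counter()
for path, count in ideas.items():
    top_level[path[0]] += count  # roll idea counts up to the reporting level

for node, count in top_level.most_common():
    print(f"{node}: {count} ideas")
```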

It's really simple. And simple works!

Ok, QC and other tools like it are still cumbersome as hell, but this way they don't cripple you as a tester. Remember: you test, you don't run massive sets of verifications. That's for machines. This approach frees you and your mind to the benefit of testing, and as the planning is done at the "part of town" level (there it is again! :) it saves you an immense amount of time and you can get cracking right away.

Beautiful!

I could write and talk about ET and testing in general until the world's end. Let's however wrap this series up and start something else. I bet many test case enthusiasts are a bit angry about what I wrote, and some ET people definitely are too, because I don't follow their example to the dot, so more explaining might be in order. Writing this series also produced a lot of new material as drafts, so perhaps I'll start with those.

Or not. :)

Quote time! For a long time I followed this phrase quite blindly and altered my behaviour as a tester to fit the mold I considered it to represent. But as with many undying phrases, there's more to it, and only recently have I started to really understand what it means. Let's see if you do... ;)
“If you can't measure it, you can't manage it.” -Peter Drucker
Yours truly,

Sami "Colored Monkey" Söderblom

4 comments:

  1. Hi Sami,

    Very interesting trilogy!

    I am boiling down the ideas you have provided and I find a lot of similarities in the way we think. Refreshing text and a nice personal touch!

    By the way, test cases are a great tool, but their misuse is too easy. This seems to be the case with many tools in testing. Just like with QC as you pointed out.

    Good read!


    Best regards,
    Jari

  2. Hi Jari,

    And thank you! Great minds think alike... ;)

    Seems that my blogs are read by those (e.g. you) who already know these things, while others are too busy because their time is spent on poor practices. Test cases, QC and even ET are all prone to misuse. My heartfelt wish is that I could bring the joyous message of ET, this trilogy and the world around it into use more extensively. Our friends in the trenches deserve that.

    It's hard work, but I'm glad I'm not alone. :)

    Sami

  3. Actually QC does allow more statuses than just Pass/Fail, and they can be adjusted from the admin menu. Via workflow scripting you can even use colors, thumbs-ups, etc.

    Enterprise world, here we come! \o/

  4. Ok, I was a bit hasty there when I wrote that "features" cover everything. They don't, because they are actually just one part of functional coverage, and there are many other areas too: human behaviour, data/domain, platform, etc. Bad wording, sorry.
