Monday, November 12, 2012

Mging ET w/ QC, LOL!!1

Heyhey!

Ok, tons of thoughts and ideas swimming in my head again. I've written lengthy texts about building trust, product risk analysis, cool heuristics like FPS Nausea, test cases, how to gain by losing, gaming testers, etc. All at alpha level still. All complete s**t still. The highest priority is however the recap of my experiences at the recent Rapid Testing Intensive / Critical Thinking courses, held by none other than the almighty James Bach himself.

Well, the highest priority was that. But somehow my approach to managing exploratory testing with QC has gotten some attention lately. Perhaps it's because I'm actually training it now, finally. Perhaps it's because I've ranted about it more on Twitter and other mediums. Or perhaps it's because factory schoolers have grown tired of me ranting about borderline philosophical things about testing and thinking like a tester, and are finally demanding something tangible to mimic.

I don't know.

The actual reason I'm writing this is firstly the interest of some of the testers I greatly respect, namely Maaret Pyhäjärvi, Aleksis Tulonen, Huib Schoots and Paul Carvalho. I've managed ET and teams doing ET with QC for some time now, but I've never actually gotten any peer reviews of my actions. I've roamed relatively blindly and I hope this blog post cures that.

But secondly, and no less importantly, I'm hoping others will get ideas on how to manage ET, whether with QC or with other tools. The idea is basically the same with every tool.

So let's get crackin'!

Note: I made a summary paragraph and it can be found at the very end of this post. So if you're a member of the sales division or otherwise suffer from attention deficiencies, you can skip right to it... ;)


HP Quality Center


[Image: A typical exploratory tester]
I'm not going to explain what ET is in this post. Nor am I going to explain what HP Quality Center is. I'm trying to explain why it's precisely QC that we're trying to use to manage ET now. The idea is preposterous to many, as QC is indeed very costly and cumbersome for the likes of a typical exploratory tester. Exploratory testing is a mindset, and when practicing it the tool should be as light as a thought. QC doesn't quite meet that prerequisite.

But it is still the most dominant test management software out there. It has been sold to so many companies that it's almost a (happy) miracle if you've never used it, assuming you're in the profession of testing, that is. As a consultant I bump into it in nearly every assignment. Perhaps because I have over six years of admin-level experience with it, which is a compelling sales argument, but also perhaps because I can do unorthodox things with it. Anyway, it's everywhere.

The reason why it's precisely QC that we're trying to use to manage ET now is indeed because it's everywhere. I'm basically throwing in the towel and yielding to the might of QC.


Test Plan module


The story starts with test planning. You do this by forming "risk buckets" (I learned this at the RTI course, so thank you, James ;), a number (5-15) of areas that carry fairly equal weight in the eyes of someone who is responsible for releasing a quality product (product owner, project manager, etc.). These areas are what the product is made of.

You should be able to do this fairly quickly. Just imagine dividing the product into logical entities, equivalence classes, groups whose contents share the same qualities. For instance, a human being has equivalence classes like body, soul, movement, looks, personality, etc. They aren't necessarily permanent, so don't be afraid of doing something wrong. You can always change them.

Let's set up a more tangible example: "a computer" and its risk buckets as test cases in the QC Test Plan module:

  • Screen
  • Operating unit (motherboard, processor, etc.)
  • Chassis
  • Main peripherals (keyboard, mouse, etc.)
  • Complementary peripherals (printers, audio, drives, etc.)
  • Type (desktop, laptop, etc.)
  • Looks
  • Performance
  • Stability
  • Claims
  • Competition
  • New features

This took about five minutes, including a bathroom break. It's not a complete list, but it's something that has made the project manager happy. The goal is to give an array of areas of concern. If the screen doesn't work, it can be clearly and quickly indicated via this. If there's no QC, use a low-tech dashboard. Or a mindmap. Or something.

Note: We've scoped the software out of this example.

[Image: A computer?!]
Testing has already begun. By doing this division you can and should ask questions. What requirements, needs, and wants does this computer have to fulfill, i.e. what's it for? What are the key quality criteria? Do I have enough information to do this division? If not, ask!

Testing is asking.

This division can be made in sessions. You can reserve an uninterrupted timeslot in which you try to form these "risk buckets" by asking, contemplating the needs of someone who matters, creating a dialogue with the product itself (intake), surveying it, analysing it, using common sense, applying what you already know about computers and the ecosystem they live in, etc. Just as I finished the last sentence, I got an idea about Competition. This is how it works!

Before moving into the Test Lab module, let's write something into the Description fields of these "test cases" we've just created. That is, something about the nature of the current risk bucket: why it's a separate area and not incorporated into other areas, and why other areas aren't incorporated into it. By doing this thought process (and asking) you grow more confidence in your planning. I'd also recommend writing the key functionalities, and the complementary ones too, into the description to give more confidence and insight about the current area to you and to those you are reporting to. But don't go crazy. There's a fine line between this and stepping into the abyss of plan-driven testing, writing ten thousand test cases. You'll get to write test cases if you want, but only as a test log. Not as a plan. Explanation here.

You can also attach files to these "test cases". They can be requirement documents, specs, drafts, brainstorming notes, whatnot. They can also be linked to requirements and releases in other modules. But don't create too complicated a structure, as it needs to remain able to vary as much as possible.
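
If you have a lot of risk buckets, or several projects to set up, you don't have to click them all in by hand. Below is a minimal sketch, not a recipe, of doing the same through QC's OTA COM API with pywin32. It assumes a Windows machine with the QC client installed, and the server URL, credentials, the "Risk buckets" subject folder, the field name and the SetField call are placeholders that may differ per QC version and project customization.

    # A sketch only, under the assumptions stated above. Verify the field and
    # method names against your own QC version before trusting this.
    import win32com.client

    BUCKETS = {
        "Screen": "What the user sees: resolution, dead pixels, brightness...",
        "Chassis": "Casing, ports, physical robustness...",
        "Performance": "Speed and responsiveness under typical and heavy load...",
        # ...and the rest of the risk buckets
    }

    qc = win32com.client.gencache.EnsureDispatch("TDApiOle80.TDConnection")
    qc.InitConnectionEx("http://qcserver:8080/qcbin")   # placeholder URL
    qc.Login("user", "password")                        # placeholder credentials
    qc.Connect("DOMAIN", "PROJECT")

    # Assumption: a subject folder called "Risk buckets" already exists in Test Plan.
    folder = qc.TreeManager.NodeByPath("Subject\\Risk buckets")
    factory = folder.TestFactory

    for name, description in BUCKETS.items():
        test = factory.AddItem(None)
        test.Name = name
        # SetField is how the generated pywin32 wrapper exposes QC's Field
        # property; if your wrapper differs, fill the description in the UI.
        test.SetField("TS_DESCRIPTION", description)
        test.Post()

The later sketches in this post reuse this connected qc object.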


Test Lab module


In the previous chapter we drifted into using sessions. Let's do that more. I won't explain the subtleties of Session-Based Test Management here, but in short they are just - as said before - uninterrupted timeslots in which you do testing, separated by retrospectives.

In QC's Test Lab module you create "test sets", which, when managing ET, and more precisely in session-based style, are sessions. One "test set" is one session, which holds a number of previously planned risk buckets aka "test cases". I've used all my creativity to come up with clever titles for these sessions:

  • Session 1
  • Session 2
  • Session 3
  • Etc.

:)

[Image: A session]
Ok, sometimes I use titles like "10.11.2012, session 1, 30mins", but not that often. I haven't used a descriptive title taken from the charter the session tries to follow. The reason for this is that several charters are often used in one session. I prefer attaching charter titles to the "test set" either via file attachments or field entries, tags. Tags are cool, because filters catch them nicely. If however the charter is verbose, use a file attachment and some keywords as tags.

And I put the "test sets" into a folder structure as follows:

  • Week 1
    • Monday
      • Session 1
      • Session 2
      • Etc.
    • Tuesday
    • Etc.
  • Week 2
  • Week 3
  • Etc.

Not really rocket science... :)
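
The folder skeleton and the empty sessions can also be scripted if you have to set them up repeatedly. This is just a sketch continuing from the Test Plan snippet above, with the same OTA/pywin32 assumptions, the same connected qc object, and placeholder names:

    # Sketch: build "Week 1" with weekdays and empty sessions in Test Lab.
    days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
    sessions_per_day = 3

    root = qc.TestSetTreeManager.NodeByPath("Root")   # Test Lab paths start at "Root"
    week = root.AddNode("Week 1")

    for day_name in days:
        day = week.AddNode(day_name)
        ts_factory = day.TestSetFactory
        for i in range(1, sessions_per_day + 1):
            session = ts_factory.AddItem(None)
            session.Name = f"Session {i}"
            session.Post()
            # Risk buckets ("test cases") are then picked into the session,
            # e.g. session.TSTestFactory.AddItem(bucket_test) for each bucket.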

Ok, there can even be just one "test case" in one "test set". That is totally ok. As it is ok to have all "test cases" in one "test set". It all depends on the charter, the story you are supposed to follow in this session. But the meat is in the attachments. The session gives you the time slot, the charter gives you guidelines, risk buckets focus your testing and attachments are the actual testing report. You've built everything needed to manage ET in QC, so now it's time to start doing some testing.

Well, actually you've done testing all along. Hopefully you've asked a lot of questions to get to this point, and found some bugs too. That's what they are paying you for. ;)

Now, it's irrelevant what kind of reporting you do, as long as you do it and as long as it gives someone who is responsible for releasing a quality product insight into the quality of the product in the areas you've been testing. Oh, and it should be attachable to either "test cases" or "test sets". You can attach Rapid Reporter reports, XMind image exports, notepad memos, Excel sheets, etc. I mentioned tags. Use them. Use fields. Write into them abstractions of the more elaborate contents of the attachments. Think about reporting only the status of the risk buckets, but being ready to back your story up if the decision makers want more info.
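
Attaching the session's report and tagging it can also be scripted. Another sketch with the same OTA/pywin32 assumptions as before: session is a TestSet object like the ones created above, the file name is a placeholder, and "CY_USER_01" is a hypothetical user-defined tags field that your QC admin would have to add before this works.

    import os

    TDATT_FILE = 1   # OTA constant for a file attachment

    # Attach the session report (Rapid Reporter export, mindmap image, etc.).
    att = session.Attachments.AddItem(None)
    att.FileName = os.path.abspath("RapidReporter_2012-11-10_session1.csv")  # placeholder
    att.Type = TDATT_FILE
    att.Post()

    # Tags go into a user-defined field; "CY_USER_01" is hypothetical and has
    # to exist in your project customization.
    session.SetField("CY_USER_01", "charter:startup; rapid-reporter; nothing-to-report")
    session.Post()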


Passed vs. Failed


Then there are those Passed/Failed/etc. statuses. The first thing to do is to make sure that "Passed" doesn't mean approval for release. It's only an indicator that nothing was found by this tester/these testers in this context. "Failed" is also a bit of a strong statement, because someone might consider it a total rejection, and there are still those who consider it a deviation from expected results, no matter what. It would be better if you could replace these statuses altogether. An admin can do this. I'd use these statuses:

  • Nothing to report
  • Need to discuss
  • Bug
  • Incomplete

Please note that you must be able to explain these. If you set one to the status "Incomplete", use for example tags to explain yourself. "Bathroom break" or such. Ok, that's the second bathroom-related comic relief. I probably have to visit one soon... ;)
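
Once the statuses have been replaced, they can be set from a script as well as from the UI. A small sketch with the same caveats as the earlier ones: session is a TestSet object, and the instance-name check and the Status property are assumptions that may need adjusting for your QC version.

    # Sketch: flip one risk bucket in a session to a custom status.
    instances = session.TSTestFactory.NewList("")
    for i in range(1, instances.Count + 1):
        inst = instances.Item(i)
        # Instance names often look like "[1]Screen", hence the "in" check.
        if "Screen" in inst.Name:
            inst.Status = "Need to discuss"
            inst.Post()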


Depth analysis


[Image: Done!]
Then there's the depth analysis I often use when managing ET with mindmaps. It basically indicates how deeply you've sunk yourself into the risk bucket, how deeply you've tested the area. There are three levels:


  • One star. Superficial inspection. Doesn't require any testing expertise, but that doesn't mean that bugs are not found. Actually the most serious ones are often the easiest to find.
  • Two stars. Tested as well as it's possible within these time and resource constraints.
  • Three stars. Additional testing wouldn't bring any added value. The tester has bled his heart into this and has no ammo left.

If you've played Angry Birds, you should get this immediately. That's actually my inspiration for using this. Yoink! :)


Time division


A lot of people consider measuring time division to be very important. That means that you examine how you divide your time in testing: testing the product, bug reporting, maintenance activities, reading requirements, etc. I consider it to be very important too... but only when you're not in control of your time division.

Let me explain.

Conventional ways to do and manage testing try to maximize activities around testing: planning, preparations, defining processes, whatnot. In Finland we have this proverb, "well planned is half done", and we always joke about it when things go awry. But somehow many don't recognize the sarcasm behind this proverb. Over and over again people struggle with time estimations and deadlines, because they focus their work wrongly. They fiddle with irrelevancies. Ok, some level of activities around testing is necessary, but not to the extent that the product itself is tested poorly.

In this light it's necessary to monitor time division, to make sure that actual testing gets enough time. A good way to start is to divide your time into three categories:

  • Testing the product
  • Bug reporting
  • Supporting activities

QC does create time-stamps, but when using it as lightly as I've so far described, they don't really work. Instead use Rapid Reporter or a similar logging tool, or be consciously aware of the time you've spent on each of these activities and write it into the fields. It could be wise to create a different field for each of them if you want to filter them out separately and create monitoring for your time division.

It's not accurate mathematics, but it gives you an insight of where your time goes and a possibility to create a conscious choice on shifting the focus.
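
If you want something concrete to lean on, here's a minimal, QC-independent sketch of such monitoring: call switch() whenever you change activity during a session, call summary() at the end, and copy the minutes and percentages into the fields. The three category names are just the ones above; adjust freely.

    from datetime import datetime

    class SessionTimer:
        """Tracks how session time splits across activity categories."""

        def __init__(self):
            self.totals = {}      # category -> seconds spent
            self.current = None   # category currently running
            self.since = None     # when it started

        def _close(self, now):
            if self.current is not None:
                elapsed = (now - self.since).total_seconds()
                self.totals[self.current] = self.totals.get(self.current, 0.0) + elapsed
                self.current = None

        def switch(self, category):
            """Close the ongoing activity (if any) and start a new one."""
            now = datetime.now()
            self._close(now)
            self.current, self.since = category, now

        def summary(self):
            """Stop the clock and print minutes and percentages per category."""
            self._close(datetime.now())
            total = sum(self.totals.values()) or 1.0
            for category, seconds in self.totals.items():
                print(f"{category:25s} {seconds / 60:6.1f} min  {100 * seconds / total:4.0f} %")

    # During a session:
    # timer = SessionTimer()
    # timer.switch("testing the product")
    # ...found something worth writing up...
    # timer.switch("bug reporting")
    # ...
    # timer.summary()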

Then again, if there's no problem in your and your team's time division, I wouldn't force you to do this. You see, this thing swings both ways. If you get really neurotic about where your time goes, all time is spent on that. Plus it creates another metric to be paranoid about. What if you have to go to the bathroom in the middle of a session and it creates an inconsistency in the time stamping? I've actually heard about people who've gotten fired because their QC test runs have lasted too long. Crazy doesn't quite cover what that is.

Use f***ing common sense!


Defect management


You all know how to write bug reports, right? Actually I have another blog post coming about that, so I'll keep this light.

[Image: Something bugging you, sir?]
If you find a bug, write a report about it and then attach it to a "test case", a risk bucket. That creates a link that can be used in reporting. Please note that bugs are not just deviations from expected behaviour. If you need to say something about the risks that may bug the users of this product, the people responsible for it or other important people, by all means say it. Providing information is what testing does.

This is a brilliant presentation about what a bug is. Watch it!


Reporting


At the moment I don't have access to QC, so I won't be able to create an actual report from it. There are a number of things you could report when managing ET with QC. I'd create two kinds of reports:

  1. A dashboard about risk bucket statuses from the Test Plan module. Test Plan shows the latest status of the "test cases". No matter how many sessions you have going, how many risk buckets you have picked into "test sets" or how many of them are under testing, it's the latest info. This also has a downside: if a tester finds a bug and puts a "test case" into Failed state, and a heartbeat later another tester puts that same "test case" (possibly in another "test set") into Passed state, the dashboard says it's Passed. So keep your eyes open! History data helps with that.
  2. A session report from the Test Lab module with links to bugs (or even requirements and versions/releases). Sessions have risk buckets under them, colored with statuses, depth analysis, time division, bug headlines and other supporting info, etc. Attachments have to be included separately, but create a script for this and you're solid (a rough sketch follows right after this list).
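
The "script" mentioned in point 2 could start out as something like the sketch below, with the same OTA/pywin32 assumptions, connected qc object and placeholder paths as earlier. It only collates names, statuses and attachment titles into a plain-text session report; actual attachment downloading varies by OTA version, so it's left out here.

    # Sketch: walk one day's folder in Test Lab and print a session report.
    day = qc.TestSetTreeManager.NodeByPath("Root\\Week 1\\Monday")
    test_sets = day.TestSetFactory.NewList("")

    for i in range(1, test_sets.Count + 1):
        session = test_sets.Item(i)
        print(f"== {session.Name} ==")

        attachments = session.Attachments.NewList("")
        for a in range(1, attachments.Count + 1):
            print(f"  attachment: {attachments.Item(a).Name}")

        instances = session.TSTestFactory.NewList("")
        for t in range(1, instances.Count + 1):
            inst = instances.Item(t)
            print(f"  {inst.Name}: {inst.Status}")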

And don't let the decision makers get used to the reporting in a way that makes it a routine that can be ignored. Observe whether they're making decisions based on the information you give them, and if not, remind them of this. Be active and create a dialogue to support the info coming from the tools. This is how you create value.


Summary


For those who didn't have the tenacity to read the whole thing through, here's a summary:

  1. Create risk buckets i.e. "test cases" in Test Plan.
  2. Gather these "test cases" into sessions i.e. "test sets" in Test Lab.
  3. Guide your testing with charters. Attach them to "test sets".
  4. When doing your testing, do it with the tool you like to use. Export results and attach them to "test cases" in Test Lab. Use tags to help filtering.
  5. Report bugs and link them to "test cases" in Test Lab.
  6. Report via dashboard in Test Plan or via session reports in Test Lab.
  7. Smile!

If you want details, read the damn thing! :D

You can also easily use mindmaps in a similar fashion. I'll conjure a mindmap of all this and attach it here shortly. I will update other screenshots and stuff as I manage to create them. In the meantime you can always contact me (sami.soderblom@gmail.com) for more info. I also welcome all kinds of challenges, because I've worked like this for a while, blindly perhaps. There might be better ways to do things and I'd like to know about them.

Quote time! A mile ago I gave you some reasons why I wrote this post. Here's one more:

"You can't manage ET with QC!!" -Unknown

Yours truly,

Sami "QC Sami" Söderblom

3 comments:

  1. Nice blog post, Sami. I would like to hear some opinions from a person who has done ET with HP Sprinter. Have you tried it, Sami? That could help with documenting ET with QC, if I have understood Sprinter's abilities correctly.

    Managing ET with QC is "doable", as you posted, even if QC doesn't quite support it. You could have advised people to use links instead of attaching files :)

    - Antti

  2. Hi Antti,

    And good to hear from you! Long time no see! :)

    I haven't used HP Sprinter myself, but I could imagine that it goes well with this. I'd also like to hear experiences of that, from someone other than a salesman. ;)

    Links work well when the attached material changes. For instance, for requirements documentation, specs, etc., they are in order. But when you're doing testing and attaching reports to "test cases" and "test sets", it's always a unique situation that depends entirely on the context. This material does not and should not change, so it can be attached as a file. If, on the other hand, you create an archive of test reports on a network drive, you could easily replace QC with mindmaps, or even Excel sheets, and save a load of money.

    Sami
