Exploratory testing is all the rage right now, and rightfully so. If used in the right way, this testing technique can be fun and valuable to any tester. However, if not executed properly, it can be a waste of time and quickly garner a negative reputation. I want to help you avoid the latter by giving you all the knowledge I have, or can find on this subject, in one location.
In the ’80s, a test manager by the name of Dr. Cem Kaner coined the term “exploratory testing.” James Bach, another practicing tester at the time, took the term and ran with it. Since then, many others have written about it, offering their own strategies for exploratory testing.
On the surface, exploratory testing is easy to define: testers test on the fly. No test scenarios, no test cases, just good ol’ fashioned poking around. Dig in a bit more, though, and you’ll see it can also be a tool for learning the application, as well as a way to design better tests. Whether you knew the name for it or not, we’ve all been there: executing a test case and thinking to yourself, ‘oh, I wonder what would happen if I did this.’ Because let’s face it, no matter how good your requirements are, or how much you plan your test strategy, you’ll always find an unturned stone if you take the time to poke around. Exploratory testing is helpful because it puts a name to the technique and allows testers to “poke around” strategically. Unlike scripted testing, exploratory testing emphasizes adaptability and learning: what you learn in your first step should inform your next step. Here are some key differences between exploratory and scripted testing.
| Scripted Testing | Exploratory Testing |
| --- | --- |
| Executed from test cases | Executed from a charter |
| Well-defined test cases | No defined test cases |
| Isn’t time-boxed – you test until the scripts are executed | Is time-boxed |
| Typically no post-testing discussions | Session debriefs happen with a test manager |
| Is mostly mechanical – once the scripts are created, you execute them | Requires in-the-moment thinking |
| Sole purpose is executing scripts | Encourages continuous learning and thought |
There are two main things to think through before venturing out on an exploratory testing session: how you will structure your testing, and how you will track what you’ve covered. Answering these two questions up front will help you build a consistent exploratory testing approach your organization will grow to value.
Ways To Structure an Exploratory Test Session
The key to structuring your exploratory test session is finding the sweet spot. Over-structure and you’re left with what are essentially test scenarios. Under-structure and you’ll wander randomly through your application, which leads to shallow testing and may add very little value in terms of defects found.
Here are a couple of running rules I live by when exploratory testing. They are simple but provide me the focus I need:
- Time-box your session – Typically I keep it between 30 and 90 minutes. Anything more than that and I get distracted or my brain is fried. Anything less than that is probably too limited in scope.
- Avoid distractions – Get your coffee, use the restroom, turn off Slack notifications, set your status to Do Not Disturb, or whatever you need to do to avoid being distracted during the session. If something comes up like a surprise meeting, stop the session early.
- Have a Plan – Before you dive in, make sure you’ve detailed your testing goals and how you’ll execute them. See the Session-Based Testing section below for properly developing your plan.
Alright, so let’s dig in and talk about some effective exploratory testing methods you can use to keep your session appropriately structured.
Role-Playing: Role-playing can be a fun way to look at your application from a different vantage point. For example, let’s say you’re testing a consumer banking application. You might have user personas such as new users, users of varying age groups, users who do basic banking, and users who do extensive banking across multiple products of yours, just to name a few. Pick one of those user personas and test your way through the application, using it in the manner you think that user would. Keep in mind what makes that persona specific, and keep that focus throughout your testing. Bonus points if your product team creates personas – those can feed right into this testing strategy.
Freestyle a Feature: Instead of exploring an entire application, you can freestyle specific features, one at a time. Using the banking example, you can limit your testing to only transferring money from one account to another. Remember to not just focus on happy paths!
Design a Soap Opera: Soap operas are known for being dramatic and over the top. It’s not uncommon for someone to get in an accident and develop amnesia right before they were supposed to uncover a lovers’ triangle. This approach focuses on testing the rare edge cases that arise from improbable, compounding circumstances. For example, let’s say you have to be 18 to enter (and gamble on) a gaming site, and you turn 18 right at 12 a.m. At 12 a.m. and 10 seconds, can you enter the site if you’re in EST but the gaming site is hosted out of PST?
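The timezone wrinkle above is easy to reason about in code. Here’s a minimal sketch (the function name, dates, and the 18-year rule are all made up for illustration) showing how a site that computes age in its hosting timezone can answer differently than one using the user’s timezone:

```python
# Hypothetical soap-opera scenario: a user turns 18 at midnight Eastern,
# but the gaming site may compute age using its own (Pacific) local date.
from datetime import datetime, date
from zoneinfo import ZoneInfo

def is_of_age(birth_date: date, moment: datetime, site_tz: str, min_age: int = 18) -> bool:
    """Return True if the user is at least min_age on the site's local calendar date."""
    local_now = moment.astimezone(ZoneInfo(site_tz))
    today = local_now.date()
    # Subtract 1 if the birthday hasn't occurred yet this year in the site's timezone.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= min_age

birth = date(2007, 6, 1)
# 12:00:10 a.m. Eastern on the user's 18th birthday
moment = datetime(2025, 6, 1, 0, 0, 10, tzinfo=ZoneInfo("America/New_York"))

print(is_of_age(birth, moment, "America/New_York"))     # True  - it's already June 1 in EST
print(is_of_age(birth, moment, "America/Los_Angeles"))  # False - still May 31 in PST
```

Ten seconds past midnight, the same user is simultaneously 18 and 17 depending on which clock the application trusts – exactly the kind of compound edge case this tour is designed to surface.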
Testing Tours: My favorite! There are so many options to choose from and you can be as creative as you want. The concept is basically this – because you can’t fit everything into one testing session, you select a tour to help guide you through specific goals.
Let’s explore some popular testing tours that might give a better overall understanding of the concept:
- Amazon Testing Tour: Every year Amazon ships 2.5B packages (Forbes, 2019). That means every day thousands of packages flow through its warehouses, onto its trucks, and get delivered to customers. To ensure everyone gets the correct package, and on time, data management is essential to the process. In this tour, instead of Amazon packages moving through that pipeline, think of your data flowing through your application. For example, if you’re creating a customer account, ensure the billing address is accurately saved to the database and shown on every other screen where billing addresses are displayed.
- Garbage Collectors Tour: A garbage collector has one primary job – visit each house in the neighborhood and pick up the trash. This tour is helpful to do a spot check of every item in a feature. For example, checking that every item in a menu list takes you to the correct page.
- The Money Tour: Every city you travel to has a guidebook that advertises the main points that the city wants you to visit, the “moneymakers”. As such, this tour takes the tester out to explore the primary advertised features of your application.
- Documentation Tour: Some applications give specific documentation to users providing step by step directions on how to complete a task. This tour specifically focuses on that documentation and ensures the steps are accurate and easy to follow.
- The Banking Tour: Most people use some type of banking for online purchases, whether a credit card or a checking account. This tour focuses on the “banking” aspects of your application. Can you buy a product, save your card information, update/delete a card, etc…
- The Crime Spree Tour: As the name suggests, this tour focuses a tester on trying to undermine the system by doing things that a not-so-honest person would try.
Session Based Testing
Session-based testing is an extension of exploratory testing that adds accountability. The concept was developed by Jonathan and James Bach in 2000. Remember when I mentioned above that without the right structure, exploratory testing can turn sour and offer little value? Being able to structure your testing and communicate your findings to your target audience is a large step toward ensuring exploratory testing is seen as a legitimate testing approach.
There are a few elements that make session-based testing unique:
- Mission: The mission identifies the purpose of the session. According to the creators, the mission tells us what we are testing or what problems we are looking for.
- Charter: The charter is the agenda for your test session. The section How to Write a Useful Charter will expand on this.
- Session: A session is the actual time spent testing, ideally time-boxed.
- Session Report: A session report records the test session. The section How to Write a Session Report offers a template on how you can accomplish this.
- Debrief: Each session should result in a debrief between the tester and their team or manager. The section How to Conduct a Valuable Debrief will offer more advice on this.
How to Write a Useful Charter
There are different charter templates floating around the internet. In my opinion, some of them are way too detailed. The goal of the charter is to make sure you have your test session strategy outlined. My favorite template looks like this:
Explore <target>
With <resources>
To discover <information>
Explore <target>: The target might be a specific feature, an area of the application, or an API, etc…
With <resources>: This can identify the tools a tester will use, like Postman. In addition, it can define the approach you’ll be using, like a specific testing tour.
To discover <information>: Do you want to find out if your menus navigate properly? Is a new feature user friendly? Is your website accessible to all? Hint: Read my blog on Accessibility Testing for some great ideas for this!
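To make this concrete, here’s a hypothetical filled-in charter using the banking application from earlier (the feature, tools, and tour named are made up for illustration):

```text
Explore the account-to-account money transfer feature
With Chrome DevTools and the Crime Spree tour
To discover whether transfer amounts and validation rules can be bypassed
```

Notice it fits on three lines – just enough structure to keep the session focused without turning it into a script.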
How to Write a Session Report
Now that you’ve written your charter and understand the primary goals of your session, it’s time to prepare your session report for testing. A session report is the primary spot for notes during testing.
In addition to notes, I recommend incorporating screen captures or, better yet, video recordings. These are extremely helpful for tracking the areas you cover during your session. Video is especially helpful when you find a defect, since reproducing a defect discovered mid-exploration can be tough; with a recording of your every step, reproducing it becomes much easier. Make sure to do a voice-over as you record so you know what you’re looking at later!
A session report can vary slightly with information but here is what I recommend including:
- Your system setup (OS, device, browser, build#)
- Areas/features tested
- Notes on how you tested – did you use any tools, testing techniques such as touring, etc…
- A list of bugs found and steps to reproduce
- Any open issues/questions (production questions, outstanding technical questions, etc…)
- Any videos/screenshots taken
- Session start/end time
- Any time spent investigating things not in the charter
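Pulled together, a session report might look like this – all details below are hypothetical, continuing the banking example:

```text
Charter:  Explore the money transfer feature with the Crime Spree tour
          to discover whether validation rules can be bypassed
Setup:    macOS 14 / Chrome 120 / build 4.2.1
Session:  10:00–11:15 (75 min)
Areas tested: transfer form, confirmation screen, transaction history
How tested:   Crime Spree tour; edited requests in Chrome DevTools
Bugs found:
  - Negative transfer amount accepted when submitted via an edited request
    Steps: open transfer form -> intercept request -> set amount to -100 -> submit
Open questions:
  - Is there supposed to be a daily transfer limit? Requirement unclear.
Off-charter time: 10 min investigating a layout glitch on the history page
Attachments: screen recording of the full session
```

Keeping the report this terse makes it quick to write at the end of a session and easy to scan during the debrief.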
How to Conduct a Valuable Debrief
A debrief is an important step in your exploratory test. It gives you, the tester, an opportunity to walk your team through what exactly you covered and what came of it. It’s a chance to showcase that your exploratory session was in fact structured and valuable.
Typically a test lead or manager participates in the debrief and may have questions of their own. However, here is a good starting point for what you can cover during this time:
- What you tested – areas or features
- How testing was conducted – tools used, methods such as a specific tour
- What bugs were discovered
- What obstacles you encountered
- What your overall impression was
Regulated Industries and Audits
For those of you in heavily regulated industries that require frequent audits, I’m sorry! I haven’t had the pleasure of maneuvering around audits and exploratory testing. So instead of providing advice on something I don’t know much about, I’ll direct you to Sticky Minds, where Josh Gibbs discusses his experience and provides some solid advice on the subject.
Exploratory testing can be a great way for testers to uncover issues that normal test scripts might miss. If executed in a structured manner, it can allow testers to use their skill sets to broaden their test strategy and test further than before. Outlining a charter ahead of time and documenting your testing in a session report will ensure those around you see the value of exploratory testing and support the effort long-term.