After the unexpected success of the series “Hindsight lessons about automation”, I think I need to spend more time in the area where my expertise and passion really lie, and that is exploratory testing, or as I will refer to it further in this series, simply exploration.

Why do so? I think I sent a few mixed signals to certain individuals – I had to reject a couple of automation consulting requests, because people got the wrong impression that I am an automation expert. Although I am deeply honoured by such a qualification, I can’t honestly pretend to be one. I am a self-taught hobbyist who simply has the balls to have an opinion and a blog to share it; as for my skill level, I consider my knowledge the desired minimum. Anyway, I don’t support the tool-centric view of automation, and I think most problems people have with automation come from the fact that they don’t understand testing well enough. That said, “Hindsight lessons about automation” sidetracked from my primary goal of building the missing link between testing and automation too far into the technical side of automation. Its main purpose was to demonstrate what is worth automating in order to be helpful to testing, so there’s a need for another series that demonstrates what is important from a testing perspective and what automators need to know about the goals and essence of expert testing.
I decided this series will be called “Hindsight lessons about exploration”. It will also be useful for all those who are interested in the exploration part of testing and tired of being asked to follow strict scripts when no actual benefit is visible from doing so.
Why exploration and not exploratory testing?
I have been teaching exploratory testing for almost two years, and I tend to see a pattern of people treating it as some different, hipster-ish, trendy way of doing “manual testing”, and this is a problem. I can’t work with that. Having any <adjective> appended to the word “testing” makes it look like something totally different, something new, something special, something unlike the “regular” testing. And this is wrong – testing is testing. It makes sense to use adjectives when we describe different types of parafunctional testing, e.g. performance testing, security testing, usability testing; these are testing activities with a specific focus. When we speak about exploration, we are speaking about a core asset of testing, something testing cannot do without. In other words, there is no testing without exploration; using a term like “exploratory testing” is like saying “meaty steak”.
Exploration is an integral part of testing, and it very often gets omitted, or people pretend we can replace it or work around it by following very strict scripts or formal documentation. The aim of this series is to prove this wrong.
Getting rid of garbage verbiage
The software testing craft is full of malformed terminology and half-truths that we take for granted and, more importantly, use as a foundation to develop our craft.
Normally this is the part where people start arguing:
- The language we use is not important, this is nit-picking
- You are not right, you are trying to redefine terms and invent language
- I don’t care what you think, it simply doesn’t align with my beliefs, so I will just repeat you are wrong
The problem with malformed terminology is that it does not refer to the essence of the testing craft, but normally to only one aspect of it, most often test execution. I will focus only on the two big ones that really destroy understanding about testing.
- There’s no “manual testing” – given that human intellect is the essence of testing, a term like “manual” testing sounds ridiculous at best.
That doesn’t mean there’s nothing manual in testing; we perform tons of manual tasks, but that doesn’t mean testing itself is manual – that is a terrible, dangerous oversimplification.
Some folks like to use the term “manual” to distinguish human-performed tasks from automated ones; if you do so, please don’t.
The general definition of “manual testing” among new testers and non-testers is some sort of low-cost, incompetent human activity of following scripts, a legacy from a long time ago. If you don’t trust me, there are millions of examples all around us in the testing community – just listen to people discussing whether they are manual or automation testers, or how to move from the first to the second to be more efficient. If you are interested in my position on the question of manual testing, take a look here.
- Analogously, there’s no “test automation” – if we want to be precise, we don’t automate testing. We automate tasks, certain tasks that are part of testing, that’s true; you can also find the terms “automated checking” or “automation in testing”, and all of these make sense, but we don’t automate the full process and performance of testing.
“Automated testing” is another term that screws everyone up, giving the false impression that automation is some sort of superior form of testing and that any tester should be an automator, otherwise they suck. Wrong – automation is simply a different type of testing, and it’s just as important as exploration, as creating good test scripts, good bug reports, etc.
I know you will most likely say: “But, Viktor, this is so fucking obvious”. Are you sure? Go to your boss, I don’t care who that is, and ask him or her why it is important to automate. I bet that 99.9% of the time you will get an answer like “because we will save time and money and we won’t execute the same tests over and over again, manually”. Bingo! You think you have a problem with your automation? There it is – but it is in fact a problem with someone’s expectations about automation and its relation to testing. And this is our problem; it’s our fault somebody thinks like that.
Getting rid of garbage concepts
After we get rid of terms that don’t develop testing but hold it back, it is good to try to frame what testing really is.
When I start lectures with a new group of students, I normally begin with a question like this:
Q: You’ve got several lectures already on testing, what is testing, why do we test? What’s the purpose of testing?
This is also a good starting question for an interview – it exposes really well whether a person knows what they are doing or is blindly repeating definitions.
As I mentioned already, my students are novices and can’t share a definition based on knowledge; they share what they were taught, read, or heard. So, the answers that I often hear go like this:
- “We test to find bugs”
- “We test to improve quality”
- “We test to find deviations between the spec and the actual product”
- “We test to make sure product works/We test to make sure nothing is broken”.
And many more. As you see, the view that novice testers have of the purpose of testing is partially a misunderstood ISTQB syllabus, partially urban legends, and partially an absolute lack of understanding of the relation between testing and quality.
What I clearly see as a pattern every time I ask this question could be summed up like this:
- New testers don’t know that testing doesn’t improve quality – there’s an action to be taken after we find a problem in order to have some improvement in quality, eventually. It seems obvious to us, but it takes them a while and a couple of examples to fully understand it.
- New testers, sometimes even experienced ones, are incapable of thinking about testing without a spec. Anytime I get the “We test to find deviations between the spec and the actual product” answer, I ask a counter question: “What if we don’t have one?”. The answer is silence. And this is a real-world scenario – in the real world, where real projects happen, not in the ISTQB syllabus, people don’t have specs, or specs get outdated and confusing and nobody gives a fuck about them anymore. So, is this how we adapt to that fact – by producing testers who panic when they don’t have a document?
- Novice testers think testing is a confirmatory activity.