After the unexpected success of the series “Hindsight lessons about automation”, I think I need to spend more time in the area where my expertise and passion lie, and that is exploratory testing, or, as I will refer to it further in this series, simply exploration.
Why do so? I think I sent a few mixed signals to certain individuals: I had to reject a couple of automation consulting requests, because people got the wrong impression that I am an automation expert. Although I am deeply honoured by such a qualification, I can’t honestly pretend I am one. I am a self-taught hobbyist who simply has the balls to have an opinion and a blog to share it; as for my skill level, I consider my knowledge the desired minimum. Anyway, I don’t support the tool-centric view of automation, and I think most problems people have with automation come from not understanding testing well enough. That said, “Hindsight lessons about automation” sidetracked from my primary goal of building the missing link between testing and automation too far into the technical side of automation. Its main purpose was to demonstrate what is worth automating in order to be helpful to testing, so there’s a need for another series that will demonstrate what is important from the testing perspective and what automators need to know about the goals and essence of expert testing.
I decided this series will be called “Hindsight lessons about exploration”. It will also be useful for all those who are interested in the exploration part of testing and are tired of being asked to follow strict scripts when no actual benefit from doing so is visible.
Why exploration and not exploratory testing?
I have been teaching exploratory testing for almost two years, and I keep seeing the same pattern: people treat it as some different, hipster-ish, trendy way of doing “manual testing”, and this is a problem. I can’t work with that. Appending any <adjective> to the word “testing” makes it look like something totally different, something new, something special, something unlike “regular” testing. And this is wrong. Testing is testing. It makes sense to use adjectives when we describe different types of parafunctional testing, e.g. performance testing, security testing, usability testing; these are testing activities with a specific focus. When we speak about exploration, though, we are speaking about a core asset of testing, something testing cannot do without. In other words, there is no testing without exploration, and using a term like “exploratory testing” is like saying “meaty steak”.
Exploration is an integral part of testing, yet it is very often omitted, or people pretend we can replace it or work around it by following very strict scripts or formal documentation. The aim of this series is to prove that wrong.
Getting rid of garbage verbiage
The software testing craft is full of malformed terminology and half-truths that we take for granted and, more importantly, use as a foundation to develop our craft.
Normally this is the part where people start arguing:
- The language we use is not important; this is nit-picking
- You are not right; you are trying to redefine terms and invent language
- I don’t care what you think; it simply doesn’t align with my beliefs, so I will just repeat that you are wrong
The problem with malformed terminology is that it does not refer to the essence of the testing craft but normally to only one aspect of it, most often test execution. I will focus only on the two big ones that really destroy our understanding of testing.
- There’s no “manual testing”. Bearing in mind that human intellect is the essence of testing, a term like “manual testing” sounds ridiculous at best.
That doesn’t mean there’s nothing manual in testing; we perform tons of manual tasks. But that doesn’t mean testing itself is manual, and claiming so is a terrible, dangerous oversimplification.
Some folks like to use the term “manual” to distinguish human-performed tasks from automated ones, but again, if you do so, please don’t.
To new testers and non-testers, the general definition of “manual testing” is some sort of low-cost, incompetent human activity of following scripts, a legacy from a long time ago. If you don’t trust me, there are a million examples all around us in the testing community: just listen to people describing themselves as manual or automation testers, or discussing how to move from the first to the second to be more efficient. If you are interested in my position on the question of manual testing, take a look here.
- Analogously, there’s no “test automation”. If we want to be precise, we don’t automate testing; we automate tasks, certain tasks that are part of testing. You can also find the terms “automated checking” or “automation in testing”, and all of these make sense, but we do not automate the full process and performance of testing.
“Automated testing” is another term that screws everyone up, giving the false impression that automation is some superior form of testing and that any tester should be an automator, otherwise they suck. Wrong: automation is simply a different kind of testing activity, and it is as important as exploration, as creating good test scripts, good bug reports, and so on.
I know you will most likely say, “But, Viktor, this is so fucking obvious”. Are you sure? Go to your boss, I don’t care who that is, and ask him or her why it is important to automate. I bet that 99.9% of the time you will get an answer like “because we will save time and money and we won’t execute the same tests over and over again, manually”. Bingo! You think you have a problem with your automation? There it is, but it is in fact a problem with someone’s expectations about automation and its relation to testing. And this is our problem; it’s our fault somebody thinks like that.
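To make the task-versus-testing distinction concrete, here is a minimal sketch of what an automated check actually is. The example is entirely hypothetical (plain Python with a toy `add_to_cart` function; nothing here comes from the original post): the check executes a comparison that a human encoded in advance, and that comparison is all it can ever report on.

```python
# A toy function standing in for real product code.
def add_to_cart(cart: list, item: str, price: float) -> float:
    """Add an item to the cart and return the new total."""
    cart.append((item, price))
    return sum(price for _, price in cart)

# A minimal automated check: it verifies exactly one
# predefined expectation about the total, nothing more.
def test_cart_total_matches_sum_of_prices():
    cart = []
    assert add_to_cart(cart, "book", 10.0) == 10.0
    assert add_to_cart(cart, "pen", 2.5) == 12.5
```

The check can only confirm or refute the single expectation its author thought of beforehand; deciding which expectations are worth encoding, and noticing everything the check stays silent about, is the part of testing that is not automated.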
Getting rid of garbage concepts
After we get rid of terms that don’t develop testing but hold it back, it is good to try to frame what testing really is.
When I start lectures with a new group of students, I normally begin with a question like this:
Q: You’ve already had several lectures on testing. What is testing? Why do we test? What’s the purpose of testing?
This is also a good starting question for an interview; it exposes really well whether a person knows what they are doing or is blindly repeating definitions.
As I mentioned already, my students are novices, so they can’t share a definition based on knowledge; they share what they were taught, read or heard. The answers I often hear go like this:
- “We test to find bugs”
- “We test to improve quality”
- “We test to find deviations between the spec and the actual product”
- “We test to make sure product works/We test to make sure nothing is broken”.
And many more. As you see, the view that novice testers have of the purpose of testing is partially misunderstood ISTQB syllabus, partially urban legend, and partially an absolute lack of understanding of the relation between testing and quality.
What I clearly see as a pattern every time I ask this question could be summed up like this:
- New testers don’t know that testing doesn’t improve quality. After we find a problem, an action still has to be taken for quality to improve, eventually. It seems obvious to us, but it takes them a while and a couple of examples to fully understand it.
- New testers, and sometimes even experienced ones, are incapable of thinking about testing without a spec. Anytime I get the “We test to find deviations between the spec and the actual product” answer, I ask a counter-question: “What if we don’t have one?”. The answer is silence. And this is a real-world scenario: in the real world, where real projects happen, not in the ISTQB syllabus, people don’t have specs, or specs get outdated and confusing and nobody gives a fuck about them anymore. So what do we do to adapt to that fact? Produce testers who panic when they don’t have a document?
- Novice testers think testing is a confirmatory activity: we test to prove it works, we test to prove it’s not broken, we test to make sure all “test cases pass”, and so on.
In other words, we train testers to test software like it’s the 80s or the 90s. Not that I was testing software back then, but I have read books about it; if I am wrong, I am ready to take criticism.
The problem with all of these is that they are basic points of understanding the testing craft. If we teach them wrong in the beginning, or let people hold them unnoticed, we are practically building a house on an unstable foundation. And it takes a lot to unlearn some of these things.
What is testing, instead?
This is my definition, and it is most likely influenced by definitions from Cem Kaner, James Bach, Michael Bolton, Jerry Weinberg and other experts who have meaningful things to say about testing; here is what I got from them.
To test means to continually learn new facts and information about the software product by exploring it, experimenting with it, creating mental models of it and questioning it. We do this with one specific goal: providing information about problems that represent risks, existing problems that threaten quality or the timely completion and delivery of the product.
In order to do so, we gain information from different sources:
- Artifacts – documents, specs, designs, commercial materials
- Exploration – looking at the product itself, “questioning” it, comparing the answers with our assumptions about it.
- People – significant individuals who hold important information about the product, the quality criteria or the client’s expectations.
- Other products – community standards, competitive products
- Domain experience and knowledge
- Industry experience and knowledge
- Common sense – I know this might sound a little ridiculous, but we use common-sense knowledge to make decisions about software more often than we suspect.
- And many more…
If you want to see the full picture of what testing is, you can take a look at this really good list of sources and definitions that Michael Bolton put together here.
What does it have to do with exploration?
It has everything to do with exploration. There is a very important accent here that we should never forget: testing is an evolving activity. To test means to learn, and in order to learn, you need to explore; once you stop learning new information, you start moving in circles.
A few times I have happened to say that we testers are developers, and I always get angry comments or allergy-like reactions. But yes, we are; we just don’t develop code. We develop knowledge, we develop understanding, we develop our skills and proficiency in performing testing at a higher expert level, in order to uncover new aspects of information about the software product. And this is not a straightforward process; it has a lot of specifics, it takes time and practice to get good at, and the catch is that it is totally different for every project.
In other words, performing good exploratory testing, or just testing, because all testing is exploratory, is hard, very hard. It takes time, patience, knowledge and, most importantly, practice. And this is something most certifications, academies and tool vendors don’t tell you, because it’s not very easy to sell a certificate by saying, “Well, you see, testing is hard; it takes time, knowledge and practice to do it well”. It is much easier to say, “Take our 3-day course and you will be a testing guru!” 😀
This was the introductory post about the lessons of exploration that I learned by learning and teaching testing. It might be a bit too generic for some readers, but I aim to use these posts for my students as well, so I intentionally take some time to cover the basics. In future posts, I will focus on the aspects of exploration that I think matter.
Thanks for reading this and good luck! 😉