If you follow this blog regularly, although my writing is anything but regular, or you have seen me speak somewhere, you might have heard me say something like "testing is like science" or "the science of testing". It sounds very compelling and catchy, but so far I haven't heard or seen many good explanations of why that is. I also didn't provide any, which I guess makes it my fault as well.
Anyway, over your career you will see many individuals calling themselves testing experts, testing philosophers, testing gurus or testing evangelists who are totally incapable of explaining to you, or even to themselves, how testing and science are entangled, or what kind of knowledge testing can borrow from science.
Instead, they will try to distract you with the importance of collaboration in testing, or soft skills in testing, or various individual traits like compassion, agreeableness, humility, creativity and so on. All of these are, of course, compelling and fashionable topics from the humanities, and all of them matter, but they share one common problem – they have nothing to do with the essence of testing. Anyone who claims one of these is more important than your testing practice and domain knowledge is simply trying to avoid demonstrating real testing knowledge and to steer the conversation into the broader territory of the humanities.
The main goal of this series of posts is to be a pragmatic guide to testing with a focus on exploration, so my hope is that anything you read here is derived from practice and can be used in practice. That said…
Why is testing like science?
In the first chapter of the series it was made clear that testing revolves around the timely provisioning of vital information about risks and problems that threaten the product's value and its timely completion. In order to provide this, we go on a long journey of trying to discover new information. Here lies our first similarity with science – it, too, aims to make discoveries, to establish facts and to provide information about the world around us.
Another aspect of testing that resembles science is the way we demonstrate our findings.
One of the core principles of testing is an observation by Edsger Dijkstra that I hope everyone has heard at least once:
“Program testing can be used to show the presence of bugs, but never to show their absence!”
― Edsger W. Dijkstra
This is a well-known fact; every tester starting their career is aware of it, and if you are not, you will be in big trouble.
We also know that, according to many commonly shared definitions, the purpose of testing is to provide timely, accurate, evident, empirically obtained information for the evaluation of a product's quality.
Combining the two, we face an interesting dilemma – how can we demonstrate that the product we are testing is stable and reliable, provides value to our clients and fulfils certain quality criteria, when we can only demonstrate how it doesn't work and not the other way around? It seems we must draw a conclusion about overall quality from a very limited amount of reliable data.
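To make the dilemma concrete, here is a tiny sketch in Python. The `discount` function and its checks are invented for illustration: every confirmatory check we happened to write passes, yet a bug hides in the inputs the checks never reach – exactly Dijkstra's point.

```python
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount (hypothetical example code)."""
    # Bug: percent is never validated, so percent > 100 yields a negative price.
    return price - price * percent / 100

# These confirmatory checks all pass – they show the presence of no bugs...
assert discount(100, 0) == 100
assert discount(100, 50) == 50
assert discount(200, 10) == 180

# ...but they say nothing about the inputs we never tried:
print(discount(100, 150))  # → -50.0, a negative price slips through
```

Passing tests told us nothing about the untested input space; one well-chosen extra input refutes the "it works" claim immediately.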
If you think that sucks, well, scientists do this every time they want to make a discovery – they take a tiny bit of knowledge, normally already proven through science, they apply it to a hypothesis, and they try to confirm or reject it.
The power of experimentation
Testing is a “yes/no” question game just like science.
Imagine a dark room with an object in it that you must guess. To do so, you can only ask questions that produce yes or no answers, like "is it large?", "is it a cat?" or "is it square?". You will need to ask a lot of questions to guess it, but more importantly, if you are clever enough, you will follow up on your questions, building on the information you get from them, in order to narrow down the possibilities in front of you.
I know what you are going to say here – "that's not fair, I could ask much more complex questions, much more sophisticated, direct and goal-oriented, if I didn't have to follow that dumb yes/no rule". And that's true, but that's not how it works. It doesn't work in science, and it doesn't work in testing.
Here are a few examples. Imagine you are a scientific researcher and you want to determine whether the acceleration of a free-falling object near the Earth's surface is a constant or a variable that changes under certain conditions (for the sake of the example, we ignore the fact that we already know). You can't simply go and ask gravity – "hey gravity, is your acceleration constant or variable?". To be honest, this is a great question, but it will remain unanswered. You will have to construct an experiment to answer it.
Similarly, whenever we test a credit card form and want to discover all the issues that might significantly impact our clients, we can't simply go and ask the form – "Hey form, what significant problems are there in you? Can you demonstrate them?". This sounds like a great question – in fact, it's exactly what we are interested in – but we won't get any answer, because it simply doesn't work this way.
Instead, we must use the knowledge we have about credit cards and the various testing oracles available to us – artefacts, people, common sense, standards, regulations – and use them as raw material to create experiments in which we can demonstrate correct or incorrect behaviour of the system.
The largest and most significant similarity between testing and science is that, whatever finding we think we've made about the subject of our interest, in order to justify our position we have to prove it through demonstration.
And this is true: it's how we can reliably tell science from pseudo-science, and likewise testing from pseudo-testing.
For instance, there's a huge group of people who claim the Earth is flat, yet they have never provided sufficient scientific proof of it. They have no proof; they can only speculate.
Similarly, in testing we can observe many such "societies" that speculate with claims they miserably fail to prove, such as:
- The Test case society
- The “Automate all” society
- The AI replacing testing society
- The Formal testing society
- The mature models and standards society
- The soft-skills bullshit society, and so on
You can always spot them by a few telltale signs:
- They always make bold claims like – testing is automatable or AI-able
- They never engage in a dialogue about it; they run from it, or, if accidentally caught in one, they provide arguments so poor you feel ashamed even to comment.
- They avoid that dialogue because they consider their claims self-evident, and they are not. They never are.
Normally, the only way they make a point is by falsifying or rejecting reality, in the same manner the flat-earth society falsifies reality by rejecting basic physical laws.
The takeaway from this: if we want to protect our credibility as testers and as reliable sources of information, we should be able to demonstrate whatever we claim. That's what bug reports are – we claim there's a problem, and we demonstrate it.
How does testing work?
Here’s a small schema that explains how testing works:
Let's take a look at a hypothetical situation – we have a field whose format we need to check very strictly, like an SSN or a VAT number.
Here’s how this model would work in a typical test:
- This field has very specific format validation, normally enforced by a regex or in some other way. Every symbol matters, as sometimes there's a check digit at the end. Therefore, adding extra characters should trigger a format error. (I made an observation)
- What will happen if the extra symbol is a space character? We often add these involuntarily or by copy-pasting. (Think of interesting questions)
- We shouldn't punish this; as mentioned, it's unintentional, and the best we can do is just trim the white space. Developers very often forget to handle this situation, which will trigger the field error. (Create a hypothesis)
- I can test and demonstrate this by simply adding white space at the beginning, at the end, or anywhere within the string. (Test the hypothesis)
- Based on the result, if it's positive and the problem exists, I can ask whether other fields have the same problem, because they use the same text-processing method. (Develop a theory) And so on.
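The steps above can be sketched as code. Everything here is hypothetical – the nine-digit format, the regex and the two validators are invented for illustration – but it shows the experiment the hypothesis calls for: a naive validator rejects a pasted value with a stray space, while a trimming one accepts it.

```python
import re

# Hypothetical format: exactly 9 digits, no separators (a stand-in for a
# strictly validated field such as a VAT-like number).
FORMAT = re.compile(r"^\d{9}$")

def validate_naive(value: str) -> bool:
    # Forgets to trim – a stray space triggers a format error.
    return bool(FORMAT.match(value))

def validate_trimming(value: str) -> bool:
    # Hypothesis: trimming first forgives accidental copy-paste whitespace.
    return bool(FORMAT.match(value.strip()))

pasted = " 123456789 "            # the kind of value users paste in
print(validate_naive(pasted))     # False – the hypothesised problem, demonstrated
print(validate_trimming(pasted))  # True – the suggested handling
```

Running the experiment either demonstrates the problem (naive validation fails on pasted input) or refutes the hypothesis for this field – and, per the last step, the same probe can then be aimed at every other field that shares the text-processing code.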
You see, this is a very trivial and simplistic test, but it serves well to demonstrate how testing works. We all do testing in this manner; it might not always be this explicit – I made it explicit deliberately to make a clear point – and we might not use the exact same steps, but the underlying process of discovering information is pretty much this one.
Now the big discovery…
I am a fucking liar…
Or at least partially, because this model is not just the way testing works; it is, in fact, a model of the scientific method. This is not simply how we do testing – it is also how scientific experiments are performed.
What we can conclude from this is that we hold a pretty important and powerful weapon for performing testing, one that has human intellect and critical thinking at its core. Knowing this, if anyone tries to make you believe that following a test procedure, case, scenario or automated script is good enough testing, you should seriously question their knowledge of the testing domain.
The trap – confirmation vs. refutation
It is necessary to be aware that our scientific-like way of approaching testing problems can lead us into a common trap, one that is very significant for our profession: the trap of confirmation, or confirmatory testing.
The fact that we have testable predictions will inevitably make us look for an experiment to prove them; the question is – is that good enough?
The problem is that we can willingly or unwillingly start looking only for experiments that prove our point and never for ones that try to refute it. This can trigger natural biases such as confirmation bias (a top favourite for testers who want to look psychologically savvy at conferences), anchoring, and tunnel vision – which is practically these taken to the extreme: looking only for information that supports our opinion and ignoring anything incoherent with it.
It's important to realise that in testing, the confirmatory game can lead to disasters. Ask a novice tester what the purpose of testing is and you will get an answer like: "To prove the product works (as expected)". That's exactly what we are trying to avoid. If we only think about how it should work, we might blind ourselves to all the ways it might not work, isn't supposed to work, or that might call its capability into question.
More importantly – even if we discover all 99 ways it could work, we can't say for sure that it really works if there is, in fact, that 1 case where it won't.
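A well-known arithmetic illustration of this point (not from the post itself, but a classic) is Euler's polynomial n² + n + 41: it produces a prime for every n from 0 to 39 – forty confirmations in a row – yet the claim "it always yields a prime" is false, because a single refuting case exists at n = 40.

```python
def is_prime(k: int) -> bool:
    """Naive trial-division primality check, fine for small numbers."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

# Forty confirmations: n^2 + n + 41 is prime for every n in 0..39.
confirmations = [is_prime(n * n + n + 41) for n in range(40)]
print(all(confirmations))           # True – the claim looks bullet-proof

# One refutation is enough to kill it:
print(is_prime(40 * 40 + 40 + 41))  # False: 1681 == 41 * 41
```

Forty green results proved nothing; the single red one settled the question – which is exactly why a tester's experiments must try to refute, not just confirm.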
“Science must begin with myths, and with the criticism of myths”
― Karl Popper
The most important asset of testing as a scientific activity is the readiness to reject the validity of rules, to question the status quo and to challenge constraints.
If our developers and product owners stick to the hypothesis that the software is stable, "bug free" and "ready to ship", it's our job to put our best effort into rejecting that theory by providing a meaningful counter-example.
This is also related to the theory of falsifiability of Karl Popper (quoted above), which goes like this:
I shall require that [the] logical form [of the theory] shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical scientific system to be refuted by experience.
Meaning that every claim we make as part of our discovery is only valid and scientific if it can be refuted by an event that can be experienced.
I hope that "unveiling" the domain of science and its applications in testing will help you take your testing more seriously and more responsibly, and will help you understand the depth and complexity of a task that some have made look simplistic and trivial.
I hope you enjoyed reading it. See you next time! 😉