Software testing is not … easy to explain. Last part.

Software testing is not…

Software testing is not … part 2.

Software testing is not … part 3.

Software testing is not… finding bugs part 4

Software testing is not … playing. Part 5

Here it is: I have finally come to the final chapter of the series on what software testing is not. The topics I chose to cover in it are:

Software testing is not … a set of predefined actions.

I would bet – and I am pretty sure I would win – that if you ask 100 people related to testing or involved in software development what software testing is about, you will mostly get answers like: finding bugs, or following test scripts, steps and scenarios in order to discover bugs. So, why is that? What happened?

First, there is the enormous effort of companies and testing certification academies to “formalize” testing and present it as something easy to do, easy to learn and easy to transfer as knowledge, so they can easily sell it or replace one specialist with another.

And it’s totally understandable why people fall for that lie and why they prefer taking the shortcut of the formalized version of testing instead of the more analytical and more complex one. Imagine the following situation, just like Morpheus in The Matrix with the red and the blue pill in his hands. You are a new tester in our community and you have this choice:

  • You take the red pill: you go and get certified, you write test scripts in a very document- or requirement-driven way, you follow the development methodology guidelines, you follow the “best practices”, you act like tools are a silver bullet that can solve all problems and ease the testing effort so much that you will practically have nothing to do – just hit run and the magic will happen.
    In general, you choose to believe that testing is easy once it is documented well: if you have proper functional documentation and design documentation, and if your test scripts are written in enough detail, everyone can read and repeat them. Or…
  • You take the blue pill: you learn that software testing is a complex cognitive activity whose roots lie not only in technology and programming, but much more in epistemology, psychology, sociology, logic and philosophy in general. You learn that certification only means you demonstrated knowledge against someone’s understanding of software testing, and that sometimes that someone’s understanding might have nothing to do with reality. You learn that your path as a software tester will be long and rough, and that you will have to put in a lot of effort to stay relevant and make a difference. You will have to learn that no practice, methodology or tool is the ultimate solution to a problem, and that every time you approach a problem you will have to investigate it and build your solution on the fly, rather than follow someone’s “instructions”.
    You will learn that automation is in fact tool-assisted testing, that its purpose is not to replace human testing but to modify it, to enable it, to help it achieve more. You will learn that tools don’t do miracles, they only do what they are instructed to do, and that automated checking is in fact one more responsibility in your testing practice: you will have to spend additional time taking care of it, extending it, debugging it, and keeping it up to date and accurate.
    You will learn that testing is not about requirements and documentation, and that a tester’s purpose is not only to find bugs, and not even to prevent issues from happening, but to assess risks and provide information – accurate information about the product or project and the risks that might make the difference between a successful product and a total failure.
    And last, but not least, you will learn that testing is about learning: it is organic, constantly evolving, analytical and experimental, and it is extremely hard to do well. Not only that, it is also an activity that carries a lot of responsibility. It will probably take you decades to learn to do it well, to explain it well and to teach others to do it well.

So, given these two options, as a novice tester, which one will you choose?

[Image: Morpheus with the red pill and the blue pill. Source: http://counterinception.com/sites/default/files/pictures/MatrixBluePillRedPill.jpg]

… easy to explain

Unfortunately, in the above situation many novice testers will take the easy way, the shortcut. And we shouldn’t blame them for that. There is one major reason for this: testers don’t talk enough about testing. We don’t explain what testing is and what it is not, and one of the reasons for that is that explaining testing is not easy.

Every day in our careers we are told by other people what testing is and how we should perform it. But think about it: how many times did you take the responsibility to say, “No, this is wrong. Let me explain what testing is and why I do it that way”? Conforming to other people’s false opinions about testing is harmful and toxic for the testing craft. We should take the care and responsibility to educate our team members, and everyone else involved in software development, about what testing is and how we do it in order to achieve high quality.

And by this I don’t mean confronting their point of view. We should approach this with a lot of understanding, because the effort to downgrade testing, to present it as a formalized activity and to strip away its organic and analytical nature, was huge. That’s why we should approach such opinions in an educational way, just like a teacher. As in school, when a student has a wrong opinion about some problem, if we just say “no, you are wrong, you should listen to me, because I am the one who knows”, we won’t achieve anything; in fact, the chance that the student rebels against our position gets even higher. Instead we should focus on providing the other perspective: explain why we think it is wrong, provide information, and offer our personal experience in support of our claim. We should drive the conversation, or the argument, into the educational domain, where both sides have the opportunity to test their viewpoints and rethink them in order to gain knowledge.

And this is not easy. It’s not easy to speak about testing in a structured way and to draw logical conclusions about your positions. That’s why we prefer to say what testing is not, and we are comfortable doing it; that was the reason I started the series with “What software testing is not…”. But we should take that effort and move to the other side: start telling the story of what software testing is, what its nature is, how it benefits the product, and so on. And that’s what I intend to do.

So, as the last words of this series – I’ve said it about a hundred times, but I will say it again: testers, remain active. Blog, participate in conferences, discussions, forums and webinars, write and read blogs, comment, make a fucking difference; no one will do that for you. 🙂

I hope you enjoyed the series. If you liked this post, I would love to read your comments, and I will appreciate your shares and retweets.

Also, I have a challenge for you. Do you think I missed something? (I surely did.) Were you ever pissed off by a ridiculous claim about software testing that you don’t see in this list? Well, tell me and extend my list in the comments below. I would greatly appreciate your input.

Thanks for reading, and I hope I will see you again in the second part of the series: what software testing is. 🙂 Good luck.

Software testing is not … playing. Part 5

Software testing is not…

Software testing is not … part 2.

Software testing is not … part 3.

Software testing is not… finding bugs part 4.

Software testing is not … meant to add value

I know this is a controversial claim and many testers might object to it. So, let me explain.

This is my position on a silly claim that tried to prove that human testing is not useful because it doesn’t add value to the product the way software development does, and that therefore we don’t need it and can simply replace it with machine-driven checks.

Now, I want to make an important distinction here. Testing is valuable as part of the development process, but that doesn’t mean our testing performance and expertise add monetary value to the product itself. On the contrary, I think our job as investigators and analysts – providing information, assessing risk and quality – is actually to help the product keep its value and to help the team prevent devaluation.

In other words, testing is not meant to add monetary value from which our product or company can benefit. It is meant to provide information and insight about the product, about the processes, and about the risks that can harm or compromise its value and/or our credibility, or our client’s credibility, as a provider of a software solution. And let’s not forget: software is a solution to a problem our clients have. We as testers are not the solution itself, but we are involved in the solution insofar as we have to find out whether the solution we provide solves the right problem, whether it solves it at all, and whether it solves it in the most optimal way we can provide as a team, given the time, resources and constraints that we have. And this is not an easy task.

Therefore, saying that testing isn’t valuable because it doesn’t add value is like saying that the brakes in a car are not useful because they don’t add horsepower. A person claiming this must also have a very limited or shallow view of the software testing profession and its expertise.

… playing with the product

Many of the things I could add here overlap with the ones I mentioned in the first part of this series, so instead I want to add a different point of view.

One of the reasons people see testing as an easy activity that everyone can do is that it looks easy; in fact, to some non-testers it looks like playing with the product. Tell me, how many times have you heard a developer say: “Just play with it a little bit, to see if it works” or “just check if it works”?

Pretty recently the Ministry of Testing made a list of the icky words in software testing, and “checking” was one of them. Michael Bolton stated his disagreement with the presence of “checking” in the list, then clarified that he meant checking in the RST context.

Yet, I think words like “checking” do belong on the icky list, and the reason they belong there is the same reason why Michael Bolton and James Bach did so much work on their checking vs. testing series: we, as professional testers, don’t like someone downgrading our effort and expertise to simply checking. In other words, testing is not limited to checking simple facts that can be formalized to the pattern:

if (condition) check that result == something.
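
To make that pattern concrete, here is a minimal sketch in Python – the function, the input and the expected value are hypothetical, purely for illustration – of what such a check looks like once it is mechanized:

    def calculate_total(items):  # hypothetical function under test
        return sum(items)

    def check_invoice_total():
        # condition -> compare the result to a predefined expected value
        result = calculate_total([10, 20, 30])
        assert result == 60, f"expected 60, got {result}"

    if __name__ == "__main__":
        check_invoice_total()
        print("check passed")  # says nothing about anything we didn't ask

Useful, certainly, but the moment the interesting question is not encoded in that one comparison, the check stays silent about it.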

The same thing applies to the term “playing”. We might look like we are playing with the product, but we don’t do it to waste time or to entertain ourselves. Any time you see a tester or a QA specialist “playing” with the product, you should be aware that there is far more analytical thinking, planning and structured action behind what looks to you like “playing”.

In fact, every skilled approach to an activity looks like playing to a viewer unfamiliar with the expert’s area. Have you ever seen a master chef preparing a meal, or a university professor explaining a subject? They always look like they are making only a minor effort, almost like playing. Yet what they do is the result of years of experience, improvement, planned actions and, most importantly, errors. In fact, one way to make sure you are actually becoming skillful in an area – if I may rephrase Einstein’s words – is to make your field of expertise look like you are playing.

That’s it for this part. Any comments and shares are welcome, thanks 🙂

Software testing is not… finding bugs part 4.

Software testing is not…

Software testing is not … part 2.

Software testing is not … part 3.

Software testing is not … finding bugs

This is one of the first things I was taught about software testing – that our job is to find bugs or defects. It’s true, but it took me some time, a couple of books and a lot of hours reading different resources to figure out that it’s not just that. To claim the above is like claiming that software development is just about writing code – and it’s not.

First, it is worth mentioning that we don’t automagically find bugs just like that. We have to learn the system and build our oracles based on our knowledge of it, previous experience, knowledge of the domain, knowledge of the technology and so on. So far, no bug finding involved.

How about documenting our process – modelling the system, mind mapping it, writing checklists or designing test cases (in case that works in our context, of course)? Is that finding bugs? No.

In other words, if we say that testing is about finding defects, we have probably never had a more profound thought about what testing is and what its role in the big picture is.

Let’s talk about defects. What is a defect? To us, as testers, it is normally an error in the code: maybe a typo, maybe a logical misunderstanding, but it might also be a bad interpretation of the requirements, or missing requirements; it might be “taking the easy short way” out of laziness; it might be bad design or an unexpected dependency. It might be a lot of things. What is it for our management? It is a way to lose money, a way to lose credibility, a way to lose clients, or a combination of all of these – or even a way to go out of business. What does a defect mean to our clients? It means they can’t use the service or product they paid for, or they risk losing their own clients and money due to a mistake that we made; sometimes a defect on our side might even cost someone’s life. So, here you see three different perspectives on what a defect is, and I bet you can think of more.

So, what’s the point here? Testing is not about finding bugs; it’s about examining the risks that defects in our software might introduce: who is impacted by these risks and how serious the impact would be. I believe this is a better way to determine what our job is, not just in our day-to-day tasks, but in the context of our business – how our job is valuable for what we are producing as an end product. In other words, it helps us think of our work holistically.

… meant to add value

This reminds me of a silly statement somewhere on the internet: that, you see, testing is not necessary or important, because it doesn’t add value to the product. Well, that’s almost as stupid as saying that brakes are not necessary in a car, because they don’t add horsepower.

From my point of view, testing was never meant to add value to the product. In fact, one of the main goals of testing is to protect the product and project from devaluation – in other words, from losing quality and our clients’ trust due to defects that we might have missed.

So, to frame this in yet another way: testing won’t help you make a valuable product – that is something your ideas and strategy should take care of – but testing will help you not to fail miserably due to low quality and bugs that might have a serious impact on you or your clients.

That’s it for this week. Thanks for reading, and if you found it interesting, please share it on social media and/or share your opinion in the comments! 🙂

Software testing is not … part 3.

Links to the previous two parts:

Software testing is not…

Software testing is not … part 2.

Software testing is not an activity that can be performed by replaceable robots.

I can say for sure that this is one of the most incorrect statements about testing, and also one of the most toxic ones. Of course, nobody says it in exactly these words – nobody is that dumb – but it is based on false assumptions and it produces a lot of other false assumptions, which become popular for the sole reason that someone trusts them blindly, without asking questions. So, I split these assumptions into two groups:

  1. Everyone can test, testing is easy. Your manager can test, the dev can test, the designer can test, the client can test, your manager’s dog can test, etc., etc. … But they can’t do it, because they are busy doing their important stuff, so you seem to be the guy/girl who has to do it.
  2. Testing is easy, because a well-written test is described in a finite number of steps or instructions, and if these instructions are followed exactly, you can test the product without any impediments.

Now, what I want you to do is write these down on a piece of paper, take it outside and burn it, while chanting some evil spell. 😀

Believe it or not, the above statements somehow made it into our craft and have even become institutionalized, meaning a lot of people take them for granted. This leads to a whole bunch of further false assumptions that derive from them, such as:

  • Every step in testing needs to be documented in a script or a test case.
  • If you don’t have test cases, you are doing testing wrong.
  • If you don’t have test cases, you can’t prove that you did any job on the project.
  • Testing is expensive.
  • Testing is unnecessary.
  • Testing can be performed automatically, so we reduce cost.

It’s easy to understand why these myths made it into our craft. One reason is that senior management doesn’t really care about testing; they care about budgets and efficiency. Another reason is that we fail to tell the true story behind testing – how it is efficient and beneficial for the product or project – and if we fail to do that, it’s our fault. And a third reason: there are way too many vendors of tools and of software testing courses and academies that have an interest in selling you “snake oil” – the ultimate solution that will solve all your testing problems, whether it is a tool or a methodology.

So why can’t testing be performed by replaceable robots – and here I mean both human specialists acting like robots and actual automation? One good reason is the fact that testing isn’t based on instructions; it’s based on interaction: simultaneous information gathering, evaluation and action according to that information. Testing is experimentation, investigation and decomposition of statements that others (development, management, you name it) believe to be true (paraphrasing James Bach). Using all this, we have to provide relevant information about the risks and the product itself. So, how does all that align with the “concept” of following instructions?

And there’s one more general reason why testing cannot be represented as a limited set of connected steps or instructions, which is also what makes it impossible to fully automate testing by converting those instructions into machine instructions. The reason is the way knowledge works – more specifically, the impossibility of converting tacit knowledge into explicit, or even explicable, knowledge (Collins, “Tacit and Explicit Knowledge”).

This can be explained very simply: we have all done some cooking, right? Well, we all know cooking recipes are sort of algorithmic, step-by-step guides on how to make dish X, and they might be written by a really good expert – a master chef, for example. Yet, following that advice, even down to the smallest detail, you might be able to make a decent dish, but not one made by an expert. You wouldn’t notice the small tweaks the master chef makes depending on the environment, the produce he or she uses, or the freshness of the spices, all based on a “gut feeling”. Why is this so? We call it experience or routine, but what it really is, is the huge underwater part of the iceberg of knowledge, called tacit knowledge.

The same thing applies to any area, testing included. “We can know more than we can tell” (Polanyi); therefore in testing we do much more than we can actually put into words, scenarios, cases, steps, scripts or whatever you like to call it. Testing is an intellectual process with an organic nature; it is normal for it to be hard to articulate and for us to have trouble describing or documenting it. That’s how knowledge and science work.

The reason why this is not a shared belief in mainstream testing is the fact that many consultants and certification programs like to sell a short and exciting story about “how you can become a great tester by following this simple set of rules”. And it is “sold” to us under the cover of the words “best practices”, meaning “I have nothing to prove to you, this is what everyone is using”. Well, newsflash: this is the testing community – if you make a statement here, you have to be ready to defend your position, and nobody cares how much of a guru or an expert you are.

That’s it for this part; I would love to read your comments and reactions on social media. Thanks for reading. 🙂

Software testing is not … part 2.

Software testing is not … part 1.

I am moving on to the next couple of branches of the mind map that I posted in the first post of the “Software testing is not…” series. You can find the mind map by following the link above.

Software testing is not … quality assurance.

I wrote about this in the “Quality guardian” part of Outdated testing concepts, so I will really try not to repeat myself, but to build on the statements made there.

I believe it is more correct to say that quality assurance is not limited to the testing role alone; it has to be a philosophy and strategy shared by the whole development team, regardless of role. The opposition I want to draw clearly here is between the old concept of the quality guardian, who holds the right of veto over the release, and a modern approach of team responsibility for quality. It is obvious that the former model doesn’t work, even though many organisations still support the myth.

I’d rather think of quality as a collective responsibility, and I want to emphasize the term collective responsibility, not shared responsibility. Why do I want to make that important distinction? If we share the responsibility, we may, intentionally or not, give a bigger part of it to a single member or make another member less responsible. Collective responsibility requires a collection of all team members’ personal responsibilities on the project. This way, if one person fails to be quality driven and delivers crappy work, their part of the collection will be missing, and therefore not all the criteria for collective quality will be met. It also lets us determine quickly where things went wrong.

To summarize, we are really not in a position to make god-like decisions on whether or not the product should go live, whether or not a bug is critical enough to hold a release, and so on. But that isn’t necessarily a bad thing, because we can do so much more. Testing is about being aware of the product’s risks and providing correct information about them. Therefore, we can be quality advocates, but we are not actually assuring quality by ourselves. I’d like to present a quote from James Bach’s paper “The Omega Tester”:
“You are part of the process of making a good product, but you are not the one who controls quality. The chief executive of the project is the only one who can claim to have that power (and he doesn’t really control quality either, but don’t tell him that).”

And if you are interested in this topic, Richard Bradshaw made an awesome video in his “Whiteboard Testing” series, where he explains his viewpoint on the same issue. You can watch it here: I’m QA. It’s in QA. It’s being QA’ed. You sure about that?

Software testing is not … breaking the product.

I will never get tired of repeating what Michael Bolton has to say on this same topic: “I didn’t break the software. It was already broken when I got it.”

It is funny – I know most people say this ironically – but that sort of talk is one of those “toxic” phrases that do us no good and harm our craft, along with stuff like “playing with the product”, “just checking if it works” and “finding bugs”, but I will cover those in the next branches.

We are not actually breaking the product in the sense of causing damage that was never there or changing the consistency of the logic of the SUT. Our job is to expose inconsistencies in the logic, misbehaviour, false assumptions in the logic or, more generally, problems that occur while the SUT is performing its job. So, as you see, there is no demolition squad, no bombing, no gunfire – we are not that bad. What our job really is, is investigation: we have to investigate such behaviour, document the process that causes it, determine the reasons why we believe it is actually incorrect, and raise the awareness of the people who are interested in fixing it or who will be affected if it continues to exist.

That’s it for these two branches of the mind map. Please feel free to add your opinion in the comments and share the post with your followers if you liked it. Thanks for reading. 🙂

Outdated testing concepts #4

[Image: “certified” stamp in red ink. Source: http://hostagencyreviews.com/wp-content/uploads/2013/03/travel-agent-certification.jpg]

Link to Outdated testing concept #1

Link to Outdated testing concept #2

Link to Outdated testing concept #3

I had been having an internal struggle over certification for a long time. At first, I thought “it will be cool to be certified, it will probably draw some attention to me”, keeping in mind that I was green and didn’t have enough routine. Then I thought that certification was a must, that I should have it in my skill set at any cost, that it would show people I had spent some time investing in my own development. Now, I believe it is up to me to be recognized as a skillful tester, and that has nothing to do with being certified. This week’s outdated testing concept is dedicated to certificates and other cool scrap paper.

Outdated testing concept #4: Certified means qualified.

I have been thinking about this article for quite some time; I even failed to post it last week, as I had doubts about talking about this at all. I was struggling with whether or not I am qualified enough to give my opinion on certification. And then a miracle happened: an argument in the local QA Facebook group I participate in made my belief solid that there is something fishy about certification after all.

What was certification supposed to mean?

I will try to speak about testing certification only, but I think this applies to all certificates. People have many different expectations of software testing certification, unfortunately most of them wrong:

  • Certification is supposed to mean you took some formal software testing education.
    This is a broad topic and I will only scratch the surface with what I have to say, but software testing cannot be learned from notebooks or certification courses. It is a practical activity and can be learned, taught and trained only through practice. You can learn the principles of testing, you can learn the testing terminology or the testing glossary, you can learn common techniques used in testing, but you cannot learn how to perform testing at an expert level that way.
    If you want to go deeper into the topic: education isn’t what it was supposed to be. We already know that great minds are not “laboratory grown”, i.e. they cannot be cultivated in schools, colleges and universities. I am not saying schools and universities are useless; I’m saying they are not enough. Formal education doesn’t satisfy our need to follow our natural interests, our professional choices, etc. I have referred many times to Sir Ken Robinson’s work; in one of his talks, “Ken Robinson: How to escape education’s death valley”, he talks about how modern education applies the so-called “fast food” approach to students. This is commonly observed in testing certification as well: the “one size fits all” concept that tries to make you believe you can learn something easily and by following simple steps, which is far from true. Another important point Sir Ken Robinson makes is that, above all, intelligence is diverse, and we should take this diversity into account when educating, in software testing or outside of it.
  • Certification is supposed to standardize testing.
    This is another fallacy that has to be laid to rest. The purpose of certification is not to standardize the process, but to assert that you have gained specific expertise. In fact, there already is a way to standardize the process – the IEEE standard for test documentation – and if you take a look at it, you will find that it is not as cool as it seems. If we had to follow the standard point by point, the testing process would become unnecessarily document-heavy, slow and boring, resembling a court case more than an experimental process.
  • I will find a better job / get promoted if I am certified.
    It is possible that certificates will have a certain value for your future or current employer, but that shouldn’t be everything. Personally, I wouldn’t trust an employer who uses certificates alone to evaluate an employee. Most of the time, when it comes to recruitment, your motivation and willingness to progress will play a bigger role in the decision than the certificate.

You can go on and continue the list. What probably pops into your mind as a question is…

Who values certification, anyway?

My personal opinion is that we can split this group into three sub-groups:

  1. New testers who are looking for a way to prove themselves – and this is kind of normal. Every new tester is trying to prove his/her value, wants to progress fast and dynamically and, most of all, is not familiar with all the testing fallacies, one of which is certification.
  2. Testing professionals who have invested too much in certification themselves – psychology is a weird thing, and it tells us that when an individual makes a costly choice, he or she must find a way to justify it. And this is the case with this sub-group of testers: they probably spent $200 for the foundation level course and much more for the intermediate, manager and ultra-super-mega-uber-testing-ninja-master-rockstar levels. It is understandable why they will tell you certificates are valuable; otherwise they would have to admit that they threw their money into the ocean.
  3. Non-testing professionals who are involved in the recruitment process – and we have to add a caveat here: even when they are interested in you having a certificate, don’t be too proud of it. It is just a conversation starter, and you will have to actually prove what you can do, not just rest on your “certification laurels”. Why are HR people and recruiters interested in certification? Well, normally they have little to no knowledge of what a really good testing professional is, so if a candidate has a certificate, it is a really comfortable social contract. If he happens to be a complete moron who knows nothing about testing, they will blame the bastards who certified him, because it’s their responsibility to tell good testers from incapable ones. And if he turns out to be the greatest professional: “Yay, see that? I told you, certification is an important thing for a tester to have.”

So, what is the truth about certification?

The truth is: it is useful to some extent. The reason people are basically impressed by a certificate is that it shows you have the dedication and the motivation to invest in your own professional development. It is also useful – or at least its foundation level is – because you can get familiar with some basic terminology in the testing domain. That doesn’t necessarily mean you will be able to explain what it means or understand it profoundly, but you will be comfortable using it on certain occasions.

Should I get certified, after all?

I am not the one to answer this question; it’s you and only you. My suggestion is: if you want to invest in yourself and your professional development, invest in practical courses and not in certification. Testing is a practical activity, and it is only learned and explored effectively through practice. And not only testing: you can invest in learning some basic programming or networking or any other generic IT knowledge. That will always be in your favor. I don’t want you to stay away from certification because of me, and I don’t want you to take it because of me, either. It is all up to you and your personal philosophy. My own opinion is: if you are passionate about your profession, if you strive to progress, if you make yourself stand out from the crowd by being proactive, or a blogger, or a conference speaker, I believe no one will ever ask you whether you have a certificate.

And one more suggestion if you do decide to get certified anyway. It is from the book “Lessons Learned in Software Testing” by Kaner, Bach and Pettichord – I quote from memory: “if you can get a black belt in only two weeks, try to stay out of fights”.

That’s it for this week, thanks for reading, and as always I would love to read your opinion on the topic. That was it for Outdated testing concepts as well; I think I have said what I had to say on this topic for now. Some day I will probably continue with a new one. Thanks, good luck! 😉

Outdated testing concepts #3.

Link to Outdated concept #1 – Anyone can test.

Link to Outdated concept #2 – The guardian of quality.

This week’s outdated testing concept is something of a celebrity in its area, because it has created so much confusion and so much excitement at the same time, and is so often misused and misunderstood, that it is probably the best candidate for a review in Outdated testing concepts and should take its honorable place here: automation in testing.

[Image: rusty gears. Source: http://i.istockimg.com/file_thumbview_approve/11202223/3/stock-photo-11202223-rusty-gears.jpg]

A few words before we start…

The definition of automation that I use here is the general one – automating an action – and its possible uses in the testing process. By this I want to explicitly state that I mean automating an action in testing, not automating the process of testing. I deliberately try not to use the term “checking” and dive into the discussion about testing and checking, simply because I don’t feel ready to give my opinion on it. Otherwise, the post “Testing and Checking Refined” and the work Michael Bolton and James Bach have put into it are amazing; I share many of their thoughts and conclusions, and I would strongly recommend anyone to read it.

Outdated concept #3: The cult of automation.

I believe every one of us testers has, at least at some point in their career, been part of the cult of automation. As I said earlier, automation has created a lot of excitement as well as a lot of confusion. The main reason behind both is the same: it was, and still is, presented as the ultimate solution to problems in software testing, “The Cure” for everything. When I was a newbie (in fact I still am, I just pretend to be a smart ass 😀 ) I was presented with this bright vision of automation – how cool it is, all the possibilities it gives you and so on – and it looked so cool and awesome that I was sure this was the thing I wanted to do. Of course we fall for that; we are not dumb, we all want to perform our job faster, more efficiently and with fewer errors on our side, and automation was pretending to provide exactly that. BUT that belief was based on some false assumptions that we need to take a look at:

  • “Automation is cool, because it saves time” – from the perspective of “I have to fill in that form with all that data and submit it” versus “have the script do that for me”, yes, it looks like it saves time. But there are so many other aspects missed here: building the proper infrastructure for effective automated scripts is a time-consuming activity and a full development process in its own right, with all the development phases in it, including bugs. We can spend far more time debugging our scripts than it would take to perform the tests manually.
    The maintenance of these tests is a time-consuming activity as well. Consider all the changes that will occur in your application: for example, simply hiding an input field in a form might cause all your tests to break, and you will have to spend time adapting them (see the sketch after this list).
    There are different types of testing, requiring different types of infrastructure, and we can’t always reuse all our tests, which is another time investment.
  • “Automation is cool, because we can automate user interaction with the system under test (SUT).”
    No, we can’t. We can make a script act on it, but not interact with it. The term interaction itself implies a process of mutual action, of communication. A human can communicate with the SUT and process and evaluate the information it gives back; an automated script cannot – it can only act.
  • “Automation is cool, because we can automate all the tests.”
    … and rainbows and ponies that poop Skittles, and other mythical creatures… This is obviously the “elephant in the room”; we have all seen it at some point. Everyone imagines the utopia where we will be able to automate all the tests we have to run: we will just hit “run” in our IDE and it will execute everything on its own, take screenshots for us, report a bug if a script fails, attach logs and screenshots, and it will probably also do the dishes and the laundry, and eventually make us breakfast in the morning… joking, of course. This is impossible for too many reasons, some of which are: there is no such thing as “all tests” – we all know exhaustive testing is impossible, therefore “all tests” is a hollow statement, and we couldn’t even sit down and write all the scripts to be executed. Another reason is that not all types of testing are good candidates for automation. The non-functional tests – or, as the CDT community prefers to call them, the para-functional types of testing – are a great example: usability, accessibility, installability, maintainability, etc.
  • Automation is cool, because testing, in a nutshell, is performing a series of predefined actions.
    Saying this is like saying “software development is just writing code”. It’s just the tip of the iceberg; there is so much more that testing includes in order to provide a high-quality service – observation, exploration, critical thinking, evaluation, experimentation, application of different heuristics, adaptation to specific conditions and context… and many, many more actions that cannot be predicted and predefined.
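
To illustrate the maintenance and “acting vs. interacting” points above, here is a minimal, hypothetical sketch – assuming Python with the selenium package, with an invented URL and invented element IDs – of the kind of script we are talking about:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.test/register")  # hypothetical page

    # Every locator below is an assumption about the page. Hide or rename a
    # single field and the script fails until a human investigates and fixes it.
    driver.find_element(By.ID, "email").send_keys("user@example.test")
    driver.find_element(By.ID, "password").send_keys("s3cr3t")
    driver.find_element(By.ID, "submit").click()

    # The script acts on the page, but it does not interact with it: it will not
    # notice a broken layout, a confusing error message, or anything it was not
    # explicitly told to look for.
    driver.quit()

It is a perfectly useful script, but it is also a small program of its own that has to be designed, debugged and maintained like any other piece of code.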

The list can go on and on; if you like, you can continue it on your own. Over time, the automation concept has become that demi-god that is supposed to solve issues automagically. Mike Talks (@TestSheepNZ) amazed me with his article on the MacGuffin effect in testing, so, inspired by him, I will make another analogy.

In ancient Greek tragedy there was the term “deus ex machina”, or “god from the machine”. It was a character playing a deity who was brought onto the stage with a crane (that’s the machine part) in order to solve an unsolvable mystery or a problem that seemed to have reached a dead end. Enough with the history lesson; in case you don’t see the resemblance, I will be more explicit. I believe the common understanding of automation is the same: it should solve all problems, detect all errors and automate all actions, effortlessly. That is impossible, and this is where the confusion comes from. As in the ancient tragedy, in automation there is also a human pulling the crane – or the automated script – and it is all up to human intellect to use the tool efficiently to automate an action, or to fall into false beliefs.

So, is automation evil? Is it the enemy?

Of course not. This article isn’t about how useless automation is. It’s about how to think of it in order to make it useful, rather than expecting surreal results from its use.

One more thing: sometimes I see rebellious comments among testers that go to the extreme of “automation is useless, it is wrong, nothing should be automated” and, most absurdly, “we will be replaced by machines”. That last claim alone shows that the person saying it implicitly agrees that testing performed by a human can be completely replaced by an automated tool or script, which, as I stated above, is false. It was actually proven long ago by respected contributors to the testing craft such as Cem Kaner and Gerald M. Weinberg, so I am literally just paraphrasing them.

So, how is automation useful?

  • First of all, we should consider putting an end to the whole manual vs. automation dispute. It was never a real distinction in the first place: there is no such thing as manual-only or automation-only testing (with “manual” meaning “testing performed by a human”).
  • We should treat automation as a tool, not as a solution. It’s a tool that helps us gain information about the product we are testing.
  • As with any tool, we should know what it is good for as well as what its limitations are. There are some purposes for which automation is useless; let’s not try to hunt rabbits with a bazooka.
  • Automation tools can only automate actions. They cannot automate observation, analytical thinking, problem solving, or any strategy for discovering bugs.
  • Last, but not least – and here I am quoting Michael Bolton’s tweet from memory – we shouldn’t think of automation as freeing us from doing something, but as enabling us to do something. So the option of letting the thing run while we go grab a beer and watch the game isn’t an option, sorry. A small sketch of what that “enabling” can look like follows right after this list.
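
As a small illustration of that last point, here is a hypothetical sketch – assuming Python with the requests package, with an invented endpoint – of a tool that enables testing instead of replacing it: it does the repetitive collecting and leaves the evaluation to the tester:

    import statistics
    import requests

    URL = "https://example.test/api/health"  # hypothetical endpoint

    timings, oddities = [], []
    for i in range(50):
        response = requests.get(URL, timeout=5)
        timings.append(response.elapsed.total_seconds())
        if response.status_code != 200:
            oddities.append((i, response.status_code))

    # The tool only gathers observations; deciding whether a slow response or a
    # stray non-200 status actually matters is still the tester's job.
    print(f"median response time: {statistics.median(timings):.3f}s")
    print(f"slowest response:     {max(timings):.3f}s")
    print(f"non-200 responses:    {oddities if oddities else 'none'}")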

That’s it for this part. I know it’s a controversial topic and many more opinions on it are yet to come, but I believe talking about it will help us clear things up. Of course, I should give credit to the materials I used: many of the conclusions I draw in this blog post are not my own, but are influenced by the works of James Bach, Michael Bolton, Cem Kaner and Gerald Weinberg, and by many other blog posts and comments I’ve read.

Thanks for your time. I hope it was interesting and useful for you too and, of course, don’t forget: your opinion matters and I will be happy to read it. 😉

Outdated testing concepts #2

Link to part one: Outdated testing concepts #1

This week’s outdated concept is dedicated to the “quality guardian”, or “the gatekeeper of our product’s success”.

Outdated concept #2: The holy guardian of quality.

[Image: quality stamp. Source: http://kmtextiles.com/wp-content/uploads/2013/11/Quality.jpg]

The first stereotype that was branded into my brain in the first months after I started working as a tester was: “QA is the person responsible for the product’s quality”, along with “Your job is to prevent bugs from being released to production” and “Your job is to test the product so it meets the functional requirements / acceptance criteria”, and so on.

I can recall now that I wasn’t really happy with the job title “tester”; to me it sounded like someone who is about to get shot out of a cannon or to perform crash tests on a car or something. In other words, “tester” seemed to me like a bit of an insult, while “quality assurance”… wow, that was shiny. At least to me, at that not-too-distant point in my past, it was a rock star title: I was responsible for the quality of the product and I was told I was the man to give the final verdict on whether it shall go live or we will have to keep fixing bugs until it’s done. Seemed like a pretty badass job to me… and it was all lies.

How are we responsible for quality?

Now, enough rainbows and unicorns. We are back in the present, so let’s put on our “investigation hat” and analyze our part in the product’s quality, and whether it really is the principal role that was described to us, or just another outdated concept.

Let’s start at a pretty basic level:

  • If the developers are writing crappy code with no unit tests, can we impact that aspect of quality?
    No. Of course we can discuss it with them, beg, ask, threaten or use whatever negotiation tactics we can think of in order to make them do it, but we can’t actually force them to, therefore we are not responsible for that part of the quality of the software product.
  • If the requirements for the new features in our product are vague and ambiguous, impossible for us or the devs to decipher, do we have the power to change that?
    In fact we do, but it doesn’t guarantee anything. Why is that? We can protest against badly written requirements, we can ask for revisions and meetings in order to “sort things out” and make them clear and understandable for the whole team, but let’s face reality: we are in the 21st century, we are mostly involved in Agile and Scrum projects, because development teams like changes and they like them in big portions and at really high speed. Those poor requirements are probably not going to stay the same for even a week, and who says they are the only source of knowledge about the product? What about the emails, the instant messages, other documentation like design blueprints, the meetings, phone calls, conference calls, hangouts, one-on-one meetings, or the managers who tend to sit next to a developer and tell him “how it is supposed to work”… Do you still think you have reach over all of that? I strongly doubt it.
  • What if the management takes bad decisions about the vision of the product?
    We can’t change that either; after all, we are there to help the product succeed, but it’s not our job to manage it. Of course, we could be involved in management, but there are certain management decisions that cannot be argued with. And this is the end of the belief that we have the right of “veto” over the release. Personally, I am happy with that: I wouldn’t like to bear the responsibility for that decision on my own. This isn’t a one-person decision for a good reason – everyone makes mistakes, and we don’t want to turn anyone, not just the tester, into a sacrificial lamb. It’s easier to have someone to blame, and many companies might still do this, but it’s not the way to progress.
  • Can we assure the quality of testing in general and/or the quality of our own job?
    This is a really tricky question, but I would say no. Do we really have a way to track the progress and quality of the other testers’ work? Can we guarantee that the job they perform, even if documented, follows our understanding of high-quality testing? And what’s even more critical: can we guarantee the quality of our own job? Personally, I wouldn’t, because it is easy to delude yourself, and after all we as testers must always be suspicious – even our own skills should be tested and analyzed, through our own senses as well as through the feedback of the other members of the development team. Only this way can we be sure that the job we perform adds value to the project and not only to our expertise, or, as Keith Klain mentioned in a talk I linked in this post, this is a way to make sure “we are performing our testing holistically”.

And the list goes on and on… So, as we can see, the responsibility for product quality isn’t a single person’s responsibility, it’s a team responsibility. Not in the sense of “shared responsibility is no one’s responsibility”, but in the sense that everyone adds value to the product with their own unique skills and bears the responsibility for the consequences of their own mistakes.

So, from the picture I have drawn so far, what we can actually do seems a bit worrying, doesn’t it? Don’t worry…

“There has been an awakening…”

We are no longer happy being called QA, and we realize that our real profession is testing. And by testing I don’t mean just “playing with the product”, as non-tester individuals often teach us to believe, but the process of analytical, conscious experimentation with the product in order to explore it and evaluate it (J. Bach), and to provide an expert opinion on its possibilities to succeed and the risks that can lead it to failure.

So, how can we promote our job? From everything I have said, doesn’t it seem that we have no chance to change anything? No. Here’s a good list of things we can do to promote the real value of the testing profession:

  • We have the most wonderful, interesting profession in the software business – the testing part: we get to experiment with the product in order to expose its weak and strong sides.
  • We can construct our testing as a complete scientific experiment and develop not only our testing skills, but also our scientific skills and the way we learn.
  • We have the opportunity to put our hands on the product first, once the development team is done with it.
  • We have the opportunity to mediate between the marketing, business, management and technical teams within a software project, and in this way learn far more than any of these departments’ representatives alone.
  • We can literally break stuff and no one gets angry with us; sometimes we even get congratulated for breaking it. This is a joke, of course – we all know “we didn’t break the software, it was already broken when it came to us” (Michael Bolton).

The list goes on and on. As you can see, speaking of software testing simply as “someone being responsible for the product’s quality” is just naive and barely scratches the surface of what the software testing profession really is.

And last, but not least, I believe the main way in which we can drive our profession forward is this:

I believe James Bach once mentioned these:

  1. Learn the testing vocabulary and terminology.
  2. Learn the testing methodology and techniques, i.e. learn how to be experts in our craft and how to provide a high-quality service as testers.
  3. Learn to explain our craft in a manner that promotes our profession.

And I believe the last one is really important: we need to make ourselves heard and seen, and we need to learn to tell a compelling story about what testing is and what its true value is.

That’s it for this week. I hope it was interesting and useful; stay around for the next part. I will be happy to see your feedback, as always. Good luck 😉


Outdated testing concepts #1

The end of the previous year and the beginning of the current one are normally the time when we are determined to make New Year’s resolutions, which most of the time turn out to be big, fat lies, but anyway. I am not writing this to make resolutions, but to set goals. And we, as a testing community, have a lot of work to do in order to progress, since our craft has been, as James Bach called it, “in perpetual childhood” for more than a couple of decades. I believe the reason for this is that there are still a lot of misconceptions, myths and just plain outdated beliefs about testing that need to be retired, so that we can progress in the right direction and leave behind the useless baggage of principles that never worked or were ineffective.

[Image: stereo cassette. Source: https://s-media-cache-ak0.pinimg.com/736x/68/cb/c3/68cbc3afe76b87fb9963e88a416f2a6a.jpg]

So here is our list:

Outdated concept #1: Testing is easy. Everyone can do it.

This concept has been around for far too long, and I believe every one of us has had to explain at least a thousand times how our job isn’t simply “playing with the product” or “breaking something” or “finding the bugs” in it. The testing craft is as profound and complex as it gets, and no, testers are not failed programmers – the two simply shouldn’t be lumped together. Anyway, this is preaching to the choir, since many readers of this blog are testers themselves.

What really surprises me is that companies took this concept for granted and even adapted their processes to it. All the outsourcing companies try to make new testers believe that what matters isn’t the tester’s specific skills, knowledge of the product or experience, but the documentation of the process itself. They will make you believe that testing can be performed by mindless morons, and that you should write your test plans and test cases as if they were going to be read by mindless morons. This way they make sure that once you are gone, the next totally green and incompetent tester can take your spot and do just the same.

Their biggest fear is when you try to defend your value by explaining how complex a testing process is – that it is, in fact, a scientific experiment with all its assets, and therefore needs an educated expert in order to be performed properly and effectively.

I had an interview once where the interviewer was only concerned with whether I could write test cases – just that. So we had an argument: that testing shouldn’t be reduced to a specific set of actions in order to be effective, that this is only one approach to it, that test cases are not the best way to document testing, and that the testing process is far too complex and fluid to be defined as a set of rules – it depends on observation, critical thinking, oracles, assumptions, risk assessment, heuristics, etc., etc. The poor guy was almost shocked; it was a cataclysm in his narrow view of what testing is. He was finally relieved to learn that I am able to write test cases, I just don’t like to, because it’s boring and doesn’t document the way I perform testing. I finished with the statement that this is a so-called “best practice” that we tend to stay away from, because it’s not always relevant. The answer was: “Well, you see, these are best practices for a reason.”

So, how can we push back against this and justify the importance of testing as an experimental process and not just a scripted activity?

  • First of all, don’t be afraid to stand up for your beliefs; make sure you are able to explain the work you do, not just do it.
  • On the other hand, make sure you can give clear examples of “why”, and actually apply what you believe in rather than just talk about it. Demonstrated knowledge is one of the most valuable assets in any craft.
  • Don’t expect anyone to know the specifics of testing – that is our job – but give the details carefully and in a manner understandable to a person with no background in software testing, as they can cause a lot of confusion if served all at once.
  • Knowing your craft and being able to answer the question “why” is vital for your development as a tester. As you can see from my opponent’s viewpoint in that interview, the answer to the question “Why follow best practices?” varies from “because we have to” to “because everyone else does”. It doesn’t take an enormous effort to question such an explanation, and we can definitely do better than that.

I wasn’t expecting this to get so long, but as it goes in testing, things are dynamic and we should adapt to them. I will write some more posts on obsolete concepts later in my blog.

Important notice: some important sources that provoked me to get to this point are the book “Lessons Learned in Software Testing” by Kaner, Bach and Pettichord, as well as the videos from Introduction to BBST by Cem Kaner.