Software testing is… part 2 – rooted in social science

This is the second part of the "Software testing is…" series, based on the mind map I provided in the initial post; you can take a look here: http://mrslavchev.com/2016/11/04/software-testing-is-part-1/

I bet a big part of this blog's readers will be puzzled by the presence of the words "software" and "social science" in the same sentence, and that's OK. Here's how it goes.

The current state of thinking about software testing.

When I was a newbie (not that I consider myself experienced now – I am just an old newbie) and first started studying the software testing craft, I was presented with the following picture of software testing.

Software testing is a technical activity that relies on an understanding of the technical side of the product, such as:

  • How the product is built – structure and frameworks
  • How specific functionalities are implemented
  • What protocols are used for communication
  • How the application works – databases, interfaces, memory, processor usage
  • How the internet operates – basic networking knowledge
  • How the application depends on its environment – OS, hardware, third-party libraries
  • and many more

All of this is correct, but far from enough. It would be enough only if we accepted the definition that an application is just a series of instructions executed following a specific logic.

I like the example Cem Kaner gives in the BBST Foundations course: the above is like saying that a house is just bricks and materials arranged in a specific way, which is not true. A house is a place where someone can live and feel comfortable, so there's more to a house than the materials. The same goes for a software product – it's not just the code; it comes to partially replace a human interaction. I will go into more depth on this a bit later.

The soft skills people.

Suddenly, when speaking of testing, a person or two will show up saying that testing is not just technical, but incorporates some so-called "soft skills". By "soft skills" they normally mean some part negotiation skills, some part teamwork, some part leadership, presentation skills, communication, etc. I say "some part" because, so far, I haven't heard or read any credible explanation of what exactly "soft skills" are made of; normally they are described as a small amount of this and a small amount of that.

I agree that this is important too, but it's too general, in my opinion. A good professional in almost any area who interacts with clients, colleagues or any other human beings is expected to be a good communicator, a good negotiator and a good team player. So I personally consider these basic corporate survival skills – they are not something strictly specific to software testing. So, what is?

Software testing is rooted in social science.

Social science cloud
Source: http://www.proprofs.com/flashcards/topic/social-science

I mentioned earlier that a software product is not just a bunch of computer instructions ordered following a specific logic. Software applications came into existence to replace, at least partially, a human interaction; therefore they are not only there to do what they are supposed to do, but also to bring us comfort and satisfaction. To understand how an application comforts our clients, and how we can measure this in a reliable way, we need to understand human nature – and this is where the social sciences come to help us. Or, to use Kaner's house analogy, it is more useful to know how to evaluate a comfortable house than merely to know how one is built.

Many times, under the influence of Kaner, Bach, Bolton and the CDT community in general, I have claimed that software testing is a science – or maybe not a science itself, but a discipline that uses many scientific methods and approaches and corresponds with other sciences, so that we can apply knowledge from those sciences in testing and our testing can evolve and become more diverse. The aim of this post is to give some perspective on how testing corresponds with social science, and which of its branches have been useful to me so far. It's important to mention that I don't consider myself an expert in any of the following, so all of the conclusions come from my personal, non-expert observations.

Psychology

In fact, psychology is a science that can help in any aspect of life, and it has so many branches and sub-branches that it would be practically impossible to cover all the possible applications it can have to software testing.

Psychology is the science of the mind and behavior, and that's something we rely on very heavily in testing, in two aspects:

Clients

In order to satisfy our clients' needs by producing a specific software product, we want to make them happy – we want to influence their emotions, and in no way do we want to influence them negatively. That's why psychological knowledge can be useful in software testing, specifically in areas like usability evaluation. If you want more information on this, I recommend the book "The Design of Everyday Things" by Don Norman, where you can learn how the design of the everyday objects we use actually invites specific actions.

Ourselves

The second aspect in which psychology can be very useful to software testing is reflection-oriented. We are professionals at assessing quality and performing experiments, but how can we tell whether we are making the right experiments or have the correct judgement – in other words, how do we test a tester?

Well, there’s no guideline actually, how to be a better tester or how to have the proper judgement, but we can be aware of what our own weak sides are or what known “defects and features” (because most of them are actually features) our minds have. And this is where cognitive psychology can help a lot.  We need to know how our minds can trick us, how our perception might be distorted, absolutely naturally and how our judgement can become biased. Another interesting topic that cognitive psychology reviews is the heuristic method for solving complex problems. This is a topic that is often discussed in the CDT community and a popular approach of solving testing problems, even when people are unaware they are doing it and don’t call it a heuristic.

A book that might be of great help to the curious among you, and which makes a great analysis of all of this, is "Thinking, Fast and Slow" by Daniel Kahneman.

Quantitative and qualitative research

Again, without any ambition to be an expert – this is something widely used in other fields like statistics and marketing and, believe it or not, a problem that we solve in testing every day.

In testing we often have the argument about whether our testing process should be structured around providing specific metrics, or focused on experimentation that is profound and provides valuable contextual information.

It seems that in science this problem has existed for ages, in the form of quantitative versus qualitative research methods. And we deal with it in testing as well. How many times have you heard the argument about whether we should use metrics to direct our testing – and if yes, what kind of metrics? How do we translate these metrics into the language of quality and risk? How are they meaningful to us and to our peers?

On the other hand, there's a group of people saying that metrics are easily manipulated and give us good statistical results but bad human or contextual results. They advise a more "qualitative", human and contextual approach to testing, which is in its nature experimentation.

Another question arises: how do we choose one or the other – or do we have to choose at all? Do we need to combine them? If yes, how do we combine them, which comes first, and which is more important?

It seems to me we have a lot to learn from the field of qualitative and quantitative research; in fact, while preparing this article I reviewed some materials that made me think of a separate post on this topic alone, so expect a follow-up in the near future.
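To make the quantitative/qualitative tension concrete, here is a toy sketch with two invented test runs. All names and numbers are made up for illustration: the point is only that a single quantitative metric can be identical for runs with very different informational value.

```python
# Two invented test runs with identical pass rates but very different value.

def pass_rate(results):
    """The quantitative view: one number, easy to put on a dashboard."""
    return sum(1 for r in results if r["passed"]) / len(results)

# Run A: 19 of 20 checks pass, but every check exercises the same login form.
run_a = [{"area": "login", "passed": i != 0} for i in range(20)]

# Run B: the same 19 of 20 pass, spread across payment, security, data
# integrity and login -- a qualitatively richer picture of product risk.
areas = ["payment", "security", "data-integrity", "login"] * 5
run_b = [{"area": a, "passed": i != 0} for i, a in enumerate(areas)]

print(pass_rate(run_a), pass_rate(run_b))   # identical: 0.95 0.95
print(len({r["area"] for r in run_a}),      # 1 area covered...
      len({r["area"] for r in run_b}))      # ...versus 4 areas covered
```

The metric alone cannot tell the two runs apart; only the contextual, qualitative information about what was covered can.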

Linguistics

This might seem a little odd to you, but expressing ourselves is actually a vital part of our job in testing. I often see people arguing about the words we use in testing, what a specific testing term means, or the purpose of using this term over that one; I also often see these distinctions dismissed as "just semantics", but I don't think people realize how vital these semantics are for the quality of our craft. And believe me, this comes from a person who normally communicates his thoughts on testing in a language that's not his native one, so I am used to saying something stupid or incorrect and having to deal with the result.

Here’s how proper use of the language can be crucial for testing:

Bug advocacy:

Bug reporting is not just a routine task we do in our day-to-day activities, although it looks like one. A quality bug report is not "just a bug report" – it's your evaluation of the quality of the product, and sometimes it's the only proof of your contribution to it. So preparing a high-quality bug report is crucial for everyone on the team and for the product itself.

The more you develop that skill, the more credibility you and your work gain in the eyes of developers, analysts, management and so on. And believe me, no matter how much time and effort you've put into building that credibility, you can ruin it within a day by letting negligence creep into your bug-reporting language, so it is very important to be careful, professional and specific in what we report as a bug. Don't forget there are people depending on our judgement.
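As a sketch of what "careful, professional and specific" can mean in practice, here is one possible skeleton of the information a persuasive bug report carries, modeled as a small data structure. The field names and the example bug are my own invention, not a standard; the point is that each field forces the reporter to be precise instead of negligent.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    title: str        # one line: the symptom plus where it happens
    steps: list       # minimal, ordered, starting from a known state
    expected: str     # what should happen, and according to which oracle
    actual: str       # what really happened, observed, not paraphrased
    impact: str       # who is hurt and how badly -- the advocacy part
    environment: str  # build, OS, browser, data set

# A hypothetical report built from the skeleton above.
report = BugReport(
    title="Discount code ignored when re-applied at checkout",
    steps=["Add any item to the cart",
           "Apply code SAVE10, then remove it",
           "Apply code SAVE10 again"],
    expected="Total reflects the 10% discount (per the pricing spec)",
    actual="Total shows the full price; no error or message is displayed",
    impact="Customers are silently overcharged; likely refunds and support load",
    environment="build 2.4.1, Chrome, staging data",
)
print(report.title)
```

Notice that "expected" names its oracle and "impact" argues for the bug's importance – the two parts most often left out of negligent reports.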

Describe your strategy and tell it in a compelling way

I think I've written about this probably a thousand times, and a thousand more won't be enough to emphasize how important it is to be able to explain what you do in a compelling and professional way.

It turns out, in my experience, that many testers, even experienced ones, have no issues performing their job, including all their routine tasks – all but one: being able to explain what they do. In other words – in the words of Harry Collins, in fact – they hold a lot of "tacit" knowledge and have problems turning it into explicit knowledge; sometimes we even fail to explain our explicit knowledge well, without oversimplifications.

This is why linguistic knowledge, a rich vocabulary and the ability to be precise about what you do are so important. Yes, we do report bugs, but no one is looking for bug reporters – they are looking for testing professionals. To be a professional tester you have to know testing, be able to do it well, and be able to explain and defend the approach you've chosen, sounding like a professional while you do it.

Philosophy

Philosophy has so many branches and so many great thinkers involved in it that I can only wish I could make a profound analysis of how it is involved in testing. I am also not going to dive too deep into it here, as I plan to write a separate blog post on how software testing relates to epistemology.

Here’s just some quick thoughts on how philosophy is related to testing. What does philosophy means? From ancient Greek, the literal translation is – “love to wisdom”. Well, that’s interesting, I think I’ve seen this somewhere in IT as well, remember these guys that are always ready to learn more and more about the product, that claim software testing is perpetual learning ?

And there’s actually more, but hey, I don’t want to spoil all of it all at once.

And I think this topic became waaay longer than planned. I hope the above gives you something to think about next time you speak with someone about how "technical" testing is, and I hope you consider the big part that social science knowledge plays in making us valuable testers.

Of course, I would love to read your thoughts on the topic – that's what comments are for. Likes, shares and retweets are always appreciated. Good luck. 🙂

What is software testing according to the “others”

I took a long break from blogging and from the "what software testing is" series, so before I go back to it, I would like to share some personal observations on a few interesting facts. They are related to what software testing is, but from an outsider's point of view. In fact, this is part of the reason I started the series in the first place – opinions about testing are often shaped by people outside of testing.

So, have you ever wondered what others in your company think you're doing? What is their perception of your job? You don't know? Well, it's fairly easy to test this: simply read through the job description posted when hiring new employees. Because that's what it is – job descriptions are often prepared by HR, leads and managers, sometimes development managers, so I consider them a really good measure of what other people think testing is.

I read job postings very often – not because I am constantly looking for a new job, but because I am constantly looking for personal improvement and want to make sure my skills are still relevant to the market. I believe that's something every good professional should do, and it can also give us perspective on the craft. Reading job postings always leaves me with the impression that the people posting job offers for testers are either non-testers, copy-pasting job descriptions without putting any analytical thought into them, or drunk. Here are some of my favorites, found while researching this blog post (these are all requirements from real job postings):

Analyse automation and other failures and:

  • Accurately report and track defects
  • Identify areas of improvement

"Automation and other failures…" sounds to me like the company telling you, pretty straightforwardly, that their automation testing strategy/tool/thing is a total failure and they want you to analyze it and probably fix it. Joking aside, I bet the author meant something quite different, but didn't pay close enough attention.

“Keeping track of developer changes and doing the appropriate tests when needed”

"Developer changes"?! Does this mean the tester should pay attention to whether developers switch places, or whether a developer changes his clothes, haircut or political views? What kind of change do they mean?

Not to mention ridiculous role headings like "We are hiring an automated tester". Wtf do you mean by "automated tester"? Who are you going to hire – a T-1000? Iron Man? Wall-E? Testing is a human activity, performed by humans who occasionally use automated tools, but that doesn't turn them into automated testers.

Besides that, job descriptions normally contain a list of activities – a sort of "what you are going to do when you get hired" explanation. So here's how it looks: raw material that I practically copy-pasted from several offers:

  • Analyse business and technical requirements, identify potential software issues
  • Design and execute test plans and test strategies
  • Execute functional, regression, integration and performance tests
  • Create test reports, defect analysis and troubleshooting
  • Maintain and regularly update QA related documentation
  • Prepare, monitor and maintain test environments and systems
  • Execute manual tests;-
  • Create detailed test plans and high fidelity test suites with test cases;-
  • Test suite execution, result analysis and reporting;- Estimate testing complexity and tasks for User Stories;- Coach other team members.
  • All applicants should be familiar with industry best practices for testing products including:
    • Defect classification and issue severity rating
    • White box, black box, and gray box testing
    • Usability testing
    • Code coverage
    • Unit, integration, system, and regression testing
    • Security and performance testing
    • Common automated testing tools
    • Continuous integration and continuous delivery
    • Agile methodologies e.g. SCRUM
    • Team sprint planning tools like JIRA
    • Customer use cases
    • Test documentation
  • Build tools and scripts to reduce the need for repetitive and manual tasks and tests.
  • Analyze requirements and product specifications.
  • Create, implement, and execute tests to break our software
  • Interact with engineers and managers to create good testing processes and test plans for software projects.
  • Interact with customers to understand their testing requirements and report issues.
  • enforce the acceptance criteria of features;
  • Design automated test cases, review existing such and analyze results;
  • Executing different types of black-box testing, including functional and non-functional
  • Automating test cases using various tools and languages
  • Documenting and assuring the quality of software applications across all architectural layers
  • Define and execute functional, automation and performance test plans and strategies;
    •   Prepare test environment, business scenarios and scripts, test scenarios, data and test scripts;
    •   Execute test cases, file bug reports, and report on product quality metrics;
    •   Drive testability requirements into the product;
    •   Follow good engineering practices.
  • Develop new and maintain existing automation tests;
  • Write automatic integration tests.
  • Write test documentation.
  • Work on understanding scenarios, and reviewing test cases to ensure that they meet the testing approach.
  • Ensure that the business process is respected
  • Work in collaboration with the managers responsible for the quality of the deliveries.

The problem:

I believe every person in IT, and in testing in particular, is very good at seeing patterns. Very often, when I review job offers, I see the following pattern:

Testing is mostly presented as:

  1. Writing documentation:
    Examples:
    “Create detailed test plans and high fidelity test suites with test cases”
    Design automated test cases
    Write test documentation
  2. Performing predefined tests:
    Examples:
    “Execute functional, regression, integration and performance tests”
    “Executing different types of black-box testing, including functional and non-functional”
    “Execute test cases, file bug reports, and report on product quality metrics;”

Somewhere along these – not so frequently, but still worth mentioning – appear things like "being a good team player", "exchange knowledge with team members", "analyse documentation", "communicate with clients", etc.

So, the problem itself is that none of the above descriptions mentions some of the core activities of testing:

  • Creative thinking – test design is mentioned very often, but nobody ever mentions how we design those tests; it's as if they write themselves, or "we just figured it out already"
  • Problem solving – every job description treats testing as the execution of steps, not as an active process of problem analysis and resolution
  • Exploration of the product – a few thousand times I've seen "analyse documentation" and "analyse requirements", and not once have I seen "analyse the product". We are shipping the product, after all; analysing the requirements is testing the requirements, but they are not the product. And we are not even considering the case where requirements and docs are missing or outdated
  • Experimental approach – testing is all about learning new information about the product through experimenting with it. If we remove this activity from testing, we turn it into mindless zombie clicking

In other words – job offers describe testing as the unsexiest activity ever

Job offers describe testing as a simple set of activities that all revolve around documenting, test execution and reporting.

This is why testing is often falsely viewed as an activity that can be fully automated – because the part of testing visible to management and to individuals outside of testing is the part with components that are easy to automate: execution, documentation and reporting. Meanwhile, the components that are vital for expert testing are often omitted or taken for granted, such as:

  • Exploration
  • Experimentation
  • Problem solving
  • Modeling
  • Questioning
  • Risk assessment

And guess what – all of them are totally non-automatable!

Which leads to my personal favorite from the list above:

Build tools and scripts to reduce the need for repetitive and manual tasks and tests.

I’ve also seen this formulated as: “Perform automation to reduce boring and repetitive tasks.”

Now, let’s be honest about two things:

  1. You can reduce the "need for repetitive testing" by not performing repetitive testing anymore. Captain Obvious taught me this one.
  2. If you consider testing "boring", you probably lack the proper mind-set to perform it.

Dear testers, testing is an activity based on natural curiosity; if you are curious enough and love what you do, you will find every part of your job interesting. And by "interesting" I don't mean "amusing" – you don't work to entertain yourself – by "interesting" I mean challenging. If you have the mind-set of looking for a challenge in every task you perform, you will be curious enough to look for a solution. On the other hand, if you lack the proper motivation and a curious mind-set, you will find every activity in testing dull and boring.

What can we do about it?

I think I've stated the problem well enough, but a more important question comes to mind: what should our reaction to this be?

In the first place, I think it's our fault. For too long we have agreed to be told what we are doing and what our job consists of, without complaining about it, and now we complain about the result.

What can we do to fix things? Well, it's fairly simple – tell your testing story in a compelling and scientific way. I know, and you know, that testing is not what we are told it is – execution of test cases and filing bug reports. Well then, tell the true story about it. Make a presentation about the problem-solving approach you chose to resolve a particular problem, or write a post about how a traditional test-case strategy failed you, so that you had to "switch gears" to something more effective.

My advice is very simple – Tell the true story of testing, don’t let others tell you what it is.

I would love to read your thoughts on the topic. Any shares and retweets are highly appreciated.

The non-manual, unautomated tester

I've been struggling with the division between manual and automation testing practically since I started testing, about three years ago. I've switched sides a couple of times, probably sharing all the false expectations that people claiming to be on either side still have today. Eventually I decided that I won't pick a side – that testing is something above the approach we use to perform it. Occasionally I speak with people who ask whether I am a manual or an automation tester, and I explain that I fit neither description and what my attitude towards testing is. But I often notice a problem.

Manual and automation turned from job descriptions to a way testers define themselves.

It was probably the hundredth time I listened to somebody asking for advice on "how to switch from manual to automation" when it struck me – people actually define themselves as part of a tribe; they seem to see a glass wall stopping them from using tools effectively. This often leads to absurd claims like "I can't write code, because I work as a manual tester", or an even better one – "I want to work as an automation tester, so I don't have to do repetitive tests manually".

Let's look at a simple mind map of what testing consists of – what actions we perform while testing:

test activities mind map

 

As we can see in the picture above, there's one area where software testing can benefit from automation in a really effective way, and that's the actual test execution. Of course, aggregating results and creating reports can also benefit from automation, but I don't consider a pile of data, no matter how it is arranged, a valid report about testing unless it contains some analysis of the process – a section with conclusions, pros and cons, and so on.
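A minimal sketch of this split might look as follows. The product and the checks are simulated, and all names are invented for illustration: executing checks and aggregating their raw results is the part a tool does well, while deciding what the results mean is not.

```python
def run_checks(checks):
    """The automatable part: execute every check and collect raw outcomes."""
    return {name: check() for name, check in checks.items()}

# Simulated checks against an imaginary product.
checks = {
    "login_accepts_valid_user": lambda: True,
    "login_rejects_bad_password": lambda: True,
    "cart_total_matches_order": lambda: False,   # simulated failure
}

results = run_checks(checks)
failures = [name for name, ok in results.items() if not ok]
print(f"{len(results)} checks run, {len(failures)} failed: {failures}")

# What this pile of data still lacks is the report: is the failure a product
# bug, stale test data, or a wrong expectation? That analysis, with its
# conclusions and pros and cons, has to come from a human.
```

The script produces the arranged data; it cannot produce the analysis.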

You might be starting to see what I mean; if you don't, you might want to take a look at an old article of mine called Test automation – the bitter truth.

So, let’s think of some conclusions.

Automation is not manual testing on steroids.

No matter how deeply in love you are with the idea that a script will perform your job instead of you, I have to make you unhappy: it won't. Unless… you downgrade the quality of the job you perform so badly that it becomes totally replaceable by a programmatic script. I can guarantee you that if you consider testing to be only "test execution", then yes, it will be possible to fully automate it – and guess what, it will also be full of crap.

Machines will not overtake humanity, Skynet is not coming for you.

Like it or not, there's plenty of "brain work" to be performed in testing; if you want to be successful and deliver high-value testing services, you should do the "brain work", no matter how loudly you scream that it drains all your energy. One piece of advice – get used to it. This happens every day, all the time, to any professional with a creative job; this is what creativity is supposed to do. Many professionals – surgeons, engineers, writers, painters, scientists, developers AND TESTERS – do it every day. Yes, they might use tools to facilitate their job, but they are not defined by their tools. It is useful to think of testing tools the way we think about any other tools – as an extension of the human body, not a replacement for it.
Imagine a blacksmith: he uses a hammer – it's a tool – but not because it will help him work less; it's totally the opposite. He uses it because it will help him achieve more and work better and more effectively, while the creative part of his job, the expertise and the experience, remain the property of the blacksmith, not of the tool.

By saying “I am an automated tester only”

I feel that the so-called "automation only" people are trying to skip the responsibility of owning their testing – owning its creative and analytical part. Instead, they prefer leaning on tools and technology, looking for the root cause of poor testing in the tools they use and blaming them, or the technology, or the environment. They seem to call the result "flaky tests". But those are not flaky tests, they are crappy tests – it's the logic behind them that's flaky. It's like asking a fish to climb a tree and then blaming the fish for sucking at tree climbing.
To be successful and productive, we should know the limitations of our tools, and one limitation of the automated tools we have today is that they are not humans and cannot replace them.
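Here is a sketch of how "flaky" checks are usually flaky logic rather than flaky tools: a guessed, fixed sleep races against the product, while an explicit wait on the actual condition does not. The slow operation is simulated here; in a real suite it would be a page load or an async job.

```python
import time

def slow_operation(delay):
    """Returns an is_done() probe for work that finishes after `delay` seconds."""
    started = time.monotonic()
    return lambda: time.monotonic() - started >= delay

def flaky_check(is_done):
    time.sleep(0.01)      # guessed wait: passes or fails depending on timing
    return is_done()

def robust_check(is_done, timeout=2.0, poll=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:   # wait on the condition itself
        if is_done():
            return True
        time.sleep(poll)
    return False

print(flaky_check(slow_operation(0.2)))    # the guessed wait loses the race
print(robust_check(slow_operation(0.2)))   # the logic, not luck, decides
```

The tool (the sleep, the poll loop) is the same in both cases; what differs is whether a human bothered to encode the condition they actually care about.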

By saying “I am manual tester only”

Saying "I am a manual tester only" seems like an admission that a person lacks technical knowledge, coding skills and any perspective on how to develop them, simply because they feel insecure. So they are willing to take one of two options:

  1. Stay in that small labeled box called "manual tester" and complain endlessly about what a dying breed they are and how "big bad automation" will wipe us all out.
  2. Or look for that "magical shortcut" that will skyrocket them into "automation testing", where they will somehow automagically remain relevant and up to date without making any effort.

There are no shortcuts and, of course, there's nothing stopping you from using any tool – you don't have to go to ninja school to be able to use them. Develop the skills, get rid of titles and labels, focus on results, and none of the above will seem relevant to you.

Final, final conclusion

I am really not aware of the historic moment when some white-bearded, magician-looking prophet of software testing struck his staff into the ground and divided testers into manual and automation testers. And I still don't quite grasp why anyone should consider himself one or the other.

We are testers; we perform testing. It's an important job, it's a complex job, and like any complex job it will greatly benefit from tools that facilitate it. That's it.

So, tell me – will you remain in that small square with only a role written in it and no meaning, or will you try to reach your full potential as a professional tester?

Hope you enjoyed the topic, guys; I'd love to read your opinions on it. 🙂

Test automation – the bitter truth

Recently I came across an awesome blog post by Mike Talks, in which he tells the story of his personal experience with automation and all the false expectations he and his team had about it. It really is an awesome post – if you haven't read it, go do it now.

I was really inspired by it. I had recently written a little post concerning automation, and I realized I had much more to say. I was busy with the "Software testing is not…" series, so I didn't feel like going off topic, but now the time has come to tell each other some bitter truths about "test automation". They are bitter because people don't like them – they don't like talking about them, and they don't like promoting automation this way – but they are the truth nonetheless.

It all starts with love.

I have to say I love automation, I really do. And the reason I am writing this post is that I love both testing and automation, and I want to give the best perspective on how they can supplement each other. Yes, that involves saying some uncomfortable truths, but it has to be done.

So, I said I love test automation – not because I believe it is the one and only solution to all my testing problems, although I am sure I had that delusion when I was new. The reason I like it is that I like to code. I like writing cryptic "spells" and seeing them turn into magic, and I really don't care whether it's a source file, an interpreter or a console – I just love it.

I see a lot of people being excited about automation for a different reason – because they see it as the ultimate solution to all testing problems: the silver bullet, the philosopher's stone, you name it. It seems to them that human testing might be replaced with automated scripts, and everything will be great: we will ship the product in no time and the world will be a better place.

I have to object to this. And trust me, this is not a hate post against automation; in fact, it is a post aiming to justify why automation is useful – useful in the right way, with the right expectations, not the way we might want it to be. To achieve this, I will have to write down some bitter truths about automation.

Bitter truth # 1: It’s not test automation, it is tool assisted testing.

James Bach and Michael Bolton did an awesome job distinguishing between "testing" and "checking", so it would be useless for me to repeat what they already said. If you are not familiar with their work, I strongly recommend you check "Testing and checking refined" and their white paper on automation in testing, "A Context-Driven Approach to Automation in Testing".

In a nutshell, the testing we perform as humans is not easy to translate into machine language; we cannot expect a machine to act like a human. So they draw a distinction between "checking" – a shallow form of testing that can be translated into a set of instructions and expected conditions – and "testing", which describes the process of human testing in all its complexity and depth. Therefore, the term "test automation" carries a false meaning: we cannot automate testing; we can automate checking, and it is more correct to say "automated checking". Yet I prefer another term they used, because it sounds more elegant and conveys more information – "tool-assisted testing". Because that's what we do when we automate something: we use tools to assist our testing. We don't want to replace it or get rid of it; we want tools that enable us to do more, not tools that replace the human effort in testing.
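A check in this sense is just an algorithmic decision rule applied to an observation of the product. Here is a minimal sketch; the "product" function is a toy stand-in invented for the example, not anyone's real code.

```python
def apply_discount(total, code):
    """Toy product code: 10% off with the code 'SAVE10'."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

def check_discount_applied():
    # Everything a machine needs is spelled out: input, observation, rule.
    observed = apply_discount(100.00, "SAVE10")
    return observed == 90.00       # the encoded expected condition

print(check_discount_applied())

# What the check cannot do is everything around it: deciding that this rule
# matters, noticing that a lower-case 'save10' silently applies no discount,
# and judging whether that is a bug. That judgement is the testing part,
# which the tool assists but does not perform.
```
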

Bitter truth # 2: Automation doesn’t decrease the cost of testing.

I am really interested in how that equation works. People claim automation reduces the cost of testing. Let’s see.

  • You hire an additional person to deal with building the framework and writing the tests. (If you think your existing testers can do it part-time, while doing their day-to-day tasks, my answer is… yeah, right.) So that’s additional cost.
  • You will probably spend additional money on the tool licence, or else you will use open source tools, which means that guy will have to spend additional man-hours to make that framework work for your specific testing needs.
  • You will have to pay that guy to write the actual tests, not only the framework, so that’s additional cost.
  • The code that you write isn’t “pixie dust”; it’s not perfect. It turns into one additional code base that has to be taken care of, just like every other production code – maintenance, refactoring, debugging, adding new “features”, keeping it updated. Guess what – it will all cost you money.
  • And of course, let’s not forget all the moments of “oh” and “ah” and “what was the guy who wrote this thinking” that you will run into, related to the framework you use and its own little bugs and quirks. That will also cost you additional money and time.

I think that’s enough. The main reason people give for the “low cost” of automation is that it pays back in the long term. Now that’s awesome. And it would be true if we accepted that, once written, a check is a constant that never changes, that it works perfectly, and that our application never changes until the end of the world. Well, that would be nice, but there are two problems:

  1. We all know from our testing experience that things written in code don’t actually work perfectly out of the box. In fact, it takes a lot of effort to make them work.
  2. If your code base never changes – if it stays the same, you don’t add anything, you don’t change anything, you don’t redesign anything – it is quite possible that you are going out of business. We work in a fast-evolving environment and we need to be flexible about changes.

So, again, I don’t get how all of the above leads to cost reduction.
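To make the maths visible, here is a rough back-of-the-envelope model of the trade-off. Every number in it is a made-up assumption for illustration, not data from any real project:

```python
# Back-of-the-envelope cost model for an automated check suite.
# All numbers below are illustrative assumptions, nothing more.

hours_to_build = 120        # writing the framework and the first checks
maintenance_per_month = 15  # updating checks as the product changes
manual_run_hours = 8        # hours a human needs for one regression pass
runs_per_month = 4

def cumulative_cost_hours(months):
    """Total hours invested in automation after `months`."""
    return hours_to_build + maintenance_per_month * months

def cumulative_savings_hours(months):
    """Manual effort the suite replaces after `months`."""
    return manual_run_hours * runs_per_month * months

# Automation only "pays back" once savings outgrow the investment.
for m in (3, 6, 12):
    print(m, cumulative_cost_hours(m), cumulative_savings_hours(m))
```

Under these particular assumptions the suite only breaks even after roughly eight months – and only as long as monthly maintenance stays below the manual effort it replaces. Change the assumptions and the “long-term payback” can vanish entirely.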

Bitter truth # 3: Testing code isn’t perfect.

I see that very often at conferences and in talks among colleagues. It seems that test automation scripts are some kind of miracle of nature that never breaks and never has bugs – you write them once and it’s a fairy tale. Well, unfortunately, it is not.

As stated above, it is code – normal code just like any other. It has “its needs”: to be maintained, to be refactored and, guess what… to be tested :O Why is nobody talking about testing the automated checks? I mean, are these guys testers, or did they leave their brains in the fridge or something?! It’s code, it has bugs, it has to be tested, so we make sure it checks what we want it to check. Wake up! Not only that, I bet that while doing your automation you will be dealing with bugs introduced by your checks much more often than with bugs introduced by the code of the app that you test.
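Here is a tiny illustration of the kind of bug an untested check hides. The functions are hypothetical; the point is that a check with a broken assertion passes forever, no matter what the application does:

```python
# A check with a bug of its own: the assertion compares the wrong
# things, so it can never fail. All names here are hypothetical.

def get_user_count():
    """Stand-in for a call to the application under test."""
    return 0  # imagine the app is broken and returns no users

def broken_check():
    expected = 42
    actual = get_user_count()
    # Bug in the CHECK, not the app: comparing `expected` to itself.
    assert expected == expected

def fixed_check():
    expected = 42
    actual = get_user_count()
    assert actual == expected, f"expected {expected} users, got {actual}"

broken_check()  # passes silently, hiding the application failure
try:
    fixed_check()
except AssertionError as e:
    print("caught:", e)  # only the corrected check reports the bug
```

The broken version would have sat green in a CI dashboard for months – which is exactly why checks deserve testing of their own.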

Bitter truth # 4: Automation in testing is more reliable… it’s also dumb as fuck

One of the many reasons testers praise automation is that it is more reliable than human testing. It can assure us that every time we execute exactly the same check, with exactly the same starting conditions – which is good, or at least it sounds good. The truth is, it’s simply dumb, and I mean machine-like dumb. It’s an instruction; it simply does what it is told, nothing more, nothing less. Yes, it does the same thing every time, over and over, until you no longer want it to do that thing. Then it turns into a huge pain in the ass, because someone will have to spend the time to go and update the check and make sure it checks the new condition correctly. In other words, automated checks are not very flexible when it comes to changes, and changes are something we see pretty often in our industry.

Bitter truth # 5: Automated checks don’t eliminate human error

For the reasons I stated above, I claim that automated checks cannot eliminate human error. Yes, they eliminate the natural variability that human actions have, but that doesn’t mean we don’t introduce many different new human errors while writing the code. The conclusion is simple: as long as the code is produced by a human being, it may have errors in it – that’s just life.

Not only that, but by having our checks automated, we introduce the possibility of machine error. Yes, our code has all the small bugs and weird behaviours that Java, C#, Python, PHP or any other language has. The framework that we use might have them too, and its interaction with the infrastructure it runs on might also introduce errors. So, we must be aware of that.

Bitter truth # 6: Automated checks don’t automatically equal human testing

I see and hear that pretty often – everyone saying “we know we can’t automate everything”, and yet they continue to talk as if they could. Not only that, they talk as if they could mimic exactly the process a human performs. And that is not possible. No cognitively driven human activity – one where analysis, experience and experimentation work together to achieve a goal – can be automated, not now, not with the current technology. Yes, AI and robotics are constantly moving forward, and if some beautiful day that happens, I would love to see it. Until then, human testing cannot be automated – at least not what I understand as high-quality human testing. What we can automate is shallow testing, and then act like it will do the job just fine.

Conclusion

Again, this is not a hate post. It is an informative post – informative of the risks that automation carries. Clever testers know these risks and base their testing on that knowledge. And yet there are a lot of people running excitedly from conference to conference, explaining how automation is the silver bullet in software testing and will do magic for your testing. This is wrong.

Tool-assisted testing is useful – very useful. And it is fun to do, but we have to use it in the right way: as a tool, as an extension of our testing abilities, not as their replacement. We should know why and when it works, how it works and what traps it might hold for us. In other words, when we want to use it, we should know why, and whether we are using it in the right way.

And most important, human and machine testing simply don’t mix. One can assist the other, but they are not interchangeable.

Hope you liked the post. If you did, I would appreciate your shares and retweets. If you didn’t, I would love to see your opinion in the comments. Thanks for reading 🙂 Good luck.

Automation or manual testing – mentoring question.

I am happy to say that this is the first question that came through the mentoring form on my blog. It took a while before someone “took the courage” to ask a question 🙂 So, once again people, if you have any questions – ask, I don’t bite, I am only trying to help. As I promised, I won’t use real names, so I will replace them with ones I made up. Here it is:

Hello,
I have some questions for you. Manual or automation tester? And if I want to change manual testing with automation, how to do that. Is there good practise for that.

Thanks in advance
Lisa

Hi, Lisa!
Thanks for asking a question via the mentoring form on my blog. I greatly appreciate the fact that you ask for my opinion in your question.
Now, from the way your question is structured, I can assume that you are considering a move from “manual” to “automation” testing. Why I put these in quotes – well, it’s a long story… Anyway, I don’t believe in the division between automation and manual, but many people do.
So, back to your question – which one of the two?
First, it’s a good idea to mention what these two mean to non-testers involved in the SDLC and, unfortunately, sometimes even to testers. “Manual tester” is normally translated as “someone banging on keys mindlessly” – no expertise is taken into account, no technical knowledge; he or she is just someone “looking for bugs”.
On the other hand, “automation tester” means someone smart enough to automate the “boring job” that the “clicking monkeys” would normally do by hand. Now, I don’t agree with either of these claims, but if you listen to people while they talk of testing, they express them, implicitly or explicitly. That’s why automation testing seems to be the dream career everyone is after, while manual testing is considered deprecated and everyone tries to get rid of it.
The facts: you are more likely to get a better salary as an automation tester than as a manual one – it’s just the market. Also, people get more excited when you say you are doing automation, without any regard to how you are actually doing it; you just say “I am an automation tester” and you are considered a demigod or something.
How to move from manual to automation?
You have to be comfortable with programming. If you are not, get comfortable with it. If you can’t, forget about it. Yes, I know about all those recorders and other tools that produce tests, but they are silly; if you want to do some real work you need to know at least one object-oriented language like C#, Java, Python, Ruby, Perl etc.
There are plenty of courses for that – lots of tutorials, books and webinars – you just have to go there and practise. Also, you need to be familiar with the automation tools and frameworks out there, in order to show some relevant knowledge in the domain. In general, this means Selenium, but it won’t be a bad idea to add some variety to your skill set by looking at tools for mobile automation like Appium and Robotium, or some BDD tools like Calabash. Also, it is strongly recommended to be familiar with certain design patterns in automation; they will help you write more concise code, avoid repetition of logic and make your job easier in general. You should also be aware that, moving into automation, what you will be doing 8 to 6 is development. That is what it is: you will be writing code that tests, no matter whether it tests UI, a service or a database. So, if you find this boring in a way, you might want to think about it.
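Probably the best-known of those design patterns is the Page Object pattern. Below is a minimal sketch of the idea; a tiny `FakeDriver` stands in for a real Selenium WebDriver so the sketch runs on its own, and all the locators and names are hypothetical:

```python
# Page Object pattern in miniature. `FakeDriver` is a toy stand-in
# for a real WebDriver; the page, locators and credentials are all
# invented for illustration.

class FakeDriver:
    def __init__(self):
        self.fields = {}
    def type(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        # Pretend the login button "succeeds" only for one account.
        return (self.fields.get("#user") == "lisa"
                and self.fields.get("#pass") == "s3cret")

class LoginPage:
    """All knowledge of the page's locators lives in one class, so a
    UI change means editing one place, not every check that logs in."""
    USER, PASSWORD, SUBMIT = "#user", "#pass", "#login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.type(self.USER, username)
        self.driver.type(self.PASSWORD, password)
        return self.driver.click(self.SUBMIT)

page = LoginPage(FakeDriver())
print(page.login("lisa", "s3cret"))  # True
```

The checks themselves then read as intent (“log in as Lisa”) rather than as a pile of selectors, which is exactly the conciseness and non-repetition I mentioned above.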
If you are asking how to do that from a career-shift point of view, I’d say just put some good-looking examples of automated testing on GitHub and start going to interviews. This will be the best way to become aware of what companies are looking for – what skills and frameworks – and also to benchmark your skills.
My personal opinion on testing manually vs. automatically is: look for a company that can offer you both and will let you choose how to approach testing. Going too deep in either direction, in my humble opinion, downgrades your testing expertise. Testing, after all, is something we do with our brains, our experience and our personal perspective, not something that depends on tools or manual techniques.

Thanks for your time!

Good luck,
Viktor Slavchev

If you would like to ask a question and have it answered by me, please don’t be shy – you can do it in my mentoring form here

Thanks for reading! Good luck! 🙂 

Results for State of testing 2016 survey are live.

Hello everyone,

I was a contributor to this year’s State of Testing survey. You might have seen the posts on social media and on my blog inviting you to go to the official site and fill in the survey. In case you didn’t, you can now go and see what you missed being part of 😀

The State of Testing survey is an annual survey that Joel Montvelisky and the Tea-time with Testers magazine run, and it is by far one of the most valuable things that happen every year in testing. So, I think we really owe these guys a big thank-you for the great job they do.

The survey itself provides tons of interesting facts that professionals in software testing shared from all around the world. Stuff like:

  • What titles do testers have in their companies?
  • What is the size of testing teams?
  • How are testing teams distributed around the world?
  • What percentage of tests are automated/unit tests/integration tests?
  • What is the rate of adoption of CI?
  • What testing techniques are used among testers?
  • What valuable lessons did they learn during the last year? And many more…

So, enough talking from me – go see it for yourself.

Download State of testing 2016 results here.

On trolls and men. Summary of a lightning talk.

I had my first talk at a conference – hooray!

It was exciting, it was interesting and it was challenging for me. So, having in mind my inability to write short stuff and the way I always add more and more, my lightning talk preparation really helped me master a new skill: not trying to squeeze a 30-minute lecture into 5 minutes.

The conference is called QA: Challenge Accepted and is a local conference about software testing in Bulgaria, where I am from. There’s a lot to say about how exactly the idea of the testing troll came to me and what the previous versions of the talk that I rejected were, but that will come sometime in the future.

The purpose of this post is to make a summary of the talk and a bit of an analysis of the parts where I messed up. So here are the slides.

There was a short introduction that I made which was something like:

I am here to present to you a few pieces of advice from my good friend, mentor and spiritual guide – the testing troll.

Wisdom #1: The testing troll doesn’t follow “best practices”

The testing troll is a strange animal. Once I asked him: “Why don’t you follow the best practices in testing? Everyone uses them, they are approved by the community.” And he answered something like this:

“See, every time I hear the words ‘best practices’ I recall when I was a little troll; there was this guy on TV, wearing golden trinkets and rings. He was selling this magic frying pan where you could fry fish, after that a meatball, after that eggs, and the tastes won’t mix. Finally, you can bake some milk in it, just wipe it with a piece of paper and put the pan back in the drawer like nothing ever happened.”

Note: The above might not be comprehensible for anyone who didn’t grow up in Bulgaria, but in my childhood there was a commercial show selling crappy goods, and that’s one part of it.
So, much like the frying pan, “best practices” are probably a useful tool, but not for all cases. After all, in life in general there’s no such thing that works in all cases; there are methods and strategies that work, but only when applied in the right context. Each one of us should decide on his or her own what methods and practices to use.

Plus, as the testing troll says, “we have to use it because everyone does” is a really dumb argument.

Wisdom #2: The testing troll is not a manual tester, nor an automation one.

The testing troll doesn’t like the labels “manual” and “automation” tester. He isn’t manual, because he isn’t only using his hands, and when asked why he isn’t automated, he responded: “None of my body parts is automated, I am pure organic.”

Note: By “organic” I actually wanted to make a joke about all the organic buzz that is out there right now, but the more I think of it, “testing is organic” and I can add some serious proof of it in a future blog post.

The main problem, he thought, is that both labels rest on the false assumption that testing is a process that could be performed by anyone, and that testing is a process that could be defined as a finite sequence of steps.

And this is not true – testing is a mental activity, and being a mental activity, it is constructed from mental sub-activities: exploration, analysis, evaluation, selection of strategies and methods for action, application of those strategies and actions, and evaluation and analysis of the results.

The testing troll believes that any type of testing is performed in the mind and depending on the context, any good tester could decide if he or she would perform it using tools or not.

Wisdom #3: The testing troll treats tools like tools.

The testing troll likes going to testing conferences to meet the community and learn new stuff. Unfortunately, people don’t talk about testing at testing conferences any more; they talk about tools – how to configure them, how to use them.

The testing troll doesn’t understand people’s passion for replacing human testing with machine testing. Machine testing can only replace certain actions, but not the interaction that the work of a skilled human tester provides.

Note: I totally messed up here and forgot what I was about to say. I skipped the rest of this part and moved to the next slide, so as not to waste time and screw up totally. 🙂

As a result, many shallow and inaccurate checks are thrown out there, trying to make us believe that quantity can replace quality, totally missing the fact that it doesn’t in any way improve the quality of testing, or of the information that we obtain by doing it.

After all, instead of extending their abilities with tools, people are actually shrinking them, waiting for tools to do the work instead of them.

Wisdom #4: The testing troll knows about certification.

One day, while he was sitting in his cave and testing, someone knocked on the cave door. It was some travelling salesman from some federation. “Come to a certification course,” he said, “you will be able to become a super giga mega testing troll within three days; plus, we are going to give you a certificate for that. And it’s going to cost you just 99.99.”

And the testing troll thought a little bit. All the other testing trolls in his homeland, Troland, have the certificate for being super giga mega certified testing trolls – how would that make him different, then? Plus, every time he went to an interview, people were interested in what he could really do, not in whether he had a certificate. Also, the cave was small, there was no place for another piece of rock to put the certificate on, and anyway, everyone who got inside got eaten. So he sent him away.

Note: Troland is a name I made up to make it funnier, by just combining “troll” and “land”. It’s not meant to mock any real country’s name or insult anyone. 

Wisdom #5: The testing troll believes testing is exploratory by nature.

The testing troll believes that any type of testing is exploratory. Its purpose is to help us expand our understanding of the product. Each test we perform is a valid scientific experiment, which we perform against a hypothesis we have. In software testing we call such hypotheses testing oracles. Each test is interesting only when it gives us new information. Repetition of an experiment could be of interest to us only when we want to test the system for its internal consistency – whether the same input conditions lead to the same output results.
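An internal-consistency check like that is easy to sketch. `price_quote` below is a hypothetical stand-in for the system under test; the oracle simply re-runs the same input and compares the results:

```python
# A consistency oracle: the same input should always yield the same
# output. `price_quote` is an invented stand-in for the real system.

def price_quote(basket):
    """Stand-in for the system under test: total plus 20% tax."""
    return sum(basket) * 1.2

def consistent(fn, test_input, runs=3):
    """Oracle: run `fn` several times and compare every result
    against the first one."""
    results = [fn(test_input) for _ in range(runs)]
    return all(r == results[0] for r in results)

print(consistent(price_quote, [10, 20, 30]))  # True for this deterministic stub
```

Note that the oracle says nothing about whether the quote is *correct* – only that the system agrees with itself, which is exactly the limited kind of question a repeated experiment can answer.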

A test or experiment wouldn’t be helpful if it doesn’t provide us with information that we can apply in our further testing.

That was my lightning talk. It was meant to be provoking and amusing and to share some opinion; I hope it worked, and based on the feedback, it was interesting for the audience. Of course, there’s so much more for me to improve in doing lightning talks, but it was definitely a challenge.

Thanks for reading. 🙂


Some kick ass blog posts from last week #11

Hello, here’s the new portion of kick ass blog posts:

  • Software testing isn’t just the set of skills that we all read about in testing books and white papers; there’s a large variety of skills that we don’t primarily relate to testing, but that might benefit our testing in a great way. Simon Knight made a great point about it in his post here:
    7 Things Awesome Testers do That Don’t Look Like Testing
  • Another great post by Michael Bolton in the series “Oracles from the inside out”. In this article Michael talks about “conference” as a process of trying to reach shared understanding with the rest of the team. You can see the whole article here:
    Blog: Oracles from the Inside Out, Part 9: Conference as Oracle and as Destination
  • Another interesting and inspirational blog post by Simon Knight on writing a blog post. In it, Simon gives his view on a simple plan he follows when trying to write compelling content. You can see the full article here:
    Write powerful blog posts with this simple template
  • And if you are interested in the topic of how people are writing their great content, I encourage you to read Mike Talks’ article, which is inspired by the one Simon wrote:
    WRITING 106 – A scientific template for writing a blog article…
  • I love reading automation posts that aim to teach testers something new and show new ways to improve their testing abilities, and I am happy to say Bas Dijkstra is always helpful in that matter. His recent post teaches us how to write readable test code – something like recommended coding practices for testers, which, in my experience, is often neglected. The full article you can read here:
    Three practices for creating readable test code
  • Great point from Katrina Clokie on making testing visible and letting other members of the team know what testing is actually about. You can review the whole post here:
    Use your stand up to make testing visible
  • Not to miss an important event: “Dear Evil Tester” by His Evil Testerness, Alan Richardson, is out – go and download it. So far I am about 10% in, really at the beginning, but I love the portion of dark, sarcastic humour that “Dear Evil Tester” offers. I will keep you updated with my opinion on it. Until then, you can form your own:
    “Dear evil tester” on LeanPub
  • A new issue of the testing magazine “Tea-time with Testers” is out. Don’t ask me what’s in it – I haven’t had time to check it out yet. Yes, I am human; the laws of physics and time apply to me, too. 🙂 You can review the February issue here:
    Tea time with testers – February 2016

Other roundup posts: 

Automate the planet’s – Compelling Sunday. 

That’s it for this week. See you next week! 🙂

Some kick ass blog posts from last week #10

Hey there guys, yaaay – 10 kick-ass blog posts already! I can’t believe I did something 10 times consistently without failing at least once. Here’s the list of posts for this week:

  • Jeff Nyman with a great post about WebDriverJS and the use of call backs and promises, really interesting if you are into JavaScript:
    WebDriver in JavaScript with Promises
  • A great talk from Test Bash NY 2015 by Keith Klain on the lessons learned in selling software testing. It is a great opportunity to see the perspective of a test manager who tries to drive his team based on the CDT principles, and all the lessons he learned by doing it. Not only that, Keith addresses many issues within the CDT community that we need to work on. Great, inspirational and definitely a must-watch:
    Lessons Learned in (Selling) Software Testing – Keith Klain
  • An awesome post by Dan Ashby, explaining again that the role of automation in testing should be supplementary, not a replacement for human testing activities. Dan made a great model of the testing and checking concepts and how they work together. An awesome post, I strongly recommend it:
    Information, and its relationship with testing and checking
  • Great news again: another software testing book is on the way, this time by Alan Richardson. “Dear Evil Tester” is its name. What it is about and when to expect it, you can see for yourself here:
    Announcing “Dear Evil Tester” coming soon, and why I wrote it
  • I really recommend taking a look at Brendan Connolly‘s new post on ego, apathy and test cases. An interesting analysis with a little philosophical and psychological flavour. You can find the whole post here:
    Ego, Apathy, and Test Cases
  • And one last thing that I found out – not a testing topic, but part of my other passions: hacking and security. We all know the Tor Browser and how everyone looks at it as the single option for being unrecognisable on the internet, since, as we all know, information is gathered by certain agencies and the social media. It turns out it’s not only network security we have to look out for; there are other smart tricks to identify user behaviour. In this post the author explains how mouse motion and scrolling actions can be tracked for patterns, creating a digital fingerprint with which a user could be identified online. It is a really interesting article:
    Advanced Tor Browser Fingerprinting

Other roundup articles: 

Automate the planet’s Compelling Sunday. 

Some kick ass blog posts from last week #8

Here’s the new portion of kick-ass blog posts from the last week:

Other roundup posts:

Automate the planet – Compelling Sunday.

That’s it for this week. See you all next week.