If you assume that MORE UI automation is a good thing, you are setting yourself up to lose trust in automation, or to kill your automation program outright. Few anti-patterns will kill an automation program as quickly as using UI automation to test everything.
The assumption that more is better is a common one, but it's the wrong one.
More is not always better. Sure, for certain things, more is the better option. But a small suite of stable automation beats a sprawling suite of flaky tests, and if you want to do better than other companies, know that many of them haven't yet learned how beneficial it is to run less, but more stable, automation.
Let’s take a look at an example:
I'm super lucky that I get to consult and work with clients all over the world, so I've seen all sorts of organizations.
One organization executed 123K automated UI tests in 7 days. That's a lot of UI tests!
Take a look at this graph: only 15% of the tests passed. That's a very low passing rate.
You wouldn't conclude that 85% of the features you tested are broken; the idea of over 100,000 bugs being logged in seven days is, well, highly unlikely. And yet here you are, dealing with errors and failures in your system.
These errors are false positives, not real failures, and in one sense that's good news: false positive readings are annoying, but they mean your software is not actually at fault, and that's important.
You do, however, need to know who is sorting through all of those failures! Someone on your team should be triaging every failure that testing surfaces, and with roughly 104,000 failures, you need to learn why they happened.
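To put the arithmetic of this example in one place, here's a minimal sketch (the total and pass rate are the numbers from the example above; everything else is derived):

```python
# Back-of-the-envelope arithmetic for the 123K-tests-per-week example.
total_tests = 123_000   # UI tests executed in one week
pass_rate = 0.15        # only 15% passed

failures = round(total_tests * (1 - pass_rate))
per_day = failures // 7

print(f"Failures to triage this week: {failures:,}")  # 104,550
print(f"Failures per day: {per_day:,}")               # 14,935
```

Roughly fifteen thousand failures a day is the workload a team would need to absorb just to triage the noise.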
So, what's the story behind these failed tests? Why are they failing?
- The first assumption: there is one bug in the application causing all of the failures.
- The next assumption: there are two or more bugs in the software causing the failures.
- OR: there are no bugs at all, and the automation effort itself is producing worthless results.
Most would put money on option 3 being far more likely than options 1 or 2.
This isn’t the worst of it, either.
No business has enough engineers to keep up with sorting through over one hundred thousand failures in one week.
Teams that try can barely keep up with fewer than 10 failures per week, which means that, in practice, no one is analyzing these tests.
Would you agree? Let me know what you think in the comments!
That leads you to question what value these tests serve the organization. What decisions about software quality are they helping your business make?
If your manual testing had failure rates this high, would you be tempted to push the software to production? Of course not.
The thing is, you must ask yourself: why are so many automated tests that fail over and over allowed to keep running?
This automation is simply noise, and no one is listening to it, not even the people who developed it in the first place.
There is, however, hope!
Your organization can do better. Here's an example of one that does…
This organization ran its automated tests over a year instead of a week, with far more success. Why?
Well, over that year, there were far fewer failures.
A longer window doesn't by itself prove the automation was more successful. But if your tests run and pass consistently month after month, you're going to trust that pass rate far more than a week's worth of mostly failing runs.
Here’s where it gets interesting:
Think about a single feature from a website. Take an Instagram search, for example.
How often does that particular feature stop working and break — according to your experience? Rarely.
If you have an automated test case for this feature, its results should mirror how the feature actually behaves.
This means that the vast majority of the time (99.5%), the test should pass. Failing once in a blue moon is normal, but failing in huge numbers is not.
By the way, if you want to see these tips in code, as well as dozens of others, check out the Complete Selenium WebDriver with Java Bootcamp.
It gets better:
The next step is to make your automation more valuable, and you can do that by monitoring it.
If your automation is not producing a correct result more than 99.5% of the time, you need to stop and fix its reliability. That means at most five false positives in every one thousand executions; this is what quality automation looks like.
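As a sketch, that 99.5% bar reduces to a one-line check. The threshold and counts are the ones from this article; the function name and structure are my own illustration:

```python
RELIABILITY_TARGET = 0.995  # at most 5 false positives per 1,000 executions

def meets_quality_bar(executions: int, false_positives: int) -> bool:
    """Return True if the suite's correct-result rate meets the target."""
    correct = executions - false_positives
    return correct / executions >= RELIABILITY_TARGET

print(meets_quality_bar(1_000, 5))  # True: exactly at the limit
print(meets_quality_bar(1_000, 6))  # False: stop and fix reliability
```

A check like this can run after every build, so the suite flags itself the moment its own reliability slips below the bar.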
Is that impossible?
Not at all. I ran the team that had these execution results below…
Aim for under 5 false positives per day. It's not impossible to achieve, and once you get there, your results will show a much better pass rate.
The red dots on the graph signify failures. Note the long failure-free gap between builds ~1450 and ~1600: roughly 150 builds with zero failures.
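Streaks like that are easy to measure from build results. Here's a small sketch of one way to do it; the build history below is made up purely for illustration:

```python
def longest_green_streak(build_results: list) -> int:
    """Length of the longest run of consecutive passing builds (True = pass)."""
    best = current = 0
    for passed in build_results:
        current = current + 1 if passed else 0
        best = max(best, current)
    return best

# Hypothetical build history: 10 green builds, one failure, 3 more green builds.
history = [True] * 10 + [False] + [True] * 3
print(longest_green_streak(history))  # 10
```

Tracking this number over time is one simple way to see whether reliability is trending up or down.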
Almost every failure in the graphs provided was a bug that was introduced into the system and not a false positive.
False positives are common in UI automation, but not in this case, and that came down to running less automation over a longer period of time with better reliability.
It is very possible to see 99.5% reliability from UI automation and it gets better over time.
I recently came across an excellent post by a Microsoft Engineer talking about how they spent two years moving tests from UI automation to the right level of automation and the drastic improvement in automation stability. Here’s the chart:
You can stop making the silly, small mistakes in your automation that ruin reliability and cause all of these failures and errors.
When you hold your automation to this standard, you expand your testing capacity and gain better, faster feedback while improving quality. You may already know the value of software testing, but testing an application thoroughly before release is hard. Executing less automation over a longer period of time gets you the best results possible.
Less automation is the best automation, so consider your current testing strategy and slow it down. Take your time and avoid those failures!