Do I need test automation? When is it needed? What value does it bring?
This article discusses when and why testing as such is needed, and in which cases it is worth automating.
During a discussion of this question held at the Francis Bacon club ("KifB Web-Meetings" in Telegram), colleagues exchanged experience and wrote down their thoughts.
Test automation is needed if it brings value. When does testing itself bring value? Two cases have been identified.
When the process catches errors in the software before it is deployed to production
If testing a new version revealed errors that were subsequently fixed, then that testing was not in vain.
But what if the situation is reversed: testing found no errors, yet they showed up in production, errors that could have been caught on a test bench (including one configured as closely as possible to production)?
We would argue that in this case the testing was done poorly.
How do we measure the quality of testing? A suitable metric for this case is: quality = (number of errors found in the test environment) / (number of errors found in the test environment + number of errors found in production). Each error is weighted by its criticality level.
But what if testing found no errors and none were found in production either?
We would argue that in this case the testing as such brought no value and the work was done in vain (with the exception of one case, discussed later). From the Lean point of view, this is waste.
When is this possible? When the module under test did not change. What can change a module?
- Modification of the module code.
- Changing the version of the libraries used (including the OS, database, etc.). Such a change may be driven by regulatory requirements that force the library to be updated within a fixed time frame.
- Changing settings or data that seriously affect the behavior of generic functionality whose exhaustive testing is prohibitively difficult and was therefore abandoned. Examples:
  - Promotion settings in solutions like Siebel CRM or Oracle ATG can degrade the performance of the promotion-calculation functionality, and exhaustive verification is impossible in a reasonable time because of the excessive flexibility and versatility of these solutions.
  - The HTML description of a product card may contain a broken document structure or errors in the embedded JS code, which breaks the product card page.
- The functionality is used for a purpose it was not designed for (hammering nails with a microscope): for example, a type of load not anticipated by the requirements and therefore not covered during testing. Before Black Friday it is worth running separate load testing of the landing page the traffic will hit, if no such load requirements existed for this page type.
- A change in the behavior of the APIs of other modules that the module under development uses, especially if that API functionality is not covered by regression testing.
Since the changes can be of different kinds, and a full regression run costs money, it is not worth running all the tests every time. One way to manage this is to tag the tests; before a test run, the test manager determines which set of tests should be executed for a given batch of changes.
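One minimal way to implement tag-based selection is sketched below; the test names, tags, and the `Cart` stand-in are invented for illustration. Real frameworks provide the same mechanism out of the box, for example pytest markers run with `pytest -m checkout`.

```python
# Minimal tag-based test selection (homemade sketch, not a real framework).
REGISTRY = []

def tagged(*tags):
    """Decorator that registers a test function together with its tags."""
    def wrap(fn):
        REGISTRY.append((set(tags), fn))
        return fn
    return wrap

class Cart:  # minimal stand-in for the module under test
    def __init__(self):
        self.items = []
    def add(self, sku):
        self.items.append(sku)

@tagged("smoke")
def test_cart_starts_empty():
    assert Cart().items == []

@tagged("checkout", "regression")
def test_add_to_cart():
    cart = Cart()
    cart.add("sku-1")
    assert cart.items == ["sku-1"]

def run_selected(wanted):
    """Run only the tests whose tags intersect the requested set."""
    ran = []
    for tags, fn in REGISTRY:
        if tags & set(wanted):
            fn()
            ran.append(fn.__name__)
    return ran

print(run_selected(["checkout"]))  # ['test_add_to_cart']
```

The test manager's decision then reduces to choosing the tag set for a given batch of changes, e.g. `run_selected(["smoke", "checkout"])`.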
When should autotests be written in this case?
To begin with, automated testing does not eliminate test planning, test design, and the writing of test cases, and it does not replace them. If that groundwork is missing, automation should not be attempted at all. An autotest, moreover, means not only the script itself, but also the preparation for its runs and the use of its results.
If you write autotests after the code is created, this will increase time-to-market (which automatically increases the amount of tied-up capital). Therefore, if it is decided to cover the code with autotests, write them in parallel with the main code, in the "Test First" or "TDD" development paradigm.
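A minimal Test First cycle might look like this; the discount function and its 50% cap are hypothetical examples, not something from the article:

```python
# Step 1: write the test first. At this point it fails,
# because apply_discount does not exist yet.
def test_discount_is_capped():
    assert apply_discount(10_000, 30) == 7_000   # 30% off 100.00
    assert apply_discount(10_000, 90) == 5_000   # capped at 50%

# Step 2: write the minimal code that makes the test pass.
def apply_discount(price_cents, rate_percent, cap_percent=50):
    """Apply a discount in whole percent, capped (hypothetical rule)."""
    rate = min(rate_percent, cap_percent)
    return price_cents * (100 - rate) // 100

test_discount_is_capped()  # now green
```

The point is the order: the test exists before the code, so the autotest never adds a separate phase after development.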
The main value created by test automation is the reduction of time-to-market thanks to the faster release of new versions.
Tests are needed to guarantee that critical processes keep working.
Even though the car has never caught fire, the fire extinguisher in it is not useless.
An error on a high-traffic site that prevents customers from adding goods to the cart can cause greater losses than a year's cost of developing and running tests for that function.
Therefore, it is necessary to identify the critical processes that will go into regular checks (worth running whenever any kind of change occurs). Compare:
- the loss from downtime of the function, from the moment of detection to its fix,
- the cost of regular manual testing, or of automating it and then running it, over the payback period.
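With hypothetical numbers, the comparison might look like this; every figure below is invented purely for illustration:

```python
# All figures are hypothetical, purely for illustration.
loss_per_hour = 50_000        # revenue lost while the function is down
hours_to_detect_and_fix = 4   # mean downtime per incident
incidents_per_year = 3
expected_yearly_loss = loss_per_hour * hours_to_detect_and_fix * incidents_per_year

automation_cost = 300_000     # writing the autotests once
upkeep_per_year = 100_000     # maintaining and running them
automation_total = automation_cost + upkeep_per_year

# Automation pays off if the loss it prevents exceeds its cost.
print(expected_yearly_loss, automation_total, expected_yearly_loss > automation_total)
# 600000 400000 True
```

With these numbers automation pays for itself within a year; with a rarely-used function or a cheap-to-test one, the inequality can easily flip.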
But what if regular testing does not find errors for a long time, and they do not occur in production either? Then the testing brings no value, and therefore it is not necessary. When is this possible?
- The module under development is not very large.
- Stable, highly competent team.
- Integrations are covered by tests, or there are no changes on the other side.
Is it possible not to do testing at all?
This is possible by running several installations of the solution and trying out new beta versions on volunteer guinea-pig users, if this is technically possible and such volunteers can be found. After the new version is rolled out, telemetry is monitored and a rollback is performed if the indicators degrade. (Recall that production telemetry must exist independently of whether testing is available.)
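The telemetry gate can be sketched as follows; the metric names and thresholds are hypothetical, and the actual fetching of metrics and the rollback itself are left out:

```python
# Post-deploy telemetry gate: compare current indicators to the
# pre-release baseline and decide whether to roll back.
def should_roll_back(baseline, current,
                     max_error_growth=2.0, max_latency_growth=1.5):
    """True if telemetry degraded beyond the allowed thresholds."""
    if current["error_rate"] > baseline["error_rate"] * max_error_growth:
        return True
    if current["p95_latency_ms"] > baseline["p95_latency_ms"] * max_latency_growth:
        return True
    return False

baseline = {"error_rate": 0.01, "p95_latency_ms": 200}
degraded = {"error_rate": 0.05, "p95_latency_ms": 210}
print(should_roll_back(baseline, degraded))  # error rate grew 5x -> True
```

In a real pipeline this check would run on a schedule after each rollout and trigger the rollback automatically.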
Another useful case for regression auto-testing is API testing (API contract testing), when the API must support a critical process. This is especially important if the developers of the other module change something and do not properly test the changes on their side.
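A minimal sketch of such a contract check is below; the field names and types of the imagined cart API are invented for illustration:

```python
# Contract: the fields and types the consumer depends on.
# The real response would come from the provider's test instance.
EXPECTED_CONTRACT = {"id": str, "items": list, "total_cents": int}

def check_contract(response_json, contract=EXPECTED_CONTRACT):
    """Fail if the provider dropped a field or changed its type."""
    for field, ftype in contract.items():
        assert field in response_json, f"missing field: {field}"
        assert isinstance(response_json[field], ftype), f"wrong type: {field}"

check_contract({"id": "c-1", "items": [], "total_cents": 0})  # passes
```

Run against the provider's current build, this catches breaking API changes even when the provider's own testing missed them.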
When test automation is not needed
If you have inherited a large amount of not-very-high-quality legacy code. Covering such chaos with autotests only increases the chaos.
In this case, it is worth carving a logical module out of the solution. After isolating the layer through which this module interacts with the rest of the code, cover that interaction with autotests. This will guarantee that the module's behavior stays intact after it is rewritten.
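One common way to get that guarantee is characterization tests: before refactoring, record what the boundary does today, whatever that is. `legacy_price` below is an invented stand-in for the inherited code:

```python
def legacy_price(qty, unit_cents):
    # Stand-in for inherited code whose exact rules nobody remembers.
    total = qty * unit_cents
    if qty >= 10:
        total -= total // 10   # undocumented bulk discount
    return total

# Characterization tests: pin down what the code does TODAY, so the
# rewritten module can be verified against exactly the same outputs.
assert legacy_price(1, 500) == 500
assert legacy_price(9, 500) == 4_500
assert legacy_price(10, 500) == 4_500   # discount kicks in at 10
```

These tests say nothing about whether the behavior is correct; they only guarantee the rewrite did not change it.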
Autotests do not replace manual testing. Manual testing is most often faster and cheaper than writing and then maintaining autotests. In particular, if testing is done after the code is developed, only the part of it that will go into the regular testing of critical functionality should be automated.
And finally, a checklist of the company's readiness for autotests
Let's make a reservation right away: this checklist is not for everyone; it is written for testers at companies for which software development is the main source of income.
Checklist
- The company has mapped out its main business delivery process, and there is an understanding of where you fit in it.
- The company has mapped out the process of delivering value to customers.
- Task management is in place, meaning that everyone involved moves tasks to the required status. And tasks are classified by type.
- The company has formulated testing goals.
- The titles of the tasks in the tracker are tidied up; in other words, the title tells you what kind of task it is.
- The task registry is manageable: at any moment it reflects the current status and policy of the project/product.
- There is a registry of requirements and it is manageable.
- There is requirements-to-tasks traceability.
- There is requirements-to-tests traceability: it is known which requirements are covered by which tests.
- There is tasks-to-tests traceability: it is known what has already been tested, where, and how.
- There is a matrix of the influence of the components on each other.
- Tasks are traced to components.
- Everything is under version control.
- There is a versioning and branching policy: who, how, and why. There is an understanding of why git flow is a bad model.
- Standards are in place: coding standards and others.
- There is CI.
- There is a release policy that, in particular, prescribes the versioning methods and everything else that is needed.
- There is an artifact repository from which a product ready for installation can be unambiguously taken.
- There is a policy for marking artifacts according to different criteria. Static code analysis is not forgotten.
- The product's deployment environment spins up at the snap of a finger. The environment is also under version control.
- The environment is checked for correctness fully automatically: ports, Java version, and so on.
- The product deploys at a click, with verification.
- The product is configured fully automatically for the required task. By the way, this also applies to business configurations. And these are checked in to version control as well.
- You have a way to generate all the necessary test data repeatedly and automatically. The generation scripts are also under version control and are tied to product artifacts.
- All of the above works for any version of the product.
- You have a delivery pipeline registered inside the release policy.
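As an illustration of the environment-correctness item above, a minimal sketch; the expected port and Java version are hypothetical:

```python
# Automated environment checks: is the port reachable, is the Java
# version right? Expected values (8080, Java 17) are hypothetical.
import re
import socket

def port_is_open(host, port, timeout=1.0):
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def parse_java_major(version_output):
    """Extract the major version from `java -version` output."""
    m = re.search(r'version "(\d+)', version_output)
    return int(m.group(1)) if m else None

# In CI these would run against the real environment, e.g.:
#   out = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
#   assert parse_java_major(out) == 17
#   assert port_is_open("app-host", 8080)
print(parse_java_major('openjdk version "17.0.2" 2022-01-18'))  # 17
```

The same idea extends to any item on the list: each manual "is the environment right?" question becomes a script under version control.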
Finally, thanks to the group members for the discussion and help in preparing the article.