When tests and autotests are needed: a view from the supersystem

Do you need test automation? When is it needed? What value does it bring?



This article discusses when and why testing as such is needed, and in which cases it is worth automating.



During a discussion of this question held at the Francis Bacon Club ("KifB Web-Meetings" in Telegram), colleagues exchanged experiences and wrote down their thoughts.



Test automation is needed if it brings value. So when does testing itself bring value? Two cases were identified.



If the process catches errors in the software before the release to production



If testing a new version revealed errors that were then fixed, that testing was not in vain.



But what if the situation is reversed? Testing found no errors, yet they showed up in production? That is, the errors went undetected on the test bench (including a test bench configured as closely as possible to production)?



In this case, it can be argued that the testing was done poorly.



How can the quality of testing be measured? A suitable metric for this case is: (number of errors found in the test environment) / (number of errors found in the test and production environments combined). Here the error counts are weighted by their criticality level.



But what if testing found no errors, and none showed up on the production server either?

It can be argued that in this case the testing as such brought no value, and the work was done in vain (with the exception of one case, which we will discuss later). From the Lean point of view, this is waste.



When is this possible? When the module under test has not changed. And what can change a module?





Since changes can be of different kinds, and a full regression run costs money, it is not worth running all the tests every time. One way to manage this is to tag the tests; before each testing round, the test manager determines which set of tests should be run for a given batch of changes.
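A minimal sketch of tag-based test selection (the test names and tags are invented for illustration): each test carries a set of tags, and only tests whose tags intersect the changed areas are selected for the run:

```python
# Tag-based test selection: a change description names the areas it
# touches; only tests tagged with at least one of those areas run.
TESTS = {
    "test_checkout_happy_path": {"checkout", "smoke"},
    "test_payment_retry":       {"payment"},
    "test_search_filters":      {"search"},
}

def select_tests(changed_areas):
    return sorted(
        name for name, tags in TESTS.items()
        if tags & changed_areas
    )

print(select_tests({"checkout", "payment"}))
# -> ['test_checkout_happy_path', 'test_payment_retry']
```

In a real suite the same idea is usually expressed with the test runner's own markers, e.g. pytest's `@pytest.mark.<tag>` decorators and `pytest -m "checkout or payment"`.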



So when should autotests be written in this setup?



To begin with, test automation does not eliminate test planning, test design, or the writing of test cases! Nor does it replace them. If that groundwork is not in place, automation should not be attempted at all. Also, "autotests" should be understood to mean not only the scripts themselves, but also the preparation for their runs and the use of their results.



If you write autotests after the code is created, this will increase time2market (which will automatically increase tied-up capital). Therefore, if it has been decided to cover the code with autotests, those autotests should be written in parallel with the main code, in the "test-first" or TDD development paradigm.
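A minimal test-first sketch (the cart function and its spec are invented for illustration): the test is written before the code it exercises, so the suite grows with the code instead of being bolted on afterwards:

```python
# Test-first: the assertion below is written before the
# implementation exists, capturing the spec as an executable test.

def test_cart_total():
    # The spec, captured as a test, before cart_total was written.
    assert cart_total([("apple", 2, 30), ("pen", 1, 40)]) == 100

def cart_total(items):
    # Implementation written afterwards, to make the test pass.
    return sum(qty * price for _, qty, price in items)

test_cart_total()  # passes once the implementation is in place
```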



The main value created by test automation is a reduction in time2market through faster release of new versions.



Tests are needed to guarantee that critical processes keep working.



Even if the car has never caught fire, having a fire extinguisher in it is not useless.



On a high-traffic site, an error that prevents adding goods to the cart can cause losses greater than a year's cost of developing and running tests for that function.



Therefore, the critical processes should be identified and included in regular checks (which are worth running whenever some change occurs). Compare:





But what if regular testing finds no errors for a long time, and none occur in production either? Then the testing brings no value, and is therefore unnecessary. When is this possible?





Is it possible not to do testing at all?



This is possible by running several installations of the solution and trying out new beta versions on volunteer guinea-pig users, if this is technically feasible and such volunteers can be found. After the new version is rolled out, telemetry is monitored, and a rollback is performed if the metrics degrade. (Recall that production telemetry must exist independently of whether testing is done.)
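A minimal sketch of the telemetry-driven rollback decision described above (the metric names, baseline values, and tolerances are assumptions for illustration):

```python
# After deploying a new version, compare key telemetry against the
# baseline; if any metric degrades beyond its tolerance, roll back.
BASELINE = {"error_rate": 0.01, "p95_latency_ms": 250}
TOLERANCE = {"error_rate": 2.0, "p95_latency_ms": 1.5}  # allowed growth factor

def should_rollback(current):
    return any(
        current[m] > BASELINE[m] * TOLERANCE[m]
        for m in BASELINE
    )

print(should_rollback({"error_rate": 0.05, "p95_latency_ms": 240}))   # True
print(should_rollback({"error_rate": 0.015, "p95_latency_ms": 300}))  # False
```

The point is that the decision is mechanical and fast: the guarantee comes from telemetry plus a cheap rollback, not from pre-release testing.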



Another useful case for regression auto-testing is API testing (API contract testing), when the API is required to support a critical process. This is especially important if the developers of another module change something and do not properly test the changes on their side.
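A minimal contract-test sketch (the endpoint, fields, and function names are hypothetical): the consumer pins down the shape of the response it relies on, so a provider-side change fails the test instead of breaking production:

```python
# Contract test: verify that a response still has the fields and
# types the consumer depends on. The response here is stubbed;
# a real test would call the provider's API.
CONTRACT = {"id": int, "name": str, "in_stock": bool}

def check_contract(payload, contract=CONTRACT):
    missing = [k for k in contract if k not in payload]
    wrong = [k for k, t in contract.items()
             if k in payload and not isinstance(payload[k], t)]
    return missing, wrong

# Stubbed provider response, as if from GET /items/42
response = {"id": 42, "name": "kettle", "in_stock": True}
print(check_contract(response))  # ([], []) -> contract holds
```

Consumer-driven contract tools such as Pact formalize the same idea; the sketch above only shows the principle.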



When test automation is not needed



If you have inherited a large amount of low-quality legacy code. Covering such chaos with autotests only multiplies the chaos.



In this case, it is worth extracting a logical module from the solution. Having identified the layer through which this module interacts with the rest of the code, cover that interaction with autotests. This guarantees that the module's behavior stays intact after it is refactored.
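The idea above can be sketched as a characterization test at the module's boundary (the function and the recorded cases are hypothetical): record what the legacy code currently returns, and assert that it still returns the same after refactoring:

```python
# Characterization test around a legacy module's boundary: pin the
# current observable behavior so the code inside can be refactored
# safely. legacy_price() stands in for the inherited code.
def legacy_price(qty, unit_price, vip):
    # Inherited logic we do not fully understand yet.
    total = qty * unit_price
    if vip and total > 100:
        total *= 0.9
    return round(total, 2)

# Recorded input/output pairs: the "contract" of the boundary layer.
CHARACTERIZATION = [
    ((3, 40, True), 108.0),   # VIP discount kicks in above 100
    ((3, 40, False), 120),
    ((1, 50, True), 50),      # below threshold: no discount
]

for args, expected in CHARACTERIZATION:
    assert legacy_price(*args) == expected
print("module boundary behavior unchanged")
```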



Autotests do not replace manual testing. Manual testing is often faster and cheaper than writing and then maintaining autotests. In particular, if testing is carried out after the code is developed, only the portion of it that will become part of the regular testing of critical functionality should be automated.



And finally - a checklist of a company's readiness for autotests



Let us note right away that this checklist is not for everyone; it is written for testers at companies for which software development is the main source of income.



Checklist



  1. The company has mapped its main business process, and you understand where you fit into it.
  2. The company has mapped its process of delivering value to customers.
  3. Task management is set up, meaning everyone involved moves tasks to the proper status. And tasks are categorized by type.
  4. The company has formulated its testing goals.
  5. The task titles in the tracker are tidied up; in other words, you can tell from the title what a task is about.
  6. The task registry is maintained: at any moment it reflects the current status and policy of the project/product.
  7. There is a requirements registry, and it is maintained.
  8. Requirements are traceable to tasks.
  9. Requirements are traceable to tests: it is known which requirements are covered by which tests.
  10. Tests are traceable to tasks: it is known what has already been tested, where, and how.
  11. There is a matrix of how the components affect each other.
  12. Tasks are traced to components.
  13. Everything is under version control.
  14. There is a versioning policy covering who, how, and why. There is an understanding of why git flow is a bad model.
  15. Standards exist: coding standards and others.
  16. There is CI.
  17. There is a release policy that specifies, in particular, the versioning scheme and everything else that is needed.
  18. There is an artifact repository from which a product ready for installation can be taken unambiguously.
  19. There is a policy for tagging artifacts by various criteria. Static code analysis is not forgotten.
  20. The product's deployment environment spins up at the snap of a finger. The environment is also under version control.
  21. The environment is checked for correctness fully automatically: ports, Java version, and so on.
  22. The product deploys at a click, with verification.
  23. The product is configured fully automatically for the task at hand. By the way, this also applies to business configurations. And these are also checked into version control automatically.
  24. You have a way to generate all the necessary test data repeatedly and automatically. The generation scripts are also under version control and are tied to product artifacts.
  25. All of the above works for any version of the product.
  26. You have a delivery pipeline described in the release policy.


Finally, thanks to the group members for the discussion and for their help in preparing the article.


