Handling objections: static analysis will take up part of the working time

When talking to people at conferences and in article comments, we encounter the following objection: static analysis reduces the time spent finding errors, but it takes up programmers' time, which negates the benefit of using it and even slows down development. Let's analyze this objection and show that it is groundless.



Taken out of context, the statement "static analysis will take up part of the working time" is true. Regularly reviewing the analyzer warnings issued for new or changed code really does take time. However, the thought should be continued: the time spent on this is much less than the time spent finding the same errors by other methods. Learning about bugs from clients is even worse.



Unit tests are a very good analogy here. They also take up developers' time, but that is not a reason to abandon them. The benefit of writing better, more reliable code with unit tests outweighs the cost of writing the tests themselves.



Another analogy: compiler warnings. This is a closely related topic, since static analyzer warnings can be viewed, to a first approximation, as an extension of compiler warnings. Naturally, when a programmer sees a compiler warning, he spends time on it: he either changes the code or explicitly suppresses the warning, for example with a #pragma directive. Yet this time is not a reason to disable compiler warnings altogether, and anyone who did so would be unambiguously regarded by colleagues as professionally incompetent.



So where does the fear of having to spend time on static analyzer warnings come from?



The explanation is simple. Programmers who are new to this methodology confuse trial runs with regular use. On the first runs, any analyzer produces a huge list of warnings that is frightening even to look at, because the analyzer has not yet been configured. A tuned analyzer produces few false positives during regular runs; in other words, most warnings reveal real defects or code smells. The key is simply to perform this tuning. That is the whole trick that turns a static analyzer from a time-consuming evil into a friend and helper.



At first, any static analyzer produces many false positives. There are many reasons for this, and the topic deserves a separate article. Naturally, both we and the developers of other analyzers fight false positives. Still, there will be many of them if you run the analyzer on some project without preparation. The picture is the same with compiler warnings, by the way. Suppose you have a large project that you have always built with, say, the Visual C++ compiler. Suppose the project miraculously turned out to be portable and compiled with GCC. Even so, you would get a mountain of warnings from GCC. Anyone who has lived through a compiler change on a large project knows what this is about.















However, no one forces you to keep digging through these mountains of warnings after changing the compiler or first running the analyzer. The natural next step is to configure the compiler or analyzer. Those who say "analyzing warnings is time-consuming" judge the cost of adopting the tool by thinking only of the initial pile of warnings that must be dealt with, not of calm, regular use afterwards.



Setting up an analyzer, like a compiler, is not as complicated and labor-intensive as programmers like to claim. If you are a manager, do not listen to them: they are simply being lazy. A programmer can proudly tell how he spent three days hunting down a bug reported by a tester or a client, and that is normal for him. Yet from his point of view it is unacceptable to spend one day tuning a tool, after which a similar error would be caught before it even reaches the version control system.



Yes, false positives will remain after tuning, but their number is exaggerated. It is quite possible to configure the analyzer so that the false-positive rate is 10-15%. In other words, for every 9 real defects found, only 1 warning has to be suppressed as false. So where is the "waste of time" here? At the same time, 15% is a very realistic figure; you can read more about it in this article.



One more objection remains. The programmer may say:



Well, suppose regular runs of static analysis really are effective. But what about the initial noise? On our large project, we will not manage to configure the tool in the promised one day. A single rebuild to check the next batch of settings takes several hours. We are not ready to spend a couple of weeks on all this.



This is not a real problem, but an attempt to find a reason not to adopt something new. Of course, nothing is easy in a large project. But, firstly, we provide support and help integrate PVS-Studio into the development process. And secondly, it is not at all necessary to start by sorting out every warning.



Since your application works, the errors it contains are not that critical and most likely live in rarely executed code. Serious, obvious errors have already been found and fixed by slower and more expensive methods (more on this in the note below). What matters now is something else: there is no point in making mass edits to the code to fix many minor errors. With that much refactoring it is easy to break something, and the harm will outweigh the good.



It is better to treat the existing warnings as technical debt. You can return to this debt later and work through the old warnings gradually. By using the mass warning suppression mechanism, you can quickly start using PVS-Studio even in a large project. In brief, it goes like this:



  1. You exclude clearly irrelevant directories (third-party libraries) from the analysis. It is best to do this at the very start anyway, to reduce analysis time.
  2. You try PVS-Studio and study the most interesting warnings. You like the results, you show the tool to colleagues and management, and the team decides to adopt it for regular use.
  3. The project is checked, and all the warnings found are turned off using the mass suppression mechanism. In other words, all currently existing warnings are declared technical debt that can be dealt with later.
  4. The resulting file with the suppressed warnings is committed to the version control system. The file is large, but that is not a problem: you perform this operation once (or at least extremely rarely), and after that every developer has the file.
  5. From now on, developers see only the warnings that relate to new or changed code, and the team begins to benefit from static analysis. The analyzer is tuned gradually, and the technical debt is paid off over time.












By the way, the mechanism for storing uninteresting warnings is fairly smart. Hashes are stored for the line containing the potential error, as well as for the previous and next lines. Thanks to this, if a line is added at the beginning of a file, nothing "drifts": the analyzer still stays silent about the code declared to be technical debt.



I hope we have managed to dispel one of the prejudices about static analysis. Come download and try our PVS-Studio static code analyzer. It will detect many errors at early stages and make your code more reliable and of higher quality overall.



Note



In any project's development, new errors constantly appear and get fixed. Undetected errors "settle" in the code for a long time, and many of them can then be found by running a static code analyzer. This sometimes creates the false impression that static analyzers find only uninteresting errors in rarely used sections of code. That is indeed the case if you use the analyzer incorrectly and run it only occasionally, for example shortly before a release. More on this topic here. Yes, we ourselves perform one-time checks of open-source projects when writing articles, but our goal there is different: we want to demonstrate the analyzer's ability to detect defects. That task has little to do with improving the overall quality of a project's code or reducing the cost of fixing errors.



Additional links:



  1. PVS-Studio ROI.
  2. Static analysis will improve the code base of complex C++ projects.
  3. Heisenbug 2019. Talk by Ivan Ponomarev: "Continuous Static Code Analysis".
  4. Ivan Ponomarev. Embed static analysis into the process instead of using it to look for bugs.










If you want to share this article with an English-speaking audience, please use the translation link: Andrey Karpov. Handling Objections: Static Analysis Will Take up Part of Working Time.


