Two ways to make unit tests reliable

There is a belief that unit tests are not needed: that they hold only half the truth, and that genuine information about the program's behavior will be revealed only when we assemble the components into an integration test.



There is a reason for this belief, but are unit tests really incomplete, and can they be made more reliable? How many sources of incompleteness are there?



Suppose we have two components covered by unit tests, Caller and Callee. Caller calls Callee with an argument and somehow uses the returned object. Each component has its own set of dependencies, which we mock.
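The setup can be sketched as follows; all class names and the Repository dependency are hypothetical, a minimal illustration rather than anyone's actual code:

```java
// Hypothetical Caller/Callee pair; Repository is the dependency we mock.
import java.util.Objects;

interface Repository {                  // Callee's dependency, replaced by a mock in tests
    String load(String key);
}

class Callee {
    private final Repository repo;
    Callee(Repository repo) { this.repo = repo; }
    String resolve(String key) { return repo.load(key); }
}

class Caller {
    private final Callee callee;
    Caller(Callee callee) { this.callee = callee; }
    int length(String key) {
        // Uses the object returned by Callee
        return Objects.requireNonNull(callee.resolve(key)).length();
    }
}

public class Demo {
    public static void main(String[] args) {
        // A hand-rolled mock standing in for the real Repository
        Repository mock = key -> "value-for-" + key;
        Caller caller = new Caller(new Callee(mock));
        System.out.println(caller.length("id")); // prints 12
    }
}
```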



In how many scenarios can these components behave unexpectedly during integration?



The first scenario is a problem external to both components. For example, both of them depend on the state of the database, authorization, environment variables, global variables, cookies, files, etc. Spotting this is fairly simple, because even in very large programs there is usually a limited number of such contention points.
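A minimal sketch of such a contention point, assuming a hypothetical shared mutable global (all names are invented for illustration). Each component passes its unit tests in isolation, yet they can disagree during integration:

```java
// Both components read the same mutable global, a hidden contention point.
final class GlobalConfig {
    static String locale = "en";        // shared mutable state
}

class Formatter {
    String decimalSeparator() {
        return GlobalConfig.locale.equals("en") ? "." : ",";
    }
}

class Parser {
    double parse(String s) {
        // Assumes the separator in effect when the string was rendered;
        // if the locale changed in between, parsing silently breaks.
        boolean en = GlobalConfig.locale.equals("en");
        String dec = en ? "." : ",";
        String grp = en ? "," : ".";
        return Double.parseDouble(s.replace(grp, "").replace(dec, "."));
    }
}

public class SharedStateDemo {
    public static void main(String[] args) {
        Formatter f = new Formatter();
        String rendered = "3" + f.decimalSeparator() + "14"; // "3.14"
        GlobalConfig.locale = "de";     // environment shifts under our feet
        Parser p = new Parser();
        System.out.println(p.parse(rendered)); // no longer 3.14: prints 314.0
    }
}
```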



The problem can be solved, obviously, either through a redesign that reduces the shared dependencies, or by simulating the possible error directly in a higher-level scenario: we introduce a component CallingStrategy(OffendingCaller, OffendedCallee) {} and simulate the Callee crash and its error handling inside CallingStrategy. This requires no integration tests, but it does require the understanding that a specific behavior of one component poses a risk to the other, and that this scenario is worth isolating into a component of its own.
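One possible shape of that CallingStrategy, sketched with hypothetical names: the strategy component owns the error handling, so the Callee-crash scenario becomes a plain unit test with a stubbed Callee:

```java
// Sketch of the CallingStrategy idea: the risky interaction (Callee may
// crash or return null) is isolated into its own component, so the
// failure scenarios can be unit-tested without an integration test.
import java.util.Optional;
import java.util.function.Function;

class CallingStrategy {
    private final Function<String, String> callee;   // stands in for OffendedCallee
    CallingStrategy(Function<String, String> callee) { this.callee = callee; }

    // Error handling lives here, in one testable place.
    Optional<String> call(String arg) {
        try {
            return Optional.ofNullable(callee.apply(arg));
        } catch (RuntimeException crash) {
            return Optional.empty();                  // simulated crash path
        }
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        // Unit test of the failure scenario: a stub Callee that crashes.
        CallingStrategy s = new CallingStrategy(arg -> { throw new IllegalStateException(); });
        System.out.println(s.call("x").isPresent()); // prints false

        // And a stub Callee that returns null.
        CallingStrategy n = new CallingStrategy(arg -> null);
        System.out.println(n.call("x").orElse("fallback")); // prints fallback
    }
}
```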



The second scenario: the problem is in the interface of the integrated objects, i.e. an unnecessary state of one object has leaked into the other.



In fact, this is a flaw in the interface that allows the leak. The solution to the problem is also fairly obvious: typing, narrowing of interfaces, and early validation of parameters.
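A minimal sketch of narrowing plus early validation, with hypothetical names: a small value type validates on construction, so invalid state cannot leak through the interface between the components:

```java
// Instead of passing a raw String around, Caller and Callee share a
// small value type that validates on construction: leaked invalid
// state fails fast at the boundary rather than deep inside Callee.
final class UserId {
    private final String value;
    UserId(String value) {
        if (value == null || value.isBlank())
            throw new IllegalArgumentException("UserId must be non-empty");
        this.value = value;
    }
    String value() { return value; }
}

class NarrowCallee {
    String greet(UserId id) {           // narrow interface: only valid ids arrive
        return "hello, " + id.value();
    }
}

public class ValidationDemo {
    public static void main(String[] args) {
        NarrowCallee callee = new NarrowCallee();
        System.out.println(callee.greet(new UserId("alice"))); // prints hello, alice
        try {
            new UserId("");             // invalid state rejected early
        } catch (IllegalArgumentException e) {
            System.out.println("rejected early");
        }
    }
}
```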



As we can see, both causes are utterly mundane, but it would be nice to state clearly that there are no others.



Thus, if we have checked our classes for 1) internal state and 2) external dependencies, then we have no reason to doubt the reliability of our unit tests.



(Somewhere in the corner a functional programmer is quietly crying "I told you so," but that is not the point right now.)



But we might simply forget or overlook some dependency!



We can estimate this roughly. Suppose each component has ten scenarios, and we miss one scenario out of ten. For example, Callee suddenly returns null, and Caller unexpectedly gets a NullPointerException. For this to slip through, we must err twice, so the probability of hitting such a spot is 1/100. It is hard to imagine that an integration scenario covering just these two elements would catch it. With many sequentially called components inside an integration test, the probability of catching at least one of the missed errors grows, which means that the longer the call stack of the integration test, and the more scenarios it covers, the more justified it is.
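The back-of-the-envelope arithmetic can be written down directly, assuming independent misses and one potential pair bug per adjacent pair of components (a simplification, as the text itself notes):

```java
// Rough estimate: each component misses 1 scenario in 10, and an
// integration bug needs two adjacent components to miss at once,
// i.e. 1/100 per pair. The chance that a stack of n components hides
// at least one such pair bug grows with the length of the stack.
public class EstimateDemo {
    public static void main(String[] args) {
        double missPerComponent = 0.1;
        double pairBug = missPerComponent * missPerComponent;   // 0.01

        for (int n : new int[] {2, 10, 50}) {
            int pairs = n - 1;                                  // adjacent pairs in the stack
            double atLeastOne = 1 - Math.pow(1 - pairBug, pairs);
            System.out.printf("n=%d pairs=%d p(at least one)=%.3f%n", n, pairs, atLeastOne);
        }
    }
}
```

For two components the probability stays at the 1/100 from the text; for a stack of fifty it approaches 40%, which is why long integration tests pay off where short ones do not.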



(The real mathematics of error accumulation is, of course, much more complicated, but the result does not change much.)



While running an integration test, however, we can expect a significant amount of noise from broken dependencies and significant time spent locating a bug; both are also proportional to the length of the stack.



That is, it turns out that integration tests are needed when unit tests are bad or missing: for example, when each unit test checks only the happy-path scenario, when the components use overly wide interfaces, or when shared dependencies go unanalyzed.


