I bet that after reading to the end, you will understand how the GC works and why it works exactly the way it does.

I will say right away: I never expect a detailed answer to this question at a job interview. That would be silly and, in my case, self-serving. However, in my opinion, besides general curiosity about the platform, it is very useful to know how it works, because it removes a whole class of questions. For example, it rules out the case where a developer believes that Dispose is called automatically and there is no need to call it yourself. Or, if the developer is more experienced, it helps them automatically, at the level of muscle memory, write code that leads to the fewest problems.
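As an illustration of that first misconception, here is a minimal C# sketch (the SomeResource class is hypothetical): the GC never calls Dispose for you; at most it eventually runs the finalizer, so deterministic cleanup requires calling Dispose explicitly or wrapping the object in a using block.

```csharp
using System;

// Hypothetical resource type used only for illustration.
class SomeResource : IDisposable
{
    public void Dispose()
    {
        // Deterministic cleanup: runs only when we call it (directly or via 'using').
        Console.WriteLine("Dispose called");
        GC.SuppressFinalize(this);
    }

    ~SomeResource()
    {
        // The finalizer is the only method the GC may run for us,
        // and only at some non-deterministic point in the future.
        Console.WriteLine("Finalizer called");
    }
}

class Program
{
    static void Main()
    {
        // Correct: Dispose is guaranteed to run at the end of the block.
        using (var resource = new SomeResource())
        {
            // work with the resource
        }

        // Incorrect expectation: nobody calls Dispose here automatically;
        // at best the finalizer runs later, whenever the GC decides to collect.
        var leaked = new SomeResource();
    }
}
```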







Another thing that I subjectively dislike is how its work is usually explained. Therefore, I propose an alternative approach, described in my book, .NET Platform Architecture.







If we were to figure out thoroughly why exactly these two memory management algorithms, Sweep and Compact, were chosen, we would have to examine the dozens of memory management algorithms that exist in the world, starting with ordinary dictionaries and ending with very complex lock-free structures. Instead, setting those aside, we will simply justify the choice and thereby understand why it was made the way it was. We no longer look at the booster rocket's promotional booklet: we have the complete set of documentation in our hands.







The bet is mutually beneficial: if something is unclear, I will correct the unclear points in the book, of which this text is a small part.













I chose this format of reasoning so that you can feel like one of the platform's architects and come to the same conclusions that the real architects reached at Microsoft headquarters in Redmond.

If we classify allocated objects by their size, we can divide the memory allocation space into two large sections: a place for objects below a certain size threshold and a place for objects above it, and then see what difference this makes in managing these two groups and what comes of it.
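You can observe this size-based split on the platform itself. A small sketch, assuming the documented threshold of roughly 85,000 bytes for the large object heap: arrays at or above that size land in the large object heap, which GC.GetGeneration reports as generation 2, while small objects start out in generation 0.

```csharp
using System;

class Program
{
    static void Main()
    {
        // A small object: allocated in the regular (small object) heap, generation 0.
        var small = new byte[1_000];

        // A large object: at roughly 85,000 bytes and above it goes to the large object heap,
        // which GC.GetGeneration reports as the oldest generation.
        var large = new byte[100_000];

        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // 2 (large object heap)
    }
}
```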







If we consider the management of conditionally "small" objects, we can see that if we stick to the idea of storing information about each object, maintaining the memory management data structures that hold references to every such object will be very expensive. In the end, it may turn out that storing information about one object requires as much memory as the object itself occupies. Instead, we should ask: if during garbage collection we start from the roots and walk the object graph through the outgoing fields of each object, and a linear pass over the heap is needed only to identify garbage objects, do the memory management algorithms really need to store information about each object? The answer is obvious: no, they do not. So we can try to proceed from the premise that we should not store such information: we can traverse the heap linearly, knowing the size of each object and advancing the pointer each time by the size of the current object.
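To make the idea tangible, here is a purely illustrative sketch (the HeapObject record and the list standing in for the heap are made up, not CLR internals): identifying garbage needs only the reachability information produced by the walk from the roots plus a linear pass that advances by each object's size, with no side table describing every object.

```csharp
using System;
using System.Collections.Generic;

// Illustrative model only: not how the CLR actually lays out memory.
record HeapObject(int Size, bool ReachableFromRoots);

class Program
{
    static void Main()
    {
        // A "heap" laid out object after object, with no side table of pointers to them.
        var heap = new List<HeapObject>
        {
            new(Size: 24, ReachableFromRoots: true),
            new(Size: 80, ReachableFromRoots: false), // garbage
            new(Size: 32, ReachableFromRoots: true),
        };

        // Linear pass: we only need each object's size to step to the next one.
        int offset = 0;
        foreach (var obj in heap)
        {
            if (!obj.ReachableFromRoots)
                Console.WriteLine($"garbage of {obj.Size} bytes at offset {offset}");
            offset += obj.Size; // advance the pointer by the size of the current object
        }
    }
}
```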







There are no additional data structures on the heap that hold pointers to each object that the heap manages.

However, when we no longer need memory, we must free it. And when freeing memory, it becomes difficult for us to rely on a linear pass over the heap: it is long and inefficient. As a result, we come to the conclusion that we need to store information about free memory regions in some way.







The heap contains lists of free memory regions.
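A minimal sketch of what such a list of free regions might look like (FreeBlock is an illustrative type, not a CLR structure): each entry needs only a starting address and a size, which is exactly the "pointer + size" cost discussed below.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: an entry in a free list is just "where" and "how much".
record FreeBlock(long Start, long Size);

class Program
{
    static void Main()
    {
        // Regions freed in a chaotic order end up as separate entries.
        var freeList = new List<FreeBlock>
        {
            new(Start: 0x0040, Size: 64),
            new(Start: 0x0200, Size: 24),
            new(Start: 0x0400, Size: 256),
        };

        foreach (var block in freeList)
            Console.WriteLine($"free: {block.Size} bytes at 0x{block.Start:X}");
    }
}
```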

If, as we decided, we store information about free areas, and when memory is freed those areas turn out to be too small, then first of all we run into the same problem of storing information about free areas that we encountered when considering occupied ones (if a single object squeezed between occupied ones is freed, describing it requires, in the worst case, 2/3 of its size: a pointer plus a size, versus SyncBlockIndex + VMT + some field in the case of the object). This again sounds wasteful, you must admit: it is not often that we are lucky enough to free a whole group of objects that follow one another. Usually they are freed in a chaotic manner. But unlike occupied areas, which we do not need to search for linearly, we do need to search for free areas, because when allocating memory we may need them again. Therefore, a completely natural desire arises to reduce fragmentation and compact the heap, moving all occupied areas into the free ones and thereby forming one large free area where memory can be allocated.







This is where the idea of the Compacting algorithm comes from.

But wait, you will say. This operation can be very expensive. Imagine that you freed an object at the very beginning of the heap. Do you really have to move everything?? Well, of course, one can dream about CPU vector instructions that could be used to copy the huge block of occupied memory. But that is only the beginning of the work. We would also have to fix all the pointers in object fields that refer to the objects that were moved. This operation can take a wildly long time. No, we must proceed from something else. For example, by dividing the entire heap memory segment into sectors and working with each of them separately. If we work separately within each sector (preferably of a fixed size, for predictability and for scaling that predictability), the idea of compaction no longer seems so heavy: it is enough to compact a single sector, and then we can even begin to reason about the time required to compact one such sector.
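Here is a toy sketch of compacting a single sector (an illustrative model, not the real collector): live objects slide toward the start of the sector, an old-to-new offset map is recorded, and every stored reference would then be patched through that map.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy model of a single sector: objects with an id, a size and a "live" flag.
record Obj(int Id, int Size, bool Live);

class Program
{
    static void Main()
    {
        var sector = new List<Obj>
        {
            new(Id: 1, Size: 32, Live: true),
            new(Id: 2, Size: 64, Live: false), // garbage
            new(Id: 3, Size: 16, Live: true),
        };

        // Current ("old") offset of every object inside the sector.
        var oldOffsets = new Dictionary<int, int>();
        int offset = 0;
        foreach (var o in sector) { oldOffsets[o.Id] = offset; offset += o.Size; }

        // Compaction: slide live objects toward the start of the sector,
        // remembering the old -> new offset of everything that moved.
        var newOffsets = new Dictionary<int, int>();
        int compacted = 0;
        foreach (var o in sector.Where(o => o.Live))
        {
            newOffsets[o.Id] = compacted; // the real GC would also copy the object's bytes here
            compacted += o.Size;
        }

        // Fix-up: any field that referred to an old offset must be rewritten via this map.
        foreach (var o in sector.Where(o => o.Live))
            Console.WriteLine($"object {o.Id}: offset {oldOffsets[o.Id]} -> {newOffsets[o.Id]}");

        Console.WriteLine($"free space now starts at offset {compacted}");
    }
}
```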







Now it remains to understand on what basis to divide memory into sectors. For this we need to turn to the second classification introduced on the platform: the division of memory based on the lifetime of its individual elements.







The division is simple: if we take into account that memory is allocated at increasing addresses, then the objects allocated first become the oldest, and those at the higher addresses are the youngest. Thinking further, one can come to the conclusion that objects in applications fall into two groups: those created to live a long time and those created to live very briefly, for example to temporarily hold pointers to other objects as a collection, or the usual DTO objects. Accordingly, by compacting the heap from time to time, we end up with a number of long-lived objects at the lower addresses and a number of short-lived objects at the higher ones.







Thus we have obtained generations.

By dividing memory into generations, we get the opportunity to look less often at the objects of the older generation, which keep growing in number.
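This difference in frequency is easy to observe with the public API; a small sketch (exact numbers depend on the workload and the GC mode): GC.CollectionCount per generation shows generation 0 being collected far more often than generation 2.

```csharp
using System;

class Program
{
    static void Main()
    {
        long checksum = 0;

        // Allocate a lot of short-lived garbage to provoke generation 0 collections.
        for (int i = 0; i < 1_000_000; i++)
        {
            var temp = new byte[128];
            temp[0] = (byte)i;
            checksum += temp[0];
        }

        Console.WriteLine($"checksum: {checksum}");

        // The younger the generation, the more often it is collected.
        Console.WriteLine($"gen 0 collections: {GC.CollectionCount(0)}");
        Console.WriteLine($"gen 1 collections: {GC.CollectionCount(1)}");
        Console.WriteLine($"gen 2 collections: {GC.CollectionCount(2)}");
    }
}
```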







But another question arises: if we have only two generations, we run into problems. Either we try to make the GC run as fast as possible: then we try to make the size of the younger generation as small as possible. As a result, objects will accidentally slip into the older generation (if the GC happened to run "right now, during a furious burst of memory allocation for many objects"). Or, to minimize such accidental slipping, we increase the size of the younger generation. But then the GC on the younger generation will run long enough to slow the application down more and more.







The way out is to introduce a "middle" generation. An adolescent one, so to speak: if you survive to adolescence, you are more likely to live to old age. The essence of its introduction is to strike a balance between getting the smallest possible younger generation and the most stable possible older generation, where it is better not to touch anything. It is a zone where the fate of objects has not yet been decided. The first generation (do not forget that we count from zero) is also kept small, and the GC looks there less often. The GC thereby gives objects that are in the temporary, first generation a chance not to end up in the older generation, which is extremely hard to collect.







So we got the idea of three generations.
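The three generations are visible directly through the public API. A small sketch (with background or server GC the exact moment of promotion may vary, hence "typically"): a surviving object is promoted from generation 0 to 1 and then to 2 across successive collections.

```csharp
using System;

class Program
{
    static void Main()
    {
        var survivor = new object();

        Console.WriteLine(GC.GetGeneration(survivor)); // 0: freshly allocated

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1: survived one collection

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 2: survived again, now "old"

        GC.KeepAlive(survivor); // keep the object reachable throughout the demo
    }
}
```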

The next layer of optimization is an attempt to abandon compaction altogether. After all, if we do not do it, we get rid of a huge amount of work. Let us go back to the question of free areas.







After we have used up all the memory available in the heap and the GC has been called, there is a natural desire to abandon compaction in favor of continuing to allocate memory inside the freed areas, provided their size is sufficient to accommodate a certain number of objects. Here we come to the idea of the second memory-reclamation algorithm in the GC, called Sweep: we do not compact memory; instead, we use the voids left by freed objects to place new objects.
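A minimal sketch of the Sweep idea (a toy first-fit allocator over a free list, not the CLR's actual implementation): a new object is placed into an existing void when one is big enough, and nothing is moved.

```csharp
using System;
using System.Collections.Generic;

// Toy free-list allocator illustrating Sweep: fill the voids, move nothing.
class SweepHeap
{
    // Each free region is just (start, size), as in the free list sketched earlier.
    private readonly List<(long Start, long Size)> _free = new()
    {
        (0x0040, 64),
        (0x0200, 24),
        (0x0400, 256),
    };

    // First fit: place the new object into the first void that is large enough.
    public long? Allocate(long size)
    {
        for (int i = 0; i < _free.Count; i++)
        {
            var (start, available) = _free[i];
            if (available < size) continue;

            if (available == size) _free.RemoveAt(i);
            else _free[i] = (start + size, available - size); // shrink the void
            return start;
        }
        return null; // no void fits: a real GC would collect (and possibly compact) here
    }
}

class Program
{
    static void Main()
    {
        var heap = new SweepHeap();
        Console.WriteLine(heap.Allocate(24) ?? -1);   // fits in the first void
        Console.WriteLine(heap.Allocate(200) ?? -1);  // skips the voids that are too small
        Console.WriteLine(heap.Allocate(1000) ?? -1); // -1: nothing large enough is left
    }
}
```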







Thus we have described and justified all the basic ideas behind the GC algorithms.

So, two days later, we can draw some conclusions. As I understand it, most people understood most of the text, or even all of it. Some answered that they did not understand it; some, accordingly, understood it partially. The bet was won by the readers' team, albeit by a slim margin, as they say. But, as I said, everyone benefits from this: the text will be amended and supplemented, and updated in both places: in the book and here, in the article.







