[Question] What's the relationship of GCHeap, gc_heap, heap_segment, alloc_context and generation? #7275
That's correct. Whenever a
The
I'm not really sure about that - perhaps there's a historical reason I'm unaware of?
The two segments reserved on startup are the ephemeral segment, which contains Gen 0 and Gen 1, and the initial LOH segment. When it comes time to promote things to Gen 2, an additional heap segment is reserved; it becomes the new ephemeral segment, and the existing segment becomes a Gen 2 segment. This is not to say that there are only ever Gen 0 and Gen 1 objects on the ephemeral segment; there may still be situations where Gen 2 objects end up on it.
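As a rough mental model of the bookkeeping involved, here is a simplified sketch of a heap segment. This is an illustration only: the field names mirror the common ones in CoreCLR's `heap_segment` (`mem`, `allocated`, `committed`, `reserved`, `next`), but the real struct in gc.cpp carries much more state (flags, background GC info, and so on).

```cpp
#include <cstddef>
#include <cstdint>

// Simplified sketch of a heap segment (illustrative; the real
// heap_segment in coreclr's gc.cpp has many more fields).
struct heap_segment_sketch
{
    uint8_t* mem;        // first usable byte for objects
    uint8_t* allocated;  // end of the in-use portion
    uint8_t* committed;  // end of the committed memory
    uint8_t* reserved;   // end of the reserved address range
    heap_segment_sketch* next;  // next segment in this generation's list

    std::size_t bytes_in_use() const
    {
        return static_cast<std::size_t>(allocated - mem);
    }
};
```

When a new ephemeral segment is reserved as described above, it is another instance of this structure linked into the heap's segment lists; the old one simply continues life holding what are now Gen 2 objects.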
I suppose that depends on how you draw up the heap - when the GC is initialized, the ephemeral segment is at the "top" of the heap with regard to memory addresses (i.e. any object above the lower bound of the ephemeral segment must be in the ephemeral segment, because there are no segments past its upper bound). Though this is the case at startup, it is not always the case during normal execution, as new segments may get reserved at a higher memory address than the ephemeral segment. When this happens, we have to inform the EE that it has occurred, since we have to switch write barriers. (See
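The reason the write barrier cares is that its fast path is just a range check against the ephemeral generation's address bounds. A minimal sketch of that check, assuming globals modeled on CoreCLR's `g_ephemeral_low` / `g_ephemeral_high` (the addresses here are made up for illustration):

```cpp
#include <cstdint>

// Assumed stand-ins for CoreCLR's g_ephemeral_low / g_ephemeral_high
// globals; the addresses are arbitrary illustration values.
static uint8_t* g_ephemeral_low  = reinterpret_cast<uint8_t*>(0x1000);
static uint8_t* g_ephemeral_high = reinterpret_cast<uint8_t*>(0x9000);

// Fast-path test the write barrier performs: only stores of references
// into the ephemeral range need a card-table update.
bool is_ephemeral(const uint8_t* addr)
{
    return addr >= g_ephemeral_low && addr < g_ephemeral_high;
}
```

While the ephemeral segment sits at the top of the address space, everything above `g_ephemeral_low` is ephemeral and a one-sided compare suffices; once a segment appears above it, the barrier has to check both bounds, which is why the EE must switch barriers.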
Yes, LOH segments are logically separate from SOH segments.
For Server GC, this ends up calling
Yep! There are many different states that the allocator can be in, but this is the case for the most common state.
The point of Server GC is to parallelize GC work across multiple cores, and it's a serious problem if the heaps become unbalanced - that is, if one heap has more work to do than another. That's pretty vague, but the idea is that we want every GC on every heap across every processor to take approximately the same amount of time, so that we minimize the time the other cores spend waiting on a single heap that's taking a long time to do something(*). To remedy this, we actively try to balance allocations across heaps.

Usually an allocation's alloc heap (the heap it is currently allocating on, and the heap that gets asked for more allocation quantums) is the same as its home heap (the heap chosen to be the "home" of allocations from this thread). The two concepts are very close to one another, and the alloc heap and home heap are often the same. Whether or not they are the same is decided through heuristics in

Looking at the code, the GC will freely relocate the home heap to another heap if it's advantageous, but it will only relocate the alloc heap if it finds a heap that's "better" for allocating on than the current alloc heap.

(*) A footnote: all server GC threads have to synchronize at certain points throughout a GC. These are called "joins" in the code and docs. We want to minimize the time spent joining, since it wastes processor time.
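To make the "only relocate when clearly better" idea concrete, here is a hypothetical sketch. This is not the real `balance_heaps` heuristic (which also weighs CPU affinity, allocation budgets, and more); the struct, function, and threshold are all invented for illustration.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-heap load measure for the sketch.
struct heap_load
{
    std::size_t allocated_bytes;
};

// Sketch of the balancing idea: switch a thread's alloc heap only when
// some other heap is less loaded by at least `improvement_threshold`,
// so threads don't ping-pong between heaps with similar load.
std::size_t pick_alloc_heap(const std::vector<heap_load>& heaps,
                            std::size_t current,
                            std::size_t improvement_threshold)
{
    std::size_t best = current;
    for (std::size_t i = 0; i < heaps.size(); ++i)
    {
        if (heaps[i].allocated_bytes + improvement_threshold
                < heaps[best].allocated_bytes)
            best = i;
    }
    return best;
}
```

The threshold is the key design point: without it, two heaps with near-equal load would cause constant alloc-heap churn, defeating the purpose of balancing.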
Thanks for your detailed answer! It helped me learn a lot.
Now I have another question about allocation contexts and segments.
When using Workstation GC, all three threads will have allocation contexts that point to the same (the only) ephemeral segment. When using Server GC, threads 0 and 1 might be using the same ephemeral segment, depending on whether or not their alloc heaps have been set to the core they are currently running on. Each heap's allocator is handing out logically separate allocation quantums to each context, so no thread is allocating in any other thread's allocation region. I'm not quite sure what you mean by your last question - when an allocation quantum is exhausted, all remaining memory that wasn't used to allocate objects gets turned into a free object.
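The per-thread quantum mechanics can be sketched as a bump-pointer allocator. The field names mirror CoreCLR's `gc_alloc_context` (`alloc_ptr` / `alloc_limit`), but this is a simplified illustration, not the real allocator:

```cpp
#include <cstddef>
#include <cstdint>

// Simplified sketch of a per-thread allocation context; field names
// mirror coreclr's gc_alloc_context.
struct alloc_context_sketch
{
    uint8_t* alloc_ptr;    // next free byte in this thread's quantum
    uint8_t* alloc_limit;  // end of the quantum handed out by the heap
};

// Bump-pointer allocation. Returns nullptr when the quantum can't fit
// the object; at that point the real allocator turns the leftover
// [alloc_ptr, alloc_limit) range into a free object and asks the heap
// for a fresh quantum.
uint8_t* try_alloc(alloc_context_sketch& acx, std::size_t size)
{
    if (acx.alloc_ptr + size > acx.alloc_limit)
        return nullptr;
    uint8_t* result = acx.alloc_ptr;
    acx.alloc_ptr += size;
    return result;
}
```

Because each context owns a disjoint `[alloc_ptr, alloc_limit)` range, threads bump-allocate without synchronizing; they only take a lock (per heap) when requesting a new quantum.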
Today I used lldb to trace a coreclr process, and found that my previous question may have been a stupid one :( Also, ephemeral_heap_segment.next is always nullptr; the remaining segments are in the generation_table.
@303248153 On a related note, have you seen the GCSample that's available? It might give you another way of examining the GC as it allows you to access it outside of the CLR, from the comments in the file:
Also thanks for the info that you and @swgillespie have provided in this issue, I've learnt lots!!
Debugging through the code is the best way of understanding what the code does. I would also encourage you to take a look at the GC BotR chapter, which answers many of your questions. For example:

On allocation contexts (keeping the heap crawlable): "The allocator makes sure to make a free object out of left over memory in each allocation quantum. For example, if there is 30 bytes left in an allocation quantum and the next object is 40 bytes, the allocator will make the 30 bytes a free object and get a new allocation quantum."

On heap segments: "There's always only one ephemeral segment in each small object heap, which is where gen0 and gen1 live. This segment may or may not include gen2 objects. In addition to the ephemeral segment, there can be zero, one or more additional segments, which will be gen2 segments since they only contain gen2 objects."
Thanks to all of you guys.
As I asked in the previous issue #7249, I now want to know the relationship between these classes.
I made a graph and several conclusions; please help me determine whether they're correct or wrong.
Does the generation_table have 5 (NUMBERGENERATIONS+1) elements? Also I have another small question: what does alloc_context::home_heap stand for, and what is the purpose of balance_heaps?
cc @swgillespie
Please help me if you have some time - thanks in advance.