
Incremental Garbage Collector (Mark Bit in Header) #3756

Closed
wants to merge 514 commits

Conversation


@luc-blaeser (Contributor) commented Feb 1, 2023

Incremental GC

Incremental evacuating-compacting garbage collector.

Objective: Scalable memory management that allows full heap usage.

Properties:

  • All GC pauses are short and bounded in time.
  • Full-heap snapshot-at-the-beginning marking.
  • Focus on reclaiming high-garbage partitions.
  • Compacting heap space with partition evacuations.
  • Incremental copying enabled by forwarding pointers.

Design

The incremental GC distributes its workload across multiple steps, called increments, each of which pauses the mutator (the user's program) for only a limited amount of time. As a result, the GC appears to run concurrently (although not in parallel) with the mutator and thus allows scalable heap usage, where the GC work fits within the instruction-limited IC messages.

Similar to the recent Java Shenandoah GC [1], the incremental GC organizes the heap in equally-sized partitions and selects high-garbage partitions for compaction by using incremental evacuation and the Brooks forwarding pointer technique [2].

The GC runs in three phases:

  1. Incremental Mark: The GC performs full-heap incremental tri-color marking with snapshot-at-the-beginning consistency. For this purpose, write barriers intercept mutator pointer overwrites between GC mark increments, and the target object of an overwritten pointer is marked (a sketch of the write barrier follows after this phase list). Concurrent new object allocations are also conservatively marked. Objects are marked by a reserved bit in their header. Moreover, the phase employs a mark stack, a growable linked list of tables in the heap, that can be recycled as garbage during the active GC run. Full-heap marking has the advantage that it can also deal with arbitrarily large cyclic garbage, even if it is spread across multiple partitions. As a side activity, the mark phase also maintains bookkeeping of the amount of live data per partition. Conservative snapshot-at-the-beginning marking and retaining new allocations are necessary because the WASM call stack cannot be inspected to collect the root set. Therefore, the mark phase must also only start on an empty call stack.

  2. Incremental Evacuation: The GC selects non-free partitions with a defined minimum amount of garbage by consulting the statistics prepared by the mark phase. Subsequently, marked objects inside the selected partitions are evacuated to free partitions and thereby compacted. To allow incremental object moving and incremental updating of pointers, each object carries redirection information in its header: a forwarding pointer, also called a Brooks pointer. For non-moved objects, the forwarding pointer reflexively points back to the object itself, while for moved objects, the forwarding pointer refers to the new object location. Each object access and equality check has to be redirected via this forwarding pointer. During this phase, evacuated partitions are still retained and the original locations of evacuated objects are forwarded to their corresponding new object locations. Therefore, the mutator can continue to use old incoming pointers to evacuated objects.

  3. Incremental Updates: All pointers to moved objects have to be updated before free space can be reclaimed. For this purpose, the GC performs a full-heap scan and updates all pointers in live objects to their forwarded addresses. At the same time, the mark bits in the live objects are cleared. As the mutator may perform concurrent pointer writes behind the update scan line, a write barrier catches such pointer writes and resolves them to the forwarded locations. The same applies to new object allocations that may have old pointer values in their initialized state (e.g. originating from the call stack). Once this phase is completed, all evacuated partitions are freed and can later be reused for new object allocations. The update phase can only be completed when the call stack is empty, since the GC does not access the WASM stack. Unlike the Shenandoah GC, no remembered sets are maintained for tracking incoming pointers to partitions.
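
As an illustration of the barriers used in the mark and update phases, here is a minimal sketch of a combined write barrier. All names (write_with_barrier, mark_object, resolve_forwarding, the Phase enum) and the pointer tag check are assumptions for this sketch, not the actual motoko-rts code:

```rust
// Sketch only: a snapshot-at-the-beginning / forwarding write barrier.
// The helper functions are placeholders for the real runtime primitives.

#[derive(Clone, Copy, PartialEq)]
enum Phase {
    Pause,
    Mark,
    Evacuate,
    Update,
}

static mut PHASE: Phase = Phase::Pause;

fn is_pointer(value: usize) -> bool {
    // Assumed pointer tagging check, purely illustrative.
    value & 0b1 != 0
}

unsafe fn mark_object(_target: usize) {
    // Placeholder: set the mark bit in the object header and push the object
    // onto the mark stack if it was not marked before.
}

unsafe fn resolve_forwarding(value: usize) -> usize {
    // Placeholder: follow the Brooks forwarding pointer of the referenced object.
    value
}

/// Called by compiler-generated code for every pointer store `*location = value`.
unsafe fn write_with_barrier(location: *mut usize, value: usize) {
    match PHASE {
        // Mark/evacuation phases: keep the marking snapshot consistent by
        // marking the target of the pointer that is about to be overwritten.
        Phase::Mark | Phase::Evacuate => {
            let overwritten = *location;
            if is_pointer(overwritten) {
                mark_object(overwritten);
            }
            *location = value;
        }
        // Update phase: writes behind the update scan line are resolved to the
        // forwarded object locations.
        Phase::Update => {
            *location = if is_pointer(value) {
                resolve_forwarding(value)
            } else {
                value
            };
        }
        Phase::Pause => *location = value,
    }
}
```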

Humongous objects:

  • Objects with a size larger than a partition require special handling: A sufficient number of contiguous free partitions is searched for and reserved for the large object (as sketched below). Large objects are not moved by the GC. Once they have become garbage (no longer marked by the GC), their hosting partitions are immediately freed. Both external and internal fragmentation can only occur for huge objects.
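
A minimal sketch of the contiguous-partition search for humongous objects, assuming a simple boolean free map; reserve_humongous and the constants are hypothetical names, not the runtime API:

```rust
// Sketch only: reserve a run of contiguous free partitions for an object
// that is larger than a single partition.

const PARTITION_SIZE: usize = 32 * 1024 * 1024; // 32 MB, see Configuration
const MAX_PARTITIONS: usize = 128;              // 4 GB heap / 32 MB partitions

/// Returns the index of the first partition of a reserved contiguous run,
/// or `None` if the heap is too fragmented to host the object.
fn reserve_humongous(free: &mut [bool; MAX_PARTITIONS], size_in_bytes: usize) -> Option<usize> {
    let needed = (size_in_bytes + PARTITION_SIZE - 1) / PARTITION_SIZE; // round up
    let mut run_start = 0;
    let mut run_length = 0;
    for index in 0..MAX_PARTITIONS {
        if free[index] {
            if run_length == 0 {
                run_start = index;
            }
            run_length += 1;
            if run_length == needed {
                // Reserve the whole run; it is freed as a unit once the large
                // object is no longer marked.
                for partition in run_start..=index {
                    free[partition] = false;
                }
                return Some(run_start);
            }
        } else {
            run_length = 0;
        }
    }
    None
}
```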

Increment limit:

  • The GC maintains a synthetic deterministic clock by counting work steps, such as marking an object, copying a word, or updating a pointer. The clock serves to limit the duration of a GC increment: the increment is stopped whenever the limit is reached, and the GC later resumes its work in a new increment (a sketch of the clock follows below). To respect the limit even for large objects, large arrays are marked and updated in incremental slices. Moreover, huge objects are never moved. There are two types of GC increments: (1) regular GC increments triggered at compiler-instrumented scheduling points when the call stack is empty; (2) additional shorter increments performed on object allocations during active garbage collection. The latter serves to reduce the reclamation latency under a high allocation rate during garbage collection.
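
The synthetic clock can be pictured as a simple step counter per increment; the following sketch uses hypothetical names and charges one unit per work step:

```rust
// Sketch only: synthetic deterministic clock bounding a GC increment.

const INCREMENT_LIMIT: usize = 2_500_000; // regular increment, see Configuration

struct Increment {
    steps: usize, // work steps performed in the current increment
}

impl Increment {
    fn new() -> Increment {
        Increment { steps: 0 }
    }

    /// Charge one unit of work, e.g. marking an object, copying a word,
    /// or updating a pointer.
    fn charge(&mut self) {
        self.steps += 1;
    }

    /// Checked between work steps: if exhausted, the increment returns to the
    /// mutator and the GC resumes in a later increment where it stopped.
    fn exhausted(&self) -> bool {
        self.steps >= INCREMENT_LIMIT
    }
}
```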

Configuration

  • Partition size: 32 MB.

  • Increment limit: Regular increment bounded to 2,500,000 steps (approximately 600 million instructions). Allocation increments are performed in intervals of 5,000 allocations with 500,000 steps.

  • Survival threshold: Partitions are evacuated if less than 85% of the space is alive (marked).

  • GC start: Scheduled when the growth (new allocations since the last GC run) accounts for more than 65% of the heap size. When passing the critical limit of 3GB (of the 4GB heap size), the GC is started already when the growth exceeds 15% of the heap size.

The configuration can be adjusted to tune the GC.
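
For reference, the configuration above expressed as constants; this is a sketch and the constant names are assumptions that need not match the actual motoko-rts sources:

```rust
// Sketch of the GC configuration listed above (hypothetical constant names).

pub const PARTITION_SIZE: usize = 32 * 1024 * 1024;            // 32 MB partitions
pub const REGULAR_INCREMENT_LIMIT: usize = 2_500_000;          // steps per regular increment
pub const ALLOCATION_INCREMENT_INTERVAL: usize = 5_000;        // allocations between allocation increments
pub const ALLOCATION_INCREMENT_LIMIT: usize = 500_000;         // steps per allocation increment
pub const SURVIVAL_THRESHOLD: f64 = 0.85;                      // evacuate partitions below 85% live data
pub const GROWTH_THRESHOLD: f64 = 0.65;                        // start GC at 65% heap growth
pub const CRITICAL_HEAP_LIMIT: usize = 3 * 1024 * 1024 * 1024; // 3 GB critical limit
pub const CRITICAL_GROWTH_THRESHOLD: f64 = 0.15;               // 15% growth beyond the critical limit
```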

Measurement

The following results have been measured on the GC benchmark with dfx 0.12.1. The Copying, Compacting, and Generational GC are based on the original runtime system without the forwarding pointer header extension. No denotes the disabled GC based on the runtime system with the forwarding pointer header extension. Measurement results are rounded to two significant figures.

Scalability

Summary: The incremental GC allows full 4GB heap usage without exceeding the message instruction limit. It therefore scales much higher than the existing stop-and-go GCs and naturally also higher than without GC.

Average number of allocations for the benchmark limit cases, until reaching a limit (instruction limit, heap limit, dfx cycles limit).

| GC              | Avg. Allocation Limit |
| --------------- | --------------------- |
| **Incremental** | **140e6**             |
| No              | 47e6                  |
| Generational    | 33e6                  |
| Compacting      | 37e6                  |
| Copying         | 47e6                  |

3x higher than the other GCs and also than no GC.

Currently, the following limit benchmark cases do not reach the 4GB heap maximum due to GC-independent reasons:

  • buffer applies exponential array list growth where the copying to the larger array exceeds the instruction limit.
  • rb-tree, trie-map, and btree-map are so garbage-intensive that they run out of dfx cycles or hit another limit.

GC Pauses

Longest GC pause, maximum of all benchmark cases:

| GC              | Longest GC Pause (instructions) |
| --------------- | ------------------------------- |
| **Incremental** | **6.0e8**                       |
| Generational    | 1.2e9                           |
| Compacting      | 7.5e9                           |
| Copying         | 5.3e9                           |

2x shorter than the generational GC, 11x shorter than the compacting GC, and 8.9x shorter than the copying GC.

Performance

Total number of instructions (mutator + GC), average across all benchmark cases:

| GC              | Avg. Total Instructions |
| --------------- | ----------------------- |
| **Incremental** | **2.7e10**              |
| Generational    | 2.1e10                  |
| Compacting      | 2.7e10                  |
| Copying         | 2.5e10                  |

29% slower than the generational GC. Equal to the compacting GC. 8% slower than the copying GC.

Mutator utilization on average:

| GC              | Avg. Mutator Utilization |
| --------------- | ------------------------ |
| **Incremental** | **92%**                  |
| Generational    | 85%                      |
| Compacting      | 68%                      |
| Copying         | 71%                      |

7% higher (better) than the generational GC, >20% higher than the compacting and the copying GC.

Memory Size

Allocated WASM memory space, benchmark average:

| GC              | Avg. Memory Size |
| --------------- | ---------------- |
| **Incremental** | **310 MB**       |
| No              | 820 MB           |
| Generational    | 210 MB           |
| Compacting      | 210 MB           |
| Copying         | 290 MB           |

48% higher (worse) than the generational and the compacting GC. 7% higher than the copying GC.

Occupied heap size at the end of each benchmark case, average across all cases:

| GC              | Avg. Final Heap Occupation |
| --------------- | -------------------------- |
| **Incremental** | **180 MB**                 |
| No              | 820 MB                     |
| Generational    | 150 MB                     |
| Compacting      | 150 MB                     |
| Copying         | 150 MB                     |

20% higher than the other GCs.

Overheads

Additional mutator costs implied by the incremental GC:

  • Mark bit:
    • Unmasking the mark bit when accessing the tag of an object.
  • Write barrier:
    • During the mark and evacuation phase: Marking the target of overwritten pointers.
    • During the update phase: Resolving forwarding of written pointers.
  • Allocation barrier:
    • During the mark and evacuation phase: Marking newly allocated objects.
    • During the update phase: Resolving pointer forwarding in initialized objects.
    • Triggering allocation increments.
  • Pointer forwarding:
    • Indirecting each object access and equality check via the forwarding pointer, as sketched below.
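
The mark-bit unmasking and the forwarding indirection can be sketched as follows; the object layout, field names, and the mark bit position are illustrative assumptions and do not necessarily match the actual header encoding:

```rust
// Sketch only: header with mark bit and Brooks forwarding pointer.

#[repr(C)]
struct Obj {
    tag: usize,        // object tag; one reserved bit serves as the mark bit
    forward: *mut Obj, // forwarding (Brooks) pointer
}

const MARK_BIT: usize = 1 << 31; // assumed bit position, illustrative only

/// Reading the tag requires unmasking the mark bit.
unsafe fn tag(object: *mut Obj) -> usize {
    (*object).tag & !MARK_BIT
}

/// Every object access is indirected via the forwarding pointer: it points to
/// the object itself while it has not moved, and to the new copy afterwards.
unsafe fn resolve(object: *mut Obj) -> *mut Obj {
    (*object).forward
}

/// Pointer equality also has to compare the forwarded addresses.
unsafe fn same_object(left: *mut Obj, right: *mut Obj) -> bool {
    resolve(left) == resolve(right)
}
```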

Mutator instructions:

| GC              | Avg. Mutator Instructions |
| --------------- | ------------------------- |
| **Incremental** | **2.4e10**                |
| No              | 1.6e10                    |

50% overhead. However, these costs are amortized by cheaper collection, such that the overall performance is not significantly worse; cf. Performance.

Testing

  1. RTS unit tests

    In Motoko repo folder rts:

    make test
    
  2. Motoko test cases

    In Motoko repo folder test/run and test/run-drun:

    export EXTRA_MOC_ARGS="--sanity-checks --incremental-gc --force-gc"
    make
    
  3. GC Benchmark cases

    In gcbench repo:

    ./measure-all.sh
    
  4. Extensive memory sanity checks

    Adjust Cargo.toml in rts/motoko-rts folder:

    default = ["ic", "memory-check"]
    

    Run selected benchmark and test cases. Some of the tests will exceed the instruction limit due to the expensive checks.

Extension for 64-Bit Heaps

The partition information would need to be stored dynamically instead of in a static allocation. For example, the information could be stored in a reserved space at the beginning of a partition (except if the partition serves as an extension hosting a huge object). Apart from that, the GC should be portable and scalable to 64-bit memory without significant design changes.

Design Alternatives

  • Free list: See the prototype of the incremental mark and sweep GC. The free-list-based incremental GC shows higher reclamation latency, slower performance (free list selection), and potentially higher external fragmentation (no compaction, just free neighbor merging).
  • Remembered set: Inter-partition pointers could be stored in a remembered set to allow more selective and faster pointer updates. However, the mark bits would still need to be cleared with a full heap traversal. Moreover, the write barrier would become more expensive, having to detect and store the relevant pointers in the remembered set. Also, the remembered set would occupy additional memory.
  • Mark bitmap: In combination with the remembered sets, a mark bitmap per partition could accelerate the update phase. Traversal of the heap could be avoided in favor of selectively updating the pointers that are recorded in the remembered sets of the evacuated partitions.
  • Special solution: To be analyzed. A central location table could be introduced such that pointers to an object are always redirected via an object-associated table entry. As a consequence, objects could be efficiently moved by only updating the corresponding table entry. For fast backward navigation, objects may store the table entry address in their header (instead of the Brooks pointer). As a result, the update phase could be removed and objects could be moved one by one down in the heap (continuously overwriting the previously moved objects). Extensions of the central location table are simple, since objects blocking the table extension can easily be moved away. Free entries in the central table are linearly linked. The design can easily be combined with generational garbage collection to benefit from fast reclamation of short-lived objects. The downside: the reclamation latency may be higher, since the entire heap needs to be compacted to recycle free space in the old generation.

References

[1] C. H. Flood, R. Kennke, A. Dinn, A. Haley, and R. Westrelin. Shenandoah. An Open-Source Concurrent Compacting Garbage Collector for OpenJDK. Intl. Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, PPPJ'16, Lugano, Switzerland, August 2016.

[2] R. A. Brooks. Trading Data Space for Reduced Time and Code Space in Real-Time Garbage Collection on Stock Hardware. ACM Symposium on LISP and Functional Programming, LFP'84, New York, NY, USA, 1984.

crusso pushed a commit to dfinity/motoko-base that referenced this pull request Feb 23, 2023
Currently, the CI builds for the incremental GC
(dfinity/motoko#3756) fail because they run out
of memory for this test case (where memory is not collected). The
incremental GC has a somewhat higher memory footprint as it uses larger
object headers, with an additional forwarding pointer.

Therefore, the test size is reduced a little so that it also fits the future
incremental GC.
@luc-blaeser deleted the luc/incremental-gc branch February 24, 2023 11:10
@luc-blaeser changed the title from Incremental Garbage Collector to Incremental Garbage Collector (Mark Bit in Header) Feb 24, 2023
This was referenced Feb 24, 2023
mergify bot pushed a commit that referenced this pull request May 12, 2023
### Incremental GC PR Stack
The Incremental GC is structured in three PRs to ease review:
1. #3837 **<-- this PR**
2. #3831
3. #3829

# Incremental GC

Incremental evacuating-compacting garbage collector.

**Objective**: Scalable memory management that allows full heap usage.

**Properties**:
* All GC pauses are short and bounded in time.
* Full-heap snapshot-at-the-beginning marking.
* Focus on reclaiming high-garbage partitions.
* Compacting heap space with partition evacuations.
* Incremental copying enabled by forwarding pointers.
* Using **mark bitmaps** instead of a mark bit in the object headers.
* Limiting the number of evacuations on memory shortage.

## Design

The incremental GC distributes its workload across multiple steps, called increments, each of which pauses the mutator (the user's program) for only a limited amount of time. As a result, the GC appears to run concurrently (although not in parallel) with the mutator and thus allows scalable heap usage, where the GC work fits within the instruction-limited IC messages.

Similar to the recent Java Shenandoah GC [1], the incremental GC organizes the heap in equally-sized partitions and selects high-garbage partitions for compaction by using incremental evacuation and the Brooks forwarding pointer technique [2].

The GC runs in three phases:
1. **Incremental Mark**: The GC performs full-heap incremental tri-color marking with snapshot-at-the-beginning consistency. For this purpose, write barriers intercept mutator pointer overwrites between GC mark increments, and the target object of an overwritten pointer is marked. Concurrent new object allocations are also conservatively marked. To remember the mark state per object, the GC uses partition-associated mark bitmaps that are temporarily allocated during a GC run (see the bitmap sketch after this phase list). The phase additionally needs a mark stack, a growable linked list of tables in the heap, that can be recycled as garbage during the active GC run. Full-heap marking has the advantage that it can also deal with arbitrarily large cyclic garbage, even if it is spread across multiple partitions. As a side activity, the mark phase also maintains bookkeeping of the amount of live data per partition. Conservative snapshot-at-the-beginning marking and retaining new allocations are necessary because the WASM call stack cannot be inspected to collect the root set. Therefore, the mark phase must also only start on an empty call stack.

2. **Incremental Evacuation**: The GC prioritizes partitions with a larger amount of garbage for evacuation, based on the available free space. It also requires a defined minimum amount of garbage for a partition to be evacuated. Subsequently, marked objects inside the selected partitions are evacuated to free partitions and thereby compacted. To allow incremental object moving and incremental updating of pointers, each object carries redirection information in its header: a forwarding pointer, also called a Brooks pointer. For non-moved objects, the forwarding pointer reflexively points back to the object itself, while for moved objects, the forwarding pointer refers to the new object location. Each object access and equality check has to be redirected via this forwarding pointer. During this phase, evacuated partitions are still retained and the original locations of evacuated objects are forwarded to their corresponding new object locations. Therefore, the mutator can continue to use old incoming pointers to evacuated objects.

3. **Incremental Updates**: All pointers to moved objects have to be updated before free space can be reclaimed. For this purpose, the GC performs a full-heap scan and updates all pointers in live objects to their forwarded addresses. As the mutator may perform concurrent pointer writes behind the update scan line, a write barrier catches such pointer writes and resolves them to the forwarded locations. The same applies to new object allocations that may have old pointer values in their initialized state (e.g. originating from the call stack). Once this phase is completed, all evacuated partitions are freed and can later be reused for new object allocations. At the same time, the GC also frees the mark bitmaps stored in temporary partitions. The update phase can only be completed when the call stack is empty, since the GC does not access the WASM stack. No remembered sets are maintained for tracking incoming pointers to partitions.
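
A sketch of a partition-associated mark bitmap with one bit per heap word; the type and method names are assumptions and do not reflect the actual motoko-rts implementation:

```rust
// Sketch only: per-partition mark bitmap, temporarily allocated during a GC run.

const PARTITION_SIZE: usize = 32 * 1024 * 1024;            // 32 MB, see Configuration
const WORD_SIZE: usize = 4;                                // 32-bit heap words
const BITMAP_SIZE: usize = PARTITION_SIZE / WORD_SIZE / 8; // one bit per word

struct MarkBitmap {
    bits: Vec<u8>,
}

impl MarkBitmap {
    fn new() -> MarkBitmap {
        MarkBitmap { bits: vec![0u8; BITMAP_SIZE] }
    }

    fn bit_index(offset_in_partition: usize) -> usize {
        // The object offset within its partition, in words, selects the bit.
        offset_in_partition / WORD_SIZE
    }

    /// Marks the object and returns true if it was newly marked (and must
    /// therefore be pushed onto the mark stack).
    fn mark(&mut self, offset_in_partition: usize) -> bool {
        let bit = Self::bit_index(offset_in_partition);
        let (byte, mask) = (bit / 8, 1u8 << (bit % 8));
        let newly_marked = self.bits[byte] & mask == 0;
        self.bits[byte] |= mask;
        newly_marked
    }

    fn is_marked(&self, offset_in_partition: usize) -> bool {
        let bit = Self::bit_index(offset_in_partition);
        self.bits[bit / 8] & (1u8 << (bit % 8)) != 0
    }
}
```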

**Humongous objects**:
* Objects with a size larger than a partition require special handling: A sufficient number of contiguous free partitions is searched for and reserved for the large object. Large objects are not moved by the GC. Once they have become garbage (no longer marked by the GC), their hosting partitions are immediately freed. Both external and internal fragmentation can only occur for huge objects. Partitions storing large objects do not require a mark bitmap during the GC.

**Increment limit**:
* The GC maintains a synthetic deterministic clock by counting work steps, such as marking an object, copying a word, or updating a pointer. The clock serves to limit the duration of a GC increment: the increment is stopped whenever the limit is reached, and the GC later resumes its work in a new increment. To respect the limit even for large objects, large arrays are marked and updated in incremental slices. Moreover, huge objects are never moved.
For simplicity, GC increments are only triggered at the compiler-instrumented scheduling points when the call stack is empty. The increment limit is increased depending on the number of concurrent allocations, to reduce the reclamation latency under a high allocation rate during garbage collection.

**Memory shortage**:
* If memory is scarce during garbage collection, the GC limits the evacuations to the available space of the free partitions (see the sketch below). This prevents the GC from running out of memory while copying live objects to new partitions.
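
A sketch combining the evacuation policy described above: prioritize partitions with the most garbage, skip partitions above the survival threshold or with large objects, and stop selecting once the evacuated live data would no longer fit into the free partitions. The data layout and names are illustrative assumptions:

```rust
// Sketch only: evacuation selection with a memory-shortage guard.

const PARTITION_SIZE: usize = 32 * 1024 * 1024;
const SURVIVAL_THRESHOLD: f64 = 0.85; // partitions above 85% live data are not evacuated

struct PartitionInfo {
    index: usize,
    live_bytes: usize, // bookkeeping maintained during the mark phase
    free: bool,
    has_large_object: bool,
}

fn select_for_evacuation(partitions: &[PartitionInfo]) -> Vec<usize> {
    // Evacuated live objects must fit into the currently free partitions.
    let free_space = partitions.iter().filter(|p| p.free).count() * PARTITION_SIZE;

    // Candidates: occupied partitions without large objects and below the
    // survival threshold, ordered with the most garbage first.
    let mut candidates: Vec<&PartitionInfo> = partitions
        .iter()
        .filter(|p| !p.free && !p.has_large_object)
        .filter(|p| (p.live_bytes as f64) < SURVIVAL_THRESHOLD * PARTITION_SIZE as f64)
        .collect();
    candidates.sort_by_key(|p| p.live_bytes); // least live data = most garbage

    let mut selected = Vec::new();
    let mut evacuated_live = 0;
    for partition in candidates {
        // Memory-shortage guard: stop once the free space would be exhausted.
        if evacuated_live + partition.live_bytes > free_space {
            break;
        }
        evacuated_live += partition.live_bytes;
        selected.push(partition.index);
    }
    selected
}
```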

## Configuration

* **Partition size**: 32 MB.

* **Increment limit**: Regular increment bounded to 3,500,000 steps (approximately 600 million instructions). Each allocation during GC increases the next scheduled GC increment by 20 additional steps.

* **Survival threshold**: If at least 85% of a partition's space is alive (marked), the partition is not evacuated.

* **GC start**: Scheduled when the growth (new allocations since the last GC run) accounts for more than 65% of the heap size. When passing the critical limit of 3.25GB (of the 4GB heap size), the GC is started already when the growth exceeds 1% of the heap size (a sketch of this heuristic follows below).

The configuration can be adjusted to tune the GC.
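
A sketch of the GC start heuristic with the thresholds listed above; the function and constant names are assumptions, not the runtime API:

```rust
// Sketch only: decide at a scheduling point whether a new GC run should start.

const GROWTH_THRESHOLD: f64 = 0.65;             // normal growth threshold
const CRITICAL_GROWTH_THRESHOLD: f64 = 0.01;    // growth threshold beyond the critical limit
const CRITICAL_HEAP_LIMIT: u64 = 3_489_660_928; // 3.25 GB of the 4 GB heap

/// `heap_size` is the current heap size in bytes, `growth` the bytes allocated
/// since the last GC run.
fn should_start_gc(heap_size: u64, growth: u64) -> bool {
    let threshold = if heap_size > CRITICAL_HEAP_LIMIT {
        CRITICAL_GROWTH_THRESHOLD
    } else {
        GROWTH_THRESHOLD
    };
    growth as f64 > threshold * heap_size as f64
}
```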

## Measurement

The following results have been measured on the GC benchmark with `dfx` 0.13.1. The `Copying`, `Compacting`, and `Generational` GC are based on the original runtime system ***without*** the forwarding pointer header extension. `No` denotes the disabled GC based on the runtime system ***with*** the forwarding pointer header extension. 

### Scalability

**Summary**: The incremental GC allows full 4GB heap usage without exceeding the message instruction limit. It therefore scales much higher than the existing stop-and-go GCs and naturally also higher than without GC.

Average number of allocations for the benchmark limit cases, until reaching a limit (instruction limit, heap limit, `dfx` cycles limit). Rounded to two significant figures.

| GC                | Avg. Allocation Limit   |
| ----------------- | ----------------------- |
| **Incremental**   | **150e6**               |
| No                | 47e6                    |
| Generational      | 33e6                    |
| Compacting        | 37e6                    |
| Copying           | 47e6                    |

3x higher than the other GCs and also than no GC.

Currently, the following limit benchmark cases do not reach the 4GB heap maximum due to GC-independent reasons:
* `buffer` applies exponential array list growth where the copying to the larger array exceeds the instruction limit.
* `rb-tree`, `trie-map`, and `btree-map` are so garbage-intensive that they run out of `dfx` cycles or suffer from a sudden `dfx` network connection interruption.

### GC Pauses

Longest GC pause, maximum of all benchmark cases:

| GC                | Longest GC Pause (instructions) |
| ----------------- | ------------------------------- |
| **Incremental**   | **0.712e9**               |
| Generational      | 1.19e9                    |
| Compacting        | 8.41e9                    |
| Copying           | 5.90e9                    |

Shorter than all the other GCs.

### Performance

Total number of instructions (mutator + GC), average across all benchmark cases:

| GC                | Avg. Total Instructions | 
| ----------------- | ----------------------- | 
| **Incremental**   | **1.85e10**             | 
| Generational      | 1.91e10                 | 
| Compacting        | 2.20e10                 | 
| Copying           | 2.05e10                 | 

Faster than all the other GCs.

Mutator utilization on average:

| GC                | Avg. Mutator Utilization |
| ----------------- | ------------------------ |
| **Incremental**   | **94.6%**                |
| Generational      | 85.4%                    |
| Compacting        | 75.8%                    |
| Copying           | 78.7%                    |

Higher than the other GCs.

### Memory Size

Occupied heap size at the end of each benchmark case, average across all cases:

| GC                | Avg. Final Heap Occupation |
| ----------------- | -------------------------- |
| **Incremental**   | **176 MB**                 |
| No                | 497 MB                     |
| Generational      | 156 MB                     |
| Compacting        | 144 MB                     |
| Copying           | 144 MB                     |

Up to 22% higher than the other GCs.

Allocated WASM memory space, benchmark average:

| GC                | Avg. Memory Size        |
| ----------------- | ----------------------- |
| **Incremental**   | **296 MB**              |
| No                | 499 MB                  |
| Generational      | 191 MB                  |
| Compacting        | 188 MB                  |
| Copying           | 271 MB                  |

9% higher than the copying GC. 57% higher (worse) than the generational and the compacting GC.

## Overheads

Additional mutator costs implied by the incremental GC:
* **Write barrier**: 
    - During the mark and evacuation phase: Marking the target of overwritten pointers.
    - During the update phase: Resolving forwarding of written pointers.
* **Allocation barrier**:
    - During the mark and evacuation phase: Marking newly allocated objects.
    - During the update phase: Resolving pointer forwarding in initialized objects.
* **Pointer forwarding**:
    - Indirecting each object access and equality check via the forwarding pointer.

Runtime costs for the barrier are reported in #3831.
Runtime costs for the forwarding pointers are reported in #3829.

## Testing

1. RTS unit tests

    In Motoko repo folder `rts`:
    ```
    make test
    ```

2. Motoko test cases

    In Motoko repo folder `test/run` and `test/run-drun`:
    ```
    export EXTRA_MOC_ARGS="--sanity-checks --incremental-gc"
    make
    ```

3. GC Benchmark cases

    In `gcbench` repo: 
    ```
    ./measure-all.sh
    ```

4. Extensive memory sanity checks

    Adjust `Cargo.toml` in `rts/motoko-rts` folder:
    ```
    default = ["ic", "memory-check"]
    ```

    Run selected benchmark and test cases. Some of the tests will exceed the instruction limit due to the expensive checks.

## Extension to 64-Bit Heaps

The partition information would need to be stored dynamically instead of in a static allocation. For example, the information could be stored in a reserved space at the beginning of a partition (except if the partition contains static data or serves as an extension hosting a huge object). Apart from that, the GC should be portable and scalable to 64-bit memory without significant design changes.

## Design Alternatives

* **Free list**: See the prototype in #3678. The free-list-based incremental GC shows higher reclamation latency, slower performance (free list selection), and potentially higher external fragmentation (no compaction, just free neighbor merging).
* **Mark bit in object header**: See the implementation in #3756. Storing the mark bit in the object header instead of using a mark bitmap saves memory space, but is more expensive when scanning sparsely marked partitions. Moreover, it increases the number of dirty pages.
* **Remembered set**: Inter-partition pointers could be stored in a remembered set to allow more selective and faster pointer updates. However, the write barrier would become more expensive, having to detect and store the relevant pointers in the remembered set. Also, the remembered set would occupy additional memory.
* **Allocation increments**: On a high allocation rate, the GC could also perform a short GC increment during an allocation. This design is however more complicated, as it prevents the compiler from keeping low-level pointers on the stack while performing an allocation (e.g. during assignments or array tabulation). It is also slower than the current solution, where allocation increments are postponed to the next regularly scheduled GC increment, running when the call stack is empty.
* **Special incremental GC**: Analyzed in PR #3894. An incremental GC based on a central object table that allows easy object movement and incremental compaction. Compared to this PR, the special GC has 35% worse runtime performance.
* **Combining tag and forwarding pointer**: #3904. This seems to be less efficient than the Brooks pointer technique, with a runtime performance degradation of 27.5%, while only offering a small memory saving of around 2%.

## References

[1] C. H. Flood, R. Kennke, A. Dinn, A. Haley, and R. Westrelin. Shenandoah. An Open-Source Concurrent Compacting Garbage Collector for OpenJDK. Intl. Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, PPPJ'16, Lugano, Switzerland, August 2016.

[2] R. A. Brooks. Trading Data Space for Reduced Time and Code Space in Real-Time Garbage Collection on Stock Hardware. ACM Symposium on LISP and Functional Programming, LFP'84, New York, NY, USA, 1984.