Bug report
Bug description:
While testing the incremental GC for the default build, I noticed a problem with the 3.14t (free-threaded) GC. Shortly before the 3.14.0 release, I added some additional logic to defer GC triggering if the process memory use has not increased by 10% since the last collection. Unfortunately, that does not work well with how mimalloc handles memory. It does not promptly return memory to the OS or mark pages as unused (e.g. with madvise). So, the process size as seen by the OS does not decrease after the GC frees cyclic trash.
For programs that create even modest amounts of cyclic garbage, this is a major problem. It acts as if the GC threshold has been set to 40x the "threshold0" value. By default, that means a full GC collection only every 80,000 net new objects. This is not a continuous memory leak, but it means the process uses far more memory than it should.
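Roughly, the deferral amounts to the following check (a Python sketch with illustrative names; the real logic lives in the C GC internals):

```python
def should_defer_collection(mem_at_last_gc, mem_now, growth_factor=1.10):
    """Skip this GC pass unless process memory grew >= 10% since the last one.

    Both arguments are OS-reported process sizes (e.g. RSS on Linux).
    Because mimalloc keeps freed pages rather than returning them to the
    OS, mem_now barely moves after a collection, so this keeps returning
    True until the 40x "threshold0" fallback kicks in (40 * 2000 = 80,000
    net new objects with the default threshold0).
    """
    return mem_now < mem_at_last_gc * growth_factor
```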
The simple fix would be to just remove that process-size-based deferral. However, that would regress on the GH-132917 issue, so I wouldn't recommend it. I have a fairly simple fix that "asks mimalloc" how much memory is in use, rather than asking the OS. Based on my testing, that works well.
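The difference between the two probes can be shown with toy numbers (illustrative only; the actual fix queries mimalloc's own statistics from C, not Python):

```python
def should_defer(baseline, now, growth_factor=1.10):
    # Defer collection unless memory use grew by at least 10%.
    return now < baseline * growth_factor

# Say 100 units are live at the last GC, which frees 30 units of cyclic
# trash, and the program then allocates 20 new units.

# OS view: mimalloc keeps (and reuses) the freed pages, so RSS never
# drops below 100 and the new allocations fit into recycled pages.
assert should_defer(baseline=100, now=100)        # keeps deferring

# Allocator view: usage drops to 70 after the collection, then grows
# back to 90 -- a ~29% increase, so the next collection is not deferred.
assert should_defer(baseline=70, now=90) is False
```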
CPython versions tested on:
3.14
Operating systems tested on:
Linux
Linked PRs