This repository has been archived by the owner on Oct 12, 2022. It is now read-only.

Fix leaking main thread held memory #1857

Merged
merged 2 commits on Sep 17, 2017

Conversation

@MoritzMaxeiner (Contributor) commented Jul 6, 2017

Solves the problem described in #1557 (which stalled).

  • Release such memory explicitly by destroying
    the main thread on threading module termination
  • Prior to this PR, rt.sections_elf_shared.finiTLSRanges already contained the logic to reset _tlsRanges to zero length in the non-Shared version, but that logic was unused; instead, _tlsRanges (which holds only one element) was cleared by calling _tlsRanges.popBack in _d_dso_registry.
    This PR removes that popBack call and instead uses the preexisting logic in finiTLSRanges, reached via
    core.thread.thread_term -> ~Thread on the main thread -> rt.tlsgc.destroy -> rt.sections.finiTLSRanges

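The teardown chain described above can be sketched in simplified D-style pseudocode. This is a sketch only, with hypothetical stand-in functions; the real druntime symbols, signatures, and module layout differ:

```d
// Sketch of the teardown chain this PR relies on (hypothetical stand-ins,
// not the actual druntime source).

struct TLSRange { void* start; size_t length; }

__gshared TLSRange[] _tlsRanges;  // holds one element: the main thread's range

// rt.sections.finiTLSRanges: preexisting logic that resets _tlsRanges
// to zero length in the non-Shared code path.
void finiTLSRanges()
{
    _tlsRanges.length = 0;
}

// rt.tlsgc.destroy, reached from ~Thread:
void tlsgcDestroy()
{
    finiTLSRanges();
}

// core.thread.thread_term: with this PR the main thread is destroy'd here,
// which runs ~Thread and thereby the chain above.
void thread_term()
{
    tlsgcDestroy();  // stands in for: destroy(mainThread) -> ~Thread -> ...
}

// _d_dso_registry unload path: once thread_term has run, _tlsRanges is
// already empty, so the old _tlsRanges.popBack (removed by this PR)
// would hit an AssertError.
```

The point of the sketch: the explicit popBack in _d_dso_registry and the finiTLSRanges path both clear the same single-element array, so keeping both would make the second one fail.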
@dlang-bot (Contributor)

Thanks for your pull request, @Calrama! We are looking forward to reviewing it, and you should be hearing from a maintainer soon.

Some tips to help speed things up:

  • smaller, focused PRs are easier to review than big ones

  • try not to mix up refactoring or style changes with bug fixes or feature enhancements

  • provide helpful commit messages explaining the rationale behind each change

Bear in mind that large or tricky changes may require multiple rounds of review and revision.

Please see CONTRIBUTING.md for more information.

Bugzilla references

Your PR doesn't reference any Bugzilla issue.

If your PR contains non-trivial changes, please reference a Bugzilla issue or create a manual changelog.

@UplinkCoder (Member) left a comment

LGTM

@@ -461,8 +461,6 @@ extern(C) void _d_dso_registry(CompilerDSOData* data)
     else
     {
         // static DSOs are unloaded in reverse order
-        assert(pdso._tlsSize == _tlsRanges.back.length);
-        _tlsRanges.popBack();
Contributor

Just to check - this is called once per the executable, not once per thread, right? Meaning, _tlsRanges will have just one element.

@MoritzMaxeiner (Contributor, Author) commented Jul 10, 2017

Before this PR, yes. With this PR _tlsRanges will have no elements at that time (and the .popBack will thus result in an AssertError, which is why I removed it), because it was already .cleared via thread_term (as per PR description).

Contributor

Yes, just wanted to confirm that!

@Burgos (Member) commented Jul 29, 2017

Just FYI, since I saw this mentioned on the newsgroup :). I'm back from holiday next week and will give this a more detailed look then, if nobody else does first. It looked fine to me last time I checked.

@PetarKirov (Member)

@Burgos, do you have time to give this PR a more detailed look?

@wilzbach (Member) left a comment

LGTM too. Since no one has given feedback on this for a very long time, it's a tiny fix, and nobody who has tried it has seen any issue, I think we should move forward with this.

CC @MartinNowak @CyberShadow @ZombineDev

@Burgos (Member) commented Sep 16, 2017

> @Burgos, do you have time to give this PR a more detailed look?

Yes, indeed. LGTM!
