[oneDPL][tbb] + memory leaks fix (sort/stable sort, tbb backend) #1589

Merged: 4 commits merged into main from dev/mdvorski/tbb_backend_mem_leaks_fix on Jun 1, 2024

Conversation

MikeDvorskiy
Contributor

[oneDPL][tbb] + memory leaks fix (sort/stable sort, tbb backend)

@MikeDvorskiy added the "follow through" label (PRs/issues that should be completed/resolved) on May 15, 2024
@MikeDvorskiy force-pushed the dev/mdvorski/tbb_backend_mem_leaks_fix branch from 4137697 to 420acf8 on May 15, 2024 at 16:49

Comment on lines 648 to 649:

    auto __refcount = --__parent->_M_refcount;
    __alloc.deallocate(this, *__ed);

Contributor

Is there a reason we have to decrement the refcount of the parent before we deallocate the child memory?

If not, couldn't we just move the deallocation call outside the if and simplify this code?

Contributor Author

@pavelkumbrasev,
Could you please answer that question?

pavelkumbrasev

I believe it is a kind of optimization (used in several TBB patterns) where the deallocation can happen concurrently: once the reference count reaches 0, the thread can proceed with tree folding without waiting for the task deallocation on another thread.
I think the benefit is really small, so we could drop the optimization and use @danhoeflinger's suggestion if it helps with readability.
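
For readers unfamiliar with this pattern, here is a small self-contained toy (not the oneDPL source) that mimics the ordering from the quoted snippet: two sibling tasks share a parent whose reference count starts at 2, and each sibling decrements that count before releasing its own storage, so the last sibling to finish can continue folding the tree without waiting for the other sibling's deallocation. All type names and timings below are invented for illustration.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>

// Toy model of the pattern described above (not the oneDPL code itself).
struct parent_task
{
    std::atomic<int> _M_refcount{2}; // two children must finish before folding
};

void finish_child(parent_task& parent, int id, std::chrono::milliseconds dealloc_cost)
{
    // Decrement first, mirroring the quoted snippet: the sibling can already
    // observe that this child is done, even though its storage is not freed yet.
    const int remaining = --parent._M_refcount;

    // Simulate freeing this child's own storage (standing in for the
    // __alloc.deallocate(this, ...) call in the quoted snippet); it may be slow.
    std::this_thread::sleep_for(dealloc_cost);
    std::printf("child %d: storage released\n", id);

    if (remaining == 0)
        std::printf("child %d: last child done, continues folding the tree\n", id);
}

int main()
{
    using namespace std::chrono_literals;
    parent_task parent;

    // Child 1 decrements right away, but its deallocation is slow.
    std::thread t1(finish_child, std::ref(parent), 1, 200ms);
    // Child 2 arrives later, brings the count to 0, and proceeds with folding
    // well before child 1 has finished deallocating.
    std::this_thread::sleep_for(50ms);
    std::thread t2(finish_child, std::ref(parent), 2, 10ms);

    t1.join();
    t2.join();
}
```

With this ordering, child 2 reports that it continues folding before child 1 reports releasing its storage. If the decrement were instead performed after the deallocation, the last-arriving child could not observe the zero count until its sibling's deallocation had completed, which is exactly the wait the comment above says this ordering avoids.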

Contributor

If it is faster to do it this way, I won't object. It may be worth adding a comment to make that clear going forward.

The difference in readability is minor. However, to someone unfamiliar with the code, the current ordering makes it seem like there are more hidden connections that require this pattern.

Contributor Author

I've added a comment.

danhoeflinger previously approved these changes on May 22, 2024

@danhoeflinger (Contributor) left a comment:

LGTM, I think this makes sense. However, I am not as familiar with this tbb backend code in its entirety as others may be.

Co-authored-by: Alexey Kukanov <alexey.kukanov@intel.com>
@danhoeflinger (Contributor) left a comment:

LGTM

@MikeDvorskiy merged commit 481272b into main on Jun 1, 2024
20 checks passed
@MikeDvorskiy deleted the dev/mdvorski/tbb_backend_mem_leaks_fix branch on June 1, 2024 at 08:26
SoilRos added commits to dune-copasi/dune-copasi referencing this pull request on Nov 12 and Nov 13, 2024:
The main offender is TBB, whose parallel sort has memory leaks (oneapi-src/oneDPL#1589), so the parallel sorts now need to be explicitly enabled.