Destructors #13

Closed
sorear opened this issue Oct 18, 2015 · 2 comments · Fixed by #169

Comments


sorear commented Oct 18, 2015

I've been thinking about how I might use crossbeam::mem to implement a concurrent hash table. The current examples of stacks and queues are actually somewhat special, because values can be moved into and out of them but are never borrowed. In contrast, with a hash table I want to be able to read it without modifying it, which requires borrows; but if a value is borrowed from the hash table and then deleted from it in another thread, the drop() call needs to be deferred until garbage collection.

Currently crossbeam only supports freeing memory after the grace period. It seems like it could probably be modified to call destructors, but there might be some interesting caveats or possible sources of unsafety. One obvious issue is that a drop call can free an arbitrary amount of memory, so batching destructors intelligently becomes more difficult (if an object is holding on to 20MiB of memory, we want to free it ASAP, not after 64 epochs).
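
A minimal sketch of what deferring destructors (rather than only deferring frees) could look like, assuming a hypothetical GarbageBag that stores boxed FnOnce destructors alongside an approximate size hint; none of these names come from crossbeam itself:

```rust
// Hypothetical sketch, not crossbeam's internals: each deferred entry owns
// its value inside a boxed closure, so any Drop impl runs at collection
// time instead of at unlink time.
struct GarbageBag {
    deferred: Vec<(usize, Box<dyn FnOnce()>)>,
}

impl GarbageBag {
    fn new() -> Self {
        GarbageBag { deferred: Vec::new() }
    }

    /// Defer dropping `value` until it is safe. `approx_bytes` is a caller
    /// supplied size hint, since `size_of::<T>()` misses heap-held data.
    fn defer_drop<T: 'static>(&mut self, value: T, approx_bytes: usize) {
        let destroy: Box<dyn FnOnce()> = Box::new(move || drop(value));
        self.deferred.push((approx_bytes, destroy));
    }

    /// Approximate bytes retained by garbage that has not been collected yet.
    fn retained_bytes(&self) -> usize {
        self.deferred.iter().map(|(bytes, _)| *bytes).sum()
    }

    /// Run every deferred destructor. This must only be called once a grace
    /// period has passed and no thread can still hold a reference.
    fn collect(&mut self) {
        for (_, destroy) in self.deferred.drain(..) {
            destroy();
        }
    }
}

fn main() {
    struct Noisy(Vec<u8>);
    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("dropping {} bytes", self.0.len());
        }
    }

    let mut bag = GarbageBag::new();
    bag.defer_drop(Noisy(vec![0u8; 1024]), 1024);
    assert_eq!(bag.retained_bytes(), 1024);
    bag.collect(); // the Drop impl fires here, not at defer_drop time
}
```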


schets commented Jan 14, 2016

I'm working on that here

> In contrast, with a hash table I want to be able to read it without modifying it, which requires borrows; but if a value is borrowed from the hash table and then deleted from it in another thread, the drop() call needs to be deferred until garbage collection.

It's only safe to read from (and borrow from) a data structure like the hash table while you know the data can't be freed; in this case, that means while a Guard is active. Likewise, it's only safe to call drop during garbage collection, since dropping invalidates that region of memory.

In my mind, this implies that references to values can't escape the hash table: only copies can, while other read operations would be managed by the table. I'm not ultra-familiar with Rust, but there's probably a way to tie the lifetime of the reference to the lifetime of the guard.
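
A small sketch of that "tie the reference to the guard" idea, using hypothetical Guard and Table types rather than crossbeam's real API; the shared 'g lifetime keeps the returned borrow from outliving the pinned section:

```rust
// Illustrative stand-ins only: imagine `Guard` is what pinning the current
// thread returns, and `Table` is the concurrent hash table.
struct Guard;

struct Table<V> {
    slot: Option<V>, // stand-in for the real concurrent storage
}

impl<V> Table<V> {
    // The `'g` lifetime ties the returned reference to the guard, so the
    // borrow cannot be used after the guard is dropped (i.e. unpinned).
    fn get<'g>(&'g self, _guard: &'g Guard) -> Option<&'g V> {
        self.slot.as_ref()
    }
}

fn main() {
    let table = Table { slot: Some(42) };
    let guard = Guard; // "pin" the current thread
    let value = table.get(&guard);
    assert_eq!(value, Some(&42));
    // Dropping `guard` here and then using `value` would be rejected by the
    // borrow checker, which is exactly the property we want.
}
```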

> Currently crossbeam only supports freeing memory after the grace period. It seems like it could probably be modified to call destructors

This is pretty simple to do for anything that implements Drop.

> (if an object is holding on to 20MiB of memory, we want to free it ASAP, not after 64 epochs)

As it stands now, local garbage is freed whenever a pin call believes that it can, and global garbage is freed whenever the epoch is advanced. Threads could be more forceful about advancing the epoch: GC thresholds could take garbage size into account, so a large value doesn't sit waiting while no epochs advance, and a similar mechanism could pressure threads to advance the epoch and clear large data from the global cache.
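
A rough sketch of the size-aware threshold idea, with made-up constants and a hypothetical SizedBag type; a single large object trips the byte threshold even when the entry count is low:

```rust
// Illustrative thresholds only; crossbeam's actual heuristics differ.
const MAX_ENTRIES: usize = 64;
const MAX_BYTES: usize = 8 << 20; // push for collection once ~8 MiB is retained

struct SizedBag {
    entries: usize,
    bytes: usize,
}

impl SizedBag {
    fn record(&mut self, approx_bytes: usize) {
        self.entries += 1;
        self.bytes += approx_bytes;
    }

    /// True when either the entry count or the retained-byte total crosses
    /// its threshold, so one huge object does not wait for many epochs.
    fn should_try_advance(&self) -> bool {
        self.entries >= MAX_ENTRIES || self.bytes >= MAX_BYTES
    }
}

fn main() {
    let mut bag = SizedBag { entries: 0, bytes: 0 };
    bag.record(20 << 20); // a single 20 MiB object
    assert!(bag.should_try_advance());
}
```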

However, any call to free/drop must wait until it's safe (two epochs have advanced), and as of now epochs only advance when a call to pin decides to. In this branch, one of the functions I'm adding lets you force attempts at GC and epoch advancement; if you knowingly unlink a big object, this function could be called until enough epochs have advanced (I'm working on an update that returns whether a GC actually ran or not).

I think that intelligent batching of drops/frees is too complicated for now; taking garbage size into account and allowing the programmer to force advancement is probably good enough.

@DemiMarie

One approach to handling such situations is to have the API take a closure as an argument and pass a borrowed reference to that closure.
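
A brief sketch of that closure-based shape, with a plain HashMap standing in for the concurrent table and the pin/unpin points indicated only by comments; all names here are illustrative:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Because the borrow is handed to the closure rather than returned, it
// cannot escape the section in which the table keeps the value alive.
struct Map<K, V> {
    inner: HashMap<K, V>, // stand-in for the real concurrent storage
}

impl<K: Hash + Eq, V> Map<K, V> {
    fn with<R>(&self, key: &K, f: impl FnOnce(Option<&V>) -> R) -> R {
        // An epoch-based version would pin the current thread here.
        let result = f(self.inner.get(key));
        // ...and unpin here, after the borrow handed to `f` has ended.
        result
    }
}

fn main() {
    let mut inner = HashMap::new();
    inner.insert("key", vec![0u8; 1024]);
    let map = Map { inner };
    let len = map.with(&"key", |v| v.map_or(0, |v| v.len()));
    assert_eq!(len, 1024);
}
```

The trade-off versus the guard-lifetime approach is that the caller can only extract owned results (here, the length), never a long-lived reference, which is exactly what makes the deferred drop safe.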
