
Potential race condition with the in memory store. #74

Open
elliotcourant opened this issue Sep 27, 2023 · 3 comments
Assignees: elliotcourant
Labels: bug (Something isn't working), memory backend (Memory backend issues)

elliotcourant (Collaborator) commented Sep 27, 2023

I hope I am not creating clutter with this.

I think there is the potential for a race condition when a job is started at the same time that Shutdown(...) is called on the in-memory store.

When Shutdown is called, it iterates over the cancel-function slice without holding the mutex, and it runs in a different goroutine than the workers that own those contexts. A cancel function could be appended to the slice while Shutdown is in progress.

func (m *MemBackend) Shutdown(ctx context.Context) {
    for _, f := range m.cancelFuncs {
        f()
    }

    m.cancelFuncs = nil
}

A mutex is, however, used when the cancel functions are added:

m.mu.Lock()
m.cancelFuncs = append(m.cancelFuncs, cancel)
m.mu.Unlock()

  1. I think the mutex should be changed to an RWMutex, with Shutdown acquiring a read lock on the slice and start acquiring a write lock on it. This would prevent start from adding a new cancel function, and thus starting a new job, during a shutdown, though it does not prevent a job from starting immediately afterwards. (A rough sketch of guarding Shutdown follows this list.)
  2. Do the cancel functions ever get cleaned up from this slice? I don't see anything that removes items from it after a job has completed. While I doubt they add significant overhead, would it be worth looking into a way to clean them up afterwards as well?
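
A rough sketch of what point 1 could look like, using the MemBackend fields quoted above (m.mu and m.cancelFuncs). Because Shutdown also clears the slice, this sketch takes the full (write) lock rather than a read lock, while the append path keeps taking the lock as it does today:

// Sketch only: serializes Shutdown against the append in the handler-start
// path. Assumes m.mu is the same mutex used around the append shown above.
func (m *MemBackend) Shutdown(ctx context.Context) {
    m.mu.Lock() // full lock: Shutdown both reads and resets the slice
    defer m.mu.Unlock()

    for _, f := range m.cancelFuncs {
        f()
    }

    m.cancelFuncs = nil
}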

Happy to take on this work myself and contribute, really excited to start using this tool!

elliotcourant (Collaborator, Author) commented:
I see that the context is only created per handler, not per job, so it's very unlikely that a handler will "start" after the shutdown. And the likelihood of a handler being added during a shutdown is also very low. Happy to tweak this in a PR or to just close it!

acaloiaro (Owner) commented:
I think you're right about this.

If it's not too tedious to trigger it with a test, I think that'd be a good start.
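
For reference, a rough sketch of what such a test might look like. It assumes an in-package test with access to the unexported mu and cancelFuncs fields quoted above, and newMemBackend is a hypothetical stand-in for however the backend is actually constructed; running it with go test -race should surface the unsynchronized read in Shutdown:

func TestShutdownRace(t *testing.T) {
    m := newMemBackend() // hypothetical helper returning a ready-to-use *MemBackend

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        // Mimic handler registration: append cancel funcs under the mutex,
        // exactly as in the snippet quoted in the issue description.
        for i := 0; i < 1000; i++ {
            _, cancel := context.WithCancel(context.Background())
            m.mu.Lock()
            m.cancelFuncs = append(m.cancelFuncs, cancel)
            m.mu.Unlock()
        }
    }()

    // Concurrently shut down; the race detector should flag the
    // unguarded iteration over m.cancelFuncs.
    m.Shutdown(context.Background())
    wg.Wait()
}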

elliotcourant (Collaborator, Author) commented:
Do you think that if start were called on a handler after Shutdown, an error should be returned, or should it panic?
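
One option, as a sketch only: track shutdown state under the same mutex and return an error from the start path. The shutdown flag and ErrShutdown below are hypothetical additions, and the start signature is illustrative rather than the backend's actual one:

// Hypothetical sentinel error for starting a handler after Shutdown.
var ErrShutdown = errors.New("memory backend: handler started after Shutdown")

func (m *MemBackend) start(ctx context.Context /* ...handler args... */) error {
    m.mu.Lock()
    defer m.mu.Unlock()

    // m.shutdown is a hypothetical bool field that Shutdown would set
    // while holding the same mutex.
    if m.shutdown {
        return ErrShutdown
    }

    ctx, cancel := context.WithCancel(ctx)
    m.cancelFuncs = append(m.cancelFuncs, cancel)
    // ... launch the handler's worker goroutines with ctx ...
    return nil
}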

elliotcourant self-assigned this on Oct 6, 2023
elliotcourant added the bug (Something isn't working), redis backend (Redis backend issues), and memory backend (Memory backend issues) labels and removed the redis backend (Redis backend issues) label on Oct 13, 2023