
Add support for lockers, including a distributed lock #408

Closed
fenos opened this issue Mar 2, 2023 · 5 comments · Fixed by #514
Comments

@fenos
Collaborator

fenos commented Mar 2, 2023

The protocol states that:

In order to horizontally scale a tus server, there should be locks in place while the server interacts with the remote storage (S3, for example).

Does tus-node-server already solve this issue in some way?
If not, do we need to implement an interface to add these custom locks?

Thanks!
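
For reference, a minimal sketch of what a pluggable locker interface could look like (TypeScript; `Locker`, `Lock`, and `MemoryLocker` are hypothetical names for illustration, not the actual tus-node-server API):

```ts
// Hypothetical locker interface: one lock per upload ID, held while the
// server reads from or writes to the remote storage (e.g. S3).
interface Lock {
  lock(): Promise<void>
  unlock(): Promise<void>
}

interface Locker {
  newLock(id: string): Lock
}

// Single-process reference implementation; only useful without horizontal scaling.
class MemoryLocker implements Locker {
  private locks = new Set<string>()

  newLock(id: string): Lock {
    return {
      lock: async () => {
        // Wait until no other request in this process holds the lock.
        while (this.locks.has(id)) {
          await new Promise((resolve) => setTimeout(resolve, 50))
        }
        this.locks.add(id)
      },
      unlock: async () => {
        this.locks.delete(id)
      },
    }
  }
}
```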

@Murderlon
Member

Hi, there is no distributed lock, so you would probably have to settle for sticky sessions when you scale. I'm willing to look into it but don't expect it soon :)

@Murderlon
Member

@fenos feel free to post any thoughts and requirements you may have for supporting distributed locks.

@Murderlon changed the title from "Horizontal scaling" to "Add support for lockers, including a distributed lock" on Jul 19, 2023
@Acconut
Member

Acconut commented Jul 25, 2023

Distributed locking is definitely something that we should also explore in tusd. It's one of the things that users are most confused/concerned about, I guess.

@Murderlon
Member

I have a working distributed lock for tusd locally which I could upstream at some point.
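
As an illustration only (not the implementation mentioned above), a distributed lock for such an interface could be built on Redis using `SET ... NX PX` with a token-checked release, for example via `ioredis`; all names below are assumptions:

```ts
import Redis from 'ioredis'
import crypto from 'node:crypto'

// Hypothetical Redis-backed locker: acquires a key per upload ID with a TTL
// so a crashed node cannot hold the lock forever, and releases it only if
// this process still owns it (checked via a random token in a Lua script).
class RedisLocker {
  constructor(private redis: Redis, private ttlMs = 30_000) {}

  newLock(id: string) {
    const key = `tus:lock:${id}`
    const token = crypto.randomUUID()
    return {
      lock: async () => {
        // Retry until the key can be set; NX = only set if it does not exist yet.
        while ((await this.redis.set(key, token, 'PX', this.ttlMs, 'NX')) !== 'OK') {
          await new Promise((resolve) => setTimeout(resolve, 100))
        }
      },
      unlock: async () => {
        // Delete the key only if it still holds our token.
        const script =
          'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end'
        await this.redis.eval(script, 1, key, token)
      },
    }
  }
}
```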

@Acconut
Member

Acconut commented Jul 26, 2023

Amazing, feel free to PR to tusd!
