
Resource sharing / Asynchronous lock #18

Closed

amirouche opened this issue Nov 21, 2017 · 6 comments

Comments

@amirouche
Contributor

amirouche commented Nov 21, 2017

In my search engine, the database works with contexts. I must limit the number of contexts, since the underlying database cannot open as many contexts as there can be concurrent fibers.

Right now the relevant code is the following:

(define (get-or-create-context! env)
  (with-mutex (env-mutex env)
    (let ((contexts (env-contexts env)))
      (if (null? contexts)
          ;; create a new context
          ;; XXX: the number of active context is unbound
          (apply context-open (cons (env-connection env) (env-configs env)))
          ;; re-use an existing context
          (let ((context (car contexts)))
            (env-contexts! env (cdr contexts))
            context)))))

As you can see, the number of contexts is unbounded, so this will lead to a crash if there are too many concurrent fibers.

What I would like is for a fiber to request a context and, if none is available and the limit is reached, suspend until one becomes available.

How can I achieve that?
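
To make this concrete, here is a sketch of the calling convention I am after. get-context! and put-context! are hypothetical procedures: the point is that get-context! should suspend the calling fiber (not the whole thread) when the limit is reached, and put-context! should wake up one waiter. Error handling is omitted for brevity.

(define (with-context env proc)
  ;; Acquire a context, suspending this fiber until one is free.
  (let* ((context (get-context! env))
         (result (proc context)))
    ;; Hand the context back so a waiting fiber can proceed.
    (put-context! env context)
    result))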

@amirouche changed the title from "async" to "How to write asynchronous lock?" Nov 21, 2017
@amirouche changed the title from "How to write asynchronous lock?" to "Resource sharing / Asynchronous lock" Nov 21, 2017
@cwebber
Collaborator

cwebber commented Nov 21, 2017

Heya @amirouche... I'm not sure this belongs on the bug tracker, so I'm going to close it (maybe in the future guile-user is a better place to discuss this?), but it's a good question.

The right route is to never use locks directly. Instead, you have two patterns:

  • If there's a resource that can only be controlled by a single process at a time, set up a single fiber to handle it and read in "requests". If you also need to supply a response, you can accept a "response channel" as an argument and then reply on it once you're done. Then that fiber just loops forever, reading messages in and performing actions (see the first sketch below).
  • If there's a resource that can have multiple processes operating on it, but only a finite number of them, set up a "process pool" to work on it... have a manager process receive the requests and then farm them out to a fixed pool of "worker processes", otherwise following the same pattern as above (see the second sketch below)!
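
Here is a rough sketch of both patterns on top of Fibers channels. The procedure names (make-context-server, call-with-context, make-context-pool) are made up for the example, and the second sketch flattens the manager-plus-workers idea into a fixed set of workers pulling from a shared request channel, which gives the same bounded-concurrency effect.

(use-modules (fibers) (fibers channels))

;; Pattern 1: a single fiber owns the resource and serves requests.
;; A request is a pair of a procedure to run against the resource and
;; a reply channel to send the result back on.
(define (make-context-server context)
  (let ((requests (make-channel)))
    (spawn-fiber
     (lambda ()
       (let loop ()
         (let* ((request (get-message requests))
                (proc (car request))
                (reply (cdr request)))
           (put-message reply (proc context))
           (loop)))))
    requests))

;; Client side: send a request and suspend until the reply arrives.
(define (call-with-context requests proc)
  (let ((reply (make-channel)))
    (put-message requests (cons proc reply))
    (get-message reply)))

;; Pattern 2: a fixed pool of workers, each owning one context, all
;; pulling requests from the same channel.  Callers use
;; call-with-context exactly as above and suspend until some worker
;; is free to serve them.
(define (make-context-pool contexts)
  (let ((requests (make-channel)))
    (for-each
     (lambda (context)
       (spawn-fiber
        (lambda ()
          (let loop ()
            (let* ((request (get-message requests))
                   (proc (car request))
                   (reply (cdr request)))
              (put-message reply (proc context))
              (loop))))))
     contexts)
    requests))

With the pool variant, your get-or-create-context! turns into something like (call-with-context pool (lambda (context) ...)): the calling fiber simply suspends until one of the workers is free to pick up its request.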

Hope that helps... does that make sense to you?

@cwebber closed this as completed Nov 21, 2017
@amirouche
Contributor Author

That's what I had in mind, but I have not come up with the code yet.

@amirouche
Contributor Author

btw, everybody on GitHub uses issues to ask questions.

@amirouche
Contributor Author

That said, I will ask my questions on the mailing list in the future. I feared that my question would go unnoticed on the ML...

@cwebber
Collaborator

cwebber commented Nov 21, 2017

Yeah, I understand... it also sometimes makes sense to ask here, because maybe it wasn't a feature that Fibers provided.

Maybe what we need is a "common patterns" page so that people can see how to do these kinds of things?

@amirouche
Contributor Author

Yes.
