Less restrictive DataSource effect implicits #163

Open
purrgrammer opened this issue Sep 25, 2018 · 8 comments
Labels
question Further information is requested

Comments

@purrgrammer
Contributor

Currently the methods that fetch data in DataSource require an effect type F with Par[F] and ConcurrentEffect[F].

Par is required because the DataSource#batch implementation defaults to running individual fetches in parallel. If we move away from it, data sources that don't implement batching will run their fetches sequentially. Par[F] will still be required when creating or running a fetch to F, since it is used when running multiple batches in parallel.

ConcurrentEffect is perhaps too restrictive, since we may not need the cancellation semantics that Concurrent provides. We could use Effect here instead. Thoughts?

[image: cats-effect typeclass hierarchy (cats-effect-typeclasses)]
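
For context, a rough sketch of the shape being discussed (approximate signatures, not our exact API; the cats.temp.par import path and the method shapes here are just for illustration):

import cats.data.NonEmptyList
import cats.effect.ConcurrentEffect
import cats.implicits._
import cats.temp.par._ // Par type class, from the third-party cats-par library

trait DataSource[I, A] {
  // Every individual fetch currently needs both ConcurrentEffect and Par.
  def fetch[F[_]: ConcurrentEffect: Par](id: I): F[Option[A]]

  // The default batch implementation runs the individual fetches in parallel,
  // which is what pulls in the Par constraint.
  def batch[F[_]: ConcurrentEffect: Par](ids: NonEmptyList[I]): F[Map[I, A]] =
    ids
      .parTraverse(id => fetch[F](id).map(id -> _))
      .map(_.collect { case (i, Some(a)) => i -> a }.toMap)
}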

@purrgrammer purrgrammer added the question Further information is requested label Sep 25, 2018
@peterneyens
Contributor

I think with Concurrent we should be able to implement batch in parallel as well. The code won't end up looking as nice as with Par, but that is probably a price worth paying if we can lose the Par constraint in a lot of places.
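
A rough sketch of what that could look like (illustrative only, not an actual implementation): with just Concurrent you can fork each fetch as a fiber and join them afterwards.

import cats.effect.Concurrent
import cats.implicits._

// Naive parallel traverse using only Concurrent: start every action as a fiber,
// then join them all. Note that this simple version does not cancel the other
// fibers when one of them fails, which the Par/Parallel-based code handles for us.
def parTraverseConcurrent[F[_]: Concurrent, A, B](as: List[A])(f: A => F[B]): F[List[B]] =
  as.traverse(a => Concurrent[F].start(f(a)))
    .flatMap(fibers => fibers.traverse(_.join))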

@fedefernandez
Contributor

fedefernandez commented Sep 25, 2018

Would it be possible to relax the restriction to Effect and define a couple of implementations for when there is a ConcurrentEffect instance?
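
Something along these lines (hypothetical names and signatures, just to illustrate the idea):

import cats.effect.{ConcurrentEffect, Effect}
import fetch.Fetch

object RunFetch {
  // Baseline: only Effect is required, batches would run sequentially.
  def run[F[_]: Effect, A](fa: Fetch[F, A]): F[A] = ???

  // Extra entry point for effect types that do have a ConcurrentEffect
  // instance, where batches can be executed in parallel.
  def runConcurrent[F[_]: ConcurrentEffect, A](fa: Fetch[F, A]): F[A] = ???
}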

@ALPSMAC

ALPSMAC commented Oct 19, 2018

2 cents from the peanut gallery for what it's worth...

From a user's perspective moving from earlier versions of Fetch, it was very surprising to me to have to sprinkle implicit Timer and ContextShift magic into my otherwise Monix Task-based DAOs - e.g.:

def fetchString[F[_] : ConcurrentEffect](n: Int): Fetch[F, String] =
    Fetch(n, ToStringSource)

//Why do I need to do all of this in order to interpret a Fetch into an IO?
//I understand *why* but it's counter-intuitive given that when we 
//run the Fetch to produce the IO we still haven't generated any side-effects.
val ec: ExecutionContext = implicitly[ExecutionContext]
implicit val timer: Timer[IO] = IO.timer(ec)
implicit val cs: ContextShift[IO] = IO.contextShift(ec)

val io: IO[String] = Fetch.run[IO](fetchString(123))

val task: Task[String] = Task.fromEffect(io)

So if I've got code like this:

trait KVStore{
   def get(key: String): Task[String]
}

within which I'm trying to use Fetch interpreted into a Task, I now need to either utilize some sort of constructor injection, trait mixing, or implicit currying to get the desired ExecutionContext (which I don't think was previously required, right?) down to where it needs to go - e.g.:

trait KVStore{
   def get(key: String)(implicit ec: ExecutionContext): Task[String]
}

I'm not sure of the details of Par vs. ConcurrentEffect, but if removing the dependency on Par would eliminate the need for the additional implicit Timer and ContextShift evidence in scope in cases like this one I'm all for it.

Thinking about this more... would it be possible to interpret a Fetch to a Reader[(Timer[IO], ContextShift[IO]), IO] so that the requisite Timer and ContextShift could be injected at the "end of the world" when the program is run?
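
A hypothetical sketch of what I mean, assuming Fetch.run[IO] can derive everything it needs once a Timer[IO] and ContextShift[IO] are in implicit scope (the Fetch value itself may still need its own constraints when it is constructed):

import cats.data.Kleisli
import cats.effect.{ContextShift, IO, Timer}
import fetch.Fetch

// Delay resolving Timer/ContextShift until the "end of the world": the caller
// supplies them as an environment instead of needing them implicitly in scope here.
def runAsReader[A](fa: Fetch[IO, A]): Kleisli[IO, (Timer[IO], ContextShift[IO]), A] =
  Kleisli { case (t, cs) =>
    implicit val timer: Timer[IO] = t
    implicit val contextShift: ContextShift[IO] = cs
    Fetch.run[IO](fa)
  }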

@purrgrammer
Contributor Author

I'm not sure of the details of Par vs. ConcurrentEffect, but if removing the dependency on Par would eliminate the need for the additional implicit Timer and ContextShift evidence in scope in cases like this one I'm all for it.

We use Par to leverage parallel traverse operations without having to implement them ourselves. It has been mentioned that we could implement those parallel operations without requiring Par, so I'm all for removing it. Also, it introduces another third-party dependency that is not officially supported by typelevel/cats.

Thinking about this more... would it be possible to interpret a Fetch to a Reader[(Timer[IO], ContextShift[IO]), IO] so that the requisite Timer and ContextShift could be injected at the "end of the world" when the program is run?

About Timer and ContextShift: I'm not sure we can remove them when running a Fetch into an IO, since the latter needs evidence of them existing. I could be wrong, but I don't think we can remove those; I'll spend some time trying though. I'm also not sure the solution of injecting them by running a Fetch into a Reader could work, since we need implicit evidence.

Thanks for your feedback, I'll get back to this issue when I make some progress. If you have any ideas/proofs-of-concept for making implicits less restrictive I'm happy to review and merge PRs too.

@kubukoz
Contributor

kubukoz commented Mar 13, 2019

I think with Concurrent we should be able to implement batch in parallel as well. The code won't end up looking as nice as with Par, but that is probably a price worth paying if we can lose the Par constraint in a lot of places.

I think Par is worth keeping to avoid duplicating its implementation for types that have Concurrent. And I'm afraid a parMap2 (or similar) written with IO's Concurrent, for example, might need substantial work to be as bug-free as the parMap2 that comes with IO's Parallel. I doubt users would mind F[_]: Par: Concurrent.

it introduces another third-party dependency that is not officially supported by typelevel/cats.

It's a very small library that's actually endorsed by cats maintainers until cats breaks binary compatibility (see gitter channel). I wouldn't consider this a major obstacle :)

I'm all +1 for getting rid of any Effect constraints. They should be used as rarely as possible.

@peterneyens
Contributor

I think Par is worth keeping to avoid duplicating its implementation for types that have Concurrent.

I think I now disagree with what I said almost 6 months ago as well, and I agree that we should look at whether we can get by with just Concurrent (instead of ConcurrentEffect).

@kubukoz
Contributor

kubukoz commented Mar 13, 2019

Would you feel any different about Par if cats-par was officially endorsed by typelevel?

Here are all the reasons I found not to require Concurrent:

  • it's only used for a reimplementation of parTraverse for which Parallel/Par is enough
  • it's one of the most powerful type classes in the whole of cats/cats-effect: it provides ways to embed arbitrary side effects using FFI-like methods like delay, async, cancelable and friends, which is the main reason why it's usually a good idea to use it sparingly and hide details like calls to race behind algebras that'll be implemented with Concurrent
  • it requires Throwable errors, whereas the only place where Fetch catches and handles errors is the run part (in which MonadError[F, Throwable] would probably suffice)
  • very minor: you can have a Parallel for any monad, which gives the ability to get a class under test with a simple type like Id or Either and not a full-blown IO that may or may not consist of asynchronous blocks (see the short example after this list)
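
To make the last point concrete (a tiny illustrative example, not fetch code): Either already has a Parallel instance that accumulates errors through Validated, so "parallel" code paths can be exercised in tests without IO at all.

import cats.implicits._

// Either's Parallel instance combines errors via Validated and Semigroup[String],
// so parallel combinators can be tested with a plain data type, no threads involved.
val ok: Either[String, Int]  = (Right(1): Either[String, Int], Right(2): Either[String, Int]).parMapN(_ + _)
val bad: Either[String, Int] = (Left("a"): Either[String, Int], Left("b"): Either[String, Int]).parMapN(_ + _)
// ok  == Right(3)
// bad == Left("ab"), both errors accumulated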

FYI ContextShift is completely unused in the code for fetch, too, so the only thing that remains is Clock, usage of which could be superseded by a custom algebra for measuring time (allowing custom implementations that'd simply return a dumb value for tests).
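
For what it's worth, that time-measuring algebra could be as small as something like this (hypothetical sketch, the names are made up and not part of fetch):

import java.util.concurrent.TimeUnit
import cats.Applicative
import cats.effect.Clock

// Tiny algebra fetch could depend on instead of cats-effect's Clock directly.
trait Timekeeper[F[_]] {
  def currentMillis: F[Long]
}

object Timekeeper {
  // Production instance backed by an existing Clock.
  def fromClock[F[_]](clock: Clock[F]): Timekeeper[F] =
    new Timekeeper[F] {
      def currentMillis: F[Long] = clock.monotonic(TimeUnit.MILLISECONDS)
    }

  // Test instance that just returns a fixed value, no Clock/Timer required.
  def constant[F[_]: Applicative](value: Long): Timekeeper[F] =
    new Timekeeper[F] {
      def currentMillis: F[Long] = Applicative[F].pure(value)
    }
}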

@purrgrammer
Contributor Author

Would you feel any different about Par if cats-par was officially endorsed by typelevel?

We removed it because we only wanted to depend on cats-effect; introducing Par from cats-par felt odd, but it is something we can reconsider. Taking a look at #184, thanks for the PR kubukoz!
