feat(core): allow using dataloader for references and collections #4321
Conversation
Why would you need to build anything to test it? Write tests — they use ts-node, so nothing needs to be built. Or what am I missing? Tests will be needed either way, and you can run them in the CI so you are not blocked now. I've never heard of ppc64le, so I don't know how I could help you with that.
Because I already have tons of tests in my project, which already make use of an external library version of the dataloader.
Would turborepo be any better? We could switch to that for the task running and keep lerna only for publishing.
It was, but unfortunately it isn't anymore: they are planning to add ppc64le support back, but I fear it's not going to happen anytime soon.
@B4nan if I could manage to find where that unsupported error comes from, I could try to build my own ppc64 binary (or even emulate the x86 one with qemu-user), but unfortunately it's not so obvious:
Apparently the lerna error comes from nx: https://github.com/nrwl/nx/blob/master/packages/nx/src/native/index.js#L235 Do we actually use the nx stuff? I didn't even know nx was a thing in lerna...
It is used as the default task runner now. Maybe this could even be replaced with
The problem is that when using yarn as a task runner, it doesn't find the root dependencies from inside the workspaces:
I think there was some option to allow that?
Hmm, but that's for scripts, not dependencies...
Unfortunately yes, it would simply run the top-level script.
That's not what the documentation says.
Binaries in the documentation are what you're referring to as dependencies. If you change line 53 of `packages/core/package.json` (as of 9ebdd87) to `yarn run -T rimraf ./dist`, it will use rimraf from the root workspace.
@merceyz looks like it's working, but unfortunately I cannot manage to get the Workspaces to use a wildcard:
Unfortunately it's not clear whether it's for berry or yarn v1, and it looks like a won't-fix, basically. Am I missing something? EDIT: @merceyz never mind, it looks like the correct option was
I've finally managed to build mikro-orm using yarn's own task runner and incorporate it into my project via portals. The only minor annoyance so far has been having to change
You shouldn't be using any deep imports. What types are you missing that are not exported from the root of the package?
```ts
import type {
  EntityKey,
  EntityProps,
  ExpandProperty,
  ExpandScalar,
  FilterValue2,
  Loaded,
  Query,
  Scalar,
} from "@mikro-orm/core/dist/typings";

import type { EntityKey, IWrappedEntityInternal } from "@mikro-orm/core/dist/typings";
```

These are mostly for the find dataloader (a couple of them for the collection dataloader, maybe). Apart from the dataloader I didn't have to use deep imports anywhere else in my app. I suggest ignoring them for the moment; I'll add a stub commit for the find dataloader and we can decide if this is something worth merging alongside the ref and collection ones. If we decide to merge it we won't need these exports on the root package anymore, otherwise we can reason about exporting them.
Ok, we could also export those types under some namespace, e.g.
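A sketch of what such a namespace export could look like (the `DataloaderTypes` name and its members are hypothetical stand-ins, not the package's actual API — the real types are generics living in `dist/typings`):

```ts
// Hypothetical: instead of deep imports into dist/typings, the package
// root could re-export the internal types under a single namespace.
// Simplified stand-ins for the real (generic) internal types:
namespace DataloaderTypes {
  export type EntityKey = string;
  export type Scalar = string | number | boolean | Date;
}

// Consumers would then reference one namespace from the package root
// rather than reaching into "@mikro-orm/core/dist/typings":
const key: DataloaderTypes.EntityKey = "title";
const scalar: DataloaderTypes.Scalar = 42;
```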
I've added a basic version of the Collections dataloader. It's not as straightforward as the Reference one, because for collections we have to filter the results to re-assign them to the original collections, but it shouldn't be too hard to understand either. Let me know if something is not clear. I've tested it against my own project's test suite and it works well. P.S.
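The "filter the results back onto the original collections" step can be illustrated with a standalone sketch (all names here are invented for illustration, not the PR's actual code): fetch the items for every requested owner in one batched query, then bucket the flat result set per owner, preserving the input key order that a dataloader batch function must respect.

```ts
// Hypothetical sketch of the collection dataloader's re-assignment step.
interface Book { id: number; authorId: number }

// Simulates one batched query, e.g. SELECT ... WHERE author_id IN (...)
function findBooksByAuthorIds(db: Book[], authorIds: number[]): Book[] {
  const wanted = new Set(authorIds);
  return db.filter(b => wanted.has(b.authorId));
}

// One bucket per original collection, in the same order as the keys.
function groupByOwner(rows: Book[], authorIds: number[]): Book[][] {
  const buckets = new Map<number, Book[]>();
  for (const id of authorIds) buckets.set(id, []);
  for (const row of rows) buckets.get(row.authorId)?.push(row);
  return authorIds.map(id => buckets.get(id)!);
}

const db: Book[] = [
  { id: 1, authorId: 10 },
  { id: 2, authorId: 20 },
  { id: 3, authorId: 10 },
];
// One query serves both collections; each gets only its own rows back.
const grouped = groupByOwner(findBooksByAuthorIds(db, [10, 20]), [10, 20]);
```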
I've pushed the find dataloader as well. This one is quite a bit more complex, because it basically tries to optimize any kind of possible query into the smallest possible number of queries. It doesn't always manage to be faster, so it's not something that we want to enable by default for every query, but amazingly it manages to be faster in some real-world GraphQL scenarios (at least in my application).

Once you get the hang of it, it's not that complex (it's the second time I've rewritten it, and I've focused on keeping things simple while achieving worthwhile performance), but feel free to ask me any questions, including a full detailed explanation if needed.

I wanted to make it capable of covering the whole set of operators, but I don't want to do so at the further expense of performance, so I'm gradually adding more whenever I find the right use case in my own application. I think that even having a slightly tighter scope compared to the normal find could still be fine, because that covers 90% of the use cases and you can always create your own specialized dataloader for complex queries. I benchmark it case by case and enable it for the queries that I know will suffer a lot from GraphQL's inherent N+1 nature. Not sure if we want to merge it; it's definitely useful, but it might never reach full operator coverage (and maybe that's not even necessary).
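The core idea — many independent finds collapsed into one query, with the combined result set partitioned back per caller — can be sketched in isolation (names and the single-field filter shape are simplifying assumptions, not the PR's implementation, which handles far more general filters):

```ts
// Hypothetical sketch: merge N equal-shaped filters into one $in-style
// query, run it once, then hand each caller only the rows it asked for.
interface Book { id: number; authorId: number }
type Filter = { authorId: number };

// Merge step: collapse the individual conditions into one value list.
function mergeFilters(filters: Filter[]): number[] {
  return [...new Set(filters.map(f => f.authorId))];
}

// One simulated query instead of filters.length queries.
function runMerged(db: Book[], authorIds: number[]): Book[] {
  const ids = new Set(authorIds);
  return db.filter(b => ids.has(b.authorId));
}

// Partition step: each caller gets exactly the rows its filter matches.
function partition(rows: Book[], filters: Filter[]): Book[][] {
  return filters.map(f => rows.filter(r => r.authorId === f.authorId));
}

const db: Book[] = [
  { id: 1, authorId: 1 },
  { id: 2, authorId: 2 },
  { id: 3, authorId: 1 },
];
const filters: Filter[] = [{ authorId: 1 }, { authorId: 2 }];
const results = partition(runMerged(db, mergeFilters(filters)), filters);
```

The hard part in the real implementation is that filters are not all equal-shaped, which is why full operator coverage is incremental: each new operator needs its own merge and partition logic.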
Force-pushed from ffd989a to 854a974 (…_meta.className for performance)
@B4nan the lock file has been regenerated and documentation has been created. I've also added a new global option to surgically enable each dataloader.
two final things before we merge
@B4nan done, should be ready to merge.
all right, thanks for holding on with me!
This is currently a very basic implementation so that we can start talking about where we want this headed. I left out the collection dataloader as well as options handling and kept it as simple as possible. The biggest priority right now is being able to build mikro-orm on ppc64le to actually test it: lerna/lerna#3676
I've tried to build just `core`, but I had no luck either: