Summary
Part of #166979
Experimenting with the basis of what the new Kibana DI system could be.
Why not a decorator-based approach?
This question WILL be asked, so let's get it over with asap.
To be totally honest, I too was naive, and that's the approach I tried initially. After all, it's sexy, cool, fresh and has unicorns all over the idea. I quickly backtracked though, and for multiple reasons:
First, there are inconsistencies between the TypeScript and the ECMA versions of decorators. Even the latest TS implementation (shipped with TS 5) isn't aligned with the latest (2023-05) version of the ECMA spec. Even more disturbing: even though we are using TypeScript in Kibana, I learned the hard way that we're always using Babel for transpilation, and that Babel doesn't delegate most of the transformations to TypeScript. TLDR: we're forced to use the ECMA version of decorators, via babel-plugin-proposal-decorators; we can't use the TypeScript spec.
Which leads to the next point. The recent versions of the ECMA spec (note: the TS version too, but to a lesser extent) have a couple of major issues in my opinion:
- It doesn't support parameter-level decorators
Yeah, that's right: parameter-level decoration of a constructor is simply not supported.
Note that even if param decorators were supported, there's no support in either spec for decorators outside of classes, meaning that decorating a plain (factory) function will never be supported, which is an issue with the way I see factory-based injection.
- It doesn't emit decorator metadata
The equivalent of the `emitDecoratorMetadata` option for TypeScript... well... there's none for Babel out of the box. There are 3rd-party plugins that are doing it, but 1. none is really maintained and 2. they all ask you to use the `legacy` version of the ECMA decorator spec, which is kinda scary for maintenance.
- Injection decorators must be based on concrete types
Interfaces and types are abstract and stripped during transpilation from TS to JS (which makes sense given there's no equivalent in JavaScript). We tend to forget that this greatly limits anything decorator-related.
E.g. in this example, the class decorator `@Inject` would have NO knowledge of the types of the constructor's parameters. There are shenanigans based on the reflect-metadata package, but given it's not natively supported, it forces injecting stuff into all annotated structures, which was kinda scary, and doesn't really solve the whole deal anyway.
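To illustrate the type-erasure point, here's a hand-written sketch (not the actual Kibana code; `Logger`, `MyService`, and `Inject` are hypothetical names). Applying the decorator manually shows exactly what it can observe at runtime:

```typescript
// Sketch only: `Logger` and `MyService` are hypothetical examples.
interface Logger {
  log(msg: string): void;
}

class MyService {
  constructor(private readonly logger: Logger) {}
  run() {
    this.logger.log('running');
  }
}

// Equivalent of a hypothetical `@Inject` class decorator, applied manually:
// all it ever receives is the constructor function itself.
function Inject(ctor: new (...args: never[]) => unknown): number {
  // We can observe HOW MANY parameters the constructor declares,
  // but `Logger` was erased at compile time, so its type is unknowable here.
  return ctor.length;
}

const observedParamCount = Inject(MyService);
```

The decorator sees a parameter count of 1 and nothing more: `Logger` has no runtime representation to introspect.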
So TLDR: imho, decorators are just not good/mature enough to be used for complex things like DI (yes, I know, Angular does it, but it does use the TS version, with decorator metadata, and their DI system is only based on concrete types, which is not how most of our interfaces/contracts are defined in Kibana).
So what does it look like?
Before going into the API and implementation details, I just want to express the general idea:
The DI system will be composed of two layers. The lower layer will try to be as unspecialized as possible (e.g. it should ideally be usable for applications other than Kibana), while still having the root concepts that will allow us to implement the higher, specialized, layer.
Lower layer
The lower layer looks like a very classical, description-driven, dependency injection system.
Base interface
Registering a service (without dependencies) would look like:
Registering a service with a dependencies to this previous services would look like:
Retrieving a service would just look like:
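Since the original snippets aren't reproduced here, the following is a deliberately minimal, self-contained sketch of those three operations. Every name (`Container`, `register`, `params`, `get`) is my assumption of a description-driven API, not the actual proposed one:

```typescript
// Minimal sketch of a description-driven DI container (hypothetical API).
interface Logger {
  log(msg: string): void;
}

type Factory = (...deps: unknown[]) => unknown;

class Container {
  private defs = new Map<string, { factory: Factory; params: string[] }>();
  private instances = new Map<string, unknown>();

  register(id: string, def: { factory: Factory; params?: string[] }): void {
    this.defs.set(id, { factory: def.factory, params: def.params ?? [] });
  }

  get<T>(id: string): T {
    if (!this.instances.has(id)) {
      const def = this.defs.get(id);
      if (!def) throw new Error(`Unknown service: ${id}`);
      // Resolve declared dependencies first, then instantiate (as a singleton).
      const deps = def.params.map((p) => this.get(p));
      this.instances.set(id, def.factory(...deps));
    }
    return this.instances.get(id) as T;
  }
}

const container = new Container();

// Registering a service without dependencies:
container.register('logger', {
  factory: (): Logger => ({ log: (msg) => console.log(msg) }),
});

// Registering a service depending on the previous one:
container.register('repository', {
  params: ['logger'],
  factory: (logger) => {
    const log = logger as Logger;
    return { save: () => log.log('saved') };
  },
});

// Retrieving a service:
const repository = container.get<{ save: () => void }>('repository');
```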
Service type inference
Note that there is no automatic type inference between the type of the parameters as defined in `params` and the types effectively passed to the constructor/factory function. This is because there's no way (AFAIK — I'll like you very much if you prove me wrong) to infer the function's parameters from a type transformation of each individual parameter type:
What I would need is something like:
But yeah, as I said, I'm pretty sure it's not possible.
Which is why I had to find a workaround to preserve type safety as much as possible, and what I found in the end is the use of an id token type, which allows preserving the type artificially.
It's still not perfect, as the developer still needs to make sure that the order of the params matches the order of the function's parameters, but it's still an okay way to work around the problem imho.
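A sketch of the id-token idea (the `ServiceToken` shape and helper names below are my assumptions of what's described above, not the real implementation):

```typescript
// A token carries the service's type at the type level only: the `__type`
// field never exists at runtime, which is the "artificial" part of the trick.
interface ServiceToken<T> {
  id: string;
  __type?: T;
}

const createToken = <T>(id: string): ServiceToken<T> => ({ id });

interface Logger {
  log(msg: string): void;
}

const loggerToken = createToken<Logger>('logger');

// A typed getter can now recover `T` from the token it's given:
function getService<T>(registry: Map<string, unknown>, token: ServiceToken<T>): T {
  return registry.get(token.id) as T;
}

const registry = new Map<string, unknown>([
  ['logger', { log: (msg: string) => console.log(msg) }],
]);

const logger = getService(registry, loggerToken); // inferred as `Logger`
```

The cast inside `getService` is where safety is "preserved artificially": correctness still depends on the token's declared type matching what was actually registered.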
Once again, if someone has a better suggestion or idea, I'm gladly listening.
Inversion of control for registries
At Kibana, we have a problem with registries. We have a lot of them. And we always struggle with the registration patterns, and sometimes the lifecycle approach of the Kibana platform makes it even more awkward.
The DI system has a way to inject all the services matching a given label (or tag, or class, or...), which allows going from explicit, imperative registration into each registry to letting the container collect all the matching services for the registry's owner. Which seems to have many upsides.
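The shift could be sketched like this (a simplified illustration; `VisRegistry`, the tag name, and `getAllByTag` are all hypothetical):

```typescript
interface Visualization {
  id: string;
}

// "Before": an explicit registry every consumer must imperatively push into.
class VisRegistry {
  private items: Visualization[] = [];
  register(item: Visualization): void {
    this.items.push(item);
  }
  getAll(): Visualization[] {
    return this.items;
  }
}

const explicitRegistry = new VisRegistry();
explicitRegistry.register({ id: 'bar_chart' });
explicitRegistry.register({ id: 'pie_chart' });

// "After": services are registered with a tag, and the registry owner asks
// the container for everything matching that tag (inversion of control).
interface TaggedService {
  tags: string[];
  instance: unknown;
}

const taggedServices: TaggedService[] = [
  { tags: ['visualization'], instance: { id: 'bar_chart' } },
  { tags: ['visualization'], instance: { id: 'pie_chart' } },
  { tags: ['other'], instance: { id: 'something_else' } },
];

function getAllByTag<T>(tag: string): T[] {
  return taggedServices
    .filter((s) => s.tags.includes(tag))
    .map((s) => s.instance as T);
}

const allVisualizations = getAllByTag<Visualization>('visualization');
```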
Container inheritance
One of the main characteristics of this injection container implementation is container inheritance. Don't get me wrong, container inheritance is quite common for DI implementations, but here I wanted something that would allow us to easily implement the specialized Kibana DI layer.
E.g.
would translate to something like:
And this is where the concept of service scope comes into play: with scopes, we can define at which layer/level the services will be instantiated and exposed.
Services of lower scope can depend on services of higher scope (services from children can depend on services from parent):
But not the other way around (services from parent cannot depend on services from children).
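The directionality rule can be sketched with a minimal parent/child lookup (the API shape — `set`, `get`, `createChild` — is assumed for illustration):

```typescript
// Minimal parent/child container: lookups fall back to the parent, so
// children can see parent services, but never the reverse.
class ScopedContainer {
  private services = new Map<string, unknown>();

  constructor(private readonly parent?: ScopedContainer) {}

  set(id: string, value: unknown): void {
    this.services.set(id, value);
  }

  get<T>(id: string): T {
    if (this.services.has(id)) return this.services.get(id) as T;
    if (this.parent) return this.parent.get<T>(id);
    throw new Error(`Unknown service: ${id}`);
  }

  createChild(): ScopedContainer {
    return new ScopedContainer(this);
  }
}

const parent = new ScopedContainer();
parent.set('config', { env: 'test' });

const child = parent.createChild();
child.set('requestBound', { requestId: '42' });

// Child resolving a parent service: works.
const config = child.get<{ env: string }>('config');

// Parent resolving a child service: fails, because resolution only walks up.
let parentSawChildService = true;
try {
  parent.get('requestBound');
} catch {
  parentSawChildService = false;
}
```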
Multiple inheritance levels
Inheritance isn't limited to one level. Children can be created from any level / any container.
The intent is to represent our different "injection/service levels" in the top layer, which could for example look like this (probably not the final shape of the pyramid, just the rough idea):
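As a rough sketch, with the level names taken from #166979 (the nesting API itself is an assumption):

```typescript
// Hypothetical nesting of containers, one level per Kibana scope.
type Level = 'global' | 'project' | 'user' | 'request';

interface LevelContainer {
  level: Level;
  parent?: LevelContainer;
}

const createChild = (parent: LevelContainer, level: Level): LevelContainer => ({
  level,
  parent,
});

const root: LevelContainer = { level: 'global' };
const project = createChild(root, 'project');
const user = createChild(project, 'user');
const request = createChild(user, 'request');

// Walking up from the request container reaches every broader scope.
const chain: Level[] = [];
for (let c: LevelContainer | undefined = request; c; c = c.parent) {
  chain.push(c.level);
}
// chain: ['request', 'user', 'project', 'global']
```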
Which would achieve a more versatile (and more integrated) version of what request context holders are currently doing, but with more levels (not only global/request scopes)
Note that to achieve that, we need to extend the concept of `scope` to more than just `global | container`, as we will need to be able to define any of those levels as the correct "level" for service instantiation. What I currently have in mind for that is to allow passing a predicate function as `scope`
, and to pass it all the containers, starting from the current one up to the root, so that the service registration can decide which level the service should be bound to.

Top (Kibana) layer
Nothing concrete yet (will update the issue and the PR accordingly when my investigations progress), but the idea is to use the low level DI system with the proper nesting of contexts.
The concept of "scope" will change for the API consumer here too, as it won't be choosing between `global`, `container`, or defining its scope predicate. This will be done under the hood, and all the consumer will have to choose from will be the Kibana prebuilt scopes (as already described in #166979): `global`, `project`/`tenant`, `user` and `request`.

Ideally, I'd like to have per-plugin scoping too, which would allow doing things like:
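A sketch of what that could allow (everything here — `registerService`, the `scope` value shape — is hypothetical brainstorming, not a proposed API):

```typescript
// Hypothetical registration API accepting a per-plugin scope.
interface ServiceDefinition {
  id: string;
  scope: 'global' | 'user' | 'request' | { plugin: string };
  factory: () => unknown;
}

const registrations: ServiceDefinition[] = [];

function registerService(def: ServiceDefinition): void {
  registrations.push(def);
}

// A plugin registering a cache instantiated once per plugin,
// rather than once globally or once per request:
registerService({
  id: 'myPluginCache',
  scope: { plugin: 'myPlugin' },
  factory: () => new Map<string, unknown>(),
});

const pluginScoped = registrations.filter(
  (d) => typeof d.scope === 'object' && 'plugin' in d.scope
);
```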
However, I'm afraid it may be hard to implement with the current design, as it's unclear where the `plugin` scope would go (e.g. if the plugin scope is too high, like higher than tenant/user, we would be recreating duplicates of the tenant services per plugin, which feels like a terrible thing), so atm I'm really not sure, and need to experiment.

Maybe container composition could be an answer (allowing multiple parents), but we can easily see how much of a nightmare it would become with service scoping, so I'm not sure this could be viable either. Or maybe we could have a "main" parent and "mixins", but yeah, this is still brainstorming here (insights welcome btw).
Cohabitation with the current system
I've been thinking a lot about this too, and I'm still not sure what the best way would be.
Atm, I think that for the transition period, we won't be able to erase the concept of setup/start contracts, and will be forced to keep them somehow in the new DI system. It likely means we would have a different context for `setup` and `start` (or maybe we can reuse the two and add the start services later in the init cycle...).

The consumption of the new DI API from within the lifecycle could look like:
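A sketch of that consumption, with an assumed `injection` helper exposed on the setup core (none of these names are final; the fake core below only exists to exercise the shape):

```typescript
// Hypothetical shape of a plugin consuming the DI system from `setup`.
interface InjectionContext {
  register(id: string, factory: () => unknown): void;
  get<T>(id: string): T;
}

interface CoreSetup {
  injection: InjectionContext;
}

class MyPlugin {
  setup(core: CoreSetup) {
    // Register this plugin's service into the setup-phase container...
    core.injection.register('myService', () => ({ doSomething: () => 'done' }));
    // ...and resolve services exposed by dependencies the same way.
    const myService = core.injection.get<{ doSomething: () => string }>('myService');
    return { result: myService.doSomething() };
  }
}

// Minimal fake core, just to make the sketch executable.
const services = new Map<string, () => unknown>();
const fakeCore: CoreSetup = {
  injection: {
    register: (id, factory) => void services.set(id, factory),
    get: <T>(id: string): T => {
      const factory = services.get(id);
      if (!factory) throw new Error(`Unknown service: ${id}`);
      return factory() as T;
    },
  },
};

const contract = new MyPlugin().setup(fakeCore);
```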
Note that this is very close to what was already done for runtime dependencies:
kibana/packages/core/plugins/core-plugins-browser-internal/src/plugin_context.ts
Lines 96 to 99 in c713b91
We could also, internally in Core, automatically register each plugin's contracts to the root container. I'm unsure whether this is a good idea or not, as it could discourage teams from actively transitioning to the new system (but they would be forced to address their cyclic dependency issues anyway, so I'm not sure).

This could look like:
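A sketch of that automatic registration (the `registerContract` helper, the id scheme, and the plugin names are illustrative assumptions):

```typescript
// Hypothetical: Core registering each plugin's contracts into the root
// container under a derived id, e.g. `plugin.<name>.<phase>`.
const rootContainer = new Map<string, unknown>();

function registerContract(
  pluginName: string,
  phase: 'setup' | 'start',
  contract: unknown
): void {
  rootContainer.set(`plugin.${pluginName}.${phase}`, contract);
}

// During plugin initialization, Core could do this for every plugin:
registerContract('data', 'setup', { registerType: () => undefined });
registerContract('data', 'start', { search: () => [] });

// Any service could then depend on another plugin's contract by id:
const dataStart = rootContainer.get('plugin.data.start') as {
  search: () => unknown[];
};
```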
Getting rid of explicit plugin-to-plugin dependencies
With this new DI system, we won't necessarily need plugin-to-plugin dependencies to be explicitly declared. More importantly, we would need a way to define dependencies between plugins without causing cyclic dependency errors (as plugin cyclic dependencies !== service cyclic dependencies).
Once things get more concrete, we will have to consider that point too.
Changes to plugin structure
Allowing inter-plugin cross dependencies via services will force us to revisit the way we're asking teams to structure their plugins.
ATM plugin types are exposed from the same TS project as the plugin's implementation. With cross dependencies via services, this won't be possible anymore, given you can't cross import between TS projects.
We will have to find a solution, either by separating the types (and service id tokens) from the implementation, similar to what was already done in Core, or by rechallenging the use of distinct TS projects per plugin (but I somewhat doubt we will be able to easily backtrack from this one, so the first option seems like the only realistic one).