[Fleet] Integration installation issues in multi-space Kibana environment #143388
Pinging @elastic/fleet (Team:Fleet)
Cause

On package installation we create a "managed" tag, which we then apply to all package assets. This tag is a saved object itself. Currently we create this saved object with the same hard-coded ID every time. The issue is that IDs for saved objects have to be unique across spaces: tag saved objects are 'multiple-isolated', meaning they are restricted to one space but their IDs must be unique across all spaces (read more here). When we install a package in the default space, the managed tag saved object is created in that space. When we then switch to another space and install a package, we first query to see if the managed tag saved object exists; this query only covers saved objects in the current space, so it returns nothing. We then attempt to create the saved object with the hard-coded ID, but this is rejected because a saved object with that ID already exists.

The race condition / Why do we use a hard coded ID?

The reason we use a hard-coded ID is to prevent a race condition. For example, when installing a package (e.g. apache) in a new policy with system monitoring enabled, we install the apache package and the system package semi-concurrently, meaning they both attempt to create the managed tag saved object at the same time. By using a hard-coded ID and setting …

Possible solutions

1. Make the tag a shared saved object ❌
   Update: Raised #143869, not proceeding with this solution. TLDR: too much work to justify for this bug.
   i.e. move from 'multiple-isolated' to 'multiple' for tag saved objects.
2. Create a managed tag in every space on installation
   2.a Include the space in the saved object ID 🟠 (See solution 3 in comment below)
   2.b Use auto-generated saved object IDs ❌ (Probably not)

Steps forward

I think option 2.a is the way forward as it is simplest and requires minimal changes; the only complication is the SO migration, which I am looking into now.
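To make the failure mode concrete, here is a minimal TypeScript sketch of the pre-fix flow described above. The constant and function names (`MANAGED_TAG_ID`, `ensureManagedTag`), the tag attributes, and the import path are illustrative assumptions rather than Fleet's actual code; only the space-scoped lookup followed by a create with a hard-coded ID reflects the behaviour described in this issue.

```ts
import type { SavedObjectsClientContract } from '@kbn/core/server';

// Hard-coded tag ID used in every space (illustrative value, not Fleet's real ID).
const MANAGED_TAG_ID = 'fleet-managed';

// Hypothetical sketch of the pre-fix flow described above.
async function ensureManagedTag(soClient: SavedObjectsClientContract): Promise<string> {
  // This lookup is scoped to the current space, so it finds nothing when the
  // tag only exists in the default space.
  const existing = await soClient.find<{ name: string }>({
    type: 'tag',
    search: 'Managed',
    searchFields: ['name'],
  });
  if (existing.total > 0) {
    return existing.saved_objects[0].id;
  }

  // 'overwrite' lets two semi-concurrent installs in the same space both
  // succeed, but because 'tag' is a 'multiple-isolated' type the ID must be
  // unique across ALL spaces, so this create is rejected with a conflict when
  // the tag was already created in another space.
  const created = await soClient.create(
    'tag',
    { name: 'Managed', description: 'Managed by Fleet', color: '#0077CC' },
    { id: MANAGED_TAG_ID, overwrite: true }
  );
  return created.id;
}
```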
Hello @hop-dev - by saved objects per space, do we mean the assets or also the Fleet objects?
2.a has some edge cases which could cause a mess. E.g. importing saved objects that were exported before the ID-renaming upgrade would still reference the old ID and probably contain the old tag, causing duplicate tags. We could fix this by running a migration task every X, but if X is large this causes weird behaviour in the UI where a tag might suddenly disappear.

Isn't (1) closer to what we really want? We want one tag to tag all Fleet-installed saved objects. The fact that users can change the tag colors etc. across spaces is a feature of sharing. Although we're currently in this in-between state with …

This makes me wonder about the ideal behaviour of installing packages across spaces more generally. If dashboards were shareable and a package containing a dashboard gets installed to a second space, would we want to share the existing dashboard into that space or create a new dashboard there? And what about data views? What does it mean to install a package to a space when the package might contain non-space-isolated "assets", e.g. if a package creates an index and sets up ILM policies?
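For reference, the two namespace types being discussed differ only in how the saved object type is registered. A minimal sketch, assuming an illustrative type name and mappings (the real 'tag' type is registered by the saved objects tagging plugin, not shown here):

```ts
import type { CoreSetup } from '@kbn/core/server';

// Illustrative registration showing the two namespace types under discussion.
// 'multiple-isolated': each object lives in exactly one space, but its ID must
// still be unique across every space (the constraint this bug trips over).
// 'multiple': the same object can be shared into several spaces, which is what
// option (1) would require for tags.
export function registerExampleTagType(core: CoreSetup) {
  core.savedObjects.registerType({
    name: 'example-managed-tag', // hypothetical type; not the real 'tag' type
    hidden: false,
    namespaceType: 'multiple-isolated', // option (1) would move this to 'multiple'
    mappings: {
      properties: {
        name: { type: 'keyword' },
        color: { type: 'keyword' },
      },
    },
  });
}
```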
Agreed, I hadn't thought through the implications of changing saved object IDs. I don't think periodically migrating is desirable for these tags, especially as they are quite inconsequential to the user.
Yes, on reflection I think so. Is changing from 'multiple-isolated' to 'multiple' a big change? I guess we would need to update the tagging client to be space aware. I will create an issue for this.
All very good points, and we will be looking at this eventually; I agree it's going to be complicated! For now we just want Fleet not to break when installing packages in different spaces, and I agree the UX is not great at the minute. Currently this tag saved object issue prevents users from installing any package in a second space once a package has been installed in another.
@kpollich I've moved this to blocked while I try and figure out a way forward with the core team
@rudolf @kpollich After going round the houses a bit, I am back to proposing a modification of 2.a above.

Solution 3: Create a tag saved object in each space, but keep the legacy tag + no SO migration

As in 2.a above, use the spaceId in the tag ID, so … However, if …

Why avoid migrating?

The migration would have to create the new tag, delete the old one, and re-assign the assets to the new tag. This would not be atomic and we would have to do it on plugin start, which would be complex and fragile.
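A minimal sketch of what solution 3 could look like, assuming a space-aware ID along the lines of `fleet-managed-<spaceId>` with the legacy hard-coded ID kept as a fallback. The ID format, helper names, attribute values, and import path are all assumptions for illustration, not the code that was actually merged.

```ts
import type { SavedObjectsClientContract } from '@kbn/core/server';

// Legacy hard-coded ID kept as-is so existing installs need no migration (illustrative value).
const LEGACY_MANAGED_TAG_ID = 'fleet-managed';

// Hypothetical space-aware ID format; the exact scheme is not spelled out above.
const getManagedTagId = (spaceId: string) => `fleet-managed-${spaceId}`;

async function findOrCreateManagedTag(
  soClient: SavedObjectsClientContract,
  spaceId: string
): Promise<string> {
  // Prefer a tag that already exists in this space, checking the legacy ID
  // first so older installations keep using their original tag.
  for (const id of [LEGACY_MANAGED_TAG_ID, getManagedTagId(spaceId)]) {
    try {
      const tag = await soClient.get('tag', id);
      return tag.id;
    } catch (e) {
      // treated as "not found in this space" for this sketch; try the next candidate
    }
  }

  // Space-specific ID, so creating the tag in a second space no longer
  // collides with the tag created in the default space.
  const created = await soClient.create(
    'tag',
    { name: 'Managed', description: 'Managed by Fleet', color: '#0077CC' },
    { id: getManagedTagId(spaceId), overwrite: true }
  );
  return created.id;
}
```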
Thanks, @hop-dev - the new solution makes sense to me. I appreciate you describing why we can't do this as a migration here.
Fix has been backported to 8.5.x as I thought it was severe enough 👍
Hi @kpollich, We have re-validated this issue on the latest 8.6.0 SNAPSHOT Kibana Staging environment and found that the issue is fixed. Build details:
Below are the observations:
Screen Recording: Spaces.-.Elastic.-.Google.Chrome.2022-11-10.12-52-07_Trim.mp4

Hence, marking this issue as QA: Validated. Thanks!
Hi @kpollich, We have created 01 test case for this feature under our Fleet Test Suite: Please let us know if anything else is required from our end. Thanks!
Hi @kpollich, We have re-validated this issue on the latest 8.5.1 BC1 Kibana Staging environment and found that the issue is fixed. Build details:
Below are the observations:
Screen Recording: Home.-.Elastic.-.Google.Chrome.2022-11-15.16-04-55.mp4

Please let us know if anything is missing from our end.
Summary
When installing integrations in a non-default space, the installed integration is not visible under Installed Integrations in either the default space or the non-default space.

To reproduce
Screen.Recording.2022-10-14.at.11.17.40.AM.mov
I do see some interesting error logs during the installation process