[DISCUSS] Kibana index version migrations #15100
Had a short meeting with @tylersmalley and @chrisronline where some of this came up. Proposal:
Thoughts on transformations:
This is a quick writeup, I want to think more about it still but thought I'd see if there's any feedback. |
I did a little investigation/thinking about this concept and have some very basic code in a working state. We could use this as a platform for more discussion about how to do this. |
I like @jbudz's idea of detecting invalid mappings and giving a clear error. I've used a number of migration tools (in Ruby, Clojure, .NET, and JavaScript). And a long time ago, I wrote a SQL migration tool for .NET because I didn't like EF. I'd be happy to contribute to this effort. My thought is that we'd do something similar to Rails db migrations:
```js
// migrations/20180306161135-advanced-bar-chart-color-support.js or somewhere well-defined
export const key = "bar-chart"; // The type of data being migrated

// This gets called for each instance of "bar-chart" data that's in the Kibana index.
// If that would be too slow (dunno how big our index is), we can enforce that migrations
// are all done via painless. But I'd personally rather start with a full-fledged language like JS,
// and use painless only as an optimization technique. This is also nicely testable.
export function up(oldState) {
  return {
    ...dissoc(oldState, ["defaultColor"]),
    style: {
      background: oldState.defaultColor,
      borderColor: oldState.defaultColor,
    },
  };
}

// We may not need to bother with this. Most places I've worked, we only really wrote
// up migrations, and if they caused an issue, we wrote further up migrations to address those.
export function down(newState) {
  return {
    ...dissoc(newState, ["style"]),
    defaultColor: newState.style.background,
  };
}
```
|
Hm. Thinking about it a bit more...
An example of one of these scenarios is the way the Kibana dashboard stores its plugin data. Right now, there are two records for every widget you see on the dashboard. There's the wrapper/container's record and there's the wrapped visualization's record. We are considering unifying these two records. A pseudocode migration for this might look something like this:

```js
// migrations/20180306161135-combine-panel-and-child-records.js
// conn is an interface to the Kibana elastic store
export async function up(conn) {
  return conn.each({ type: "dashboard-panel" }, async (panel) => {
    const childState = await conn.find({ id: panel.childId });
    await conn.upsert({
      ...panel,
      childState,
    });
    await conn.delete(childState);
  });
}
```
|
To clarify a bit for @chrisdavies - dashboard will need the ability to parse fields, then separate and/or combine them into more or fewer fields on the document. Essentially this issue: #14754 I don't think we currently have any instance of needing to combine documents, though it's an interesting thought. If we did ever have that need, how could it be worked into a migration process? If we ever flattened the dashboard data structure (put all visualizations into a single dashboard object to ease security issues with nested objects) we might need it! But I'm pretty sure we won't ever do that and are just going to address security issues assuming our nested object environment. About the transactional part - we'll need to handle this manually ourselves, since ES doesn't support transactions. Copy the old Kibana index and migrate to a new one. Maybe even leave the old one around just in case anything goes wrong, rather than deleting it right away upon success. We've had bugs in the migration process before, and retaining that old index is pretty important to avoid data loss in those unexpected cases. |
I like the way you're thinking about this @chrisdavies. One part I have been thinking about is how we handle this from a user's point of view. If we detect on start-up that a migration is required, probably due to an upgrade, we should force the user to a page explaining the migration. This will give the user an opportunity to backup/snapshot their Kibana index. The page would provide a button which would perform the migration, re-indexing into a new index and updating the alias. Currently, we don't use aliases for the Kibana index, so this would be new, but it would allow users to revert back (last resort). The Cloud team would need this migration to be available as an API call, since they need to avoid presenting this to the user when they upgrade the cluster. Having this UI would help prevent issues where someone stands up a newer version of Kibana and points it at an existing Kibana index. This way, the user knows they will be performing a migration and would need to have the permissions to do so. If we automatically did this migration, the existing Kibana index would no longer be compatible with the previous version of Kibana. I am not sure that we need a down migration. I am thinking that there would only be a single migration possible per minor release. This should greatly simplify things, and the new index would be the |
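The reindex-into-a-new-index-and-swap-the-alias flow described above can be sketched as a pure function that produces the Elasticsearch operations involved. This is a minimal sketch: the name `buildAliasMigrationPlan` and the index-naming scheme are hypothetical, and mappings/transforms are omitted.

```js
// Sketch (assumption): compute the operations needed to migrate an aliased
// Kibana index to a new versioned index. The old index is intentionally left
// in place so users can revert as a last resort.
function buildAliasMigrationPlan({ alias, currentIndex, targetVersion }) {
  const destIndex = `${alias}-${targetVersion}`;
  return {
    // 1. Create the destination index (mappings omitted in this sketch)
    createIndex: { index: destIndex },
    // 2. Copy all docs; migration transforms would run during this step
    reindex: { source: { index: currentIndex }, dest: { index: destIndex } },
    // 3. Atomically repoint the alias at the new index
    updateAliases: {
      actions: [
        { remove: { index: currentIndex, alias } },
        { add: { index: destIndex, alias } },
      ],
    },
  };
}

const plan = buildAliasMigrationPlan({
  alias: '.kibana',
  currentIndex: '.kibana-6.2.0',
  targetVersion: '6.3.0',
});
```

Because the alias swap is a single update-aliases call, readers never see a half-migrated index, and the old index survives for manual rollback.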
@tylersmalley Makes total sense. I've only just been introduced to painless (and Elastic, for that matter), but it makes sense that we'd have to support it. The only thing I'm not sure about is the single migration per minor release, though. That seems a little limiting, unless you mean a single migration per type + minor release? In which case, I think we could safely say that. |
After chatting with a few folks, here's the game plan so far:

Migration functions

Checking if migration is required

Migrating

Notes:

Why aren't we using reindex + painless?

Why aren't we storing a list of applied migrations, and then simply applying any migrations not in that list?

Why don't we delete the old index?

Why a checksum instead of just comparing the latest migration id in the index with the latest migration id in Kibana source?
|
Will plugin authors be able to hook into the migration system? Or do plugin authors need to maintain their own migrations? |
I have so many thoughts about this but not enough time today to weigh in. For now, I'll just weigh in on what @trevan mentioned and say that it is critical that migrations be pluggable from day one. |
Yes. Plugin authors will be able to add their own migrations. I'm not sure of the exact details yet, but my current thought is that it's similar to the way web-frameworks typically do it. A migration would be something like the examples earlier in this thread. If we took that approach, the question is how such migrations should be run:
I'm leaning towards option 2, as it's much simpler, but it can be argued either way. If we go with option 2 (running each folder of migrations independently), we can get away with not having to do a migration diff in order to determine what migration(s) need to be run. Plugins are expected to be compatible with whatever version of Kibana they are hosted in. So, plugin migrations might reasonably be expected to assume that core Kibana docs are already in the current version's format. Plus, I suspect that plugins will only be migrating their own documents and not touching core Kibana documents. (Is that true?) Anyway, if we go with option 2, there will be a checksum and latest migration id per plugin, and the check for "Do we need to run a migration?" would be a comparison of the checksums. Edit: Another question that arises when you think about plugins is: how do we want to handle the scenario where all migrations succeed, except for some migrations for a certain plugin? My thought is, we'd probably want Kibana to still run, but just have the failed plugin(s) disabled until their indexes are properly migrated. Thoughts? |
@chrisdavies, if a visualization is considered "core Kibana documents" then visualization plugins will definitely migrate core Kibana documents. You need to make sure that a plugin can migrate without Kibana changing its version. Say you have plugin A with version 1.0 that is compatible with Kibana 7.0. Plugin A has an update to version 1.1 which is still compatible with Kibana 7.0 but it requires a migration. Updating plugin A should trigger a migration even though Kibana isn't being upgraded. |
@trevan Right. It won't be tied to a Kibana version for exactly that reason. If we detect any unrun migrations, we'll require migrations to be run before Kibana can be considered properly ready to run. So, if someone drops a new migration into a plugin, Kibana will no longer be considered to be initialized. That's the current thought. The idea being that plugins are as important as core. For example, if someone has a critical security plugin, and that plugin has not been properly migrated, Kibana should not assume it's OK to run. I have a couple of questions for anyone interested in this: Question 1: What strategy should be used to run migrations?
I'm leaning towards the last option, as it makes migrations more predictable for migration authors. Question 2: What should we do about this scenario:
Should this be considered an error needing manual correction? Should we just run it and assume the best (I vote no, but am open to discussion)? Should we require migrations to have both an "up" and a "down" transformation, in which case we could roll back? |
This will create a minor delay, but if we do this all through the new platform instead of relying on the old, then we have a topological sorting of plugin dependencies, which means we can guarantee that migrations are always run in the proper order based on the plugin dependency graph. I don't think either of the options you proposed are reliable enough to be used for something this important.
We should never even attempt to apply migrations for a system if something seems out of whack with it. If we identified any issue like this, my expectation is that this would error with an appropriate message and Kibana would fail to start. Overwhelming validation is the story of the day here.
Up/down migrations aren't practical in this world because we can't enforce lossless migrations, so we may not have all the necessary data to revert back to an older state. This is OK in a traditional migration system because the idea is that an intentional revert in a production database is a nuclear option, so consequences are expected. We're talking about these migrations running all the time, so losing data is unacceptable. To us, rolling back a migration means rolling back Kibana to an older index.
Let's not rely on file names at all for this. Instead, let's require that plugins define an explicit order of their various migrations directly in their plugin code. Sort of like an entry file to migrations for that plugin. This makes it easy to create unit tests to verify migration orders and stuff without having to fall back to testing the file system. Aside from plugin loading itself, we should really avoid loading arbitrary files from the file system where possible. This has the added benefit of making migrations just another formal extension point that can be documented alongside any other future plugin contract documentation rather than a special case. |
Good points, everyone. Re @epixa, agreed. Further notes based on your comments: Migration order We'll rely on the new platform. It's not too much risk, as the new platform is scheduled for merge fairly soon, and migrations are a good few months off, even by optimistic estimates. This solves the migration-order problem. File system I think you're right about this. We won't do a file-system-based approach, but here are some points in favor of a file-based migration system, which our system needs to address: A file-system based migration system provides:
You can fairly easily unit test your migrations if done with a file-system. But a big downside to the file-system approach is that it would disallow transpilation, since it's loaded/run dynamically in production. I think that is a deal-breaker. We can expose migrations programmatically (e.g. as part of a plugin's interface). We'll use something similar for core Kibana. My thought is that it would be essentially an ordered array of migrations, each of which is a hash/object with three keys/props:
This means we need to detect duplicate IDs and error on that, as we can't rely on the filesystem to do this for us. We should also highly encourage a best-practices convention for authoring migrations; I think we do this via a yarn command.

Error cases

We'll error if:
|
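Since the filesystem no longer guarantees unique migration ids, the duplicate-id check mentioned above could be a simple validation pass over each plugin's registered array (a sketch; the real registration shape is still undecided):

```js
// Sketch (assumption): each migration is { id, filter, transform }. Because
// migrations are registered in code rather than discovered on disk, nothing
// prevents two entries from sharing an id, so validate explicitly at startup.
function assertUniqueMigrationIds(migrations) {
  const seen = new Set();
  for (const { id } of migrations) {
    if (seen.has(id)) {
      throw new Error(`Duplicate migration id: ${id}`);
    }
    seen.add(id);
  }
  return migrations;
}
```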
How do we want to handle the scenario where a plugin is added to an existing Kibana installation?
The latter is preferable for many reasons, but might not be realistic. Can we dictate that plugin migrations should only apply to docs created by those plugins? If not, we need to run migrations as soon as a plugin is added, and each migration needs to be intelligent about detecting the shape of the data it is transforming, as that data may be in any number of shapes depending on the version of Kibana / other plugins in the system at the time the new plugin was added. |
I think this one is at least partly dependent on how tightly coupled this migration stuff is to the initial mapping creation stuff. Migrations will be able to change the values of stored data, but they can also modify the mappings for their objects, which is something that applies to all future objects as well. If we treat all of this as the same problem, then the initial mappings are essentially just migration zero, and in order to have the mappings properly set up for today, you must run all of the available migrations. If you can figure out a nice way to handle this that doesn't require running migrations unnecessarily, that would certainly be ideal.
I think yes, we should lock this down and not allow any plugin to directly mutate the objects of another plugin. Plugins shouldn't have to worry about their own objects changing without their control. If they want to make the structure of their own objects pluggable, then they can expose a custom way for other plugins to modify them. |
One example of objects that might need to be directly mutated by another plugin are visualizations. A plugin can add its own visualization as well as its own agg type. For the first case, the plugin would be migrating a visualization object that is completely owned by itself. For the second case, a visualization from a completely different plugin might have the agg type from the first plugin that needs to be migrated. We do that where we've added a custom agg type (country code) that is used by the region map visualization. |
It seems that in this scenario, plugin2 is simply a consumer of plugin1's data. It doesn't seem as if plugin2 should be mutating plugin1's data, right? I suspect nothing but dragons lie down that road. |
My initial thought was that seeding / initializing data should be considered separately from migrations, and I'd only focus on migrations here. But, I don't think that's possible, due to this scenario: pluginA is being upgraded to 2.0, and 2.0 needs a brand new document where it stores certain settings, so it needs to seed that. And that new seed data shouldn't pass through any previous migrations, but it should pass through future migrations. In other words, it's possible (probable?) that a system will require a combination of seeding and migrating over time. And these must necessarily be consistently ordered or else we'll have an unpredictable outcome. So, yeah. I think it would be beneficial to have this one system handle both seeding of new data and migration of existing data.
I think we might be able to initialize a plugin without requiring a full migration of the Kibana index. Essentially, if we have a brand new plugin and/or brand new system, we may be able to have new/seed documents pass through the migration pipeline and directly into an existing index. I don't love modifying an existing index, but in this case, it should be relatively harmless, as the old pre-plugin system should be unaffected by any docs created by the new plugin. |
I'm not sure if we are saying the same thing or different things. As an example, the plugin A has a visualization "region_map". It owns that visualization and so it should probably handle all of the migration. But plugin B adds a custom agg type which is available in the UI as a possible option in "region_map". If the user creates a region_map visualization using the custom agg type from plugin B, then the data stored in elasticsearch will be an entry that is owned by plugin A with a sub part of it (the agg type configuration) owned by plugin B. Plugin B might need to edit the visualization object that plugin A owns to migrate the agg type configuration. |
@trevan Ah. Thanks for the clarification. That is really complicated. In that scenario, do we have a consistent, systematic way for plugin B to find and migrate its data inside plugin A's documents? |
I know for this particular situation, plugin B can load all of the visualizations and check if each one has its agg type. I'm not sure there are many existing situations like this but I doubt there is a "consistent, systematic way". I believe you could make it consistent and systematic, though. It could be something along the lines of what @epixa said. Plugin A would expose a mechanism for plugin B to migrate the agg type data that it owns. I just wanted to make sure that this was taken into account as it is designed. |
If we make the migration system reusable, then we can pass it to each plugin to allow it to manage its own pluggable data. Sort of like a nested set of loops. The main migration system kicks off a "loop through all objects and defer to each of the "type" owners for migration behavior", then inside the migration for each object of type, those plugins can choose to iterate further. The visualizations plugin invokes a migration function for each object of type "visualization". In that migration function, when it detects an agg_type that it doesn't recognize, it loops through all agg type registrations from other plugins and invokes the custom migration code only on the agg_type data that was provided by the third party plugin. A simplified completely non-functional example of this flow (do not take as a suggested implementation):

```js
// in my plugin init
visualizations.registerAggType({
  type: 'my_agg_type',
  migrate(aggType) {
    // do stuff with aggType
  }
});

// in visualizations plugin
migrate(obj) {
  const aggType = registeredAggTypes.find(aggtype => aggtype.ownsAggType(obj));
  aggType.migrate(obj.agg_type);
}

// in global migration system
objects.forEach(obj => {
  const plugin = plugins.find(plugin => plugin.ownsType(obj));
  plugin.migrate(obj);
});
```
|
@trevan Thinking about this a bit more, I'm not sure that this is a scenario that a migration system would need to directly address. Here's why: In this scenario, if PluginB has created a breaking change to its public interface (e.g. in our example, it changes its aggregation data shape in some breaking way), it is up to consumers of PluginB to update their own code to conform to the new PluginB interface. So, in the scenario we mentioned, if someone is upgrading their system to have PluginB 2.0, they'd also need to update any consumers of PluginB (e.g. PluginA) to work with the new version. Obviously, in an ideal world, plugins should try to never make breaking changes to their public API, though this is not always possible. |
Jotting down notes from a conversation w/ @tylersmalley and @spalger. We've decided to explicitly not support scenarios where the Kibana index has been manually modified or deleted out from under Kibana. Instead, we are putting logic into the saved object client which will assert a valid index migration state prior to each write, and fail the write if the index is in an invalid state. This seemed like a reasonable approach to us, as by definition, if people are tinkering with the Kibana index and get it into an unknown state, it's unknown what the consequences of automatic recovery would be. |
This feature is currently in code-review, finalization mode, but I've come across a potential roadblock.

Problem

The current implementation will require
Possible solution

I think it might be possible to have a migration system with a lighter touch and still hit the goals we've set. We could create a system that behaves similarly to the current Kibana version:
How it would work:
```js
{
  migrationVersion: {
    pluginA: 3,
    pluginB: 1,
  },
}
```
|
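Given a per-document `migrationVersion` map like the one above, deciding which transforms still need to run is a comparison of each plugin's recorded version against its registered migrations. A minimal sketch (`pendingMigrations` and the registry shape are assumptions):

```js
// Sketch (assumption): `registry` maps plugin -> ordered migrations, each
// tagged with a numeric version. A doc's migrationVersion records the last
// version each plugin applied; anything newer still needs to run.
function pendingMigrations(registry, doc) {
  const applied = doc.migrationVersion || {};
  const pending = [];
  for (const [plugin, migrations] of Object.entries(registry)) {
    const current = applied[plugin] || 0;
    for (const migration of migrations) {
      if (migration.version > current) {
        pending.push({ plugin, ...migration });
      }
    }
  }
  return pending;
}
```

A doc with no `migrationVersion` at all would get every registered migration, which matches the "missing value means pre-migration era" rule discussed elsewhere in this thread.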
@chrisdavies What's the best way for me to learn about the current state of migrations, so I can better parse your new proposal? I'm looking for a definitive technical overview of how migrations work. Would that be https://github.com/chrisdavies/kibana/blob/feature/migrations-core/packages/kbn-migrations/README.md? |
@epixa That readme is currently the best written source outside of the source code itself, so yep! I'm also happy to hop on Slack / Zoom as necessary. |
Chatted w/ @spalger about this. Here's the summary of that conversation:
We both think this change is worth doing now rather than moving forward with the current approach. This per-document approach is more robust in a number of ways, and also eliminates a good deal of complexity (such as the need for a --force option, the need to update test snapshots all over the place, etc). |
Chatted w/ @tylersmalley and @jbudz and debated the pros/cons of the various approaches. The decision was to stay the course and fix all of the broken tests, as the current approach has less impact on performance, and ultimately is architecturally simpler even though it's significantly more code. |
In a world of automatic migrations, the whole premise behind esArchiver seems flawed. Or at least the extent to which we're relying on esArchiver for tests is flawed. Rather than importing raw document data through elasticsearch, we should probably be importing saved object data through kibana instead. If I add a migration and a test fails, then that should indicate a regression, but instead it's going to be treated as a blanket reason to rebuild the fixture data with esArchiver. I can see esArchiver fixtures being useful for testing our migration system itself and/or for testing Kibana running against old known legacy document states if that isn't handled by the migration system automatically. But beyond that all other tests should really be seeding data the way we expect data to be seeded in the real world. |
@epixa You and Tyler are on the same wavelength and have convinced me. :) |
Good news, @tylersmalley! More changes. @epixa and I talked this morning and came up with a scenario that is going to come up in the next release: splitting plugins apart into smaller plugins, and changing which plugin owns a type. This pointed out a flaw in the current implementation, and suggested a change that both facilitates this requirement and simplifies the implementation. Currently, we allow migrations to apply to any doc based solely on the filter the migration specifies. We track migrations per plugin, and enforce migration order on a per-plugin basis.

Changes
|
@epixa pointed out that the current direction of migrations would allow us to easily make breaking changes to our API, as migrations would make it easy to change the shape of our data willy-nilly, and right now our raw data is our API in some sense, since we expose it via the saved object client. So, we decided to disallow breaking mapping changes during minor version transitions and relegate such changes to major version boundaries. Minor versions would allow adding properties to mappings, but not removing properties, nor changing leaf properties. @epixa suggested that in minor versions we don't migrate the index at all, but simply run a transform function (one per type) on all docs coming into / out of the saved object API. So, the index may end up with docs of varying versions, but at read time they'd get sanitized and made consistent. After heading down this road a bit, I think I've come up against a roadblock:

Problem 1
Problem 2
Possible solutions

Migrate the index even on minors
Modify saved object find to not support source filtering
|
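The transform-on-read approach under discussion can be sketched as a per-type transform applied to every doc flowing through the saved object API (the `bar-chart` shape reuses the earlier example in this thread; all names are hypothetical):

```js
// Sketch (assumption): one transform per type upgrades a raw doc to the
// current shape at read time; the index itself is left untouched.
const transforms = {
  'bar-chart': (doc) =>
    doc.attributes.style
      ? doc // already in the new shape
      : {
          ...doc,
          attributes: {
            ...doc.attributes,
            style: { background: doc.attributes.defaultColor },
          },
        },
};

function transformOnRead(doc) {
  const transform = transforms[doc.type];
  return transform ? transform(doc) : doc;
}
```

Note that source filtering undermines this approach: a filtered doc may omit the very fields the transform needs in order to detect which shape it is looking at.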
I have a proposal which, I think, simplifies the migration process and allows for seamless integration with plugins.

Tracking migrations

Every saved object should include a field which records which plugins have an interest in that object and the version of the plugin that last upgraded the object. For instance:
This way, Core can apply all upgrades to migrate the object from 6.3.0 to the current version, regardless of whether the plugin is currently installed. This property should be a required value that every plugin needs to provide, so that any object which is missing this value can be considered to come from the last known version without support for this property. Core and plugins would be passed all objects, and they would be responsible for determining whether they should make changes to the object or not.

Specifying migrations

I don't see the point of using hashes to identify individual migrations - it overcomplicates things. I'd just have a single migration function per version, and deal with version changes when backporting.

Always reindex

I would always run migrations by reindexing to a new index. On the subject of limiting breaking changes, this could be done in minor versions by first applying the existing mapping to the new index before running any upgrades. On major version upgrades, you wouldn't apply the original mapping but instead build the new mapping entirely from core and plugins. It's possible that an index has been upgraded to (eg) 7.1.0 but still contains plugin data for an uninstalled plugin that comes from 6.3.0. When installing the plugin, the index would need to be migrated in "major" mode.

Require user interaction

I wouldn't run the migration automatically, at least not at first. I would make the index read-only and lock the Kibana interface saying that a migration is required. When the user presses a button, run the migration and report back, saying that they can delete the old index once they're satisfied that the migration has been successful. |
@clintongormley Thanks for the feedback. I like your suggestions.
I like this. I had an implementation a few iterations back that did almost exactly this, just with a slightly different storage format. The downside to storing this on a per-doc basis is that it makes the "is this index up to date?" check more complex. Is there a reason to store this per doc instead of at the index level?
Yeah. I don't like the checksum either, and changed my mind about it after posting my comment; I forgot to update the issue. The original idea behind a checksum was that it allowed us to enforce migration ordering (this was before we tied migrations to specific Kibana versions) and it allowed for a very fast "is this up to date?" check. But I'm with you.
This is also what we were doing originally, and I also prefer this. It allows us to ensure that any changes made by migrations are accurately reflected in the index while also preventing data-loss from happening. I think it's a big enough win that it's worth doing. @epixa should weigh in on this though, as he has differing opinions. A challenge with this is if folks enable / disable plugins while doing migrations. It could mean running multiple migrations per Kibana version. The original migration implementation managed this by reindexing to an index that looked something like this:
I like making migrations an explicit step. It hopefully means that problems with migrations will be detected early on, rather than down the road after they've crept into various parts of the system. I think the main challenge to a UI approach is, who has permission to click the "Migrate This Thang" button? Anyone? Migrations are not an x-pack feature, so they can't fully rely on auth, though they can optionally rely on it. I think OSS customers would just have to live with the fact that anyone can click this. For x-pack customers we could restrict clickability to only those users who have full privileges to the .kibana index (assuming we have such a role / concept). An alternative solution that we have kicked around is:
This approach has higher friction, but does get around the permissions issue. |
Only that you use fewer fields. I'd be ok with the object approach too.
The thing I liked was that (after we move to this approach) it provides a mechanism for each plugin to indicate its interest in a particular object, which makes it easy for a plugin to decide whether it needs to do anything or not. It'd also be OK to store the upgrade levels in a doc in Kibana which summarises the status for the index. Or, with your field structure, you could use a max aggregation.
For index names, we don't need the SHA_OF_PLUGINS, we can just have a counter.
Easy to provide a UI function to clear up old indices.
Disable doesn't need to do anything, just enable. And we could use that max-agg to check if anything needs to be done. Obviously that'll only work once all plugins actually record an interest in saved objects.
I like the idea of doing it through the UI instead of the CLI. It can run as the Kibana user, so doesn't require the user to have admin permissions (although we could provide that option as a setting). If somebody has installed a new Kibana, then the migration needs to be run regardless of who runs it. The only downside of letting any user run it is that the admin doesn't have control over the timing. |
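The max-aggregation check mentioned above could be a single search per plugin field plus a small decision function. A sketch (the field path and function names are assumptions): if the highest recorded version in the index is below the latest registered migration, that migration has not yet run.

```js
// Sketch (assumption): build an Elasticsearch search body that finds the
// highest migrationVersion recorded for a given plugin across all objects.
function maxMigrationVersionQuery(plugin) {
  return {
    size: 0,
    aggs: {
      maxVersion: { max: { field: `migrationVersion.${plugin}` } },
    },
  };
}

// A null max means no object records this plugin yet (also needs migration).
function needsMigration(response, latestVersion) {
  const max = response.aggregations.maxVersion.value;
  return max === null || max < latestVersion;
}
```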
I'm sold.
I think the migration system can automatically determine this, as plugins register migrations on a type + version basis, and we'll know the type + version of docs in the index and in imported files. That said, the max aggregation is a good solution, and there are other advantages to storing things on a per doc basis. I'll weigh the pros/cons and make a decision, there.
We can do this now, as plugins currently register migrations on a per type basis, so we'll know if we need to migrate or not. |
I have the current direction outlined in the readme in this PR, and have also outlined possible changes per @clintongormley's comments. @epixa and @clintongormley, I think the two of you might want to kick around the conflicting options that have been proposed (see the Outstanding questions section of the readme). I'm going to hold off on moving this forward until that's resolved. Happy to hop on Zoom or similar, if need be. |
@chrisdavies Thanks for doing due diligence on the compatibility approach. I'm disappointed it won't work since it was a convenient low-risk stepping stone to migrations, but I can't think of any solution to the two problems you raised that doesn't involve dramatically complicating the whole system. That does leave us with actually executing migrations. We keep talking about major versus minor, but migrations need to be possible in patch versions as well. If we introduced some bad bug in a minor that resulted in broken data, we need to be able to release a migration that fixes that problem in a patch version.
This would mean that things like security and tags will need to upgrade all existing objects in the index as well as add themselves to every new object at write time. I don't necessarily think that's a problem, but this will need to happen in place rather than requiring a full reindex, otherwise simply having security enabled will result in a reindex on every upgrade.
I think the benefits of making this an active operation don't outweigh the drawbacks. Kibana isn't just a UI, and it's not acceptable for us to block the REST API on a user initiated action like this. As we roll out more and more capabilities at the REST API level, we need to be working toward minimizing downtime on upgrades rather than increasing it. That aside, we can't simply put things in read-only state and have any expectation that Kibana will behave in a sensible way. Simply loading the Kibana UI requires UI settings which are saved objects, and we can't trust that those are compatible with the current version pre-migration. It'd also just not be a great experience. Imagine if every time we upgrade demo.elastic.co some random visitor stumbles upon an admin interface asking them to migrate. Fortunately, with a reindex approach, there shouldn't be any real risk to just running the migration on startup. There'd still be downtime, but it would be as minimal as possible. There's no dealing with a read-only state as we simply don't listen on a port until things are square. |
@epixa can you explain more about ^^ - I don't understand why anything needs to happen in place? My understanding is that a doc is retrieved from the old index, funnelled through core migrations, then plugin migrations, then written to the new index. So we'd only need to do this when core or a plugin is upgraded. |
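The pipeline described above (old index → core migrations → plugin migrations → new index) could be sketched as a simple reduce over the ordered migration functions. This is illustrative only; all names here are hypothetical, not the actual Kibana implementation.

```javascript
// Funnel a single document through core migrations first, then plugin
// migrations, in order. Each migration takes a doc and returns the
// transformed doc, so the whole chain is a reduce.
function migrateDoc(doc, coreMigrations, pluginMigrations) {
  return [...coreMigrations, ...pluginMigrations].reduce(
    (current, migrate) => migrate(current),
    doc
  );
}
```

During a reindex, the runner would call `migrateDoc` on every document read from the old index before writing it to the new one, so the step only happens when core or a plugin is upgraded.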
@clintongormley and I spoke directly about this to get on the same page, and this is where we landed:
@chrisdavies What do you think? |
I think it's a solid plan. It's actually pretty close to the original plan, but with some nice simplifications (like tracking migrations based on versioning rather than clunky hashes / migrationState). I agree completely that the consistency is a nice feature of this approach. So yaya! I'm really happy with this direction. Thanks, @epixa @clintongormley for hopping on this and hammering it out. |
A couple of questions remain that I'm unsure about: What should the saved object REST API do when it is sent documents that have no migration version information associated with them? Do we assume the docs are up to date or do we assume they are out of date? It seems that we need to assume they are out of date (e.g. they predate the migrations feature), but:
What do we do if the saved object API receives an "update" call with a document whose migrationVersion is out of date? I think we have to fail the update, as it seems unrealistic to ask migration authors to write migrations in such a way that they can properly handle partial documents (and update calls will have partial documents). EDIT: Given the complexity of expecting everyone to pass our API proper migration version info, it may be better / simpler to say that we don't pass migration version info with every saved object write. Instead, we assume docs w/ no migration version are up to date. And, we tell API consumers that if they are calling the Kibana API, they need to update their code to send us up-to-date documents or the resulting behavior is undefined. |
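The simpler option floated in the EDIT above (treat docs with no migration version as current) would make the write-path check trivial. A minimal sketch, assuming a hypothetical `latestVersions` map from saved object type to its current migration version:

```javascript
// Decide whether an incoming document needs migration. Docs with no
// migrationVersion are assumed up to date (they came from a current client);
// otherwise compare the recorded version against the latest known one.
function isUpToDate(doc, latestVersions) {
  if (!doc.migrationVersion) return true;
  return doc.migrationVersion[doc.type] === latestVersions[doc.type];
}
```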
I think the saved object API should only deal in terms of the latest document format, which does mean that we shouldn't introduce BWC breaks on the data level within minors, but that was our plan anyway. This is a decision we can change in the future if we deem it necessary. The import API is the only API that must be able to deal with older objects, and for that API I think we should assume all objects will contain a version or they are treated as crazy-old (this is the precise way to refer to objects that predate this migration system). Of course we'll need to ensure the export API includes the current version on each object.
As in my previous answer, I don't think these APIs should deal with migration at all for now. I'd take it a step further and say that we should explicitly deny any request (other than import) that even specifies a migrationVersion property, which will prevent people from doing things they don't expect and will give us the ability to introduce a more robust handling of migrationVersion as a feature in a future minor version if we felt it necessary. |
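The "explicitly deny" behavior described above could look something like the following on the write path. This is a sketch of the idea, not the actual API code; the function shape and option names are assumptions.

```javascript
// Reject any non-import write that specifies migrationVersion, so clients
// can't rely on undefined behavior, and more robust handling can be
// introduced deliberately in a future minor.
function validateWrite(attributes, { isImport = false } = {}) {
  if (!isImport && 'migrationVersion' in attributes) {
    throw new Error('migrationVersion is not allowed on this endpoint');
  }
  return attributes;
}
```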
I'm going to close this as saved object migrations were merged in #20243 |
Proposal
Introduce a testable migration process that allows developers to incrementally add complex migration steps throughout the development of several minor releases.
Goals
Testable
The process should be easily testable so that at any point a failure to account for a required migration step will be captured. Ex: I almost submitted a PR that removed a field from the kibana mapping. Without a manual migration step, that data would have been lost. This would have gone unnoticed until late in the 7.0 index upgrade testing (if at all).
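One way a test could catch the scenario described (a mapping change with no corresponding migration step) is to diff the current mappings against a checked-in snapshot and fail if a field disappeared. This is a hypothetical sketch of the testing idea, not existing Kibana test code.

```javascript
// Return fields that existed in the previous mappings snapshot but are
// missing from the current mappings. A CI test would fail if this is
// non-empty and no migration step was registered for the dropped fields.
function findDroppedFields(previousMappings, currentMappings) {
  return Object.keys(previousMappings).filter(
    (field) => !(field in currentMappings)
  );
}
```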
Incrementally add migration steps
We don't want to push the entire migration burden to the last minute. We should be able to incrementally add migration steps and tests to catch issues early and prevent major releases from being pushed back due to last minute bugs being found.
Flexibility
Right now we assume the entire re-indexing will fit into a painless script, but this won't hold us over for long. As a specific example, I'd like to migrate some data stored as JSON in a text field. Manipulating JSON from painless is not possible currently. I'd bet even more complicated scenarios are right around the corner. Our approach should be flexible to easily accommodate migration steps that won't fit into painless.
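The JSON-in-a-text-field case above is exactly the kind of step that's trivial in JavaScript but impossible in painless. A sketch, using a hypothetical `searchSourceJSON` field and an illustrative restructuring:

```javascript
// Parse JSON stored as a string in a text field, restructure it, and
// re-serialize. The specific transformation here (adding a version key)
// is just an example of the kind of change that won't fit in painless.
function migrateSearchSource(doc) {
  const parsed = JSON.parse(doc.searchSourceJSON);
  const restructured = { ...parsed, version: 2 };
  return { ...doc, searchSourceJSON: JSON.stringify(restructured) };
}
```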
Make it easy for developers
Our approach should be un-intimidating so that all developers on the team can easily add their own migration steps without requiring too much specific knowledge of the migration code. Making this a simple process will encourage us to fix issues that rely on .kibana index changes, which can help clean up our code. There have been outstanding issues open for many months that don't get addressed because they require mapping changes. A very small change (the same PR mentioned above) that incidentally removes the need for a field in the kibana mapping, together with a pretty straightforward (albeit still not possible in painless) conversion, should be easy to throw into the migration.
Single source of conversion truth
There are potentially different areas of the code that need to know how to handle a bwc breaking change. For example, I'd like to introduce a change in a minor release which removes the need for a field on the .kibana index. In order to support backward compatibility, I need to do three things:
I'd have to think more about how/whether all three spots could take advantage of the same code.
Pluggable?
We might want to make this pluggable, so saved object types we are unaware of can register their own migration steps.
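A pluggable approach could let each plugin register migrations keyed by saved object type and version, with a core runner applying them in order. This is a hypothetical registration API to illustrate the idea, not a proposed final design.

```javascript
// Registry of migrations: type -> { version -> migrationFn }.
const registry = new Map();

// A plugin calls this to contribute migrations for a saved object type.
function registerMigrations(type, migrations) {
  registry.set(type, { ...(registry.get(type) || {}), ...migrations });
}

// Apply all registered migrations for a doc's type in version order.
// (Lexicographic sort is enough for this sketch; a real implementation
// would need semver-aware ordering.)
function runMigrations(doc) {
  const migrations = registry.get(doc.type) || {};
  return Object.keys(migrations)
    .sort()
    .reduce((current, version) => migrations[version](current), doc);
}
```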
Questions
Implementation details
TODO: have not thought this far ahead yet.
cc @epixa @archanid @tsullivan @tylersmalley