Versioning, Schemas, and Migration #7
The schema version follows semver: versioned development will start at v0.1.0, and all changes will be treated as breaking, incrementing the minor version. After 1.0.0, breaking changes increment the major version and non-breaking changes increment the minor version.
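The bump rule above can be restated as a tiny helper. This is just an illustrative sketch, not part of any OPWS tooling; the function name and signature are hypothetical.

```python
def next_version(current: str, breaking: bool) -> str:
    """Return the next schema version under the scheme described above:
    before 1.0.0 every change is treated as breaking and bumps the minor
    version; from 1.0.0 on, breaking changes bump the major version and
    non-breaking changes bump the minor version."""
    major, minor, _patch = (int(part) for part in current.split("."))
    if major < 1:
        # Pre-1.0: all changes are breaking; bump minor.
        return f"{major}.{minor + 1}.0"
    if breaking:
        return f"{major + 1}.0.0"
    return f"{major}.{minor + 1}.0"
```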
Changes to the API will also follow semver, and will usually accompany, or be accompanied by, a schema change. Re #13, only v0 APIs are implemented under
I think consumers will need to note that compatibility is not guaranteed for enums: changes that add new possible enum values to the schema are considered "compatible" the same way that new fields are. The docs need a note on how consumers should handle any enum value they don't recognize, which boils down to "when in doubt, ignore it as if it weren't there at all, as if it hadn't been profiled and could be anything". Essentially, a change is classified as "incompatible" only if it changes the way an existing property is profiled/structured; new values are considered "compatible" because they can be ignored, giving the same behavior as if they weren't present.
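The consumer-side rule above ("ignore an unrecognized value as if it weren't there at all") might look like this in practice. The field name `status` and its values are hypothetical, purely for illustration:

```python
# Hypothetical set of enum values this consumer was built against.
KNOWN_STATUSES = {"active", "defunct"}

def read_status(profile: dict):
    """Read an enum-typed field defensively: a value added by a newer,
    'compatible' schema revision is treated exactly like an absent field."""
    value = profile.get("status")
    return value if value in KNOWN_STATUSES else None
```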
To follow up from #149 (comment): I think I had a different philosophy about versioning when I originally started writing / designing the layout for these files, where changes and additions to the schema prioritized extending what values could look like, to preserve backwards compatibility with old profiles, over refactoring to match a newly-emergent pattern. For instance, I'd make a field polymorphic (#164) so that it could accept older scalar values in addition to multiple values, under the assumption that everything would get boiled down in adaptation to customized subset builds anyway: old data that's not being used one way would be fine, since we'd just recognize older values differently when building for a newer interpretation that would convert them into the newer format. In other words, the rules were devised to prioritize ease of authorship, with concerns about reading deferred.

The biggest problem with that is that the dataset rapidly becomes hard to deal with, from an editor's point of view, when the inputs can express the same notion in so many different ways (and the namespace gets ugly as you contort to accommodate older misnomers). Even writing new profiles became hard, as old habits conflicted with hastily-improvised new ones that I couldn't remember whether I'd abandoned.

It's better to just repeatedly migrate the entire dataset, keeping everything in a consistent structure every time an extension is needed, and to try to get it right the first time so that you don't need so many migrations: when you don't have enough examples to go off of, hold off on profiling instead (this was better solved via comments in pull requests).
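The polymorphic-field approach described above (a field accepting either an older scalar value or a newer list of values) is usually normalized at read time. A minimal sketch, assuming profiles are plain dicts and the field may be a scalar, a list, or absent:

```python
def as_list(value):
    """Normalize a polymorphic field: older profiles may hold a scalar
    where newer profiles hold a list. Absent fields normalize to []."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]
```

This is the kind of reader-side shim that accumulates under the "ease of authorship" philosophy, and the reason the comment argues for migrating the whole dataset instead.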
Note everything about #138 for translating structured data into freeform notes in backward migrations.
Yeah, as #138 (comment) pointed out, it's not as simple as what I wrote in #7 (comment) with "non-breaking changes, if any". Minor version bumps do impact results: if nothing else, introducing new fields removes something that could otherwise have been documented in notes.
Closing in favor of further discussion happening in opws/opws-guidelines#2. |
Only one canonical version of the profile is maintained. All modifications for previous schemas will be done via live backward migrations from the current model. Since all data in a profile is optional and must not be expected to be present, this will not cause a problem even if a later version of the schema outright removes a field.
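A live backward migration of the kind described above can be as simple as projecting the canonical profile down to the fields the older schema defines; because every field is optional, the projection is still a valid older-schema profile. The field names here are hypothetical, for illustration only:

```python
# Hypothetical set of top-level fields defined by the older schema.
V1_FIELDS = {"name", "sites", "password"}

def backward_migrate(profile: dict) -> dict:
    """Derive an older-schema view of the canonical profile by dropping
    any field the older schema doesn't define. Consumers of the older
    schema must already tolerate absent fields, so nothing breaks."""
    return {key: value for key, value in profile.items() if key in V1_FIELDS}
```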
The tag / version of the repo refers to the schema used by the live data.