
Versioning, Schemas, and Migration #7

Closed
stuartpb opened this issue Mar 1, 2015 · 7 comments

Comments


stuartpb commented Mar 1, 2015

Only one canonical version of the profile is maintained. All modifications for previous schema versions will be served via live backward migrations from the current model. Since all data in a profile is optional and must never be assumed to be present, this will not cause a problem if a later version of the schema removes a field outright.

The tag / version of the repo refers to the schema used by the live data.


stuartpb commented Jun 9, 2015

The schema version follows semver: versioned development will start at v0.1.0, where every change is treated as breaking and increments the minor version. After 1.0.0, breaking changes increment the major version and non-breaking changes increment the minor version.
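The bump rules above can be sketched as a small helper; this is just an illustration of the stated policy, not anything from the repo itself:

```python
def next_schema_version(current: str, breaking: bool) -> str:
    """Compute the next schema version under the rules described above."""
    major, minor, _patch = (int(part) for part in current.split("."))
    if major == 0:
        # Pre-1.0: every change is treated as breaking and bumps the minor version.
        return f"0.{minor + 1}.0"
    if breaking:
        # Post-1.0: breaking changes bump the major version.
        return f"{major + 1}.0.0"
    # Post-1.0: non-breaking changes bump the minor version.
    return f"{major}.{minor + 1}.0"
```

So `next_schema_version("0.1.0", breaking=False)` still yields `"0.2.0"`, since nothing is non-breaking before 1.0.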

stuartpb added this to the v0.1.0 milestone Jun 9, 2015
stuartpb commented Jun 9, 2015

Changes to the API will also follow semver, and will usually accompany, or be accompanied by, a schema change.

Re #13: only v0 APIs are implemented under v0/X routes, where X is the minor version. APIs after v0 will be implemented with routes under /vX, where X is the major version. (Non-breaking minor-version changes, if any, will be implemented as in-place changes under the major version prefix in the live deployment.)
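The route-prefix rule can be sketched as follows; the leading-slash form for v0 routes is my assumption here, not something pinned down above:

```python
def route_prefix(api_version: str) -> str:
    """Derive the URL prefix for an API version under the scheme above."""
    major, minor, _patch = (int(part) for part in api_version.split("."))
    if major == 0:
        # v0 APIs expose the minor version in the path, since every
        # pre-1.0 change is breaking.
        return f"/v0/{minor}"
    # After 1.0, only the major version appears; minor (non-breaking)
    # changes land in place under the same prefix.
    return f"/v{major}"
```

For example, `route_prefix("0.3.0")` gives `/v0/3`, while `route_prefix("2.1.0")` gives `/v2`.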


stuartpb commented Feb 9, 2017

I think consumers will need to note that compatibility is not guaranteed for enums: changes that add new possible enum values to the schema are considered "compatible" in the same way that new fields are. The docs need a note about how any enum in use should behave when an unrecognized value is present, which I guess boils down to "when in doubt, ignore it as if it weren't there at all, as if it wasn't profiled and could be anything".

Essentially, only a change that alters the way an existing property is profiled or structured is classified as "incompatible"; new values are considered "compatible" because they can be ignored, yielding the same behavior as if they weren't present.
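A minimal sketch of that "ignore unrecognized enum values" rule for consumers (the enum name and values here are hypothetical, not from the actual schema):

```python
# Hypothetical set of enum values a consumer was built against.
KNOWN_2FA_METHODS = {"sms", "totp", "u2f"}

def read_enum(value, known):
    """Read an enum field tolerantly: an unrecognized value is treated
    as if the field weren't profiled at all (i.e. as absent)."""
    return value if value in known else None
```

A newer schema could add `"webauthn"` to the enum without breaking this consumer; it would simply behave as though the field were unprofiled.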


stuartpb commented Feb 17, 2017

To follow up from #149 (comment):

I think I had a different philosophy about versioning when I originally started writing / designing the layout for these files: changes and additions to the schema prioritized extending what values could look like, preserving backwards compatibility with old profiles over refactoring to match a newly-emergent pattern. For instance, I'd make a field polymorphic (#164) so that it could accept older scalar values in addition to multiple values, under the assumption that everything would get boiled down in adaptation to customized subset builds anyway: profiling old data one way would be fine, since we'd just recognize older values differently when building for a newer interpretation that would convert them into the newer format.

In other words, the rules were devised to prioritize ease of authorship, with concerns about reading deferred.

The biggest problem with that is that the dataset rapidly becomes hard to deal with from an editor's point of view when the inputs can express the same notion in so many different ways (and the namespace gets ugly as you contort to accommodate older misnomers). Even just writing new profiles became hard, as old habits conflicted with hastily-improvised new ones that I couldn't remember whether I'd abandoned.

It's better to just migrate the entire dataset every time an extension is needed, keeping everything in a consistent structure, and to try to get it right the first time so that you don't need to do so many migrations, holding off on profiling when you don't have enough examples to go on. (The latter was better solved via comments in pull requests.)

stuartpb commented:

Note everything about #138 for translating structured data into freeform notes in backwards-migrations.


stuartpb commented Mar 3, 2017

Yeah, as #138 (comment) pointed out, it's not as simple as what I wrote in #7 (comment), with "non-breaking changes, if any". Minor version bumps do impact results: if nothing else, introducing new fields removes something that could otherwise have been documented in notes.
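A sketch of what such a backward migration might look like, folding a newer structured field into the older schema's freeform notes; the field names (`passwordRotationDays`, `notes`) are hypothetical placeholders, not actual schema fields:

```python
def downgrade_profile(profile):
    """Hypothetical backward migration: remove a field introduced in a
    newer schema version and preserve its content as a freeform note."""
    older = dict(profile)  # don't mutate the canonical profile
    days = older.pop("passwordRotationDays", None)  # hypothetical new field
    if days is not None:
        # Fold the structured value into the older schema's notes field,
        # appending to any notes that already exist.
        note = f"Passwords must be rotated every {days} days."
        older["notes"] = (older.get("notes", "") + "\n" + note).strip()
    return older
```

This illustrates the trade-off above: once the structured field exists, the note text it displaces in older versions has to be synthesized rather than hand-written.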


stuartpb commented Mar 3, 2017

Closing in favor of further discussion happening in opws/opws-guidelines#2.

stuartpb closed this as completed Mar 3, 2017
stuartpb added the moved label Nov 12, 2017
opws locked and limited conversation to collaborators Nov 12, 2017