This repository has been archived by the owner on Nov 16, 2023. It is now read-only.

Formalize Semantic Versioning #33

Closed
mrkmarron opened this issue Apr 18, 2019 · 8 comments
Labels
design discussion · research item


@mrkmarron
Contributor

What does it mean to bump the patch/minor/major level of a semantic version for our packages? Can we formally specify this, ideally in a way that can be validated later?

@mrkmarron mrkmarron added the design discussion and research item labels Apr 18, 2019
@merriam

merriam commented Apr 18, 2019

The canonical reference is https://semver.org/. Basically: patches don't change the API; minor versions only append to the API; the major version increments on any change to the existing API. Except if the major version is zero, then anything goes.
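The rules merriam summarizes could be sketched roughly as follows. This is a hypothetical helper (the names `bump` and `Change` are illustrative, not from any real tool), assuming the change kind has already been classified:

```python
from enum import Enum

class Change(Enum):
    NONE = 0      # bug fix only, API untouched -> patch
    ADDITIVE = 1  # new API surface, existing API unchanged -> minor
    BREAKING = 2  # any change to the existing API -> major

def bump(version: str, change: Change) -> str:
    major, minor, patch = (int(p) for p in version.split("."))
    if major == 0:
        # 0.y.z: semver.org says anything goes; a common convention
        # (assumed here) is to bump the minor component for any change.
        return f"0.{minor + 1}.0"
    if change is Change.BREAKING:
        return f"{major + 1}.0.0"
    if change is Change.ADDITIVE:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

The hard part, of course, is deciding which `Change` value applies, which is what the rest of the thread is about.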

@mrkmarron
Contributor Author

Thanks, I should have included the link. I would like a more formal, ideally machine-checkable definition. For example, for a program P and an updated version P':

\forall v s.t. P(v) != error, P(v) == P'(v)

That is, the versions only differ on inputs where P previously failed and P' now returns a result. I think we would agree this is a patch-level update, but now we have the potential to mechanically verify that our change and the SemVer update match.
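The property above could be approximated by a differential check over sampled inputs (exhaustive checking is infeasible in general). A minimal sketch, with the function and input names purely illustrative:

```python
def is_patch_compatible(P, P_new, inputs) -> bool:
    """Check: for all sampled v where P(v) != error, P(v) == P'(v)."""
    for v in inputs:
        try:
            old = P(v)
        except Exception:
            continue              # P failed on v; P' is free to do anything
        if P_new(v) != old:       # P succeeded, so P' must agree
            return False
    return True

# Example: P' repairs a division-by-zero failure without changing
# any result that P already produced.
def P(v):
    return 10 // v

def P_new(v):
    return 0 if v == 0 else 10 // v

print(is_patch_compatible(P, P_new, range(-5, 6)))  # True
```

Sampling only gives a falsifier, not a proof; the differential assertion checking mentioned later in the thread aims at the all-inputs version of this property.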

@thandy1212

As a sort of simplified formalization could it be based on tests?

For example, adding tests without adding any new interfaces would suggest a patch version, since it makes assumptions more explicit without changing existing behavior.

Adding new interfaces with corresponding tests would suggest a minor version, while removing tests or modifying existing tests would be considered a major update, since it implies breaking existing functionality.

This is definitely a gross simplification, and my experience is still very limited so I would appreciate any input anyone might have.

@mrkmarron
Contributor Author

My inspiration actually comes from the work a colleague of mine did on differential assertion checking. This looks at how to analyze two versions of a program and determine where their behavior differs across all possible inputs.

If we can formalize what differences are acceptable for Patch/Minor/etc. then, ideally, we would be able to use a variation on the ideas from this work to check that the update to the SemVer version is consistent with the actual change in behavior.
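In the spirit of that idea, here is a toy classifier that infers a bump level from observed behavioral differences over sampled inputs (real differential assertion checking reasons about all inputs symbolically; this sketch, with invented names, only samples). It cannot see "minor" updates, since API additions are invisible when comparing the same function:

```python
def classify_by_behavior(P, P_new, inputs) -> str:
    def run(f, v):
        try:
            return ("ok", f(v))
        except Exception:
            return ("error", None)

    breaking = fixed = False
    for v in inputs:
        old, new = run(P, v), run(P_new, v)
        if old[0] == "ok" and old != new:
            breaking = True        # previously-working behavior changed
        elif old[0] == "error" and new[0] == "ok":
            fixed = True           # a failure was repaired
    if breaking:
        return "major"
    return "patch" if fixed else "none"
```

As orcmid notes below, whether a repaired failure ("patch" here) really counts as non-breaking is itself a judgment call.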

@merriam

merriam commented Apr 20, 2019

It is a good time to formalize the meaning, packaging, and requirements of tests into a language.

@orcmid

orcmid commented Apr 29, 2019

My inspiration actually comes from the work a colleague of mine did on differential assertion checking. This looks at how to analyze two versions of a program and determine where their behavior differs across all possible inputs.

Considering that program equivalence is undecidable (if Bosque is powerful enough :), one will still have to agree on some sort of heuristic with regard to Semantic Versioning, it seems to me. I assume that the work that catches your eye depends on some appropriate level of assertion decoration.
For Semantic Versioning to be anything but informal (although careful), we also have to deal with fixes that repair previously-undetected defective behavior. Maybe that's level 2 (except for level 1 = 0). The problem is determining when such a case qualifies as a breaking change and when it does not.
There's also a tool-chain consideration, hence the decoration of Semantic Versions with platform and build identifiers and maybe even the supporting test set.
I confess that I never considered automation of Semantic Version verification and the topic is rather intriguing. I can see it operating at a check-in level, and I also wonder when it becomes unbearable. It would be interesting to see how folks who do continuation engineering and change management regard this.

@orcmid

orcmid commented Apr 29, 2019

PS: With regard to Bosque itself, especially the core on which everything else is erected, I think verification and reproducibility are critical, and one would want some transparent validation scheme. That is probably the one place where it is not possible to overdo it, so that one can have great confidence in relying on it.

@mrkmarron
Contributor Author

Revisit based on symbolic tester landing in master.
