
Start work on generated code for encoding/decoding proto messages. #268

Merged
judah merged 2 commits into master from generate-encoding on Dec 7, 2018

Conversation

@judah (Collaborator) commented Dec 6, 2018

This stubs out new methods in the `Message` class for decoding and encoding.
By generating the implementations rather than going through the reified
`FieldDescriptor`, we should ultimately get much faster code.

Currently, the implementations are trivial (e.g., `buildMessage = const ""`) and
also named `unfinished{Parse,Build}Message`.
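For illustration, a rough sketch of what such placeholders might look like; the type signatures and the choice of attoparsec are assumptions for the sketch, not the exact proto-lens API:

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Hypothetical sketch of the placeholder implementations; names follow the
-- description above, but the real signatures in proto-lens may differ.
import Data.Attoparsec.ByteString (Parser)  -- assumed parser type
import Data.ByteString.Builder (Builder)

-- Building a message currently produces no output at all
-- (Builder's IsString instance makes "" the empty builder):
unfinishedBuildMessage :: msg -> Builder
unfinishedBuildMessage = const ""

-- Parsing is not implemented yet, so it always fails:
unfinishedParseMessage :: Parser msg
unfinishedParseMessage = fail "parseMessage: not yet implemented"
```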

The code for the old, "reflected" encoding/decoding (i.e., using
`FieldDescriptor`) has been moved to a new module namespace
`Data.ProtoLens.Encoding.Reflected`. It is still used by
`Data.ProtoLens.Encoding` by default. A Cabal flag `generated-encoding` lets
`Data.ProtoLens.Encoding` use the new implementation.
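One plausible way to wire that up (a sketch only, assuming the flag defines a CPP macro; the macro name and the module `Data.ProtoLens.Encoding.Generated` are hypothetical):

```haskell
{-# LANGUAGE CPP #-}
-- Sketch: Data.ProtoLens.Encoding re-exports one of two implementations,
-- selected by a CPP macro that the Cabal flag would define.
module Data.ProtoLens.Encoding
    ( decodeMessage
    , encodeMessage
    ) where

#ifdef GENERATED_ENCODING
import Data.ProtoLens.Encoding.Generated  -- hypothetical new implementation
#else
import Data.ProtoLens.Encoding.Reflected  -- the old, FieldDescriptor-based one
#endif
```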

My intention is to finish this implementation and delete the old code before
making a new major release of proto-lens. In the meantime, this setup should
make it easier to implement the new API as a series of distinct PRs.

I changed CI to build the tests and benchmarks with the new API, but not to
actually run them (since they don't pass).



@@ -113,7 +113,7 @@ generateModule modName imports syntaxType modifyImport definitions importedEnv s
  mainImports = map (modifyImport . importSimple)
      [ "Control.DeepSeq", "Lens.Labels.Prism" ]
  sharedImports = map (modifyImport . importSimple)
-     [ "Prelude", "Data.Int", "Data.Word"
+     [ "Prelude", "Data.Int", "Data.Monoid", "Data.Word"
Collaborator

We should probably decide on our supported GHC versions some time soon. If you can think of a way to poll proto-lens users for which version of GHC they care about, maybe we can formulate a support policy of some sort.

judah (Collaborator, Author)

Currently the "ground truth" for what's supported is whatever's covered by our CI:
https://github.com/google/proto-lens/blob/master/.travis.yml#L20
It goes back to lts-7, which corresponds to ghc-8.0.1 and cabal-1.24.2.0.

I'll fix the build breakages before merging.

@blackgnezdo (Collaborator) commented

I presume you'll take care of builds before merging.

@judah merged commit 8fefc1b into master on Dec 7, 2018
@judah deleted the generate-encoding branch on December 7, 2018