Auraescript case-sensitivity issue #128
Comments
I know what the issue is here, and will have a PR that fixes this. There's already a solution for this that comes from …
While we wait for the CLA bot, I took a look at pbjson. I'm a little skeptical that it will solve this issue, because I don't think the serialization of JS/TS <-> proto is the problem; I think the problem is at the JS/TS <-> Rust boundary that Deno manages. I'm looking forward to being pleasantly surprised by the PR.

That said, I'd like to probe your comment here if you have time to respond (at your convenience), partly for my own knowledge, but it will help aurae too when we tackle #126. My preferred approach for JSON serialization/deserialization has been to be liberal in what is accepted (i.e., camelCase, snake_case, etc.) to a degree, but strict in the output (match the convention: camelCase fields, SCREAMING_SNAKE_CASE enums, etc.). Your comment reads to me like you have insight into why that might be a bad idea. I appreciate any thoughts/input you might have.
In terms of the V8/Rust interop boundary, the same process applies on deserialization out of V8. The deserializer runs through the fields in the V8 object, but, with the default code generation, the camelCased V8 object doesn't have any of the snake_cased fields that the derived serde deserializer is looking for. This is in part where …. What you'll find is that the JSON serialization provided by …. There may be a reason that you don't want to use ….

What you end up with is a problem where your gRPC API semantics place constraints on your FFI semantics, because the same data structure crosses multiple layers, kind of like exposing a database structure directly through the API. You may actually prefer to have a distinct type for FFI (that way you can guarantee, or require from TS, that certain fields are inhabited which proto3 semantically allows to be empty). This would also allow you to provide the canonical JSON serialization for the gRPC API types, while specifying a more efficient serialization for the FFI interaction. If you're not passing specialized types, and instead are primarily passing strings, integers, arrays, and objects, then the canonical JSON representation and the V8 FFI representation will overlap.
I know I just left a wall of text there, but wanted to provide another alternative option: drop …. I am the author of ….
Just to get some very initial thoughts down: in my opinion, the main goal to keep in mind is that we want to stay as close to fully generated as possible for auraescript; small changes to the proto files should need a rebuild at most. That way, development speed on the core functionality isn't hindered. Having looked only at your comments so far, I think your solutions meet that goal, which is great. I've got no issue with finding another solution or dropping …. Considering the two alternate solutions so far, buf gives me hesitation for multiple (potentially unfounded) reasons.
Admittedly, I don't know much about buf, except that I believe it is relatively new, and I'm happy to adapt to whatever others think is best. However, if we can use the lints from buf, whether or not we use it for anything more, I think that is a great idea. A custom …
This change enables the crates to be compiled from source without requiring the installation of `protoc` (if, for example, the crates were published to a public crate repository for consumption by other users).

The change introduces a new intermediate crate, `aurae-proto`, that is depended upon by both `auraed` and `auraescript`, rather than having `tonic-build` re-compile the protobuf into each of the crates independently. Generated code is checked in so that it is available without needing any `build.rs` to invoke `npm` or other external programs at crate compile time.

In addition, this change fixes the issue of the TypeScript and Rust protoc-generated code not agreeing on the naming convention used for fields when serialized through a JSON interface. By using `pbjson`, we add the appropriate serializers/deserializers to ensure that we get the correct canonical JSON serialization for protobuf types.

Generating these files now relies on a program, `buf`, which invokes some remote plugins to generate Rust and TypeScript code. The `Makefile` has been updated to provide two new commands: `proto` and `proto-lint`. `proto` runs `buf generate` to generate code from the protobuf schemas, while `proto-lint` runs `buf lint` to provide linting recommendations for the protobuf definitions. The linting is configurable, but I did not update the schemas to match the `buf` defaults, nor try to configure the linting to match the current _status quo_. `buf` also provides a `buf format` command, which can be used to ensure that the protobuf definition files follow a standard format, similar to `rustfmt`, but this has not been incorporated or run either.

Fixes #128
I am 100% behind trying to be fully generated, and I agree with the idea of using the protobuf schemas as the source of truth for everything downstream.

In regard to relying on …. In this, I'm happy to back out ….

On the code generation side of things, I am the author of …. A …. I think that a purpose-built ….

Footnotes: …
First of all, thank you for all the info/context. I finally had a chance to look at #129, and since it has been merged, I think we are OK with what we have for now. Let's see how things progress and where the pain points are (if any). Some notes: …
It seems like differences in casing in the generated code are causing property/field values to be lost.
JS/TS uses camelCase for properties, while Rust uses snake_case for fields.
@krisnova confirmed this as an issue on stream. A possible solution is to relax the casing requirements with another macro that tags all the fields with `#[serde(alias = "camelCaseFieldName")]`, as Deno uses serde for serializing/deserializing.

*I actually have a macro for this that probably needs a little adapting before I can contribute it. I'm just a little time-constrained before the break, so I wanted to leave this issue as a note for future me. For now, if anyone is trying stuff out, just use `hardtoread` field names so the casing will be the same throughout.