
CanonicalSerialize/Deserialize for dynamically sized types #104

Closed
paberr opened this issue Feb 26, 2020 · 7 comments


paberr commented Feb 26, 2020

Hi!

I just looked into implementing CanonicalSerialize/Deserialize for proofs and keys.
After discussing this a bit with @brunoffranca and @kobigurk, here are some of the points that are still open.

  1. CanonicalSerialize and CanonicalDeserialize currently take (mutable) byte slices as input. Most other serialisation methods rely on a type that implements Write/Read instead. @kobigurk told me the choice was made to stay no_std compatible. However, there is already a no_std io replacement as part of algebra, which could be used.
  2. Currently, buffer_size in CanonicalSerialize doesn't depend on &self, but only statically on the type. While this works for constant-size types, it won't work for keys whose size depends on, e.g., the number of inputs. For serialisation, it would be perfectly fine to add the dependency on &self here; however, this wouldn't work for the length checks in CanonicalDeserialize. @kobigurk's suggestion was (a sketch combining both points follows after the quote):

I imagine we could introduce something like “base_buffer_size” and “instance_buffer_size”, where “instance_buffer_size” would default to the base buffer size.
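
To make this concrete, here is a rough sketch of what points 1 and 2 could look like combined. The trait shape, method names (base_buffer_size, instance_buffer_size), and error type are hypothetical, and std::io stands in for algebra's no_std io replacement; this is not the existing zexe API.

```rust
use std::io::{Read, Write}; // stand-in for algebra's no_std io replacement

#[derive(Debug)]
pub struct SerializationError; // placeholder error type, name hypothetical

pub trait CanonicalSerialize {
    /// Size that only depends on the type; usable for static length checks.
    fn base_buffer_size() -> usize;

    /// Size of the serialized form of *this* value; defaults to the type-level
    /// size, but can be overridden by dynamically sized types such as keys
    /// whose length depends on the number of inputs.
    fn instance_buffer_size(&self) -> usize {
        Self::base_buffer_size()
    }

    /// Write into any `Write` implementation instead of a `&mut [u8]`.
    fn serialize<W: Write>(&self, writer: &mut W) -> Result<(), SerializationError>;
}

pub trait CanonicalDeserialize: Sized {
    /// Read from any `Read` implementation instead of a `&[u8]`.
    fn deserialize<R: Read>(reader: &mut R) -> Result<Self, SerializationError>;
}
```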

Additionally, I thought about introducing convenience methods to serialize/deserialize without having to care about the extra_info, which is mostly only needed for the recursive calls anyway (see the sketch below).
I agree with @xu-cheng in #51 that the current way of calling the (de-)serialization methods is quite unintuitive.
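
Continuing the hypothetical sketch above, the convenience layer could be a small extension trait with a blanket impl, so callers never touch writers (or, in the current trait, the extra_info) directly; the names here are made up for illustration:

```rust
// Hypothetical convenience layer on top of the sketched CanonicalSerialize:
// a blanket extension trait so every implementor gets an easy byte-vector
// round trip. In the current trait, this wrapper would also supply a default
// extra_info so callers never have to pass it themselves.
pub trait CanonicalSerializeExt: CanonicalSerialize {
    /// Serialize into a fresh byte vector (`Vec<u8>` implements `io::Write`).
    fn to_bytes(&self) -> Result<Vec<u8>, SerializationError> {
        let mut buf = Vec::with_capacity(self.instance_buffer_size());
        self.serialize(&mut buf)?;
        Ok(buf)
    }
}

impl<T: CanonicalSerialize> CanonicalSerializeExt for T {}
```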

Alternatively, it could also be an option to look into using serde again, or to adapt an easier-to-use interface similar to what we've done here (mainly because we rely on a big-endian encoding instead of the current format in zexe):
https://github.com/nimiq/core-rs-albatross/blob/bls12-377/bls/src/compression.rs
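
Purely for illustration (the linked file may differ), this is what a big-endian encoding of a field element stored as four 64-bit limbs, least-significant limb first, could look like; the limb layout is an assumption:

```rust
// Illustration only: write the limbs (and the bytes within each limb)
// most-significant first, so the resulting byte string sorts the same way
// the underlying integers do.
fn field_to_be_bytes(limbs_le: &[u64; 4]) -> [u8; 32] {
    let mut out = [0u8; 32];
    // limbs_le[3] is the most significant limb; write it first.
    for (i, limb) in limbs_le.iter().rev().enumerate() {
        out[i * 8..(i + 1) * 8].copy_from_slice(&limb.to_be_bytes());
    }
    out
}
```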

Since the original issue #51 is closed, I opened up this new one for discussion.

Looking forward to hearing your thoughts on this!

Best,
Pascal

@Pratyush
Member

Hey!

I agree that the current interface is a bit clunky. The design in https://github.com/nimiq/core-rs-albatross/blob/bls12-377/bls/src/compression.rs looks good. My only worry is about generalizing that design to work for all fields/curves, instead of just BLS12-377. (This is partially why the current interface is a bit clunky.)

If you feel like this interface can be adapted in a straightforward manner to the general case, then I'd be very happy to upgrade =). An extension to support serializing without compression (like #99) would be a cherry on top!


paberr commented Feb 26, 2020

My only worry is about generalizing that design to work for all fields/curves, instead of just BLS12-377.

I think it might work in a generalised way. Off the top of my head, the only reason we didn't generalise it in our implementation was a Rust restriction on implementing traits for structs defined outside the crate. I'll have a look into it tomorrow. :)
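
The restriction in question is presumably Rust's orphan rule: a crate may only implement a trait for a type if it defines the trait or the type. A minimal sketch of the problem and the usual newtype workaround, with all names hypothetical:

```rust
// Pretend this module is a foreign crate that defines both a trait and a type.
mod foreign {
    pub trait ForeignSerialize {
        fn write_be(&self) -> Vec<u8>;
    }
    pub struct ForeignPoint(pub u64);
}

// In a downstream crate, `impl foreign::ForeignSerialize for foreign::ForeignPoint`
// would be rejected by the orphan rule, because neither the trait nor the type
// is local. A newtype owned by the downstream crate makes the impl local:
pub struct WrappedPoint(pub foreign::ForeignPoint);

impl foreign::ForeignSerialize for WrappedPoint {
    fn write_be(&self) -> Vec<u8> {
        let inner = &self.0;
        inner.0.to_be_bytes().to_vec() // big-endian bytes of the inner value
    }
}
```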

If you feel like this interface can be adapted in a straightforward manner to the general case, then I'd be very happy to upgrade =). An extension to support serializing without compression (like #99) would be a cherry on top!

If I'm right and the interface works for the generalised case, extending it to support the uncompressed format should be straightforward.


gakonst commented Feb 27, 2020

Just chiming in to say that having support for uncompressed points as well would be great.


paberr commented Feb 27, 2020

In case you have feedback, I pushed my current state as a draft. ;)


gakonst commented Mar 4, 2020

Can this be closed? @Pratyush


burdges commented Mar 8, 2020

Is much lost by not using Read and Write now? I'm increasingly sure that Error, Read, and Write can all be "fixed", but I haven't yet taken the time to do so.


paberr commented Mar 8, 2020

The serialization is actually using Read and Write now.
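
As a minimal illustration of why moving to Read/Write loses nothing for the in-memory case: `Vec<u8>` implements `io::Write` and `&[u8]` implements `io::Read`, so byte-buffer round trips still work. The functions below are stand-ins, not the actual trait methods:

```rust
use std::io::{self, Read, Write};

// Stand-in for a serializer generic over any writer.
fn my_serialize<W: Write>(value: u64, mut writer: W) -> io::Result<()> {
    writer.write_all(&value.to_be_bytes())
}

// Stand-in for a deserializer generic over any reader.
fn my_deserialize<R: Read>(mut reader: R) -> io::Result<u64> {
    let mut buf = [0u8; 8];
    reader.read_exact(&mut buf)?;
    Ok(u64::from_be_bytes(buf))
}

fn main() -> io::Result<()> {
    let mut bytes = Vec::new();              // Vec<u8>: io::Write
    my_serialize(42, &mut bytes)?;
    let value = my_deserialize(&bytes[..])?; // &[u8]: io::Read
    assert_eq!(value, 42);
    Ok(())
}
```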
