Cleanup int, uint, uint64, Int, Uint, UInt, UVarint, Uint64, etc. #615
Comments
@icorderi, yes, this was left as a TODO. Will put up a PR to clean up these types. Your PR will be welcomed too ❤️
I'm not aware of |
Another thing is: we are using |
^^^ +1 to @Kubuxu 's suggestion -- as I see it, CBOR is the lingua franca of Filecoin data exchange, and we should focus on staying within the rules of type encoding to CBOR, since we frequently then calculate CIDs that depend on that encoding.
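Since CIDs are computed over the canonical byte encoding, what matters most is that every implementation emits the same bytes for the same value, regardless of in-memory width. Below is a minimal Go sketch (not from this thread or the spec) of canonical CBOR unsigned-integer encoding (major type 0, shortest form), just to make that dependency concrete:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeCBORUint encodes an unsigned integer as a canonical CBOR data item
// (major type 0), always using the shortest form.
func encodeCBORUint(v uint64) []byte {
	switch {
	case v < 24:
		return []byte{byte(v)} // value fits directly in the initial byte
	case v <= 0xff:
		return []byte{0x18, byte(v)}
	case v <= 0xffff:
		b := []byte{0x19, 0, 0}
		binary.BigEndian.PutUint16(b[1:], uint16(v))
		return b
	case v <= 0xffffffff:
		b := []byte{0x1a, 0, 0, 0, 0}
		binary.BigEndian.PutUint32(b[1:], uint32(v))
		return b
	default:
		b := []byte{0x1b, 0, 0, 0, 0, 0, 0, 0, 0}
		binary.BigEndian.PutUint64(b[1:], v)
		return b
	}
}

func main() {
	// The same logical value always yields the same bytes, so any hash/CID
	// computed over the encoding is stable no matter which language-level
	// integer type held the value in memory.
	fmt.Printf("%x\n", encodeCBORUint(10))  // 0a
	fmt.Printf("%x\n", encodeCBORUint(500)) // 1901f4
}
```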
Thanks for bringing this up. Will take a look this week. (Some quick notes.)
Potentially three paths:
(cc @jzimmerman @whyrusleeping, who both likely care about this)
Negative numbers have similar ranges.
I think it might be clearer to separate the question of in-memory representation (and hence the types listed in the fields) from the question of serialization:
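As an illustration of that separation (the type and field names here are hypothetical, not from the spec): the in-memory type can stay a plain 64-bit integer while the wire encoding is defined independently by the serialization layer.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// GasAmount is a made-up field type: in memory it is simply a uint64,
// independent of how it is written to the wire.
type GasAmount uint64

// serialize encodes the value as an unsigned varint; the wire format is a
// property of the serialization layer, not of the in-memory representation.
func (g GasAmount) serialize() []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, uint64(g))
	return buf[:n]
}

func main() {
	g := GasAmount(300)
	fmt.Printf("in memory: 8 bytes; on the wire: %d bytes (%x)\n",
		len(g.serialize()), g.serialize())
}
```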
+1 for @jzimmerman 's approach.
Important note: to define the proofs correctly, it is crucial to have u8, u32, and u64 as distinct types with guaranteed representation.
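To make the "guaranteed representation" point concrete, here is a hedged sketch (in Go for consistency with the other examples here; the struct and field names are invented) of a structure whose byte layout is fully fixed by distinct fixed-width types plus an explicitly chosen byte order:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// proofInput is a hypothetical proof-input struct: every field has a fixed
// width, so the byte-level representation is fully determined (no
// platform-sized ints, no implicit widening).
type proofInput struct {
	Version        uint8
	ChallengeCount uint32
	SectorSize     uint64
}

// marshal writes the fields with an explicitly chosen byte order. Which
// order (LE vs BE) applies is exactly the kind of thing the spec has to pin down.
func (p proofInput) marshal() []byte {
	var buf bytes.Buffer
	binary.Write(&buf, binary.LittleEndian, p.Version)
	binary.Write(&buf, binary.LittleEndian, p.ChallengeCount)
	binary.Write(&buf, binary.LittleEndian, p.SectorSize)
	return buf.Bytes()
}

func main() {
	b := proofInput{Version: 1, ChallengeCount: 10, SectorSize: 1 << 30}.marshal()
	fmt.Printf("%d bytes: %x\n", len(b), b) // always 1 + 4 + 8 = 13 bytes
}
```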
@dignifiedquire Could that happen at the API level? I.e., in standard CPU code outside of proofs, the values are of type Int (64-bit), but we perform a check that they fit within the appropriate bit width before generating the proof, and the generation/verification process errors out if not?
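A minimal sketch of what such an API-boundary check could look like (hypothetical names; this is not the actual proofs API):

```go
package main

import (
	"fmt"
	"math"
)

// checkFitsUint32 validates at the API boundary that a 64-bit value fits in
// the narrower width the proof expects, instead of silently truncating.
func checkFitsUint32(name string, v uint64) (uint32, error) {
	if v > math.MaxUint32 {
		return 0, fmt.Errorf("%s = %d does not fit in u32", name, v)
	}
	return uint32(v), nil
}

func main() {
	if _, err := checkFitsUint32("challenge_index", 1<<40); err != nil {
		// Proof generation/verification would error out here rather than proceed.
		fmt.Println("rejected:", err)
	}
}
```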
I don’t follow, I’m afraid. I am talking about the algorithms that make up the proofs.
…and the data structures they use.
Ah, I see -- yes, I don't think there's an issue with having these distinct integer types internal to the algorithm implementations; the concern was mainly regarding the data structures that need to be passed between APIs and/or go on chain.
The spec has uppercase names, lowercase names, newtype definitions, encoding expectations, and size bounds all used indiscriminately throughout the doc.
We need to finalize the expected size bound and encoding for each numerical type.
We proposed the following set of number types:

- `Varint64` (64-bit number, encoded cheaply for low values)
- `UVarint64` (64-bit unsigned number, encoded cheaply for low values)
- `BigInt` (arbitrary-length integer, currently using LEB128 encoding)
- `[U]Varint{16-32}` to complete the family (this doesn't affect chain storage, but it puts memory size bounds and helps advise implementations on expected sizes; e.g. a `UVarint64` for a `MethodID` inside an actor seems a bit much)
- `uint{8-64}`, `int{8-64}` (these are NOT varlen-encoded; LE/BE needs to be specified in the encoding section of the spec)