murmur3 128 x64 support #34
Closed
I'm closing issues about adding new hash algorithms. It's clear that lots of them are missing, and there is no point in having separate issues for each of them. Feel free to submit PRs if you need one implemented.
vmx pushed a commit that referenced this issue on Sep 30, 2020:
* feat: derive from Code enum

  Instead of deriving a Multihash enum, derive from a Code enum. This commit also combines the previously generated `Multihash` enum with the `RawMultihash` struct. There is now a single `Multihash` type which is generic over its digest size. The derived `Code` enum is used for hashing; the `Multihash` only contains static, general data.

  Closes #12, #13.

* fix: don't clone the digest

* feat: introduce `max_size` attribute

  The digests are stack allocated. We cannot determine the biggest digest size that is used at compile time, hence you need to specify the maximum allocated size your custom code table should use via `#[mh(max_size = …)]` on the enum you derive the table from.

* fix: enable the arb module again

* fix: properly use local versions of imports

  This way you only need to import the derive and not also additional (seemingly unrelated) types.

* fix: make Serde support work again

* fix: make Parity Scale Codec support work again

* feat: create MultihashCode trait

  Instead of just implementing it on the code enum, use a trait instead. This way other libraries can create digests without knowing the concrete type.
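To make the design in that commit message concrete, here is a minimal hand-written sketch of what the derive might expand to: a stack-allocated `Multihash` with a fixed maximum digest size (the role of the `max_size` attribute) and a `MultihashCode` trait implemented on a code-table enum. All names, signatures, and the toy identity hasher are illustrative assumptions, not the crate's actual generated API.

```rust
// Illustrative stand-in for #[mh(max_size = 32)] on a derived code table.
const MAX_SIZE: usize = 32;

// Stack-allocated multihash: only static, general data, as the commit describes.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Multihash {
    code: u64,            // multicodec code of the hash algorithm
    size: u8,             // actual digest length in bytes
    digest: [u8; MAX_SIZE], // fixed-capacity buffer, no heap allocation
}

// Trait so other libraries can create digests without knowing the
// concrete code-table type (hypothetical shape of `MultihashCode`).
trait MultihashCode {
    fn digest(&self, input: &[u8]) -> Multihash;
}

// A toy code table with a single "identity" entry: the digest is the
// input itself, truncated to MAX_SIZE. Purely for illustration.
#[derive(Clone, Copy)]
enum Code {
    Identity,
}

impl MultihashCode for Code {
    fn digest(&self, input: &[u8]) -> Multihash {
        match self {
            Code::Identity => {
                let mut digest = [0u8; MAX_SIZE];
                let size = input.len().min(MAX_SIZE);
                digest[..size].copy_from_slice(&input[..size]);
                Multihash { code: 0x00, size: size as u8, digest }
            }
        }
    }
}
```

A real murmur3 variant would slot in as another `Code` arm whose `digest` body runs the hash, which is roughly the shape a PR for this issue would take.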
To implement HAMT, a murmur3 128-bit x64 hash is required.
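For reference, the requested variant can be sketched from scratch. The following is an unreviewed transcription of the public-domain MurmurHash3 x64 128-bit algorithm (constants and mixing steps from the reference description); a real PR should be validated against the C++ reference implementation's test vectors.

```rust
// Finalization mix: forces all bits of a hash lane to avalanche.
fn fmix64(mut k: u64) -> u64 {
    k ^= k >> 33;
    k = k.wrapping_mul(0xff51afd7ed558ccd);
    k ^= k >> 33;
    k = k.wrapping_mul(0xc4ceb9fe1a85ec53);
    k ^= k >> 33;
    k
}

// MurmurHash3 x64 128-bit: returns the two 64-bit halves (h1, h2).
fn murmur3_x64_128(data: &[u8], seed: u64) -> (u64, u64) {
    const C1: u64 = 0x87c37b91114253d5;
    const C2: u64 = 0x4cf5ba1d87700a4a;
    let (mut h1, mut h2) = (seed, seed);
    let nblocks = data.len() / 16;

    // Body: process 16-byte blocks as two little-endian u64 lanes.
    for i in 0..nblocks {
        let b = &data[i * 16..i * 16 + 16];
        let mut k1 = u64::from_le_bytes(b[0..8].try_into().unwrap());
        let mut k2 = u64::from_le_bytes(b[8..16].try_into().unwrap());

        k1 = k1.wrapping_mul(C1).rotate_left(31).wrapping_mul(C2);
        h1 ^= k1;
        h1 = h1.rotate_left(27).wrapping_add(h2).wrapping_mul(5).wrapping_add(0x52dce729);

        k2 = k2.wrapping_mul(C2).rotate_left(33).wrapping_mul(C1);
        h2 ^= k2;
        h2 = h2.rotate_left(31).wrapping_add(h1).wrapping_mul(5).wrapping_add(0x38495ab5);
    }

    // Tail: fold up to 15 remaining bytes into k1/k2.
    let tail = &data[nblocks * 16..];
    let (mut k1, mut k2) = (0u64, 0u64);
    for i in (0..tail.len()).rev() {
        if i >= 8 {
            k2 ^= (tail[i] as u64) << ((i - 8) * 8);
        } else {
            k1 ^= (tail[i] as u64) << (i * 8);
        }
    }
    if tail.len() > 8 {
        k2 = k2.wrapping_mul(C2).rotate_left(33).wrapping_mul(C1);
        h2 ^= k2;
    }
    if !tail.is_empty() {
        k1 = k1.wrapping_mul(C1).rotate_left(31).wrapping_mul(C2);
        h1 ^= k1;
    }

    // Finalization: mix in the length and cross-mix the two halves.
    let len = data.len() as u64;
    h1 ^= len;
    h2 ^= len;
    h1 = h1.wrapping_add(h2);
    h2 = h2.wrapping_add(h1);
    h1 = fmix64(h1);
    h2 = fmix64(h2);
    h1 = h1.wrapping_add(h2);
    h2 = h2.wrapping_add(h1);
    (h1, h2)
}
```

Note that for an empty input with seed 0 every step is a no-op, so the result is (0, 0); that and determinism are cheap sanity checks, but they are no substitute for the reference test vectors.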