Text ULID parsing squashes high bits #9
Comments
As per specification:
What's the time stamp you're encoding?
After some discussion in my team, we realised that the string representation contains 130 bits (5 bits/character * 26 characters), whereas the binary form is 128 bits. This means there are two bits of redundancy in the first character. We encountered this by using a library that randomly generated string ULIDs without regard for the current timestamp. In my view, a ULID whose first character is not between 0 and 7 should be invalid, since it encodes bits that will be ignored in the transition to binary. I'll file an issue with the spec and ask for it to be mentioned explicitly.
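The squashing described above can be sketched with a minimal, hypothetical Crockford base32 decoder (not the code of any particular library): 26 characters carry 130 bits, and masking down to 128 bits silently drops the top two.

```python
# Crockford base32 alphabet used by the ULID text encoding.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def decode_ulid(text):
    """Decode a 26-character ULID string to a 128-bit integer,
    the way a typical binary-oriented decoder does (sketch)."""
    value = 0
    for ch in text:
        value = (value << 5) | ALPHABET.index(ch)
    # 26 chars * 5 bits = 130 bits; the top 2 bits are silently dropped.
    return value & ((1 << 128) - 1)

# Two distinct strings that decode to the same 128-bit value,
# because '7' (0b00111) and 'Z' (0b11111) differ only in the dropped bits:
assert decode_ulid("7" + "Z" * 25) == decode_ulid("Z" + "Z" * 25)
```

This is why the comment above argues that a first character outside 0-7 should be rejected: accepting it means two different strings can name the same binary ULID.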
@alizain Do you have opinions on this issue? Edit: Oh, I see you wrote in the README
Which I guess means we need to make a fix.
Yup
From what I understood in my tests and in oklog#9, the max possible base32 timestamp is `7ZZZZZZZZZ`. Consequently, the current max year in the README is wrong.
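A short sketch of why `7ZZZZZZZZZ` is the maximum timestamp: the ULID timestamp field is 48 bits, but its 10-character base32 form holds 50 bits, so the first character is capped at `7` (its top two bits must be zero). The calendar year below is an approximation using the mean Gregorian year length.

```python
# Crockford base32 alphabet used by the ULID text encoding.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def decode_timestamp(text):
    """Decode the 10-character timestamp prefix to milliseconds (sketch)."""
    value = 0
    for ch in text:
        value = (value << 5) | ALPHABET.index(ch)
    return value

max_ms = decode_timestamp("7ZZZZZZZZZ")
assert max_ms == (1 << 48) - 1  # exactly the 48-bit maximum

# Approximate calendar year of the maximum timestamp
# (can't use datetime here: it tops out at year 9999).
approx_year = int(1970 + max_ms / 1000 / (365.2425 * 24 * 3600))
print(approx_year)  # ~10889
```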
This issue was discovered by @larsyencken in valohai/ulid2#4; ulid2 uses the same decoding code as this library. Quoting that issue:
I'm wondering whether this is a bug or a limitation of the encoding.
The same issue is reproducible using this library, too.
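One possible fix, following the suggestion earlier in the thread, is to validate strings before decoding. This is a hypothetical strict validator, not the API of this library or of ulid2: it requires exactly 26 Crockford base32 characters with a first character in 0-7, so the encoded value always fits in 128 bits.

```python
import re

# Crockford base32 excludes I, L, O, and U; the first character is
# restricted to 0-7 so no bits are discarded when decoding to 128 bits.
ULID_RE = re.compile(r"^[0-7][0-9A-HJKMNP-TV-Z]{25}$")

def is_valid_ulid(text):
    """Return True only for canonical 26-character ULID strings (sketch)."""
    return ULID_RE.fullmatch(text) is not None

assert is_valid_ulid("7" + "Z" * 25)          # maximum canonical ULID
assert not is_valid_ulid("Z" + "Z" * 25)      # encodes bits beyond 128
assert not is_valid_ulid("0" + "Z" * 24)      # wrong length
```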