`check_trailing_bits` still errors at certain lengths for base32 #28
robyoder (issue author):

When using `spec.check_trailing_bits = false;` with the base32 spec, I would expect any string of base32 characters to decode successfully, with any extra bits ignored. But this is not the case: for strings with a length congruent to 1, 3, or 6 (mod 8), a `DecodeError` is returned. (The example code and its output were shared via a Playground link.)

I understand that by the canonical definition in the RFC, the second input is invalid base32, but so is the third, and I would expect them to be treated the same when it comes to `check_trailing_bits`. So for this example, I would expect the second line in the output to be identical to the first. Is the current behavior in this circumstance intentional, or is it simply a matter of a length check being done first without concern for `check_trailing_bits`?

For reference, Google Authenticator for Android follows my expectation in this regard, as does Authy for iOS (in my own testing). In the comment linked there you can see that a "string of sixteen 7s ("7...7") and seventeen 7s both decode to the same byte array", which is not the case here.
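To make the reported behavior concrete, here is a minimal sketch (not the original playground code, which did not survive extraction) assuming the data-encoding crate's `Specification` API:

```rust
use data_encoding::Specification;

fn main() {
    let mut spec = Specification::new();
    spec.symbols.push_str("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567");
    spec.check_trailing_bits = false; // ignore non-zero trailing bits
    let base32 = spec.encoding().unwrap();

    // Lengths congruent to 0, 2, 4, 5, or 7 (mod 8) decode fine...
    assert!(base32.decode(b"77777777").is_ok()); // 8 chars
    assert!(base32.decode(b"7777777777777777").is_ok()); // 16 chars
    // ...but lengths congruent to 1, 3, or 6 (mod 8) still fail
    // with a length error, despite check_trailing_bits being false.
    assert!(base32.decode(b"77777777777777777").is_err()); // 17 chars
}
```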
Maintainer:

Thank you @robyoder for the report. This is actually expected behavior. The `check_trailing_bits` setting only applies to inputs whose length is valid for the encoding; lengths congruent to 1, 3, or 6 (mod 8) cannot be produced by encoding any number of bytes, so they are rejected by the length check first. However, there is a way to decode inputs of invalid lengths, using the `position` field of the returned `DecodeError`:

```rust
use data_encoding::{DecodeError, DecodeKind, Encoding};

fn truncate_decode(encoding: &Encoding, input: &[u8]) -> Result<Vec<u8>, DecodeError> {
    match encoding.decode(input) {
        // On a length error, `position` is the greatest valid input
        // length, so retry on that prefix only.
        Err(DecodeError { position, kind: DecodeKind::Length }) => {
            encoding.decode(&input[..position])
        }
        output => output,
    }
}
```
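As a usage sketch (assuming data-encoding's `BASE32` constant and the documented `position` semantics for length errors), this recovers the sixteen/seventeen-7s example from the report above:

```rust
use data_encoding::BASE32;

fn main() {
    // Seventeen '7's: length 17 is congruent to 1 (mod 8), so a
    // plain decode fails with a length error...
    let input = b"77777777777777777";
    assert!(BASE32.decode(input).is_err());
    // ...but truncating to the sixteen-character prefix decodes to
    // ten 0xFF bytes, matching the Google Authenticator behavior.
    assert_eq!(truncate_decode(&BASE32, input).unwrap(), vec![0xff; 10]);
}
```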
robyoder:

Sure, we can work around it, and we will, but probably without running the decode function twice. My question is this: what is the benefit of having `check_trailing_bits` apply to some invalid inputs but not others?

As for what produces such strings: one such algorithm would be a naive secret generator that selects characters at random from the base32 alphabet. The final character in the string may represent 1-4 trailing bits, or a full 5 extra bits. I'm curious what kinds of algorithms the current behavior is intended to support.
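A sketch of the naive generator described above (the `rand` dependency and the helper name are assumptions for illustration, not from the thread):

```rust
use rand::Rng; // assumed dependency (rand 0.8)

const BASE32_ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

/// Picks `n` base32 characters at random. For `n` congruent to 1, 3,
/// or 6 (mod 8), the result has a length that data-encoding always
/// rejects, regardless of `check_trailing_bits`.
fn naive_secret(n: usize) -> String {
    let mut rng = rand::thread_rng();
    (0..n)
        .map(|_| BASE32_ALPHABET[rng.gen_range(0..BASE32_ALPHABET.len())] as char)
        .collect()
}
```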
Maintainer:

You're not running the decode function twice; you're just calling it twice on invalid input, and one call returns immediately. The `check_trailing_bits` option controls a single decision in the implementation, not a class of use-cases.

The customization provided by the library is meant to configure the implementation, not to adapt to use-cases. This is because the number of possible use-cases is unbounded, while the number of decisions taken in the implementation is finite. So the right design, and the one currently taken by the library, is to provide fine-grained control over the implementation; it is then the job of the user to define and use the specification adapted to their use-case. The current configurations were added to permit replicating the behavior of other implementations (like the GNU base32 program).

The algorithm you describe doesn't need this configuration to be efficiently implemented, because `encode_len` tells you how many characters a valid encoding of a given number of bytes contains:

```rust
use data_encoding::Encoding;

fn generate(base: &Encoding, len: usize) -> Vec<u8> {
    // encode_len(len) is a valid encoded length for `len` bytes.
    let input = vec![b'F'; base.encode_len(len)]; // Randomize content.
    base.decode(&input).unwrap()
}
```
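For completeness, a hedged usage sketch: building the permissive spec from the report and generating a 16-character secret. The `unwrap` relies on `check_trailing_bits` being disabled, since the placeholder content may leave non-zero trailing bits:

```rust
use data_encoding::Specification;

fn main() {
    let mut spec = Specification::new();
    spec.symbols.push_str("ABCDEFGHIJKLMNOPQRSTUVWXYZ234567");
    spec.check_trailing_bits = false;
    let base = spec.encoding().unwrap();
    // Ten bytes encode to exactly encode_len(10) == 16 characters.
    let secret = base.encode(&generate(&base, 10));
    assert_eq!(secret.len(), 16);
}
```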