There are a few places in the spec where octet strings are used either as input or as output, but not all bits in the string are relevant.
For instance:
Importing an HMAC key where the length is not a multiple of 8 bits
Exporting an HMAC key whose length is not a multiple of 8 bits
Deriving bits for ECDH, using a length that is not a multiple of 8 bits
The spec is ambiguous about exactly how that mechanism works. This could lead to implementation incompatibilities if users come to rely on the behavior chosen by a particular implementation.
For instance consider these scenarios:
Import an HMAC key using data = [0xff] and length=1 bit. When exporting that key, implementations could return any of the following key values (see the round-trip sketch after this list):
[0xff] (the exact octet string imported)
[0x80] (the unused bits having been zeroed out)
[0x84] (or any other value whose first bit is 1)
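For concreteness, here is a minimal sketch of that round trip in TypeScript using the Web Crypto API (assuming an environment exposing crypto.subtle that accepts a non-multiple-of-8 HMAC length at all); the contents of the exported octet are exactly what the spec currently leaves open.

```ts
// Sketch only: import an HMAC key whose declared length (1 bit) is shorter than
// the supplied octet string, then export it to see which octet comes back.
async function hmacRoundTrip(): Promise<Uint8Array> {
  const data = new Uint8Array([0xff]); // only the first bit is "relevant"

  const key = await crypto.subtle.importKey(
    "raw",
    data,
    { name: "HMAC", hash: "SHA-256", length: 1 }, // length in bits, not a multiple of 8
    true, // extractable, so the key can be exported again
    ["sign", "verify"]
  );

  // The spec does not say what the 7 unused bits must be here, so [0xff], [0x80],
  // or any other octet whose first bit is 1 could all come back today.
  return new Uint8Array(await crypto.subtle.exportKey("raw", key));
}
```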
When importing an HMAC key whose unused bits are not zero, should we consider treating this as an error, to catch potential misuse?
When deriving 12 bits for ECDH, it is natural for an implementation to return the same thing as if deriving 16 bits. However, there is nothing in the spec that mandates this. If another implementation decided to zero out those last 4 bits, users who had come to rely on the first behavior would see incompatible results.
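A minimal sketch of that scenario in TypeScript, again assuming crypto.subtle is available and that the implementation accepts a length that is not a multiple of 8; whether the last 4 bits of bits12 match bits16 or are zeroed out is the unspecified part:

```ts
// Sketch only: derive 12 bits and 16 bits from the same ECDH agreement and
// compare the resulting octet strings.
async function ecdhDeriveComparison(): Promise<void> {
  const alice = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" }, false, ["deriveBits"]);
  const bob = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" }, false, ["deriveBits"]);

  const params = { name: "ECDH", public: bob.publicKey };

  // Both results are 2 octets long; only the first 12 bits of the first one are
  // relevant, and the spec does not say what the remaining 4 bits must be.
  const bits12 = new Uint8Array(await crypto.subtle.deriveBits(params, alice.privateKey, 12));
  const bits16 = new Uint8Array(await crypto.subtle.deriveBits(params, alice.privateKey, 16));

  console.log(bits12, bits16);
}
```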
My recommendation is to mandate that, when returning an octet string, unused bits be set to zero.
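In other words (a hypothetical helper, not anything in the spec or the API), the proposed rule amounts to masking the final octet before handing the string to the caller:

```ts
// Hypothetical helper illustrating the proposed rule: keep the first `bitLength`
// bits of the octet string and force every remaining bit to zero.
function zeroUnusedBits(octets: Uint8Array, bitLength: number): Uint8Array {
  const out = octets.slice(0, Math.ceil(bitLength / 8));
  const usedBitsInLastOctet = bitLength % 8;
  if (usedBitsInLastOctet !== 0 && out.length > 0) {
    // Keep the high-order used bits of the last octet and clear the rest.
    const mask = (0xff << (8 - usedBitsInLastOctet)) & 0xff;
    out[out.length - 1] &= mask;
  }
  return out;
}

// With this rule, importing [0xff] with length=1 and exporting it would always yield [0x80].
console.log(zeroUnusedBits(new Uint8Array([0xff]), 1));        // Uint8Array [ 0x80 ]
console.log(zeroUnusedBits(new Uint8Array([0xab, 0xcd]), 12)); // Uint8Array [ 0xab, 0xc0 ]
```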
mwatson2 added a commit to mwatson2/webcrypto that referenced this issue on May 24, 2016 ("Bug 27402 from Bugzilla", with the issue text above repeated as the commit message).