Save memory and tighten up CBC processing
This commit makes two changes:

* Convert our CBC padding checking to use a bit-fiddling generated mask instead of pre-generated masks. Previously we generated 256 masks of 255 bytes each at init time and used those masks to validate CBC padding. This works, but ends up costing 32k of memory. In this commit we convert to using a bit-fiddling mask, similar to how we do constant-time copies.

* Add and use s2n_hmac_digest_two_compression_rounds(). Since we launched, several bug reporters (including Martin R. Albrecht and Kenny Paterson from Royal Holloway, University of London) got in touch to point out that s2n_hmac_digest() does not run in constant time: its duration varies depending on the length of the padding. If the length of the data section covered by the MAC leaves fewer than 8 bytes spare in the hash block used by HMAC, then the underlying hash function will add and compress an additional hash block when _digest() is called. This doesn't leak a measurable timing side-channel, because of the additional timing blinding in s2n_recv.c (s2n adds between 1ms and 10 seconds of delay in the event of an error, which raises the number of trials required to measure any signal by at least a factor of 83 trillion, and more likely renders it completely unmeasurable). But it's still worth tightening up here. Previously we'd been thinking that doing anything here would involve "opening up" the hash function in ways that prevent us from using hardware hash acceleration, but Martin R. Albrecht and Kenny Paterson had a good idea: count and use compression rounds explicitly, which is what this change goes with.
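A minimal sketch of the bit-fiddling mask idea from the first change (the function names `ct_eq_mask`, `ct_ge_mask`, and `ct_check_padding` are illustrative, not s2n's actual identifiers): instead of indexing into a 32k table of pre-generated masks, each mask byte is derived from the data itself via unsigned-underflow tricks, so no branch or table lookup depends on secret values.

```c
#include <stdint.h>

/* Returns 0xff when a == b, 0x00 otherwise, without branching on the values. */
static uint8_t ct_eq_mask(uint8_t a, uint8_t b)
{
    uint32_t x = a ^ b;            /* zero only when the bytes are equal */
    return (uint8_t)((x - 1) >> 8); /* underflows to all-ones only when x == 0 */
}

/* Returns 0xff when a >= b, 0x00 otherwise (valid for a, b < 2^31). */
static uint8_t ct_ge_mask(uint32_t a, uint32_t b)
{
    return (uint8_t)(((a - b) >> 31) - 1); /* a - b underflows only when a < b */
}

/* Constant-time check of SSL/TLS-style CBC padding, where the last byte is
 * the pad length and every pad byte must equal that length. Always scans the
 * same fixed window so the timing is independent of the pad value. (A real
 * implementation also validates pad against the record length; this sketch
 * assumes pad < len.) Returns 1 if the padding is valid, 0 otherwise. */
static int ct_check_padding(const uint8_t *decrypted, uint32_t len)
{
    uint8_t pad = decrypted[len - 1];
    uint8_t failed = 0;

    for (uint32_t i = 1; i < len && i <= 255; i++) {
        uint8_t in_pad = ct_ge_mask(pad, i); /* 0xff when byte i is in the pad */
        failed |= in_pad & (decrypted[len - 1 - i] ^ pad);
    }
    return failed == 0;
}
```

The key property is that every byte in the scan window contributes the same amount of work whether or not it belongs to the padding; the mask simply zeroes out the comparison result for bytes outside the claimed pad region.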
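To make the second change concrete, here is a hypothetical model (not s2n's actual code) of why the extra compression round appears and how counting rounds fixes it. It assumes a 64-byte-block hash such as SHA-1 or SHA-256, whose Merkle-Damgård finalization appends a mandatory 0x80 marker byte plus an 8-byte message length; this is what makes the digest cost depend on how full the last block is.

```c
#include <stdint.h>

/* Hypothetical model: finalization needs 9 bytes (0x80 marker + 8-byte
 * length), so if more than 64 - 9 = 55 bytes of the current block are
 * already filled, the length spills into a second block and _digest()
 * performs two compression rounds instead of one. */
static int final_compression_rounds(uint32_t currently_in_hash_block)
{
    return ((currently_in_hash_block % 64) > 55) ? 2 : 1;
}

/* The idea behind s2n_hmac_digest_two_compression_rounds(): when only one
 * round would run, compress one extra dummy block so every digest costs
 * exactly two rounds, making the timing independent of the padding length. */
static int extra_compression_rounds_needed(uint32_t currently_in_hash_block)
{
    return 2 - final_compression_rounds(currently_in_hash_block);
}
```

Because the round count is derived only from how many bytes sit in the current hash block, the equalizing dummy compression can be done through the hash function's normal update interface, without "opening up" its internals or giving up hardware acceleration.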
Showing with 36 additions and 27 deletions.