Rng lifetimes #1066

Closed
wants to merge 4 commits into base: master

Conversation

@phil-levis
Collaborator

phil-levis commented Jul 2, 2018

Pull Request Overview

This pull request updates the RNG HIL to bring it in line with more modern HIL traits. In particular, it moves set_client() into the HIL, which enables more generic virtualization approaches; to support this, the trait also takes a lifetime parameter.

It also updates the documentation for the RNG HIL, fixing a few typos and commenting on the expected vs. guaranteed entropy of each sample.
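For illustration, a minimal sketch of the shape this change could take (hypothetical code, not the exact trait in this PR): the trait carries a lifetime for its stored client reference, and set_client() is part of the trait itself rather than left to each implementation.

// Hypothetical sketch (not the exact Tock trait): the lifetime 'a lets an
// implementation store a reference to its client, and set_client() lives in
// the HIL so virtualizers can swap clients generically.
pub trait Rng<'a> {
    /// Register the client that receives randomness_available() callbacks.
    fn set_client(&self, client: &'a dyn Client);
    /// Ask the generator to start producing random values.
    fn get(&self);
}

pub trait Client {
    /// Called when random words are ready; the return value says whether
    /// the client wants more randomness.
    fn randomness_available(&self, randomness: &mut dyn Iterator<Item = u32>) -> Continue;
}

#[derive(PartialEq)]
pub enum Continue {
    More,
    Done,
}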

Testing Strategy

This pull request was tested by compiling imix, hail, launchxl, ek-tm4c1294xl, nordic/nrf51dk, nordic/nrf52840dk, and nordic/nrf52dk.

TODO or Help Wanted

Documentation Updated

Documentation on the trait updated.

  • Updated the relevant files in /docs, or no updates are required.

Formatting

  • Ran make formatall.

phil-levis added some commits Jul 2, 2018

Two changes to RNG HIL that bring it in line with modern HILs:
  - set_client is part of the HIL, rather than implementation-specific
  - include a lifetime in the trait signature, for the client reference
//!
//! A random number generator produces a stream of random numbers, either from
//! hardware or based on an initial seed. The [RNG](trait.RNG.html) trait
//! provides a simple, implementation-agnostic interface for getting new random
//! values. Cryptosystems should use this interface with care, as it does not
//! provide strong entropy guarantees.

@ppannuto

ppannuto Jul 2, 2018

Member

Should we consider renaming this interface to make this more explicit? There's been a bit of a backlash in other communities where historically there were random and secure_random APIs, and many folks understandably, but mistakenly, use the former. Perhaps we have a chance to do better :)

I'm not sure what the better name here is. I don't think it's exactly insecure_rng or low_entropy_rng or no_promises_rng, as those are too negative. Maybe we should try to define the strong entropy guarantees that we would imagine a complementary interface providing and use the delta to think about this further?

@phil-levis

phil-levis Jul 2, 2018

Collaborator

One of the big issues here is the tradeoffs of different kinds of randomness. If you are not using the random functions for cryptographic operations (e.g., for random backoff), entropy guarantees aren't really needed.

If we want to be very careful about this, we could use types such that only high-entropy randomness is fed to cryptographic operations.

@ppannuto

ppannuto Jul 2, 2018

Member

Using types to enforce high entropy for crypto operations is really cool! (Albeit somewhat orthogonal to this PR)

Should this interface return MediumEntropyBytes that implement Into for u32, or is that just needlessly pedantic?
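A hypothetical sketch of the type-level idea (MediumEntropy, HighEntropy, and derive_key_word are illustrative names, not Tock API):

// Hypothetical sketch: newtype wrappers mark the entropy level of a sample,
// so the compiler rejects feeding raw, low-entropy RNG output into crypto.
pub struct MediumEntropy(pub u32);
pub struct HighEntropy(pub u32);

// Non-cryptographic users (e.g., random backoff) can still treat a
// medium-entropy sample as a plain u32.
impl From<MediumEntropy> for u32 {
    fn from(sample: MediumEntropy) -> u32 {
        sample.0
    }
}

// A cryptographic consumer accepts only the high-entropy type; producing a
// HighEntropy value would require whitening/conditioning first.
fn derive_key_word(seed: HighEntropy) -> u32 {
    seed.0
}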

/// trait should try to ensure that there is at least 0.25 bits of
/// entropy per generated bit, such that sampling 4x the bits needed
/// can generate sufficient entropy, but cryptosystems should verify
/// this is true for each implementation they rely on.

@ppannuto

ppannuto Jul 2, 2018

Member

I'm somewhat ignorant in the details of this field. Is 0.25 per bit a well-known standard that many things provide? Do our current chips meet this requirement?

More generally, "implementors should" is maybe too soft. Do (should?) we want to make some guarantee like "implementations accepted in Tock master are certified by the Tock team to meet this requirement"?

@phil-levis

phil-levis Jul 2, 2018

Collaborator

For most random number generators I've seen, they provide well above 0.25. The one exception is the MK66, which produces only 1-2 bits of entropy for a 32-bit sample. I chose 4x because it seemed like a reasonable tradeoff between caller and callee complexity. I.e., most implementations will not have to whiten their output, and most callers will only need to sample 4x before hashing.
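As a concrete worked example of that 4x rule (illustrative arithmetic only, words_to_sample is a hypothetical helper): a caller that needs 32 bits of entropy at the 0.25 bits-per-bit floor samples 32 / 0.25 = 128 bits, i.e. four u32 words, and then hashes them down.

// Illustrative arithmetic only (hypothetical helper, not Tock code).
fn words_to_sample(entropy_bits_needed: u32, entropy_per_bit: f32) -> u32 {
    // bits to sample = bits of entropy needed / entropy per generated bit
    let bits_to_sample = (entropy_bits_needed as f32 / entropy_per_bit).ceil() as u32;
    // Round up to whole 32-bit words to request from the RNG.
    (bits_to_sample + 31) / 32
}

// words_to_sample(32, 0.25) == 4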

@ppannuto

ppannuto Jul 2, 2018

Member

In that case, it feels like a line in the sand of 0.25 might encourage a weird duplicated effort of whitening happening both below and above this interface. Should we instead perhaps just report the entropy/bit available from this RNG and let higher layers do with that what they will?

I could then envision a whitening capsule that takes in a bitstream of X entropy/bit and returns one at Y entropy/bit.
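A hypothetical sketch of that alternative (EntropySource, entropy_per_bit, and WhiteningCapsule are illustrative names, not a proposed Tock API):

// Hypothetical sketch: the source reports its own entropy density, and a
// whitening layer wraps a source with X bits of entropy per bit and exposes
// one with higher density by condensing multiple raw samples.
pub trait EntropySource {
    /// Approximate bits of entropy per generated bit, in (0.0, 1.0].
    fn entropy_per_bit(&self) -> f32;
    /// Produce the next raw 32-bit sample.
    fn next_word(&mut self) -> u32;
}

pub struct WhiteningCapsule<S: EntropySource> {
    source: S,
}

impl<S: EntropySource> EntropySource for WhiteningCapsule<S> {
    fn entropy_per_bit(&self) -> f32 {
        // Claim (close to) full entropy after conditioning.
        1.0
    }
    fn next_word(&mut self) -> u32 {
        // Condense enough raw words to accumulate ~32 bits of entropy.
        let n = (1.0 / self.source.entropy_per_bit()).ceil() as usize;
        // Placeholder mixing; a real capsule would use a hash or extractor.
        (0..n).fold(0u32, |acc, _| acc.rotate_left(5) ^ self.source.next_word())
    }
}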

/// already in progress.
/// This method will cause `randomness_available` to be called on the Client
/// at some point in the near future. Multiple calls to this method before
/// `randomness_available` is called will result in only one callback.

@ppannuto

ppannuto Jul 2, 2018

Member

Should a repeat call perhaps instead return EALREADY? Keeps a one call -> one callback pattern?

@phil-levis

phil-levis Jul 2, 2018

Collaborator

Yes -- a return code would be great.

@phil-levis

phil-levis Jul 2, 2018

Collaborator

Actually, wait. Why should there be a one call -> one callback pattern? If the semantics are "SUCCESS means you will get a callback", then EALREADY implies you will not get a callback. EALREADY is for cases of actuation (you are changing something) rather than acquisition. It's a very useful idiom to start something if it isn't already started, and if it is started, just keep on going. E.g., you want to turn on a peripheral: trying to turn it on several times is OK.
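A hypothetical sketch of the semantics being argued for here (SimpleRng and the ReturnCode variants are illustrative): repeated get() calls while an acquisition is pending all return SUCCESS and collapse into the single outstanding callback.

// Hypothetical sketch: SUCCESS always means "a callback is coming"; repeated
// calls while a request is outstanding just join the pending acquisition.
#[derive(Clone, Copy, PartialEq)]
pub enum ReturnCode { SUCCESS, EALREADY, FAIL }

pub struct SimpleRng {
    busy: core::cell::Cell<bool>,
}

impl SimpleRng {
    pub fn get(&self) -> ReturnCode {
        if !self.busy.get() {
            self.busy.set(true);
            // ... kick off hardware or seed-based generation here ...
        }
        // SUCCESS always means a randomness_available() callback is coming;
        // repeated calls join the one outstanding acquisition. Under the
        // alternative semantics discussed above, the already-busy case would
        // instead return ReturnCode::EALREADY.
        ReturnCode::SUCCESS
    }
}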

@ppannuto

ppannuto Jul 3, 2018

Member

Should it perhaps then be an error to change the client if there is an outstanding callback pending?

I agree that it's reasonable for a continuation interface to allow repeated start calls. The motivation for the one -> one pattern in my mind is trying to ensure the contract of "if you get a SUCCESS, then your callback will be called."

@phil-levis

phil-levis Jul 3, 2018

Collaborator

NB: I've moved a larger discussion to #1069. I'll close this PR.

I agree -- if you get a SUCCESS then your callback will be called. However, this is a many -> one pattern, because there's only a single callback. Otherwise there's implicit queueing.

An error code for a client change is an interesting thought. I'm not sure this is always an error, though. For example, suppose you have a virtualizer that's trying to feed randomness to multiple clients. It might, in the randomness_available callback, change the client before returning Continue::More. Or, if a higher-priority client request comes in, and it uses a helper structure to handle the callbacks, it might want to change the callback for an outstanding acquisition. So while I think that there are cases when changing the client mid-acquisition would be a bug, it could also be the correct behavior; we probably don't want the implementation to make assumptions about what its caller does.
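A hypothetical sketch of that virtualizer case (MuxRngClient and its fields are illustrative, not Tock code): the mux switches the active virtual client from inside the callback and returns Continue::More to keep the underlying acquisition running.

// Hypothetical sketch: a mux delivers randomness to whichever virtual client
// is currently being serviced, then rotates to the next pending client, so
// changing the effective client mid-acquisition is intentional, not a bug.
#[derive(PartialEq)]
pub enum Continue { More, Done }

pub trait Client {
    fn randomness_available(&self, randomness: &mut dyn Iterator<Item = u32>) -> Continue;
}

pub struct MuxRngClient<'a> {
    clients: Vec<&'a dyn Client>,
    current: core::cell::Cell<usize>,
}

impl<'a> Client for MuxRngClient<'a> {
    fn randomness_available(&self, randomness: &mut dyn Iterator<Item = u32>) -> Continue {
        // Feed the client whose request is currently being serviced.
        let idx = self.current.get();
        let done = self.clients[idx].randomness_available(randomness) == Continue::Done;
        if done && idx + 1 < self.clients.len() {
            // Switch to the next pending client and keep the underlying
            // acquisition (and its callback stream) going.
            self.current.set(idx + 1);
            return Continue::More;
        }
        if done { Continue::Done } else { Continue::More }
    }
}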

phil-levis closed this Jul 3, 2018
