
Efficacy of jitterentropy RNG in Xen #6

Closed
adrelanos opened this issue Apr 14, 2019 · 10 comments

@adrelanos

commented Apr 14, 2019

Hello,

we at Qubes OS are wondering [1] about the efficacy of entropy daemons like haveged and jitterentropyd in Xen.

One of the authors of haveged [2] pointed out that if the hardware cycle counter is emulated, it is deterministic and thus predictable. He therefore does not recommend using haveged on such systems. Is this the case with the counters Xen provides?

Any differences in Xen dom0 vs Xen guests?

Would you discourage use of jitterentropy in any type of Xen or non-Xen VMs?
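For what it's worth, this seems straightforward to probe empirically: read a high-resolution timer in a tight loop and look at the deltas between consecutive reads. A minimal sketch in Python (using time.perf_counter_ns as a stand-in for the hardware cycle counter that jitterentropy actually reads; under a fully emulated, deterministic counter the deltas would collapse to a handful of fixed values):

```python
import time

def timer_deltas(samples: int = 1000) -> list[int]:
    """Deltas between consecutive high-resolution timer reads, in ns."""
    reads = [time.perf_counter_ns() for _ in range(samples)]
    return [b - a for a, b in zip(reads, reads[1:])]

deltas = timer_deltas()
# A real cycle counter shows jitter: many distinct delta values.
# An emulated, deterministic counter would show (almost) none.
print("distinct deltas:", len(set(deltas)))
```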

Kind regards,
Patrick

[1] QubesOS/qubes-issues#673
[2] BetterCrypto/Applied-Crypto-Hardening@cf7cef7#commitcomment-23006392

@smuellerDD

This comment has been minimized.

Owner

commented Apr 15, 2019

@HulaHoopWhonix


commented May 2, 2019

Hi @smuellerDD, I ran these tests on KVM and uploaded the files as you described:

https://phabricator.whonix.org/T817#18429

Can you please give your expert opinion on the results?

@smuellerDD

This comment has been minimized.

Owner

commented May 5, 2019

@adrelanos

Author

commented May 22, 2019

Could you please also look into the VirtualBox / Qubes test results that @TNTBOMBOM submitted earlier by e-mail?

@smuellerDD

This comment has been minimized.

Owner

commented May 22, 2019

@TNTBOMBOM

This comment has been minimized.


commented May 25, 2019

@adrelanos

Author

commented Jun 6, 2019

Did you have a chance to look into it?

@smuellerDD

Owner

commented Jun 6, 2019

Sorry for the delay, but I sent an email on May 25. Here is another copy of the answer.

Qubes:

The first test, "Test-Results", shows that the heuristic validating whether the
underlying platform is sufficient for the Jitter RNG detected no insufficiency
during 10,000 test runs. Check.

The file foldtime.O0 contains test results for the non-optimized binary code
that is the basis for the Jitter RNG. To understand what it shows, we have to
understand what the Jitter RNG really does: it simply measures the execution
time of a fixed code fragment. The test does the same, i.e. it measures what
the Jitter RNG would measure. Each time delta is simply recorded.

Each time delta is expected to contribute entropy to the entropy pool. But how
much? We can use the SP800-90B tool set provided by NIST at [1]. This tool,
however, can only process input data with a window size of a few bits at most.
Thus, we take the 4 LSBs of each time delta, hoping that they already contain
sufficient entropy.
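That extraction step can be sketched in a few lines (a sketch, not the actual test harness: one 4-bit value per output byte, the symbol format the SP800-90B tool accepts when invoked with 4 bits per symbol):

```python
def low_nibbles(deltas: list[int]) -> bytes:
    """Keep only the 4 least significant bits of each time delta."""
    return bytes(d & 0xF for d in deltas)

# Three example time deltas (in timer ticks):
symbols = low_nibbles([1234567, 1234601, 1234650])
# symbols == bytes([7, 9, 10]): one 4-bit symbol per byte for the NIST tool
```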

Using the tool [1], we get the following output:

Running non-IID tests...

Running Most Common Value Estimate...
Bitstring MCV Estimate: mode = 2005620, p-hat = 0.50140499999999999, p_u =
0.50204895486400081
Most Common Value Estimate (bit string) = 0.994100 / 1 bit(s)

Running Entropic Statistic Estimates (bit strings only)...
Bitstring Collision Estimate: X-bar = 2.5010973564651491, sigma-hat =
0.49999895212561996, p = 0.5
Collision Test Estimate (bit string) = 1.000000 / 1 bit(s)
Bitstring Markov Estimate: P_0 = 0.50140499999999999, P_1 =
0.49859500000000001, P_0,0 = 0.50032309211116766, P_0,1 = 0.49967690788883234,
P_1,0 = 0.50249325729964067, P_1,1 = 0.49750674270035933, p_max =
3.86818991019963e-39
Markov Test Estimate (bit string) = 0.996903 / 1 bit(s)
Bitstring Compression Estimate: X-bar = 5.2170320393664023, sigma-hat =
1.0146785561878935, p = 0.025847044943319686
Compression Test Estimate (bit string) = 0.878976 / 1 bit(s)

Running Tuple Estimates...
Bitstring t-Tuple Estimate: t = 18, p-hat_max = 0.52360109960331436, p_u =
0.52424433922577907
Bitstring LRS Estimate: u = 19, v = 42, p-hat = 0.50001215824001477, p_u =
0.50065611564620627
T-Tuple Test Estimate (bit string) = 0.931689 / 1 bit(s)
LRS Test Estimate (bit string) = 0.998108 / 1 bit(s)

Running Predictor Estimates...
Bitstring MultiMCW Prediction Estimate: N = 3999937, Pglobal' =
0.50046008453798463 (C = 1999233) Plocal can't affect result (r = 24)
Multi Most Common in Window (MultiMCW) Prediction Test Estimate (bit
string) = 0.998673 / 1 bit(s)
Bitstring Lag Prediction Estimate: N = 3999999, Pglobal' = 0.50117058226135014
(C = 2002106) Plocal can't affect result (r = 22)
Lag Prediction Test Estimate (bit string) = 0.996626 / 1 bit(s)
Bitstring MultiMMC Prediction Estimate: N = 3999998, Pglobal' =
0.50240995443366221 (C = 2007063) Plocal can't affect result (r = 21)
Multi Markov Model with Counting (MultiMMC) Prediction Test Estimate
(bit string) = 0.993063 / 1 bit(s)
Bitstring LZ78Y Prediction Estimate: N = 3999983, Pglobal' =
0.50195008712868949 (C = 2005216) Plocal can't affect result (r = 24)
LZ78Y Prediction Test Estimate (bit string) = 0.994384 / 1 bit(s)

h': 0.878976

  • as we analyzed 4 bits of each time delta, we get 4 * 0.878976 = 3.515904
    bits of entropy per four bit time delta

  • assuming the worst case that all other bits in the time delta have no
    entropy, we have 3.515904 bits of entropy per time delta

  • the Jitter RNG gathers 64 time deltas for returning 64 bits of random data
    and it uses an LFSR with a primitive and irreducible polynomial which is
    entropy preserving. Thus, the Jitter RNG collected 64 * 3.515904 = 225.017856
    bits of entropy for its 64 bit output.

  • as the Jitter RNG maintains a 64 bit entropy pool, its entropy content
    cannot be larger than the pool itself. Thus, the entropy content in the pool
    after collecting 64 time deltas is min(64 bits, 225.017856 bits) = 64 bits
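The accounting in the bullets above fits in a few lines of Python (h' taken from the tool output; min() makes the pool cap explicit):

```python
def pool_entropy(h_prime: float, analyzed_bits: int = 4,
                 deltas: int = 64, pool_bits: int = 64) -> float:
    """Worst-case entropy in the Jitter RNG pool after collecting `deltas`
    time deltas, crediting only the analyzed low bits of each delta."""
    per_delta = analyzed_bits * h_prime   # 4 * 0.878976 = 3.515904 bits
    collected = deltas * per_delta        # 64 * 3.515904 = 225.017856 bits
    return min(pool_bits, collected)      # pool cannot exceed its own size

print(pool_entropy(0.878976))  # 64: full entropy for the 64-bit output
```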

This implies that the Jitter RNG data has (close to) one bit of entropy per
data bit, i.e. (close to) 64 bits of entropy in its 64-bit output.

Bottom line: When the Jitter RNG injects 64 bits of data into the Linux /dev/
random via the IOCTL, it is appropriate that the entropy estimator increases
by 64 bits.
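For illustration (a sketch of the ioctl payload, not the jitterentropy daemon's actual source): RNDADDENTROPY takes a struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }, where entropy_count is the entropy credit in bits. Packing it in Python:

```python
import struct

def rand_pool_info(entropy_bits: int, data: bytes) -> bytes:
    """Pack a struct rand_pool_info for the RNDADDENTROPY ioctl."""
    pad = (-len(data)) % 4                 # buf is an array of __u32
    payload = data + b"\x00" * pad
    return struct.pack("ii", entropy_bits, len(payload)) + payload

# 64 bits (8 bytes) of Jitter RNG output, credited with 64 bits of entropy:
blob = rand_pool_info(64, bytes(8))
# Injecting it requires root, along the lines of:
#   fcntl.ioctl(os.open("/dev/random", os.O_WRONLY), RNDADDENTROPY, blob)
```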

Bottom line: From my perspective, I see no issue in using the Jitter RNG as a
noise source in your environments.

Note: applying the Shannon entropy formula to the data would yield much higher
entropy values.

Note II: This assessment complies with the entropy assessments to be done for
a NIST FIPS 140-2 validation, compliant with FIPS 140-2 IG 7.15.

[1] https://github.com/usnistgov/SP800-90B_EntropyAssessment

@smuellerDD

Owner

commented Jun 6, 2019

VirtualBox:

The first test, "Test-Results", shows that the heuristic validating whether the
underlying platform is sufficient for the Jitter RNG detected no insufficiency
during 10,000 test runs. Check.

The file foldtime.O0 contains test results for the non-optimized binary code
that is the basis for the Jitter RNG. To understand what it shows, we have to
understand what the Jitter RNG really does: it simply measures the execution
time of a fixed code fragment. The test does the same, i.e. it measures what
the Jitter RNG would measure. Each time delta is simply recorded.

Each time delta is expected to contribute entropy to the entropy pool. But how
much? We can use the SP800-90B tool set provided by NIST at [1]. This tool,
however, can only process input data with a window size of a few bits at most.
Thus, we take the 4 LSBs of each time delta, hoping that they already contain
sufficient entropy.

Using the tool [1], we get the following output:

Running non-IID tests...

Running Most Common Value Estimate...
Bitstring MCV Estimate: mode = 178623, p-hat = 0.50175000000000003, p_u = 0.50390853967110549
Most Common Value Estimate (bit string) = 0.988766 / 1 bit(s)

Running Entropic Statistic Estimates (bit strings only)...
Bitstring Collision Estimate: X-bar = 2.5001404573290635, sigma-hat = 0.50000173599752984, p = 0.54045131268728142
Collision Test Estimate (bit string) = 0.887763 / 1 bit(s)
Bitstring Markov Estimate: P_0 = 0.49825000000000003, P_1 = 0.50174999999999992, P_0,0 = 0.49755041521730553, P_0,1 = 0.50244958478269441, P_1,0 = 0.49894749806854699, P_1,1 = 0.50105250193145301, p_max = 3.8517512617334335e-39
Markov Test Estimate (bit string) = 0.996951 / 1 bit(s)
Bitstring Compression Estimate: X-bar = 5.2120367779353138, sigma-hat = 1.0184962286867389, p = 0.038585884056466568
Compression Test Estimate (bit string) = 0.782631 / 1 bit(s)

Running Tuple Estimates...
Bitstring t-Tuple Estimate: t = 14, p-hat_max = 0.52410145813830322, p_u = 0.52625750185056885
Bitstring LRS Estimate: u = 15, v = 34, p-hat = 0.50180998874502603, p_u = 0.50396852749416765
T-Tuple Test Estimate (bit string) = 0.926159 / 1 bit(s)
LRS Test Estimate (bit string) = 0.988594 / 1 bit(s)

Running Predictor Estimates...
Bitstring MultiMCW Prediction Estimate: N = 355937, Pglobal' = 0.50271923500407356 (C = 178168) Plocal can't affect result (r = 21)
Multi Most Common in Window (MultiMCW) Prediction Test Estimate (bit string) = 0.992175 / 1 bit(s)
Bitstring Lag Prediction Estimate: N = 355999, Pglobal' = 0.50257007320276437 (C = 178146) Plocal can't affect result (r = 19)
Lag Prediction Test Estimate (bit string) = 0.992603 / 1 bit(s)
Bitstring MultiMMC Prediction Estimate: N = 355998, Pglobal' = 0.50382428667215096 (C = 178592) Plocal can't affect result (r = 19)
Multi Markov Model with Counting (MultiMMC) Prediction Test Estimate (bit string) = 0.989007 / 1 bit(s)
Bitstring LZ78Y Prediction Estimate: N = 355983, Pglobal' = 0.50316286151811906 (C = 178349) Plocal can't affect result (r = 20)
LZ78Y Prediction Test Estimate (bit string) = 0.990903 / 1 bit(s)

h': 0.782631

  • as we analyzed 4 bits of each time delta, we get 4 * 0.782631 = 3.130524
    bits of entropy per four bit time delta

  • assuming the worst case that all other bits in the time delta have no
    entropy, we have 3.130524 bits of entropy per time delta

  • the Jitter RNG gathers 64 time deltas for returning 64 bits of random data
    and it uses an LFSR with a primitive and irreducible polynomial which is
    entropy preserving. Thus, the Jitter RNG collected 64 * 3.130524 = 200.353536
    bits of entropy for its 64 bit output.

  • as the Jitter RNG maintains a 64 bit entropy pool, its entropy content
    cannot be larger than the pool itself. Thus, the entropy content in the pool
    after collecting 64 time deltas is min(64 bits, 200.353536 bits) = 64 bits

This implies that the Jitter RNG data has (close to) one bit of entropy per
data bit, i.e. (close to) 64 bits of entropy in its 64-bit output.

Bottom line: When the Jitter RNG injects 64 bits of data into the Linux /dev/
random via the IOCTL, it is appropriate that the entropy estimator increases
by 64 bits.

Bottom line: From my perspective, I see no issue in using the Jitter RNG as a
noise source in your environments.

Note: applying the Shannon entropy formula to the data would yield much higher
entropy values.

Note II: This assessment complies with the entropy assessments to be done for
a NIST FIPS 140-2 validation, compliant with FIPS 140-2 IG 7.15.

[1] https://github.com/usnistgov/SP800-90B_EntropyAssessment

@smuellerDD

Owner

commented Jun 13, 2019

I think all questions have been answered, so I am closing the issue. If anything remains open, please reopen.
