One of the few methods of encrypting a message from Alice to Bob such that no third party (Eve) can possibly decrypt it is the one-time pad. This technique requires that Alice and Bob share a secret sequence of bits, S. Alice encrypts her plaintext, P, by computing its exclusive-or with S to produce the ciphertext, C: C = S ⊕ P. Bob can then decrypt by XORing again: P = S ⊕ C.
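Concretely, the scheme can be sketched in a few lines of Python (the helper below is illustrative only, not part of dit):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Exclusive-or two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"attack at dawn"

# S: a uniformly random pad, as long as the message, shared only by Alice and Bob.
pad = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(pad, plaintext)  # C = S xor P
recovered = xor_bytes(pad, ciphertext)  # P = S xor C = S xor (S xor P)

assert recovered == plaintext
```

Because the pad is uniformly random and used only once, the ciphertext alone carries no information about the plaintext.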
In order to pull this off, Alice and Bob need to construct S out of some sort of joint randomness, p(x, y, z), and public communication, V, which is assumed to have perfect fidelity. The maximum rate at which S can be constructed is the secret key agreement rate.
Given N IID copies of a joint distribution governed by p(x, y, z), let X^N denote the random variables observed by Alice, Y^N denote the random variables observed by Bob, and Z^N denote the random variables observed by Eve. Furthermore, let S[X : Y || Z] be the maximum rate R such that, for N > 0, ϵ > 0, some public communication V, and functions f and g:
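In the standard formulation of this definition, following Maurer and Wolf, these conditions read:

.. math::

   S &= f(X^N, V) \\
   S' &= g(Y^N, V) \\
   p(S \neq S') &\leq \epsilon \\
   I[S : V Z^N] &\leq \epsilon \\
   \frac{1}{N} H[S] &\geq R - \epsilon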
Intuitively, this means there exists some procedure such that, for every N observations, Alice and Bob can publicly converse and then construct S bits which agree almost surely, and are almost surely independent of everything Eve has access to. S is then known as a secret key.
The first lower bound on the secret key agreement rate is known in dit as the :py:func:`lower_intrinsic_mutual_information`, and is given by:
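Assuming the standard Maurer–Wolf form of this lower bound:

.. math::

   S[X : Y || Z] \geq \max\left\{ I[X : Y] - I[X : Z],\ I[X : Y] - I[Y : Z],\ 0 \right\}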
Next is the secrecy capacity:
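In the Csiszár–Körner form (assumed here), with U an auxiliary variable satisfying the indicated Markov conditions:

.. math::

   \max\left\{ \max_{U - X - YZ} I[U : Y] - I[U : Z],\ \max_{U - Y - XZ} I[U : X] - I[U : Z] \right\}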
This gives the secret key agreement rate when communication is not allowed.
A tighter bound is given by the :py:func:`necessary_intrinsic_mutual_information` :cite:`gohari2017achieving`:
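Following :cite:`gohari2017achieving`, with auxiliary variables U and V satisfying the indicated Markov conditions:

.. math::

   \max\left\{ \sup_{V - U - X - YZ} I[U : Y | V] - I[U : Z | V],\ \sup_{V - U - Y - XZ} I[U : X | V] - I[U : Z | V] \right\}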
This quantity is actually equal to the secret key agreement rate when communication is limited to being unidirectional.
The secret key agreement rate is trivially upper bounded by:
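Both the mutual information and the conditional mutual information upper bound the rate, so:

.. math::

   S[X : Y || Z] \leq \min\left\{ I[X : Y],\ I[X : Y | Z] \right\}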
The :py:func:`intrinsic_mutual_information` :cite:`maurer1997intrinsic` is defined as:
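That is, the mutual information between X and Y, conditioned on Z after Eve has passed it through an arbitrary channel :math:`p(\overline{z} | z)`:

.. math::

   I[X : Y \downarrow Z] = \min_{p(\overline{z} | z)} I[X : Y | \overline{Z}]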
It is straightforward to see that :math:`I[X : Y \downarrow Z] \leq I[X : Y | Z]`, since the identity channel :math:`\overline{Z} = Z` is among those minimized over.
This bound can be improved, producing the :py:func:`reduced_intrinsic_mutual_information` :cite:`renner2003new`:
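Following :cite:`renner2003new`, this minimizes over an auxiliary variable U:

.. math::

   I[X : Y \downarrow\downarrow Z] = \min_{U} I[X : Y \downarrow Z U] + H[U]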
This bound improves upon the intrinsic mutual information when a small amount of additional information, U, can produce a larger decrease in the amount of information shared between X and Y conditioned on Z and U.
The reduced intrinsic mutual information can be further reduced into the :py:func:`minimal_intrinsic_total_correlation` :cite:`gohari2017comments`:
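Following :cite:`gohari2017comments`:

.. math::

   I[X : Y \downarrow\downarrow\downarrow Z] = \min_{U} I[X : Y | U] + I[X Y : U | Z]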
Taken together, we see the following structure:
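Assuming the forms of the bounds above, the upper bounds nest as follows:

.. math::

   S[X : Y || Z] \leq I[X : Y \downarrow\downarrow\downarrow Z] \leq I[X : Y \downarrow\downarrow Z] \leq I[X : Y \downarrow Z] \leq I[X : Y | Z]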
Most of the above bounds have straightforward multivariate generalizations. These are not necessarily bounds on the multiparty secret key agreement rate. For example, one could compute the :py:func:`minimal_intrinsic_dual_total_correlation`:
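By direct analogy with the bivariate form (replacing the mutual information with the dual total correlation, B), this would presumably be:

.. math::

   B[X_0 : \cdots : X_n \downarrow\downarrow\downarrow Z] = \min_{U} B[X_0 : \cdots : X_n | U] + I[X_0 \cdots X_n : U | Z]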
Let us consider a few examples:
In [1]: from dit.multivariate.secret_key_agreement import *
In [2]: from dit.example_dists.intrinsic import intrinsic_1, intrinsic_2, intrinsic_3
First, we consider the distribution ``intrinsic_1``:
In [3]: print(intrinsic_1)
Class:          Distribution
Alphabet:       ('0', '1', '2', '3') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       None

x      p(x)
000    1/8
011    1/8
101    1/8
110    1/8
222    1/4
333    1/4
With upper bounds:
@doctest float
In [4]: upper_intrinsic_mutual_information(intrinsic_1, [[0], [1]], [2])
Out[4]: 0.5
We see that the trivial upper bound is 0.5. Without conditioning on Z, X and Y can agree whenever they observe either a 2 or a 3, which results in I[X : Y] = 1.5; conditioning on Z, however, reveals the 2-3 regime entirely, leaving I[X : Y | Z] = 0.5.
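These two quantities can be verified by hand from the table above using only the standard library (a quick sanity check, independent of dit):

```python
# By-hand check of the trivial bounds for intrinsic_1; the distribution is
# copied from the table above.
from collections import defaultdict
from math import log2

dist = {'000': 1/8, '011': 1/8, '101': 1/8, '110': 1/8,
        '222': 1/4, '333': 1/4}

def marginal(dist, idxs):
    """Marginalize the joint distribution onto the given variable indices."""
    out = defaultdict(float)
    for outcome, p in dist.items():
        out[''.join(outcome[i] for i in idxs)] += p
    return dict(out)

def entropy(dist):
    """Shannon entropy, in bits."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def H(idxs):
    return entropy(marginal(dist, idxs))

# I[X:Y] = H[X] + H[Y] - H[X,Y]
i_xy = H([0]) + H([1]) - H([0, 1])

# I[X:Y|Z] = H[X,Z] + H[Y,Z] - H[X,Y,Z] - H[Z]
i_xy_g_z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])

print(i_xy, i_xy_g_z)  # 1.5 0.5
```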
The :py:func:`intrinsic_mutual_information`, however, can detect this:
@doctest float
In [5]: intrinsic_mutual_information(intrinsic_1, [[0], [1]], [2])
Out[5]: 0.0
Next, let's consider the distribution ``intrinsic_2``:
In [7]: print(intrinsic_2)
Class:          Distribution
Alphabet:       (('0', '1', '2', '3'), ('0', '1', '2', '3'), ('0', '1'))
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       None

x      p(x)
000    1/8
011    1/8
101    1/8
110    1/8
220    1/4
331    1/4
In this case, Z can no longer distinguish the case where X and Y can agree on a secret bit from the case where they can not, because she can not determine whether they are in the 0-1 regime or in the 2-3 regime:
@doctest float
In [8]: intrinsic_mutual_information(intrinsic_2, [[0], [1]], [2])
Out[8]: 1.5
This seems to imply that X and Y can adopt a scheme such as: if they observe either a 0 or a 1, write down 0, and if they observe either a 2 or a 3, write down the observed value. This has a weakness, however: what if Z were able to distinguish the two regimes? This would cost her 1 bit, but it reduces the secrecy of X and Y to nil. Thus, the secret key agreement rate is actually only 1 bit:
@doctest float
In [9]: minimal_intrinsic_mutual_information(intrinsic_2, [[0], [1]], [2], bounds=(3,))
Out[9]: 1.0