.. py:module:: dit.multivariate.secret_key_agreement

Secret Key Agreement
====================

One of the only methods of encrypting a message from Alice to Bob such that no third party (Eve) can possibly decrypt it is the one-time pad. This technique requires that Alice and Bob share a secret sequence of bits, :math:`S`, with which Alice encrypts the plaintext, :math:`P`, by computing the exclusive-or of the two to produce the ciphertext, :math:`C`: :math:`C = S \oplus P`. Bob can then decrypt by xoring again: :math:`P = S \oplus C`.
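
The mechanics are worth seeing in code. Below is a minimal pure-Python sketch of the one-time pad (an illustration only, not part of ``dit``):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Exclusive-or two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))  # the shared secret S

ciphertext = xor_bytes(key, plaintext)     # C = S xor P
recovered = xor_bytes(key, ciphertext)     # P = S xor C

assert recovered == plaintext
```

The scheme is information-theoretically secure only when the key is uniformly random, at least as long as the message, and never reused; the remainder of this section concerns how Alice and Bob can obtain such a key in the first place.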

In order to pull this off, Alice and Bob need to construct :math:`S` out of some sort of joint randomness, :math:`p(x, y, z)`, and public communication, :math:`V`, which is assumed to have perfect fidelity. The maximum rate at which :math:`S` can be constructed is the secret key agreement rate.

Background
----------

Given :math:`N` IID copies of a joint distribution governed by :math:`p(x, y, z)`, let :math:`X^N` denote the random variables observed by Alice, :math:`Y^N` those observed by Bob, and :math:`Z^N` those observed by Eve. A secret key agreement scheme consists of functions :math:`f` and :math:`g`, as well as a protocol for public communication (producing :math:`V`), and is considered :math:`R`-achievable if:

.. math::

   S_X = f(X^N, V) \\
   S_Y = g(Y^N, V) \\
   p(S_X = S_Y = S) \geq 1 - \epsilon \\
   \I{S : V Z^N} \leq \epsilon \\
   \frac{1}{N} \H{S} \geq R - \epsilon

The maximum rate :math:`R` such that there exists an :math:`R`-achievable scheme is known as the secret key agreement rate. Intuitively, this means there exists some procedure such that, for every :math:`N` observations, Alice and Bob can publicly converse and then construct :math:`S` bits which agree almost surely, and which are almost surely independent of everything Eve has access to. :math:`S` is then known as a secret key.

There are three general classes of secret key agreement rates, depending on which parties are permitted to communicate. We discuss them below.

.. py:module:: dit.multivariate.secret_key_agreement.no_communication

No Communication
----------------

In the case that neither Alice nor Bob is permitted to communicate, the no-communication secret key agreement rate is given by:

.. math::

   \operatorname{S}[X : Y || Z] = \I{X \meet Y | Z}

where :math:`X \meet Y` is the :ref:`Gács-Körner Common Information` variable.

.. py:module:: dit.multivariate.secret_key_agreement.secrecy_capacity

Secrecy Capacity
----------------

Consider the situation in which no party is allowed to communicate, but rather than passively observing :math:`p(x, y, z)`, Alice has full access to driving the channel :math:`p(y, z | x)`. In this case we arrive at a maximum secret key agreement rate known as the secrecy capacity, which is given by:

.. math::

   \operatorname{S_C}[X \rightarrow Y || Z] = \max_{U - X - YZ} \I{U : Y} - \I{U : Z}

.. py:module:: dit.multivariate.secret_key_agreement.one_way_skar

One-Way Communication
---------------------

If only Alice is allowed to publicly broadcast information, the secret key agreement rate is given by:

.. math::

   \operatorname{S}[X \rightarrow Y || Z] = \max_{V - U - X - YZ} \I{U : Y | V} - \I{U : Z | V}

.. py:module:: dit.multivariate.secret_key_agreement.two_way_skar

Two-Way Communication
---------------------

When both Alice and Bob are permitted communication, the secret key agreement rate, :math:`\operatorname{S}[X \leftrightarrow Y || Z]`, is much more difficult to compute, and in fact only upper and lower bounds on this rate are known.

Lower Bounds
~~~~~~~~~~~~

The first few lower bounds on the two-way secret key agreement rate are simply symmetrized forms of the more restricted secret key agreement rates.

.. py:module:: dit.multivariate.secret_key_agreement.trivial_bounds

Lower Intrinsic Mutual Information
**********************************

The first lower bound on the secret key agreement rate is known in ``dit`` as the :py:func:`lower_intrinsic_mutual_information`, and is given by:

.. math::

   \I{X : Y \uparrow Z} = \max
      \begin{cases}
         \I{X : Y} - \I{X : Z} \\
         \I{X : Y} - \I{Y : Z} \\
         0
      \end{cases}

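
As a concrete check, this bound is simple enough to evaluate directly. The sketch below (plain Python, independent of the ``dit`` implementation) computes it for the joint pmf of the ``intrinsic_1`` distribution used in the examples at the end of this page:

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy (in bits) of a pmf given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginalize a joint pmf over tuple outcomes down to the given indices."""
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idxs)] += p
    return dict(out)

def mi(pmf, X, Y):
    """Mutual information I(X : Y) = H(X) + H(Y) - H(XY)."""
    return (entropy(marginal(pmf, X)) + entropy(marginal(pmf, Y))
            - entropy(marginal(pmf, X + Y)))

# Joint pmf of intrinsic_1 over (x, y, z).
intrinsic_1 = {
    ('0', '0', '0'): 1/8, ('0', '1', '1'): 1/8,
    ('1', '0', '1'): 1/8, ('1', '1', '0'): 1/8,
    ('2', '2', '2'): 1/4, ('3', '3', '3'): 1/4,
}

X, Y, Z = (0,), (1,), (2,)
lower = max(mi(intrinsic_1, X, Y) - mi(intrinsic_1, X, Z),
            mi(intrinsic_1, X, Y) - mi(intrinsic_1, Y, Z),
            0)
```

Here every correlation Alice and Bob share is equally visible to Eve (:math:`\I{X : Y} = \I{X : Z} = \I{Y : Z} = 1.5` bits), so the bound evaluates to zero.
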
.. py:module:: dit.multivariate.secret_key_agreement.skar_lower_bounds

Secrecy Capacity
****************

Next is the secrecy capacity:

.. math::

   \I{X : Y \uparrow\uparrow Z} = \max
      \begin{cases}
         \displaystyle \max_{U - X - YZ} \I{U : Y} - \I{U : Z} \\
         \displaystyle \max_{U - Y - XZ} \I{U : X} - \I{U : Z}
      \end{cases}

This gives the secret key agreement rate when communication is not allowed.

Necessary Intrinsic Mutual Information
**************************************

A tighter bound is given by the :py:func:`necessary_intrinsic_mutual_information` :cite:`gohari2017achieving`, which is the maximum of the two one-way secret key agreement rates:

.. math::

   \I{X : Y \uparrow\uparrow\uparrow Z} = \max
      \begin{cases}
         \displaystyle \max_{V - U - X - YZ} \I{U : Y | V} - \I{U : Z | V} \\
         \displaystyle \max_{V - U - Y - XZ} \I{U : X | V} - \I{U : Z | V}
      \end{cases}

.. py:module:: dit.multivariate.secret_key_agreement.interactive_intrinsic_mutual_informations

Interactive Intrinsic Mutual Information
****************************************

Finally, when interactive communication is allowed, with :math:`U_0, U_1, \ldots` the successive public messages, the corresponding bound is:

.. math::

   \I{X : Y \uparrow\uparrow\uparrow\uparrow Z} = \max
      \sum_{i \textrm{ even}} \I{U_i : Y | U_{0 \ldots i}} - \I{U_i : Z | U_{0 \ldots i}} + \\
      \sum_{i \textrm{ odd}}  \I{U_i : X | U_{0 \ldots i}} - \I{U_i : Z | U_{0 \ldots i}}

Upper Bounds
~~~~~~~~~~~~

.. py:module:: dit.multivariate.secret_key_agreement.trivial_bounds

Upper Intrinsic Mutual Information
**********************************

The secret key agreement rate is trivially upper bounded by:

.. math::

   \min\{ \I{X : Y}, \I{X : Y | Z} \}

.. py:module:: dit.multivariate.secret_key_agreement.intrinsic_mutual_informations

Intrinsic Mutual Information
****************************

The :py:func:`intrinsic_mutual_information` :cite:`maurer1997intrinsic` is defined as:

.. math::

   \I{X : Y \downarrow Z} = \min_{p(\overline{z} | z)} \I{X : Y | \overline{Z}}

It is straightforward to see that :math:`p(\overline{z} | z)` being a constant achieves :math:`\I{X : Y}`, and :math:`p(\overline{z} | z)` being the identity achieves :math:`\I{X : Y | Z}`.
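
Both endpoints are easy to verify numerically. The sketch below (self-contained Python, using a hypothetical toy distribution in which :math:`Y = X \oplus Z`) pushes :math:`Z` through a channel :math:`p(\overline{z} | z)` and evaluates the resulting conditional mutual information:

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy (in bits) of a pmf given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginalize a joint pmf over tuple outcomes down to the given indices."""
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idxs)] += p
    return dict(out)

def cmi(pmf):
    """I(X : Y | Z) = H(XZ) + H(YZ) - H(XYZ) - H(Z) for a pmf over (x, y, z)."""
    return (entropy(marginal(pmf, (0, 2))) + entropy(marginal(pmf, (1, 2)))
            - entropy(marginal(pmf, (0, 1, 2))) - entropy(marginal(pmf, (2,))))

def push_channel(pmf, channel):
    """Replace z with zbar drawn from channel[z] = {zbar: probability}."""
    out = defaultdict(float)
    for (x, y, z), p in pmf.items():
        for zbar, q in channel[z].items():
            out[(x, y, zbar)] += p * q
    return dict(out)

# Toy joint: X and Z are fair coins, Y = X xor Z.
toy = {(x, x ^ z, z): 1/4 for x in (0, 1) for z in (0, 1)}

identity = {0: {0: 1.0}, 1: {1: 1.0}}       # zbar = z
constant = {0: {'*': 1.0}, 1: {'*': 1.0}}   # zbar carries no information

i_identity = cmi(push_channel(toy, identity))  # recovers I(X : Y | Z) = 1 bit
i_constant = cmi(push_channel(toy, constant))  # recovers I(X : Y) = 0 bits
```

For this toy distribution the constant channel already drives the objective to zero, so the intrinsic mutual information vanishes: Eve can simply forget her observation, since :math:`X` and :math:`Y` are unconditionally independent.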

.. py:module:: dit.multivariate.secret_key_agreement.reduced_intrinsic_mutual_informations

Reduced Intrinsic Mutual Information
************************************

This bound can be improved, producing the :py:func:`reduced_intrinsic_mutual_information` :cite:`renner2003new`:

.. math::

   \I{X : Y \downarrow\downarrow Z} = \min_{U} \I{X : Y \downarrow ZU} + \H{U}

This bound improves upon the :ref:`Intrinsic Mutual Information` when a small amount of information, U, can result in a larger decrease in the amount of information shared between X and Y given Z and U.

.. py:module:: dit.multivariate.secret_key_agreement.minimal_intrinsic_mutual_informations

Minimal Intrinsic Mutual Information
************************************

The :ref:`Reduced Intrinsic Mutual Information` can be further reduced into the :py:func:`minimal_intrinsic_total_correlation` :cite:`gohari2017comments`:

.. math::

   \I{X : Y \downarrow\downarrow\downarrow Z} = \min_{U} \I{X : Y | U} + \I{XY : U | Z}

Two-Part Intrinsic Mutual Information
*************************************

This can be reduced yet further, producing the two-part intrinsic mutual information:

.. math::

   \I{X : Y \downarrow\downarrow\downarrow\downarrow Z} = \inf_{J} \min_{V - U - XY - ZJ} \I{X : Y | J} + \I{U : J | V} - \I{U : Z | V}

All Together Now
----------------

Taken together, we see the following structure:

.. math::

   \begin{align}
     &\min\{ \I{X : Y}, \I{X : Y | Z} \} \\
     &\quad \geq \I{X : Y \downarrow Z} \\
     &\quad\quad \geq \I{X : Y \downarrow\downarrow Z} \\
     &\quad\quad\quad \geq \I{X : Y \downarrow\downarrow\downarrow Z} \\
     &\quad\quad\quad\quad \geq \I{X : Y \downarrow\downarrow\downarrow\downarrow Z} \\
     &\quad\quad\quad\quad\quad \geq \operatorname{S}[X \leftrightarrow Y || Z] \\
     &\quad\quad\quad\quad\quad\quad \geq \I{X : Y \uparrow\uparrow\uparrow\uparrow Z} \\
     &\quad\quad\quad\quad\quad\quad\quad \geq \I{X : Y \uparrow\uparrow\uparrow Z} \\
     &\quad\quad\quad\quad\quad\quad\quad\quad \geq \I{X : Y \uparrow\uparrow Z} \\
     &\quad\quad\quad\quad\quad\quad\quad\quad\quad \geq \I{X : Y \uparrow Z} \\
     &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \geq \operatorname{S}[X : Y || Z] \\
     &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \geq 0
   \end{align}

Generalizations
---------------

Most of the above bounds have straightforward multivariate generalizations. These are not necessarily bounds on the multiparty secret key agreement rate. For example, one could compute the :py:func:`minimal_intrinsic_dual_total_correlation`:

.. math::

   \B{X_0 : \ldots : X_n \downarrow\downarrow\downarrow Z} = \min_{U} \B{X_0 : \ldots : X_n | U} + \I{X_0, \ldots, X_n : U | Z}

Examples
--------

Let us consider a few examples:

.. ipython::

   In [1]: from dit.multivariate.secret_key_agreement import *

   In [2]: from dit.example_dists.intrinsic import intrinsic_1, intrinsic_2, intrinsic_3

First, we consider the distribution ``intrinsic_1``:

.. ipython::

   In [3]: print(intrinsic_1)
   Class:          Distribution
   Alphabet:       ('0', '1', '2', '3') for all rvs
   Base:           linear
   Outcome Class:  str
   Outcome Length: 3
   RV Names:       None

   x     p(x)
   000   1/8
   011   1/8
   101   1/8
   110   1/8
   222   1/4
   333   1/4

With upper bounds:

.. ipython::

   @doctest float
   In [4]: upper_intrinsic_mutual_information(intrinsic_1, [[0], [1]], [2])
   Out[4]: 0.5

We see that the trivial upper bound is 0.5. Without conditioning on :math:`Z`, :math:`X` and :math:`Y` can agree whenever they observe either a 2 or a 3, which results in :math:`\I{X : Y} = 1.5`. Given :math:`Z`, however, that information is no longer private. But, given :math:`Z`, a conditional dependence is induced between :math:`X` and :math:`Y`: :math:`Z` knows that if she is a 0 then :math:`X` and :math:`Y` agree, and if she is a 1 they disagree. This results in :math:`\I{X : Y | Z} = 0.5`. In either case, however, :math:`X` and :math:`Y` cannot agree upon a secret key: in the first case the eavesdropper knows their correlation, while in the second they are actually independent.
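
Both of these values can be checked directly from the pmf. A stand-alone computation (plain Python rather than the ``dit`` API):

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy (in bits) of a pmf given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginalize a joint pmf over tuple outcomes down to the given indices."""
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idxs)] += p
    return dict(out)

# Joint pmf of intrinsic_1 over (x, y, z).
intrinsic_1 = {
    ('0', '0', '0'): 1/8, ('0', '1', '1'): 1/8,
    ('1', '0', '1'): 1/8, ('1', '1', '0'): 1/8,
    ('2', '2', '2'): 1/4, ('3', '3', '3'): 1/4,
}

def H(*idxs):
    """Entropy of the marginal over the given coordinates."""
    return entropy(marginal(intrinsic_1, idxs))

i_xy = H(0) + H(1) - H(0, 1)                          # I(X : Y) = 1.5
i_xy_given_z = H(0, 2) + H(1, 2) - H(0, 1, 2) - H(2)  # I(X : Y | Z) = 0.5
upper_bound = min(i_xy, i_xy_given_z)                 # = 0.5
```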

The :py:func:`intrinsic_mutual_information`, however, can detect this:

.. ipython::

   @doctest float
   In [5]: intrinsic_mutual_information(intrinsic_1, [[0], [1]], [2])
   Out[5]: 0.0
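
A channel :math:`p(\overline{z} | z)` witnessing this zero simply merges Eve's symbols 0 and 1 into a single symbol. A quick check (plain Python; ``dit`` finds such a channel by optimization):

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    """Shannon entropy (in bits) of a pmf given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idxs):
    """Marginalize a joint pmf over tuple outcomes down to the given indices."""
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idxs)] += p
    return dict(out)

def cmi(pmf):
    """I(X : Y | Z) = H(XZ) + H(YZ) - H(XYZ) - H(Z) for a pmf over (x, y, z)."""
    return (entropy(marginal(pmf, (0, 2))) + entropy(marginal(pmf, (1, 2)))
            - entropy(marginal(pmf, (0, 1, 2))) - entropy(marginal(pmf, (2,))))

intrinsic_1 = {
    ('0', '0', '0'): 1/8, ('0', '1', '1'): 1/8,
    ('1', '0', '1'): 1/8, ('1', '1', '0'): 1/8,
    ('2', '2', '2'): 1/4, ('3', '3', '3'): 1/4,
}

# Deterministic channel: collapse '0' and '1', keep '2' and '3' distinct.
merge = {'0': '*', '1': '*', '2': '2', '3': '3'}
merged = defaultdict(float)
for (x, y, z), p in intrinsic_1.items():
    merged[(x, y, merge[z])] += p

i_intrinsic = cmi(dict(merged))  # 0.0
```

After the merge, the 0/1 block of :math:`X` and :math:`Y` is unconditionally independent and the 2/3 block is fully known to Eve, so the conditional mutual information vanishes.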

Next, let's consider the distribution ``intrinsic_2``:

.. ipython::

   In [7]: print(intrinsic_2)
   Class:          Distribution
   Alphabet:       (('0', '1', '2', '3'), ('0', '1', '2', '3'), ('0', '1'))
   Base:           linear
   Outcome Class:  str
   Outcome Length: 3
   RV Names:       None

   x     p(x)
   000   1/8
   011   1/8
   101   1/8
   110   1/8
   220   1/4
   331   1/4

In this case, :math:`Z` can no longer distinguish between the case where :math:`X` and :math:`Y` can agree on a secret bit and the case where they cannot, because she cannot determine whether they are in the 01 regime or the 23 regime:

.. ipython::

   @doctest float
   In [8]: intrinsic_mutual_information(intrinsic_2, [[0], [1]], [2])
   Out[8]: 1.5

This seems to imply that X and Y can adopt a scheme such as: if they observe either a 0 or a 1, write down 0, and if they observe either a 2 or a 3, write that down. This has a weakness, however: what if Z were able to distinguish the two regimes? This costs her 1 bit, but reduces the secrecy of X and Y to nil. Thus, the secret key agreement rate is actually only 1 bit:

.. ipython::

   @doctest float
   In [9]: minimal_intrinsic_mutual_information(intrinsic_2, [[0], [1]], [2], bounds=(3,))
   Out[9]: 1.0