% Error exponents for compound BSC
% Aditya Mahajan
% February 17, 2010

This article contains the R source code used in the following paper:

\startframedtext
  Aditya Mahajan and Sekhar Tatikonda, \quotation{Opportunistic capacity
  and error exponent regions for compound channels with feedback},
  submitted to IEEE Transactions on Information Theory, 2010.
\stopframedtext

The code is written in R using the Sweave format. To process the file, run
Sweave("bsc-code.Rnw").

\SweaveOpts{keep.source=TRUE}

<<>>=
options(continue="> ")
@

We consider a compound channel consisting of two BSCs with complementary
crossover probabilities, $p$ and $(1-p)$, where $0 < p < 1/2$ and $p$ is
known to the transmitter and the receiver. We denote this compound channel by
$$\ALPHABET Q_p \DEFINED \{\BSC_p, \BSC_{1-p}\}$$
where $\BSC_p$ denotes a binary symmetric channel with crossover
probability $p$. For convenience, we index all variables by $p$ and
$(1-p)$ rather than by $1$ and $2$.

For a binary symmetric channel, the capacity and the $B_Q$ term of the
Burnashev exponent are given by
$$C_p = C_{1-p} = 1 - h(p)$$

<<>>=
bsc.C <- function(p) {
  return (1 - binary.h(p))
}
@

and
$$B_p = B_{1-p} = D(p \| 1-p)$$

<<>>=
bsc.B <- function(p) {
  return (binary.D(p, 1-p))
}
@

where $h(p) = -p \log p - (1-p) \log (1-p)$ is the binary entropy function

<<>>=
binary.h <- function(p) {
  return ( -p*log2(p) - (1-p)*log2(1-p) )
}
@

and $D(p\|q) = p \log (p/q) + (1-p) \log ( (1-p)/(1-q) )$ is the binary
Kullback-Leibler divergence.

<<>>=
binary.D <- function(x,y) {
  return ( x*log2(x/y) + (1-x)*log2((1-x)/(1-y)) )
}
@

We choose the all-zero sequence as a training sequence and estimate the
channel based on the type of the output sequence. If the empirical
frequency of ones in the output is less than $q$, where $p < q < 1-p$, the
channel is estimated as $\BSC_p$; otherwise the channel is estimated as
$\BSC_{1-p}$.
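As a quick sanity check of these definitions (this check is not part of the paper's code; it repeats the functions above only so that it can be run on its own), the capacity and Burnashev coefficient of $\BSC_{0.1}$ can be evaluated directly, and both should be invariant under $p \mapsto 1-p$:

```r
# Sanity check for the functions defined above (illustrative, not from the paper).
binary.h <- function(p) -p*log2(p) - (1-p)*log2(1-p)
binary.D <- function(x,y) x*log2(x/y) + (1-x)*log2((1-x)/(1-y))
bsc.C    <- function(p) 1 - binary.h(p)
bsc.B    <- function(p) binary.D(p, 1-p)

bsc.C(0.1)   # capacity of BSC(0.1), approximately 0.531 bits per use
bsc.B(0.1)   # Burnashev coefficient D(0.1 || 0.9), approximately 2.536
```

By the symmetry of the compound channel, `bsc.C` and `bsc.B` return the same value for `p` and `1-p`, which is why the text writes $C_p = C_{1-p}$ and $B_p = B_{1-p}$.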
For this class of channel estimation rules, the probability of estimation
error is bounded by the tail probability of a sum of independent random
variables. From Hoeffding's inequality, the exponents of the estimation
errors are given by
$$T_p = D(q \| p), \quad T_{1-p} = D(q \| 1-p).$$

Suppose we want to communicate at rates $(R_p, R_{1-p})$, with $R_p < C_p$
and $R_{1-p} < C_{1-p}$, using the coding scheme of the paper. Let $q_m$
and $q_c$ be the estimation thresholds for the message and the control
modes. The lower bound of Proposition 2 simplifies to
$$E_p ≥ \frac{D(q_c \| p) D(p \| 1-p)} {D(q_c\|p) + D(p \| 1-p)} (1-γ_p)$$
and
$$E_{1-p} ≥ \frac{D(q_c \| 1-p) D(p \| 1-p)} {D(q_c\|1-p) + D(p \| 1-p)} (1-γ_{1-p})$$
where $γ_p = R_p/C_p$ and $γ_{1-p} = R_{1-p}/C_{1-p}$.

Now we want to choose $q_c$ such that $E_p = E_{1-p}$, which is equivalent
to choosing $q_c$ such that
$$φ(q_c,p) = \frac{ (1-γ_p) }{ (1-γ_{1-p}) }$$
where
$$φ(q,p) = \frac{1 + D(p \| 1-p)/D(q\|p)}{1 + D(p \| 1-p)/D(q\|1-p) }.$$

<<φ function>>=
compoundBsc.phi <- function(p,q) {
  num = 1 + binary.D(p, 1-p) / binary.D(q, p)
  den = 1 + binary.D(p, 1-p) / binary.D(q, 1-p)
  return (num/den)
}
@

This means that $q_c=0.5$, which maximally distinguishes between $\BSC_p$
and $\BSC_{1-p}$, is optimal only when $γ_p = γ_{1-p}$. For other values
of $γ_p$ and $γ_{1-p}$, we need to invert $φ(q_c,p)$ to determine the
value of $q_c$.
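The claim that $q_c = 0.5$ corresponds to $γ_p = γ_{1-p}$ can be checked numerically: since $D(0.5\|p) = D(0.5\|1-p)$, we get $φ(0.5, p) = 1$, and $φ$ is strictly decreasing in $q$ on $(p, 1-p)$, so the equation $φ(q_c,p) = (1-γ_p)/(1-γ_{1-p})$ has a unique solution. A small check (repeating the definitions above so it runs on its own):

```r
# Numerical check that phi(0.5, p) = 1 and that phi is monotone in q
# (illustrative check, not part of the paper's code).
binary.D <- function(x,y) x*log2(x/y) + (1-x)*log2((1-x)/(1-y))
compoundBsc.phi <- function(p,q) {
  num = 1 + binary.D(p, 1-p) / binary.D(q, p)
  den = 1 + binary.D(p, 1-p) / binary.D(q, 1-p)
  return (num/den)
}

compoundBsc.phi(0.1, 0.5)   # = 1: the balanced threshold
compoundBsc.phi(0.1, 0.3)   # > 1: favours detecting BSC_p
compoundBsc.phi(0.1, 0.7)   # < 1: favours detecting BSC_{1-p}
```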
<<>>=
compoundBsc.E <- function(p, γ_1, γ_2) {
  # if (0 > γ_1 || 0 > γ_2 || 1 < γ_1 || 1 < γ_2)
  #   stop("γ out of bound")

  exponent <- function (p,q,γ) {
    num = binary.D(q,p) * binary.D(p, 1-p) * (1 - γ)
    den = binary.D(q,p) + binary.D(p, 1-p)
    return (num/den)
  }

  findQ <- function (q) {
    return (exponent(p,q,γ_1) - exponent(1-p,q,γ_2))
  }

  eps = 10e-3
  q   = uniroot(findQ, upper = 1-p - eps, lower = p + eps)$root

  # if (abs (exponent(p,q,γ_1) - exponent(1-p,q,γ_2)) > eps )
  #   warning(sprintf("q not within %f accuracy for p=%f, γ_1=%f, γ_2=%f",
  #                   eps, p, γ_1, γ_2))

  return (list(exp=exponent(p,q,γ_1), q=q))
}
@

The code below gives the plot of $φ(q_c, 0.1)$ for different values of $q_c$.
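The plotting chunk is truncated in the source at this point. A minimal reconstruction consistent with the surrounding text (plotting $φ(q_c, 0.1)$ against $q_c$ over the admissible range $p < q_c < 1-p$; the exact styling of the original figure is unknown, so this is only a sketch) might be:

```r
# Sketch of the truncated plotting chunk: phi(q_c, 0.1) as a function of q_c.
# The definitions are repeated here so the sketch runs on its own.
binary.D <- function(x,y) x*log2(x/y) + (1-x)*log2((1-x)/(1-y))
compoundBsc.phi <- function(p,q) {
  num = 1 + binary.D(p, 1-p) / binary.D(q, p)
  den = 1 + binary.D(p, 1-p) / binary.D(q, 1-p)
  return (num/den)
}

p <- 0.1
q <- seq(p + 0.01, 1 - p - 0.01, by = 0.01)
plot(q, compoundBsc.phi(p, q), type = "l",
     xlab = "q_c", ylab = "phi(q_c, 0.1)")
```

The curve decreases from large values near $q_c = p$ to values near zero as $q_c$ approaches $1-p$, crossing $1$ at $q_c = 0.5$, which is how the inversion in `compoundBsc.E` finds a unique threshold.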