diff --git a/experiment/posttest.json b/experiment/posttest.json index 8efe323..95f100b 100644 --- a/experiment/posttest.json +++ b/experiment/posttest.json @@ -11,11 +11,11 @@ "e": "None" }, "explanations": { - "a": "Incorrect answer. The channel models we consider in this lab are all memoryless channels, i.e., noise in each transmitted symbol is independent of noise in other symbols.", - "b": "Correct Answer! The probability that $i$ bits are erased is $P(i)=\\binom{n}{i}0.3^i\\times 0.7^{n-i}$. If $i<j$, then $P(i)>P(j)$.", - "c": "Incorrect answer. There are no bit flips possible in the $BEC$ channel model.", - "d": "Incorrect answer. There is no guarantee that exactly $0.3n$ bits will be erased, simply because the probability of each bit getting erased is $0.3$.", - "e": "Well, one of them is the correct answer." + "a": "Incorrect. The channel models we consider in this lab are all memoryless channels, i.e., noise in each transmitted symbol is independent of noise in other symbols.", + "b": "Correct. The probability that $i$ bits are erased is $P(i)=\\binom{n}{i}0.3^i\\times 0.7^{n-i}$. If $i<j$, then $P(i)>P(j)$.", + "c": "Incorrect. There are no bit flips possible in the $BEC$ channel model.", + "d": "Incorrect. There is no guarantee that exactly $0.3n$ bits will be erased, simply because the probability of each bit getting erased is $0.3$.", + "e": "Incorrect. Well, one of them is the correct answer." }, "correctAnswer": "b", "difficulty": "beginner" @@ -32,7 +32,7 @@ "a": "Incorrect. There is a non-zero probability that this event can happen.", "b": "Incorrect. Observe that the question is giving exactly which of the bits are getting flipped.", "c": "Incorrect. Observe that the question is giving exactly which of the bits are getting flipped, and calculate the probability carefully. ", - "d": "Correct! We know that each bit is flipped with probability $0.9$, and any bit is not flipped with probability $0.1$, We have the answer $0.9^{n/2}\\times 0.1^{n/2}$ as the right option. 
Note that there is no binomial coefficient here, as the question specifies exactly which positions are getting flipped." + "d": "Correct. We know that each bit is flipped with probability $0.9$, and any bit is not flipped with probability $0.1$. We have the answer $0.9^{n/2}\\times 0.1^{n/2}$ as the right option. Note that there is no binomial coefficient here, as the question specifies exactly which positions are getting flipped." }, "correctAnswer": "d", "difficulty": "beginner" @@ -46,10 +46,10 @@ "d": "All three are equally probable" }, "explanations": { - "a": "Wrong answer. Note that the bit flip probability is $0.3$. Thus, fewer bit errors occur with higher probability than larger number of bit errors.", - "b": "Wrong answer. Note that the bit flip probability is $0.3$. Thus, fewer bit errors occur with higher probability than larger number of bit errors.", - "c": "Correct answer! Vectors (1) and (3) are two bit flips away from the input vector, whereas (2) is just a single bit flip away. Thus, the probabilities of getting either (1) (or (3)) from the given input sequence is $0.3^2\\times 0.7^2$, whereas (2) occurs with probability $0.3\\times 0.7^3$, given that the input sequence is $(0,1,0,1)$", - "d": "Wrong answer. Note that the bit flip probability is $0.3$. Thus, fewer bit errors occur with higher probability than larger number of bit errors." + "a": "Incorrect. Note that the bit flip probability is $0.3$. Thus, fewer bit errors occur with higher probability than larger number of bit errors.", + "b": "Incorrect. Note that the bit flip probability is $0.3$. Thus, fewer bit errors occur with higher probability than larger number of bit errors.", + "c": "Correct. Vectors (1) and (3) are two bit flips away from the input vector, whereas (2) is just a single bit flip away. 
Thus, the probability of getting either (1) or (3) from the given input sequence is $0.3^2\\times 0.7^2$, whereas (2) occurs with probability $0.3\\times 0.7^3$, given that the input sequence is $(0,1,0,1)$.", + "d": "Incorrect. Note that the bit flip probability is $0.3$. Thus, fewer bit errors occur with higher probability than larger number of bit errors." }, "correctAnswer": "c", "difficulty": "intermediate" @@ -63,13 +63,13 @@ "d": "We cannot answer this question." }, "explanations": { - "a": "Note that we have to compare the two probabilities $p(\\boldsymbol{y_1}|\\boldsymbol{x})$ and $p(\\boldsymbol{y_2}|\\boldsymbol{x})$, and find out which one is higher. By the expression of $p(\\boldsymbol{y_i}|\\boldsymbol{x})$, we have to find which value among $\\sum_{j=1}^3(y_{i,j}-x_j)^2$ for $i=1,2$ is smallest. Now, $\\sum_{j=1}^3(y_{1,j}-x_j)^2=0.3^2+0.1^2+0.3^2=0.19$ and $\\sum_{j=1}^3(y_{2,j}-x_j)^2=0.4^2+0.2^2+0.1^2=0.21$. Hence, $\\boldsymbol{y_1}$ is the more probable output sequence, given that the input was $(+1,-1,-1)$.", + "a": "Correct. Note that we have to compare the two probabilities $p(\\boldsymbol{y_1}|\\boldsymbol{x})$ and $p(\\boldsymbol{y_2}|\\boldsymbol{x})$, and find out which one is higher. By the expression of $p(\\boldsymbol{y_i}|\\boldsymbol{x})$, we have to find which value among $\\sum_{j=1}^3(y_{i,j}-x_j)^2$ for $i=1,2$ is smaller. Now, $\\sum_{j=1}^3(y_{1,j}-x_j)^2=0.3^2+0.1^2+0.3^2=0.19$ and $\\sum_{j=1}^3(y_{2,j}-x_j)^2=0.4^2+0.2^2+0.1^2=0.21$. Hence, $\\boldsymbol{y_1}$ is the more probable output sequence, given that the input was $(+1,-1,-1)$.", "b": "Incorrect. Note that we have to compare the two probabilities $p(\\boldsymbol{y_1}|\\boldsymbol{x})$ and $p(\\boldsymbol{y_2}|\\boldsymbol{x})$, and find out which one is higher. Use the Gaussian distribution to compute these values.", - "c": "Incorrect option. One of the two is more probable than the other. 
Use the Gaussian distribution to answer this question.", - "d": "Wrong option. We can indeed compute relevant probabilities and answer this question." + "c": "Incorrect. One of the two is more probable than the other. Use the Gaussian distribution to answer this question.", + "d": "Incorrect. We can indeed compute relevant probabilities and answer this question." }, "correctAnswer": "a", "difficulty": "intermediate" } ] -} \ No newline at end of file +} diff --git a/experiment/pretest.json b/experiment/pretest.json index 28544c9..c84aa8f 100644 --- a/experiment/pretest.json +++ b/experiment/pretest.json @@ -11,11 +11,11 @@ "e": "None" }, "explanations": { - "a": "Incorrect Answer. There are indeed $\\binom{10}{4}$ ways to switch on exactly $4$ of the $10$ lights. However, there may be an error in calculating the probability of achieving one of these configurations. Please review the approach to ascertain where the discrepancy lies.", - "b": "Correct Answer! Indeed, there are $\\binom{10}{4}$ ways to switch on exactly $4$ of the $10$ lights, each possible way is obtained with probability $p^4(1-p)^6$.", - "c": "Incorrect answer. In this analysis, initially consider the number of ways to select the possible configurations with four lights on and the remaining six lights off. Subsequently, compute the probability of each specific configuration. This calculation must also factor in the configurations for the lights that remain off.", - "d": "Incorrect answer. In this analysis, initially consider the number of ways to select the possible configurations with four lights on and the remaining six lights off. Subsequently, compute the probability of each specific configuration. This calculation must also factor in the configurations for the lights that remain off.", - "e": "Well, One of the provided responses accurately addresses the question." + "a": "Incorrect. There are indeed $\\binom{10}{4}$ ways to switch on exactly $4$ of the $10$ lights. 
However, there may be an error in calculating the probability of achieving one of these configurations. Please review the approach to ascertain where the discrepancy lies.", + "b": "Correct. Indeed, there are $\\binom{10}{4}$ ways to switch on exactly $4$ of the $10$ lights, and each such configuration occurs with probability $p^4(1-p)^6$.", + "c": "Incorrect. In this analysis, initially consider the number of ways to select the possible configurations with four lights on and the remaining six lights off. Subsequently, compute the probability of each specific configuration. This calculation must also factor in the configurations for the lights that remain off.", + "d": "Incorrect. In this analysis, initially consider the number of ways to select the possible configurations with four lights on and the remaining six lights off. Subsequently, compute the probability of each specific configuration. This calculation must also factor in the configurations for the lights that remain off.", + "e": "Incorrect. One of the provided responses accurately addresses the question." }, "correctAnswer": "b", "difficulty": "beginner" @@ -29,10 +29,10 @@ "d": "0.28" }, "explanations": { - "a": "Incorrect answer. Try using the total probability rule, i.e., $p_Y(y)=\\sum_{x\\in\\mathcal{X}}p_{Y|X}(y|x)p_X(x)$.", - "b": "Correct answer! $p_Y(5)$ is calculated as follows: $p_Y(5)=p_{Y|X}(5|0)p_X(0)+p_{Y|X}(5|a)p_X(a)=p_{Y|X}(5|0)p_X(0)+(1-p_{Y|X}(b|a)-p_{Y|X}(c|a))p_X(a)=0.1\\times 0.3+(1-0.2-0.4)\\times(1-0.3)=0.03+0.28=0.31$.", - "c": "Incorrect answer. Try using the total probability rule, i.e., $p_Y(y)=\\sum_{x\\in\\mathcal{X}}p_{Y|X}(y|x)p_X(x)$.", - "d": "Incorrect answer. Try using the total probability rule, i.e., $p_Y(y)=\\sum_{x\\in\\mathcal{X}}p_{Y|X}(y|x)p_X(x)$." + "a": "Incorrect. Try using the total probability rule, i.e., $p_Y(y)=\\sum_{x\\in\\mathcal{X}}p_{Y|X}(y|x)p_X(x)$.", + "b": "Correct. 
$p_Y(5)$ is calculated as follows: $p_Y(5)=p_{Y|X}(5|0)p_X(0)+p_{Y|X}(5|a)p_X(a)=p_{Y|X}(5|0)p_X(0)+(1-p_{Y|X}(b|a)-p_{Y|X}(c|a))p_X(a)=0.1\\times 0.3+(1-0.2-0.4)\\times(1-0.3)=0.03+0.28=0.31$.", + "c": "Incorrect. Try using the total probability rule, i.e., $p_Y(y)=\\sum_{x\\in\\mathcal{X}}p_{Y|X}(y|x)p_X(x)$.", + "d": "Incorrect. Try using the total probability rule, i.e., $p_Y(y)=\\sum_{x\\in\\mathcal{X}}p_{Y|X}(y|x)p_X(x)$." }, "correctAnswer": "b", "difficulty": "intermediate" @@ -46,10 +46,10 @@ "d": "$p_X(0)=0.3,\\hspace{0.2cm} p_X(1)=0.6$." }, "explanations": { - "a": "Incorrect answer. This option is a valid Binomial distribution, not a Bernoulli distribution.", - "b": "Incorrect answer. A Bernoulli random variable takes only two values.", - "c": "Correct answer! A Bernoulli random variable takes two possible values (often represented as $0$ or $1$), and the probabilities should sum to $1$.", - "d": "Incorrect answer! A Bernoulli random variable does take only two possible values (often represented as $0$ or $1$. However their probabilities should sum to $1$." + "a": "Incorrect. This option is a valid Binomial distribution, not a Bernoulli distribution.", + "b": "Incorrect. A Bernoulli random variable takes only two values.", + "c": "Correct. A Bernoulli random variable takes two possible values (often represented as $0$ or $1$), and the probabilities should sum to $1$.", + "d": "Incorrect. A Bernoulli random variable does take only two possible values (often represented as $0$ or $1$). However, their probabilities should sum to $1$." }, "correctAnswer": "c", "difficulty": "beginner" @@ -63,10 +63,10 @@ "d": "$\\frac{1}{\\sqrt{10\\pi}}e^{-\\frac{|x-5|}{10}}$" }, "explanations": { - "a": "Incorrect answer. A Gaussian random variable has the distribution $\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$, where the variance is $\\sigma^2$ and the mean is $\\mu$.", - "b": "Incorrect answer. 
A Gaussian random variable has the distribution $\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$, where the variance is $\\sigma^2$ and the mean is $\\mu$.", - "c": "Correct answer!", - "d": "Incorrect answer. A Gaussian random variable has the distribution $\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$, where the variance is $\\sigma^2$ and the mean is $\\mu$." + "a": "Incorrect. A Gaussian random variable has the distribution $\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$, where the variance is $\\sigma^2$ and the mean is $\\mu$.", + "b": "Incorrect. A Gaussian random variable has the distribution $\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$, where the variance is $\\sigma^2$ and the mean is $\\mu$.", + "c": "Correct.", + "d": "Incorrect. A Gaussian random variable has the distribution $\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$, where the variance is $\\sigma^2$ and the mean is $\\mu$." }, "correctAnswer": "c", "difficulty": "beginner" @@ -80,13 +80,13 @@ "d": "$\\frac{1}{(4\\pi)^2}e^{-\\frac{\\sum_{i=1}^4(x_i-8)^2}{4}}$." }, "explanations": { - "a": "Correct answer!", - "b": "Incorrect answer. The random variables are independent, so their joint probability density function is the product of their individual probability density functions, i.e., $p_{\\boldsymbol X}(x_1,x_2,x_3,x_4)=\\prod_{i=1}^4 p_{X_i}(x_i)$. Now, each $p_{X_i}(x_i)$ is a Gaussian probability distribution function with mean=$2$ and variance = $4$. Use this to arrive at the right answer.", - "c": "Incorrect answer. The random variables are independent, so their joint probability density function is the product of their individual probability density functions, i.e., $p_{\\boldsymbol X}(x_1,x_2,x_3,x_4)=\\prod_{i=1}^4 p_{X_i}(x_i)$. Now, each $p_{X_i}(x_i)$ is a Gaussian probability distribution function with mean=$2$ and variance = $4$. 
Use this to arrive at the right answer.", - "d": "Incorrect answer. The random variables are independent, so their joint probability density function is the product of their individual probability density functions, i.e., $p_{\\boldsymbol X}(x_1,x_2,x_3,x_4)=\\prod_{i=1}^4 p_{X_i}(x_i)$. Now, each $p_{X_i}(x_i)$ is a Gaussian probability distribution function with mean=$2$ and variance = $4$. Use this to arrive at the right answer." + "a": "Correct.", + "b": "Incorrect. The random variables are independent, so their joint probability density function is the product of their individual probability density functions, i.e., $p_{\\boldsymbol X}(x_1,x_2,x_3,x_4)=\\prod_{i=1}^4 p_{X_i}(x_i)$. Now, each $p_{X_i}(x_i)$ is a Gaussian probability density function with mean $2$ and variance $4$. Use this to arrive at the right answer.", + "c": "Incorrect. The random variables are independent, so their joint probability density function is the product of their individual probability density functions, i.e., $p_{\\boldsymbol X}(x_1,x_2,x_3,x_4)=\\prod_{i=1}^4 p_{X_i}(x_i)$. Now, each $p_{X_i}(x_i)$ is a Gaussian probability density function with mean $2$ and variance $4$. Use this to arrive at the right answer.", + "d": "Incorrect. The random variables are independent, so their joint probability density function is the product of their individual probability density functions, i.e., $p_{\\boldsymbol X}(x_1,x_2,x_3,x_4)=\\prod_{i=1}^4 p_{X_i}(x_i)$. Now, each $p_{X_i}(x_i)$ is a Gaussian probability density function with mean $2$ and variance $4$. Use this to arrive at the right answer." }, "correctAnswer": "a", "difficulty": "intermediate" } ] -} \ No newline at end of file +} diff --git a/experiment/references.md b/experiment/references.md index d94fad9..1b4feda 100644 --- a/experiment/references.md +++ b/experiment/references.md @@ -1,4 +1,2 @@ -# References - 1. Moon, Todd K. Error correction coding: mathematical methods and algorithms. 
John Wiley & Sons, 2020. (A nice textbook for coding theory, starting from the basics of communication systems to the codes used in practice today) -2. MacWilliams, Florence Jessie, and Neil James Alexander Sloane. The theory of error-correcting codes. Vol. 16. Elsevier, 1977. (A classic text for linear block codes) \ No newline at end of file +2. MacWilliams, Florence Jessie, and Neil James Alexander Sloane. The theory of error-correcting codes. Vol. 16. Elsevier, 1977. (A classic text for linear block codes) diff --git a/experiment/theory.md b/experiment/theory.md index 8a6b7bd..7dde660 100644 --- a/experiment/theory.md +++ b/experiment/theory.md @@ -1,45 +1,43 @@ -### What is a Communication Channel? +### What is a Communication Channel? -A communication channel is a medium through which communication happens. In this virtual lab, we are dealing with specifically those channels that accept binary-valued inputs. We call these channels as binary-input channels. For some binary channels, we write the possible set of inputs as the *logical bits* $\{0,1\}$. That is, at any time instant, we can send a logical "0" through the channel, or a logical "1". Equivalently, we may also write the binary alphabet in the *bipolar* form, which is written as $\{+1,-1\}$. Normally, we take the logical-bit to bipolar mapping as $0\to +1$ and $1\to -1$. - - -We generally use the notation $\cal X$ to denote the input alphabet of the channel. From the point of view of the receiver, the input to the channel is unknown, and hence is modelled as a random variable with some input probability distribution. We denote this input random variable as $X$. Similarly, the output of the channel, is a random variable denoted by $Y$. We assume that the output alphabet, the set of all values that the output can possibly take, is denoted by $\cal Y$. +A communication channel is a medium through which communication happens. 
In this virtual lab, we deal specifically with channels that accept binary-valued inputs; we call these binary-input channels. For some binary channels, we write the possible set of inputs as the _logical bits_ $\{0,1\}$. That is, at any time instant, we can send a logical "0" through the channel, or a logical "1". Equivalently, we may also write the binary alphabet in the _bipolar_ form $\{+1,-1\}$. Normally, we take the logical-bit to bipolar mapping as $0\to +1$ and $1\to -1$. +We generally use the notation $\cal X$ to denote the input alphabet of the channel. From the point of view of the receiver, the input to the channel is unknown, and hence is modelled as a random variable with some input probability distribution. We denote this input random variable as $X$. Similarly, the output of the channel is a random variable, denoted by $Y$. The output alphabet, the set of all values that the output can possibly take, is denoted by $\cal Y$. ### Types of Channels considered in this virtual lab -The problem of designing good communication systems arises precisely due to the existence of *noise* in communication channels. The noise in the communication channel is generally modelled via the conditional probabilities (of the output value, given the input value). We consider some three important types of communication channels (or in other words, noise models) in this virtual lab. +The problem of designing good communication systems arises precisely due to the existence of _noise_ in communication channels. The noise in the communication channel is generally modelled via the conditional probabilities (of the output value, given the input value). We consider three important types of communication channels (or in other words, noise models) in this virtual lab. -1. The **Binary Erasure Channel**: The noise in this channel is modelled as a *bit-erasure*, which denotes the transmitted bit was erased or lost. 
Formally, in this channel, the input alphabet is the set of logical bits, i.e., ${\cal X}=\{0,1\}$ and the output alphabet is the set of logical bits along with the erasure symbol $?$, i.e., ${\cal Y}=\{0,1,?\}$. The erasure symbol $?$ denotes that the input symbol was *erased* during the process of transmission through the channel. The binary erasure channel, denoted formally as $BEC(\epsilon)$, has the property that the bit that is transmitted is erased with probability $\epsilon$. Here, $\epsilon$ denotes the *erasure probability*, and we assume $\epsilon$ is a real number between $0$ and $1$, i.e., $\epsilon\in(0,1)$. +1. The **Binary Erasure Channel**: The noise in this channel is modelled as a _bit-erasure_, denoting that the transmitted bit was erased or lost. Formally, in this channel, the input alphabet is the set of logical bits, i.e., ${\cal X}=\{0,1\}$, and the output alphabet is the set of logical bits along with the erasure symbol $?$, i.e., ${\cal Y}=\{0,1,?\}$. The erasure symbol $?$ denotes that the input symbol was _erased_ during transmission through the channel. The binary erasure channel, denoted formally as $BEC(\epsilon)$, has the property that a transmitted bit is erased with probability $\epsilon$. Here, $\epsilon$ denotes the _erasure probability_, and we assume $\epsilon$ is a real number between $0$ and $1$, i.e., $\epsilon\in(0,1)$. ---
- Binary Erasure Channel + Binary Erasure Channel

Depiction of a Binary Erasure Channel. The left side denotes the possible inputs {0,1} and the right denotes the possible outputs {0,1,?}. The arrows indicate possible transitions when the bit passes through the channel. The values ϵ and (1-ϵ) marked on the respective arrows indicate the probability of such a transition.

--- -2. The **Binary Symmetric Channel**: In this channel, the input alphabet is the set of logical bits, i.e., ${\cal X}=\{0,1\}$ and the output alphabet is the set of logical bits i.e., ${\cal Y}=\{0,1\}$. The noise of this channel is characterized by bit-flips (i.e., a transmitted $0$ bit is received as a $1$, or vice-versa). In the binary symmetric channel denoted by $BSC(p)$, we assume that bit-flip happens with some probability $p$, where $p$ is a real number and $p\in(0,1)$. +2. The **Binary Symmetric Channel**: In this channel, the input alphabet is the set of logical bits, i.e., ${\cal X}=\{0,1\}$, and the output alphabet is also the set of logical bits, i.e., ${\cal Y}=\{0,1\}$. The noise of this channel is characterized by bit-flips (i.e., a transmitted $0$ bit is received as a $1$, or vice-versa). In the binary symmetric channel, denoted by $BSC(p)$, we assume that a bit-flip happens with probability $p$, where $p$ is a real number and $p\in(0,1)$. ---
- Binary Symmetric Channel + Binary Symmetric Channel

Depiction of a Binary Symmetric Channel. The left side denotes the possible inputs {0,1} and the right denotes the possible outputs {0,1}. The arrows indicate possible transitions when the bit passes through the channel. The values p and (1-p) marked on the respective arrows indicate the probability of such a transition.

--- -3. The **Additive White Gaussian Noise Channel (AWGN)**: The AWGN channel, accepts a real number as an input, and adds to it a noise random variable $Z$ that is distributed independently according to a Gaussian distribution ${\cal N}(0,N_0/2)$, with zero-mean and variance $N_0/2$. Thus, the input alphabet and the output alphabet are both ${\cal X}=\mathbb{R}$. The relationship between the input $X$ and the output $Y$ is then given as +3. The **Additive White Gaussian Noise Channel (AWGN)**: The AWGN channel accepts a real number as input and adds to it a noise random variable $Z$, distributed independently according to a Gaussian distribution ${\cal N}(0,N_0/2)$ with zero mean and variance $N_0/2$. Thus, the input and output alphabets are both the set of reals, i.e., ${\cal X}={\cal Y}=\mathbb{R}$. The relationship between the input $X$ and the output $Y$ is then given as -$$Y=X+Z.$$ +$$Y=X+Z.$$ ---
- Gaussian Channel + Gaussian Channel

Depiction of an Additive White Gaussian Noise Channel. The left side denotes the possible input X and the right denotes the possible output Y. The arrows indicate the noise Z being added to the input X to give the output Y.

@@ -47,7 +45,7 @@ $$Y=X+Z.$$ ### Conditional Distribution Associated with the Communication Channel -We can also describe the channels above using the conditional distribution of the output random variable $Y$ given by the input random variable $X$. Specifically, we have the following. +We can also describe the channels above using the conditional distribution of the output random variable $Y$ given the input random variable $X$. Specifically, we have the following. 1. **Binary Erasure Channel**: The conditional distribution of this channel $BEC(\epsilon)$ is given as follows. @@ -55,7 +53,7 @@ $$ p_{Y|X}(y|x)= \begin{cases} 1-\epsilon&\text{if}~ y=x, \forall x \in\{0,1\},\\ -\epsilon & \text{if}~y=? . +\epsilon & \text{if}~y=?. \end{cases} $$ @@ -65,22 +63,23 @@ $$ p_{Y|X}(y|x)= \begin{cases} 1-p&\text{if}~ y=x, \forall x \in\{0,1\},\\ -p & \text{if}~x\neq y . +p & \text{if}~x\neq y. \end{cases} $$ 3. **AWGN Channel**: For this channel, we have -$$ -p_{Y|X}(y|x)=\frac{1}{\sqrt{\pi N_0}}e^{\frac{-(y-x)^2}{N_0}}, \forall x,y \in \mathbb{R}. -$$ + $$ + p_{Y|X}(y|x)=\frac{1}{\sqrt{\pi N_0}}e^{\frac{-(y-x)^2}{N_0}}, \forall x,y \in \mathbb{R}. + $$ ### The Memoryless Property of the Channels -We assume that the three channels we have considered in this virtual lab have the *memoryless* property and exist *without feedback*. To be precise, if we transmit a $n$-length sequence of bits denoted by $(x_1,\ldots,x_n)$ through any of these channels, the output is a sequence of bits $(y_1,\ldots,y_n)$, with probability as follows. +We assume that the three channels we have considered in this virtual lab have the _memoryless_ property and are used _without feedback_. To be precise, if we transmit an $n$-length sequence of symbols denoted by $(x_1,\ldots,x_n)$ through any of these channels, the output is a sequence $(y_1,\ldots,y_n)$, with probability as follows. 
$$p((y_1,\ldots,y_n)|(x_1,\ldots,x_n))=p(y_1|x_1)\cdots p(y_n|x_n)=\Pi_{i=1}^n p(y_i|x_i).$$ -The above property is a naturally expected property. For instance, consider the channel $BEC(\epsilon)$. The probability of receiving $(?,0,?,1)$ when transmitting $(1,0,0,1)$ is given by +The above property is a naturally expected property. For instance, consider the channel $BEC(\epsilon)$. The probability of receiving $(?,0,?,1)$ when transmitting $(1,0,0,1)$ is given by + $$ \begin{aligned} p((?,0,?,1)|(1,0,0,1))&=p(?|1)p(0|0)p(?|0)p(1|1)\\ @@ -89,16 +88,19 @@ p((?,0,?,1)|(1,0,0,1))&=p(?|1)p(0|0)p(?|0)p(1|1)\\ \end{aligned} $$ -More generally, we can say the following. Let $\bm{x}\in \mathbb{F}_2^n$ be an $n$-length binary vector. A vector $\bm{y} \in \{0,1,?\}^n$ is said to be *compatible* with $\bm{x}$ if $\bm{y}$ and $\bm{x}$ agree (i.e., are equal) in all positions which are unerased (not equal to "?" symbol) in $\bm{y}$. Otherwise, they are *not compatible*. For instance, the vectors $(0,1,0,1)$ and $(?,1,0,?)$ are compatible. However, the vectors $(0,1,0,1)$ and $(?,1,1,?)$ are not compatible, since the third bits of the two vectors are both unerased and not equal. +More generally, we can say the following. Let $\bm{x}\in \mathbb{F}_2^n$ be an $n$-length binary vector. A vector $\bm{y} \in \{0,1,?\}^n$ is said to be _compatible_ with $\bm{x}$ if $\bm{y}$ and $\bm{x}$ agree (i.e., are equal) in all positions which are unerased (not equal to "?" symbol) in $\bm{y}$. Otherwise, they are _not compatible_. For instance, the vectors $(0,1,0,1)$ and $(?,1,0,?)$ are compatible. However, the vectors $(0,1,0,1)$ and $(?,1,1,?)$ are not compatible, since the third bits of the two vectors are both unerased and not equal. - For any vector $\bm{y}\in\{0,1,\epsilon\}^n$, let $w_e(\bm{y})$ denote the number of erased symbols in $\bm{y}$. Then for any $\bm{x}\in\mathbb{F}_2^n$, we have, in the memoryless $BEC(\epsilon)$ channel, the following to be true. 
-$$p(\bm{y}|\bm{x})=\begin{cases} +For any vector $\bm{y}\in\{0,1,?\}^n$, let $w_e(\bm{y})$ denote the number of erased symbols in $\bm{y}$. Then, for any $\bm{x}\in\mathbb{F}_2^n$, the following holds in the memoryless $BEC(\epsilon)$ channel. + +$$ +p(\bm{y}|\bm{x})=\begin{cases} \epsilon^{w_e(\bm{y})}(1-\epsilon)^{n-w_e(\bm{y})} & \text{if}~\bm{x} ~\text{and}~\bm{y}~\text{are compatible}\\ -0 & \text{otherwise}. -\end{cases}$$ +0 & \text{otherwise}. +\end{cases} +$$ Turning our focus to the $BSC(p)$, we have the following. For any $\bm{x},\bm{y}\in\mathbb{F}_2^n$, let $d(\bm{x},\bm{y})$ denote the Hamming distance (number of positions where $\bm{x}$ and $\bm{y}$ have distinct values). For example, $d((1,0,1,0),(0,0,1,1))=2$ as the two vectors are distinct in the first and the fourth locations. Then, for the $BSC(p)$ memoryless channel, we have the following. $$p(\bm{y}|\bm{x})=p^{d(\bm{x},\bm{y})}(1-p)^{n-d(\bm{x},\bm{y})}.$$ -For the memoryless AWGN channel, we have, for any two vectors $\bm{x},\bm{y}\in\mathbb{R}^n$, -$$p(\bm{y}|\bm{x})=\frac{1}{(\pi N_0)^{n/2}}e^{-\frac{(||\bm{y}-\bm{x}||^2)}{N_0}}.$$ \ No newline at end of file +For the memoryless AWGN channel, we have, for any two vectors $\bm{x},\bm{y}\in\mathbb{R}^n$, +$$p(\bm{y}|\bm{x})=\frac{1}{(\pi N_0)^{n/2}}e^{-\frac{||\bm{y}-\bm{x}||^2}{N_0}}.$$
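As a quick end-to-end illustration of the channel models and sequence likelihoods $p(\bm{y}|\bm{x})$ that the `theory.md` changes above describe, the following Python sketch (illustrative only; it is not part of the lab code base, and all function names are our own) simulates the three memoryless channels and evaluates the corresponding likelihood formulas:

```python
# Illustrative sketch only -- not part of the lab code base.
import math
import random

ERASURE = "?"  # erasure symbol of the BEC

def bec(bits, eps):
    """Pass bits through BEC(eps): each bit is erased with probability eps."""
    return [ERASURE if random.random() < eps else b for b in bits]

def bsc(bits, p):
    """Pass bits through BSC(p): each bit is flipped with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def awgn(xs, n0):
    """Pass real values through the AWGN channel with noise variance N0/2."""
    return [x + random.gauss(0.0, math.sqrt(n0 / 2)) for x in xs]

def bec_likelihood(y, x, eps):
    """p(y|x) for the memoryless BEC: eps^{w_e} (1-eps)^{n-w_e} if y is
    compatible with x, and 0 otherwise."""
    if any(yi != ERASURE and yi != xi for yi, xi in zip(y, x)):
        return 0.0  # y and x are not compatible
    w_e = sum(1 for yi in y if yi == ERASURE)
    return eps ** w_e * (1 - eps) ** (len(y) - w_e)

def bsc_likelihood(y, x, p):
    """p(y|x) for the memoryless BSC: p^d (1-p)^{n-d}, d = Hamming distance."""
    d = sum(1 for yi, xi in zip(y, x) if yi != xi)
    return p ** d * (1 - p) ** (len(y) - d)

def awgn_likelihood(y, x, n0):
    """p(y|x) for the memoryless AWGN channel (a density, not a probability)."""
    sq = sum((yi - xi) ** 2 for yi, xi in zip(y, x))
    return (math.pi * n0) ** (-len(y) / 2) * math.exp(-sq / n0)

# Worked BEC example from theory.md: p((?,0,?,1) | (1,0,0,1)) = eps^2 (1-eps)^2
eps = 0.3
print(bec_likelihood([ERASURE, 0, ERASURE, 1], [1, 0, 0, 1], eps))
```

For instance, `bec_likelihood([ERASURE, 0, ERASURE, 1], [1, 0, 0, 1], 0.3)` evaluates to $0.3^2\times 0.7^2$, matching the worked $BEC$ example, while an incompatible pair such as $(?,1,1,?)$ against $(0,1,0,1)$ gets likelihood $0$.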