Added possible solution for 2)c)ii)

commit 35919179fb5ac80aaca259129b4e3d94bc8d4dfc 1 parent 88bf0c2
@Ymbirtt authored
Showing with 32 additions and 35 deletions.
  1. +32 −35 AP2.tex
AP2.tex
@@ -59,8 +59,7 @@
Like, seriously, loads of them. No I do not know where they are. These attempts
at solutions are in no way guaranteed to be accurate. Please do not read this
-and assume that you're totally correct/incorrect just because you agree/disagree
-with the stuff that's written in the big shiny \LaTeX \,document. Rather, think
+and assume that you're totally correct/incorrect just because you agree/disagree with the stuff that's written in the big shiny \LaTeX \,document. Rather, think
as you read through it. Follow my logic, and if it doesn't make sense to you
then ask me. Email me\footnotetext[1]{\url{ymbirtt@gmail.com}}\footnotemark[1],
grab me on Steam, Skype, League of Legends\footnotetext[2]{Ymbirtt on
@@ -76,25 +75,23 @@
me\footnotemark[1], or for bonus points send me correct \LaTeX, that I can just
copy paste in, or for super bonus points, fork me on
github\footnotetext[4]{\url{https://github.com/Ymbirtt/Applied-Probability-II/blob/master/AP2.tex}}\footnotemark[4]
-and push a revision at me. Basic rules of Wikipedia apply, namely be bold, don't
-be a dick, and please be a massive dick so I can ban you.
+and push a revision at me. Basic rules of Wikipedia apply, namely be bold, don't be a dick, and please be a massive dick so I can ban you.
This document does have a habit of just stating answers without much
explanation. For my contributions, I'm effectively just typing up the solutions
that I have written on paper (because \LaTeX\ sucks), which were written whilst
-the exam paper was in front of me. As such a lot of this won't make any sense at
-all if you don't have the relevant exam paper in front of you. Also, the links
+the exam paper was in front of me. As such a lot of this won't make any sense at all if you don't have the relevant exam paper in front of you. Also, the links
won't work in the dropbox version. Sorry about that. If you're REALLY desperate
to have clickable links, just download it. I've also spotted a minor issue with
= signs on the lab machines in the Merchant Venturers' Building where pdfs just
don't display properly. If $=$ and $\neq$ look like the same symbol to you,
you're probably also going to have these issues, which can be fixed by just
-downloading it. Seems to be something specific to Chrome running on certain UNIX
-systems, so I'm not going to go mad trying to fix it for everyone.
+downloading it. Seems to be something specific to Chrome running on certain UNIX systems, so I'm not going to go mad trying to fix it for everyone.
If you want to make my life phenomenally easy, then please use Github for any
changes you want to make. The code file\footnotemark[4] can be edited in your
-browser by clicking the ``fork and edit this file" button. Make your changes, explain your changes in the ``commit
+browser by clicking the ``fork and edit this file" button. Make your changes,
+explain your changes in the ``commit
message" box at the bottom, press ``propose file change" in the bottom right,
then send a message filled with expletives to me about how I should totally
agree with your changes. If you just want to test it out, then feel free to
@@ -220,13 +217,12 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
&\implies \frac{p_n(t+h)-p_n(t)}{h} = -\lambda(p_n(t)-p_{n-1}(t))\\
&\implies p_n'(t) = -\lambda(p_n(t)-p_{n-1}(t))
\end{align*}
-Where $N(t) \sim Po(\lambda t)$, $\pr(N(t)=n) = \frac{e^{-\lambda t}(\lambda
+Where $N(t) \sim Po(\lambda t)$, $\pr(N(t)=n) = \frac{e^{-\lambda t}(\lambda
t)^n}{n!}$, so
\begin{align*}
p_n'(t) &= \frac{d}{dt} \frac{e^{-\lambda t}(\lambda t)^n}{n!}\\
-&= \frac{-\lambda e^{-\lambda t}(\lambda t)^n + n \lambda e^{-\lambda t}(\lambda
-t)^{n-1}}{n!}\\
-&= -\lambda \left( \frac{e^{-\lambda t}(\lambda t)^n}{n!} - \frac{e^{-\lambda
+&= \frac{-\lambda e^{-\lambda t}(\lambda t)^n + n \lambda e^{-\lambda t}(\lambda t)^{n-1}}{n!}\\
+&= -\lambda \left( \frac{e^{-\lambda t}(\lambda t)^n}{n!} - \frac{e^{-\lambda
t}(\lambda t)^{n-1}}{(n-1)!}\right)\\
&= -\lambda(p_n(t) - p_{n-1}(t))
\end{align*}
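A minimal sketch of the $n=0$ boundary case, under the convention $p_{-1}(t)=0$: zero arrivals by time $t+h$ means zero arrivals by time $t$ and none in $(t,t+h]$, so
\begin{align*}
p_0(t+h) &= (1-\lambda h + o(h))p_0(t)\\
\implies p_0'(t) &= -\lambda p_0(t),
\end{align*}
which $p_0(t) = e^{-\lambda t}$ satisfies, matching the Poisson pmf at $n=0$.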
@@ -248,11 +244,17 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
h+o(h))(p)p_{n-1}(t)\\
&= p_n(t)-\mu h p p_n(t) + \mu h p p_{n-1}(t) +o(h)
\end{align*}
-If we let $\lambda = \mu p$, exactly the same argument as before holds, so $N(t)
-\sim Po(\mu p t)$, and $\mathbb{E}(N(t)) = \mu p t$ by properties of the
+If we let $\lambda = \mu p$, exactly the same argument as before holds, so $N(t) \sim Po(\mu p t)$, and $\mathbb{E}(N(t)) = \mu p t$ by properties of the
Poisson.
\item
-I actually have no idea how to approach this. Any takers?
+I'm really not sure on this answer. Correct it if you find anything wrong.
+
+$\pr (D(t) = k+m | N(t) = k) = \pr(D(t)-N(t)=m | N(t)=k)$ is the probability
+that exactly $m$ decays go undetected given we have detected $k$ decays. The
+detection of decays is independent of all other variables, so the non-detection
+of $m$ decays is given by $(1-p)^m$. $\pr(D(t)-N(t)=m)=(1-p)^m$ by the same
+logic, so $\pr(D(t)-N(t)=m)=\pr(D(t)-N(t)=m|N(t)=k)$, and hence $D(t)-N(t)$ is
+independent of $N(t)$.
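A hedged sketch of a more direct argument, assuming the setup above: decays form a Poisson process of rate $\mu$, so $D(t)\sim Po(\mu t)$, each decay is detected independently with probability $p$, and $N(t)$ counts the detections. Conditioning on the total and splitting it,
\begin{align*}
\pr(N(t)=k, D(t)-N(t)=m) &= \pr(D(t)=k+m)\binom{k+m}{k}p^k(1-p)^m\\
&= \frac{e^{-\mu t}(\mu t)^{k+m}}{(k+m)!}\cdot\frac{(k+m)!}{k!\,m!}\,p^k(1-p)^m\\
&= \frac{e^{-\mu p t}(\mu p t)^k}{k!}\cdot\frac{e^{-\mu(1-p)t}(\mu(1-p)t)^m}{m!},
\end{align*}
so the joint pmf factorises into a $Po(\mu p t)$ term in $k$ and a $Po(\mu(1-p)t)$ term in $m$, which gives independence of $N(t)$ and $D(t)-N(t)$ directly.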
\end{enumerate}
\end{enumerate}
\clearpage
@@ -288,8 +290,7 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
solution}\\
&\implies \theta = 1 \mbox { twice}
\end{align*}
-This gives us that $r_i = A\theta^i +Bi\theta^i = A+Bi$. Our values for $r_{-a}$
-and $r_b$ give us that
+This gives us that $r_i = A\theta^i +Bi\theta^i = A+Bi$. Our values for $r_{-a}$ and $r_b$ give us that
\begin{align*}
A-aB &= 1\\
A+bB &= 0\\
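A quick sketch of solving this pair, consistent with the $r_1 = \frac{b-1}{a+b}$ used later: subtracting the first equation from the second gives $(a+b)B = -1$, so $B = -\frac{1}{a+b}$, $A = \frac{b}{a+b}$, and hence $r_i = \frac{b-i}{a+b}$.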
@@ -302,11 +303,9 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
\item
Let $s_k = \pr (X_n = 0 \wedge \forall m<n\ X_m \neq 0 \wedge \forall 1<l<n\ X_l
\neq K, -K | X_0 = k)$. $s_k$ is the probability that we return to 0 before
-hitting either $K$ or $-K$. We want to find $s_0$. From similar logic to before,
-we have that $s_0 = \frac{s_{-1}+s_1}{2}$.
+hitting either $K$ or $-K$. We want to find $s_0$. From similar logic to before, we have that $s_0 = \frac{s_{-1}+s_1}{2}$.
-If the process starts by moving upwards, so $Y_0 = 1$, we have a process similar
-to before, with $a=0,b=K, X_0 =1$, so
+If the process starts by moving upwards, so $Y_0 = 1$, we have a process similar to before, with $a=0,b=K, X_0 =1$, so
\begin{align*}
s_1 = r_1 &= \frac{b-1}{a+b}\\
&= \frac{K-1}{K}
@@ -367,8 +366,10 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
\begin{align*}
\mathbb{E}(\sum^T_{r=1}Y_r) &= \mathbb{E}(\mathbb{E}(\sum^T_{r=1}Y_r|T))\\
&= \mathbb{E}(\mathbb{E}(Y_1+Y_2+\dots+Y_T|T))\\
-&= \mathbb{E}(\mathbb{E}(TY_1|T)) \quad \mbox{Since the $Y_i$s are all iidrvs}\\&= \mathbb{E}(TY_1)\\
-&= \mathbb{E}(T)\mathbb{E}(Y_1) \quad \mbox{Since $T$ and $Y_1$ are independent}\end{align*}
+&= \mathbb{E}(\mathbb{E}(TY_1|T)) \quad \mbox{Since the $Y_i$s are all iid rvs}\\
+&= \mathbb{E}(TY_1)\\
+&= \mathbb{E}(T)\mathbb{E}(Y_1) \quad \mbox{Since $T$ and $Y_1$ are independent}
+\end{align*}
\item
At time $T$, $S_T = -4$, so $\mathbb{E}(S_T) = -4$. We also have that
$\mathbb{E}(Y_1) = \frac{-4}{7}$, so
@@ -378,9 +379,10 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
\implies \mathbb{E}(T) = 7
\end{align*}
\end{enumerate}
+
+\item
\begin{enumerate}
\item
-
\item Again, not sure on this one. I note that $\frac{240}{255} =
\frac{2^8-2^4}{2^8-2^0}$, which could be handy since we're dealing with powers
of 2, but beyond that I don't know how to approach it. Contribute! Be bold!
@@ -392,8 +394,7 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
\begin{enumerate}
\item
\begin{enumerate}
-\item A state, $j$, is recurrent if the probability of travelling from state $j$
-to $j$ in some length of time is $1$. $j$ is positive recurrent if $\mu_{jj}$,
+\item A state, $j$, is recurrent if the probability of travelling from state $j$ to $j$ in some length of time is $1$. $j$ is positive recurrent if $\mu_{jj}$,
the expected time between returns, is finite.
\item A stationary distribution is a vector, $\underline{\pi}$, satisfying:
@@ -435,8 +436,7 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
The chain consists of a single, irreducible, finite closed communicating class,
so all states must be positive recurrent.
\item
-We can see that there are exactly two cycles for state 1, namely $(1,2,4,1)$ and
-$(1,3,5,1)$. Both of these cycles are of order 3, so
+We can see that there are exactly two cycles for state 1, namely $(1,2,4,1)$ and $(1,3,5,1)$. Both of these cycles are of order 3, so
\begin{align*}
d_1 &= hcf\{3,6,9,12,\dots\}\\
&= 3
@@ -464,8 +464,7 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
\pi_4 &= \alpha\pi_1 \\
\pi_5 &= (1-\alpha)\pi_1
\end{align*}
-We also require that $\sum\limits^5_{i=1}\pi_i=1$, so $3\pi_1 = 1$, and $\pi_1 =
-\frac{1}{3}$.
+We also require that $\sum\limits^5_{i=1}\pi_i=1$, so $3\pi_1 = 1$, and $\pi_1 = \frac{1}{3}$.
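Expanding the sum with the relations above, in case the $3\pi_1$ step isn't obvious:
$$
\sum^5_{i=1}\pi_i = \pi_1\left(1 + \alpha + (1-\alpha) + \alpha + (1-\alpha)\right) = 3\pi_1.
$$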
$$
\underline{\pi} =
\left(\frac{1}{3},\frac{\alpha}{3},\frac{1-\alpha}{3},\frac{\alpha}{3},\frac{1-\alpha}{3}\right)
@@ -488,8 +487,7 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
\end{align*}
So $\underline{\mu}$ satisfies $\underline{\mu}P = \underline{\mu} \wedge
||\underline{\mu}||_1 = 1$, and is therefore a stationary distribution for $P$.
-All of these steps will also work backwards, so stationary distributions for $P$
-are also stationary distributions for $Q$.
+All of these steps will also work backwards, so stationary distributions for $P$ are also stationary distributions for $Q$.
\begin{enumerate}
\item
As before, each state intercommunicates, but now state 1 has cycle $(1,1)$ of
@@ -497,8 +495,7 @@ \section*{2011 paper, \url{http://bit.ly/KChGKp}}
Similar logic to before will give us that $\forall i \in S, d_i = 1$, so the
chain is aperiodic.
\item
-Since the chain is aperiodic, a limiting distribution exists and is equal to the
-stationary distribution, namely:
+Since the chain is aperiodic, a limiting distribution exists and is equal to the stationary distribution, namely:
$$
\underline{\pi} =
\left(\frac{1}{3},\frac{\alpha}{3},\frac{1-\alpha}{3},\frac{\alpha}{3},\frac{1-\alpha}{3}\right)