
update notes

commit a44dbbfe45766438f9f4074f5ba015ee150ebdea 1 parent bc8a6d6
@steveWang authored
Showing with 6,959 additions and 133 deletions.
  1. +854 −39 cs191.html
  2. +290 −1 ee105.html
  3. +168 −13 ee120.html
  4. +3 −1 mathjax/unpacked/config/local/local.js
  5. +495 −1 phys112.html
  6. +125 −2 phys137a.html
  7. +7 −1 sp2012/cs191/
  8. +2 −2 sp2012/cs191/
  9. +1 −0  sp2012/cs191/
  10. +2 −2 sp2012/cs191/
  11. +107 −0 sp2012/cs191/
  12. +138 −0 sp2012/cs191/
  13. +310 −0 sp2012/cs191/
  14. +235 −0 sp2012/cs191/
  15. +237 −0 sp2012/cs191/
  16. +18 −14 sp2012/cs191/
  17. +14 −14 sp2012/cs191/
  18. +1,086 −33 sp2012/cs191/
  19. BIN  sp2012/cs191/hw10.pdf
  20. +42 −2 sp2012/ee105/
  21. +99 −0 sp2012/ee105/
  22. +47 −0 sp2012/ee105/
  23. +72 −0 sp2012/ee105/
  24. +79 −0 sp2012/ee105/
  25. +387 −0 sp2012/ee105/
  26. +2 −2 sp2012/ee120/
  27. +87 −0 sp2012/ee120/
  28. +73 −0 sp2012/ee120/
  29. +57 −0 sp2012/ee120/
  30. +229 −2 sp2012/ee120/
  31. +171 −0 sp2012/phys112/
  32. +94 −0 sp2012/phys112/
  33. +106 −0 sp2012/phys112/
  34. +122 −0 sp2012/phys112/
  35. +129 −0 sp2012/phys112/
  36. +616 −0 sp2012/phys112/
  37. +2 −2 sp2012/phys137a/
  38. +5 −0 sp2012/phys137a/
  39. +106 −0 sp2012/phys137a/
  40. +56 −0 sp2012/phys137a/33.html
  41. +8 −0 sp2012/phys137a/
  42. +20 −0 sp2012/phys137a/
  43. +13 −0 sp2012/phys137a/
  44. +13 −0 sp2012/phys137a/
  45. +43 −0 sp2012/phys137a/
  46. +189 −2 sp2012/phys137a/
893 cs191.html
@@ -25,10 +25,9 @@
<li>2 Midterms: 14 Feb, 22 Mar.</li>
<li>Final Project</li>
<li>In-class quizzes</li>
-<li>Academic integrity policy
+<li>Academic integrity policy</li>
-<hr />
<li>What is quantum computation?</li>
<li>What is this course?</li>
@@ -63,10 +62,12 @@
fundamental problems, like factoring.</p>
<p>What this course will focus on is several questions on quantum computers.</p>
<p>Where we are for quantum computers is sort of where computers were
-60-70 years ago.
-<em> Size -- room full of equipment
-</em> Reliability -- not very much so
-* Limited applications</p>
+60-70 years ago.</p>
+<li>Size -- room full of equipment</li>
+<li>Reliability -- not very much so</li>
+<li>Limited applications</li>
<h2>Ion traps.</h2>
<p>Can trap a small handful of ions, small number of qubits. No
fundamental obstacle scaling to ~40 qubits over next two years.</p>
@@ -209,29 +210,31 @@
<h2>Multiple-qubit Systems -- January 24, 2012</h2>
<p>Snafu with the projections, so lecture will be on the whiteboard! Will
also stop early, unfortunately.</p>
-<p>State of a single qubit is a superposition of various states (cosθ|0〉
-+ sinθ|1〉). measurement has effect of collapsing the superposition
+<p>State of a single qubit is a superposition of various states
+(<mathjax>$\cos\theta\ket{0} + \sin\theta\ket{1}$</mathjax>). Measurement has the effect of
+collapsing the superposition.</p>
<p>(hydrogen atom: electron can be in ground state or excited state.)</p>
<p>Now we study two qubits!</p>
<h1>TWO QUBITS</h1>
-<p>Now you have two such particles, and we want to describe their joint
-state, what that state looks like. Classically, this can be one of
-four states. So quantumly, it is in a superposition of these four
-states. Our |ψ〉, then, is α₀₀|00〉 + α₀₁|01〉 + α₁₀|10〉 + α₁₁|11〉.
-Collapse of the wavefunction occurs in exactly the same manner.</p>
-<p>Probability first qubit is 0: |α₀₀|² + |α₀₁|². New state is a
-renormalization of the remaining states.</p>
+<p>Now you have two such particles, and we want to describe their joint state,
+what that state looks like. Classically, this can be one of four states. So
+quantumly, it is in a superposition of these four states. Our <mathjax>$\ket{\psi}$</mathjax>,
+then, is <mathjax>$\alpha_{00}\ket{00} + \alpha_{01}\ket{01} + \alpha_{10}\ket{10} +
+\alpha_{11}\ket{11}$</mathjax>. Collapse of the wavefunction occurs in exactly the
+same manner.</p>
+<p>Probability first qubit is 0: <mathjax>$\abs{\alpha_{00}}^2 +
+\abs{\alpha_{01}}^2$</mathjax>. New state is a renormalization of the remaining states.</p>
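As a quick sanity check, this measurement rule is easy to play with numerically; the amplitudes below are made up for illustration:

```python
import numpy as np

# Hypothetical two-qubit amplitudes a00, a01, a10, a11 (chosen for illustration).
psi = np.array([0.5, 0.5, 0.5j, 0.5])  # |psi> = a00|00> + a01|01> + a10|10> + a11|11>
assert np.isclose(np.linalg.norm(psi), 1.0)  # normalized

# Probability the first qubit reads 0: |a00|^2 + |a01|^2.
p0 = abs(psi[0])**2 + abs(psi[1])**2
print(p0)  # 0.5

# Post-measurement state: keep the |00>, |01> amplitudes and renormalize.
post = np.array([psi[0], psi[1], 0, 0]) / np.sqrt(p0)
print(np.linalg.norm(post))  # 1.0: properly renormalized
```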
<p>First, let me show you what it means for two qubits not to be
entangled. Essentially, we have conditional independence.</p>
<p>Quantum mechanics tells us that this is a very rare event (i.e. it
almost never happens).</p>
<h2>Bell State</h2>
-<p>You have two qubits in the state (1/√2)|00〉 + (1/√2)|11〉. Impossible
-to factor, so we must have some sort of dependence occurring. Neither
-of the two qubits has a definite state. All you can say is that the
-two qubits together are in a certain state.</p>
+<p>You have two qubits in the state <mathjax>$\frac{1}{\sqrt{2}}\parens{\ket{00} +
+\ket{11}}$</mathjax>. Impossible to factor (nontrivial tensor product), so we must
+have some sort of dependence occurring. Neither of the two qubits has a
+definite state. All you can say is that the two qubits together are in a
+certain state.</p>
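One way to see that the Bell state has no tensor factorization: arrange the four amplitudes as a 2×2 matrix; the state is a product state exactly when that matrix has rank 1. A small sketch (the product-state example is made up):

```python
import numpy as np

def is_product(psi):
    """A two-qubit state a_{jk}|jk> factors iff the 2x2 matrix a_{jk} has rank 1."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return np.count_nonzero(s > 1e-12) == 1

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
prod = np.kron([1, 0], [0.6, 0.8])           # |0> tensor (0.6|0> + 0.8|1>)
print(is_product(bell), is_product(prod))    # False True
```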
<p>Rotational invariants of Bell states -- maximally entangled in all
orthogonal bases.</p>
<p><a name='4'></a></p>
@@ -599,25 +602,26 @@
<p>Suppose M = X (bit-flip). Xv = λv. (X-λI)v = 0. det(X-λI) = 0. Solve for λ,
which are your eigenvalues, and then we go back and solve for our eigenvectors.</p>
-<p>Here, we're going to do t his by inspection. Eigenvectors of X would be
-|+〉, |-〉; the corresponding eigenvalues are 1, -1.</p>
+<p>Here, we're going to do this by inspection. Eigenvectors of X would be
+<mathjax>$\ket{+}, \ket{-}$</mathjax>; the corresponding eigenvalues are 1, -1.</p>
<p>Why is this an observable? If you were to create the right detector, we'd
observe something. We'd measure something. What we read out on the meter is
-λ|j〉 with probability equal to α{j}², and the new state is |Φ{j}〉. What
-Schrödinger's equation tells us is that if you look at the energy operator
-H, and then in order to solve this differential equation, we need to look
-at it in its eigenbasis. It was not supposed to be so frightening. You can
-write U(t) notationally as e^{-iHt/ℏ}.</p>
+<mathjax>$\lambda_j$</mathjax> with probability equal to <mathjax>$\abs{\alpha_j}^2$</mathjax>, and the new state
+is <mathjax>$\ket{\Psi_j}$</mathjax>. What Schrödinger's equation tells us is that to solve
+this differential equation, we need to look at the energy operator H in its
+eigenbasis. It was not supposed to be so frightening. You can write U(t)
+notationally as <mathjax>$e^{-iHt/\hbar}$</mathjax>.</p>
<h2>Why H?</h2>
<p>Why should Schrödinger's equation involve the Hamiltonian? Why the energy
operator? What's so special about energy? Here's the reasoning: from axiom
3 of quantum mechanics, which says unitary evolution, what we showed was
-the unitary transformation is e^{-iHt/ℏ}. Any unitary transformation can be
-written in this form. You can always write it in the form e^{iM} for some
-Hermitian matrix M. The only question is, what should M be? Why should M be
-the energy function? The second thing that turns out (either something that
-we'll go through in class or have as an assignment) – suppose that M is a
-driving force of Schrödinger's equation. So ∂Ψ/∂t = M|Ψ〉.</p>
+the unitary transformation is <mathjax>$e^{-iHt/\hbar}$</mathjax>. Any unitary transformation
+can be written in this form. You can always write it in the form <mathjax>$e^{iM}$</mathjax>
+for some Hermitian matrix M. The only question is, what should M be? Why
+should M be the energy function? The second thing that turns out (either
+something that we'll go through in class or have as an assignment) –
+suppose that M is a driving force of Schrödinger's equation. So
+<mathjax>$\pderiv{\Psi}{t} = M\ket{\Psi}$</mathjax>.</p>
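The claim that any Hermitian M exponentiates to a unitary can be checked directly; a small sketch with an arbitrary made-up Hermitian matrix:

```python
import numpy as np

# Any Hermitian M gives a unitary e^{iM}. Diagonalize M = V D V†,
# then e^{iM} = V e^{iD} V†. The matrix below is arbitrary (made up).
M = np.array([[1.0, 2 - 1j], [2 + 1j, -0.5]])
assert np.allclose(M, M.conj().T)  # Hermitian

evals, V = np.linalg.eigh(M)
U = V @ np.diag(np.exp(1j * evals)) @ V.conj().T
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
```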
<p>Suppose there were some observable quantity A that is conserved: i.e. if
you start out with a measurement of |Ψ(0)〉, and you do the same
measurement at time t, if A is a conserved value, then this expected value
@@ -1532,8 +1536,8 @@
<h1>Guest lectures: implementation of qubits</h1>
<h2>April 3, 2012</h2>
<p>Kevin Young:</p>
-<p>We'll start with how to implement qubits using spins, and eventually culminate in
+<p>We'll start with how to implement qubits using spins, and eventually
+culminate in NMR.</p>
<p>Today, what we're going to do is start looking at physical implementations
of QC. Will continue through the end of next week. How to build an actual
physical quantum computer. Implementation choice: NMR (one of first QC
@@ -1706,8 +1710,8 @@
small. However, we get lucky because these are Hermitian matrices, so they
are diagonalizable with real eigenvalues, which is easy to do quickly.</p>
<p>Pauli matrices are especially nice in that they actually alternate between
-themselves and the identity. This has interesting results: <mathjax>$e^{i\theta
-X} = I\cos\theta + iX\sin\theta$</mathjax>; <mathjax>$e^{i\theta \hat{n} \cdot \vec{S}} = I \cos
+themselves and the identity. This has interesting results: <mathjax>$e^{i\theta X} =
+I\cos\theta + iX\sin\theta$</mathjax>; <mathjax>$e^{i\theta \hat{n} \cdot \vec{S}} =
I\cos\theta + i(\hat{n} \cdot \vec{S})\sin\theta$</mathjax>. This shows up a lot since
physicists tend to pick magnetic field as Z. Originally devised by Zeeman,
so it'll often be known as the Zeeman Hamiltonian or the Zeeman effect.</p>
@@ -1718,7 +1722,818 @@
understand as the cost of having this nice physical picture. We're really
doing rotations on complex vector spaces.</p>
<p>Limitation of this picture is that it's only half of the representations we
-can do in the complex space.</p></div><div class='pos'></div>
+can do in the complex space.</p>
+<p><a name='22'></a></p>
+<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
+<h1>Guest lectures: implementation of qubits</h1>
+<h2>April 12, 2012</h2>
+<p>(notes from last time were on paper)</p>
+<p>One last homework due next week. Will give feedback on projects after
+that. 3 multi-part problems. Should not be too hard.</p>
+<p>Review: we can do arbitrary single-qubit gates. Wrote down expressions for
+doing arbitrary phase rotations. The simplest model for two qubits
+interacting with each other is <mathjax>$H_{int} = J\sigma_z^{(1)}
+\otimes \sigma_z^{(2)}$</mathjax>. If we were in a single qubit, we chose special
+states: <mathjax>$\ket{0},\ket{1}$</mathjax>. Eigenstates of <mathjax>$\sigma_z$</mathjax>. Arbitrary
+superposition of these.</p>
+<p>When we have two qubits, we can pick a similar basis, but now our system is
+composed of 2 qubits, so we need four basis states (can easily form these
+by taking tensor products of our previous basis states).</p>
+<p>The effective magnetic field of the qubit is itself measured by a Pauli
+operator.</p>
+<p>If we want to build a matrix representation of an operator, we just need to
+know how it acts on each of our basis states. To set up our conventions,
+<mathjax>$\sigma_z\ket{0} = \ket{0}$</mathjax>; <mathjax>$\sigma_z\ket{1} = -\ket{1}$</mathjax>.</p>
+<p>If J positive, favors antialigned (antiferromagnetic); if J negative,
+favors aligned (ferromagnetic). Condensed-matter people love to study
+antiferromagnetic things. End up with frustration in triangle lattice.</p>
+<p>The way we construct the tensor product matrix is by sticking the second
+matrix B onto each element of A. What I mean by that is that <mathjax>$A \otimes B
+\equiv \begin{pmatrix}A_{11} B &amp; A_{12}B &amp; ... \\ A_{21}B &amp; ... &amp; ...
+\\ ... &amp; ... &amp; ...\end{pmatrix}$</mathjax>. It's worth your time to convince yourself
+that this is the right way to do the tensor product. Think about how you're
+writing down your basis states.</p>
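This block construction is exactly what numpy's `kron` computes, which makes it easy to convince yourself; a minimal sketch with the Pauli matrix:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])

# np.kron implements exactly the block construction above:
# a copy of the second matrix scaled by each entry of the first.
ZZ = np.kron(Z, Z)   # sigma_z tensor sigma_z, a 4x4 diagonal matrix
print(np.diag(ZZ))   # [ 1 -1 -1  1]: |00>,|11> aligned (+1); |01>,|10> antialigned (-1)
```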
+<p>Now that we have this operator, let's think about what it does to our
+system. Let's look at it strictly from the lab frame. So we're not going to
+take out the Larmor precession. Write out full Hamiltonian.</p>
+<p><mathjax>$H = \mu_0 B_1 \sigma_z \otimes I - \mu_0 B_2 I \otimes \sigma_z + J
+\sigma_z \otimes \sigma_z$</mathjax>. One thing you can do is look at the spectrum of
+the Hamiltonian: the available energies (eigenvalues of Hamiltonian).</p>
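A sketch of reading off that spectrum, with made-up values for the field strengths and the coupling J (arbitrary units):

```python
import numpy as np

I2 = np.eye(2)
Z = np.array([[1.0, 0], [0, -1]])

# Illustrative (made-up) parameters in arbitrary units.
mu0B1, mu0B2, J = 1.0, 0.6, 0.1
H = mu0B1 * np.kron(Z, I2) - mu0B2 * np.kron(I2, Z) + J * np.kron(Z, Z)

# H is diagonal in the computational basis, so the spectrum
# (available energies) can be read off directly.
print(np.linalg.eigvalsh(H))  # [-1.7 -0.3  0.5  1.5]
```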
+<p>By putting an interaction on these two qubits, I've given myself some way
+of doing a CNOT: a nontrivial tensor product. One thing that is important
+is that these qubits are indistinguishable somehow. If I want to control
+one but not the other, they need different resonance frequencies. If you're
+going to pick some molecule to stick in your quantum computer, you can't
+choose one with the usual symmetries. Something like plain acetylene
+wouldn't really work well: both carbons have the same
+environment. Indistinguishable. But by replacing one hydrogen by a
+deuterium, we've changed the resonant frequency by just a little bit so
+that our system is now resolvable.</p>
+<p>Let's go back into a rotating frame. In a rotating frame, what we've done
+is subtract out the rotating part of the hamiltonian. We can still do the
+same mathematical transformation. Why? These operators commute. Two
+different systems, so you can know both simultaneously. No uncertainty
+relation between them.</p>
+<p>Notice that the state <mathjax>$\ket{00}$</mathjax> and <mathjax>$\ket{11}$</mathjax> get a positive phase after
+some time t, and <mathjax>$\ket{01}$</mathjax> and <mathjax>$\ket{10}$</mathjax> get a negative phase after some
+time t. Recall that the <mathjax>$\pi$</mathjax>-pulse changes the parity of a state: it takes
+an even-parity state and switches it to an odd parity state.</p>
+<p>First is hamiltonian evolution for time <mathjax>$\tau$</mathjax>, and second, we do an X-gate
+on the first qubit, and then third, we do hamiltonian evolution again for
+<mathjax>$\tau$</mathjax>, and fourth, we do our X-gate again. I'll do 0 explicitly. We start
+out in the state <mathjax>$\ket{00}$</mathjax>. Hamiltonian evolution (for this system) under
+a time <mathjax>$\tau$</mathjax> looks like <mathjax>$e^{-iJ\tau}\ket{00}$</mathjax>. After the second step, we move
+into the state <mathjax>$e^{-iJ\tau}\ket{10}$</mathjax>. After the third step, we have the
+state <mathjax>$e^{i(J\tau - J\tau)}\ket{10}$</mathjax>. And then, after the fourth step, we get
+back <mathjax>$\ket{00}$</mathjax>. So by doing this series of operations, I've effectively
+eliminated the effect of this hamiltonian (not ideal, but has the same net
+effect).</p>
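The four-step sequence can be simulated directly; since this Hamiltonian is diagonal, its time evolution is just a diagonal phase matrix (J and τ below are made up):

```python
import numpy as np

J, tau = 0.3, 1.7          # made-up coupling strength and delay
X = np.array([[0, 1], [1, 0]])
X1 = np.kron(X, np.eye(2))                # X-gate on the first qubit

# H = J sigma_z tensor sigma_z is diagonal with entries J*[1,-1,-1,1],
# so U(tau) = exp(-i H tau) is a diagonal phase matrix.
U = np.diag(np.exp(-1j * J * tau * np.array([1, -1, -1, 1])))

psi00 = np.array([1, 0, 0, 0], dtype=complex)
seq = X1 @ U @ X1 @ U                     # evolve, flip, evolve, flip
print(np.allclose(seq @ psi00, psi00))    # True: the coupling's phase is undone
```

The same cancellation happens for every basis state, so the whole sequence is the identity, not just on |00⟩.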
+<p>This procedure is going to be very important to us: dipolar
+decoupling. It's dipolar because you're dealing with these magnetic
+dipoles, and this interaction in a sense couples these two qubits, and
+you've turned this interaction off. On Tuesday, we'll talk about dynamic
+decoupling, where it's basically the same procedure used to eliminate
+unwanted couplings.</p>
+<p>One of the assumptions we've made is that the X-gate is fast compared to
+<mathjax>$\tau$</mathjax>. It takes some time to do that X-gate, and while that's happening,
+the system is evolving, and so you're accumulating some errors. Problem
+with doing these X-gates fast.</p>
+<p>Let's think of the exact sequence of controls we need to do to implement a
+Bell state. So the first thing we do is the dipolar decoupling, as
+discussed, so that we have the same state after some time has elapsed. Now,
+we have to do a Hadamard gate: sequence of X, Y pulses that looked like
+<mathjax>$X(\pi/2), Y(\pi/4), X(\pi/2)$</mathjax>. Now we want to do a CNOT by applying a
+<mathjax>$\pi$</mathjax>-pulse at one of the frequencies.</p>
+<p>This sequence of operations gives us exactly the Bell state that we wanted.</p>
+<p>When you start building a larger quantum computer, the idle times start
+becoming vital: even worse than we've discussed (Tuesday).</p>
+<p>A lot of pulse-design for NMR revolves around faulty gates: how can I make
+these errors cancel each other out?</p>
+<p><a name='23'></a></p>
+<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
+<h1>Guest lectures: implementation of qubits</h1>
+<h2>April 17, 2012</h2>
+<p>Last time, we talked about two-qubit gates. Today, we're going to talk
+about why NMR quantum computers are not scalable and dephasing/decoherence.</p>
+<p>So, one of the most successful demonstrations of an NMR quantum computer so
+far was factoring the number 15. Used a molecule <mathjax>$F^{19}C^{13}$</mathjax> as their
+nuclear spins, and then iron, etc. In order to construct a molecule to use
+it in a quantum computer, each qubit must be in a different local
+environment so that the spectrometer can see a different resonant frequency
+for each qubit.</p>
+<p>Carbon exists with several different isotopes: <mathjax>$C^{13}$</mathjax> has spin
+<mathjax>$\frac{1}{2}$</mathjax>, for instance.</p>
+<p>Carbons are easy to distinguish. Either surrounded by fluorines (very
+electronegative) or irons (metals).</p>
+<p>Also not just using one of these: fill up generously-sized sample-holder,
+and you end up with ~<mathjax>$10^{22}$</mathjax>. Also important: this system is generally
+analysed at room temperature. At room temperature, the effect of thermal
+excitations on quantum systems is fairly large. Back-of-the-envelope way of
+estimating whether you're in the ground state or not: if your energy gap is
+very big compared to <mathjax>$k_B T$</mathjax>, you'll likely be in the ground state. But
+typically in NMR systems, the converse is true: energy gap is very small.</p>
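A back-of-the-envelope version of that estimate, assuming a hypothetical 500 MHz resonance at room temperature (T = 300 K):

```python
import math

# Rough Boltzmann comparison for liquid-state NMR.
h = 6.626e-34          # Planck constant, J*s
kB = 1.381e-23         # Boltzmann constant, J/K
gap = h * 500e6        # energy gap for an assumed 500 MHz transition, J
kT = kB * 300.0        # thermal energy at room temperature, J

print(gap / kT)        # ~8e-5: the gap is tiny compared to kT,
                       # so the two spin states are very nearly equally populated
```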
+<h2>Density matrices</h2>
+<p>Quantum mechanics is frequently concerned with quantum probabilities
+(intensities of the wave function, so to speak). These are not the only
+probabilities we can consider.</p>
+<p>Person flips a coin. If heads, gives state <mathjax>$\ket{\psi}$</mathjax>. If you're doing an
+experiment and measure, what you want to do is describe what's coming out
+consistently. One way: describe state as list of tuples,
+e.g. [(.5 0) (.5 1)]. Not very useful. Expectation of operator A: trace
+of <mathjax>$\rho A$</mathjax>.</p>
+<p>Density matrix: if probabilities sum up to 1, trace of
+density matrix is 1.</p>
+<p>Remember Bloch sphere: states on Bloch sphere describe quantum states. If I
+have some probabilistic sum, the density matrix <mathjax>$\rho$</mathjax> is <mathjax>$\sum_k p_k
+\ketbra{\psi_k}{\psi_k}$</mathjax>. For a single qubit, I'm interested in making
+measurements. The measurements I usually have access to are the Pauli
+matrices (plus the identity), which form a basis for all Hermitian matrices
+for two-level systems -- quaternion Lie group, almost. Pauli matrices are
+Hermitian and traceless.</p>
+<p>Thus I can write the density matrix in terms of these quaternions. <mathjax>$\rho$</mathjax>,
+then, will be <mathjax>$aI + b\sigma_x + c\sigma_y + d\sigma_z$</mathjax>. All I have to do,
+now, is figure out a,b,c,d. I know that the trace of <mathjax>$\rho$</mathjax> is 1, so a must
+be <mathjax>$\frac{1}{2}$</mathjax>.</p>
+<p>Now let's say I want to take the expectation value of <mathjax>$\sigma_x$</mathjax>. That's
+equal to the trace of <mathjax>$\rho \sigma_x$</mathjax>. Working this out, we get b is
+<mathjax>$\frac{1}{2}\avg{\sigma_x}$</mathjax>. The rest follows in a similar
+manner. (remember that <mathjax>$\sigma_i\sigma_j = \delta_{ij}I +
+i\epsilon_{ijk}\sigma_k$</mathjax>.)</p>
+<p>Something else is really nice here: you know that pure states are such that
+the expectations of <mathjax>$\sigma_x$</mathjax>, <mathjax>$\sigma_y$</mathjax>, <mathjax>$\sigma_z$</mathjax> are easy to
+calculate: the state is an eigenstate of <mathjax>$\hat{n} \cdot \vec{S}$</mathjax>.</p>
+<p>We can now say something: pure states live on the Bloch sphere, while mixed
+states live within the Bloch sphere.</p>
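This pure-versus-mixed picture can be checked by computing Bloch vectors; a small sketch:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

def bloch(rho):
    """Bloch vector (<sx>, <sy>, <sz>) = (tr(rho s_i))."""
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

pure = np.array([[1, 0], [0, 0]])       # |0><0|
mixed = I2 / 2                          # completely mixed state

print(np.linalg.norm(bloch(pure)))   # 1.0: on the Bloch sphere
print(np.linalg.norm(bloch(mixed)))  # 0.0: at the center
```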
+<p>No measurements that can distinguish between states with same density
+matrix. Completely mixed state.</p>
+<p>How these things evolve over time. Populations, coherences. Time evolution:
+rewrite as <mathjax>$\int p_b \rho_b db$</mathjax>.</p>
+<p>Populations and coherences: intuition is notion of coherent superposition
+vs. incoherent superposition (classical prob. of up, classical prob. of
+down; off-diagonal terms are 0).</p>
+<p>Start having quantum coherences <mathjax>$\implies$</mathjax> values showing up in
+off-diagonal terms.</p>
+<p>Magnetic field will just cause phase to go around at some rate.</p>
+<p>Hahn echo to extend coherence: only if magnetic field is not
+fluctuating. Great at eliminating low-frequency noise.</p>
+<p>In an NMR system, you tend to have inhomogeneous broadening.</p>
+<p>Decoherence comes in two flavors: both flavors are very bad. This is called
+dephasing: losing the phase from the system. Typically comes with a time
+scale, <mathjax>$t_2$</mathjax>. If you do this series of Hahn echos, the coherence will very
+quickly decay. Remember, these magnetic fields are slightly fluctuating.</p>
+<p><mathjax>$t_2^*$</mathjax> is decay that gets almost completely eliminated by Hahn echos, so
+less relevant, generally.</p>
+<p>There's another type of decoherence: suppose I set up my state in the
+excited state. Could be noise that relaxes the state to the ground
+state. This relaxation is given by time <mathjax>$t_1$</mathjax>. If you set up some state on
+the Bloch sphere, and you don't do your computation very quickly, it'll
+start to decohere.</p>
+<p><mathjax>$t_2$</mathjax> is controllable. Can utilize correlations of that noise, various
+techniques to mitigate its effects. Relaxation very difficult to eliminate:
+what you could try is symmetrization. Can never get this echoing
+behavior. Eventually, all of these states will go down to the mixed
+state. Eventually all the states tend toward the zero state if there's
+relaxation.</p>
+<p>Decay of magnetization; Bloch equations. Decay in certain directions
+corresponds to certain time constants. So that's about everything. Did want
+to talk for a few seconds what happens when you try to scale NMR.</p>
+<p>Because you have these thermal issues, you can't prepare the ground state
+exactly where you want it. You want everything in the ground state. Because
+of thermal issues, you have a probability of being in all of the
+states. Make something called a pseudo-pure state. <mathjax>$\epsilon$</mathjax> times the
+ground state plus <mathjax>$1-\epsilon$</mathjax> times the fully-mixed state. When you start
+adding qubits (e.g. with 7), <mathjax>$\epsilon$</mathjax> gets smaller. If you get 100 qubits
+(molecule with 100 different spins on it) and a standard sample size,
+there's 100 qubits per molecule. There's a 99.9999999% chance you have 0
+molecules sitting in the ground state.</p>
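The 100-qubit claim can be checked with a rough Poisson estimate, taking the ~10^22 molecules from above and assuming (as a simplification) each spin is 50/50:

```python
import math

# Back-of-the-envelope check, assuming ~1e22 molecules per sample
# and a 100-spin molecule with near-equal populations.
n_molecules = 1e22
p_ground = 2.0 ** -100                  # each spin ~50/50, so 2^-100 per molecule

expected = n_molecules * p_ground       # expected molecules in the ground state
p_none = math.exp(-expected)            # Poisson probability of zero such molecules
print(expected)                         # ~8e-9 expected ground-state molecules
print(p_none)                           # ~0.999999992: essentially certain none exist
```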
+<p>Also, the colder your system gets, that's certainly better, but you need to
+push those temperatures really low, and at some point you're not doing
+liquid-state NMR any more (molecules are just tumbling; dipolar coupling
+between molecules balances out, very narrow lines), you're dealing with
+solid-state NMR (broadening of lines). Ways of coping: magic-angle spinning
+-- narrows lines a bit.</p>
+<p>Thursday: Haffner will come back to talk a bit more about another
+system. Umesh will then come back to talk about AQC and quantum crypto.</p>
+<p><a name='24'></a></p>
+<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
+<h1>Guest lectures: Experimental Quantum Computing</h1>
+<h2>April 19, 2012</h2>
+<p>Professor Haffner: will speak about experimental QC. One of leading experts
+in ion traps.</p>
+<p>Specific implementation of quantum information processing. Idea is fueled by building
+a scalable quantum processing device for whatever motivation you have. Many
+approaches. What people thought 10-15 years ago: landscape has not actually
+changed too much in recent years. Couple options shown to not work; most
+likely will; and some have made progress.</p>
+<p>Implementation of qubits: initialization of quantum registers, logic
+operations, maintaining coherence. NMR initially very successful, but
+could be shown that this technology not scalable: exponential cost for
+initialization: prob of finding particle in ground state drops
+exponentially with no. of qubits because simply the prob of being in ground
+state decreases.</p>
+<p>Concentration for today: trapped ions. Mention: superconducting quantum
+bits -- new, looks promising.</p>
+<p>Picture of quantum computer. Quite complicated. Important thing: realize
+that the physics is very simple, and that's what you need for quantum
+processing device: very isolated, very clean. All in this vacuum chamber
+(rest is just tools). Ion trap: by applying correct voltages, we can
+confine single ions. Trapping direction much stronger radially than
+axially. Distance from each other on order of <mathjax>$5\mu\mathrm{m}$</mathjax>. These ions
+are what we call quantum bits: nothing but two-level system: we forget
+about all other levels. Particular excited state: chose <mathjax>$D_{3/2}$</mathjax>
+(implementation detail). Transition is a two-photon transition: has two
+angular momenta. Unlikely to drop (since it needs to spit out two photons),
+so we have about a second for quantum processing.</p>
+<p>We also have this p-level around. Very important in this context, since it
+allows us to see the ions.</p>
+<p>Di Vincenzo criteria:</p>
+<li>Scalable physical system, well-characterized qubits.</li>
+<li>Ability to initialize the state of the qubits (need to set up a system).</li>
+<li>Long relevant coherence times, much longer than gate operation time
+(require this to be quantum, and we need to be able to implement arbitrary
+operations).</li>
+<li>Universal set of quantum gates.</li>
+<li>Qubit-specific measurement capability (need to be able to read it out).</li>
+<p>No infinitely-scalable system (believed to be finite particles in
+universe), so we mean mostly-scalable, i.e. not exponential to scale.</p>
+<p>Experimental procedure: initialization in a pure quantum state. Very high
+probabilities: the idea is that you exploit large differences of coherence
+times. (done by shining lasers)</p>
+<p>Quantum state manipulation: also done with lasers.</p>
+<p>Quantum state measurement by fluorescence detection (every 10 ns I get a
+photon into a <mathjax>$4\pi$</mathjax> solid angle, etc.). From a quantum mechanical view, this
+is pretty interesting: prepare in s,d, and start scattering photons, and
+the ion decides whether it's light or dark. Also works with multiple
+ions. Instead of zero and one, I will use s and d since I am talking about
+physical images.</p>
+<p>With very high fidelity (~99.99%) we can detect this state. Essentially
+limited by the time it takes for the d state to decay to the ground state. Many orders
+of magnitude better than other implementations.</p>
+<p>What we do now is initialize in ground state, shine in laser for a given
+time, then read out dark or bright, then plot probability. Then you see
+oscillations that correspond to the Bloch sphere, and you plot these.</p>
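A sketch of the resulting data, using the standard on-resonance Rabi formula with a made-up Rabi frequency:

```python
import numpy as np

# Sketch of the experiment described above: start in the ground state, apply
# the laser for time t, read out dark or bright, and plot the probability.
Omega = 2 * np.pi * 1.0    # made-up Rabi frequency (one cycle per unit time)
t = np.linspace(0, 2, 201)
p_excited = np.sin(Omega * t / 2) ** 2   # on-resonance Rabi oscillation

print(p_excited[0])                      # 0.0: starts in the ground state
print(p_excited.max())                   # 1.0: a pi-pulse fully inverts the state
```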
+<p>How do we distinguish between <mathjax>$s+d$</mathjax> and <mathjax>$s+id$</mathjax>? What does that mean? What
+does that phase mean? I can shine in this laser and stop here. Might have
+also noticed I can prepare in <mathjax>$s$</mathjax> state; how can I prepare in <mathjax>$is$</mathjax> state?
+This problem of the phase occurs because in quantum mechanics, you must be
+especially careful regarding what you can observe. Will show experiments.</p>
+<p>So what is this phase about? For this phase, you need a phase
+reference. The two phases we will have are the phase of the laser field and
+the phase of the atomic polarization. Assume for now that we have a
+dipole transition, not a quadrupole transition. <mathjax>$s$</mathjax> and <mathjax>$p$</mathjax> state. If I
+look at where my electron is, I will find that in the upper part it
+interferes constructively, but from the time-dependent Schrödinger
+equation, relative phase is governed by energy difference. What this means
+is that you can have a minus sign, and it will be found lower.</p>
+<p>The electron probability moves up and down with energy difference between s
+and p state. Exactly the optical frequency: laser frequency. If I want to
+drive transition, must apply laser matching that energy difference.</p>
+<p>Atom with atomic polarization that goes up and down: how laser can drive
+transition. Electric field shakes electron. If phase right, I can increase
+this dipole moment. If phase wrong, I get destructive interference.</p>
+<p>By switching laser phase by <mathjax>$\frac{\pi}{2}$</mathjax>, we switch from constructive to
+destructive interference. By shifting the phase by this amount, we're not
+putting any more energy in the system, so it's not evolving.</p>
+<p>When I switch the phase, I am no longer rotating about the x-axis, but now
+the y-axis.</p>
+<p>So far we were talking about single ions. Now consider multiple ions (where
+most problems show up).</p>
+<p>change voltage by electrooptic deflector: deflects light beam based on
+voltage. Neighboring ions are hardly excited. Residual excitation is a cost
+here since it is never really zero. Way to correct: apply dipolar decoupling.</p>
+<p>Suppose you are ~1 A, and your neighboring ion is about 50 <mathjax>$\mu \mathrm{m}$</mathjax>
+away. Exploiting here: ions move together. Coulombic attraction. Two
+pendula coupled by a spring have normal modes. Most common is center of
+mass mode where all ions move together. All have different frequencies. Can
+use laser to excite these modes. Main mechanism: selectively push on one
+ion with a laser.</p>
+<p>Review of quantum mechanics of harmonic oscillators. Four levels: display
+slightly differently. Combination of two-level system with harmonic
+oscillator. Plot energy, label different states. Ion beam at ground
+state. Electron ground state, electron excited state, etc. And then I can
+apply a laser to drive this electron transition.</p>
+<p>Think of it really as a ball being at the bottom of some harmonic
+potential. Very crude approximation. Point is that we can think of this as
+an abstract Hilbert space which you can connect with lasers. Same
+frequency: carrier transition. In this transition, motion is not
+changed. Then we can detune the laser; we have energy conservation at
+particular energies.</p>
+<p>Blue side-bands, since blue-shifted, etc. Frequency multiplied by
+<mathjax>$\hbar$</mathjax>. When you scan the laser frequency, you can see some
+excitations. There are other excitation probabilities. Three harmonic
+oscillators, since we're working with three dimensions. Radial modes,
+radial minus axial modes, etc. Can also do transitions where excitation of
+state destroys a photon. Raising and lowering operators.</p>
+<p>e.g. radial production of phonons, axial destruction of phonons.</p>
+<p>What we can do, for instance, is increase motion in one direction while
+decreasing it in another direction.</p>
+<p>You learn things like dipole-forbidden. It's really a quadrupole
+transition, suppressed by <mathjax>$\alpha^2$</mathjax>. Gives the difference of 10ns
+vs. 1s. Don't worry about details.</p>
+<p>We looked already at this exciting the electronic transition. Can also tune
+laser to sideband, and see more Rabi oscillations with Rabi frequencies
+reduced by the Lamb-Dicke parameter <mathjax>$\eta = kx_0$</mathjax>. Can calculate; actually
+probably would take an hour as well.</p>
+<p>Let us now create some Bell states. See how we can use this toolbox to
+create Bell states. Take two ions prepared in s state, but also
+laser-cooled center of mass to ground state. Doppler effect. What we do now
+is three pulses: first a pulse onto right ion for a length <mathjax>$\pi$</mathjax> on the
+carrier transition, i.e. flip state but not motion. Now, go with laser to
+other ion and apply a blue side-band pulse for length <mathjax>$\frac{\pi}{2}$</mathjax>. And
+now we have the last pulse, which will somehow create the bell
+state. Tuning our laser to the right ion and applying a <mathjax>$\pi$</mathjax>-pulse. What's
+happening is we go to the s state and remove a photon excitation. We
+de-excite the motion (which was common to both ions). The original part of
+this superposition, which was left around, won't happen: no state with
+correct energy.</p>
+<p>If we have zero excitations in our quantum oscillator, then we can't take
+out an excitation. We can separate out the motion, and what we left with is
+sd + ds. Remember: we're talking only about the center-of-mass
+motion. Normal mode spans the full space of the two ions moving.</p>
+<p>Bell-states with atoms: NIST: beryllium, fidelity of 97%; Innsbruck:
+calcium, fidelity of 99%, etc.</p>
+<p>We all know that there is an infinite number of Bell states, which have
+some phase incorporated. Need to play around with the length of the
+<mathjax>$\frac{\pi}{2}$</mathjax> pulse. Must show coherence: interference with each
+other. Plus sign makes sense. Not sometimes plus, sometimes minus. We want
+to also know relative phase of the superposition.</p>
+<p>What we do is develop a method to measure the density matrix. A measurement
+yields the z-component of the Bloch vector. Measuring diagonal of the
+density matrix is straightforward: enter measured values into this
+diagonal. So how, then, are you going to measure these off-diagonal
+elements, which determine the phase of this coherence?</p>
+<p>How do I, say, measure the projection onto the y-axis? Rotate about x-axis
+(apply <mathjax>$\frac{\pi}{2}$</mathjax>-pulse). Enter value here, then prepare same state
+again. Do the same with projection onto x-axis.</p>
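<p>The single-qubit version of this reconstruction can be sketched in a few lines (a hypothetical illustration, not the lab's actual analysis code): the three measured projections are the Bloch-vector components, and the density matrix is rebuilt as <mathjax>$\rho = \frac{1}{2}(I + \vec{r}\cdot\vec{\sigma})$</mathjax>.</p>

```python
import numpy as np

# Pauli matrices: their expectation values are the Bloch-vector components.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def density_from_bloch(rx, ry, rz):
    """Reconstruct rho = (I + r . sigma) / 2 from measured projections."""
    return 0.5 * (I2 + rx * sx + ry * sy + rz * sz)

# Measured <sz> gives the diagonal; <sx> and <sy> (obtained by rotating
# before measuring) fill in the off-diagonal coherences.
rho = density_from_bloch(1.0, 0.0, 0.0)   # reconstructs the |+> state
```

<p>The two-qubit case works the same way, but over all 16 combinations of Pauli measurements, which is why the analysis grows so quickly with qubit count.</p>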
+<p>Now we need to generalize to 2 qubits. Must try all combinations. Need to
+do x,y rotations, nothing to first, nothing to second. etc. Analysis, some
+linear algebra. Then we come up with the density matrices.</p>
+<p>These are actually all measurements. You can even go to more complex
+states: largest: 16 zero-qubits + 16 up-qubits. Huge matrix.</p>
+<p>W state, etc.</p>
+<p>Want to now show some nice things about quantum mechanics: can now prepare
+these states, and can now measure individual qubits. Suppose we measure the
+red qubit here in the center. Projection onto a self-consistent space.</p>
+<p>Problems with Hilbert spaces: one more qubit would have increased analysis
+time by a factor of 24.</p>
+<p>In the last 20 minutes, we've talked only about generation of Bell
+states. Not yet a quantum gate. Original proposals: controlled-not,
+Cirac-Zoller, geometric phase, Mølmer-Sørensen gate.</p>
+<p>We use a little more complicated mechanism, which is harder to
+understand. What we want to do is manipulate our two ions in this
+manifold. Put all states. Drive two photon transitions: to move from SD to
+DS. Do this by detuning laser from excited state. Equivalent paths:
+constructive interference. When you analyze this carefully, you can see
+this is a two-qubit universal gate.</p>
+<p>Don't need to address ions: automatically implements a gate. Experimentally
+easier: higher fidelity, etc.</p>
+<p>Analysis of coherence. Applying two single-qubit gates to two ions:
+contrast of interference fringes goes up to 1: high-fidelity bell state, so
+gate works exceedingly well. One case: gate duration of 51 <mathjax>$\mu
+\mathrm{s}$</mathjax>, average fidelity of 99.3(2)%.</p>
+<p>Talk about errors. In theory, the fidelity would be sufficient.</p>
+<p>Parity + when equal, parity - when unequal, etc. Such an interference
+pattern predicted. If not happening, interference fringes have less
+contrast. Fidelity decreases as we apply more gates.</p>
+<p>Even after 21 equivalent gates, still 8% fidelity.</p>
+<p>Scaling of this approach?</p>
+<p>Coupling strength between internal and motional states of an N-ion string
+ decreases as <mathjax>$\eta \propto \frac{1}{\sqrt{N}}$</mathjax> (reasoning: a single
+ photon gets absorbed by, and has to kick, the whole ion chain, which is
+ more difficult if the chain is big), but the problem does not scale
+ exponentially. From a math point of view, we are fine (no exponential
+ resources).</p>
+<p>More vibrational modes increase risk of spurious excitation of unwanted
+ modes.</p>
+<p>Distance between neighbouring ions decreases <mathjax>$\implies$</mathjax> addressing more
+ difficult.</p>
+<p>So we need to divide: idea behind using segmented traps.</p>
+<p>One problem is complexity. Everything can fail. Suppose one component fails
+with error 0.1%, with 10000 components, this will never work.</p>
+<p>In some sense the ENIAC looks similar in complexity. What we have done to
+make things better was to put a box around it: otherwise, the probability
+of misalignment gets worse.</p>
+<p>We need to divide the system into smaller pieces: where most (if not all)
+of effort of ion-trapping goes at the moment. This is an idea which
+actually comes from Leibfried and Wineland, who envisioned many ions
+trapped in some structure, and voltages moving ions around. Advantage:
+small ion string.</p>
+<p>Can show that within 50us we can move ions on the order of a millimeter,
+which is huge. Shown on time scales, which are comparable to gate times, so
+not too expensive.</p>
+<p>Experiment, for instance: sketch of ion trap. What they have done is move
+ions between points, and what they have shown is coherence (Ramsey fringes
+on a single ion). When they transported, the contrast is approximately the
+same. This tells us that the quantum information is not lost during
+transport.</p>
+<p>Another development is to make the traps easier. People are interested in
+using conventional lithography to build easier traps. Recent development:
+surface traps. All on one surface; can use microfab techs. Can basically
+analyse electrostatics, and ions trapped on such a surface.</p>
+<p>That is basically where the experiments are. People are building these
+devices and trying to control them. Main challenge. Once we can control
+these, we have real scalable quantum processing.</p>
+<p>If you want to read about these things, you can look in this review which
+is written in a way with hardly any math.</p>
+<p>Review on "quantum computing with trapped ions".</p>
+<p>Most recent progress: NIST, Boulder; University of Maryland.</p>
+<p>Basic ideas of how this works, physics is fairly clear: quantum mechanics at its best.</p>
+<p><a name='25'></a></p>
+<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
+<h1>Quantum Cryptography</h1>
+<h2>April 24, 2012</h2>
+<p>Today we will speak about quantum cryptography. The main question: how to
+agree on a secret key? Alice &amp; Bob sitting at distance, can communicate
+over insecure channel. Want to agree on shared random key. Often running,
+can use private-key crypto, do whatever they want. Getting started, they
+want to exchange this private key.</p>
+<p>The problem is that Eve is listening on the line. So how do they accomplish
+this? Here's the setup: there's a quantum channel which they share. Alice
+can send qubits to Bob. They also share a classical channel, except that
+they have no control over the classical channel. All they know is that it's
+authenticated (public-key signing, perhaps).</p>
+<p>Eve also has access to the quantum channel. What we want to do is develop
+this protocol: sharing of the random key without loss of confidentiality.</p>
+<p>So what happens in classical cryptography? RSA (public key
+crypto). Everything we're speaking about today is working under the
+assumption that A&amp;B haven't met yet and are only now establishing this
+private key.</p>
+<p>Why do we need quantum cryptography? Shor's breaks RSA. What we are trying
+to achieve is unconditional security -- the only thing you need to assume
+in order to guarantee security is that the laws of physics hold (i.e. QM is
+an accurate model). Has been implemented using today's cryptography.</p>
+<p>What we will talk about today are the principles of quantum crypto.</p>
+<p>Qubits used here are polarizations of photons. Polarization as orientation
+of light wave as it propagates. In some direction orthogonal to direction
+of propagation. Polarization is a qubit. Nice thing: polarizing filter
+blocks photons whose polarization is perpendicular to the orientation of
+the filter and transmits photons whose polarization is aligned with the
+orientation of the filter. It gives us a measurement axis for the photon.</p>
+<p>The probability that the photon is transmitted by the second filter is
+<mathjax>$\cos^2\theta$</mathjax>. Measurement of the qubit.</p>
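<p>As a quick numerical check (a sketch; the function name is mine), the Born rule reproduces the <mathjax>$\cos^2\theta$</mathjax> transmission law:</p>

```python
import numpy as np

def transmit_probability(theta):
    """Photon polarized at angle theta from a filter's axis.

    In the filter's basis the state is cos(theta)|pass> + sin(theta)|block>;
    the transmission probability is the squared amplitude along |pass>.
    """
    return np.cos(theta) ** 2

aligned = transmit_probability(0.0)           # filter matches polarization
crossed = transmit_probability(np.pi / 2)     # perpendicular filter blocks
diagonal = transmit_probability(np.pi / 4)    # 45 degrees: half pass
```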
+<p>What we are saying is that we could write our qubit state <mathjax>$\ket{\psi}
+\equiv \alpha\ket{0} + \beta\ket{1}$</mathjax>. Zero vertically-polarized, one
+horizontally-polarized. One part of superposition goes through, other part
+blocked. Self-consistency. We could also write this in some other basis
+<mathjax>$\ket{u}, \ket{u_\perp}$</mathjax> at some angle <mathjax>$\theta$</mathjax>.</p>
+<p>This is really qubits in the same language that we had, except we're
+thinking about them as spatially-oriented, just as we thought about
+spin. Completely analogous.</p>
+<p>So here's what we're planning to do. Two ways to encode a bit of
+information. Could either encode in rectilinear (vert/horiz) or in diagonal
+(+,-). So when Alice wants to transmit a bit to Bob, if she was using the
+rectilinear basis, she'd transmit horizontally/vertically polarized
+photons. Bob would then use either a vertical or horizontal filter. Similar
+concept with a diagonal polarization.</p>
+<p>Remember: all channels are visible to everyone.</p>
+<p>Why use both these polarizations? Has to do with uncertainty principle:
+these are Fourier transform pairs, so maximally uncertain. You'll recall we
+did this in the second or third lecture. We talked about the bit and sign
+bases. There's an uncertainty principle between the two. You have a choice
+of measurement basis (could be one, could be other, could be a mix) to
+determine how much information you get. You cannot get perfect information
+about the state of said qubit.</p>
+<p>Eve, who doesn't know which basis the information is being sent in, really
+cannot figure out both. If Eve tries to measure in the wrong basis, then
+she completely destroys the information.</p>
+<p>thought regarding decision of which basis to choose: send a bell
+state. Confirmation would be on this classical line, just saying "I got &amp;
+measured the bell state"</p>
+<p>Implementation is actually done without bell states. Very difficult to
+implement bell states.</p>
+<p>Let's suppose Eve entangled the transmitted qubit with one of her
+own. By doing so, she's randomized the state.</p>
+<p>The BB84 protocol (that we're going to discuss) was invented in 1984 but was
+not proven correct until about a decade ago. People assumed
+correctness. Reason: subtle. All other attacks you could think of, but they
+don't work. But how do you show that no attack will work?</p>
+<p>How do we make use of this scenario? How do we distinguish between Bob and Eve?</p>
+<p>Repeat 4N times (if we want to end up with N random bits):</p>
+<p>Choose random bit, randomly select basis. Announce choices of
+bases. Discard if choices different. Eve can see all of this.</p>
+<p>Final result: roughly 2N shared bits. Select N random positions, confirm
+that they're the same. Remaining N bits are secret key.</p>
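<p>The sifting step above can be sketched as a short simulation (no eavesdropper or channel noise; the structure and names are mine, not a real QKD implementation):</p>

```python
import secrets

def bb84_sift(n_raw):
    """Simulate BB84 sifting: keep only positions where bases agree."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_raw)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_raw)]  # 0=rect, 1=diag
    bob_bases   = [secrets.randbelow(2) for _ in range(n_raw)]
    sifted = []
    for bit, a, b in zip(alice_bits, alice_bases, bob_bases):
        if a == b:
            # Same basis: Bob's measurement reproduces Alice's bit.
            sifted.append(bit)
        # Different basis: Bob's outcome is 50/50 noise, and the position
        # is discarded after the public basis announcement.
    return sifted

key_material = bb84_sift(4 * 256)   # about 2*256 bits survive on average
```

<p>Half of the surviving bits would then be sacrificed to check for tampering, leaving roughly N secret bits.</p>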
+<p>Ensures confidentiality of message. Integrity of key is guaranteed by
+communications on classical channel.</p>
+<p>Potential corruption of shared key (attack on integrity): Eve just needs to
+corrupt one bit. 50% chance of catching this. Refine with more
+sophisticated checks.</p>
+<p>Beautiful way of dealing with this, provable correctness.</p>
+<p>Single-photon generators imperfect. Occasionally emit two photons. Eve can
+intercept one, let the other go, and that breaks the scheme (known vector
+of attack).</p>
+<p>Polarization over these distances? This is the interesting thing about
+photons. If you transmit a photon, there is almost nothing that decoheres
+it. Can maintain coherence over long period and long distance. Can transmit
+through optical fiber, and it will maintain its polarization for something
+like 60 - 70 km.</p>
+<p>Issue is why 60-70 km? Because signal gets attenuated. Every so often, you
+have to amplify the signal, but of course we don't want to amplify the
+signal; we just want to transmit it. Usually involves measurement and
+retransmission.</p>
+<p>People are trying to build amplifiers that don't do that: small quantum
+computers that take the photon, error-correct, and retransmit. But not
+implemented yet.</p>
+<p>Security of BB84: Since Eve does not know the correct orientation, she
+cannot measure the polarization without disturbing it.</p>
+<p>Proof of unconditional security based on axioms of QM is difficult and
+dates back to about 2000.</p>
+<p>Practical considerations:</p>
+<li>imperfect measurements: channel noise causes shared bits to not agree.</li>
+<li>Figure out noise rate, determine whether acceptable.</li>
+<li>In this case, Alice and Bob exchange parity bits to correct mismatches.</li>
+<li>Can only guarantee that Eve does not know too many bits out of the
+ remaining N.</li>
+<p>Randomness distillation. Suppose we are left with 4 bits, and we know the
+noise rate is at most a quarter, so Eve knows at most one of these
+bits. Want to distill into bits Eve does not know about. Extract key via
+XORs. Claim: no matter which bit Eve knows, each bit looks completely
+random to Eve.</p>
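<p>The 4-bit example can be checked exhaustively (a toy sketch of this distillation, not the general protocol): extract <mathjax>$(b_0 \oplus b_1, b_2 \oplus b_3)$</mathjax>, and verify that whichever single bit Eve knows, each output bit is uniform over the bits she doesn't.</p>

```python
from itertools import product

def distill(bits):
    """XOR disjoint pairs of raw bits into two output bits."""
    b0, b1, b2, b3 = bits
    return (b0 ^ b1, b2 ^ b3)

# Fix any one bit Eve might know; enumerate the 8 settings of the other
# three bits and count how often each output bit comes up 1.
for known_index in range(4):
    for known_value in (0, 1):
        ones = [0, 0]
        for rest in product((0, 1), repeat=3):
            bits = list(rest)
            bits.insert(known_index, known_value)
            out = distill(bits)
            ones[0] += out[0]
            ones[1] += out[1]
        # Each output bit is 1 in exactly half of the 8 cases.
        assert ones == [4, 4]
```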
+<p>Choose random hash function to hash these bits. What we can prove is that
+Eve has no information regarding the resulting hash. Must deal with these
+issues: mismatches and Eve's knowledge.</p>
+<p>Turns out that hash function itself doesn't have to be random. Should work
+even if hash function is public (and surjective?).</p>
+<p>Actual theorem: suppose you pick a hash function H at random, and now let's
+say that <mathjax>$N \mapsto M$</mathjax> (n-bit strings to m-bit strings), and that x is
+chosen according to some distribution.</p>
+<p>Lemma: left-over hash lemma. If you look at the pair H (specified somehow,
+chosen at random), H(x) (m-bit string; H(x) is some string), the claim is
+that this distribution is close to the uniform distribution on m-bit
+strings, and so you cannot tell the difference between this result and some
+random m-bit string.</p>
+<p>Has been implemented. Was famous experiment done at a lake near
+Geneva. Three locations with optical fiber going between these
+locations.</p>
+<p>Challenges with atmospheric photon transmission and detection. Background
+photons, daylight radiance, night radiance. How to distinguish? Start:
+timing pulse -- expect a photon in 1 us, within a window of 1 ns. What can
+we control? Narrowness of window of time; part of spectrum which sunlight
+that does not use (or least-used), and a very narrow window of frequency,
+at that; look at very small solid angle. So we're isolating in terms of
+position, momentum, and time. Once you do all of these things you realize
+that you can cut down this <mathjax>$10^{13}$</mathjax> to something very tiny.</p>
+<p>So why use photons? Why not use some particle with very strong coherence?
+Photons are nice -- very stable, coherent. So what would this other
+particle be?</p>
+<p>Must take into account GR. Photon essentially does not interfere with
+anything else. Remember how hard it is to do cavity QED. To take a photon
+and create a gate using said photon is very difficult to do. You put the
+photon in a cavity and couple the qubit to the cavity and the cavity to
+some other qubit. Outside of these extraordinary efforts, photons are
+fairly stable.</p>
+<p>Other considerations for photon transmission: efficient lasers (want some
+source such that exactly one photon can be transmitted). Already that is
+enough to get this off the ground.</p>
+<p>Examples of 10 km QKD. Commercially available quantum crypto systems.</p>
+<p>And then there's this fact. You can prove that these things are secure, but
+they aren't really: one possible attack: shine strong laser at Alice; can
+figure out internal settings.</p>
+<p>Device-independent cryptography. Want to prove quantum crypto scheme secure
+based purely on quantum mechanics. What is interesting is that you can show
+that in principle, you can create some systems. Theoretically possible to
+do this.</p>
+<p>Quantum key distribution, then use classical private key cryptography,
+i.e. schemes secure under quantum cryptography (not known to be broken; no
+proximal reason why those would be breakable, given the quantum algorithms
+we know).</p>
+<p>Two things: course projects. Some of you have sent email giving 2-3
+paragraphs describing what you'll be doing and what sources. What would be
+good would be if the rest of you could send something as soon as
+possible.</p>
+<p>Project presentations: next week Thursday. Will set up in location to be
+determined (hopefully here). Will go from 9:30 to 1, perhaps. Go through
+all project presentations. Make sure you have a good typed presentation
+that fits into ~20 min. With questions, presumably no presentations will go
+over 25 min.</p>
+<p>Roughly 10 groups. Will try to arrange for pizza -- at some point we'll
+break for lunch.</p>
+<p>What day will the paper be due? The paper will be due the end of the
+following week (end of finals week).</p>
+<p>Let's say that there might not be a breakdown in the sense that if you do
+particularly well in one, it'll compensate for the other.</p>
+<p>Two other issues: 1) there's a question for EECS majors: what should this
+course count as? Science requirement? EECS requirement? 50/50? Would be a
+useful thing to sort out. Would like to take existing notes for this class
+and make them more consistent with what was taught this semester. Two
+things: if you've been taking good notes in class, I'd love to have a copy
+of those.</p>
+<p><a name='26'></a></p>
+<h1>CS 191: Qubits, Quantum Mechanics and Computers</h1>
+<h1>Quantum Walks and the Dirac Equation</h1>
+<h2>April 26, 2012</h2>
+<p>Last lecture of the semester. Talk about two things: start with quantum
+walks and Dirac equation, and then I'll tell you a little about other
+topics in quantum computation you might be interested in. Larger picture,
+who, how to pursue.</p>
+<p>Before we start, let's review something about classical mechanics. Suppose
+we are doing a random walk on a discretized line, so we have grid
+points. At each step, we move randomly. How long to reach a distance <mathjax>$n$</mathjax>?</p>
+<p>You may recognize this: after k steps, this is given by a binomial
+distribution centered at zero, and the standard deviation is about
+<mathjax>$\sqrt{k}$</mathjax>: central limit theorem. We can work this out: if you look at the
+position after <mathjax>$k$</mathjax> steps, we have some distribution. We want to know what
+the variance of this distribution is, and thus what the standard deviation
+is. Turns out that the variance grows as <mathjax>$k$</mathjax>, and so the standard deviation
+grows as <mathjax>$\sqrt{k}$</mathjax>. Since each step is a random variable (independent from
+all other steps), we can exploit linearity of expectation to find this <mathjax>$k$</mathjax>.</p>
+<p>Thus if we want to reach this distance safely, we should expect to walk
+roughly <mathjax>$n^2$</mathjax> steps. That is the penalty for randomness: a factor of <mathjax>$n^2$</mathjax>.</p>
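<p>A quick Monte Carlo check of the <mathjax>$\sqrt{k}$</mathjax> spread (a sketch; the parameters are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
k, trials = 10_000, 2_000
# Each of k steps is +1 or -1 with equal probability, independently.
steps = rng.choice([-1, 1], size=(trials, k))
final = steps.sum(axis=1)          # positions after k steps
spread = final.std()               # should be close to sqrt(k) = 100
```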
+<p>Now assume that instead of dealing with a classical particle and a
+classical walk, we're dealing with a quantum particle.</p>
+<p>In the classical case, the position is <mathjax>$x$</mathjax>, which is an integer; there is a
+coin flip <mathjax>$b$</mathjax>, which is <mathjax>$\pm 1$</mathjax>, and at each step, you flip the coin
+and increment <mathjax>$x$</mathjax> by <mathjax>$b$</mathjax>.</p>
+<p>In the quantum case, we again have <mathjax>$x$</mathjax>s and <mathjax>$b$</mathjax>, but now we're keeping
+track of everything. The state includes both <mathjax>$x$</mathjax> and <mathjax>$b$</mathjax>. Whatever the coin
+flip says, we move accordingly. And now we must flip the coin. How to flip
+the coin, since it's now a quantum bit? Apply the Hadamard to it. That is
+our quantum walk on the line.</p>
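<p>The walk just described can be simulated directly by tracking amplitudes over (position, coin) pairs (a sketch, using the shift-then-coin ordering from the lecture):</p>

```python
import numpy as np

def hadamard_walk(k):
    """Discrete quantum walk on a line: shift by the coin, then Hadamard it."""
    n = 2 * k + 1                       # grid wide enough to avoid the edges
    psi = np.zeros((n, 2), dtype=complex)
    psi[k, 0] = 1.0                     # start at the origin, coin state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(k):
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]    # coin |0>: step left
        shifted[1:, 1] = psi[:-1, 1]    # coin |1>: step right
        psi = shifted @ H.T             # then flip the quantum coin
    return (np.abs(psi) ** 2).sum(axis=1)   # position distribution

k = 100
probs = hadamard_walk(k)
x = np.arange(-k, k + 1)
spread = np.sqrt((probs * x**2).sum())  # grows linearly in k, unlike sqrt(k)
```

<p>Plotting <code>probs</code> shows the characteristic two-lobed distribution: peaks near <mathjax>$\pm k/\sqrt{2}$</mathjax> from constructive interference, and almost no weight at the origin from destructive interference.</p>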
+<p>Now, we could do the same thing, start from the origin, walk for <mathjax>$k$</mathjax> steps,
+and how far do we get? When you do this quantum walk, you end up with most
+of the mass at some constant multiplied by <mathjax>$k$</mathjax>: it really walks at constant
+rate. Why?</p>
+<p>What accounts for the chance at origin being roughly zero in the quantum
+case?</p>
+<p>We must go back to the classical case and understand this. There are many
+ways to return to the origin, but in the quantum case, each way comes with
+a phase, and these phases tend to cancel out. When you shoot out to the
+ends, you get constructive interference, whereas at the origin, you get
+destructive interference.</p>
+<p>Qualitatively, the quantum walk behaves very differently from the classical
+walk. Remember: in quantum mechanics, you don't have a notion of a
+trajectory.</p>
+<p>You can also define a continuous version of that by defining an appropriate
+Hamiltonian. The Hamiltonian you would need is exactly what you think: it's
+the Pauli matrix <mathjax>$\sigma_x$</mathjax>. We'll see how to write this behavior
+explicitly in a bit: this will result in the Dirac equation.</p>
+<p>So now, let's see how to apply some of these ideas about quantum
+walks. What are these useful for? Algorithms. This relates to a quadratic
+speedup in the simple case. Tells you if you were using random walks to
+design an algorithm, if you switched over to quantum, you'd get quadratic
+speedup. You do get quadratic speedup or much speedup for most of these.</p>
+<p>What's interesting is a quantum algorithm for formula evaluation: quadratic
+speedup for evaluating a boolean expression.</p>
+<p>Applications: minmax tree where values are zero and one. And is min, Or is
+max.</p>
+<p>This finds its use in quantum algorithms and other places. What I want to
+talk about today is Dirac's equation, which can be explained nicely with
+quantum walks.</p>
+<p>This was when he was reconciling quantum mechanics with special relativity,
+which also led him to the role of spin and the relativistic origins of
+spin. Also led him to his discovery of antimatter. This was really quite a
+remarkable event in physics, and what was interesting was that also, in all
+this was just how much courage Dirac had in terms of looking at this
+problem, finding this very simple equation, and having the courage to stand
+by something so new that had so many major consequences.</p>
+<p>So let's go back: we have a particle whose energy is classically
+<mathjax>$\frac{p^2}{2m}$</mathjax>. Quantumly, this is <mathjax>$\frac{\hat{p}^2}{2m}$</mathjax>, where <mathjax>$\hat{p}
+\equiv \frac{\hbar}{i}\nabla$</mathjax>. So now, let's try to understand
+relativistically that the energy is given by the Klein-Gordon equation
+(Einstein's theory of relativity, really, which arises from invariances of
+moving frames), which says that <mathjax>$E^2 = p^2c^2 + m^2c^4$</mathjax> (where <mathjax>$c$</mathjax> is speed
+of light), and <mathjax>$p^2c^2$</mathjax> is the kinetic form.</p>
+<p>If you try to figure out what this is saying, when a particle has speed
+much less than the speed of light, you can pull out <mathjax>$mc^2$</mathjax>, and you
+get <mathjax>$mc^2\sqrt{1 + \frac{p^2}{m^2c^2}}$</mathjax>. Since <mathjax>$\frac{p^2}{m^2c^2}$</mathjax> is small, you can
+approximate this as <mathjax>$mc^2(1 + \frac{p^2}{2m^2c^2})$</mathjax> (first-order Taylor
+expansion in <mathjax>$\frac{p^2}{m^2c^2}$</mathjax>). Expanding this, we get <mathjax>$mc^2$</mathjax> (energy
+associated with rest mass) added to <mathjax>$\frac{p^2}{2m}$</mathjax>. And that's exactly
+what you want: that is the total energy.</p>
+<p>So all is fine and well, and now, what Dirac was trying to do was figure
+out what the corresponding quantum equation is: <mathjax>$H^2 = \hat{p}^2 c^2 + m^2
+c^4 I$</mathjax> (where <mathjax>$I$</mathjax> is the identity). This is the square of the
+Hamiltonian. How do you compute the Hamiltonian itself? This is exactly
+what Dirac was trying to do: he was trying to compute the square root of
+this operator. You can try to compute square roots, use Taylor series; it
+blows up, and it doesn't really look like anything. And then he had this
+flash of insight.</p>
+<p>Let's use units where <mathjax>$c=1$</mathjax>. What we're trying to do is compute <mathjax>$H =
+\sqrt{p^2 + m^2 I}$</mathjax>, where both of these are operators. Here was the flash
+of insight: what he realized was if you wrote the Hamiltonian by doubling
+the dimension as <mathjax>$\begin{pmatrix}\hat{p} &amp; mI \\ mI &amp;
+-\hat{p}\end{pmatrix}$</mathjax>. So what happens when you square this? We get
+<mathjax>$\begin{pmatrix}\hat{p}^2 + m^2 I &amp; 0 \\ 0 &amp; \hat{p}^2 + m^2 I \end{pmatrix}$</mathjax>.</p>
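<p>You can verify the doubling trick numerically (treating <mathjax>$\hat{p}$</mathjax> as a scalar stand-in, which is fine here since it commutes with the constant mass term; the values are arbitrary):</p>

```python
import numpy as np

# Scalar stand-ins for p-hat and m.
p_val, m_val = 0.7, 1.3
# Dirac's doubled-dimension Hamiltonian.
H = np.array([[p_val, m_val], [m_val, -p_val]])
H2 = H @ H
# The square is (p^2 + m^2) times the identity: off-diagonal terms cancel.
assert np.allclose(H2, (p_val**2 + m_val**2) * np.eye(2))
```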
+<p>Has this feel about it that in trying to solve this especially difficult
+problem, and by stepping outside of what seem to be the rules of the game,
+it suddenly becomes very simple. And then instead of saying this is
+illegal, perhaps these actually are the correct rules of the game.</p>
+<p>And so what Dirac said was that there's an extra degree of freedom: it's a
+qubit -- spin. So now we can try to understand what this tells us about how
+the particle moves relativistically. If we write that out, we want to solve
+<mathjax>$i\pderiv{\Psi}{t} = H \Psi$</mathjax>. Let's do case 1: <mathjax>$m = 0$</mathjax>.</p>
+<p>So what's our state space? There's the position of the particle, and then
+there's spin. The Hamiltonian is going to act on the whole thing. What will
+this look like? In general, it's acting on the two spaces together. If you
+were to write out the state vector, it consists of the component with spin
+<mathjax>$b=0$</mathjax> and the component with spin <mathjax>$b=1$</mathjax>.</p>
+<p>With the massless particle, <mathjax>$\pderiv{\psi}{t} = -\nabla\psi$</mathjax>, so in the
+one-dimensional case, <mathjax>$\psi$</mathjax> is moving right with the speed of light. So
+there are two solutions, depending on your spin qubit: in one case, you're
+moving right at the speed of light, and in the other case, you're moving
+left with the speed of light.</p>
+<p>So what happens in general? You have this term that corresponds to motion
+left or right at the speed of light, and then you have the term that
+corresponds to the mass. The greater the mass, the more often you flip the
+coin. The presence of the mass term corresponds to a new direction after
+each coin flip, so to speak. Recall that we moved <mathjax>$ak$</mathjax> for some <mathjax>$a$</mathjax>. With
+no mass, <mathjax>$a$</mathjax> is <mathjax>$c$</mathjax>. As we increase the mass, <mathjax>$a$</mathjax> decreases.</p>
+<p>There's another thing you might think about: in the classical case, after k
+steps, you move <mathjax>$\sqrt{k}$</mathjax>. You can move one step per unit time right or
+left. But of course we want to let that unit tend to zero, so <mathjax>$k$</mathjax> tends to
+infinity.</p>
+<p>In fact, what Dirac did was work out this problem, and he worked out the
+statistics of this walk. He didn't call it a quantum walk, but that's what
+he implicitly did.</p>
+<p>So that's it regarding quantum walks and the Dirac equation. I'll now spend
+the next ten minutes talking how to further pursue quantum computing. If
+you have specific questions, feel free to ask.</p>
+<p>There are many different areas here in quantum computing, which is
+partitioned into three primary flavors of research: theory work -- consists
+of quantum algorithms (design), complexity theory (BQP = P?), information,
+crypto. The boundaries will vary depending on your source.</p>
+<p>Quantum information theory: quantum analog of classical information theory:
+how much classical information can I transmit in each qubit? What's the
+value of quantum information? And the second part: error correction and
+fault-tolerance. If you want to build a quantum computer, one thing you
+have to worry about is decoherence. Previously people thought that quantum
+states were not protectable: they degrade over time as the environment
+decoheres them. This is a way of putting this quantum information inside a
+sort of error-correcting envelope for protection. Even after being subject
+to errors, we can still recover the original message (a quantum state).</p>
+<p>Physically, you think about these errors as heating of the system (increase
+of entropy). What do you do? Take fresh qubits all initialized to zeros
+(supercooled), and do heat exchange. Take all the heat (in the form of
+errors), isolate it, and push it into these cold qubits. Now these errors
+are sitting inside these new qubits, which we can then discard --
+refrigeration. If you do that, that's called fault-tolerance.</p>
+<p>Cryptography we know about. There's one subject I haven't talked about:
+post-quantum cryptography. Quantum algorithms break most of modern crypto:
+RSA, Diffie-Hellman, you name it. What we'd like to do is restore
+public-key cryptography, and eventually use private-key cryptography. You
+could use quantum cryptography, but even though there are existing systems,
+they're very difficult to use.</p>
+<p>There's a different way out: how about designing public-key cryptosystems
+which quantum computers cannot break? Very active field of research. If
+you think about it, this may be the biggest practical impact of quantum
+computation in the very near future (next few years). Would be very
+interesting if that happens; it's not the quantum computers that would have
+the impact, but rather the threat of quantum computers.</p>
+<p>Two other fields are simulating quantum systems: the starting point for
+quantum computation, to simulate this, we normally would need an
+exponential amount of work to simulate. So how would you simulate quantum
+systems efficiently on a quantum computer? There are interesting results
+here: how would you, for instance, run the quantum Metropolis algorithm?
+The second question: how to simulate on a classical computer. Here, the
+question is not how to simulate general systems, but rather, could it be
+that natural quantum systems are somehow much simpler and can be simulated
+very efficiently, even if they are highly-entangled? There is a renaissance
+of information in this field.</p>
+<p>Special systems can be solved explicitly (closed-form). Those can be
+simulated on a classical computer. There are many others that can't be
+solved explicitly, but they have certain properties that we can take
+advantage of.</p>
+<p>And finally, there is a lot of work on experimental quantum
+computing. Various techniques.</p>
+<p>In terms of resources: Haffner teaches a course on quantum computing from a
+more experimental viewpoint. In terms of the physics department, there's
+Haffner, Clarke (superconducting qubits), Stamper-Kurn (Bose-Einstein
+condensates, optical systems), Siddiqi, Crommie. In the chemistry
+department, there's Whaley, who might teach this course next year. There's
+also a colleague in computer science: Kubiatowicz -- quantum architecture.</p>
+<p>How to put together? Let's say that three years from now, ion traps work
+and scale to a few hundred qubits. Now how would you put them together to
+do interesting computations? What are the architectural tradeoffs: should
+you keep qubits close or teleport, how to manage ECC, etc.</p>
+<p>Few places other than Berkeley; Waterloo (IQC, perimeter institute),
+Caltech, MIT, U. Maryland -- growing group, CQT in Singapore (which
+actually offers a degree in QC). Number of resources. And then there are
+groups in Europe which are very strong: Munich, Germany, Paris. There are
+lots of resources internationally, but also there are plenty of
+opportunities for summer schools if interested, or graduate work in the
+field.</p>
+<p>If you want any more information, you can email or stop by.</p></div><div class='pos'></div>
<script src='mathjax/unpacked/MathJax.js?config=default'></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
291 ee105.html
@@ -1498,7 +1498,296 @@
<p>Very quickly: single-pole system. <mathjax>$A = \frac{A_0}{1 + \frac{s}{\omega_p}}$</mathjax>,
so <mathjax>$\frac{V}{X} = \frac{A}{1 + AK} = \frac{A_0}{1 + \frac{s}{\omega_p} +
A_0K}$</mathjax>. Put this back into original form, etc. Pole moves. With feedback,
-pole moved to higher frequency.</p></div><div class='pos'></div>
+pole moved to higher frequency.</p>
+<p><a name='31'></a></p>
+<h1>EE 105: Devices &amp; Circuits</h1>
+<h2>Wednesday, April 11, 2012</h2>
+<li>circuits poles move</li>
+<li>stability and amplitudes</li>
+<p>Nice mathematical abstraction of feedback with an amplifier with open-loop
+gain given by A, and a feedback factor K, and loop gain AK. If we close the
+loop, then we know that we get a closed-loop gain of <mathjax>$\frac{A}{1+AK}$</mathjax>. For
+<mathjax>$\abs{AK} \gg 1$</mathjax>, this is approximately <mathjax>$\frac{1}{K}\parens{1 -
+\frac{1}{AK}}$</mathjax>.</p>
+<p>Turns out the analysis we do for amplifiers is the same analysis we need to do for
+oscillators. Circuits don't quite match this.</p>
+<p>So let's do a little math, and example where I mess up and am too busy
+thinking about the beautiful math. Single-pole amplifier with <mathjax>$A(s) =
+\frac{A_o}{1 + s/\omega_{p0}}$</mathjax>. CLG is <mathjax>$\frac{A}{1 + AK} = \frac{A_0}{1 +
+A_0 K + s/\omega_{p0}} = \frac{A_0}{1 + A_0 K} \frac{1}{1 + s /
+\omega_{p,cl}}$</mathjax>. DC gain is the same, and then we've got some
+frequency-dependent component. So we have <mathjax>$\omega_{p,cl} = (1 + A_0
+K)\omega_{p0}$</mathjax>. Pole has moved: depends on how much feedback we provide. In
+particular, if the amount of feedback I put into this thing is 0, then I
+get the right answer.</p>
+<p>So if we look at (the magnitude of) the open-loop transfer function, we
+have this <mathjax>$\omega_{u0} = A_0 \omega_{p0}$</mathjax>. And now, as I add feedback, this
+should not move.</p>
+<p>Mathematically, as we change the loop, the unity-gain frequency does not change.</p>
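A quick numerical sketch of this invariance (the gain and pole values below are assumed for illustration, not taken from the lecture): the closed-loop pole moves out by $(1 + A_0 K)$ while the DC gain drops by the same factor, so the magnitude at $\omega_u \approx A_0\omega_{p0}$ stays at one.

```python
import numpy as np

# Single-pole amplifier A(s) = A0/(1 + s/wp0) in feedback with factor K.
# A0 and wp0 are assumed, illustrative values.
A0, wp0 = 1000.0, 2 * np.pi * 1e3

def closed_loop(K):
    """Return (DC gain, pole frequency) of A/(1 + A*K)."""
    return A0 / (1 + A0 * K), (1 + A0 * K) * wp0

def mag(w, dc_gain, wp):
    """|H(jw)| of a single-pole response."""
    return dc_gain / np.hypot(1.0, w / wp)

# Unity-gain frequency wu ~ A0*wp0 should not move as K changes.
wu = A0 * wp0
for K in (0.0, 0.01, 0.1):
    dc, wp = closed_loop(K)
    print(K, dc, mag(wu, dc, wp))   # magnitude at wu stays ~1 for every K
```

The gain-bandwidth product `dc * wp` is constant, which is another way of stating the same fact.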
+<p>We know from Miller that we can write the equivalent linear circuit with
+input and output loads.</p>
+<p>Calculate <mathjax>$G_m, R_o$</mathjax> the usual way (small signal model). Math ran into real
+world: taking last lab into account, unity gain frequency moved. Mistaken
+assumption was that the resistor impedance stays small at all
+frequencies. Impedance goes up with frequency, and all of a sudden, <mathjax>$r_\pi$</mathjax>
+is not negligible any more. Initially 7k, and I'm driving a 7k with a 100k
+resistor. cut effective <mathjax>$G_m$</mathjax>.</p>
+<p>Now let's look at your next lab, where we'll build a ring
+oscillator. Feeding back to the input and trying to turn this thing into an
+amplifier. Put a 100k resistor in the loop, each stage has an explicit
+100pF capacitance on the input. Start with <mathjax>$R_{fb} = 0$</mathjax>, put load on this
+capacitance. Then we can figure stuff out from <mathjax>$V_{in}$</mathjax> to <mathjax>$V_{fb}$</mathjax>.</p>
+<p>We're going to get three frequencies out of these. Initially they're
+identical, and we're going to get an open-loop gain <mathjax>$A = \parens{\frac{G_m
+R_o}{1 + s/\omega_{p1}}}^3$</mathjax>. All the poles line up at the same place.</p>
+<p>Gets bigger each loop until saturation. Mathematical criteria for
+stability: <mathjax>$\abs{H(j\omega)} &lt; 1, \angle H(j\omega) = 0$</mathjax>. Phase margin:
+<mathjax>$\angle H(j\omega)$</mathjax>. If that's 360, we're doomed. <mathjax>$360 - \angle H(j\omega)$</mathjax>
+tells you how close to instability you are. Then there's gain margin, which
+is <mathjax>$H(j\omega_{360})/1$</mathjax>. People get very nervous when your phase margin is
+less than <mathjax>$45^\circ$</mathjax>. So this thing is going to oscillate, and that's what
+you found. (when phase margin dips below 0, you oscillate)</p>
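A sketch of that stability check for the three-stage ring (the per-stage gain and pole below are assumed values, not the lab's): the three coincident poles contribute $60^\circ$ each at $\omega = \sqrt{3}\,\omega_p$, which together with the three inversions brings the loop phase to $360^\circ$; the loop then oscillates if the gain there, $(a_0/2)^3$, exceeds one.

```python
import numpy as np

# Loop gain of three identical inverting stages:
# H(jw) = (a0/(1 + jw/wp))**3, plus 180 deg from the odd number of
# inversions.  a0 and wp are assumed, illustrative values.
a0, wp = 10.0, 2 * np.pi * 1e3

def loop(w):
    return (a0 / (1 + 1j * w / wp)) ** 3

# Each pole gives 60 deg at w = wp*tan(60 deg) = sqrt(3)*wp, so the
# poles supply the remaining 180 deg of loop phase there.
w_osc = np.sqrt(3) * wp
gain_at_osc = abs(loop(w_osc))      # equals (a0/2)**3 = 125 here
print(gain_at_osc)                  # > 1, so this ring oscillates
```

With a single stage there is only one pole, which can never accumulate the extra $180^\circ$, matching the remark below that one inverter will not oscillate.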
+<p>So how to keep this an oscillator? (generally used for clocks; stick
+through handful of inverters afterward and measure that final output)
+Current source: have some reference current with resistor up to rail
+(setting ref. voltage), and now you have some transistors whose drains are
+all tied together and have particular values of <mathjax>$W/L$</mathjax>. We control the bias
+voltage with more MOSFETs or whatever. Lots of advice to take 142 and
+<p>system absolutely unstable, etc. How do we guarantee small gain? Get rid of
+poles or stuff. Single inverter will not oscillate. Not enough poles to get
+you to 360.</p>
+<p><a name='32'></a></p>
+<h1>EE 105: Devices &amp; Circuits</h1>
+<h2>Monday, April 16, 2012</h2>
+<p>slew rate: trying to drive an <mathjax>$8\Omega$</mathjax> speaker. Probably want to drive
+this with a DC-block capacitor. In systems like this, you generally want
+AC-coupling so you don't have to worry how things are inserted. You also
+want this to work for audio. C should be blocking. 20Hz. For lab, only have
+to go down to 1kHz. May use 10 or 100 <mathjax>$\mu F$</mathjax>. Why is this bad? Output
+stage is a cascode (sort of) with capacitive load.</p>
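The coupling-cap choice can be checked directly with the high-pass corner $f_c = 1/(2\pi R C)$; the 8 Ω load and the 10/100 µF options are from the notes, and the comparison against the 1 kHz lab spec is the point of "why is this bad?".

```python
import math

# High-pass corner of the DC-blocking cap into the 8-ohm speaker.
def corner(R, C):
    return 1.0 / (2 * math.pi * R * C)

print(corner(8.0, 10e-6))    # ~2 kHz: above even the relaxed 1 kHz spec
print(corner(8.0, 100e-6))   # ~199 Hz: meets 1 kHz, nowhere near 20 Hz
```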
+<p>If you've got a sine wave on this thing at some DC offset of 1-5V, you know
+that what you'd like to see is the output tracking the input with a diode
+drop. In fact, that works fine for low amplitudes at low frequencies.</p>
+<p>But at high amplitudes / frequency, what you see is this thing called the
+slew rate (looks like capacitor action). What's happening? Current flowing
+across this capacitor is equal to <mathjax>$C\omega V_0 \cos(\omega t)$</mathjax>. The
+magnitude of the current is simply <mathjax>$\omega C V_0$</mathjax>. If f is 160 Hz and C is
+1 <mathjax>$\mu F$</mathjax>, then <mathjax>$\omega = 10^3$</mathjax>, <mathjax>$\omega C = 10^{-3}$</mathjax>, so you're going to
+get 1 mA of current that you need at the peaks of these sine waves. But if
+you take this up to <mathjax>$f = 1 kHz, C = 10 \mu F$</mathjax>, you'll need <mathjax>$60 mA$</mathjax>. etc.</p>
+<p>What you see on the scope is that output tracks just fine on the way
+up. No problems with emitter follower transistor. On its way up, if this
+voltage is lagging, it only takes 60 mV to give 10x the current. More than
+happy to source extra current. But once you hit the peak, current coming
+down, and you need to discharge capacitor with that current. And that's why
+with this setup you end up with a slew rate <mathjax>$\deriv{V}{t}\Big|_{max} =
+\frac{I}{C}$</mathjax>.</p>
+<p>Bunch of tricks people have come up with over the years. Problem here is
+that we have a hard time pulling down. So what if we make a CE or CC with a
+PNP transistor? On the negative-going transitions, this thing will happily
+pull much more current. But it'll have problems with the positive-going.</p>
+<p>So combine the two, make an NPN follower to bring this voltage up and a PNP
+follower to bring this voltage down. Tie these together, and this is our
+<mathjax>$V_{in}$</mathjax>. Challenge: if you look at <mathjax>$I_{out}$</mathjax> vs. <mathjax>$V_{in} - V_{out}$</mathjax>, you
+know that we have a dead zone of two diode drops, i.e. when these things
+turn on/off. SO what do we want to do to fix this?</p>
+<p>Ideally you put a battery in there that biases these two bases far enough
+apart that they're always on. How do you do that? Turns out, lots of other
+ways to do a voltage source.</p>
+<p>Simplest: just put two diodes in there. Trying to compensate for the diode
+drops. Want some current flowing through, so just put a resistor before the
+diodes. Conceptually, this will do. Could put input anywhere, really, and
+the output really would track the input.</p>
+<p>This is one of the problems with the lab: driving low impedance with
+output. Driving high-impedance source. If this were actual electret, this
+would have huge impedance. But inside this can, there's already a JFET to
+lower the impedance.</p>
+<p>Two issues: slew rate; impedance is basically 8<mathjax>$\Omega$</mathjax>. Somehow you've got
+to get from k<mathjax>$\Omega$</mathjax>s to <mathjax>$\Omega$</mathjax>s. Put in a follower. Key: understand
+what you're adding and why. Perfectly fine as long as you get reasonable
+performance. Hand calculations match SPICE calculations match
+measurements.</p>
+<p>One thing you might put in there is another CC, which magnifies impedance
+by that <mathjax>$1+\beta$</mathjax> factor. Darlington pair.</p>
+<p>As always, at the input to the amplifier, we have a magical subtracter box
+for now. We'll eventually show how to make this with transistors
+(differential pairs).</p>
+<p>Unity-gain feedback can be used to manipulate what an output impedance
+looks like. Should not be a surprise: we already knew this because we knew
+that with that same configuration where we had some load capacitance, we
+got a system with an open-loop gain at 0 and a pole at <mathjax>$\frac{1}{R_o
+C_L}$</mathjax>. When we wrap feedback around it, this moves to <mathjax>$\omega_{p,cl} = (1 +
+A_0 K)\omega_{p,ol}$</mathjax>.</p>
+<p>So that's a very good thing: for a voltage amplifier, feedback lowers your
+output impedance, potentially dramatically.</p>
+<p>So how about the input impedance? This is a little trickier in that it's
+doing some handwaving regarding how the feedback actually happens.</p>
+<p>Want to get high gain because you can always trade it for good things in
+circuits: low stable gain, etc. The only caveat is that this is a real
+amplifier, so there's only so much swing you can get.</p>
+<p>Magical subtracter box! We'd like <mathjax>$V_D = V_+ - V_-$</mathjax>. If they're both going
+up and down together, we'd like nothing to happen.</p>
+<p>Works because of linearization around our operating point.</p>
+<p>Take current and turn into voltage by putting in resistors. And now you've
+got a differential voltage across here.</p>
+<p>Similar to common emitter: how is this different? The change on either side
+will set the other rail. Another way to look at this: we're rejecting the
+common mode.</p>
+<p><a name='33'></a></p>
+<h1>EE 105: Devices &amp; Circuits</h1>
+<h2>Wednesday, April 18, 2012</h2>
+<p>Output impedance, slew rate. Feedbacks, op amps, digital.</p>
+<p>From lab: trying to derive load with resistive component, passive
+component. Bunch of issues here: R is small (8 ohm), output resistance much
+smaller than load resistance means burn current.</p>
+<p>Current limits: <mathjax>$I = \omega C V_{swing}$</mathjax>. Pick C such that impedance small
+compared to R.</p>
+<p>Cleverness. Cleverness is in short supply. It's nice to have the ability to just throw
+op amps at problems instead of cleverness.</p>
+<p>What if we had some way to measure output voltage and compare to input
+voltage and use to affect bias voltage on transistor? Would be nice if
+we could somehow measure output voltage relative to input, gain it up
+(positive) and apply to <mathjax>$V_{BE1}$</mathjax>.</p>
+<p>Turns out: negative feedback. Doesn't fit nicely into picture we like of
+summing junction, etc, but it turns out most circuit feedback doesn't (at
+transistor level). If you take 140, you learn what true pain is. Until you
+start using op amps. Which is an abstraction.</p>
+<p>Something that lets me drive with small current in small-signal
+<p>Still have problem slewing north, but slewing south is fine. Before we had
+the opposite problem.</p>
+<p>Turns out when you're slewing, linearity of drive not good; going to sound
+distorted. </p>
+<p>More talk about feedback.</p>
+<p>Op amps!</p>
+<p>Last time we learned about differential amplifiers. Now we'll put that to
+use. There's no precise definition of what an op amp is. But roughly, there
+are three parts: differential amplifier (where the goal is to separate the
+common mode from the difference of the signal), gain stage, and maybe an
+output stage. Sometimes you see amplifiers drawn as trapezoids, which just
+means that it's missing the output stage: could be a very large output
+impedance.</p>
+<p><a name='34'></a></p>
+<h1>EE 105: Devices &amp; Circuits</h1>
+<h2>Monday, April 23, 2012</h2>
+<p>Operational amps; oscillated at 800Hz in speaker-to-microphone
+feedback. Ideal op amp: is differential amplifier with gain. Infinite input
+impedance, no output impedance, etc. Doesn't care where output goes; no
+current limits. All things that real op amps don't share with ideal op
+amps. We're not going to get into the details of designing op amps, but
+it's a nice circuit: uses things you've learned all semester
+long. Relatively easy to make and buy.</p>
+<p>If we look at the simplest op amp, you have to have a differential pair,
+and you have to have some sort of current flowing through that differential
+pair. We know from last lecture if the common mode voltage is the same, as
+they move up and down, the emitter moves up and down. Depending on the
+resistor, your current stays relatively constant. So this gives us some
+common-mode rejection. Slight changes of this (if you draw the SSM) --
+virtual ground. You get a differential current (one side is +, other is
+-). Keeping current constant. Simplest way to turn this into a voltage is
+to make it single-sided, and that is an op amp.</p>
+<p>So how do we find the gain? We have to find the operating point. For this
+particular circuit, we need to know what the inputs are at. Our operating
+point, therefore, is one <mathjax>$V_{BE}$</mathjax> drop below the DC bias at 0. Tail
+resistance. Named by people doing tubes. Tail current: <mathjax>$\frac{V_{EE} -
+V_{BE} - V_{cm}}{R_{tail}}$</mathjax>. So <mathjax>$g_{m1}$</mathjax> is <mathjax>$\frac{I_c}{V_T}$</mathjax>. <mathjax>$R_{out}
+\approx R_c$</mathjax>, and <mathjax>$G_m$</mathjax> ends up being <mathjax>$\frac{g_{m1}}{2}$</mathjax>.</p>
+<p>Gain ends up being around 50.</p>
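Plugging assumed bias numbers into those formulas (the $I_C$ and $R_C$ values below are illustrative, chosen to land near the lecture's answer; $V_T \approx 26$ mV is standard):

```python
# Small-signal gain of the resistively loaded differential pair, per
# the notes: gm1 = Ic/VT, Gm = gm1/2, Rout ~ Rc, gain = Gm*Rout.
VT = 0.026          # thermal voltage, ~26 mV at room temperature
Ic = 260e-6         # assumed collector bias current
Rc = 10e3           # assumed collector resistor

gm1 = Ic / VT       # ~0.01 S
Gm = gm1 / 2        # factor of 2 from taking a single-ended output
gain = Gm * Rc
print(gain)         # ~50, the "around 50" from the lecture
```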
+<p>So this thing actually does work as an op amp. If I wanted to make a
+follower or unity-gain buffer out of it,</p>
+<p>Stability? Depends on gain at <mathjax>$\omega_{360}$</mathjax>. If there is no
+<mathjax>$\omega_{360}$</mathjax>, then we don't have to worry about this. This particular
+case is a single-pole system (to a good approximation).</p>
+<p>Traditional thing to do in tube amplifiers: -500V (really big volts).</p>
+<p>This ends up getting an output impedance of about 200 ohms, which is not
+ideal, but still better than 10k.</p>
+<p>So let's add a second stage.</p>
+<p>Reduces gain, but eh. Usually see follower.</p>
+<p>Small signal does poor job approximating: well out of region of approximation.</p>
+<p>Let's suppose we have an amplifier, and we put it in feedback. Assume gain
+is 1000, output resistance is just <mathjax>$R_c = 1k$</mathjax>. This stuff here is all about
+creating K, the feedback factor. So open-loop, this is <mathjax>$K=0$</mathjax>. AK is about
+10, so closed-loop gain is about 90, output resistance is about 100. If K
+is 0.1, closed-loop gain is about 10, output resistance is about 10.</p>
+<p>With unity feedback (K = 1), closed-loop gain is about 1, output resistance is
+about 1.</p>
+<p><mathjax>$A_{CL} \approx \frac{1}{K}$</mathjax>, <mathjax>$R_o \approx \frac{R_{out}}{1 + A_0 K}$</mathjax>.</p>
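A sketch of those closed-loop formulas with the paragraph's numbers ($A_0 = 1000$, $R_{out} = R_c = 1\text{k}$); the loop gain $A_0 K$ divides both the gain and the output resistance.

```python
# Closed-loop gain and output resistance versus feedback factor K.
A0, Rout = 1000.0, 1000.0   # the notes' A0 = 1000 and Rc = 1k

def closed_loop(K):
    return A0 / (1 + A0 * K), Rout / (1 + A0 * K)

for K in (0.01, 0.1, 1.0):
    print(K, closed_loop(K))   # both quantities shrink by (1 + A0*K)
```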
+<p>So this can drive a speaker pretty well. So what does it look like when
+it's driving it? Fine for small signals. If you think about that output
+stage, drawing just part of this, with no bias on the input, and I've got
+my feedback, then it turns out the way this thing is set up is such that I
+get an output bias of a couple of volts. Pretty close to zero. With no
+input, I'm going to have something like 6 mA flowing.</p>
+<p>Talk about slew. Can slew fast in positive direction, but in negative
+direction, stuck with bias current through <mathjax>$R_C$</mathjax>.</p>
+<p>That's why the next step is to take that amplifier that we have and have it
+drive two emitter followers.</p>
+<p><a name='35'></a></p>
+<h1>EE 105: Devices &amp; Circuits</h1>
+<h2>Wednesday, April 25, 2012</h2>
+<p>Next week: lecture is just going to be review, examples, Q&amp;A. If we run out
+of Q's or A's, we'll quit. Comprehensive final. Stuff certainly on
+capacitance versus applied voltage.</p>
+<p>Will be a homework next week to throw in all the things we haven't had
+homework on.</p>
+<p>Not an op amp class, so all we need to know is that they're made of common
+emitters and common collectors and things like that. Might do some good to
+see that people design real products using these tools that we now know.</p>
+<p>Some stuff quite subtle and not covered at all.</p>
+<p>3T, 5T op-amp. On some problems, we lost half of our transconductance. Lose
+gain <mathjax>$R_1 \parallel r_{\pi3}$</mathjax> (not a big deal yet, but will be an issue
+when we add a current source) -- costs us a factor of 2 for now. Bias
+currents depend on common mode (CM) input and output voltage and supply.</p>
+<p>On the output stage, there's a very nasty deadband on the output, and
+there's no over current protection on the output. On the LM324, they made
+the output stage even worse: got current source -- NPN common emitter stuck
+into PNP and also an NPN with a Darlington on the NPN. The problem over
+here is that if the output is at 0, we've got a range of <em>three</em> diode
+drops that does nothing. Quad op amp. Power, ground, plus, minus for 4 op
+amps. Nine cents in volume.</p>
+<p>One interesting thing on this guy: they've got a current source feeding the
+gate of the Darlington pair (100 uA), so they've got <mathjax>$\beta^2$</mathjax>. If my input
+goes low, then all the current is going to flow through the Darlington pair
+and get multiplied by <mathjax>$\beta^2$</mathjax>. Huge amount of current dropping across big
+voltage. (potentially at 30V). Literally can destroy transistors.</p>
+<p>So what do you do to stop that? Very common they'll put in a sense
+resistor, so now you have a voltage proportional to the current flowing
+through the output, and so if the current gets too high, I steal some of
+the output current.</p>
+<p>Thermal runaway: <mathjax>$I_c \propto I_s(\tau)$</mathjax>, more or less.</p>
+<p><mathjax>$\beta$</mathjax>-helper: try to minimize current being stolen from <mathjax>$I_{ref}$</mathjax>. Going
+to have to have <mathjax>$I_B$</mathjax> flowing into the node. Therefore current flowing that
+way is <mathjax>$\frac{NI_B}{\beta}$</mathjax>, which is basically nothing.</p>
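A back-of-envelope comparison of the error current stolen from $I_{ref}$, with and without the helper ($\beta$, $N$, and $I_{ref}$ below are assumed values):

```python
# Base-current error in an N-output BJT mirror.  Without the helper the
# bases steal N*I_B from I_ref; the beta-helper divides that by beta.
beta, N, I_ref = 100.0, 4, 1e-3   # assumed values

I_B = I_ref / beta                # each mirror device's base current
no_helper = N * I_B               # ~4% of I_ref here
with_helper = N * I_B / beta      # ~0.04%: "basically nothing"
print(no_helper / I_ref, with_helper / I_ref)
```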
+<p>Can turn any one of these and turn into a source by just mirroring it:
+diode connect a transistor on the other end so it has a current of
+<mathjax>$I_{ref}$</mathjax>, and I just do the same thing.</p>
+<p>MOS case. Now we don't need a <mathjax>$\beta$</mathjax>-helper because we don't have a
+current. Multiple copies, mostly same thing. Except now you change the
+<mathjax>$\frac{W}{L}$</mathjax> (as opposed to area). Don't want to change <mathjax>$L$</mathjax>, since that's
+coupled to nonlinear device behavior.</p>
+<p>We normally lose half of our <mathjax>$g_m$</mathjax>.</p>
+<p>What I really want is a current mirror. If I put in a PNP current mirror
+and mirror one side's current over to the other side, now I recover my
+original current, <em>and</em> this thing self-biases. Standard 2-stage op amp.</p>
+<p>Got a differential pair that gives you a current proportional to difference
+between two signals, stick that into an active load current mirror, and now
+this thing has very high gain (as long as the next stage has high
+gain). Stick that into a common-emitter amplifier to get another gain
+stage, and maybe you put some gain after it. These are driven by current
+sources (which we know how to make -- diode-connected transistor and a
+resistor), and that is your classic two-stage op amp.</p>
+<p>Works with MOS as well: differential input stage, current mirror as a load,
+and common source gain stage. Both of these want current sources as loads.</p>
+<p>The nice thing about both of these is that you just wire them up and they
+work. Might not work well (point of 140).</p>
+<p>Miller effect helps. Pole-splitting, beautiful thing. Outside of the scope
+of this class. So that's it for op amps.</p></div><div class='pos'></div>
<script src='mathjax/unpacked/MathJax.js?config=default'></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
181 ee120.html
@@ -286,19 +286,25 @@
<p>Fourier analysis today. Some of this will be review, but we'll dive
more deeply into the linear algebra in this class.</p>
- * Fourier Analysis
+<li>Fourier Analysis
"A way of decomposing signals into their constituent frequencies.
Kind of like the way a prism splits light into its components.
- Fourier analysis gives us the tools we need to analyze signals."
- + Periodic [ in time domain. ]
- - Discrete time, discrete time Fourier series.
- - Continuous time, continuous time Fourier series.
- + Aperiodic
- - Discrete-time Fourier transform.
- - Continuous-time Fourier transform
+ Fourier analysis gives us the tools we need to analyze signals."</li>
+<li>Periodic [ in time domain. ]<ul>
+<li>Discrete time, discrete time Fourier series.</li>
+<li>Continuous time, continuous time Fourier series.</li>
+<li>Discrete-time Fourier transform.</li>
+<li>Continuous-time Fourier transform
We can put blocks around these. Discrete-time signals are periodic
- in the frequency domain.</p>
+ in the frequency domain.</li>
<p>Before I do that, I want to review an abstraction that we use for
signals (and the way we look at signals) as vectors.</p>
<h1>Periodic DT Signals</h1>
@@ -1553,8 +1559,8 @@
<p>These are our eigenvalues. Some linear combination of these two
exponentials will give us our initial conditions (i.e. <mathjax>$y(0) = 1$</mathjax>, <mathjax>$y(-1) =
-y(-2) = 0$</mathjax>). That is, <mathjax>$y = a_0 \parens{-\frac{1}{2}}^n + a_1 \parens{-\frac{1}{3}}^n$</mathjax>
+y(-2) = 0$</mathjax>). That is, <mathjax>$y = a_0 \parens{-\frac{1}{2}}^n + a_1
+\parens{-\frac{1}{3}}^n$</mathjax>.</p>
<p><a name='21'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 10, 2012.</h2>
@@ -1622,7 +1628,156 @@
e^{-at} u(t) \ltrans \frac{1}{s+a} (-\Re(a) &lt; \Re(s))
\\ -e^{-at} u(-t) \ltrans \frac{1}{s+a} (\Re(s) &lt; -\Re(a))
-$$</mathjax></p></div><div class='pos'></div>
+<p><a name='23'></a></p>
+<h1>EE 120: Signals and Systems</h1>
+<h2>April 17, 2012.</h2>
+<h2>Differentiation property:</h2>
+x(t) \ltrans \hat{X}(s)
+\\ \dot{x}(t) \ltrans s\hat{X}(s)
+<p><mathjax>$y^{(N)} + ... + a_1 y^{(1)}(t) + a_0 y(t) = b_M x^{(M)} + ... + b_0 x(t)$</mathjax>. What
+I want you to do is apply the differentiation property to find the transfer
+function of this. (The polynomial in the x-coefficients divided by the
+polynomial in the y-coefficients.)</p>
+<p><mathjax>$\frac{\sum_m b_m s^m}{\sum_n a_n s^n}$</mathjax>.</p>
+<p>Going back to a series-RC circuit powered by a voltage source, we have
+<mathjax>$z(t) = \frac{x(t) - y(t)}{R}$</mathjax>, <mathjax>$C\dot{y}(t) = z(t)$</mathjax>. So <mathjax>$RC\dot{y}(t) +
+y(t) = x(t)$</mathjax>. The transfer function therefore is <mathjax>$\frac{1}{RCs +
+1}$</mathjax>. The other way is to plug in <mathjax>$e^{st}$</mathjax> and use the eigenfunction property.</p>
+<p>Inverting this transform yields <mathjax>$\frac{1}{RC}e^{-t/(RC)}u(t)$</mathjax>. That is the
+impulse response of the system.</p>
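A numerical sanity check of the pair $h(t) = \frac{1}{RC}e^{-t/RC}u(t) \leftrightarrow \frac{1}{RCs+1}$, evaluated on the imaginary axis ($R$ and $C$ below are arbitrary assumed values):

```python
import numpy as np

# Compare a numerical transform of h(t) against 1/(RCs + 1) at s = jw.
R, C = 1e3, 1e-6                      # assumed values; RC = 1 ms
t = np.linspace(0.0, 0.05, 200001)    # ~50 time constants
dt = t[1] - t[0]
h = (1.0 / (R * C)) * np.exp(-t / (R * C))

def H_numeric(w):
    """Riemann-sum approximation of the transform at s = j*w."""
    return np.sum(h * np.exp(-1j * w * t)) * dt

w = 2 * np.pi * 100
print(H_numeric(w))                   # close to 1/(1 + j*w*R*C)
```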
+<p>So that was differentiation in time. There is differentiation in the
+s-domain. </p>
+<h2>Differentiation in s</h2>
+x(t) \ltrans \hat{X}(s)
+\\ -t x \ltrans \deriv{\hat{X}}{s}
+<p><mathjax>$x(t) = \frac{1}{2\pi i} \oint \hat{X}(s) e^{st} ds$</mathjax></p>
+<p><mathjax>$te^{-at}u(t) \ltrans \frac{1}{(s+a)^2}$</mathjax>.</p>
+<p>Conjecture: terms of the form <mathjax>$t^n e^{-at} u(t)$</mathjax> and their anticausal
+counterparts are the only kinds that can be combined (subject to matching
+RoCs) to produce rational transforms. This means that the impulse response
+of any rational transfer function must be the sum of these terms.</p>
+<p>In differential equations, you studied simple and multiple roots (which
+correspond to simple/multiple poles in our vernacular).</p>
+<p><mathjax>$s + 1 + 1/(s+3)$</mathjax></p>
+<p>(unit doublet)</p>
+<p>on one side, you have delta, step, ramp, quadratic. On the other side,
+you've got a doublet, second derivative of delta, etc. Delta is <mathjax>$u_0(t)$</mathjax>,
+doublet is <mathjax>$u_1(t)$</mathjax>, step is <mathjax>$u_{-1}(t)$</mathjax>, etc.</p>
+<p>If not strictly proper, we have a polynomial in s.</p>
+<h2>Method 1: non-transform method</h2>
+<p>If delta goes into the system, what comes out? <mathjax>$h$</mathjax>. If the unit step goes
+in, we get <mathjax>$u*h$</mathjax>.</p>
+<h2>Method 2: transform method</h2>
+<p>partial fractions and stuff. Consistent with result of method 1.</p>
+<h2>Integration in time/transform domain</h2>
+<p>Just relabel variables, and it becomes self-evident.</p>
+x(t) \ltrans \hat{X}(s)
+\\ \int x dt^\prime \ltrans \frac{1}{s}\hat{X}(s)
+<h1>Steady-State &amp; Transient Response of LTI Systems</h1>
+<p>Exactly the same as expected. Note that the second one dies out because of
+the pole of the system.</p>
+<p>With BIBO-stable system, input pole to right of rightmost pole of system
+dominates output.</p>
+<p><a name='24'></a></p>
+<h1>EE 120: Signals and Systems</h1>
+<h2>April 19, 2012.</h2>
+<p>Transient/Steady-State Wrap up</p>
+<p>Let's talk a bit about a causal BIBO-stable system. Which is usually the
+case with practical applications. Has a rational transfer function, so
+usually ratio of two polynomials in <mathjax>$s$</mathjax>. Not going to be too concerned
+about zeros of system, so we'll write the factored denominator in terms of
+the poles of the system.</p>
+<p>Assume all poles are simple. All poles are in left half-plane. Also, assume
+transfer function is strictly proper.</p>
+<p>To this system, I apply a one-sided (causal) complex exponential
+signal. What is the output?</p>
+<p>transforms and multiplications.</p>
+<p>Eigenfunction property (plus other stuff?!).</p>
+<p>True for any BIBO-stable function: you can evaluate the Laplace transform
+on the <mathjax>$i\omega$</mathjax> axis and get the Fourier transform at that particular
+frequency.</p>
+<p>What happens to all the terms involving the Rs? These, collectively,
+compose your transient response. The last term (result from input)? Doesn't
+die out. Steady-state.</p>
+<p>What this says is that the system cannot distinguish between <mathjax>$e^{i\omega_0
+t}$</mathjax> and its truncated cousin <mathjax>$e^{i\omega_0 t}u(t)$</mathjax> if we wait long enough:
+i.e. transients become insignificant. Only portion of response that remains
+is the one corresponding to <mathjax>$e^{i\omega_0 t}$</mathjax>. Notice that the pole of the
+input is to the right of the rightmost pole of the system.</p>
+<p>Important: all poles of the system are in the left half-plane, and the pole
+of the input is on the <mathjax>$i\omega$</mathjax> axis, which means it's to the right of the
+rightmost pole (and of course the system is causal). Therefore the pole of
+the input will dominate the response.</p>
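For a concrete first-order example (with assumed $a$ and $\omega_0$), take $h(t) = e^{-at}u(t)$, pole at $-a$ in the left half-plane. Convolving with the truncated exponential $e^{i\omega_0 t}u(t)$ gives $y(t) = \frac{e^{i\omega_0 t} - e^{-at}}{a + i\omega_0}$: a decaying transient plus the steady state $H(i\omega_0)e^{i\omega_0 t}$.

```python
import numpy as np

# a and w0 are arbitrary assumed values; a > 0 puts the system pole in
# the left half-plane, the input pole sits on the imaginary axis.
a, w0 = 5.0, 2 * np.pi

def y(t):
    """Closed-form response to exp(i*w0*t)*u(t) through h = exp(-a*t)*u(t)."""
    return (np.exp(1j * w0 * t) - np.exp(-a * t)) / (a + 1j * w0)

H_w0 = 1.0 / (a + 1j * w0)            # transfer function at s = i*w0

t_late = 10.0                         # many time constants later
print(abs(y(t_late) - H_w0 * np.exp(1j * w0 * t_late)))  # transient is gone
```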
+<p>Eigenfunction property applies to steady-state solution. Can also extend to
+<p>Likely a good time to move to the unilateral Laplace transform and how we
+can use it to solve ordinary LDEs.</p>
+<h1>Unilateral Laplace Transform&amp; linear, constant-coefficient differential equations with non-zero initial conditions</h1>
+<p>Whenever you have nonzero initial conditions, you need to truncate. Trick
+used: multiply by unit step, then take Laplace transform. Effectively the
+same as taking unilateral Laplace transform.</p>
+<p><mathjax>$\hat{\mathcal{X}}(s) = \int_{0^-}^\infty x(t) e^{-st} dt$</mathjax>. A lot of
+textbooks only deal with the unilateral transform because they're
+interested in causal systems. As are we, in this context.</p>
+<p>If I am looking at the unilateral Laplace transform of <mathjax>$\dot{x}$</mathjax>, one
+additional term appears. If we integrate by parts, we can see what this
+term must be. In the bilateral case, we evaluated <mathjax>$uv$</mathjax> at both
+infinities. The second term (i.e. <mathjax>$\int vdu$</mathjax>) required that this product
+evaluate to zero at the infinities -- otherwise the integral would not
+converge.</p>
+<p>In the unilateral case, we therefore have an additional term: <mathjax>$-x(0^-)$</mathjax>.</p>
+<p>Zero-state, zero-input method. Remember: <strong>different</strong> from transient and
+steady-state. Best not to think of these at the same time.</p>
+<p>Method 2: use unilateral Laplace transform.</p>
+<p>Note that if a signal is causal, its unilateral Laplace transform is the
+same as its bilateral Laplace transform.</p>
+<p><a name='25'></a></p>
+<h1>EE 120: Signals and Systems</h1>
+<h2>April 24, 2012.</h2>
+<h1>DC Motor Control</h1>
+<p>Application of what we've been studying. Way to review and test fluency
+with material. We've got a DC motor whose model is some second order linear
+differential equation. We've got applied torque and damping. Moment of
+inertia of rotor and whatever's hooked up to it.</p>
+<p>Transfer function?</p>
+<p>Feedback to stabilize the system. Place this in proportional feedback
+configuration: only other thing in the feedback system is K, which is a
+scalar. Integrator: of form <mathjax>$\frac{1}{s}$</mathjax>. What's the transfer function?
+Characteristic polynomial from differential equations.</p>
+<p>K must be positive for BIBO stability.</p>
+<p>If roots complex, guaranteed stability. Real part of each pole is
+<mathjax>$-\frac{D}{2M} &lt; 0$</mathjax>.</p>
+<p>Oscillations you get when you have complex poles. Underdamping, critical
+damping, overdamping. Robustness discussion.</p>
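A sketch with assumed $M$ and $D$ (the notes keep them symbolic): the closed-loop characteristic polynomial $Ms^2 + Ds + K$ has both roots in the left half-plane for any $K > 0$, and the roots turn complex with real part $-\frac{D}{2M}$ once $K > D^2/4M$.

```python
import numpy as np

# Closed-loop poles of the proportional-feedback motor loop.
M, D = 1.0, 2.0                     # assumed inertia and damping

def poles(K):
    return np.roots([M, D, K])      # roots of M*s^2 + D*s + K

for K in (0.5, 1.0, 4.0):           # over-, critically-, under-damped
    p = poles(K)
    print(K, p, all(p.real < 0))    # K > 0 keeps both poles in the LHP
```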
+<h1>Bode plots!</h1>
+<p>Gots two building blocks:</p>
+\hat{F}_I(s) = 1 + \frac{s}{\omega_0}
+\\ F_I(\omega) = \hat{F}_I(i\omega)
+<p>Asymptotic plot of <mathjax>$20\log\abs{F_I}$</mathjax> and <mathjax>$\angle F_I$</mathjax>. The horizontal axis
+is a logarithmic axis.</p>
+<p>What happens when <mathjax>$\omega$</mathjax> is very small? Asymptotically zero. At higher
+frequencies, <mathjax>$\omega$</mathjax> large, so imaginary part will dominate. And so when
+you take its magnitude, you jump by 20 dB every time you increase the frequency by
+10 -- slope is 20dB/dec. <mathjax>$\omega_0$</mathjax> is your corner frequency (named for
+obvious reasons). 3dB point: corner frequency. One of foundational blocks
+for frequency responses on logarithmic scales. 45 degrees per decade. (use
+10x to determine dominance).</p>
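The two corner-frequency facts can be verified numerically ($\omega_0$ below is an arbitrary assumed corner):

```python
import math

# The regular zero F(jw) = 1 + jw/w0: up 3 dB at the corner, 45 degrees
# of phase there.
w0 = 2 * math.pi * 1e3              # assumed corner frequency

def mag_db(w):
    return 20 * math.log10(abs(1 + 1j * w / w0))

def phase_deg(w):
    z = 1 + 1j * w / w0
    return math.degrees(math.atan2(z.imag, z.real))

print(mag_db(w0))      # ~3.01 dB: the "3dB point"
print(phase_deg(w0))   # 45 degrees, midpoint of the 45-deg/decade ramp
```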
+<p>This building block is called a regular zero. Not widely used; Babak
+learned from circuits professor at CalTech (R.D. Middlebrook).</p>
+<p>The second building block is <mathjax>$\frac{s}{\omega_0}$</mathjax>. Simple zero.</p>
+<p>Claim: all expressions with real roots can be written as combinations of
+these two.</p>
+<p>Inverted zero: <mathjax>$1 + \frac{\omega_0}{s}$</mathjax></p></div><div class='pos'></div>
<script src='mathjax/unpacked/MathJax.js?config=default'></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
4 mathjax/unpacked/config/local/local.js
@@ -31,10 +31,12 @@ MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
// TEX.Macro("R","{\\bf R}");
// TEX.Macro("op","\\mathop{\\rm #1}",1); // a macro with 1 parameter
TEX.Macro("set", "\\left\\{ #1 \\right\\}", 1);
+ TEX.Macro("const", "\\mathrm{const}", 0);
+ TEX.Macro("tensor", "\\otimes", 0);
TEX.Macro("cplx", "\\mathbb{C}", 0);
TEX.Macro("fourier", "\\overset{\\mathcal{F}}{\\Longleftrightarrow}", 0);
TEX.Macro("ztrans", "\\overset{\\mathcal{Z}}{\\Longleftrightarrow}", 0);
- TEX.Macro("ltrans", "\\overset{\\mathcal{L}}{\\Longleftrightarrow}", 0);
+ TEX.Macro("ltrans", "\\overset{\\mathcal{H}}{\\Longleftrightarrow}", 0);
TEX.Macro("prob", "\\Pr\\left[#1 \\right]", 1);
TEX.Macro("abs", "\\left\\vert #1 \\right\\vert", 1);
TEX.Macro("vec", "\\overset{\\rightharpoonup}{#1}", 1);
496 phys112.html
@@ -2066,7 +2066,501 @@
ground state. We show that we have a certain Einstein condensation
temperature; <mathjax>$\tau_E = \frac{2\pi\hbar^2}{m}\expfrac{N}{2.612 V}{2/3}
\implies N_{exc} = N\expfrac{\tau}{\tau_E}{3/2}$</mathjax>. For large densities,
-<mathjax>$\tau_E$</mathjax> is not very small. </p></div><div class='pos'></div>
+<mathjax>$\tau_E$</mathjax> is not very small. </p>
+<p><a name='31'></a></p>
+<h1>Physics 112: Statistical Mechanics</h1>
+<h1>Bose-Einstein Condensation: April 16, 2012</h1>
+<p>Today: In particular, speak about number of states, superfluidity in, for
+instance, liquid helium, and discoveries of Bose-Einstein condensates.</p>
+<p>Midterms will be graded by tomorrow, presumably. You will have them on
+Wednesday. So the final is on May 7, so three weeks from today, early in
+the morning (sorry about that, but you are accustomed now). Of course, it
+will cover everything that you have covered. After Bose-Einstein
+condensates, I will have a short chapter on phase transitions (that is an
+important subject in statistical mechanics), and then we will spend some
+time on cosmology next week to show you how you can apply this to a
+practical problem (it is not that practical).</p>
+<p>Office hours Wednesday from 3-4; today is as usual (11-12).</p>
+<h1>Bose-Einstein Condensates</h1>
+<p>The Bose-Einstein distribution has an occupation number of <mathjax>$\frac{1}{\exp\parens{
+\frac{\epsilon - \mu}{\tau}} - 1}$</mathjax>. This negative one is critical and will
+determine the behavior of the condensation. What we saw was that when
+<mathjax>$\frac{\mu - \epsilon_0}{\tau} \sim -\frac{1}{N}$</mathjax> (for some <mathjax>$\epsilon_0$</mathjax>
+being our ground state), roughly all of the particles are in the ground state.</p>
+<p>In principle we should make the calculation of <mathjax>$\mu$</mathjax> by just looking at the
+total number of particles: by the usual sum. Recall that when measuring
+<mathjax>$\mu$</mathjax>, we cannot use the integral approximation that we used in the
+Fermi-Dirac case, since our states of low energy are not very close
+together. The integral approximation does not take into account the spacing
+between the states. When <mathjax>$\mu$</mathjax> is very small, this is not a very good
+approximation. One way of thinking about it: states get denser at higher
+<mathjax>$\epsilon$</mathjax>, but what happens with the Bose-Einstein condensation? When
+<mathjax>$\frac{\epsilon_0 - \mu}{\tau} \sim \frac{1}{N} \ll \frac{\epsilon - \epsilon_0}{\tau}$</mathjax>,
+this spacing is critical: it will enforce that most of the particles are in
+the ground state, and very few are in excited states.</p>
+<p>So can we make calculations of this <mathjax>$\mu$</mathjax> analytically? No; this is a
+numeric problem.</p>
+<p>We can talk, however, of isolating the first term and then using the
+integral approximation on all excited states. That is an approximation, and
+we would like this to be equal to N. We are interested in the second term,
+the number of excited states. We would like <mathjax>$\frac{\epsilon_0 - \mu}{\tau}
+\sim \frac{1}{N}$</mathjax>, so we can replace the excited states with <mathjax>$N_{exc}
+\equiv V \int_{ \epsilon_1}^\infty \frac{1}{\exp \parens{\frac{\epsilon_0 -
+\mu}{\tau}} \exp \parens{\frac{\epsilon - \epsilon_0}{\tau}} - 1}
+D(\epsilon) d\epsilon$</mathjax>.</p>
+<p>Working through the math, and setting <mathjax>$\epsilon_0$</mathjax> to 0, we get <mathjax>$N_{exc} =
+2.612 n_Q V$</mathjax>. It does not depend on N if <mathjax>$\frac{\mu}{\tau} = 0$</mathjax>. We thus
+define the Einstein condensation temperature as <mathjax>$\tau_E \equiv \frac{2\pi
+\hbar^2}{m} \expfrac{N}{2.612 V}{2/3}$</mathjax>, so <mathjax>$N_{exc} = N\expfrac{\tau}
+{\tau_E} {3/2}$</mathjax>.</p>
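<p>A minimal sketch of this result (assuming only the formula above, with the temperature measured in units of the condensation temperature): the ground-state fraction is 1 - (tau/tau_E)^(3/2) below tau_E and essentially zero above it.</p>

```python
def excited_fraction(tau, tau_E):
    # N_exc / N = (tau / tau_E)^(3/2), capped at 1 above the
    # condensation temperature, following the formula in the notes.
    return min(1.0, (tau / tau_E) ** 1.5)

def condensate_fraction(tau, tau_E):
    # Fraction of particles sitting in the ground state.
    return 1.0 - excited_fraction(tau, tau_E)

for t in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(t, round(condensate_fraction(t, 1.0), 3))
```

At tau = tau_E the condensate fraction reaches zero, which is exactly the statement that tau_E marks the onset of condensation.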
+<h2>Liquid <mathjax>$^4$</mathjax>He</h2>
+<p>If you look in the slides, I have actually computed (also probably in
+Kittel) the numerical values for <mathjax>$^4$</mathjax>He. If you apply this naively, you
+will get 3.1K. So <mathjax>$^4$</mathjax>He is expected (if there were no interactions between
+the atoms) to behave like a Bose-Einstein condensate. Not exactly true: it
+behaves as a Bose-Einstein condensate below about 2.17K. This is important:
+that temperature is called the Landau point (also called the lambda point,
+because the heat capacity curve looks like a lambda). Helium is liquid below
+about 4K; the heat capacity literally changes slope at 2.17K.</p>
+<p>All of your particles are in the same state, and the system becomes a macroscopic
+quantum state. Very fun to see (oh god!). Basically in the same way as in
+electromagnetism, which is doing a square root of number of photons and
+plane waves: we have a wave function which is the square root of energies.</p>
+<p>With this quantum effect, you can observe vortices which are quantized: they
+have a certain amount of angular momentum. You have the equivalent of the two-slit
+experiment, where basically you have liquid helium go through two slots,
+and they diffract exactly like Young's double-slit experiment. You have
+basically all the interference phenomena.</p>
+<p>The most dramatic macroscopic property is superfluidity. Not only dramatic,
+it is a pain for experimentalists working at low temperature. Basically
+what is happening is that the atoms are not subject to any kinds of forces
+from the wall. They begin to flow on the wall as if it had no roughness
+(explanation forthcoming!). It makes the helium have no surface tension on
+the surface and go through cracks. One of the problems of experimentalists
+working at low temperature is something that is essentially leak-proof
+above the Landau point (2.17K), but once you cross that threshold, bang!
+the thing begins to leak like a sieve.</p>
+<p>And of course 2.17K is something that you go and look; you'd have to warm
+up and try to understand where the leak could have come from, redo the
+solder, get back down, and maybe nine times out of ten, this thing is
+leaking again. That's why low-temperature is sometimes called
+slow-temperature physics. It takes a lot of tries to fix a system which is leaking.</p>
+<p>To give you an idea of what is going on, I would like to ask you the following
+question: when is energy transfer maximized? When the two masses are equal;
+easy to show via conservation of energy and conservation of
+momentum. Important consequence: if <mathjax>$M \ll m$</mathjax>, then <mathjax>$\frac{1}{2}mv^2 \to
+0$</mathjax>. In one dimension (you can of course generalize to several dimensions),
+with a particle of mass M, <mathjax>$E = \frac{p^2}{2M}$</mathjax>. I have to conserve energy
+and momentum, so the dispersion relationship of my particle of mass <mathjax>$m$</mathjax> can
+be expressed graphically by the intersection of two parabolas. If <mathjax>$m = M$</mathjax>,
+the curves have the same width, so energy transfer is maximized. If m is
+infinite, this is flat: I am not losing any energy.</p>
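<p>The claim above can be checked directly (a sketch, not from the lecture): for a head-on elastic collision with a target of mass M initially at rest, conservation of energy and momentum gives a transferred energy fraction of 4mM/(m+M)^2, which is 1 when m = M and tends to zero when the masses are very different.</p>

```python
def energy_transfer_fraction(m, M):
    # Head-on 1D elastic collision, target M initially at rest.
    # Conservation of energy and momentum gives the target a velocity
    # 2m/(m+M) times the projectile's, hence this fraction of the
    # projectile's kinetic energy is transferred.
    return 4.0 * m * M / (m + M) ** 2

print(energy_transfer_fraction(1.0, 1.0))   # equal masses: full transfer
print(energy_transfer_fraction(1.0, 1e6))   # heavy target: almost no transfer
print(energy_transfer_fraction(1e6, 1.0))   # heavy projectile: almost no loss
```

The M >> m limit is the point made in the notes: a very heavy (or very soft) system absorbs essentially no energy from a light projectile.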
+<p>We have a large effective <mathjax>$m$</mathjax>, but the analogy breaks down: system that is
+very soft: no way to transfer a lot of momentum. When we send a little ball
+into a superfluid liquid helium, it does not lose energy: keeps going as if
+it were in a vacuum.</p>
+<p>If your velocity is large enough, you can lose energy to phonons. In liquid
+helium, there are also quantized oscillations. You have a system with
+excitations, and there are phonons. If I am below the tangent, there is no
+way I can have phonons (i.e. if travelling below velocity of sound). Only
+if I am above the speed of sound can I lose energy. It will only lose
+energy to phonons, and not to kicking of the system. Can emit excitations
+(phonons) if velocity large enough. This may remind you of Cherenkov
+radiation. This is a phenomenon remarkably similar to that. If a particle
+goes through a medium faster than the local velocity of light (smaller than c
+because of the index of refraction), then it will emit light. Same thing: if you
+go above the velocity of sound, you will emit phonons.</p>
+<p>So a lot of interesting physics; you can do the calculations, but the graph
+is good enough to show what is happening.</p>
+<p>By the way, there is one small experiment, which is somewhat interesting:
+if you put a little pipe going through the surface of liquid helium in a
+container with glass walls, and you begin to pump at 4K, and the
+temperature of the helium goes down. Suddenly at 2.17K, you have a fountain
+of liquid helium coming out of your tube. Very cool. Another thing that
+happens is that the helium rises up on the sides and heats up to 2.17K
+where it evaporates. It goes through cracks.</p>
+<p>In the late 90s, there were interesting successful attempts by two groups
+to artificially create Bose-Einstein condensation: one at NIST/JILA in
+Boulder, and one at MIT.</p>
+<p>This is an example where they were trapping atoms in the form of a ring,
+which you can observe rotating. This is just the spectacular demonstration
+that the particles are totally coherent.</p>
+<p>Any way: how do we do that? What you need to make a Bose-Einstein
+condensate: low temperature (need to cool atoms) and high density (of the
+order of the quantum density). So how do you cool atoms? Having them bounce
+on the wall is not a very efficient way of cooling them.</p>
+<p>The breakthrough came from what was called laser cooling: suppose I have an
+atom that I want to cool. This atom is going in many directions. Let's
+choose an absorption line of this atom which has some resonance and
+frequency. Instead of sending a laser at the resonance frequency, let's
+send a laser slightly below this frequency. What is happening? This is if I
+were in the rest frame of the atom. If the atom is moving towards the
+laser, in this rest frame, it sees the frequency of the laser slightly
+blue-shifted, so it absorbs the laser more, and it will emit the photons
+over <mathjax>$4\pi$</mathjax> after a while, and it has lost kinetic energy. If it goes away
+from the laser, it will scatter less (it will see the frequency
+red-shifted). The net result is if I am sending laser light from all
+directions, I will tend to cool my atoms and decrease their energy.</p>
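<p>The sign argument can be sketched numerically (an illustration in assumed arbitrary units, not the lecture's notation): with a Lorentzian scattering rate and two counter-propagating beams detuned below resonance, the beam the atom moves toward is Doppler-shifted closer to resonance, so the net force always opposes the velocity.</p>

```python
def scattering_rate(detuning, linewidth=1.0):
    # Lorentzian line shape in arbitrary units; detuning is the laser
    # frequency minus the atomic resonance frequency.
    return 1.0 / (1.0 + (2.0 * detuning / linewidth) ** 2)

def net_force(delta, k, v):
    # Force along +x from two counter-propagating beams with detuning delta.
    # The beam opposing the motion is seen blue-shifted (effective detuning
    # delta + k*v); the co-propagating beam is seen red-shifted (delta - k*v).
    return scattering_rate(delta - k * v) - scattering_rate(delta + k * v)

delta = -0.5   # red detuning: laser slightly below resonance
for v in (-0.1, 0.0, 0.1):
    print(v, net_force(delta, 1.0, v))
```

For red detuning the force is negative when v is positive and positive when v is negative: a friction force, which is why this scheme cools.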
+<p>We can use the same idea to trap the atoms: put magnetic field on the side:
+frequency is changing; oblige particles to be in the area of zero magnetic field.</p>
+<p>In practice this is a little more complex than this: you cannot make a
+magnetic field that looks like an infinite square well, but you can have a
+rotating magnetic field, so every time the particles want to go out, they
+will see the magnetic field (whose energy will be higher: particles are
+slow since they have been cooled). Two groups in our department are doing
+that as their main research.</p>
+<p><a name='32'></a></p>
+<h1>Physics 112: Statistical Mechanics</h1>
+<h1>Fermi-Dirac, Bose-Einstein, Phase transitions: April 18, 2012</h1>
+<p>Reasoning why chemical potential does not vary much at low temperatures: it
+looks fairly rectangular under normal circumstances. When we have a finite
+temperature, we have some rounding of the distribution.</p>
+<p>The occupation number of holes a distance <mathjax>$\delta$</mathjax> below <mathjax>$\mu$</mathjax> equals
+the occupation number of electrons a distance <mathjax>$\delta$</mathjax> above. The two are equal
+because the occupation numbers add up to one. On the other hand, that's not what we want to
+have: we want to multiply by density of states. We have to plot
+<mathjax>$f(\epsilon, \tau)$</mathjax> and integrate over that. In two dimensions,
+<mathjax>$D(\epsilon)$</mathjax> is constant. So this is the same graph, except now we have
+<mathjax>$D(\epsilon)$</mathjax> on our axis. At a distance <mathjax>$\delta$</mathjax> from the chemical
+potential, now the <em>number</em> of holes is equal to the <em>number</em> of electrons
+(as opposed to occupation number: we've taken into account density of
+states, so this is for the entire system).</p>
+<p>Because of this symmetry, the integral <mathjax>$\int_0^\infty f(\epsilon, \tau)
+D(\epsilon) d\epsilon$</mathjax> is exactly equal to <mathjax>$\int_0^\mu D(\epsilon)
+d\epsilon$</mathjax>. This is known to be <mathjax>$\int_0^{\epsilon_F} D(\epsilon)
+d\epsilon$</mathjax>. Slightly different from what happens in 3 dimensions:
+<mathjax>$D(\epsilon) \propto \sqrt{\epsilon}$</mathjax>, so my function <mathjax>$fD$</mathjax> looks different
+because I am losing less on the hole-side than gaining on the electron, so
+I have to reduce the chemical potential a little bit at <mathjax>$\tau$</mathjax> at 0.</p>
+<p>Don't confuse occupation number (which is symmetric) with the number (which
+is not symmetric in general). Symmetric only in two-dimensional case
+because <mathjax>$D(\epsilon)$</mathjax> constant.</p>
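<p>This can be checked numerically (a sketch assuming a constant density of states D = 1 and a fixed chemical potential, in arbitrary units): integrating the Fermi-Dirac occupation from 0 to infinity gives back mu up to an exponentially small correction, so in 2D the particle number is conserved without shifting mu as the temperature changes.</p>

```python
import math

def fermi(eps, mu, tau):
    # Fermi-Dirac occupation number
    return 1.0 / (math.exp((eps - mu) / tau) + 1.0)

def total_number(mu, tau, emax=5.0, n=100000):
    # midpoint-rule integral of f(eps) * D(eps) with D = 1 (constant, as in 2D)
    de = emax / n
    return sum(fermi((i + 0.5) * de, mu, tau) for i in range(n)) * de

mu = 1.0
for tau in (0.02, 0.05, 0.1):
    print(tau, round(total_number(mu, tau), 5))
```

Every temperature returns essentially the same total (the states lost below mu are exactly compensated by the states gained above), which is the hole/electron symmetry described above.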
+<p>Let me, then, finish rapidly what I wanted to say on Fermi-Dirac and
+Bose-Einstein. We were speaking about this very nice experiment: BEC atoms,
+cooling in a trap. Now becoming routine, but before were very difficult to
+do with atoms. And now people are doing that with molecules; they are
+making artificial crystals; there is a whole industry. They take atoms and
+arrange them in a particular fashion and potential. Now that the technology
+of cooling the atoms and trapping them is well understood, there is a lot
+of physics happening.</p>
+<p>Graph corresponds to spatial density of atoms. Claim: not a great
+discovery; just a technical feat. Superconductivity: in low-temperature
+superconductors, you have electrons pairing in Cooper pairs via phonon exchange.</p>
+<p>Condensation theory is a bad approximation: interactions between Cooper
+pairs are important. The temperature at which superconductivity appears is
+much smaller than you would naively compute. Similar effects: zero
+resistance (as in superfluidity), vortices: quantization of flux, phase
+shift effects: all the <mathjax>$n^2$</mathjax> behavior. Superconductor with two junctions is
+equivalent to Young's double-slit experiment. Very similar properties; very
+important devices.</p>
+<p><mathjax>$^3He$</mathjax> is spin <mathjax>$\frac{1}{2}$</mathjax>, and you have pairing of the spins to create a
+spin of 1 and thus superfluidity. Then, because the pairs carry spin, you
+have very strange effects: magnetic properties.</p>
+<p>This is a very important effect in condensed-matter physics. Emergent
+phenomenon: completely different behavior at low temperature. Surprising:
+not that low of temperature for sufficiently dense systems.</p>
+<p>Energy density for both bosons and fermions goes as <mathjax>$T^4$</mathjax>. Useful when
+considering early universe when considering expansion.</p>
+<p>Pressure: we have never solved problem of scattering of particles (shown
+force per unit area). Once again, define independent variables when taking
+partial derivatives. What is constant is energy and number of particles (if
+we want to use <mathjax>$p = \tau \pderiv{\sigma}{V}$</mathjax>).</p>
+<p>The force per unit area on the wall can be readily computed in the
+following way: I am considering a small area on the wall <mathjax>$dA$</mathjax>. Now, if I
+have a particle coming in (I will assume, by the way, because of the
+symmetry, the angle of reflection is the same as that of incidence), the
+force is merely <mathjax>$\pderiv{p}{t}$</mathjax>, i.e. what is the change of momentum per
+unit time? And the pressure will be the force divided by <mathjax>$dA$</mathjax>. I will have
+to compute this: <mathjax>$\frac{1}{dA\,\Delta t}\int 2p\cos\theta\, v\Delta t\, dA
+\cos\theta\, n(p) p^2 dp\, d\Omega = 2\pi \int \cos^2\theta\, d\cos\theta \int 2pv\,
+n(p)p^2dp$</mathjax>. The <mathjax>$\theta$</mathjax> integral gives me <mathjax>$\frac{1}{3}$</mathjax>, and what we have
+is that my pressure is <mathjax>$\frac{4\pi}{3} \int pv n(p) p^2 dp$</mathjax>. Depending on
+whether you are nonrelativistic or (ultra)relativistic, pv is just <mathjax>$mv^2 =
+2\epsilon$</mathjax>, so <mathjax>$P = \frac{2}{3}U$</mathjax>. If you are relativistic, <mathjax>$pv$</mathjax> is just
+<mathjax>$pc = \epsilon$</mathjax>, the pressure is therefore just <mathjax>$\frac{1}{3}U$</mathjax>. And check
+with our various results: this is the same pressure as the thermodynamic
+definition <mathjax>$\tau\pderiv{\sigma}{V}\Big|_{U,N}$</mathjax></p>
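<p>These ratios can be verified numerically (a sketch in arbitrary units; a classical Boltzmann occupancy n(p) is assumed for concreteness, but the ratio depends only on the dispersion relation, not on n(p)): P/U comes out 2/3 for epsilon = p^2/2m and 1/3 for epsilon = pc.</p>

```python
import math

def midpoint_integral(f, a, b, n=50000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

m = tau = 1.0  # arbitrary units

# non-relativistic gas: eps = p^2/2m, v = p/m, occupancy exp(-eps/tau)
occ_nr = lambda p: math.exp(-p * p / (2.0 * m * tau))
U_nr = 4 * math.pi * midpoint_integral(
    lambda p: (p * p / (2 * m)) * occ_nr(p) * p * p, 0.0, 20.0)
P_nr = (4 * math.pi / 3) * midpoint_integral(
    lambda p: p * (p / m) * occ_nr(p) * p * p, 0.0, 20.0)
print(round(P_nr / U_nr, 4))   # 2/3

# ultra-relativistic gas: eps = p*c with c = 1, v = c
occ_r = lambda p: math.exp(-p / tau)
U_r = 4 * math.pi * midpoint_integral(
    lambda p: p * occ_r(p) * p * p, 0.0, 50.0)
P_r = (4 * math.pi / 3) * midpoint_integral(
    lambda p: p * 1.0 * occ_r(p) * p * p, 0.0, 50.0)
print(round(P_r / U_r, 4))     # 1/3
```

Because pv = 2*eps in the first case and pv = eps in the second, the ratio is exact whatever occupancy is plugged in; the quadrature only illustrates it.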
+<p>Explanation for why we have pressure for Fermi-Dirac even at zero
+temperature: I have to stack up my states in energy space, and I have to
+have states that are high velocity even at zero temperature. That's one of
+the interpretations of the pressure of a Fermi-Dirac gas.</p>
+<p>phase transitions: system in contact with reservoir, but not necessarily in
+equilibrium. What is minimized? Landau free energy. We have seen this: free
+energy is not defined because <mathjax>$\tau$</mathjax> is not defined. Energy is not minimized
+because system is constantly kicked by thermal fluctuations.</p>
+<p><a name='33'></a></p>
+<h1>Physics 112: Statistical Mechanics</h1>
+<h1>Phase transitions: April 23, 2012</h1>
+<p>Would like to finish phase transitions if possible. The modern way of
+looking at phase transitions involves looking at the Landau free
+energies: consider the Landau free energy <mathjax>$F_L = U_s - \tau_R
+\sigma_s$</mathjax> and Landau free enthalpy <mathjax>$G_L = U_s - \tau_R\sigma_s + p_R
+V_s$</mathjax>. The first one is used when considering constant volume, and the
+second one is used at constant pressure.</p>
+<p>Generally speaking, you will look at the dependence of <mathjax>$U_s, \sigma_s, V_s$</mathjax>
+on an order parameter <mathjax>$\xi$</mathjax>, and we are looking at equilibrium, which is
+obtained at the minimum.</p>
+<p>You may be somewhat confused by the fact that you cannot define the state
+of a system by just one parameter. We must actually also minimize with
+respect to all the other variables.</p>
+<p>This minimization comes in usually by the expression of <mathjax>$\sigma_s
+(\xi)$</mathjax>. When you speak of <mathjax>$\sigma_s(\xi)$</mathjax>, usually energy depends directly
+on <mathjax>$\xi$</mathjax>, whereas <mathjax>$\sigma$</mathjax> depends on probabilities. You will maximize
+<mathjax>$\sigma_s$</mathjax> at some given <mathjax>$\xi$</mathjax>.</p>
+<p>When I was speaking of ferromagnetism, at one point I was changing in the
+expression of the entropy <mathjax>$\frac{mB}{\tau} \to \tanh^{-1}\parens{\frac{M}
+{nm}}$</mathjax>, I was already doing this minimization.</p>
+<p>The net result is that if I plot F as a function of the magnetization, at
+high temperature, the magnetization wants to be zero (since that is the
+minimum of the Landau free energy), and this curve will move upwards until
+suddenly, at a critical temperature (the Curie temperature), it develops a
+minimum, and the equilibrium magnetization becomes nonzero.</p>
+<p>If I plot the magnetization as a function of the temperature, it is zero
+above this temperature. This is a second-order phase transition, and you
+move smoothly from <mathjax>$m=0$</mathjax> to <mathjax>$m \neq 0$</mathjax>. There is no discontinuity in
+<mathjax>$m$</mathjax>. This is very different from the case in which we go from gas to
+liquid. Continuous evolution: the thing that is discontinuous is the first derivative.</p>
+<p>Classical gases: what are we missing in our description (point-like
+particles which just scatter when they make contact with each other).</p>
+<p>Issues: limit to compressibility (not taking into account volume of
+particles). Interaction forces (long-distance): attractive Van der Waals
+forces (polarization due to fluctuations induces polarizations in other
+local particles). So how do these forces look? I have a very strong
+repulsive force when they are in contact and a weaker <mathjax>$\frac{1}{r}$</mathjax> force
+when they are sufficiently far apart.</p>
+<p><mathjax>$V \to V - Nb$</mathjax> (where <mathjax>$b$</mathjax> is the volume of the atom). So this is the
+approximation. Instead of a very fast approaching force, we linearize about
+this point. For the attractive force, I will treat in a similar manner to
+what we did for magnetism: mean field approximation. <mathjax>$\avg{U} \to \avg{U} -
+\avg{\phi}\frac{N(N-1)}{2}$</mathjax>. I will say that <mathjax>$U = T - \frac{N^2 a}{V}$</mathjax>
+because the average of <mathjax>$\phi$</mathjax> is <mathjax>$\frac{\int \phi d^3 r}{V} \equiv
+\frac{2a}{V}$</mathjax>. I will make this approximation: there is an attractive force
+that goes as <mathjax>$N^2$</mathjax>.</p>
+<p>We want to compute <mathjax>$G_L$</mathjax>. I have <mathjax>$U_s = T - \frac{N^2 a}{V}$</mathjax>, and, back to
+counting of states, <mathjax>$\sigma_s = N \bracks{\log \parens{\expfrac{m
+U_K}{3\pi\hbar^2 N}{3/2} \frac{V - Nb}{N}} + \frac{5}{2}} \equiv N \log \frac{n_Q}{n} +
+\frac{5}{2}N$</mathjax> (the Sackur-Tetrode formula).</p>
+<p>At <mathjax>$U_k = \frac{3}{2} N\tau$</mathjax>, I am not yet in equilibrium with the
+reservoir, so there is no real way to define temperature.</p>
+<p>What is the pressure? There is a critical pressure <mathjax>$\frac{a}{27b^2}$</mathjax> (don't
+ask me; just result of calculation) above which things behave normally:
+<mathjax>$G_L$</mathjax> normally, and as temperature goes down, <mathjax>$G_L$</mathjax> as a function of <mathjax>$V_s$</mathjax>
+moves downward. But below this limit, I start to develop two minima: moves
+up and to the left as temperature goes down.</p>
+<p>Setting <mathjax>$\pderiv{G_L}{V_s} = 0$</mathjax> gives us the Van der Waals equation of state,
+<mathjax>$\parens{p_R + \frac{N^2a}{V_s^2}}\parens{V_s - Nb} = N\tau_R$</mathjax>. I have
+already defined <mathjax>$p_c = \frac{a}{27b^2}$</mathjax>, and <mathjax>$\tau_c = \frac{8a}{27b}$</mathjax>.</p>
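<p>A numeric sanity check on the quoted critical values (a sketch with a = b = N = 1 in arbitrary units): at tau_c = 8a/27b and V_c = 3Nb, the Van der Waals isotherm passes through p_c = a/27b^2 with both the first and second derivatives in V equal to zero, i.e. an inflection point.</p>

```python
def vdw_pressure(V, tau, N=1.0, a=1.0, b=1.0):
    # Van der Waals equation of state solved for pressure:
    # (p + N^2 a / V^2)(V - N b) = N tau
    return N * tau / (V - N * b) - a * N * N / (V * V)

a = b = N = 1.0
tau_c = 8.0 * a / (27.0 * b)
V_c = 3.0 * N * b
p_c = a / (27.0 * b * b)

# the critical isotherm passes through (V_c, p_c)
assert abs(vdw_pressure(V_c, tau_c) - p_c) < 1e-12

# both dp/dV and d2p/dV2 vanish there (numeric central differences)
h = 1e-4
dp = (vdw_pressure(V_c + h, tau_c) - vdw_pressure(V_c - h, tau_c)) / (2 * h)
d2p = (vdw_pressure(V_c + h, tau_c) - 2 * vdw_pressure(V_c, tau_c)
       + vdw_pressure(V_c - h, tau_c)) / (h * h)
print(abs(dp) < 1e-7, abs(d2p) < 1e-5)
```

Below tau_c the isotherm develops a maximum and a minimum in between, which is where the two competing minima of the Landau free enthalpy come from.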
+<p>At this critical temperature, it begins to develop a certain inflection
+point, and the liquid/gas relationship emerges.</p>
+<p>If you do this calculation numerically, it is not easy to plot: there is a
+big difference between the volume of the gas and the volume of the liquid for
+different values of the pressure compared to the critical pressure.</p>
+<p>At high pressure compared to the critical pressure, you always stay in the
+gas phase. As you increase the temperature, it will decrease, and the
+minimum goes down: the volume just changes. Nothing special happens. If you
+are below the critical pressure, then at high temperature the volume is large,
+and you see a single minimum; as the temperature goes down, a second minimum
+develops: the liquid. That is what is happening.</p>
+<p>Transition from liquid to gas as we increase the temperature: it does not
+go by itself: there is a potential barrier between the two phases: liquid
+<mathjax>$\to$</mathjax> gas needs a wall or a dust particle (creating a bubble takes work).</p>
+<p>Even if the gas has a smaller free enthalpy, I still have to overcome this
+potential barrier (we are stuck in the liquid if nothing else
+happens). Takes work to create a bubble.</p>
+<p>Meta-stable states: superheated liquid or supercooled vapor: need surfaces
+for transition to occur.</p>
+<p>Unless you increase the temperature high enough such that there are no more
+local minima, the transition will happen extremely brutally.</p>
+<p><a name='34'></a></p>
+<h1>Physics 112: Statistical Mechanics</h1>
+<h1>Phase transitions, Cosmology: April 25, 2012</h1>
+<p>What I wanted to do was speak about the final. You have the right to have
+four pages now (single-sided) of notes. As usual, my advice is to rewrite
+your notes because this class is more about concepts and how they relate to
+each other than formulae. As should be quite obvious from the midterms, if
+you take the wrong formula and apply it to a situation, the result is wrong.</p>
+<p>I think we all agree that 8am is not advisable. So what I can propose is a
+review session either on Wednesday 4-6, Friday 10-12, or Friday 2-4 (Alex's
+is on Thursday 1-3pm in 9 Lewis). I will focus the review on chemical
+potential, since we have not really seen this before.</p>
+<p>Strong preference for Wednesday.</p>
+<p>So let's look at phase transitions. We were looking at this question of how
+the system goes from liquid to gas as we increase the temperature. The
+thing I wanted to attract your attention to is that for this kind of first-order
+phase transition, the behavior is not continuous: there is a discontinuity
+because there are two minima in <mathjax>$G_L$</mathjax>. Because there is a potential barrier
+between the two minima, the system can be stuck in one of the states. If
+you heat (pure) water to just above 100 degrees Celsius, it will not
+necessarily boil. It will be stuck in a metastable state of superheated
+liquid. It will only boil because of defects.</p>
+<p>Important in stuff like bubble chambers and particle detectors (important for
+detection of dark matter). Can stay metastable for minutes. This is fairly
+characteristic of what we call first-order transitions.</p>
+<p>Chemical potential as a function of <mathjax>$p, \tau, N$</mathjax> has no dependence on
+<mathjax>$N$</mathjax>. We showed that <mathjax>$G = N\mu(p,\tau)$</mathjax>.</p>
+<p>Entropy of liquid is lower than that of gas with same parameters. Related
+to the fact that there are fewer degrees of freedom, so smaller number of
+states. Using that the Gibbs free energies are the same, <mathjax>$\Delta H = LN &gt;
+0$</mathjax>, where <mathjax>$L$</mathjax> is the latent heat per particle.</p>
+<p>Coexistence: you can follow the separation between the liquid and gas as a
+function of pressure and temperature. When the Landau free enthalpy is
+equal between the systems, you are on this locus where gas and liquid
+coexist, and of course it stops when you reach critical pressure and
+critical temperature, which we call the critical point. This is of course
+of intense interest to us.</p>
+<p>There is a very famous formula derived in the mid-nineteenth century by
+Clausius and Clapeyron, which is very simple. Clearly we have
+<mathjax>$G_L(p(\tau),\tau) = G_g(p(\tau),\tau)$</mathjax> (equation at the coexistence
+line). Now, taking the derivative with respect to <mathjax>$\tau$</mathjax>, we get
+<mathjax>$\pderiv{G_L}{p}\deriv{p}{\tau} + \pderiv{G_L}{\tau} = \pderiv{G_g}
+{p}\deriv{p}{\tau} + \pderiv{G_g}{\tau}$</mathjax>. So what are these terms?</p>
+<p><mathjax>$dG = -\sigma d\tau + Vdp + \mu dN$</mathjax>, so <mathjax>$\pderiv{G}{p} = V$</mathjax>,
+<mathjax>$\pderiv{G}{\tau} = -\sigma$</mathjax>. Thus we can solve: <mathjax>$\deriv{p}{\tau} =
+\frac{\sigma_l - \sigma_g}{V_g - V_l} = \frac{1}{\tau}\frac{L}{v_g - v_l}$</mathjax>,
+where <mathjax>$v_g \equiv \frac{V_g}{N}$</mathjax>. If you use that <mathjax>$v_l \ll v_g \sim
+\frac{\tau}{p}$</mathjax>, then this is roughly <mathjax>$-\frac{pL}{\tau^2}$</mathjax>, or <mathjax>$p \sim
+\exp\parens{-\frac{L}{\tau}}$</mathjax>. If you express the partial pressure, you
+have a straight line, which is actually the latent heat per particle. Very
+good approximation for water (given the number of assumptions we have made)
+and ice (since we can do the same thing between solid and liquid). Also an
+excellent approximation for <mathjax>$^4\mathrm{He}$</mathjax>.</p>
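<p>A short numeric check of this chain of approximations (a sketch; L and p_0 are arbitrary made-up units): the vapor-pressure curve p = p_0 exp(-L/tau) satisfies dp/dtau = pL/tau^2, and the slope of log p against 1/tau is exactly -L, which is how the latent heat is read off experimentally.</p>

```python
import math

L, p0 = 10.0, 1.0   # latent heat per particle and prefactor (arbitrary units)

def p(tau):
    # integrated Clausius-Clapeyron vapor-pressure curve
    return p0 * math.exp(-L / tau)

# check dp/dtau = p * L / tau^2 against a numeric derivative
tau, h = 2.0, 1e-6
dp_num = (p(tau + h) - p(tau - h)) / (2 * h)
dp_cc = p(tau) * L / tau ** 2
print(abs(dp_num / dp_cc - 1.0) < 1e-6)

# slope of log p versus 1/tau recovers -L
slope = (math.log(p(2.5)) - math.log(p(2.0))) / (1.0 / 2.5 - 1.0 / 2.0)
print(round(slope, 6))
```

The straight-line fit is the standard way latent heats of vaporization (and sublimation) are extracted from measured vapor pressures.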
+<p>That's basically all regarding phase transitions. These arise from
+correlations between particles: no phase transitions with ideal gases. The
+method used here is the mean field approximation as a first-order. Two
+types: first order (coexistence, latent heat, metastability) and second
+order (continuous transformation -- discontinuity in derivative). In first
+order, there is a critical point. Very important in modern statistical mechanics.</p>
+<p>Let me dissipate ambiguities regarding presence on final. The details of
+what I will tell you will not be on the final. But the kind of principles
+that I am applying (the thermal physics and statistical mechanics) are
+clearly on the final: these are the things we have spoken about in
+considerable detail over the last 14 weeks.</p>
+<p>What I would like to speak to you about is basically the thermal evolution
+of the universe. We have something (we call this the Big Bang). Big
+explosion: a lot of unknown particle physics at the beginning. At about a
+tenth of a nanosecond in, what we see is fairly well defined. A few
+thousand years gives us the microwave background.</p>
+<p>How do we know that the universe is expanding? We can measure the Hubble
+recession of distant galaxies. To a first approximation, their velocity is
+proportional to their distance. There is essentially 80 years of research
+where we have learned to measure distances, account for local velocities,
+and more.</p>
+<p>The second thing (which we have spoken about) is that we have observed this
+background radiation (about 3K). The final thing (which I will speak about
+in more detail on Friday) is that in the early universe we not only
+observe protons and electrons, but that primordial helium and deuterium
+have been formed -- in order to understand how these things happened, we
+need a very hot phase.</p>
+<p>Best way to think about this expanding universe (which is mind-boggling
+because everything is changing) is to divide out the expansion. We have
+something called the scale parameter a(t) and go from the physical
+coordinate to the comoving coordinate, where we take this expansion away.</p>
+<p>Now, there is something that we call the Friedmann equation: the sum of
+kinetic energy and potential energy is constant. We can compute this constant
+in general relativity: it is related to the curvature of the universe. If the
+universe is flat, the constant is zero.</p>
+<p>For all practical purposes, we believe this all started from a phase
+transition: inflation. This second-order phase transition led to
+exponential expansion. What is going on? We assume that we have a field
+which we have never seen (order parameter -- the inflaton). Same formalism that we've
+used, except now in quantum mechanics: instead of classical variables, we now
+have quantum fields. As temperature decreases, this begins to develop a
+second minimum. This will induce a phase transition. System feels that its
+energy is not equal to zero, which leads to an exponential increase of the
+scale parameter. We call that inflation. We believe that in the space of
+less than a tenth of a nanosecond, the universe has expanded by a factor of
+60E40. For all practical purposes, this is what we call the big bang.</p>
+<p><a name='35'></a></p>
+<h1>Physics 112: Statistical Mechanics</h1>
+<h1>Cosmology: April 27, 2012</h1>
+<p>Thermal history of the universe. How thermodynamics / statistical physics
+is applied to the field. Before I do that: let me remind you that next week
+we will have review sessions Wed 4-6 in 325 Leconte and Th 1-3 9 Lewis.</p>
+<p>What I am going to do is finish talking about inflation, then speak of the
+evolution of temperature as a function of time, and finally give you an
+idea about nucleosynthesis: how the elements (which have nuclei) were
+formed. This will give us the opportunity to speak about three important
+aspects of the course: phase transitions, more general arguments about
+evolution of entropy, and the mass-action law.</p>
+<p>For instance: looking at a proton and an electron, we get hydrogen. Or a
+neutron and a proton: deuterium. Or deuterium and a proton: Helium-3.</p>
+<p>The universe is expanding in a homogeneous and isotropic manner. The
+physical coordinates are related to comoving coordinates by a certain
+expansion factor. This is interesting: relates speed of expansion to energy density.</p>
+<p>Did speak on Wednesday regarding what happens in the early universe: phase
+transition (postulate). We believe that we had a phase transition, where
+what was happening as a function of order parameter, the Landau free energy
+developed a minimum, and suddenly the universe wants to go to this
+point. When it does that, it discovers that it has a nonzero energy
+density, and it begins to expand.</p>
+<p>On the order of 60E40.</p>
+<p>So: why do we need something like that? Need to justify cosmic microwave
+background. Remember this is the radiation from the plasma in the early
+universe at a given temperature. Recombined into hydrogen, so universe
+became transparent.</p>
+<p>Things were close (in causal contact). Then things were put very far
+apart. In GR, there is no problem with having the space expand faster than
+the speed of light.</p>
+<p>That was the main reason for inflation: some reason for extremely fast
+expansion of universe, which then settles down.</p>
+<p>Space flat, so no worry regarding initial conditions. Quantum fluctuations
+are frozen in and expand with space. Will provide seed for large-scale structure.</p>
+<p>Plot the power spectrum of density fluctuations as a function of
+spatial frequency <mathjax>$k$</mathjax> (just a Fourier transform): we have a power spectrum
+that looks roughly parabolic (with a maximum), and we understand the
+shape. If we measure the microwave background on this plot and extrapolate
+from the expansion factor, it links perfectly with what we measure in the
+galaxies in terms of structure. That was the great excitement about the
+cosmic microwave background when we first measured these fluctuations: they
+were right where we needed them to be.</p>
+<p>We have no real mechanism for this field. Cosmology points to physics at
+much higher energy: <mathjax>$10^{16}$</mathjax> GeV. Best accelerators are <mathjax>$10^4$</mathjax> GeV. One of
+the reasons for switching to cosmology.</p>
+<p>What can we test? Measure polarization of microwave background: see
+gravitational field. This is very much related to discussions in the media.</p>
+<p>What is constant in this comoving sphere during the expansion? The entropy
+-- no heat transfer. The energy cannot be constant: the sphere is working
+against the rest of the universe. There is this pressure. The volume
+increases, <mathjax>$-pdV$</mathjax> acts, and so the energy inside decreases by <mathjax>$pdV$</mathjax>. The
+entropy on the other hand should not decrease: the universe is isotropic and
+homogeneous.</p>
+<p>No generation of entropy if there are no first order phase transitions
+(which would mean irreversibility).</p>
+<p>This tells us that entropy per unit comoving volume has to be constant. <mathjax>$T
+\propto \frac{1}{a(t)}$</mathjax>. Remember: we did show that the energy density for
+relativistic particles (which dominated the entropy during the early
+universe) goes as <mathjax>$T^4$</mathjax>. There is a factor corresponding to the degrees of
+freedom, which is 1 per polarization for bosons and <mathjax>$\frac{7}{8}$</mathjax> per
+polarization for fermions. So the entropy density <mathjax>$\sigma \propto g^* T^3$</mathjax>, which
+is related to the number of relativistic particles.</p>
+<p>So the entropy per comoving volume is <mathjax>$S_{com} \propto a^3 g^* T^3$</mathjax>, which is
+constant; for fixed <mathjax>$g^*$</mathjax>, this gives us <mathjax>$T \sim
+\frac{1}{a(t)}$</mathjax>. Same result as in GR.</p>
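<p>The scaling argument above can be sketched numerically. This is our own illustration (not from the lecture), with all constants of proportionality set to 1: holding the comoving entropy <mathjax>$\propto g^* a^3 T^3$</mathjax> fixed gives <mathjax>$T \propto g^{*-1/3}/a$</mathjax>.</p>

```python
# Illustrative sketch: comoving entropy conservation, S_com ~ g* a^3 T^3 = const,
# implies T ~ (g*)^(-1/3) / a. Units and prefactors are suppressed.
def temperature(a, g_star, T_ref=1.0, a_ref=1.0, g_star_ref=1.0):
    """Temperature after expanding from (a_ref, g_star_ref, T_ref) to (a, g_star)."""
    return T_ref * (a_ref / a) * (g_star_ref / g_star) ** (1.0 / 3.0)

# With g* fixed, doubling the scale factor halves the temperature.
print(temperature(2.0, 1.0))  # -> 0.5
# If degrees of freedom drop (a species annihilates away), the remaining
# plasma is slightly hotter than pure 1/a scaling would give: the "kink".
print(temperature(2.0, 0.5) > temperature(2.0, 1.0))  # -> True
```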
+<p>Reason why temperature goes as <mathjax>$\frac{1}{a(t)}$</mathjax> is related to the Doppler
+shift. As the universe expands, the wavelength of the relativistic
+particles is stretching out, and as the wavelength increases, the frequency
+decreases; therefore the temperature decreases.</p>
+<p>If there is no change of degrees of freedom, <mathjax>$T \propto \frac{1}{a(t)}$</mathjax>. In
+the parabolic graph, we used that we can compute (from first principles) the
+recombination temperature of hydrogen to be ~3000 K. The universe was 1000x smaller
+than it is now.</p>
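<p>The arithmetic behind that claim, using only the numbers quoted above and <mathjax>$T \propto \frac{1}{a(t)}$</mathjax>:</p>

```python
# Quick check with the lecture's round numbers: hydrogen forms at ~3000 K,
# and the universe has expanded ~1000x since, so T scales down by 1000.
T_recombination = 3000.0   # K, when the universe becomes transparent
expansion_since = 1000.0   # a_now / a_recombination
T_today = T_recombination / expansion_since
print(T_today)  # -> 3.0, close to the measured CMB temperature of 2.725 K
```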
+<p>If the number of degrees of freedom is changing, you have a kink in the
+temperature evolution.</p>
+<p>With high enough temperature compared to binding energy, the product,
+for all practical purposes, does not exist.</p>
+<p>Indeed, in this case, the chemical potentials add.</p>
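<p>A minimal sketch of the mass-action logic (our own illustration, with units and prefactors suppressed): for <mathjax>$p + e \leftrightarrow H$</mathjax> in equilibrium the chemical potentials add, and the Saha relation makes the ionized-to-bound ratio scale like <mathjax>$T^{3/2} e^{-B/kT}$</mathjax>, with <mathjax>$B$</mathjax> the binding energy.</p>

```python
import math

# Toy version of the Saha / mass-action scaling: n_p * n_e / n_H grows like
# T^(3/2) * exp(-B/kT). All dimensionful prefactors are dropped; only the
# trend with temperature is illustrated.
def ionized_to_bound_ratio(kT_over_B):
    return kT_over_B ** 1.5 * math.exp(-1.0 / kT_over_B)

hot = ionized_to_bound_ratio(10.0)    # kT = 10 B
cold = ionized_to_bound_ratio(0.05)   # kT = B / 20
# At temperatures well above the binding energy, the bound "product"
# effectively does not exist; well below, it dominates.
print(hot > 1e6 * cold)  # -> True
```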
+<p>Problem: running out of time. Try to tell a little bit about how we discuss
+the mass-action law.</p></div><div class='pos'></div>
<script src='mathjax/unpacked/MathJax.js?config=default'></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
127 phys137a.html
@@ -660,7 +660,7 @@
<p><a name='30'></a></p>
<h1>Physics 137A: Quantum Mechanics</h1>
-<h2>Wednesday, April 9</h2>
+<h2>Wednesday, April 11</h2>
<p>Radial wave function, Exponential that goes like <mathjax>$e^{-r/an}$</mathjax>. Scale should
jump out at you. You get this feeling as you go to higher and higher
states, stuff gets weaker, and whatever. Solns to Lapl's eq on sphere;
@@ -676,7 +676,130 @@
\expfrac{q!}{k!}{2}\frac{x^k}{(q-k)!}$</mathjax>. Thus our wave function <mathjax>$\psi_{n\ell
m} (r,\theta, \phi) = \sqrt{\expfrac{2}{na}{3}\frac{(n-\ell - 1)!}{2n\bracks{(n+\ell)!}^3}}
-e^{-r/na}\expfrac{2r}{na}{\ell}L_{n-\ell-1}^{2\ell+1}\parens{\frac{2r}{na}}Y_{\ell m}(\theta, \phi)$</mathjax>.</p></div><div class='pos'></div>
+e^{-r/na}\expfrac{2r}{na}{\ell}L_{n-\ell-1}^{2\ell+1}\parens{\frac{2r}{na}}Y_{\ell m}(\theta, \phi)$</mathjax>.</p>
+<p>Last couple of little properties: orthogonality. Must be careful with
+orthogonality. A lot comes from the <mathjax>$Y_{\ell m}$</mathjax>s. Orthogonality in the
+angular components enforced by <mathjax>$Y_{\ell m}$</mathjax>, which we've already shown.</p>
+<p><a name='31'></a></p>
+<h1>Physics 137A: Quantum Mechanics</h1>
+<h2>Friday, April 13</h2>
+<p>Nothing much.</p>
+<p><a name='32'></a></p>
+<h1>Physics 137A: Quantum Mechanics</h1>
+<h1>Monday, April 16</h1>
+<p>Think about <mathjax>$e^{i\vec{k} \cdot \vec{r}}\vec{\sigma}$</mathjax>.</p>
+<p>Recap: commutators. Raising/lowering (ladder) operators. Levi-Civita
+tensor. Maximal commuting set of operators.</p>
+<p>Ladder operators are not hermitian: <mathjax>$L_\pm^\dagger = L_\mp$</mathjax>. They have very interesting properties.</p>
+<p>Couple things to think about. Phase transitions.</p>
+<p>We now have a really interesting trick: if we wanted to, we could have
+written this <mathjax>$\ket{lm} \equiv \parens{L_-}^{l-m}\ket{ll}$</mathjax> (up to normalization).</p>
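<p>The lowering trick can be checked numerically. A sketch with <mathjax>$\hbar = 1$</mathjax> for <mathjax>$l = 1$</mathjax> (our own example): applying <mathjax>$L_-$</mathjax> to the top state <mathjax>$\ket{1,1}$</mathjax> lands on <mathjax>$\ket{1,0}$</mathjax> up to normalization.</p>

```python
import numpy as np

# Basis ordered m = +1, 0, -1; matrix elements from
# L_-|l,m> = sqrt(l(l+1) - m(m-1)) |l,m-1>, with hbar = 1.
Lminus = np.array([[0.0, 0.0, 0.0],
                   [np.sqrt(2), 0.0, 0.0],
                   [0.0, np.sqrt(2), 0.0]])
top = np.array([1.0, 0.0, 0.0])        # |1,1>
lowered = Lminus @ top
lowered /= np.linalg.norm(lowered)     # renormalize
print(lowered)                         # proportional to |1,0>
```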
+<p>Maybe we can avoid some algebra. Invoke Wigner-Eckart theorem?</p>
+<p>Two states with just one value of <mathjax>$m$</mathjax>. Must somehow be able to do these
+operations efficiently.</p>
+<p>Choice of bases is just a matter of convention. Underlying all of this must
+be some result, some physical quantity that is independent of your
+coordinate system. This is how you figure it out.</p>
+<p>All of these matrix elements -- I'll write down the notation -- can be
+written as a single element with no dependence on <mathjax>$m$</mathjax> (3-j symbol?).</p>
+<p>Will see this stuff if you start to use it in practical
+applications. Annoying, lots of algebra, but you get simple results that
+allow you to do more stuff efficiently.</p>
+<p>PREVIOUSLY in quantum mechanics: we wrote down <mathjax>$L^2 = (\vec{x} \times p)^2$</mathjax>
+and stuff.</p>
+<p>These are not pulled out of a hat: written out because we know what these
+are in Cartesian coordinates, and we know how to transform from Cartesian
+to spherical. Not doing derivation because that has nothing to do with
+quantum mechanics. One of the problems for next homework: raising and
+lowering operators in spherical coordinates: <mathjax>$L_\pm \equiv \pm \hbar e^{\pm i\phi}\parens{
+\pderiv{}{\theta} \pm i\cot\theta\pderiv{}{\phi}}$</mathjax>.</p>
+<p>Zeeman: decided to break spherical symmetry by putting hydrogen atom in
+a magnetic field. Nice discretisation.</p>
+<p>First observed by Pauli: looks like you missed a quantum number. When you
+plot magnetic field, you see twice the number of states.</p>
+<p>Talk about Alfred Landé. Pauli also coming at that time (just a kid at
+that time, evidently, but did stuff like a paper on special
+relativity). Missing quantum number?! Electron spin. Thought about this
+classically. Idea was basically very interesting: source of this extra
+quantum number. (internal!). Electrons had to have half-integer spins (from
+Zeeman effect): here you would have two quantum numbers. Got 2 initially
+for g-factor -- surprising quantum-mechanically. Applied to other problem:
+fine structure splitting. Explained another factor explained by magnetic
+moments. Needed a different value of g-factor (1). Pauli had two objections:
+couldn't be right, since they were different values, and only one valid
+radius you could use: <mathjax>$r = \frac{e^2}{m c^2}$</mathjax>.</p>
+<p>Figuring out how fast this surface was rotating, had to be 100x speed of
+light. Special relativity says this is impossible. Told the same thing to Niels
+Bohr.</p>
+<p>Nine months later, two guys in Holland (also very young) had exactly the
+same idea: had notion that there must be a new quantum number, almost must
+be spin; talked to their boss Ehrenfest. Said it was either nonsense or
+very important. Then said to talk to most famous Dutch theorist at the time
+Lorentz. Lorentz said everything was nonsense. Turns out this wasn't
+stupid: within two years they figured out problems. First, not classical
+spin. Also had interaction that looked like inner product between spin and
+angular momentum. This is what gives the fine structure splitting.</p>
+<p>Should have used Dirac's equation: need relativity.</p>
+<p>Nobody ever heard about the guy who actually discovered spin. Moral: don't
+listen to anyone over 30, and publish if you have something to discuss.</p>
+<p>If you want a fantastic review article (just wonderful): Eugene Commins, from
+the atomic group, has one coming out in a couple months, called "Electron
+Spin". It talks about how the old theories were broken and how this was fixed in the new QM.</p>
+<p>So what is this spin? The idea is that spin is an intrinsic quantity and
+quantum-mechanical, so don't try to think of it as a spinning soccer
+ball. Same algebra. What we're going to do is build this with something
+that we do have classical relations for, i.e. orbital angular momentum. Go
+through cyclical permutations, and this again can be summarized in this
+relationship that <mathjax>$\comm{S_i}{S_j} \equiv i\hbar\epsilon_{ijk}S_k$</mathjax>. Once
+again, you can choose eigenfunctions with quantum numbers <mathjax>$s,
+m_s$</mathjax>. Electron has <mathjax>$s = \frac{1}{2}$</mathjax> (fermions).</p>
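<p>The spin-1/2 algebra above can be verified directly with <mathjax>$S = \frac{\hbar}{2}\sigma$</mathjax> (Pauli matrices), here with <mathjax>$\hbar = 1$</mathjax> for convenience; our own numerical check, not part of the lecture.</p>

```python
import numpy as np

# Spin-1/2 operators S = (1/2) * sigma, hbar = 1.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# [S_x, S_y] = i S_z (and cyclic permutations).
comm = sx @ sy - sy @ sx
assert np.allclose(comm, 1j * sz)

# S^2 = s(s+1) = 3/4 on every spin-1/2 state.
S2 = sx @ sx + sy @ sy + sz @ sz
assert np.allclose(S2, 0.75 * np.eye(2))
print("spin-1/2 algebra checks out")
```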
+<p>Diagonal quantum numbers: <mathjax>$S^2\ket{s m_s} = \frac{3}{4}\hbar^2 \ket{s m_s}$</mathjax>,
+<mathjax>$S_z \ket{s m_s} = \hbar m_s \ket{s m_s}$</mathjax>. You can do raising and
+lowering operators just as before: mimic what we did with angular orbital
+momentum. These particles are fundamental: point-like. Examples: <mathjax>$e^\pm,
+\mu^\pm, \tau^\pm$</mathjax> (electrons, muons, tauons). Quarks: u,d/c,s/t,b. All of
+these constitute our fermions.</p>
+<p>We also have <mathjax>$\gamma, W^\pm, Z$</mathjax>: our bosons.</p>
+<p>Composite particles act just like elementary particles. In the theory
+(i.e. the SM), composite particles such as protons and neutrons are built
+from quarks; the leptons (electrons, muons, tauons) are not.</p>
+<p>Spin-up, spin-down wave functions.</p>
+<p><a name='33'></a></p>
+<h1>Physics 137A: Quantum Mechanics</h1>
+<h1>Wednesday, April 18</h1>
+<p>Raising, lowering, Pauli matrices</p>
+<p><a name='34'></a></p>
+<h1>Physics 137A: Quantum Mechanics</h1>
+<h1>Friday, April 20</h1>
+<p>Pauli matrices. Electron in magnetic field.</p>
+<p>Larmor precession. More stuff. Not quite the Bloch sphere.</p>
+<p>Beyond SG machines!</p>
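<p>A toy numerical sketch of Larmor precession (our own example, <mathjax>$\hbar = 1$</mathjax>, arbitrary units): a spin prepared along <mathjax>$+x$</mathjax>, evolving under <mathjax>$H = -\omega S_z$</mathjax>, has <mathjax>$\langle S_x \rangle(t) = \frac{1}{2}\cos\omega t$</mathjax>.</p>

```python
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2.0   # Larmor frequency (gamma * B in physical terms)
t = 0.7
# S_z is diagonal, so the propagator exp(-iHt) can be written directly.
U = np.diag(np.exp(-1j * (-omega) * np.diag(sz) * t))
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # +x eigenstate
psi_t = U @ psi0
expect_sx = np.real(np.conj(psi_t) @ (sx @ psi_t))
# The expectation value precesses at the Larmor frequency omega.
assert np.isclose(expect_sx, 0.5 * np.cos(omega * t))
print("precession at the Larmor frequency confirmed")
```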
+<p>Introduction to addition of angular momentum. Actually important! Reason
+for picking this example: how to go further with hydrogen atom, now that
+you understand spin.</p>
+<p>People began to test spin carefully: it explained the Zeeman effect and
+specific levels of hydrogen. Splitting of <mathjax>$p_{1/2}$</mathjax>, <mathjax>$p_{3/2}$</mathjax>. No degeneracy:
+the spin of the electron interacts with the orbital motion: a magnetic field from the relative
+motion of the proton (with respect to the electron).</p>
+<p><a name='35'></a></p>
+<h1>Physics 137A: Quantum Mechanics</h1>
+<h1>Monday, April 23</h1>
+<p>Let's start up where we left off last time: Larmor frequency (electron in
+magnetic field).</p>
+<p>learn to manipulate products of angular momenta.</p>
+<p>Important: linear combination of coupled states can often be used to
+represent uncoupled states. Clebsch-Gordan coefficient.</p>
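<p>The coupled/uncoupled statement can be checked for two spin-1/2 particles (our own sketch, <mathjax>$\hbar = 1$</mathjax>): the combination <mathjax>$\ket{1,0} = \frac{1}{\sqrt{2}}(\ket{\uparrow\downarrow} + \ket{\downarrow\uparrow})$</mathjax> is an eigenstate of total <mathjax>$S^2$</mathjax> with <mathjax>$s(s+1) = 2$</mathjax>, while the singlet (minus sign) has eigenvalue 0; the <mathjax>$\frac{1}{\sqrt{2}}$</mathjax> factors are the Clebsch-Gordan coefficients.</p>

```python
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Total spin components on the two-particle (4-dimensional) Hilbert space.
Sx = np.kron(sx, I2) + np.kron(I2, sx)
Sy = np.kron(sy, I2) + np.kron(I2, sy)
Sz = np.kron(sz, I2) + np.kron(I2, sz)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
triplet0 = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

assert np.allclose(S2 @ triplet0, 2 * triplet0)   # s = 1: eigenvalue s(s+1) = 2
assert np.allclose(S2 @ singlet, 0 * singlet)     # s = 0
print("Clebsch-Gordan combination verified")
```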
+<p><a name='36'></a></p>
+<h1>Physics 137A: Quantum Mechanics</h1>
+<h1>Wednesday, April 25</h1>
+<p>Clebsch-Gordan coefficients. These correspond to a unitary (and hermitian)
+transformation between bases.</p>
+<p>Fine structure of hydrogen: taking into account the spin interaction with the
+magnetic field induced by the relative motion of the proton (Larmor precession).</p>
+<p>Quadrupoles, nuclear spin, total angular momentum of electron. Angular
+momentum of atom: <mathjax>$\vec{I}_N + \vec{J} = \vec{F}_{\mathrm{atom}}$</mathjax>.</p></div><div class='pos'></div>
<script src='mathjax/unpacked/MathJax.js?config=default'></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
8 sp2012/cs191/
@@ -5,6 +5,7 @@ Introduction -- January 17, 2012
Course Information
* Announcements on website
* Try Piazza for questions.
* GSIs:
@@ -21,14 +22,17 @@ Course Information
+ Final Project
+ In-class quizzes
+ Academic integrity policy
* What is quantum computation?
* What is this course?
* Double-slit experiment
What is Quantum Computation?
* Computers based on quantum mechanics can solve certain problems
exponentially faster than classical computers, e.g. factoring
(Shor's algorithm).
@@ -58,6 +62,7 @@ What this course will focus on is several questions on quantum computers.
Where we are for quantum computers is sort of where computers were
60-70 years ago.
* Size -- room full of equipment
* Reliability -- not very much so
* Limited applications
@@ -81,7 +86,8 @@ Quantum Cryptography
Ways to use QM to communicate securely (still safe even with Shor's).
This course
* Introduction to QM in the language of qubits and quantum gates.
* Emphasis on paradoxes, entanglement.
* Quantum algorithms.
4 sp2012/cs191/
@@ -7,8 +7,8 @@ April 3, 2012
Kevin Young:
-We'll start with how to implement qubits using spins, and eventually culminate in
+We'll start with how to implement qubits using spins, and eventually
+culminate in NMR.
Today, what we're going to do is start looking at physical implementations
of QC. Will continue through the end of next week. How to build an actual
1  sp2012/cs191/
@@ -4,6 +4,7 @@ Qubits, Superposition, & Measurement -- January 19, 2012