<p><a name='1'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>January 18, 2012</h2>
<h2>Organization</h2>
<p>Goals</p>
<ul>
<li>Deeper understanding of concepts: less mysterious.</li>
<li>Entropy</li>
<li>Free energy</li>
<li>Chemical potential</li>
<li>Statistical mechanics<ul>
<li>Fluctuations</li>
<li>Kinetic theory background ← use of simulations</li>
</ul>
</li>
<li>Recognition of physics in a "context-rich" situation, i.e. real life.</li>
<li>Acquire computational tools</li>
<li>Quantitative</li>
<li>Problem-solving skills</li>
<li>Linkages to</li>
<li>everyday life<ul>
<li>bridge between microscopic and macroscopic</li>
<li>irreversibility</li>
<li>engineering</li>
</ul>
</li>
<li>modern physics<ul>
<li>frontier</li>
<li>applications e.g. astronomy, cosmology
· condensed matter physics, low temperature</li>
</ul>
</li>
<li>History</li>
</ul>
<p>Participation</p>
<ul>
<li>Focus: Conceptual Understanding</li>
<li>We learn by construction/reconstruction of your mental models.</li>
<li>This process is both intensely personal and social (learn through
others)</li>
<li>Not a focus on grades, formulae, or shortcuts</li>
<li>Before lecture</li>
<li>Read the book and notes beforehand; because our use of clickers decreases
the in-class presentation time, there will be testing of this
reading.</li>
<li>Play with applet simulations → intuition. (You can suggest others
you find.)</li>
<li>In class</li>
<li>Beginning of class: typically a conceptual question/applet</li>
<li>Peer instruction questions:<ul>
<li>A conceptual question</li>
<li>Vote (with clickers)</li>
<li>Discussion in small groups</li>
<li>Vote again</li>
</ul>
</li>
<li>Active participation during lecture: questions, clickers</li>
<li>"One minute" random quizzes (check attention to the material)</li>
<li>Out of class</li>
<li>Homework</li>
<li>Working groups on problems if you like (but you write your own
solution)</li>
<li>Discussion sections (but try homework first)</li>
<li>Office hours (come with questions)</li>
</ul>
<p>Clickers:</p>
<ul>
<li>Systematic use in class begins Mon, Jan 23.</li>
</ul>
<p>[ More boring administrative stuff; not particularly unique to this
particular class. Largely duplicated from the syllabus. ]</p>
<p>[ note to self: 277 Cory ]</p>
<p><a name='2'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Kinetic Theory 1 / Probabilities. January 23, 2012</h2>
<p>Details:</p>
<ul>
<li>clickers</li>
<li>homework -- due in LeConte reading room</li>
<li>webcasts</li>
<li>Discussion sections</li>
</ul>
<p>Ideal Gas</p>
<ul>
<li>particle interaction leads to thermal equilibrium.</li>
<li>particle probability distributions are independent</li>
<li>particle systems where interaction time ≪ mean free time between
collisions</li>
</ul>
<h1>Motivations: statistical mechanics and fluctuations</h1>
<h2>The need for statistical mechanics</h2>
<h2>How to describe large systems.</h2>
<ul>
<li>very large</li>
<li>at finite temperature – not at rest</li>
<li>velocity <mathjax>$v \approx 300$</mathjax>m/s @ 300K.</li>
<li>Constant collisions</li>
<li>mean free path <mathjax>$\lambda \approx 0.1$</mathjax> μm.</li>
<li>Probabilistic description. 1 particle</li>
<li>Classically<ul>
<li>Have to track both position and velocity/momentum.</li>
<li>We will introduce a phase space. The probability distribution
on phase space = position × momentum.</li>
<li>Problem is that states are treated as continuous.</li>
</ul>
</li>
<li>Quantum – states are actually discrete / countable.</li>
</ul>
<h2>N particles</h2>
<p>In many cases it is a good approximation to treat particles as
independent. We will call this an ideal gas.</p>
<h2>Temperature, pressure, entropy</h2>
<p>Temperature as a measure of the mean energy of excitations of the system. Two
caveats: 1) there are usually more degrees of freedom than the spatial ones
(e.g. spin, as in ferromagnetism), and 2) other effects matter: in a quantum gas of
fermions, the Pauli exclusion principle (i.e. we cannot put two particles in the same
state) means that we have to stack particles up in energy, so their mean energy is
very large even at 0K.</p>
<p>Pressure as a measure of energy density or "average" force per unit area on
walls of container.</p>
<p>Entropy as a measure of the state of disorder of a system. Modern
definition: <mathjax>$\sigma = -\sum p_i \log p_i$</mathjax>. 0 if in one state, max if flat
distribution.</p>
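<p>A quick worked illustration of those two limits (an added example): for a system
with two states of probabilities <mathjax>$p$</mathjax> and <mathjax>$1-p$</mathjax>,</p>
<p><mathjax>$$\sigma = -p\log p - (1-p)\log(1-p),$$</mathjax></p>
<p>which is 0 when <mathjax>$p = 0$</mathjax> or <mathjax>$p = 1$</mathjax> (one state), and maximal, <mathjax>$\sigma = \log 2$</mathjax>, at the
flat distribution <mathjax>$p = 1/2$</mathjax>.</p>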
<p><a name='3'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Kinetic Theory 2 / Probabilities. January 25, 2012</h2>
<h1>Housekeeping.</h1>
<h1>Fast introduction to kinetic theory.</h1>
<ul>
<li>Equilibrium</li>
<li>Vibrations</li>
</ul>
<h1>Probabilities.</h1>
<ul>
<li>Distributions</li>
<li>Moments</li>
</ul>
<h1>How a system reaches equilibrium</h1>
<p>Diffusion. True also in probability space. An isolated system tends to
evolve toward maximum flatness.</p>
<p>We won't speak classically of position + momentum, but rather of
states. Quantum states. The same thing happens in the case of probability.</p>
<p>It turns out that when all the properties are equal, we have maximum
entropy. Equilibrium occurs when "you are maximally flat".</p>
<h1>Fluctuations</h1>
<p>Fluctuations are a fact of life whenever you don't have an infinite number of
degrees of freedom, which is always the case. One of the fundamental consequences
is noise. Noise is intrinsic to the finiteness of the number of degrees of freedom.</p>
<p>Central limit theorem. Fluctuations decrease as samples increase.</p>
<h1>Probabilities</h1>
<p>Probabilistic event: occurs in an uncontrolled manner, described by random
variables. parameters are fixed (even if unknown).</p>
<h1>Discrete case</h1>
<ul>
<li>consider bins.</li>
</ul>
<p>An example of discrete probabilities is a histogram. Raw counts.</p>
<p>Probabilities cannot be determined exactly from experiments; they can only be
estimated from observed frequencies. Probability distributions add up to 1.
Probabilities cannot be negative.</p>
<h1>Continuous case</h1>
<ul>
<li>Actually a lie.</li>
</ul>
<p>Moments. Stuff.</p>
<p><a name='4'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Kinetic Theory 3 / Probabilities. January 27, 2012</h2>
<p>Various probability stuff. Continuous distributions. Sum/average of
independent events. Central limit theorem.</p>
<p><mathjax>$$
P(s) \ge 0, \quad \sum_s P(s) = 1
\\ \avg{y} = \sum_s y(s)P(s) \text{ [ this is the first moment ]}
\\ y(s) = 35\,\delta_{sk} \Rightarrow \avg{y} = \sum_s 35\,\delta_{sk}P(s) = \frac{35}{38}.
$$</mathjax></p>
<p>Variance:</p>
<p><mathjax>$\sigma^2 = \avg{(y(s) - \avg{y})^2} = \avg{y^2} - 2\avg{y}\avg{y} + \avg{y}^2
= \avg{y^2} - \avg{y}^2$</mathjax></p>
<p>root mean square (rms) <mathjax>$\equiv$</mathjax> standard deviation</p>
<p>Independence. Usually, if I have two random variables, x and y, the
probability of <mathjax>$P(x,y) = P(x|y)P(y) \neq P(x) \times P(y)$</mathjax>. We say we have
independence iff <mathjax>$P(x,y) = P(x)P(y)$</mathjax>. In other words, <mathjax>$P(x|y) = P(x)$</mathjax>.</p>
<p>You can define a correlation coefficient to be
<mathjax>$\rho \equiv \frac{\avg{(x - \avg{x})(y - \avg{y})}}{\sigma_x\sigma_y}$</mathjax>.</p>
<p>Independence <mathjax>$ ⇒ \rho = 0$</mathjax>. Converse not necessarily true.</p>
<p>Continuous distributions. Histograms. The case where a variable is
continuous. We now have probability <em>densities</em>.</p>
<p><mathjax>$$
f(x)dx = g(y)dy = f(x(y))\deriv{x}{y}dy
\\ f(x) \ge 0, \quad \int f(x)dx = 1.
$$</mathjax></p>
<p>Moments: we can define moments in exactly the same way as before.
The moment of a variable <mathjax>$y(x)$</mathjax></p>
<p><mathjax>$$
\avg{y(x)} = \int y(x)f(x)dx.
\\ \mu = \avg{x} = \int xf(x)dx.
\\ \sigma^2 = \avg{x^2} - \avg{x}^2 = \int x^2 f(x)dx - \parens{\int xf(x)dx}^2.
$$</mathjax></p>
<p><mathjax>$f(x,y)dxdy = g(x)dx h(y)dy$</mathjax>. Factoring works the same way, if our
variables are independent.</p>
<p>Normal distributions: Gaussian.</p>
<p>It is very important to put the differential element (<mathjax>$dx$</mathjax> or <mathjax>$dy$</mathjax>) because
the function changes depending on what the differential element is. The
histogram depends on what variable you choose to plot. If I choose <mathjax>$x$</mathjax> or
<mathjax>$x^2$</mathjax>, my histogram will be different. Usually.</p>
<p>The mean is in the middle of a normal distribution because the third
moment is 0.</p>
<p>Full-width half maximum (FWHM) <mathjax>$\approx 2.3\sigma$</mathjax></p>
<p>A is called the mode, i.e. the maximum. In a distribution with nonzero skew
(like the Maxwell-Boltzmann), the mode is different from the mean.</p>
<p>mean: location.</p>
<p>standard deviation: width.</p>
<p>skewness: symmetry.</p>
<p>kurtosis: peakedness.</p>
<p>The Fourier transform of the distribution is the characteristic function. The log
of the characteristic function generates the cumulants.</p>
<p>Sum of random variables:</p>
<p><mathjax>$$
x \equiv y + z
\\ \avg{x} = \avg{y} + \avg{z}.
\\ \sigma^2_x = \sigma^2_y + \sigma^2_z + 2\rho\sigma_y\sigma_z.
\\ \text{Independence} \implies \rho = 0; \sigma^2_x = \sigma^2_y + \sigma^2_z
\\ h(x)dx = (f*g)(x)dx \implies \text{the cumulants add!}
$$</mathjax></p>
<p>Proof: Convolution in original space is equivalent to product in Fourier
space. Hence for a sum, the characteristic functions multiply and the logs
of characteristic functions add.</p>
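<p>A concrete check (an added example): for a Gaussian of mean <mathjax>$\mu$</mathjax> and variance
<mathjax>$\sigma^2$</mathjax> the characteristic function is <mathjax>$\exp(i\mu k - \sigma^2 k^2/2)$</mathjax>, so for the sum of
two independent Gaussians</p>
<p><mathjax>$$\tilde h(k) = \tilde f(k)\,\tilde g(k)
= \exp\parens{i(\mu_1+\mu_2)k - \tfrac{1}{2}(\sigma_1^2+\sigma_2^2)k^2},$$</mathjax></p>
<p>i.e. the sum is again Gaussian, and the means and variances (the first two cumulants) add.</p>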
<p>Central limit theorem: Cool if our variables actually are independent.</p>
<p><a name='5'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Entropy and Stuff. January 30, 2012</h2>
<h1>Homework</h1>
<p>First one is due today, second one is posted.</p>
<h1>Central Limit Theorem</h1>
<p>Taking N independent random variables <mathjax>$x_i$</mathjax> of average <mathjax>$\mu_0$</mathjax> and of
variance <mathjax>$\sigma_0^2$</mathjax>, form the experimental average <mathjax>$A = \frac{1}{N}\sum
x_{i}$</mathjax>. Then, as <mathjax>$N\to\infty$</mathjax>, <mathjax>$f(A)dA \to \frac{1}{\sqrt{2\pi}\sigma}
\exp\parens{-\frac{(A-\mu)^2}{2\sigma^2}}dA$</mathjax>, with <mathjax>$\mu = \mu_0, \sigma^2 =
\frac{\sigma_0^2}{N}$</mathjax>.</p>
<p>This is different from <mathjax>$\avg{A}$</mathjax>, which would just be an integral.</p>
<p>The other thing is that <mathjax>$\sigma^2 = \frac{\sigma_0^2}{N}$</mathjax>. The variance
decreases as <mathjax>$\frac{1}{N}$</mathjax>, or the standard deviation decreases as
<mathjax>$\frac{1}{\sqrt{N}}$</mathjax>. This is fairly typical of stochastic processes, where
the relative width decreases as <mathjax>$\frac{1}{\sqrt{N}}$</mathjax>. Overall width will
increase if you don't normalize, but relative width decreases.</p>
<p>The relative widths of kurtosis and skewness go to zero as we add more
terms.</p>
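<p>To get a feel for the numbers (an added estimate): for a macroscopic sample with
<mathjax>$N \approx 10^{22}$</mathjax> particles, the relative fluctuation of an average scales as</p>
<p><mathjax>$$\frac{\sigma}{\mu} \sim \frac{1}{\sqrt{N}} \approx 10^{-11},$$</mathjax></p>
<p>which is why macroscopic quantities such as temperature look perfectly sharp.</p>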
<h2>Consequences for Thermodynamics</h2>
<p>Because of large number of particles, averages are very peaked around mean.</p>
<ul>
<li>System well characterized by mean = "macroscopic quantity".</li>
<li>Fluctuations are very small.</li>
<li>Not a lot of difference between most probable value, mean value,
and experimental value.</li>
</ul>
<p>Dealing with systems with large degrees of freedom. Temperature, for
instance, is related largely to the kinetic energy of the particles. It
will be very well-defined and have very few fluctuations if we are just
measuring the mean of said particles, and it will be very peaked around the
mean.</p>
<p>The central limit theorem makes thermodynamics meaningful.</p>
<h1>Quantum States of a System</h1>
<h2>States of a System</h2>
<p>Not covering spins. Also, marginal definition of entropy.</p>
<p>(mostly chapters 1 and 2 of Kittel and Kroemer)</p>
<ul>
<li>States / Configurations</li>
<li>Probabilities of States</li>
<li>Fundamental assumptions</li>
<li>Entropy</li>
<li>Counting States</li>
<li>Entropy of an ideal gas.</li>
</ul>
<p>State = Quantum State.</p>
<ul>
<li>Well-defined, unique.</li>
<li>Discrete ≠ "classical thermodynamics", where entropy depended on the
resolution ΔE.</li>
</ul>
<p>Boltzmann for reasons that he did not fully understand arrived at the
conclusion that he had to have discrete states. This was before Planck and
quantum mechanics. Boltzmann was always doubting himself to the point that
he took his own life. So when you are feeling a bit of despair, think about
Boltzmann.</p>
<p>It's amazing that a guy like that who contributed so much to modern physics
was unsure of himself.</p>
<p>Configuration = Macroscopic specification of system</p>
<ul>
<li>Macroscopic Variables</li>
<li>Extensive: U, S, F, H, V, N (not normalized)</li>
<li>Intensive: T, P, μ (chemical potential)</li>
<li>Not unique: depends on level of details needed.</li>
<li>Variables + constraints ⇒ not independent.</li>
</ul>
<p>Many microstates (states) correspond to a single macrostate
(configuration).</p>
<h2>Quantum Mechanics in 1 transparency</h2>
<p>Fundamental postulates</p>
<ul>
<li>State of one particle is characterized by a wave function.</li>
<li>Probability distribution = |Ψ(x,t)|^2 with 〈Ψ|Ψ〉 = ∫Ψ*(x)Ψ(x)dx = 1.</li>
<li>Physical quantity → Hermitian operator.</li>
<li>In general, not fixed outcome. Expected value of Ô = 〈Ψ|Ô|Ψ〉.</li>
<li>Eigenstate ≡ state with a fixed outcome e.g. Ô|Ψ〉 = o|Ψ〉, where o
is a number.</li>
<li>A finite system has discrete eigenvalues.</li>
</ul>
<p>1-dimensional case, infinitely deep square well. Solve via separation
of variables, construct Fourier series.</p>
<p>If you solve this equation, you can show that in that case, the wave
function has to go to 0 at the wall.</p>
<p>That's just a particle in a box. If you ask yourself what is the momentum
of this particle, it is actually a superposition of two momenta. It
does not have a well-defined momentum, since it is going back and forth.</p>
<p><a name='6'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Entropy and Stuff. Feb 1, 2012</h2>
<p>REMINDER: states ⇒ microstates; configurations ⇒ macrostates.</p>
<p>Overview:</p>
<ul>
<li>Discrete states</li>
<li>Isolated system in equilibrium</li>
</ul>
<p>Postulate; H theorem. One of the finest works in physics by
Boltzmann. Shows that entropy is increasing. The probability distributions
of the states tend to equalize. Whatever initial probabilities tend to
evolve toward equilibrium.</p>
<ul>
<li>Counting states</li>
</ul>
<h1>Quantum Mechanics (again)</h1>
<p>Particle in a box (or, if you like, an infinite square well). By the axioms
of quantum mechanics, these solutions are identical to the modes of a
vibrating string.</p>
<p>[ clicker question regarding distance between momentum states. Talk
about Uncertainty principle. ]</p>
<p>h/2L (i.e. πℏ/L): distance between momentum states.</p>
<p>quantization comes from the combination of both the eigenvalue equation as
well as the specific boundary conditions.</p>
<p>Same thing happens with a hydrogen atom. We have an electron orbiting
around the proton, and Bohr said that if you go around the nucleus by 2π, your
wave function should come back on itself. And then if you solve the radial
Schrödinger equation, it is the requirement that the solution be finite at
the origin and 0 at infinity which gives the quantization of energy.</p>
<p>Basically resonance conditions, like a vibrating string, will give us
discrete energy levels.</p>
<h1>SPIN</h1>
<p>Particles can carry a spin – a unit of angular momentum. Actually, they can
carry an angular momentum, in half-integer or integer multiples of ℏ: the
half-integer ones are known as fermions, and the integer ones are known as
bosons. Very different.</p>
<p>If I have a spin of 1, it can point up, nowhere, or down. The photon, which
is massless, is spin 1 with 2 spin states. (exception: usu. spin s ⇒ 2s+1
states.)</p>
<p><mathjax>$\epsilon = mB$</mathjax>, where m = magnetic moment, B = magnetic field. emission of
photons, directions of spin.</p>
<p>Important implication in medicine: MRI is the magnetic resonance
imaging. Uses this property to see where the spins of the hydrogen
are. Just looking at hydrogen, basically.</p>
<p>Energy now is proportional to r².</p>
<h1>Fundamental postulate</h1>
<p>Probabilistic description of state of a system.</p>
<p>Systems in equilibrium. An isolated system in equilibrium is equally likely
to be in any of its accessible states. Kittel does not specify in
"equilibrium". It does not matter if we are close to equilibrium.</p>
<h2>Consequences</h2>
<p>Probability of a configuration:</p>
<ul>
<li>Probability of configuration: simple counting problem, normalized.</li>
</ul>
<p>This allows us to compute probabilities of configurations. This method is
conventionally called a microcanonical computation. Characterized by being
very awkward, cumbersome, difficult to implement, and so on.</p>
<h1>Entropy</h1>
<p><mathjax>$\sigma = -\sum p \log p = -H$</mathjax>. H = Negentropy = Information (Shannon).</p>
<p>For an isolated system in equilibrium: identical to Kittel. In classical
thermodynamics, definition usually used is <mathjax>$dQ = TdS$</mathjax>.</p>
<p>Basically, we have a nondeterministic finite-state machine (weighted
probabilities and all) being run many times simultaneously. H theorem tells
us it will reach equilibrium i.e. all currents will cancel.</p>
<p><mathjax>$\Gamma_{rs} = \Gamma_{sr} ⇒ \text{ H theorem}$</mathjax>. Symmetry of the transition
rate will cause the evolution of the probabilities to converge eventually
to a flat distribution (<em>of microstates</em>). Else you have a nonzero
divergence in more than one state.</p>
<p>Stirling approximation: number of states goes to a Gaussian.</p>
<p><a name='7'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Entropy and Stuff. Feb 3, 2012</h2>
<p>homework problem #2:
<mathjax>$\exp(u^2 - 2\rho uv + v^2) \Rightarrow \exp(u^2 - \rho^2 u^2 + (v-\rho u)^2)$</mathjax></p>
<h1>Counting States: Discrete States</h1>
<p>With just counting states and some assumptions as to what pressure and
temperature are, we have PV = NkT and U = 3NkT/2.</p>
<h2>Density of spatial states per unit phase space</h2>
<p>Phase space: Position space (x) <mathjax>$\otimes$</mathjax> (p) Momentum space.</p>
<p>Density of quantum states (orbital states) is <mathjax>$\frac{1}{h^n}$</mathjax> per particle
(n ≡ dimensions). That is, if I have a volume in the phase space, I can
count the number of orbital states for one particle by dividing this volume
by <mathjax>$h^3$</mathjax> (in three dimensions).</p>
<p>The volume of the phase space is just the integral of <mathjax>$\frac{d^3 x d^3 p}{h³}$</mathjax>.</p>
<h2>Ideal Gas</h2>
<p>Now consider N particles in weak interactions. Total energy U is
constant. <mathjax>$g = \Pi \int \frac{d^3 xd^3 p}{h^{3N}}$</mathjax>. Insert a <mathjax>$\delta(\sum
\frac{p_{i}^2}{2M} - U)$</mathjax>. Basically imposing that the total energy of my
system is U. The product of <mathjax>$dx_i$</mathjax> will give <mathjax>$V^N$</mathjax>. So there is no problem
there. But what about the <mathjax>$d^3 p_i\, \delta(\sum...)$</mathjax>? It imposes that the sum of
the <mathjax>$\frac{p_i^2}{2M}$</mathjax> be equal to <mathjax>$U$</mathjax>. I'm looking now at the surface of a
sphere with a certain radius.</p>
<p>If we have one particle in one dimension, we have only one momentum
space. In large dimensions, <mathjax>$g \propto \frac{V^N U^{(3N-1)/2}}{h^{3N}}$</mathjax>.</p>
<p>[ Reasoning: for momentum integral: we need to conserve energy. ]</p>
<p>[ We are basically speaking of the surface of a volume in 3N-space. ]</p>
<p><mathjax>$\sigma = \frac{S}{k} = \log(g) = \log(V^N U^{3N/2}) + \text{const}
= N\log(V) + \frac{3N}{2}\log(U) + \text{const}.$</mathjax></p>
<p>In that case, can write <mathjax>$dU = TdS - pdV$</mathjax>. Or, if you prefer, <mathjax>$\tau d\sigma
- pdV$</mathjax>.</p>
<p>Solve for U: <mathjax>$U = A\exp(2\sigma/3N)V^{-2/3}$</mathjax>.</p>
<p><mathjax>$\tau = \pderiv{U(\sigma, V, N)}{\sigma} = \frac{2U}{3N} \iff U =
\frac{3N\tau}{2} = \frac{3NkT}{2}$</mathjax>.</p>
<p><mathjax>$p = -\pderiv{U(V, \sigma, N)}{V} = \frac{2U}{3V} \iff pV = N\tau ≡ NkT$</mathjax>.</p>
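<p>Equivalently (an added check, using the definitions <mathjax>$\frac{1}{\tau} = \pderiv{\sigma}{U}|_V$</mathjax>
and <mathjax>$\frac{p}{\tau} = \pderiv{\sigma}{V}|_U$</mathjax> introduced below), taking the derivatives of
<mathjax>$\sigma(U,V)$</mathjax> directly:</p>
<p><mathjax>$$\frac{1}{\tau} = \pderiv{\sigma}{U}\Big|_V = \frac{3N}{2U} \implies U = \frac{3N\tau}{2},
\qquad \frac{p}{\tau} = \pderiv{\sigma}{V}\Big|_U = \frac{N}{V} \implies pV = N\tau.$$</mathjax></p>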
<p><a name='8'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Entropy and Stuff. Feb 6, 2012</h2>
<p>[[ Talk about clickers. ]]</p>
<h2>So far:</h2>
<p><mathjax>$\sigma = -\sum p \log p$</mathjax> ✓</p>
<p>H theorem: with an isolated system, probabilities of states evolve to being
equal.</p>
<p>Consequence of the H theorem: If I am looking at the probability of a
configuration, the probability is equal to (states in configuration) /
(total states)</p>
<p><mathjax>$\sigma = \log (g_t)$</mathjax> (mathematically equivalent to the first statement)</p>
<p>Counting of states to get the entropy.</p>
<p>Density of orbitals (quantum spatial wave functions) in phase space is
<mathjax>$\frac{1}{h^d}$</mathjax> (dimension of space)</p>
<p>Once again: phase space <mathjax>$\equiv$</mathjax> position space (x) <mathjax>$\otimes$</mathjax> (p) momentum
space. Good way to compute the number of spatial states. degrees of freedom
(spin, rotation, vibration).</p>
<p>Number of states g (spatial states) of a system of energy U in a volume V is
<mathjax>$g \sim U^{3N/2}V^N = \exp(\sigma)$</mathjax>; <mathjax>$\tau \equiv kT, \sigma \equiv \frac{S}{k}$</mathjax>.</p>
<p><mathjax>$\tau = \pderiv{U}{\sigma}|_V \implies U = \frac{3}{2}N\tau$</mathjax></p>
<p><mathjax>$P = -\pderiv{U}{V}|_\sigma \implies PV = N\tau$</mathjax></p>
<p>Just as natural, if not more so, to work with <mathjax>$\sigma(U,V) \equiv
\log(V^N U^{3N/2})$</mathjax>.</p>
<p><strong>We must be careful to note, when working with partial derivatives,
which variables we are keeping constant.</strong></p>
<p>This is a very useful way of defining pressure and temperature once we have
counted states.</p>
<p><mathjax>$\tau = \frac{1}{\pderiv{\sigma}{U}|_{V,N}}, p = \tau \pderiv{\sigma}{V}|_{U,N}$</mathjax></p>
<p>We were starting with the phase space for our <mathjax>$N$</mathjax> particles: <mathjax>$\frac{d^{3N}x
d^{3N}p}{h^{3N}}$</mathjax>: density of our states. We want to then integrate over
the volume, but we choose the states such that U fixed. Represents in our
3N-dimensional momentum space a sphere.</p>
<p><mathjax>$\delta\left[\sqrt{\sum_{i,k} p^2_{ik}} - \sqrt{2MU}\right]$</mathjax>. Take advantage of sifting
property of <mathjax>$\delta$</mathjax>.</p>
<p>Radius of the sphere in 3N-dimensional momentum space: the surface area of a sphere
of radius <mathjax>$r$</mathjax> in <mathjax>$d$</mathjax> dimensions scales as <mathjax>$r^{d-1}$</mathjax>. In the
general case, we'll have <mathjax>$\Omega_{3N}r^{3N-1}$</mathjax>.</p>
<p>Thus we have <mathjax>$\frac{\Omega_{3N}\sqrt{(2MU)^{3N-1}}}{h^{3N}}$</mathjax>.</p>
<p><mathjax>$\delta$</mathjax> has dimension of <mathjax>$\frac{1}{p}$</mathjax>. <mathjax>$\int \delta(p)dp = 1$</mathjax></p>
<p>Sackur Tetrode formula: entropy of an ideal gas. <mathjax>$\frac{S}{k} =
N(\log(\frac{n_Q}{n}) + 5/2)$</mathjax></p>
<p><a name='9'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Entropy and Stuff. Feb 8, 2012</h2>
<p>Divide <mathjax>$g_{t}$</mathjax> by a missing <mathjax>$N!$</mathjax>, which is from indistinguishability of
particles / quantum states. Also, experiments were off by a very large
factor.</p>
<p><mathjax>$g_t = \Omega_{3N}\sqrt{2M\Delta U}V^N U^{(3N-1)/2}/(N!h^{3N})$</mathjax></p>
<p>Quantum density. What remains of our counting states.</p>
<p><mathjax>$\sigma = N(\log(n_Q/n) + 5/2)$</mathjax>.</p>
<p><mathjax>$n = N/V$</mathjax>, <mathjax>$n_Q = \parens{\frac{2\pi M}{h^2} \frac{2U}{3N}}^{3/2}$</mathjax></p>
<p>Free expansion of gas (isolated system ⇒ no particle/heat exchange): final
temperature = initial temperature.</p>
<p>Entropy is increasing, however, since volume is increasing.</p>
<p>Chapter Three:</p>
<h1>Equilibrium, Thermodynamics, Potential</h1>
<p>RECALL:</p>
<p><mathjax>$\frac{1}{\tau} = \pderiv{\sigma}{U}|_{V,N}$</mathjax></p>
<p><mathjax>$\frac{p}{\tau} = \pderiv{\sigma}{V}|_{U,N}$</mathjax></p>
<p><mathjax>$\frac{\mu}{\tau} = -\pderiv{\sigma}{N}|_{U,V}$</mathjax></p>
<p>For an ideal gas in 3-space:</p>
<p><mathjax>$$
U = \frac{3}{2}N\tau
\\ \tau \equiv kT
\\ pV = N\tau
$$</mathjax></p>
<h2>States of a combination of two systems</h2>
<p>Take 2 systems and put them in contact.</p>
<p>Put them in weak interactions <mathjax>$U = U_1 + U_2, V = V_1 + V_2, N = N_1 + N_2$</mathjax></p>
<p>Number of states <mathjax>$g_1(U_1,V_1,N_1)$</mathjax>, <mathjax>$g_2(U_2,V_2,N_2)$</mathjax>. Weak interaction ⇒
quantum states are not modified.</p>
<p>How many states do we have in the combined system?
<mathjax>$g_1g_2$</mathjax></p>
<p>What is the configuration of maximum probability? The probability of a
configuration is <mathjax>$g_1(U_1,V_1,N_1)\,g_2(U_2,V_2,N_2)/g_t$</mathjax>. Take a derivative with
respect to <mathjax>$U_1$</mathjax> (with <mathjax>$U_2 = U - U_1$</mathjax>) to see where the extremum of this
function is.</p>
<p><mathjax>$$
\pderiv{g_1}{U_1} g_2 + g_1 \pderiv{g_2}{U_1} = 0.
\\ \pderiv{g_1}{U_1} g_2 - g_1 \pderiv{g_2}{U_2} = 0.
\\ \frac{1}{g_1}\pderiv{g_1}{U_1} = \frac{1}{g_2}\pderiv{g_2}{U_2}.
\\ \pderiv{\log(g_1)}{U_1} = \pderiv{\log(g_2)}{U_2}.
\\ \pderiv{\sigma_1}{U_1} = \pderiv{\sigma_2}{U_2}.
\\ \frac{1}{\tau_1} = \frac{1}{\tau_2}.
$$</mathjax></p>
<p><a name='10'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Equilibrium, Thermodynamics, Potential. Feb 10, 2012</h2>
<h1>Midterm</h1>
<h1>Equilibrium between two systems</h1>
<h1>Laws of Thermodynamics</h1>
<p>At the end of last lecture, we were considering two systems which can
exchange energy, volume, particles. We were asking ourselves what was the
most probable configuration <mathjax>$U_1,V_1,N_1$</mathjax>. <mathjax>$\pderiv{\sigma_1}{U_1} =
\pderiv{\sigma_2}{U_2} \equiv \frac{1}{\tau}$</mathjax>. Similarly,
<mathjax>$\pderiv{\sigma_1}{V_1} = \pderiv{\sigma_2}{V_2} = \frac{p}{\tau},
\pderiv{\sigma_1}{N_1} = \pderiv{\sigma_2}{N_2} = -\frac{\mu}{\tau}$</mathjax> (<mathjax>$\mu$</mathjax>
for one species) – definitions.</p>
<p>In the configuration of maximum probability, the two systems share the same
pressure, temperature, and chemical potential.</p>
<p>The important thing is that the distribution of the system around this most
probable configuration is very peaked and becomes more peaked as we get
more particles (see central limit theorem).</p>
<p>Did not put volume because it is difficult to define walls.</p>
<p>The last point I wanted to make: by putting the system in contact, we have
increased the entropy of the system. That we know from the H theorem.</p>
<p>What Kittel and Kroemer say is approximately correct. "Entropy increases
because most probable configuration by default is greater than what you
started with before". This is approximate because they are speaking about
the most probable configuration.</p>
<p><mathjax>$\sigma_{\max} &lt; \sigma; \sigma_{\max} \approx \sigma$</mathjax> (very very close,
though, because our probability distribution is very peaked. This is why
Kittel and Kroemer is merely an approximation)</p>
<p>In practice, it does not really matter, since this is a good enough
approximation for the systems that we work with: <mathjax>$\sigma-\sigma_{\max} \ll
\sigma = N(\log\frac{n_{Q}}{n}+5/2)$</mathjax> [Sackur-Tetrode formula].</p>
<p>Kittel &amp; Kroemer does recognize that, but it is presented in a slightly
confusing manner.</p>
<p>We have shown that indeed the temperature behaves as the temperature we
know. We have not yet shown that this is the same pressure.</p>
<p>Chemical potential affects various things such as rates of
diffusion. Determined by species of particles. However, at equilibrium, by
the H theorem, the chemical potential is equal in all accessible states.</p>
<p>What is chemical potential? It is a measure of the concentration! (verify
by working out <mathjax>$-\tau \pderiv{\sigma}{N}$</mathjax>). A lot of work for a very simple
result. But this is much more powerful. We will use this to look at
batteries, equilibrium in various systems. </p>
<p>MIDTERM: Friday the 24th. 8:10-9:00. Why Friday and not Monday? Turns out
there is an important workshop on dark matter in UCLA. There is a potential
problem: Monday the 20th is President's day, so no class, no office
hours. Proposition: Tuesday the 21st, we have extended office hours from
4:00-6:00.</p>
<p>Alex's review sessions: Wednesday 15th (5-6), Wednesday 22nd (6-8).</p>
<p>You should take this midterm seriously but not panic too much about it. I
will also give you, next week, maybe before Wednesday, a small diagnostic
quiz that allows you to see where you are.</p>
<p>BACK TO THERMO.</p>
<p>Thermodynamics were developed over a century and a half from the late 1700s
to the early 1900s. Basically, by the beginning of the twentieth century,
there was this concept of three laws of thermodynamics (actually 4: the 0th
law says that at equilibrium, pressure, temperature, chem. potential are
equal).</p>
<p>The first law comes directly from our definition</p>
<p><mathjax>$d\sigma = \frac{1}{\tau} dU + \frac{p}{\tau} dV -\sum \frac{\mu}{\tau}
dN$</mathjax>. Now, I can multiply by <mathjax>$\tau$</mathjax> and we get <mathjax>$dU = \tau d\sigma - pdV +
\sum \mu dN$</mathjax>: change in energy = heat - work + chemical work</p>
<p>You can have many different expressions, but basically, this is a
consequence of our definition.</p>
<p>Second law of thermodynamics: when an isolated system evolves from a
non-equilibrium configuration to equilibrium, its entropy will increase.</p>
<p>Third law: entropy is zero at zero temperature (or log of number of states
occupied) ⇒ method to compute entropy.</p>
<p>If there is only one ground state, at zero temperature, all of the
particles will be in this ground state (not fermions) and there is only
one state. <mathjax>$p_0 = 1; p_i = 0 \implies \sigma = 0$</mathjax>. If there are several
ground states, this is degenerate; you will have a <mathjax>$\log g$</mathjax>, where <mathjax>$g$</mathjax> is
the number of ground states.</p>
<p>Fermions: all of particles cannot be in the ground state. What you should
remember is that at zero temperature, there is one (or several) ways to put
all of the particles.</p>
<p>Thermodynamic identities. Rewrite slides in different ways depending on
what are your independent variables.</p>
<p>How do we measure entropy experimentally? There is a problem with assuming
ideal gas: not valid close to absolute zero. For that matter, Sackur
Tetrode is not valid at absolute zero. The classical approximation breaks
down when <mathjax>$n &gt; n_Q$</mathjax>.</p>
<p><a name='11'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Macroscopic Thermodynamics. Feb 13, 2012</h2>
<p>Nader Mirabolfuthi. 343 Old LeConte. 12-1p.</p>
<p>Review of some thermodynamics concepts, laws and applications.
Thermodynamic functions: Maxwell relations
Heat engines, refrigerators: Carnot cycle.</p>
<p>dQ = dE + dW.</p>
<p>The laws of thermodynamics were developed by people who didn't use
statistics.</p>
<p>equilibrium: A↔B.</p>
<p>0th law: equilibrium. 1st law: energy (heat in the form of energy). 2nd
law: entropy of isolated system increasing.</p>
<p>An exact differential is a well-defined function in terms of multiple
variables x and y. Very important feature: conservative vector
field. Internal energy is actually a state function of the system, whereas
heat and work can be added arbitrarily. Place bars to denote non-exact
differentials.</p>
<p>PV = nτ: equation of state: relates observables with the system (pressure,
volume, temperature).</p>
<p>pV = νRT, TdS = pdV + dE. Therefore dS = P/T dV + dE/T. Since E = E(T,V),
dE = ∂E/∂T dT + ∂E/∂V dV. So dS = νR/V dV + 1/T (∂E/∂T dT + ∂E/∂V dV).</p>
<p>So dS = (νR/V + 1/T ∂E/∂V)dV + 1/T ∂E/∂T dT.</p>
<p>S = S(T,V). Also a state function. So we can write dS, therefore, as:</p>
<p>∂S/∂T dT + ∂S/∂V dV = ... ⇒ ∂S/∂V = νR/V + 1/T ∂E/∂V; ∂S/∂T = 1/T ∂E/∂T.</p>
<p>Working through the math, equating the two expressions for the mixed partial ∂²S/∂V∂T:</p>
<p>-1/T² ∂E/∂V + 1/T ∂²E/∂V∂T = 1/T ∂²E/∂T∂V ⇒ ∂E/∂V = 0: the internal energy of an
ideal gas depends only on T.</p>
<p>Specific heats: ∂Q/∂T while holding various parameters constant.</p>
<p>Adiabatic vs. isothermal expansion. γ ≡ C{p}/C{v}.</p>
<p><a name='12'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Macroscopic Thermodynamics. Feb 15, 2012</h2>
<p>Recap of last time: we showed that the internal energy of an ideal gas depends
only on temperature (∂E/∂V = 0).</p>
<p>Maxwell relations and their proof: from τdσ = dU + pdV,</p>
<ul>
<li>U = U(σ,V) ⇒ dU = τdσ - pdV (internal energy)</li>
<li>H = H(σ,p) ⇒ dH = τdσ + Vdp (enthalpy, H = U + pV)</li>
<li>F = F(τ,V) ⇒ dF = -σdτ - pdV (Helmholtz free energy, F = U - τσ)</li>
<li>G = G(τ,p) ⇒ dG = -σdτ + Vdp (Gibbs free energy, G = U + pV - τσ)</li>
</ul>
<p>exact differential: f = f(x,y) ⇒ df = (∂f/∂x)dx + (∂f/∂y)dy.</p>
<p>For an ideal gas, we found C{p} - C{v} = k{B}N{A} (per mole). But C ≡ dQ/dT. If Q
were an exact differential, we would get the same dQ/dT regardless of which path we
took, and C{p} would equal C{v}. The same goes for work, because the first law of
thermodynamics combines heat and work, and energy is an exact differential.</p>
<p>∂τ/∂V = -∂p/∂σ
dU = τdσ - pdV. U = U(σ,V) ⇒ dU = (∂U/∂σ)dσ + (∂U/∂V)dV.</p>
<p>A necessary condition for an exact differential is that the derivatives
commute. In other words, <em>take the curl</em>.</p>
<p>More relations:</p>
<p>∂τ/∂V = –∂p/∂σ
∂τ/∂p = ∂V/∂σ
∂σ/∂V = ∂p/∂τ
∂σ/∂p = –∂V/∂τ</p>
<p>We just get these from various formulations of first law of thermo (U, H,
F, G): by virtue of being exact differentials, their derivatives commute.</p>
<p>General relations for the specific heat:</p>
<p>C{V} = (∂Q/∂τ){V} = τ(∂σ/∂τ){V}
C{p} = (∂Q/∂τ){p} = τ(∂σ/∂τ){p}</p>
<p>Thus C{v} = C{p} + τ(∂σ/∂P·∂P/∂τ)</p>
<p>\alpha ≡ 1/V ∂V/∂τ. "volume coefficient of expansion"</p>
<p>κ ≡ -1/V ∂V/∂p "isothermal compressibility"</p>
<p>Therefore C{p} - C{v} = Vτ(\alpha²/κ)</p>
<p>What's needed to calculate entropy</p>
<p>Use the relations shown in the previous part. All we need are C{v} and
equation of state. But C{v} is also computable using equation of state at a
given V₀.</p>
<p>σ(τ,V) = σ(τ₀,V₀) + ∫C{v}(τ′,V)dτ′/τ′ + ∫dV′ ∂P(τ₀,V′)/∂τ</p>
<p>Same for internal energy.</p>
<p>dU = C{v}dτ + (τ∂P/∂τ - P)dV
∂U/∂τ = C{v}
∂U/∂V = τ∂P/∂τ - P.</p>
<p>Example van der Waals gas (particle interactions):
(P + a/v²)(v-b) = k{B}N{A}τ, where v ≡ V/ν.</p>
<p><a name='13'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Applications of Classical Thermodynamics. Feb 17, 2012</h2>
<h2>Heat engines, refrigerators, and Carnot cycle</h2>
<p>Historically very important: the industrial revolution. It is easy to do work
and transform it into heat: mechanical, electrical. What about the
opposite? It must not come at the expense of the machine itself: otherwise the
process is not cyclic.</p>
<p>Also has interesting physical importance: this was a new thing: how to
study energy? Joule showed there was an equivalence between work done and
heat released.</p>
<p>η = w/q₁ ≤ 1 - τ₂/τ₁. Equality holds only for quasi-static.</p>
<p>Carnot cycle: alternating adiabatic / isothermal expansion /
contraction. Maximum attainable efficiency for given τ₁, τ₂.</p>
<h2>Reversibility</h2>
<p>Entropy constant / always in equilibrium. Relaxation of constraints: if you
enforce the original constraints, and the system returns to its initial
state, it is reversible.</p>
<p>t{operation}, t{equilibrium}. t{o} ≫ t{e}. Other possibility: t{e} ≫
t{o}. Also reversible. Becomes bad when these two are of the same order of
magnitude.</p>
<p>Clausius statement for 2nd law of thermodynamics: effectively, Carnot
engine is maximally efficient, given τ₁, τ₂. (q₂/q₁ ≤ τ₂/τ₁)</p>
<p>Joule-Thomson process: H = U + PV ⇒ H₁ = H₂ if dQ = 0.
μ ≡ V/C{p} (τ\alpha - 1)</p>
<p><a name='14'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Chemical Potential. Feb 22, 2012</h2>
<h1>Midterm</h1>
<ul>
<li>One page of notes. Clarification: One side of one sheet. No explicit
ruling against oversized paper. Reasoning: it is always good to
reconstruct what you have learned.<ul>
<li>Put things in your own terms and understand what you don't understand
and why you don't understand it.
<mathjax>$\cdot$</mathjax> There are some formulae that are useless to memorize (<mathjax>$n_Q$</mathjax>, <mathjax>$n$</mathjax>,
etc.). This is not a memorizing test. I would like you to understand
the concepts.</li>
</ul>
</li>
<li>Will take about a week for grading.</li>
<li>No need for blue book. May want scratch paper if you like, but no blue
book necessary.</li>
</ul>
<h1>Exact differentials</h1>
<ul>
<li>Evolving system.<ul>
<li>Energy can be defined. Not necessarily useful.</li>
<li>Entropy can also be defined.
<mathjax>$\cdot$</mathjax> <mathjax>$\sigma = \log g_{t}$</mathjax> only holds in equilibrium for an isolated system.
<mathjax>$\cdot$</mathjax> In this case, it is well-defined by <mathjax>$\sigma \equiv -\sum p(t) \log p(t)$</mathjax>.</li>
</ul>
</li>
<li>Heat transfer is not an exact differential.</li>
</ul>
<h1>Chemical potential</h1>
<ul>
<li>Why is it called a potential?</li>
<li>Action mass law</li>
</ul>
<p>The most common state is such that the sum of chemical potentials is equal to
zero. <mathjax>$\mu_{i} = \tau \log (n_i/n_{Q_i})$</mathjax>.</p>
<p><a name='15'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Chemical Potential. Feb 27, 2012</h2>
<p>Conference result (dark matter): unfortunately a negative result. A number
of groups (two in particular) claimed that they saw signals that could be
the result of WIMPs which are responsible for dark matter. MACHOs were
bad.</p>
<p>What we have done so far is referred to the microcanonical method (the name
does not matter). Basically, we have been counting states. With the
postulate of the H theorem, this allowed us to define entropy as <mathjax>$-\sum_i
p_i \log p_i$</mathjax>. ( Various reminders of definitions of temperature, pressure,
chemical potential. )</p>
<p>The system <mathjax>$S$</mathjax> is in one state of energy <mathjax>$\epsilon$</mathjax>. How does the number of
states in the reservoir evolve as <mathjax>$\epsilon$</mathjax> increases? Since the combined
system is isolated, the total energy is constant: as the system's energy
increases, the reservoir's energy decreases. The number of accessible reservoir
states therefore decreases, and with it the probability of finding <mathjax>$S$</mathjax> in a
state of that energy.</p>
<p>Now we are equipped to derive the Boltzmann distribution.</p>
<p>We know that the number of states is <mathjax>$g_{R}(U_0 - \epsilon)$</mathjax>, where <mathjax>$g_R(U)
\propto U^{3N/2}$</mathjax> for an ideal-gas reservoir.</p>
<p><mathjax>$$
= A(U_0 - \epsilon)^{3N/2}
\\ = AU_0^{3N/2}(1-\frac{\epsilon}{3N\tau_R/2})^{3N/2}
\\ \Rightarrow AU_0^{3N/2}e^{-\epsilon/\tau_R}
$$</mathjax></p>
<p>Configuration that we are looking at: state of the system
<mathjax>$S$</mathjax>.</p>
<p><mathjax>$$Pr[S \in \epsilon ] \propto g_R(U_0 - \epsilon)$$</mathjax></p>
<p><mathjax>$$
\frac{Pr[S \in \epsilon_1]}{Pr[S \in \epsilon_2]}
= \frac{g_R(U_0 - \epsilon_1)}{g_R(U_0 - \epsilon_2)}
\\ = \frac{\exp(\sigma_R(U_0-\epsilon_1))}
{\exp(\sigma_R(U_0-\epsilon_2))}
\\ = \frac{\exp(-\frac{\epsilon_1}{\tau})}
{\exp(-\frac{\epsilon_2}{\tau})}
$$</mathjax></p>
<p><mathjax>$$
Pr[S \in \epsilon] = \frac{e^{-\epsilon/\tau}}{\sum_s e^{-\epsilon_s/\tau}}
\\ (z \equiv \sum_s e^{-\epsilon_s/\tau})$$</mathjax></p>
<p><mathjax>$z$</mathjax> is known as the partition function.</p>
<p>So let me take a two-level system, where state 1 has <mathjax>$\epsilon = 0$</mathjax>, and
state 2 has energy <mathjax>$\epsilon$</mathjax>.</p>
<p>So the probability of being in state 1 is <mathjax>$\frac{\exp(-0/\tau)}{1 +
\exp(-\epsilon/\tau)} = \frac{1}{1 + \exp(-\epsilon/\tau)}$</mathjax>.</p>
<p>Likewise, the probability of being in state 2 is <mathjax>$\frac{\exp(-\epsilon/\tau)}{1 +
\exp(-\epsilon/\tau)} = \frac{1}{\exp(\epsilon/\tau) + 1}$</mathjax>.</p>
<p>As I increase the temperature, I can excite the system, and at high temperature
the two states approach equal probability, 50% each. </p>
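<p>In the two limits (an added check of the formulae above):</p>
<p><mathjax>$$Pr[1] = \frac{1}{1 + e^{-\epsilon/\tau}} \to 1,\ Pr[2] \to 0 \text{ as } \tau \to 0;
\qquad Pr[1],\ Pr[2] \to \tfrac{1}{2} \text{ as } \tau \to \infty.$$</mathjax></p>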
<p>The second example we will take is the Maxwell distribution, where a
particle in a gas is in contact with a reservoir (which can be the rest of the
gas). To be continued on Wednesday.</p>
<p><a name='16'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Boltzmann Distribution. Feb 29, 2012</h2>
<h1>5.1 Constant number of particles</h1>
<h2>Boltzmann factor</h2>
<p>Recall: two-state system, we could write down the Boltzmann
distribution. Note that heat capacity will peak when there is a maximum
change in state (<mathjax>$C_V = \pderiv{Q}{T} = \pderiv{U}{T}$</mathjax>.)</p>
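<p>To make that statement concrete (an added sketch, using the two-level results with
level energies 0 and <mathjax>$\epsilon$</mathjax>):</p>
<p><mathjax>$$U = \frac{\epsilon}{e^{\epsilon/\tau} + 1}, \qquad
C = \pderiv{U}{\tau} = \parens{\frac{\epsilon}{\tau}}^2
\frac{e^{\epsilon/\tau}}{\parens{e^{\epsilon/\tau} + 1}^2},$$</mathjax></p>
<p>which vanishes at both low and high temperature and peaks when <mathjax>$\tau$</mathjax> is of order
<mathjax>$\epsilon$</mathjax>, i.e. where the occupation of the upper state changes fastest.</p>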
<p>Let's go back a bit and try to understand where this equation is coming
from. Why does the probability that S in a quantum state energy <mathjax>$\epsilon$</mathjax>
decrease exponentially with <mathjax>$\epsilon$</mathjax>? As you increase <mathjax>$\epsilon$</mathjax>, since
the total energy <mathjax>$U_R$</mathjax> is fixed, the energy and number of states in the
reservoir decrease (conservation of energy).</p>
<ul>
<li>Examples</li>
</ul>
<p>Free particle at temperature <mathjax>$\tau$</mathjax>. Assumption: no additional degrees of
freedom (no spin, not a multi-atom molecule).</p>
<p><mathjax>$\text{Pr}[dV,\ (p, p+dp),\ d\Omega] = \frac{1}{Z}\sum_{\text{states in } dV\,d^3p} \exp(-p^2/(2M\tau))$</mathjax></p>
<p>That's where we're using our recipe (result that we derived before) that
the density of states in phase space is merely <mathjax>$\frac{1}{h^3}$</mathjax>. So the
number of states in a differential volume element is <mathjax>$\frac{dV d^3p}
{h^3}$</mathjax>. Define <mathjax>$d^3p \equiv p^2 dp d\cos\theta d\phi$</mathjax> (further defining
<mathjax>$d\Omega \equiv d\cos\theta d\phi$</mathjax>, where <mathjax>$d\Omega$</mathjax> is the solid angle
element), i.e. convert to spherical coordinates. The above probability thus
reduces to <mathjax>$\frac{1}{Z} \exp(-p^2 /(2M\tau)) \frac{dV p^2 dp d\Omega}
{h^3}$</mathjax>.</p>
<p>Normalizing to N particles (i.e. choosing <mathjax>$Z$</mathjax> such that this sum is 1)
yields <mathjax>$f(v)dv d\Omega = n\left(\frac{M}{2\pi\tau}\right)^{3/2}
\exp\left(-\frac{Mv^2}{2\tau}\right)v^2 dv d\Omega$</mathjax>. Note that <mathjax>$h^3$</mathjax>
disappeared entirely. Important distribution, due to Boltzmann (we often call it
the Maxwell distribution).</p>
<p>Why does the Maxwell distribution have a <mathjax>$v^2$</mathjax> term? It's a result of the
Jacobian matrix, which effectively tells us that the density of quantum
states per unit velocity interval in the system <mathjax>$S$</mathjax> goes to zero as <mathjax>$v^2$</mathjax>
when <mathjax>$v$</mathjax> goes to zero. Only works when the reservoir is very big relative
to the system. Cannot possibly be from the normalization, since this is a
function of <mathjax>$v$</mathjax>.</p>
<p>Notice that the slope of <mathjax>$f(v)$</mathjax> is zero at <mathjax>$v=0$</mathjax>, whereas the slope of
<mathjax>$f(\epsilon)$</mathjax> is <mathjax>$\infty$</mathjax> at <mathjax>$v=0$</mathjax>.</p>
<p>Why doesn't the system S stay in a state of energy <mathjax>$\epsilon$</mathjax>? Continuous
exchange between <mathjax>$S \leftrightarrow R$</mathjax> of excitations of energy
<mathjax>$\tau$</mathjax>. Thoughts: Uncertainty principle technically governs everything, but
that's not particularly useful. Each excitation in the reservoir will have
a typical energy <mathjax>$\approx \tau$</mathjax>: thermal fluctuations are of the order of
<mathjax>$\tau$</mathjax>. An energy <mathjax>$\epsilon \gg \tau$</mathjax> is therefore difficult to get, and <mathjax>$\text{Pr}[\epsilon]
\propto \exp(-\epsilon/\tau)$</mathjax>.</p>
<p>With an electron cloud (since those are fermions), you have some more
complicated physics going on, but you have a Fermi gas, and you will be
exchanging energy on the order of <mathjax>$\tau$</mathjax>. In a crystal or a solid, you will
exchange phonons with energy on the order of <mathjax>$\tau$</mathjax>. Regardless of what
system you have, you are exchanging quantum energy packets on the order of
<mathjax>$\tau$</mathjax>.</p>
<h2>Partition function</h2>
<p><mathjax>$Z = \sum_s \exp(-\epsilon_s/\tau)$</mathjax>. Let's see, then, that we can compute
that. We can ask ourselves, what is the mean energy of the system? <mathjax>$U =
\avg{\epsilon} = \sum_s \frac{\epsilon_s e^{-\epsilon_s/\tau}}{Z}$</mathjax>. But
look, think about what <mathjax>$\pderiv{\log Z}{\tau}$</mathjax> will look like.</p>
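<p>Carrying out that derivative (an added step, completing the hint):</p>
<p><mathjax>$$\pderiv{\log Z}{\tau} = \frac{1}{Z}\sum_s \frac{\epsilon_s}{\tau^2}
e^{-\epsilon_s/\tau} = \frac{U}{\tau^2}
\implies U = \tau^2 \pderiv{\log Z}{\tau}.$$</mathjax></p>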
<p>Aside: we still have time to do the entropy.</p>
<p><mathjax>$\sigma = -\sum_s p_s \log p_s = -\sum_s \frac{
e^{-\epsilon_s/\tau}}{Z}\left[-\frac{\epsilon_s}{\tau} - \log Z\right]$</mathjax>.</p>
<p>What you see is just <mathjax>$\sum_s \frac{1}{\tau}\frac{\epsilon_s e^{-\epsilon_s
/\tau}}{Z} + \log Z = \frac{U}{\tau} + \log Z = \pderiv{\tau\log Z}{\tau}$</mathjax></p>
<p><mathjax>$F = U - \tau\sigma = -\tau\log Z$</mathjax>.</p>
<p>[ done for the day. Fairly powerful tool. ]</p>
<h2>Ideal gas (again!)</h2>
<ul>
<li>Entropy = Sackur-Tetrode formula</li>
</ul>
<h1>5.2 Exchange of particles</h1>
<h2>Gibbs factor</h2>
<h2>Grand Partition Function as a calculation tool [ Gibbs Sum (grand canonical methods) ]</h2>
<p>A very powerful tool. More technical, but important.</p>
<h1>5.3 Fluctuations</h1>
<p><a name='17'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Partition Function. Mar 2, 2012</h2>
<h1>Midterm</h1>
<ul>
<li>Office hours</li>
</ul>
<h1>Partition function as computational tool</h1>
<h1>Ideal gas</h1>
<p>We are looking at a totally different method of computing entropy. Instead
of the microcanonical method -- counting states (imposing U; difficult) -- we
are putting the system in contact with a reservoir. We came to the
conclusion that <mathjax>$\prob{i} = \frac{1}{Z}e^{-\epsilon_i/\tau}$</mathjax>. So that's a
totally different method, which tends to be much easier.</p>
<p>From what you have seen, it is quite easy to get ahold of the thermodynamic
quantities you know of.</p>
<p><mathjax>$$
\sigma = \pderiv{\tau\log Z}{\tau}
\\ F = U - \tau\sigma = -\tau\log Z \implies \sigma = -\pderiv{F}{\tau}
\\ U = \avg{\epsilon} = \sum \epsilon_i p_i = \tau^2 \pderiv{\log Z}{\tau}
= -\tau^2\pderiv{\frac{F}{\tau}}{\tau}
$$</mathjax></p>
<h2>Examples</h2>
<p>Consider a two-level system with energies 0 and <mathjax>$\epsilon$</mathjax>. <mathjax>$Z = 1 +
e^{-\epsilon/\tau}$</mathjax>, trivially. Therefore <mathjax>$U = \tau^2\pderiv{\log Z}{\tau}
= \tau^2\frac{\frac{\epsilon}{\tau^2}e^{-\epsilon/\tau}}{1 + e^{-\epsilon/\tau}} =
\frac{\epsilon}{e^{\epsilon/\tau} + 1}$</mathjax>.</p>
<p>Now, for an ideal gas: <mathjax>$Z = \sum_s e^{-\epsilon_s/\tau} = \int \frac{d^3q
d^3p}{h^3}e^{-\epsilon/\tau}$</mathjax>. Phase space integral. Things are very
simple.</p>
<p>This is equivalent to <mathjax>$\frac{V}{h^3}\int dp_x e^{-p_x^2/2M\tau} dp_y
e^{-p_y^2/2M\tau} dp_z e^{-p_z^2/2M\tau} = \frac{V}{h^3}\left(
\sqrt{2\pi M\tau}\right)^3 = V\left(\frac{2\pi M\tau}{h^2}\right)^{3/2} =
Vn_Q$</mathjax>.</p>
<p><mathjax>$\avg{\epsilon} = \frac{3}{2} \frac{\tau^2}{\tau} = \frac{3}{2}\tau$</mathjax></p>
<p>The power of <mathjax>$Z$</mathjax> is that either you can do these sums very easily
(e.g. geometric series), or the terms grow so fast that you can just take
the first term, or you can use the integral approximation as we have done
here.</p>
<h2>Harmonic Oscillator</h2>
<p><mathjax>$Z = \sum_{s=0}^{\infty} e^{-s\hbar\omega/\tau} = \sum_{s=0}^{\infty}\left(e^{-\hbar\omega/\tau}
\right)^s = \frac{1}{1 - e^{-\hbar\omega/\tau}}$</mathjax></p>
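<p>From this partition function (an added step, using <mathjax>$U = \tau^2\pderiv{\log Z}{\tau}$</mathjax>)
the mean energy of the oscillator follows:</p>
<p><mathjax>$$U = \tau^2\pderiv{\log Z}{\tau}
= \frac{\hbar\omega\, e^{-\hbar\omega/\tau}}{1 - e^{-\hbar\omega/\tau}}
= \frac{\hbar\omega}{e^{\hbar\omega/\tau} - 1},$$</mathjax></p>
<p>the Planck form, which tends to <mathjax>$\tau$</mathjax> at high temperature and is exponentially
suppressed for <mathjax>$\tau \ll \hbar\omega$</mathjax>.</p>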
<p>These are known as the canonical methods, which are fairly powerful. I may
come back to this question of density of states. I have probably told you
enough on how to sum over discrete states. We may come back when we review
the material.</p>
<p>I have spoken long enough about this fairly technical thing. Let's ask
ourselves how to extend this to multiple systems (e.g. 2).</p>
<p>The partition function of 2 systems is merely the product if the systems
are distinguishable, and the product divided by <mathjax>$2!$</mathjax> (approximately) if
they are indistinguishable. Consider indistinguishability of "quantum
states", where each state is a system.</p>
<p>Therefore if we have N indistinguishable systems, <mathjax>$Z(N) \approx \frac{1}
{N!} Z(1)^N$</mathjax>. This is a low occupation number approximation due to Gibbs.</p>
<p><mathjax>$$Z_1 = e^{-mB/\tau} + e^{mB/\tau} = 2\cosh\frac{mB}{\tau} \\ Z = Z_1^N =
2^N\cosh^N\frac{mB}{\tau} \implies U = -NmB\tanh\frac{mB}{\tau}.$$</mathjax></p>
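<p>A quick check of the <mathjax>$\tanh$</mathjax> result (an added step, again via <mathjax>$U = \tau^2
\pderiv{\log Z}{\tau}$</mathjax>):</p>
<p><mathjax>$$\log Z = N\log\parens{2\cosh\frac{mB}{\tau}} \implies
U = \tau^2 \cdot N\tanh\parens{\frac{mB}{\tau}}\cdot\parens{-\frac{mB}{\tau^2}}
= -NmB\tanh\frac{mB}{\tau}.$$</mathjax></p>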
<p>Thus <mathjax>$M = -\frac{U}{BV} = nm\tanh\frac{mB}{\tau}$</mathjax>.</p>
<p>[ curie temperature: where M disappears ]</p>
<ul>
<li>Why is the division of the product by N! approximate?</li>
</ul>
<p>As mentioned earlier, this is a low concentration approximation, i.e. the
probability of having two systems in the same state is negligible. The
error we accumulate is due to fewer terms appearing in the sum.</p>
<p>In other words, we have very weak correlation coefficients.</p>
<p><a name='18'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Ideal gas, Gibbs distribution: Mar 5, 2012</h2>
<h1>Ideal Gas</h1>
<p>We previously focused on the partition function, which is <mathjax>$\sum_s
e^{-\epsilon_s/\tau}$</mathjax>. This so-called "canonical" method is much simpler,
usually, than counting states with the constraint that energy is fixed. So
that's the advantage of that method, and so, from the partition function by
appropriate derivatives (usu. of the log), you can get all the quantities
that you like (energy, entropy, free energy, chemical potential, pressure,
and so forth).</p>
<p>If I have <mathjax>$N$</mathjax> systems, <mathjax>$Z = (Z_1)^N$</mathjax> (if the systems are distinguishable),
and <mathjax>$Z \approx \frac{(Z_1)^N}{N!}$</mathjax> if the systems are indistinguishable. We
discussed last time why this is an approximation: sparseness of states.</p>
<p>For the ideal gas, we found that <mathjax>$Z = Vn_Q$</mathjax>, where <mathjax>$n_Q\equiv \parens{
\frac{M\tau}{2\pi\hbar^2}}^{3/2}$</mathjax>. This is <mathjax>$\approx \frac{1}
{\lambda^3}$</mathjax>. So this is really a quantum density satisfied by putting
particles a wavelength apart, and I cannot do much better than that.</p>
<p>If I apply the Gibbs recipe there, what I get is that <mathjax>$Z_N =
\frac{1}{N!}\parens{Vn_Q}^N$</mathjax>. So <mathjax>$\log Z_N = N\log(Vn_Q) - \log N!$</mathjax>. We
use the Stirling approximation to say <mathjax>$\log N! \approx N\log N - N$</mathjax>. Thus
<mathjax>$\log Z_N = N\log(\frac{n_Q}{\frac{N}{V}}) + N = N\log \frac{n_Q}{n} + N$</mathjax>.</p>
<p>When we introduce free energy, we have <mathjax>$F = -\tau\log Z$</mathjax>; <mathjax>$p =
-\pderiv{F}{V}$</mathjax>; <mathjax>$\sigma = -\pderiv{F}{\tau}$</mathjax>.</p>
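<p>Carrying those derivatives through for the ideal gas (an added check, using
<mathjax>$\log Z_N = N\log\frac{n_Q}{n} + N$</mathjax> with <mathjax>$n_Q \propto \tau^{3/2}$</mathjax>):</p>
<p><mathjax>$$F = -\tau\parens{N\log\frac{n_Q}{n} + N}, \qquad
p = -\pderiv{F}{V} = \frac{N\tau}{V}, \qquad
\sigma = -\pderiv{F}{\tau} = N\parens{\log\frac{n_Q}{n} + \frac{5}{2}},$$</mathjax></p>
<p>which reproduces <mathjax>$pV = N\tau$</mathjax> and the Sackur-Tetrode entropy obtained earlier by
counting states.</p>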
<h2>Barometric equation</h2>
<p>Assume the atmosphere has constant temperature. How does the density
decrease with altitude z? The way the book does it is <mathjax>$\mu = \text{const}$</mathjax>, since
we are in thermal equilibrium. But <mathjax>$\mu$</mathjax> has two pieces: the internal
chemical potential (<mathjax>$\tau\log\parens{\frac{n}{n_Q}}$</mathjax>) and the external chemical
potential, which is just the potential energy in the gravitational field, namely
<mathjax>$mgz$</mathjax>. The sum has to be constant, which implies that <mathjax>$n = n_0\exp\parens{
-\frac{mgz}{\tau}}$</mathjax>.</p>
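<p>As an order of magnitude (an added estimate): for N₂ at 300 K the scale height is</p>
<p><mathjax>$$\frac{\tau}{mg} = \frac{k_B T}{mg} \approx
\frac{(1.38\times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})}
{(4.7\times 10^{-26}\,\mathrm{kg})(9.8\,\mathrm{m/s^2})} \approx 9\ \mathrm{km},$$</mathjax></p>
<p>roughly the familiar thickness of the atmosphere.</p>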
<p>The problem with this derivation is that it is at a very high level. We
need to invoke the chemical potential. Very similar problem to density of a
centrifuge, or density of ions in the membrane of a cell.</p>
<p>This is one way of doing things. There are actually two other ways of doing
things.</p>
<h2>Single-particle occupancy of levels at different altitude</h2>
<p>Instead of considering the atmosphere to be in equilibrium, I would
ask myself (the more intuitive way of doing it), what is the probability
that a given molecule is at the altitude <mathjax>$z$</mathjax>? I (Sadoulet) would say that
<mathjax>$\epsilon = \epsilon_k + mgz$</mathjax> (energy is kinetic energy + potential
energy). The probability of being at the altitude <mathjax>$z$</mathjax> would be proportional
to <mathjax>$\exp\parens{-\frac{\epsilon_k + mgz}{\tau}}$</mathjax>. The density, therefore,
has to be proportional to <mathjax>$\exp\parens{-\frac{mgz}{\tau}}$</mathjax>.</p>
<p>The problem with these two derivations is that they do not generalize very
well to the point where you are not in equilibrium.</p>
<h2>Hydrodynamic equilibrium</h2>
<p>The third way of doing it (which you have likely done in Physics 7) is to
consider a slab of thickness <mathjax>$dz$</mathjax> and consider the hydrodynamic
equilibrium. The force pushing up on this slab is <mathjax>$pA$</mathjax>, whereas the force
pushing down on the slab is <mathjax>$(p + dp)A$</mathjax>. Our slab has a certain mass and
particle density, and so the downward force (due to gravity) is <mathjax>$nmAz
g$</mathjax>. Putting that all together, we have <mathjax>$pA - (p + dp)A = nmgzA$</mathjax>. This leads
directly to <mathjax>$\frac{1}{n}\deriv{p}{z} = -mg$</mathjax>. This is totally general. If we
have <mathjax>$pV = N\tau$</mathjax>, <mathjax>$p = \frac{N}{V}\tau = n\tau$</mathjax>. Since <mathjax>$\frac{\tau}{p}
\deriv{p}{z} = -mg \iff \frac{1}{n} \deriv{n}{z} = -\frac{mg}{\tau} \implies
p \sim \exp\parens{-\frac{ngz}{\tau}}$</mathjax>. This generalizes to the case when
the temperature is not constant.</p>
<p>(various clicker questions)</p>
<p>Grand partition function: <mathjax>$\sum_{S,N} \exp\parens{-\frac{\epsilon(N) - \mu
N}{\tau}}$</mathjax></p>
<h1>Gibbs Distribution</h1>
<p>Exchange not only energy, but also particles.</p>
<p><a name='19'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Gibbs Distribution: Mar 7, 2012</h2>
<p>The mathematics of the Gibbs distribution is not difficult so much as
complicated. What is difficult is determining what phenomena this
explains.</p>
<p><mathjax>$$\prob{S \in \epsilon_s(N_s), N_s} \propto g \propto \exp
\parens{-\frac{\epsilon_s(N_s)}{\tau}} \exp\parens{\frac{\mu N_s}{\tau}}
\\ = \frac{1}{z} \exp\parens{-\frac{\epsilon_s(N_s) - \mu N_s}{\tau}}
\\ z \equiv \sum_{s,N} \exp\parens{-\frac{\epsilon_s(N) - \mu N}{\tau}}
$$</mathjax></p>
<p>We call this the Gibbs distribution (z is, as we recall, the grand
partition function).</p>
<p>Often, chemists prefer to work with the absolute activity <mathjax>$\lambda_i \equiv
\exp \parens{\frac{\mu_i}{\tau}}$</mathjax> rather than the chemical potential itself. </p>
<p>This is weak interaction. What it means is that as we keep adding particles
at the quantum level, we don't change the wave function. They don't care
how many particles are in the system.</p>
<p>Often, we will simply use <mathjax>$\prob{\epsilon_s} = \frac{1}{z}\exp \parens
{-\frac{(\epsilon_s - \mu)N}{\tau}}$</mathjax>.</p>
<p>A common problem with this distribution, by the way, is that we are
extremely general, and sometimes it is difficult to know what this
represents in practice, so we should take examples and try to think about
the examples to consider what these mean in practice.</p>
<p>Maybe the simplest example I can take is a Fermi-Dirac distribution. Let us
consider an energy level <mathjax>$\epsilon_s$</mathjax>, and I have a fermion, which has a
half-integer spin, e.g. an electron (spin <mathjax>$\frac{1}{2}$</mathjax>). If we listen to
Pauli, at most one fermion can occupy each state.</p>
<p>Therefore <mathjax>$N_s \in \{0,1\}$</mathjax>. <mathjax>$\prob{N_s=0} = \frac{1}{z}$</mathjax>; <mathjax>$\prob {N_s=1} =
\frac{1}{z} e^{-(\epsilon_s-\mu)/\tau}$</mathjax>. So <mathjax>$z = 1 + e^{-(\epsilon_s-\mu)/
\tau}$</mathjax>. Plugging this back in, we have <mathjax>$\prob{N_s=0} = \frac{1}{1 + e^{
-(\epsilon_s-\mu)/ \tau}}$</mathjax>; <mathjax>$\prob {N_s=1} = \frac{ e^{-(\epsilon_s-\mu) /
\tau}}{1 + e^{-( \epsilon_s-\mu)/ \tau}} = \frac{1} {e^{( \epsilon_s -
\mu)/\tau} + 1}$</mathjax>. Examples: gas absorption: heme in myoglobin, hemoglobin,
walls. Energy bands of materials (bandgaps).</p>
<p>Determining <mathjax>$\mu$</mathjax>, when considering hemoglobin and stuff. All we need is
the density of <mathjax>$O_2$</mathjax> in the blood. So how can we compute this? Remember
what we said about mass action law. If we are a dilute system of particles,
they will have the same properties as the gas: they will barely interact
with each other, so states will be independent, and we can then apply the
ideal gas approach.</p>
<p>We can also apply this to our dissolved gas, and <mathjax>$\mu_{gas} = \tau\log
\frac{n}{n_Q}$</mathjax>. This is an ideal gas in the sense of weakly-interacting
n-body system.</p>
<p>Chemists and biologists like to put the pressure, since the partial
pressure is something you can measure easily. Langmuir adsorption isotherm:
<mathjax>$f = \frac{p}{\tau n_Q\exp\parens{\frac{\epsilon}{\tau}} + p}$</mathjax>.</p>
<p>Aside: Pauli was an enormous guy, German/Swiss physicist. Very bright but
very unforgiving. Lack of understanding: not a question.</p>
<p>Adsorption: fraction occupied <mathjax>$= f$</mathjax>. Same form as Fermi-Dirac. Adsorption
dependence on <mathjax>$\mu$</mathjax> can clearly be observed to be positive: as <mathjax>$\mu$</mathjax>
increases, the fraction of occupied sites increases. However, as <mathjax>$\tau$</mathjax>
increases, the fraction of occupied sites decreases. (binding energy:
<mathjax>$-\epsilon$</mathjax>)</p>
<p>Ionized impurities in a semiconductor: K&amp;K p370. Doping. Donor: tends to
have one more electron; acceptor: tends to have one fewer electron. <mathjax>$\mu$</mathjax>
(Fermi level) determined by electron concentration in semiconductor.</p>
<p><a name='20'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>Gibbs Grand Partition Function: Mar 9, 2012</h2>
<p>Gibbs grand partition function, fluctuations, what is minimized when a
system is in thermal equilibrium with a bath?</p>
<p>Next midterm is Monday, March 19.</p>
<p>I would like to move the third midterm from April 20 to April 13 for two
reasons: one, it's a week before we finish; two, allows me to go to a small
workshop in Davis. Interesting workshop, announced at last minute, purpose
is to try to understand what the LHC tells us about dark matter.</p>
<p>There was an announcement by Fermilab that they may have seen the
Higgs. CERN might have seen the Higgs. Not convinced by announcements so
far: more publicity than anything else.</p>
<p>Let's go rapidly through the grand partition function. Remember, we were
speaking of a system which was exchanging both energy and particles with a
reservoir. What we had shown was that <mathjax>$\prob{\epsilon_s(N)} = \frac{1}{z}
\exp\parens{-\frac{\epsilon_s(N)-\mu N}{\tau}}$</mathjax>, <mathjax>$z = \sum_{s,N} \exp\parens{-\frac{\epsilon_s(N)
- \mu N}{\tau}}$</mathjax>. In most cases, we'll have <mathjax>$\epsilon_s(N) = \epsilon_s
N$</mathjax>. So that's the Gibbs formalism. Almost exactly in the same way as in the
Boltzmann formalism, we can deduce a number of things from the derivatives
of the log of this partition function.</p>
<p>For instance: taking into account one species, <mathjax>$\avg{N} = \sum_{s,N}
\frac{1}{Z} N\exp\parens{-\frac{\epsilon_s(N)-\mu N}{\tau}}$</mathjax>. See, if I were to take
<mathjax>$\pderiv{\log Z}{\mu}$</mathjax>, I would have <mathjax>$\frac{1}{Z}\sum_{s,N} \frac{N}{\tau}\exp\parens{-\frac
{\epsilon_s(N)-\mu N}{\tau}}$</mathjax>. And so <mathjax>$\avg{N} = \tau\pderiv{\log Z}{\mu}$</mathjax>.</p>
<p>So let's take two examples: Fermi-Dirac distribution.</p>
<p>Fermions versus bosons: fermions have half-integer spins, follow
Fermi-Dirac distributions; bosons have integer spins, follow Bose-Einstein
distribution.</p>
<p>So in the Fermi-Dirac case, we have <mathjax>$\avg{N} = \tau \pderiv{\log Z}{\mu} =
\frac{\exp(-(\epsilon-\mu)/\tau)}{1 + \exp(-(\epsilon-\mu)/\tau)} =
\frac{1}{\exp((\epsilon-\mu)/\tau) + 1}$</mathjax>.</p>
<p>Bose-Einstein: <mathjax>$z = \sum_N e^{-\frac{(\epsilon-\mu)N}{\tau}}$</mathjax>. You
recognize this is a geometric series and is thus equal to
<mathjax>$\frac{1}{1-\exp(-(\epsilon-\mu)/ \tau)}$</mathjax>. <mathjax>$\avg{N} = \tau\pderiv{\log Z}
{\mu} = \frac{\exp(-(\epsilon-\mu)/\tau)}{1 - \exp(-(\epsilon-\mu)/\tau)} =
\frac{1}{\exp((\epsilon-\mu)/\tau) - 1}$</mathjax></p>
<p>Note that as <mathjax>$\epsilon-\mu \to 0$</mathjax>, this quantity diverges: Bose-Einstein
condensation. On the other hand, the Fermi-Dirac occupancy stays finite: it is
<mathjax>$\frac{1}{2}$</mathjax> at <mathjax>$\epsilon = \mu$</mathjax> and never exceeds 1.</p>
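<p>The contrast is easy to see numerically (a small sketch in the dimensionless
variable <mathjax>$x = (\epsilon-\mu)/\tau$</mathjax>): the Bose-Einstein occupancy blows up as
<mathjax>$x \to 0^+$</mathjax>, while the Fermi-Dirac occupancy approaches <mathjax>$\frac{1}{2}$</mathjax>.</p>
<pre><code>import math

def n_FD(x):                  # x = (eps - mu)/tau
    return 1.0 / (math.exp(x) + 1.0)

def n_BE(x):                  # needs x > 0, otherwise the geometric series diverges
    return 1.0 / (math.exp(x) - 1.0)

for x in (2.0, 1.0, 0.5, 0.1, 0.01):
    print("x =", x, "  FD:", round(n_FD(x), 4), "  BE:", round(n_BE(x), 2))
</code></pre>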
<p>Just pay attention to <mathjax>$\sigma = -\sum_{s,N}p_{s,N}\log p_{s,N}$</mathjax>. I will put
into this expression my Gibbs factor <mathjax>$\frac{1}{Z}\exp\parens{-\frac{\epsilon_s(N)
- \mu N}{\tau}}$</mathjax>.</p>
<p><mathjax>$$
\sigma = \log Z + \sum_{s,N} p_{s,N} \frac{\epsilon_s(N)}{\tau} - \sum_{s,N} p_{s,N}
\frac{\mu N}{\tau}
\\ = \log Z + \frac{\avg{\epsilon_s}}{\tau} - \frac{\avg{N}\mu}{\tau}
\\ = \pderiv{\tau\log Z}{\tau}
$$</mathjax></p>
<p>You have to be careful: the formulae with the grand partition function
differ from those with the ordinary partition function. <mathjax>$\Omega \equiv F - \sum_i \mu_i
\avg{N_i} = -\tau\log Z$</mathjax>.</p>
<p>Note that <mathjax>$\sigma = \pderiv{\tau\log Z}{\tau}$</mathjax> while <mathjax>$\Omega = -\tau\log Z$</mathjax>
as a result of the thermodynamics identity.</p>
<h2>Fluctuations</h2>
<p>Actually, we started to speak about what I am going to say. There are
fluctuations when we are exchanging particles or energy with the
reservoir. The number, the energy is not fixed, and if I am exchanging
particles, they fluctuate. I told you about the mean number of particles
there; we spoke before about the mean energy. But now we can try to compute
the variance (or root-mean-square/standard deviation) of this quantity.</p>
<p>For instance, you remember that for a Boltzmann distribution (not
exchanging particles, so I don't need the grand partition function),
<mathjax>$\avg{E} = \tau^2\pderiv{\log Z}{\tau}$</mathjax>. The variance, you should remember,
is simply <mathjax>$\sigma_E^2 = \avg{E^2} - \avg{E}^2 = \frac{1}{Z}\sum \epsilon_s^2
e^{-\epsilon_s/\tau} - \parens{\frac{1}{Z}\sum \epsilon_s e^{-\epsilon_s/\tau}}^2$</mathjax>. If
you differentiate <mathjax>$\avg{E}$</mathjax> with respect to <mathjax>$\tau$</mathjax>, you recover this
expression up to a factor <mathjax>$\tau^2$</mathjax>, which gives this elegant, if not beautiful
formula <mathjax>$\sigma_E^2 = \tau^2\pderiv{\avg{E}}{\tau} = \tau^2
\pderiv{}{\tau}\parens{\tau^2 \pderiv{\log Z} {\tau}}$</mathjax>.</p>
<p>So <mathjax>$\sigma_N^2 = \avg{N^2} - \avg{N}^2 = \tau \pderiv{}{\mu}
\parens{\tau\pderiv{\log Z}{\mu}}$</mathjax>.</p>
<p>Let me quickly do my calculations for my Fermi-Dirac and Bose-Einstein
distributions, considering <mathjax>$\sigma_N$</mathjax>. So I have to take <mathjax>$\sigma_N^2 =
\tau\pderiv{\avg{N}}{\mu}$</mathjax>. So for Fermi-Dirac, we have <mathjax>$\sigma_N^2 = \frac {e^
{(\epsilon -\mu)/ \tau}}{\parens{e^{(\epsilon-\mu)/\tau} + 1}^2} = \avg{N}(1-\avg{N})$</mathjax>.</p>
<p>Note that the variance is actually less than <mathjax>$\avg{N}$</mathjax>, so these fluctuations
are smaller than for a Poisson distribution.</p>
<p>If I were to do the same thing for the Bose-Einstein distribution, I would
get <mathjax>$\avg{N}\parens{1 + \avg{N}}$</mathjax>. These are larger than Poisson fluctuations:
near <mathjax>$\epsilon - \mu = 0$</mathjax>, where <mathjax>$\avg{N}$</mathjax> is large, the fluctuations are
absolutely enormous.</p>
<p>So we say that the bosons like to travel in bunches.</p>
<p>Physicist joke: astronomers are fermions, particle physicists are bosons.</p>
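<p>Here is a small numerical check (with illustrative parameter values) that
<mathjax>$\tau\pderiv{\avg{N}}{\mu}$</mathjax> indeed reproduces <mathjax>$\avg{N}(1-\avg{N})$</mathjax> for
Fermi-Dirac and <mathjax>$\avg{N}(1+\avg{N})$</mathjax> for Bose-Einstein:</p>
<pre><code>import math

def n_FD(eps, mu, tau):
    return 1.0 / (math.exp((eps - mu) / tau) + 1.0)

def n_BE(eps, mu, tau):
    return 1.0 / (math.exp((eps - mu) / tau) - 1.0)

def variance(n_func, eps, mu, tau, dmu=1e-6):
    # sigma_N^2 = tau * d(mean N)/d(mu), central finite difference
    return tau * (n_func(eps, mu + dmu, tau) - n_func(eps, mu - dmu, tau)) / (2 * dmu)

eps, mu, tau = 1.0, 0.2, 0.4      # illustrative values (eps above mu so BE is finite)
nf, nb = n_FD(eps, mu, tau), n_BE(eps, mu, tau)
print("FD:", variance(n_FD, eps, mu, tau), "vs n(1-n) =", nf * (1 - nf))
print("BE:", variance(n_BE, eps, mu, tau), "vs n(1+n) =", nb * (1 + nb))
</code></pre>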
<p><a name='21'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h2>What is Minimized; Black Body: Mar 12, 2012</h2>
<h1>Housekeeping</h1>
<p>Remember that we have a midterm next Monday. Some material is in the
book. The next midterm would be Friday (April 13/14, depending on which is
Friday).</p>
<p>Today, office hours moved to 1:30-2:15. Review session on Saturday.</p>
<h1>What is minimized?</h1>
<p>Would like to start by asking you a question: in the conditions we have
been speaking about (system s in contact with reservoir), exchanging only
energy, does it go to a state of minimum energy? No; remember the constant
exchange of energy (constant kicking, so to speak).</p>
<p>Unlike a system which is able to lose energy to vacuum, it does not
evolve to the state of minimum energy. Unlike an isolated system, it does not
evolve to a configuration of maximum system entropy (it is the entropy of the
combination of reservoir and system that is maximized).</p>
<p>Evolves to a configuration of minimum Free Energy ("balance between
tendency to lose energy and to maximize entropy")</p>
<h2>Landau Free Energy &amp; Enthalpy</h2>
<p>More rigorously: we cannot define the temperature of the system out of
equilibrium. So we introduce <mathjax>$F_L(U_S) \equiv U_S - \tau_R\sigma_S(U_S)$</mathjax>
(Landau Free Energy; constant volume) and <mathjax>$G_L(U_S, V_S) \equiv U_S -
\tau_R\sigma_S(U_S,V_S) + p_RV_S$</mathjax> (Landau Free Enthalpy; constant
pressure). The difference between ordinary free energy and the Landau free
energy is that the ordinary free energy is a function of the system <mathjax>$S$</mathjax>
that is only defined at equilibrium, whereas the Landau Free Energy is defined
everywhere.</p>
<p>What we want to show is that this <mathjax>$F_L$</mathjax> is the quantity that is minimized. We can
simply count states.</p>
<p><mathjax>$\sigma_{tot}(U_S) = \sigma_R(U-U_S) + \sigma_S(U_S) = \sigma_R(U) -
\pderiv {\sigma_R}{U}U_S + \sigma_S(U_S) = \sigma_R(U) - \frac
{1}{\tau_R}U_S + \sigma_S = \sigma_R(U) - \frac{1}{\tau_R}F_L(U_S)$</mathjax></p>
<p>The most probable configuration maximizes <mathjax>$\sigma_{tot}$</mathjax>, which minimizes
<mathjax>$F_L$</mathjax>. Note that this implies that <mathjax>$\pderiv{F_L(U_S)}{U_S} = 0 \iff \pderiv
{\sigma_S(U_S)}{U_S} = \frac{1}{\tau_R}$</mathjax> (by the definition of Landau free
energy).</p>
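<p>A minimal sketch of this statement, assuming a toy ideal-gas-like entropy
<mathjax>$\sigma_S(U_S) = \frac{3}{2}N\log U_S$</mathjax> (constants dropped, made-up numbers):
the numerical minimum of <mathjax>$F_L$</mathjax> sits where <mathjax>$\pderiv{\sigma_S}{U_S} = \frac{1}{\tau_R}$</mathjax>,
i.e. at <mathjax>$U_S = \frac{3}{2}N\tau_R$</mathjax>.</p>
<pre><code>import numpy as np

N, tau_R = 100.0, 2.0                 # toy numbers, illustration only

def sigma_S(U):
    """Toy entropy with ideal-gas-like energy dependence (additive constant dropped)."""
    return 1.5 * N * np.log(U)

def F_L(U):
    """Landau free energy F_L(U_S) = U_S - tau_R * sigma_S(U_S)."""
    return U - tau_R * sigma_S(U)

U = np.linspace(50.0, 800.0, 100001)
print("numerical minimum at U_S ~", U[np.argmin(F_L(U))])
print("predicted (d sigma/dU = 1/tau_R):", 1.5 * N * tau_R)
</code></pre>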
<p>When minimized, temperature of system is equal to temperature of reservoir.</p>
<h2>Landau Grand potential</h2>
<p>If we exchange particles, what is minimized is <mathjax>$\Omega_L(U_S,N_S) \equiv
U_S - \tau_R\sigma_S(U_S,N_S) - \mu_R N_S$</mathjax> (the Landau Grand Potential).</p>
<p>This will be useful when we do phase transitions. They get interesting in
statistical mechanics, and people are using this Landau Free Energy.</p>
<h1>Aside</h1>
<p>This completes the material for the midterm. Since I do not like to
emphasize memorization, this time you get two pages (one sheet,
double-sided) of notes and formulae. The main thing is not the formulae, but
rather how to apply them: there are so many formulae now that applying them
blindly will not work.</p>
<h1>Black Body radiation</h1>
<p>Let us begin with the Planck distribution.</p>
<p>Usual way to consider a gas of photons in thermal equilibrium. (take
<mathjax>$\omega \equiv 2\pi \nu$</mathjax>) Let's think of it in a single mode of the cavity, and
consider the number of photons in that mode. Harmonic oscillation:
<mathjax>$\omega$</mathjax>. If I have s photons in this mode, the energy would be <mathjax>$\epsilon
\equiv s\hbar\omega$</mathjax>; <mathjax>$s \in \mathbb{Z_+}$</mathjax>. So <mathjax>$Z = \sum_s e^{-s\hbar\omega
/ \tau}$</mathjax>, roughly. So <mathjax>$\avg{s} = \frac{\tau^2}{\hbar\omega} \pderiv{\log Z}
{\tau} = \frac{e^{-\hbar\omega/\tau}}{1 - e^{-\hbar\omega/\tau}} =
\frac{1}{e^{\hbar\omega/\tau} - 1}$</mathjax>. So <mathjax>$\avg{\epsilon_\omega} = \frac{\hbar
\omega}{e^{\hbar\omega/\tau} - 1}$</mathjax></p>
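<p>A quick sketch comparing the truncated sum over <mathjax>$s$</mathjax> with the closed form
<mathjax>$\frac{1}{e^{\hbar\omega/\tau}-1}$</mathjax> (the cutoff <mathjax>$s_{max}$</mathjax> is just a numerical
convenience, not physics):</p>
<pre><code>import math

def mean_s_from_sum(x, s_max=2000):
    """Mean photon number from a truncated Boltzmann sum; x = hbar*omega/tau."""
    weights = [math.exp(-s * x) for s in range(s_max + 1)]
    Z = sum(weights)
    return sum(s * w for s, w in enumerate(weights)) / Z

def mean_s_closed(x):
    return 1.0 / (math.exp(x) - 1.0)

for x in (0.1, 1.0, 3.0):
    print(x, mean_s_from_sum(x), mean_s_closed(x))
</code></pre>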
<p>Two quantizations: the first quantization (wave-like): discrete states for
massive particles; obvious cavity modes for photons. The second
quantization was of discrete particles: it was reversed -- it was obvious
that massive particles were quantized as discrete particles, but not so
much that photons were quantized in numbers -- Planck.</p>
<p>Question: why is the mean energy per mode decreasing with energy? For
<mathjax>$\hbar\omega &gt; \tau$</mathjax>, it is more and more difficult for the bath to create
a photon: I need <mathjax>$\hbar\omega$</mathjax> to create a photon, and if it's too large, I
don't have enough energy available.</p>
<p>Photon properties. Maxwell equations in vacuum.</p>
<p>Mean energy density: Is <mathjax>$\hbar\omega$</mathjax> the mean energy density as a function
of frequency? No; we have to fold in the frequency density of the modes
<mathjax>$D(\omega)d\omega$</mathjax> which can be computed with our <mathjax>$\frac{1}{h^3}$</mathjax> rule.</p>
<p>The density of states is <mathjax>$2\frac{d^3xd^3p}{h^3}$</mathjax>. (2 is the polarisation
factor) Note that <mathjax>$pc = \hbar\omega$</mathjax>, and <mathjax>$d^3p = p^2 dp d\Omega$</mathjax>. So our
density is <mathjax>$\frac{2d^3x(\hbar\omega)^2\hbar d\omega d\Omega}{c^3
h^3}$</mathjax>. We can use the fact that <mathjax>$h^3 = (2\pi\hbar)^3 = 8\pi^3\hbar^3$</mathjax>. If we are
not interested in the direction of the photons, we integrate over solid angle
and we get <mathjax>$u_\omega d\omega = u_\nu d\nu = \frac{8\pi h\nu^3d\nu} {c^3\parens
{\exp\parens{h\nu/\tau} - 1}}$</mathjax>. I would like you to be able to derive this
from first principles, rather than by rote memorization.</p>
<p><a name='22'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Black Body: Mar 14, 2012</h1>
<p>Office hours: today from 3:30-4:30; extra at Friday from 3-4.</p>
<p>Review Saturday @ 10:30.</p>
<h1>Black Body</h1>
<p>Radiation energy density between <mathjax>$\omega$</mathjax> and <mathjax>$\omega+d\omega$</mathjax>? </p>
<p>Would like to rederive black-body law in a slightly different way. Recall:
speaking of photons in equilibrium with a metallic cavity. In that case,
the average number of photons at frequency <mathjax>$\omega$</mathjax> is <mathjax>$\frac{1}{e^{\hbar
\omega/\tau} - 1}$</mathjax>. The minus is coming from the fact that we have a boson,
the spin of which is 1. Some of you are more comfortable with <mathjax>$\nu = \frac
{\omega}{2\pi}$</mathjax>, in which case the mean photon number is <mathjax>$\frac{1}{e^{h
\nu/\tau} - 1}$</mathjax>.</p>
<p>We can write <mathjax>$u_\nu(\nu,\theta,\phi)\, d^3x\, d\nu\, d\Omega =
\avg{s}h\nu\, 2\frac{d^3x\,d^3p}{h^3}$</mathjax> (we need this factor of two
to account for the polarization). We can then use that <mathjax>$p = \frac{\epsilon}
{c} = \frac{h\nu}{c}$</mathjax>. If I'm not interested in the direction, I have to
integrate over the angles, which gives me a factor of <mathjax>$4\pi$</mathjax>, and so our
net result is <mathjax>$u_\nu(\nu)d^3xd\nu = \frac{8\pi h\nu^3d\nu d^3x}{c^3(e^{h
\nu/\tau}-1)}$</mathjax></p>
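<p>As a sketch, one can evaluate this spectral density numerically and check Wien's
displacement: the peak of <mathjax>$u_\nu$</mathjax> sits near <mathjax>$h\nu \approx 2.82\,\tau$</mathjax> (the
5800 K below is just an illustrative, roughly solar value):</p>
<pre><code>import numpy as np

h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def u_nu(nu, T):
    """Planck spectral energy density 8*pi*h*nu^3 / (c^3 * (exp(h*nu/(k_B*T)) - 1))."""
    return 8 * np.pi * h * nu**3 / (c**3 * np.expm1(h * nu / (k_B * T)))

T = 5800.0
nu = np.linspace(1e12, 3e15, 200000)
nu_peak = nu[np.argmax(u_nu(nu, T))]
print("h*nu_peak / (k_B*T) =", h * nu_peak / (k_B * T))   # Wien: about 2.82
</code></pre>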
<h2>Cosmic microwave radiation</h2>
<p>Example of most perfect black body that we know: photons in equilibrium in
early universe. In the early universe, the temperature was over 3000 K. We
had mostly <mathjax>$p + e^-$</mathjax> at that time. The mean free path was very low: photons
were constantly scattering off electrons. So what you had was a plasma, effectively
opaque to photons -- very high scattering. At about 3000 degrees, the universe
then becomes transparent: hydrogen forms, and photons don't scatter.</p>
<p>So all around us, we see this radiation (discovered in the 60s), and it is
no longer at this temperature. The radiation has been shifted and is now at
about 3K -- in microwave band; roughly mm wavelength.</p>
<p>Difficult to make black body radiator to calibrate instrument: primary
limiting factor.</p>
<p>Fluctuations of temperature are <mathjax>$10^{-5}$</mathjax> of mean temperature. You can
measure these as frequencies. You get this marvelous spectrum with wiggles,
and we see the plasma vibrating, in some sense. This gives us a lot of
information: tells us that the universe is especially flat; tells us the
amount of protons and neutrons in the universe; tells us the amount of dark
matter.</p>
<p>Allows us, for instance, to look at a lot of cosmology parameters.</p>
<p>We can also measure polarization; gravitational potential waves shaking
this plasma.</p>
<p>These fluctuations are (difficult to prove) responsible to the formation of
structure: they are responsible for the formation of galaxies.</p>
<p>European satellite this year named Planck, which will achieve more accurate
results.</p>
<p>Could lecture for over ten hours on this subject, but that is not the
subject of this class.</p>
<h1>Planck</h1>
<p>Before: physicists only considered continuum. We did not know about the
second quantization, that photons come with energy <mathjax>$h\nu$</mathjax>. Physicists of
19th century just wrote the partition function, summing over a continuum of
states. If you do this, you get <mathjax>$Z = \tau/\epsilon_0$</mathjax>, i.e. a mean energy of <mathjax>$\tau$</mathjax>
per mode, independent of frequency. The mystery was that summing this over all
the modes (frequencies), you'd get infinity. This
is a problem.</p>
<p>Planck introduced the quantization of states. Planck did not clearly
understand what he was saying. The photons come in quanta of <mathjax>$\hbar
\omega$</mathjax>. Thus there is a cut-off at <mathjax>$\epsilon=\tau$</mathjax>, and so you do not have
what was called the ultraviolet catastrophe. This was 1900.</p>
<p>In 1905, Einstein published two articles, one of which commented on how
quantization explains the photoelectric effect -- this is
why he got his Nobel prize. The same year he also published his paper on
special relativity, and so he did not have to work an 8-5 job.</p>
<h2>Have we solved everything?</h2>
<p>No: zero point energy. We have again an infinite sum. This is related to
the problem of the vacuum energy, dark energy, and the cosmological
constant.</p>
<p>Namely, <mathjax>$\epsilon_\omega = \frac{\hbar\omega}{2} + s(\hbar \omega )$</mathjax>. This
is related to what we consider vacuum energy, dark energy (maybe), and the
cosmological constant.</p>
<p>Note: important to sum instead of integrate.</p>
<p>Let me ask you: why did we not use the Gibbs formalism? Photons
appear and disappear in interaction with the cavity walls, so the photon
number is not fixed; yet the Boltzmann formula we used looks different.</p>
<p>The reason why we do not use this is that the chemical potential of the
photon must be zero. Why? Special case of mass action law; entropy of the
reservoir does not change when we change the number of photons (keeping the
energy constant); and the photon is its own antiparticle.</p>
<p>Namely: <mathjax>$\mu_\gamma + \mu_e - \mu_e = 0 \implies \mu_\gamma = 0$</mathjax>; <mathjax>$\pderiv
{\sigma_R}{N_{\gamma BB}}\bigg|_U = -\frac{\mu_\gamma}{\tau} = 0$</mathjax>. And
finally <mathjax>$\gamma + \gamma \leftrightarrow e \bar{e}$</mathjax>. We will see that the
chemical potential of the antiparticle is minus that of the particle, so the
photon, being its own antiparticle, must have <mathjax>$\mu_\gamma = -\mu_\gamma = 0$</mathjax>. </p>
<p>It is perfectly fine to use the Gibbs formalism; it is just that
<mathjax>$\mu_\gamma$</mathjax> is zero.</p>
<p>Theme of this course: there are often three different ways of deriving the
same result: microcanonical (counting states), canonical (Boltzmann), and
grand canonical (Gibbs formalism).</p>
<p>Counting number of states: Kittel and Kroemer go through a painful counting
of number of states using quantum numbers. Of course, same result.</p>
<p>Flux through an aperture. Do not forget cosine factor: volume of oblique
cylinder is <mathjax>$cdtdA\cos\theta$</mathjax>.</p>
<p><a name='23'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Black Body: Mar 16, 2012</h1>
<p>As usual, housekeeping. Then comments on grand partition function. And then
we will come back to the black body approach and finish the
Stefan-Boltzmann law, speak about Kircchoff.</p>
<p>Wednesday moving again office hours: going to SF to participate in a
roundtable discussion between artists and scientists on the concept of
space: "how do we know that things we don't see exist?" (i.e. dark matter,
dark energy). What is very strange (this was suggested by an artist) is that
the main question is about creativity. Important for a scientist, yes, but
this is not really the issue for invisible space: we do not create these things;
we are forced by the evidence to believe in them.</p>
<p>There has been some confusion regarding the grand partition
functions. <mathjax>$\frak{Z} = \sum_s e^{-\frac{\epsilon_s(N_s)-\mu
N_s}{\tau}}$</mathjax>. We have been using this primarily for a system of one state
<mathjax>$\epsilon_i$</mathjax>. In this case, the partition function is <mathjax>$\frak{Z}_i =
\sum_{N_i} e^{-(\epsilon_i-\mu)N_i/\tau}$</mathjax>. Then, we get similarly that the
average number of particles in this state is <mathjax>$\tau \pderiv{\log \frak
{Z}_i}{\mu}$</mathjax>.</p>
<p>Thus if we have a system with all possible states, <mathjax>$N = \sum \avg{N_i} =
\sum_i \tau \pderiv{\log\frak{Z}_i}{\mu}$</mathjax>. Therefore <mathjax>$\avg{U} =
\sum_i \tau\epsilon_i \pderiv{\log\frak{Z}_i}{\mu}$</mathjax>.</p>
<p>However, we could also approach this by considering our system to be an
ensemble of states from the start. In order to specify this, I must
specify the number of particles that have specific energies.</p>
<p>Rather, <mathjax>$\frak{Z} = \sum e^{-\frac{\sum \epsilon_i N_i - \mu\sum
N_i}{\tau}} = \prod_i(\frak{Z}_i)$</mathjax>.</p>
<p>We can then explore a bit: <mathjax>$\avg{N} = \tau\pderiv{\log \frak{Z}}{\mu} =
\sum_i \tau\pderiv{}{\mu}\log z_i$</mathjax> (after some simple arithmetic
manipulations)</p>
<p>By the way, if you do it either way, you have no problem with the
distinguishability of the various states. You take that into account
automatically.</p>
<p>Note that this is different from the case where the system is an ensemble
of particles.</p>
<p>Recall: flux through an aperture requires us to integrate over an oblique
cylinder, which has volume <mathjax>$cdtdA\cos\theta$</mathjax>.</p>
<p><mathjax>$u_\omega(\theta,\phi) d\omega d\Omega = \frac{\hbar\omega^3d\omega d\Omega}{4\pi^3
c^3(\exp(\hbar\omega/\tau) - 1)}$</mathjax>. This is the energy density of photons
moving in the direction <mathjax>$\theta, \phi$</mathjax>. If I'm interested in the flux
density, i.e. the brightness, there I am just looking at what happens
taking a unit area <mathjax>$dA$</mathjax> perpendicular to the direction. We consider
<mathjax>$I_\omega(\theta,\phi)d\omega d\Omega dt dA = \frac{\hbar\omega^3
d\omega d\Omega}{4\pi^3 c^3(\exp(\hbar\omega/\tau) - 1)} cdt dA$</mathjax>. In terms of <mathjax>$\nu$</mathjax>, <mathjax>$I_\nu$</mathjax> is just
<mathjax>$\frac{2h\nu^3d\nu dt dA d\Omega}{c^2(\exp(h\nu/\tau) - 1)}$</mathjax>.</p>
<p>With a fixed aperture, <mathjax>$J_\nu dA dt d\nu = \int \frac{2h\nu^3
d\nu}{c^3(\exp() - 1)} cdtdA\cos\theta d\Omega$</mathjax>. We have to be careful
here: we do not integrate beyond <mathjax>$\frac{\pi}{2}$</mathjax>, since those photons are
going the wrong way -- the only photons that go through the aperture are
moving toward the aperture.</p>
<p>So what we have finally is <mathjax>$\frac{2\pi h \nu^3 d\nu dtdA}{c^2(\exp(h\nu /
\tau) - 1)} = \frac{c}{4} u_\nu d\nu dA dt$</mathjax>. Note here that <mathjax>$u_\nu$</mathjax> is the
energy density integrated over the solid angle.</p>
<p>Therefore if I am interested in the total energy in my volume, I will
integrate <mathjax>$\int u_\nu(\theta,\phi)d\nu d\Omega dV = \int \frac{8\pi h\nu^3
d\nu}{c^3(\exp(h\nu/\tau)-1)} = \frac{8\pi V}{h^3 c^3} \int_0^\infty
\frac{\tau^4 x^3 dx}{\exp(x)-1}$</mathjax>. The total energy, therefore, goes as <mathjax>$a_B
T^4\; a_B = \frac{\pi^2 k_B^4}{15\hbar^3 c^3}$</mathjax>.</p>
<p>Similarly, if you were to look at the total flux through a fixed aperture,
we will do the same calculation, but now what we will get is <mathjax>$J =
\frac{c}{4}a_B T^4 = \sigma_B T^4$</mathjax>.</p>
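<p>A quick numerical check of the two pieces above (a sketch, nothing lecture-specific):
the dimensionless integral equals <mathjax>$\pi^4/15$</mathjax>, and the resulting <mathjax>$\sigma_B$</mathjax> comes
out near the accepted <mathjax>$5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$</mathjax>.</p>
<pre><code>import numpy as np

hbar, c, k_B = 1.054571817e-34, 2.99792458e8, 1.380649e-23

# The dimensionless integral of x^3/(e^x - 1) from 0 to infinity, vs pi^4/15
x = np.linspace(1e-6, 60.0, 2000000)
print(np.trapz(x**3 / np.expm1(x), x), np.pi**4 / 15)

# Radiation constant a_B and Stefan-Boltzmann constant sigma_B = c*a_B/4
a_B = np.pi**2 * k_B**4 / (15 * hbar**3 * c**3)
print("sigma_B =", c * a_B / 4, "W m^-2 K^-4")
</code></pre>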
<p>You can also get the entropy and the number of particles by just
integration. For the entropy, you can either integrate <mathjax>$d\sigma = dU/\tau$</mathjax> or just
remember that the number of occupied modes goes as <mathjax>$T^3$</mathjax>; the entropy and the
photon number both go as <mathjax>$T^3$</mathjax>.</p>
<p>Why do we have a <mathjax>$T^4$</mathjax> for energy density but a <mathjax>$T^3$</mathjax> for entropy or number
of photons? By the way, entropy is proportional to number of photons, which
makes sense. Mathematically, this appears as a result of change of
variables. So what is the effect that dominates? It is the increase of the
density of states with energy. Density of states goes as <mathjax>$\omega^2
d\omega$</mathjax>, which comes directly from the counting of states. Since <mathjax>$\omega$</mathjax>
is of the order of <mathjax>$\frac{\tau}{\hbar}$</mathjax>, we get here that the number goes
as <mathjax>$T^3$</mathjax>. Of course, when I'm looking at energy, I have one extra factor of
temperature (<mathjax>$\tau \equiv k_B T$</mathjax>), and that's where <mathjax>$T^4$</mathjax> comes in.</p>
<p><a name='24'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Black Body: Mar 21, 2012</h1>
<h1>Overview</h1>
<p>Black body: Detailed balance, Kirchhoff laws</p>
<p>Midterm on the fifteenth. No homework due on thirteenth.</p>
<p>As mentioned last week, cannot be at office hours this afternoon, going to
SF for panel discussion between artists and scientists about space and the
unknown. Instead, I would like to propose office hours on Friday 3-4.</p>
<p>Examining the histogram: happy that there are no very low grades; 7.5 points lower;
smaller spread.</p>
<p>There are still people in the class writing something like <mathjax>$\mu =
\pderiv{\sigma}{N} = \pderiv{U}{N}$</mathjax> without putting any indication of what
the independent variables are. The second type of problem which seems
serious is confusion. One of these confusions is that between isolated systems and
systems in contact with a bath. In the first case, at equilibrium, the probability
of a given state is <mathjax>$\frac{1}{g}$</mathjax> (H theorem), and in the second case,
the states are not equiprobable and are given by the Boltzmann distribution (or
Gibbs).</p>
<p>Also, people assuming everything is an ideal gas. Don't do that. Putting
random formulae shows that you don't understand the problem, so you'll lose
points.</p>
<p>Statistical mechanics is much bigger than just <strong>classical</strong> ideal
gases. And it's much more than the thermodynamic identity. Those are
distortions coming from Physics 7; you tend to write the thermodynamic
identity over and over; we have much more powerful ways to do things: we
can compute entropy from first principles. And then people use <mathjax>$dU = \tau
d\sigma - pdV + \mu dN$</mathjax>, but people without batting an eyelash will put <mathjax>$U =
\tau\sigma$</mathjax>. Reverse-engineering like this does not work.</p>
<p>Also, we'll use the best two midterms out of three.</p>
<p>For these things, any questions? Alex has the midterms, and I will have
them on Friday.</p>
<h1>Black Body</h1>
<p>We are in black body radiation. What I would like to focus on today is detailed
balance. We have actually a very specific black body radiator, which is a
cavity with metallic walls. We were thinking about the modes in this cavity
and the photons in thermal equilibrium. Now, what we will do is put another
system in communication with this cavity and ask ourselves: what is the
radiation of this cavity? (these two systems are at the same temperature
and communicate through an internal aperture)</p>
<p>Suppose that first that the second system is a black body. By this, we mean
something that absorbs all radiation. So a cavity with a very small hole is
a very good approximation for a black body: if I send a photon into this
hole, it will not come back.</p>
<p>Can imagine more black bodies -- what we use in our field: rubber with a
lot of carbon in it (does not reflect, only absorbs).</p>
<p>By construction there, if we have these two systems communicating, the
energy emitted by one is the energy absorbed by the other.</p>
<p>First question, which is very simple: how does the energy emitted by system
1 compare to the energy absorbed by system 1? Since they are at the same
temperature, the energy emitted by either system is the same (consider the
formula derived last lecture).</p>
<p><strong>Principle of detailed balance</strong>. Consequence: the spectrum of radiation
emitted by a black body is the "black body" spectrum calculated before.</p>
<p>This is a very simple argument, but very powerful. Proof: <mathjax>$\frac{c}{4}A
u_{BB} = \frac{c}{4}Au_{cavity} \implies u_{BB} = u_{cavity}$</mathjax>. It emits in an
isotropic way exactly as the cavity. We could have inserted a frequency
filter or a direction selector, and the conclusion still holds.</p>
<p>Let's modify the situation. Instead of a black body, let's consider a grey
body, and let's define an absorption coefficient <mathjax>$a \equiv \frac {\text
{power absorbed}}{\text{power received}}$</mathjax>. In principle, this could depend
on angle and frequency. So the question now that we ask ourselves is how does
the energy of a grey body compare to energy emitted by a black body?</p>
<p>For this, we must consider both the reflected BB energy and GB emission by
walls.</p>
<p>Notice that the energy received back by the black body is <mathjax>$(1 - a)(BB) + e(BB)$</mathjax>,
where the emission coefficient is <mathjax>$e = \frac{\text{power emitted}}{\text{power
black body}}$</mathjax>. The black body must receive back exactly what it emits, so that its
temperature stays constant; this requires <mathjax>$(1-a)(BB) + e(BB) = BB$</mathjax>, i.e.
<mathjax>$a = e$</mathjax> -- usually known as the Kirchhoff law.</p>
<p>Very useful in many applications. At the basis of the Nyquist noise or
Johnson noise. We will come back to this. You can show that the mean square
noise voltage across a resistor is <mathjax>$\avg{V^2} = 4k_BTRd\nu$</mathjax>. This is explained not very well, I'm
afraid, in Kittel and Kroemer, but this is the cause for many sources of
noise in our electronic circuits. More specifically, a theorem known as the
fluctuation-dissipation theorem: any system capable of absorbing radiation is
making noise: if it absorbs radiation and has constant temperature, it will
have to emit <em>something</em> to stay at constant temperature. This is deeply
ingrained in quantum mechanics: anything that absorbs emits.</p>
<p>In order to detect anything, we absorb energy. And because we absorb
energy, we must emit energy.</p>
<p>There are many applications. Let me just give you an orientation. In many
cases, in all the sensing applications, we can say that we are at least
roughly a black body radiator, which has a certain surface area <mathjax>$A_e$</mathjax>, and
we are looking at it with some sort of telescope with a certain reception
area <mathjax>$A_r$</mathjax>. There is a simple relationship if the distance is <mathjax>$d$</mathjax>. The
source is seen from the receptor to fill a solid angle <mathjax>$\Omega_r$</mathjax>, which is
basically, for small angles, just <mathjax>$\theta^2$</mathjax>. Similarly, the solid angle as
seen by the emitter is <mathjax>$\Omega_e = \frac{A_r}{d^2}$</mathjax>; likewise, <mathjax>$\Omega_r =
\frac{A_e}{d^2}$</mathjax>, and so if you take the ratio of the two, you have
<mathjax>$\frac{\Omega_e}{\Omega_r} = \frac{A_r}{A_e}$</mathjax>. Purely from geometry, but an
important result.</p>
<p>If we have something that is diffraction-limited, then <mathjax>$\Delta \theta \sim
\frac{\lambda}{R}$</mathjax>. This means that <mathjax>$\Omega_r \approx \Delta\theta^2 \sim
\frac{\lambda^2}{R^2}$</mathjax>. Since our area goes as <mathjax>$\pi R^2$</mathjax>, <mathjax>$\Omega_r A_r =
\lambda^2$</mathjax>. With these two sets of equations, we can look at various
situations in astronomy. We'll do that rapidly at the beginning of next
lecture. I will also probably speak about phonons on Friday, and we'll see
if we can start the Fermi-Dirac/Bose-Einstein.</p>
<p><a name='25'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Black bodies and phonons: Mar 23, 2012</h1>
<p>What I would like to do today, if you've looked at the homework for next
week, is finish black body radiation -- in particular stars -- and to speak
of phonons in a crystal. So, uhm, let's start with a simple question: a
star like the sun is to a first approximation a black body. Is the
radiation across the disk constant, brightening at the limb, or dimming at
the limb?</p>
<p>Experimentally, it looks constant, roughly. Why is this? This is coming
from the fact that for black body radiation, the radiation is
isotropic. When we ask ourselves what is the amount of light that reaches
our photodetector in a time <mathjax>$dt$</mathjax>, this is basically <mathjax>$u(\nu, \theta, \phi)
d\nu d\Omega dA cdt$</mathjax>.</p>
<p>So if I take my sun, essentially what is happening is that I am looking at
a radiation shell of thickness <mathjax>$cdt$</mathjax>.</p>
<p>So that's one way of thinking about it: does not depend on angle. Another
way of considering it: area <mathjax>$dA$</mathjax>. Whatever emerges from this area is
independent of the angle. What I receive in my eye is not proportional to
<mathjax>$dA$</mathjax> but rather <mathjax>$dA\cos\theta$</mathjax> (<mathjax>$\cos\theta$</mathjax> is normal to my line of
sight).</p>
<p>So apparent radiation is constant all over the disk, if I have a black
body, which is radiating isotropically, where I am not depending on
<mathjax>$\theta$</mathjax> and <mathjax>$\phi$</mathjax>. That is why the disk of the sun is basically constant
luminosity, the moon is constant luminosity.</p>
<p>Why are we considering the sun a black body? The black body occurs in
thermal equilibrium when the photons (of a star), at least those very close
to the edge, are scattering so much that they are in thermal
equilibrium. Basically, this is coming from the fact that the photons are
scattering very much. It is not an exact black body for the following
reason: the sun is surrounded by its corona. When photons are coming
through that region, they are absorbed at certain frequencies. We have
absorption lines at certain frequencies. By the way, that's how we
discovered helium: absorption lines we had no idea existed; was discovered
by looking at the sun through a spectrograph.</p>
<p>Essentially, that's a black body: photons in equilibrium.</p>
<p>Maybe 10 years ago, there was a professor from Ohio State (or something
like that) who published a full-page ad of the New York Times (which was twice
his monthly salary) how people talking about the Big Bang don't understand
anything about anything. Whole argument was that since this is not coming
from an oven with a small hole, this was just silly. Argued that stars were
not black bodies.</p>
<p>It shows you that people can get very emotional, spending two months'
salary on a full-page ad in the New York Times.</p>
<p>Now, an interesting question is: what happens given that the sun in practice is not
a black body with sharply defined edges? The density in the sun goes down
as the radius increases; one over the mean free path goes as
<mathjax>$\rho\sigma$</mathjax>. The emissivity of the sun, therefore, as a function of
radius, is 1 well inside, and dies off at some radius. This leads to
limb darkening. And by looking precisely at the intensity as a function of
the radius, you can have an idea of the mean free path, and of therefore
the density. That is how we mapped the density at the edges of the sun.</p>
<p>If <mathjax>$u$</mathjax> (energy density) is a function of the radius, then the shell <mathjax>$cdt$</mathjax> seen
toward the center of the disk has more energy than the shell <mathjax>$cdt$</mathjax> seen at the
edges. There is a whole field of stellar astronomy devoted to this kind of stuff.</p>
<p>So any way. This kind of simple question (why the sun appears to be of
constant luminosity despite it being a sphere) is interesting and not
totally trivial to answer. Related to the isotropic behavior of the black
body.</p>
<h2>Solid angles and surfaces at emission/reception</h2>
<p>Showed last time these simple geometric relationships. The one at the
bottom is related to diffraction and comes from the fact that the angle
<mathjax>$\theta_D$</mathjax> at some radius R goes as <mathjax>$\frac{\lambda}{R}$</mathjax>. So the solid angle
goes as <mathjax>$\frac{\lambda^2}{R^2}$</mathjax>, and the area of my detector goes as <mathjax>$\pi
R^2$</mathjax>, and so <mathjax>$\Omega_r A_r \sim \lambda^2$</mathjax>.</p>
<p>This has an interesting consequence. Suppose that I am looking at a totally
diffuse object, e.g. the microwave background. If I have a
diffraction-limited telescope, how does the radiation I receive depend on
the diameter of my telescope? (i.e. how does the received energy depend on
the diameter of a diffraction-limited telescope?) Does it increase or stay
constant?</p>
<p>It indeed stays constant, for the following reason: the portion that I see
of this diffuse object is my <mathjax>$\Omega_r$</mathjax>. Therefore the area at emission is
just <mathjax>$\Omega_r d^2 = A_e$</mathjax>. Now what is received is also proportional to
the solid angle of my telescope, i.e. <mathjax>$I_\nu d\nu \Omega_e A_e$</mathjax>. So that is
equal to <mathjax>$I_\nu d\nu \frac{A_r}{d^2} \Omega_r d^2 = I_\nu d\nu \lambda^2$</mathjax>.</p>
<p>If I am looking, however, at a point object (i.e. something significantly
small), that is not correct. That is why astronomers go for very large
telescopes; to capture as much light as possible.</p>
<p>So if you have a diffuse emitter, it [increasing area] does not change the
power. Size of the telescope is not important in terms of power, but is
definitely important if you're trying to look at ripples from fluctuations
in temperature of microwave background. Larger telescope means more detail,
better angular resolution.</p>
<p>This leads to a number of interesting results. For instance: if the sun is
a black body, the amount of radiation that I get in my telescope is a
measure of its diameter. More exactly, apparent diameter (i.e. diameter
divided by distance) of the star. So let's take a star that emits power
(absolute luminosity) <mathjax>$L$</mathjax> isotropically. If I am now trying to look at the
apparent luminosity <mathjax>$\ell$</mathjax>, this is <mathjax>$\frac{\text{Power received}}{\text{Area
of telescope}}$</mathjax>. Since our object is angularly much smaller than the telescope
beam, the power received does depend on the area of the telescope, which is
why we normalize by it to get the apparent luminosity. I
can compute that and say "look, my star is emitting a power L over the
whole <mathjax>$4\pi$</mathjax> solid angle, so over a unit solid angle it emits
<mathjax>$\frac{L}{4\pi}$</mathjax>. So what do I receive? <mathjax>$\frac{L}{4\pi}\frac{A_r}{d^2}$</mathjax>. So
<mathjax>$\ell = \frac{L}{4\pi}\frac{1}{d^2}$</mathjax>. So for a black body, <mathjax>$L = 4\pi r_e^2
\sigma_B T^4$</mathjax>. That's my Stefan-Boltzmann law. So <mathjax>$\ell = \frac{4\pi
r_e^2}{4\pi d^2} \sigma_B T^4 = \expfrac{r_e}{d}{2}\sigma_B
T^4$</mathjax>.</p>
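<p>A one-line worked example of this last formula with round solar numbers (standard
values, inserted here only as an illustration, not from the lecture): it reproduces
the solar constant of roughly 1360 W/m<sup>2</sup>.</p>
<pre><code>sigma_B = 5.670e-8        # W m^-2 K^-4
r_e, d, T = 6.96e8, 1.496e11, 5772.0   # solar radius (m), Earth-Sun distance (m), T_eff (K)

ell = (r_e / d) ** 2 * sigma_B * T ** 4
print("apparent luminosity of the sun ~", round(ell), "W/m^2")
</code></pre>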
<p>There are a number of objects that are vibrating or exploding (so they are
changing their diameters and often also their temperatures). So we can do
the same thing with supernovae. If we look at <mathjax>$\deriv{\ell}{t}$</mathjax>, we have
two terms: <mathjax>$\frac{2 r_e \deriv{r_e}{t}}{d^2}\sigma_B T^4$</mathjax> and <mathjax>$\frac{r_e^2}{d^2}
\sigma_B 4T^3 \deriv{T}{t}$</mathjax>. We can measure both of these changes over
time, so now if we make a spherical approximation, I can get
<mathjax>$\deriv{r_e}{t} = \deriv{r_\perp}{t} = \deriv{r_\parallel}{t}$</mathjax>. We can
measure this last derivative by Doppler shift. If we assume that, then we
can measure the distance. And that is how we measure distance to stars that
are too far to see parallaxes and how we are able to measure the expansion
of the universe.</p>
<p>Pretty clever method. It relies on this assumption, which if a star is
vibrating, is not an unreasonable assumption. It is a much less rigorous
assumption if you have an explosion, since the sphericity is less certain.</p>
<p>We used this method for supernova 1987A to check that it was indeed (at the
same distance as) the Magellanic cloud.</p>
<p>Another little thing that radio astronomers do is related to the antenna
temperature. This is very simple, actually: <mathjax>$I_\nu d\nu dA d\Omega =
\frac{2h\nu^3 d\nu}{c^2(e^{h\nu/\tau}-1)} dA d\Omega$</mathjax>. If <mathjax>$\frac{h\nu}{\tau}
\ll 1$</mathjax>, <mathjax>$e^{h\nu/\tau} - 1 \sim \frac{h\nu}{\tau}$</mathjax>, so this goes to
<mathjax>$\frac{2\tau}{c^2} \nu^2 d\nu dA_e d\Omega_e$</mathjax>. If our telescope is
diffraction-limited, then <mathjax>$dA\,d\Omega \approx \lambda^2$</mathjax>, and this is roughly <mathjax>$2\tau
\frac{\nu^2}{c^2} d\nu \lambda^2 = 2\tau d\nu$</mathjax>. So <mathjax>$I_\nu d\nu dA d\Omega \sim 2\tau
d\nu$</mathjax>, which means that the power received by my telescope
(independent of the area of the object, since this is a diffuse object
and we are diffraction-limited) is <mathjax>$2\tau d\nu$</mathjax>, or <mathjax>$\tau d\nu$</mathjax> per polarization.
So the power per unit bandwidth in a single polarization is just
<mathjax>$\tau = k_B T$</mathjax>: the antenna temperature is just the amount of power at low
frequency.</p>
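<p>A short sketch of this statement (assuming, as above, a diffraction-limited,
single-polarization antenna with <mathjax>$A\,\Omega = \lambda^2$</mathjax>, and using the CMB
temperature only as an illustrative source): at low frequency the collected power
per unit bandwidth is essentially <mathjax>$k_B T$</mathjax>, and it falls below that once
<mathjax>$h\nu/\tau$</mathjax> is no longer small.</p>
<pre><code>import math

h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def power_per_bandwidth(nu, T):
    """Single-polarization power per unit bandwidth for a diffraction-limited
    antenna (A*Omega = lambda^2) looking at a diffuse black body: (1/2)*I_nu*lambda^2."""
    I_nu = 2 * h * nu**3 / (c**2 * math.expm1(h * nu / (k_B * T)))
    return 0.5 * I_nu * (c / nu) ** 2

T = 2.725                          # CMB temperature, K
for nu in (1e9, 30e9, 300e9):      # 1, 30, 300 GHz
    print(nu, "Hz :", power_per_bandwidth(nu, T), " vs k_B*T =", k_B * T)
</code></pre>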
<p><a name='26'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Phonons, Quantum gases: April 2, 2012</h1>
<p>We have the third midterm a week from Friday. It will cover material up
through Friday's lecture.</p>
<p>So, phonons are vibrations in a crystal and are quantized. This means that
as before, <mathjax>$\epsilon_1 = \hbar\omega$</mathjax>, and <mathjax>$\epsilon_\omega =
\avg{s_\omega}\hbar\omega$</mathjax>. If you look at the dispersion relations,
plotting energy and momentum, <mathjax>$\omega, k$</mathjax> (proportional by a <mathjax>$\hbar$</mathjax>
factor), in crystals there are three acoustic modes. To a first order, they
are linear at very low energy.</p>
<p>We will make some approximation, forget that they are of different
velocities, and just consider all the phonons to have the same velocity. So
we will say the energy <mathjax>$E = p c_s$</mathjax>. Typically, this velocity is of the
order of a few kilometers per second, which is roughly <mathjax>$10^{-5}c$</mathjax>. There is
one detail we need to take into account: a crystal is not a continuous
medium. If I'm speaking of transverse phonons, I can move the atoms
transversely, and this corresponds to a certain wavelength and
momentum. It's clear that wavelengths that are too short are equivalent to
a much longer wavelength: you forget about the oscillations between the
particles. Thus <mathjax>$k = \frac{p}{\hbar} \le \frac{\pi}{a} = k_D$</mathjax>. There is a limit
on the momentum, <mathjax>$p_D = \frac{\hbar\pi}{a}$</mathjax>, which is called the Debye limit. The viable
region, therefore, is known as the Brillouin zone. The corresponding
frequency is <mathjax>$\omega_D = \frac{\pi}{a}c_s$</mathjax>.</p>
<p>One way of thinking about it: suppose I have a crystal with N atoms, each
of which can move in three directions. I therefore have 3N degrees of
freedom. So the sum from 0 to the Debye limit of my density of states,
multiplied by the three polarizations, has to be equal to 3N. If you work
out the computation (done in the notes, with spherical approximation), we
get <mathjax>$\omega_D = c_s (6\pi^2 \frac{N}{V})^{1/3}$</mathjax>. This is slightly different from what
you should get, but this is merely a result of the spherical approximation.</p>
<p>The thing you just should remember is that there is a limit in the energy
up to the momentum due to the fact that this is not a continuous medium.</p>
<p>So: does the Debye cutoff <mathjax>$\theta = \frac{\hbar\omega_D}{k_B}$</mathjax> matter at low
temperature? Evidently not. As in the case of photons, I will get the mean occupation
number as <mathjax>$\frac{1}{\exp(\hbar\omega/\tau) - 1}$</mathjax>. These are bosons, and the
chemical potential is zero because they can disappear at the walls. If I
have a small temperature, I will excite only low-energy phonons. At low
temperature, same results as the black body but replacing <mathjax>$c$</mathjax> by <mathjax>$c_s
\approx 10^{-5} c$</mathjax>. In particular, <mathjax>$U = 3\int d^3 x \int_0^{p_D} \frac{d^3
p}{h^3} \frac{\hbar\omega}{\exp(\hbar\omega/\tau) - 1}$</mathjax>, and I will have a similar
result where <mathjax>$U$</mathjax> goes as <mathjax>$T^4$</mathjax>, except we have three polarizations as
opposed to two, and the velocity is different. <mathjax>$C \propto \frac{1}{c_s^3}
T^3$</mathjax>, <mathjax>$\sigma \sim \frac{1}{c_s^3} T^3$</mathjax>, <mathjax>$\avg{N_p} \sim \frac{1}{c_s^3}
T^3$</mathjax>. What you can show easily (which I will not do in detail) is that <mathjax>$U
\propto N\expfrac{T}{T_D}{4}$</mathjax>, and that <mathjax>$C \propto N\expfrac{T}{T_D}{3}$</mathjax>.</p>
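<p>The <mathjax>$T^3$</mathjax> behaviour is easy to see numerically. A sketch of the Debye heat
capacity per particle (the 400 K Debye temperature below is an arbitrary
illustrative choice): at low <mathjax>$T$</mathjax> it follows <mathjax>$\frac{12\pi^4}{5}\expfrac{T}{T_D}{3}$</mathjax>,
and at high <mathjax>$T$</mathjax> it approaches the classical value 3 (in units of <mathjax>$k_B$</mathjax>).</p>
<pre><code>import numpy as np

def debye_C_over_NkB(T, theta_D, npts=200000):
    """Debye heat capacity per particle in units of k_B:
    9*(T/theta_D)^3 * integral_0^{theta_D/T} of x^4 e^x/(e^x - 1)^2 dx."""
    x = np.linspace(1e-8, theta_D / T, npts)
    integrand = x**4 * np.exp(x) / np.expm1(x) ** 2
    return 9.0 * (T / theta_D) ** 3 * np.trapz(integrand, x)

theta_D = 400.0                       # illustrative Debye temperature, K
for T in (2.0, 4.0, 8.0, 400.0, 1200.0):
    print("T =", T, "K   C/(N k_B) =", debye_C_over_NkB(T, theta_D))
print("low-T coefficient 12*pi^4/5 =", 12 * np.pi**4 / 5)
</code></pre>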
<p>Consequences: with a very small heat capacity, we can measure very small
deposited energies via large temperature changes.</p>
<p>We are sensitive enough to measure the difference in temperature of the
crystal when we look at some very distant, very faint radiation. These are
among the most sensitive instruments you can imagine, and this is because
the heat capacity plummets at very low temperatures. Okay, so probably
enough. The phonons behave essentially exactly as the photons. The velocity
of sound is roughly <mathjax>$10^{-5} c$</mathjax>, and there are three polarizations. Else,
almost everything else applies. The fact that we have a limit in the
momentum of the phonons due to the fact that the crystal is not continuous,
is irrelevant at low temperatures.</p>
<p>Do the calculation once; nothing is difficult.</p>
<p>Let us speak about quantum gases. This is the last chapter, which is really
at the core of the course. So let's just summarize what we did: we
basically started from the H theorem (microcanonical methods of counting
states). We can physically deduce all of thermodynamics. But you would
agree that counting states (especially when you impose restrictions) is
actually not very easy. So there was the trick that Boltzmann had invented,
which was to put the system in equilibrium with a reservoir. The whole
ensemble (reservoir + system) was isolated. We could apply once and for all
the microcanonical method, and we got that the probability of a certain
state of energy <mathjax>$\epsilon$</mathjax> was proportional to <mathjax>$e^{-\epsilon/\tau}$</mathjax>.</p>
<p>We could generalize this with Gibbs if the number of particles was not
constant, i.e. the grand canonical method, where the probability of states
goes as <mathjax>$e^{-\frac{\epsilon-\mu}{\tau}N}$</mathjax>. What we then derived (either
from Boltzmann or Gibbs) was the black body radiation, where the average
number of photons <mathjax>$\avg{s}$</mathjax> was <mathjax>$\frac{1}{\exp(\hbar\omega/\tau) - 1}$</mathjax>. I
would now like to use this method, but for particles where <mathjax>$\mu \neq 0$</mathjax>. So
we derived that the mean number of particles in a state of energy
<mathjax>$\epsilon$</mathjax> was <mathjax>$\frac{1}{\exp((\epsilon-\mu)/\tau) \pm 1}$</mathjax>. Remember that
fermions don't blow up at <mathjax>$\epsilon - \mu = 0$</mathjax>.</p>
<p>There is something we will speak about, the Bose-Einstein condensation,
which is due to the fact that this fraction actually diverges.</p>
<p>skip density of states. How do we determine chemical potential? We compute
the mean number of particles. How do we compute this number? Integrate the
product of the average value with the density of states.</p>
<p><a name='27'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Fermi-Dirac/Bose-Einstein: April 4, 2012</h1>
<p>No housekeeping.</p>
<p>Focus on density of states to be sure that this concept is ingrained in
your minds by now. Focus on Fermi-Dirac, and basically what we will see is
that at normal operating temperatures, <mathjax>$\tau \ll \mu$</mathjax>. There are a number of
interesting approximations we can make which allow us to make these
calculations.</p>
<p>Once again: recall that the minus sign in the Bose-Einstein distribution is
responsible for the Bose-Einstein condensation, which we will cover. There
is no divergence for the Bose-Einstein distribution because <mathjax>$\mu$</mathjax> is
slightly negative, so we don't have to worry about the case where they are
equal.</p>
<h1>Density of States</h1>
<p>We have <mathjax>$\avg{s(\epsilon)} = \frac{1}{\exp\parens{\frac{\epsilon -
\mu}{\tau}} \pm 1}$</mathjax>. What we have to do is sum over <mathjax>$g_i \avg{
s(\epsilon_i)}$</mathjax>. As we go up in energy, the density of states increases. In
that case, we can replace this sum with an integral approximation:
<mathjax>$\int_0^\infty \avg{s(\epsilon)} D(\epsilon)
d\epsilon$</mathjax>. The approximation is questionable at small energy, where states are
sparse. For Fermi-Dirac it does not matter much (since there are few
particles at small energy), but for Bose-Einstein it will matter. Strictly, the
integral replacement only holds for large <mathjax>$\epsilon$</mathjax>.</p>
<p>We know that in phase space, the number of spatial quantum states
(orbitals) is <mathjax>$\frac{d^3 x d^3 p}{h^3}$</mathjax>. I will need to multiply by some
kind of degeneracy factor <mathjax>$g$</mathjax> (e.g. number of spin states: for an electron,
this would be equal to 2).</p>
<p>In order to go to <mathjax>$\epsilon$</mathjax>, I will have to integrate over space and the
angles and make the change of variables from <mathjax>$p$</mathjax> to <mathjax>$\epsilon$</mathjax>. Thus we have
<mathjax>$\frac{g}{h^3}\int_{\text{space}} d^3 x \int_{\text{angles}} p^2 dp d\Omega
= \frac{4\pi g V}{h^3}p^2 dp = \frac{gV p^2 dp}{2\pi^2 \hbar^3}$</mathjax>. Now we
must distinguish between two cases.</p>
<p>Non-relativistic (<mathjax>$\epsilon = \frac{p^2}{2m} \implies p =
\sqrt{2m\epsilon}$</mathjax>). Most of literature does not put volume in density of
states, since that is a trivial factor in front.</p>
<p>We can replace our <mathjax>$p^2$</mathjax> by <mathjax>$2m\epsilon$</mathjax> and <mathjax>$dp$</mathjax> by <mathjax>$\sqrt{\frac{m}
{2\epsilon}}d\epsilon$</mathjax>, so we now have <mathjax>$D(\epsilon) d\epsilon = \frac{g}{4\pi^2}
\expfrac{2m}{\hbar^2}{3/2} \sqrt{\epsilon} d\epsilon$</mathjax>. In two
dimensions, instead of <mathjax>$p^2 dp$</mathjax> we would have <mathjax>$pdp$</mathjax>, which means that the
density of states is constant in energy.</p>
<p>In general, we are relativistic, and so <mathjax>$\epsilon = \sqrt{(pc)^2 +
m^2c^4}$</mathjax>. If we are ultrarelativistic, <mathjax>$\epsilon = pc$</mathjax>. If you do the
calculation for the ultrarelativistic case, the density of states goes as <mathjax>$\epsilon^2
d\epsilon$</mathjax>.</p>
<p>Recall: we determine <mathjax>$\mu$</mathjax> by computing the mean number of particles.
That is, <mathjax>$V\int_0^\infty \avg{s(\epsilon)} D(\epsilon) d\epsilon$</mathjax> (with this
convention). It does not really matter; just stick to <em>one</em> convention. The
number of states is always proportional to the volume: comes from the <mathjax>$d^3
x$</mathjax>.</p>
<p>One interesting thing is that in the limit of small occupation number this
gives us back the result of the Boltzmann distribution. In the case
where <mathjax>$\epsilon - \mu$</mathjax> is comparable to <mathjax>$\tau$</mathjax> or smaller, the occupation
numbers are of order one: we have degeneracy, and this is a fully quantum gas.</p>
<p>Number of particles is just a sum over energies of occupation number
multiplied by density of states (as we've determined previously). So that's
a way you can determine <mathjax>$\mu$</mathjax>: <mathjax>$\mu(\tau)$</mathjax> is set by the requirement that N
is the total number of particles. (Unlike the black body, where <mathjax>$N_\gamma \propto
\tau^3$</mathjax> -- there you will have to do this integral using the ultrarelativistic density
of states).</p>
<p>The energy is also trivial to get: <mathjax>$U = \avg{\epsilon} = V \int_0^\infty
\frac{\epsilon D(\epsilon) d\epsilon}{\exp\parens{\frac{\epsilon -
\mu}{\tau}} \pm 1}$</mathjax>. You will get the classical case only if you neglect
the <mathjax>$\pm 1$</mathjax>.</p>
<p>The entropy is not a particularly useful expression, except to note that for
Fermi-Dirac it is only appreciable in the region of the chemical potential and
goes back to zero away from it. There is no disorder where <mathjax>$\avg{s} = 1$</mathjax>, and there is
no disorder in states that are empty. This is a theme with Fermi-Dirac: all
the action is around the chemical potential.</p>
<p>Similar things happen for the Bose-Einstein, but we'll come back to that.</p>
<p>So: for Fermi-Dirac, for <mathjax>$\tau = 0$</mathjax>, we have a perfectly square
distribution. In that case, we can trivially do the calculations: <mathjax>$N = V
\int_0^\mu D(\epsilon) d\epsilon$</mathjax>.</p>
<p>Let's first consider a nonrelativistic gas. In that gas, <mathjax>$D(\epsilon) =
\frac{g}{4\pi^2} \expfrac{2m}{\hbar^2}{3/2}\sqrt{\epsilon}$</mathjax>. I can
put that back into our expression, and we get <mathjax>$N = VA \frac{2\mu^{3/2}}{3}$</mathjax>, where
<mathjax>$A$</mathjax> is the prefactor in <mathjax>$D(\epsilon)$</mathjax>.</p>
<p><mathjax>$\mu(\tau = 0) = \epsilon_F = \text{Fermi energy}$</mathjax>. This goes as <mathjax>$\expfrac
{N}{V}{2/3} \equiv n^{2/3}$</mathjax>. If you take the density of free electrons in a
metal, <mathjax>$\epsilon_F \sim 5\text{ eV} \to \sim 4 \cdot 10^4 \text{ K}$</mathjax>. By
the time I am finished, I am obliged to require my electrons to have a
fairly large energy. Velocities on the order of <mathjax>$.003c$</mathjax>. This is just a
result of the Pauli exclusion principle. I would like you to know
this. As a consequence, <mathjax>$4 \cdot 10^4 \gg \text{ lab temperature}$</mathjax>. This
approximation is quite good, in practice. So we are very close to a square
distribution.</p>
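<p>A short numerical sketch of these orders of magnitude, assuming a copper-like
conduction-electron density of <mathjax>$8.5\times 10^{28}\ \mathrm{m^{-3}}$</mathjax> (an
assumption for illustration; the lecture's ~5 eV corresponds to a somewhat lower
density):</p>
<pre><code>import math

hbar, m_e, k_B = 1.054571817e-34, 9.1093837015e-31, 1.380649e-23
eV, c = 1.602176634e-19, 2.99792458e8

n = 8.5e28     # conduction-electron density, m^-3 (copper-like, assumed)

# eps_F = (hbar^2 / 2m) * (3*pi^2*n)^(2/3), spin degeneracy g = 2 already included
eps_F = hbar**2 * (3 * math.pi**2 * n) ** (2.0 / 3.0) / (2 * m_e)
v_F = math.sqrt(2 * eps_F / m_e)

print("eps_F =", eps_F / eV, "eV")
print("T_F   =", eps_F / k_B, "K")
print("v_F/c =", v_F / c)
</code></pre>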
<p>So once I have the chemical potential, I can compute the energy, <mathjax>$U = V
\int_0^{\epsilon_F} \epsilon D(\epsilon) d\epsilon$</mathjax>. I will have to
integrate <mathjax>$\epsilon^{3/2}$</mathjax>, so, if you do the math, this will give <mathjax>$U =
\frac{3}{5} N \epsilon_F = F$</mathjax> (no disorder, so <mathjax>$F = U$</mathjax>). I can also compute <mathjax>$p =
-\pderiv{F(\tau, V, N)}{V} = \frac{2}{5} n \epsilon_F$</mathjax> (<mathjax>$\sigma = 0$</mathjax>,
constant <mathjax>$\tau$</mathjax>).</p>
<p>If you are ultrarelativistic, you can do the same calculation, but you get
different coefficients and formulae. Does not matter; these are the same
idea. With too many electrons, a white dwarf reaches this ultrarelativistic
limit: it becomes unstable and will collapse. Similarly for neutron stars.</p>
<h1>Interpretation</h1>
<p>Pauli exclusion principle. Must apply Heisenberg uncertainty principle,
large relative momentum, since position is constrained to within this
lattice.</p>
<p><mathjax>$\mu$</mathjax> does not change very much with temperature, but it does change. The
distribution rounds off. Reason for slight decrease in 3 dimensions:
density of states increases with energy. When we are rounding the
distribution, what is happening is that in the tail, we have more states
available, and therefore the contribution to the integral is larger.</p>
<p>Origin of thermocouple effect.</p>
<p><a name='28'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Fermi-Dirac/Bose-Einstein: April 6, 2012</h1>
<p>Once again, no special announcements. Midterm in a week. I will tell you
during this lecture or the next lecture where the midterm finishes.</p>
<p>What we will speak first about is holes and electrons. More precisely,
hole-electron excitations.</p>
<p>Then we will get into crystals and speak about metals and heat capacity of
metals.</p>
<p>So that's the program for today.</p>
<p>The discussion we had at the end of the last lecture applies not just to
electrons, but to any particle with half-integer spin: you cannot have more than
one particle per quantum state.</p>
<p>Fermi energy: ~5eV in a metal. Much much bigger than temperature in normal
conditions. To a first order, this is a square distribution (<mathjax>$\mu \gg
\tau$</mathjax>). Therefore when <mathjax>$\epsilon &lt; \mu$</mathjax>, <mathjax>$\avg{s(\epsilon)} = 1$</mathjax>; when
<mathjax>$\epsilon &gt; \mu$</mathjax>, <mathjax>$\avg{s(\epsilon)} = 0$</mathjax>. (<mathjax>$\theta(\mu-\epsilon)$</mathjax>). Not
exact; there is some rounding off of the distribution due to thermal
fluctuations. One of the themes of this lecture (and of the last lecture)
is that all of the action occurs around the Fermi level.</p>
<p>We finished the last lecture by asking ourselves what happens when we
increase the temperature. The main thing that happens is that the distribution
rounds off; and there is no reason for the chemical potential to stay
put (thermocouple effect!). We have to push the chemical potential slightly
down.</p>
<p>This applies to three dimensions; for a two-dimensional electron gas, you
can show that chemical potential is constant, and for a one-dimensional
gas, you can show that it increases with temperature.</p>
<p>So what am I doing here? I am taking the same graph we just showed, but now
flipping the coordinates. At zero temperature, all the states are full. We
actually speak of the sea of electrons, which is totally filled at zero
temperature.</p>
<p>So what happens as the temperature increases (to first order, you can
ignore the change of the chemical potential)? The distribution starts to round off: the sea is not
totally filled -- electrons start to pop out of the sea, and we have
what is called a "hole". A hole is not truly a particle (it's the absence
of one, really), and in solid state we speak of "quasi-particles".</p>
<p>The first thing we can ask ourselves is the charge of this
quasiparticle. It has a positive charge.</p>
<p>Suppose I have an electric field. In some sense, I am tilting the energy:
<mathjax>$\epsilon = \epsilon_0 - \abs{q} V$</mathjax> (where V is the electric potential, q
is the elementary charge). So if I have a field going to the right, V going
down, and the electric field is constant and positive along the x, this
corresponds to tilting.</p>
<p>The hole will tend to migrate toward the right; the electron will tend to
migrate toward the left.</p>
<p>In some sense, holes act as anti-electrons (positrons). This is how Dirac
interpreted negative energy states in the solution of his equation for
half-integer spin particles.</p>
<p>We have a constant number of particles. <mathjax>$\avg{\frac{N}{V}} = \int_0^\infty
f(\epsilon) D(\epsilon) d\epsilon$</mathjax> (forgetting about temperature dependence
for now). Also equal to <mathjax>$\int_0^{\epsilon_F} D(\epsilon) d\epsilon$</mathjax> at zero
<mathjax>$\tau$</mathjax>.</p>
<p><mathjax>$$= \int_0^{\epsilon_F} f(\epsilon,\tau) D(\epsilon) d\epsilon +
\int_{\epsilon_F}^\infty f(\epsilon, \tau) D(\epsilon) d\epsilon =
\int_0^{\epsilon_F} D(\epsilon) d\epsilon
\\ \implies \int_{\epsilon_F}^\infty f D(\epsilon) d\epsilon = \int_0
^{\epsilon_F}\bracks{1 - f(\epsilon, \tau)}D(\epsilon) d\epsilon
$$</mathjax></p>
<p>Recall: <mathjax>$f$</mathjax> corresponds to electrons; <mathjax>$1-f$</mathjax> corresponds to holes. Hole-like
excitation, electron-like excitation.</p>
<p><mathjax>$u(\tau) - u(0) = \int_0^\infty \epsilon f D(\epsilon) d\epsilon -
\int_0^{\epsilon_F} \epsilon D(\epsilon) d\epsilon =
\int_{\epsilon_F}^\infty \epsilon f D(\epsilon) d\epsilon -
\int_0^{\epsilon_F} \epsilon (1-f) D(\epsilon) d\epsilon$</mathjax>. Adding zero, we
can get <mathjax>$\int_{\epsilon_F}^\infty (\epsilon - \epsilon_F) f D(\epsilon)
d\epsilon - \int_0^{\epsilon_F} (\epsilon - \epsilon_F) (1-f)D(\epsilon) d\epsilon$</mathjax>.</p>
<p>We can use the symmetry we had before. We can write this explicitly by
substituting the values we had for <mathjax>$f$</mathjax>, <mathjax>$1-f$</mathjax>. Holes: increasing energy
going "down"; electrons: increasing energy going "up".</p>
<p>So let's do, rapidly, energy band structure. Up until now, I've just
considered a gas of electrons; have not said anything about containment. In
practice, electrons are in some kind of medium, and it is easiest to
discuss crystals because then it is in a periodic medium.</p>
<p>In a (cubic) crystal, the atoms are regularly spaced. If you try to solve
the Schrodinger equation for this kind of periodic lattice, what you will
arrive at is that the energy is a function of the momentum (or wave
number), but there are regions of momenta where the electrons can tunnel
resonantly from one region to the next. In other words, they are free to
move if the wavelength is right. If the wavelength is not right, they will
get stuck locally.</p>
<p>You may have seen an energy band diagram. Energy gap: region of energy where
the electrons cannot propagate. So: we have several things that can happen,
depending on the element that we are considering. Sometimes, some states above
the gap (the conduction band) are populated. That is what happens for
metals. In some sense, this is what we have described so far (our Fermi sea)
-- we did not have to worry about the states in the valence band below.</p>
<p>The effective mass, related to the curvature of the band, is often
considerably smaller in crystals than the free-electron mass.</p>
<p>So the other possibility is that basically your Fermi level wants to be
inside the energy gap. In that case, at small temperature you have no
electrons available; they're all stuck in the valence band, so you have an
insulator.</p>
<p>We will see later on that if you have impurity levels, you could have
bending of the bands.</p>
<p>Let me briefly introduce the next subject, which I was hoping to finish by
the end of the lecture, which is the heat capacity of a metal. The heat
capacity will just be <mathjax>$\pderiv{U}{T} = k_B V\pderiv{}{\tau} \int_0^\infty
\epsilon f(\epsilon,\tau) D(\epsilon) d\epsilon$</mathjax>. The thing that
depends on the temperature here is <mathjax>$f$</mathjax>.</p>
<p><mathjax>$\pderiv{f}{\tau} = \frac{\epsilon - \mu}{\tau^2}
\frac{\exp\parens{\frac{\epsilon - \mu}{\tau}}}{\parens{\exp\parens{\frac{\epsilon - \mu}{\tau}} + 1}^2}$</mathjax>
(holding <mathjax>$\mu$</mathjax> fixed).</p>
<p>Midterm material will stop here.</p>
<p><a name='29'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Fermi-Dirac/Bose-Einstein: April 9, 2012</h1>
<p>Several things to discuss: midterm on Friday. Basically, the first seven
chapters of the notes, stopping at slide 10 of the notes without
questions. I will try to have something less long and less difficult than the
last midterm. Mostly interested in concepts. The difficulty many of you have
is that you get confused between the various concepts. You really should be
asking yourself what method to apply. Understand how all the concepts are
related to each other. Clear emphasis on black body &amp; Fermi-Dirac, but I will
try to judge how you put that in context with the rest. So one question will
come from before; things like the definition of entropy you need to know.</p>
<p>The two things I wanted to do today: metals -- <mathjax>$C \propto T$</mathjax>;
semiconductors. On Wednesday, we will finish the Fermi-Dirac and get into
Bose-Einstein.</p>
<p>Reminder: in crystals, the relationship between energy and momentum (what
we call in Physics the dispersion relations): gap between conduction and
valence bands -- energies are not allowed. Sometimes, the conduction band
is naturally filled. There are enough electrons in the crystal such that
you have a Fermi level above the energy of the conduction band.</p>
<p>In that case, we have basically the same formalism as we have had so far,
except that the energy is something like <mathjax>$\epsilon = \epsilon_c +
\frac{\parens{p - p_0}^2}{2m_e^*}$</mathjax>. This mass is not the mass of the
electron; it is the effective mass of the electron inside the crystal (which
is usually smaller). All we have done can be generalized to that case. The
only thing which really changes is the mass of the electron. (Also, usually
<mathjax>$p_0$</mathjax> goes away.)</p>
<p>Now, let's look at the heat capacity of the electrons in the crystal. In
order to have the heat capacity, I must compute the energy of the
electrons. That is very straightforward: <mathjax>$U = V\int_0^\infty \epsilon
\frac{1}{e^{\frac{\epsilon-\mu}{\tau}} + 1}D(\epsilon) d\epsilon$</mathjax>. Then <mathjax>$C
= \pderiv{U}{T}\Big|_V = k_B\deriv{U}{\tau} = k_BV\int_0^\infty \epsilon
\deriv{}{\tau}\parens{\frac{1}{e^{\frac{\epsilon - \mu}{\tau}} + 1}}D(\epsilon)
d\epsilon$</mathjax>. This is the derivative of <mathjax>$f(\epsilon, \tau)$</mathjax> with respect to <mathjax>$\tau$</mathjax>,
not <mathjax>$\epsilon$</mathjax>, so the action is occurring only around the chemical
potential. That is what we will use: it's the narrow region around <mathjax>$\mu$</mathjax>. We
have seen that <mathjax>$\mu$</mathjax> does not vary rapidly with temperature, so let's take
<mathjax>$\mu \approx \epsilon_F$</mathjax> (we have already used this, since we did not use the
chain rule for <mathjax>$\deriv{\mu}{\tau}$</mathjax>). The number of particles is <mathjax>$\int f(\epsilon, \tau)
D(\epsilon) d\epsilon$</mathjax>, but the number of particles is not changing, so
<mathjax>$\pderiv{N}{\tau} = 0$</mathjax>. </p>
<p>Thus <mathjax>$C = k_B V\int_0^\infty \frac{\parens{\epsilon - \epsilon_F}^2}{\tau^2}
\frac{e^{\frac{\epsilon - \epsilon_F}{\tau}}}{\parens{e^{\frac{\epsilon - \epsilon_F}{\tau}} +
1}^2} D(\epsilon) d\epsilon$</mathjax>. Extending the limits from <mathjax>$-\infty$</mathjax> to <mathjax>$\infty$</mathjax>
amounts to the approximation that <mathjax>$\epsilon_F \gg \tau$</mathjax>. This is a quantity that you
compute: <mathjax>$C \sim k_B D(\epsilon_F) A V \tau$</mathjax>, where <mathjax>$A$</mathjax> is a numerical constant.</p>
<p>This linear dependence on temperature is because the number of electrons
involved is proportional to temperature.</p>
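<p>As a quick numerical check (not from the lecture; a minimal sketch that extends the
limits to <mathjax>$\pm\infty$</mathjax> and takes <mathjax>$D(\epsilon) \approx D(\epsilon_F)$</mathjax> over the narrow
window), the dimensionless integral left over is <mathjax>$\pi^2/3$</mathjax>, which fixes the constant
<mathjax>$A$</mathjax> above:</p>
<pre><code>import numpy as np

# Dimensionless Sommerfeld integral from the heat-capacity expression above,
# with x = (eps - eps_F)/tau and limits extended to +/- infinity:
#   I = integral of x^2 e^x / (e^x + 1)^2 dx = integral of x^2 / (4 cosh^2(x/2)) dx
x = np.linspace(-60.0, 60.0, 200001)
dx = x[1] - x[0]
I = np.sum(x**2 / (4.0 * np.cosh(x / 2.0)**2)) * dx

print(I, np.pi**2 / 3)  # both ~3.2899, so A = pi^2/3 and C ~ (pi^2/3) k_B D(eps_F) V tau
</code></pre>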
<p>Taking into account phonons, <mathjax>$C_{tot} = C_e + C_\phi = \gamma T + AT^3$</mathjax>;
<mathjax>$C_e \ll \frac{3}{2}N k_B$</mathjax>. This, by the way, solved a great quandary that
people had at the beginning of the 20th century. We knew there were electrons in
metals at the time, and we could actually compute their density. The heat
capacity estimate using the ideal gas law was way off: only electrons
close to the Fermi energy affect anything.</p>
<h1>Semiconductors</h1>
<p>Semiconductors are substances where the Fermi level is somewhere in the
gap. At zero temperature, they will be totally insulators. The difference
between insulators and semiconductors is the magnitude of the gap. If the
gap is very big, you can only excite electrons from the valence band to the
conduction band at very high temperatures. For semiconductors, the gap is
somewhat smaller, and you can excite them at lower temperatures. We still
have our usual Fermi-Dirac distribution, but now the density of states is
somewhat different: you make parabolic approximations, and the density of
states of electrons is <mathjax>$D_e(\epsilon) d\epsilon = \frac{2}{4\pi^2}
\parens{\frac{2m^*_e}{\hbar^2}}^{3/2} \sqrt{\epsilon - \epsilon_c}
d\epsilon$</mathjax>. (The 2 is from the spin.)</p>
<p>The density of states for holes is inverted, since it is in the opposite
direction, and the masses are fairly different.</p>
<p>Basically, we can compute the total number of electrons: <mathjax>$n_{Te} = \int
f(\epsilon) D(\epsilon) d\epsilon = \int_0^{\epsilon_v} f(\epsilon)
D_h(\epsilon) d\epsilon + \int_{\epsilon_c}^\infty f(\epsilon)
D_e(\epsilon) d\epsilon$</mathjax>. Let us use the fact that <mathjax>$1 - \frac{1}{e^{\frac
{\epsilon - \mu}{\tau}} + 1} = \frac{1}{e^{\frac{\mu-\epsilon}{\tau}}+1}$</mathjax>
and rewrite this. What we get is that <mathjax>$\int_0^{\epsilon_v} \frac{1}{e^{
\frac{\mu - \epsilon}{\tau}} + 1}D_h(\epsilon) d\epsilon = \int_{\epsilon_c}
^\infty \frac{1} {e^{\frac{\epsilon - \mu}{\tau}} + 1}D_e(\epsilon)
d\epsilon$</mathjax>. Namely, the number of holes is the same as the number of (free)
electrons.</p>
<p>Note that from this, we can see that the number of holes in the valence
band decreases with increasing <mathjax>$\mu$</mathjax>. Furthermore, we can determine <mathjax>$\mu$</mathjax>
by imposing charge neutrality.</p>
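<p>In the intrinsic case (no impurities), a standard textbook result follows from
equating these two integrals with Boltzmann-tail approximations for <mathjax>$f$</mathjax> and <mathjax>$1-f$</mathjax>
(a sketch; this step is not worked out in the lecture):</p>
<p><mathjax>$$n_e = n_h \implies \mu = \frac{\epsilon_c + \epsilon_v}{2} +
\frac{3}{4}\tau \log\frac{m_h^*}{m_e^*}$$</mathjax></p>
<p>so at <mathjax>$\tau = 0$</mathjax> the chemical potential sits at mid-gap, and it drifts toward the
band with the smaller effective mass as the temperature rises.</p>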
<p>The interest of semiconductors is that impurities give localized states in
the gap (not in the band!). We still need to enforce charge
neutrality. Acceptors shift the Fermi level down toward the valence band, so I
will have holes in the valence band and mostly positive carriers: p-type
material.</p>
<p><a name='30'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Fermi-Dirac/Bose-Einstein: April 11, 2012</h1>
<p>Several housekeeping items, and midterm on Friday. What I would like to do
today is finish Fermi-Dirac and speak in particular about the Chandrasekhar
limit and Bose-Einstein condensation.</p>
<p>So: after Monday, we will have only two weeks of class remaining. The
question is what do we want to speak about after Fermi-Dirac and
Bose-Einstein. I would like to speak about phase transitions and the nonideal
gas, but this will take one or two lectures. So we have one week of free
subject we can choose. Propositions: transport phenomena (diffusion,
drift), semiconductors, or cosmology.</p>
<p>Correction: heat capacity in metals. When thinking of heat capacity, we had
to take the derivative with respect to temperature of <mathjax>$\int_0^\infty
\epsilon f(\epsilon,\tau)D(\epsilon)d\epsilon$</mathjax>. This is just
the energy of a Fermi-Dirac gas, and, taking the derivative with respect to
temperature, we can do a change of variables to <mathjax>$\epsilon^\prime \equiv
\epsilon - \epsilon_F$</mathjax> without changing our result (the Jacobian is 1).</p>
<p>So let me provide other examples of a degenerate Fermi gas. <mathjax>$^3$</mathjax>He has spin
<mathjax>$\frac{1}{2}$</mathjax>. cf. problem set. Very different behavior from <mathjax>$^4$</mathjax>He: phase
separation. Behaves as a magnetic fluid. Nobel Prize of Lee, Osheroff, and
Richardson. At low temperature, these two fluids will not mix, and that
is what we actually use for cooling to very low temperatures (dilution
refrigeration).</p>
<p>Nuclear matter: protons and neutrons in the nucleus are also fermions, and
a fairly good model (the bag model) is that these have relatively small
interactions with each other. They have surface tension; they tend to
stick together because of the bag. Not the very modern way of thinking in
terms of quarks and gluons and whatnot, but to first order a fairly good
approximation.</p>
<p>Typical radius of nucleus: <mathjax>$R \approx 1.3 \cdot 10^{-13} A^{\frac{1}{3}}$</mathjax>
cm. Velocity very high: <mathjax>$T_F \approx 3 \cdot 10^{11}$</mathjax> K. Fermi
momentum. Used here at LBL to produce first pions. Machines were not very
powerful at that time: one of first cyclotrons (380 MeV), and they sent an
<mathjax>$\alpha$</mathjax> onto a <mathjax>$^{12}C$</mathjax>, which produced this pion.</p>
<p>One of the most interesting applications of Fermi-Dirac statistics is in
white dwarfs and neutron stars. What I told you so far is that because we have
to stack the particles up in energy (I cannot put more than one particle
in each state), they have fairly large energies, on the order of the Fermi
energy. This results in pressure. So the equilibrium of a star is the
equilibrium between the (gravitational) potential energy and the pressure from
the kinetic energy.</p>
<p>There is one class of stars (white dwarfs), which is what our sun will
become. At some point in their lives, some stars will discard their
envelopes and contract into what is known as a white dwarf. High pressure;
very dense. What is providing the pressure is the degenerate gas of
electrons. There are of course protons and neutrons around, but they are
just not active in defining the pressure.</p>
<p>One way to express this equilibrium between the gravitational pull (the star
tends to collapse) and the pressure (which tends to keep it from collapsing)
is to look at the kinetic energy and the potential energy. Let's compute
the potential energy: let us assume our star is round. Take a constant
density <mathjax>$\rho$</mathjax> (not really true, but on average <mathjax>$\rho = \frac{M}{\frac{4}{3}\pi
R^3}$</mathjax> -- this means that <mathjax>$n \propto \frac{M}{R^3}$</mathjax>).</p>
<p>This potential energy has to be related to the gravitational force and go as
<mathjax>$\frac{GM^2}{R}$</mathjax>. There will be some geometric factor in front, which you
will compute in your problem set. There is a negative sign, of course, and
let us call this <mathjax>$U_{\mathrm{pot}} = -b\frac{M^2}{R}$</mathjax>. So what is the kinetic
energy? It goes as <mathjax>$\propto M\epsilon_F$</mathjax>. Let us distinguish between two
cases: first, if you are nonrelativistic, <mathjax>$\epsilon_F \propto n^{\frac{2}
{3}}$</mathjax>, so the kinetic energy goes as <mathjax>$\frac{M^{\frac{5}{3}}}{R^2}$</mathjax>. And if
I am ultrarelativistic (which can happen if the mass of the object is fairly
large), <mathjax>$\epsilon_F \propto n^{\frac{1}{3}}$</mathjax>, and so <mathjax>$U_k \propto \frac{
M^{\frac{4}{3}}}{R}$</mathjax>.</p>
<p>So in the nonrelativistic case, if I take the sum of the two energies, the
star has a minimum energy at a specific <mathjax>$R$</mathjax>. So the star is stable. This is
to be contrasted with the ultrarelativistic case: we then still have the
potential energy <mathjax>$U_p = -b\frac{M^2}{R}$</mathjax>, and a kinetic energy which goes as
<mathjax>$U_k = c\frac{M^{\frac{4}{3}}}{R}$</mathjax>. For small M, the star expands, goes to the
nonrelativistic case and becomes stable; at some sufficiently large M, the
sum is negative, and the star collapses. First described by Chandrasekhar
during his PhD. Degeneracy pressure cannot balance gravity if M is too big:
roughly <mathjax>$1.4 M_\odot$</mathjax>; stars beyond this limit will go supernova.</p>
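<p>An order-of-magnitude check of that limit (a rough sketch, not the lecture's
calculation: it drops all geometric factors and assumes two nucleons per electron,
as for a helium/carbon composition):</p>
<pre><code>import math

# Rough Chandrasekhar mass: balance GM^2/R against hbar*c*N_e^(4/3)/R,
# with N_e = M/(mu_e*m_p). All numerical prefactors are dropped (assumption),
# so only the order of magnitude is meaningful; the full result is ~1.4 M_sun.
hbar = 1.055e-34   # J s
c    = 3.00e8      # m/s
G    = 6.67e-11    # m^3 kg^-1 s^-2
m_p  = 1.67e-27    # kg
mu_e = 2.0         # nucleons per electron (assumed composition)

M_ch = (hbar * c / G)**1.5 / (mu_e * m_p)**2
print(M_ch / 1.99e30, "solar masses, order of magnitude")  # ~0.5
</code></pre>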
<p>Same story with neutrons, except now <mathjax>$\epsilon_F \approx 300 MeV$</mathjax>. There is
also a Chandrasekhar-like limit of about <mathjax>$3 M_\odot$</mathjax>, beyond which the star
collapses on itself and also gives rise to a supernova. Accretion from
neighboring stars can push a star over these limits.</p>
<p>Large stars collapse into neutron stars, and if the neutron star is too
large, it will further become a black hole.</p>
<p>Type Ia supernovae can be normalized and used as standard candles (the full
story is a little more complex): a standard amount of material that just
goes. What you see is that the universe's expansion is accelerating instead of
decelerating. Gravity effectively becomes repulsive (a result of dark
energy). Recent Nobel Prize (Saul Perlmutter).</p>
<h1>Bose-Einstein Condensation</h1>
<p>The plus one is what gives you the square distribution in the Fermi-Dirac.</p>
<p>Note that the Pauli exclusion principle only applies to Fermi-Dirac
particles. I can put as many particles in the ground state as I want. Talk
about choosing <mathjax>$\mu$</mathjax> (since it is, after all, one of our unknowns)
such that <mathjax>$\mu \approx \epsilon_0 - \frac{\tau}{N}$</mathjax>: set <mathjax>$\frac{N(\epsilon_0
- \mu)}{\tau} = 1$</mathjax>. The issue with using the method we've been using up to this
point (i.e. for the Fermi-Dirac gas) is that the continuous approximation is not
accurate enough, since the gaps are actually important. You must rely on
the discrete sum. You do indeed solve this equation. What you get is that
indeed <mathjax>$N \approx \avg{s(0)} = \frac{1}{\exp(-\frac{\mu}{\tau}) - 1} \approx -\frac{\tau}
{\mu}$</mathjax>, so we get an incredibly small <mathjax>$\mu$</mathjax>: about <mathjax>$-10^{-26}$</mathjax> eV. And if you
look at the second state, there are very few particles in it.</p>
<p>Temperature dependence: calculate separately condensed phase and normal
phase. Can use continuous approximation for excited state, but must isolate
ground state. We show that we have a certain Einstein condensation
temperature; <mathjax>$\tau_E = \frac{2\pi\hbar^2}{m}\expfrac{N}{2.612 V}{2/3}
\implies N_{exc} = N\expfrac{\tau}{\tau_E}{3/2}$</mathjax>. For large densities,
<mathjax>$\tau_E$</mathjax> is not very small. </p>
<p><a name='31'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Bose-Einstein Condensation: April 16, 2012</h1>
<p>Today: In particular, speak about number of states, superfluidity in, for
instance, liquid helium, and discoveries of Bose-Einstein condensates.</p>
<p>Midterms will be graded by tomorrow, presumably. You will have them on
Wednesday. So the final is on May 7, so three weeks from today, early in
the morning (sorry about that, but you are accustomed now). Of course, it
will cover everything that you have covered. After Bose-Einstein
condensates, I will have a short chapter on phase transitions (that is an
important subject in statistical mechanics), and then we will spend some
time on cosmology next week to show you how you can apply this to a
practical problem (it is not that practical).</p>
<p>Office hours Wednesday from 3-4; today is as usual (11-12).</p>
<h1>Bose-Einstein Condensates</h1>
<p>Bose-Einstein had an occupation number of <mathjax>$\frac{1}{\exp\parens{
\frac{\epsilon - \mu}{\tau}} - 1}$</mathjax>. This negative one is critical and will
determine the behavior of the condensation. What we saw was that when
<mathjax>$\frac{\mu - \epsilon_0}{\tau} \sim -\frac{1}{N}$</mathjax> (for some <mathjax>$\epsilon_0$</mathjax>
being our ground state), roughly all of the particles are in the ground
state.</p>
<p>In principle we should make the calculation of <mathjax>$\mu$</mathjax> by just looking at the
total number of particles: by the usual sum. Recall that when measuring
<mathjax>$\mu$</mathjax>, we cannot use the integral approximation that we used in the
Fermi-Dirac case, since our states of low energy are not very close
together. The integral approximation does not take into account the spacing
between the states. When <mathjax>$\mu$</mathjax> is very small, this is not a very good
approximation. One way of thinking about it: states get denser at higher
<mathjax>$\epsilon$</mathjax>, but what happens with the Bose-Einstein condensation? When
<mathjax>$\frac{\epsilon_0 - \mu}{\tau} \sim \frac{1}{N} \ll \frac{\epsilon_1 - \epsilon_0}{\tau}$</mathjax>,
this spacing is critical: it will enforce that most of the particles are in
the ground state, and very few are in excited states.</p>
<p>So can we make calculations of this <mathjax>$\mu$</mathjax> analytically? No; this is a
numeric problem.</p>
<p>We can talk, however, of isolating the first term and then using the
integral approximation on all excited states. That is an approximation, and
we would like this to be equal to N. We are interested in the second term,
the number of excited states. We would like <mathjax>$\frac{\epsilon_0 - \mu}{\tau}
\sim \frac{1}{N}$</mathjax>, so we can replace the excited states with <mathjax>$N_{exc}
\equiv V \int_{ \epsilon_1}^\infty \frac{1}{\exp \parens{\frac{\epsilon_0 -
\mu}{\tau}} \exp \parens{\frac{\epsilon - \epsilon_0}{\tau}} - 1}
D(\epsilon) d\epsilon$</mathjax>.</p>
<p>Working through the math, and setting <mathjax>$\epsilon_0$</mathjax> to 0, we get <mathjax>$N_{exc} =
2.612 n_Q V$</mathjax>. It does not depend on N if <mathjax>$\frac{\mu}{\tau} = 0$</mathjax>. We thus
define the Einstein condensation temperature as <mathjax>$\tau_E \equiv \frac{2\pi
\hbar^2}{m} \expfrac{N}{2.612 V}{2/3}$</mathjax>, so <mathjax>$N_{exc} = N\expfrac{\tau}
{\tau_E} {3/2}$</mathjax>.</p>
<h2>Liquid <mathjax>$^4$</mathjax>He</h2>
<p>If you look in the slides, I have actually computed (also probably in
Kittel) the numerical values for <mathjax>$^4$</mathjax>He. If you apply this naively, you
will get about 3.1 K. So <mathjax>$^4$</mathjax>He is expected (if there were no interactions between
the atoms) to behave like a Bose-Einstein condensate. Not exactly true: it
behaves as a Bose-Einstein condensate below about 2.17 K. This is important:
this is called the Landau point (also called the lambda point because the
heat-capacity curve looks like a lambda). Helium is liquid below about 4.2 K;
the slope literally changes sign at 2.17 K.</p>
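<p>A quick numerical check of that naive 3.1 K number (a sketch; the liquid-helium
density of roughly 145 kg/m^3 is an assumed input, not given in the lecture):</p>
<pre><code>import math

hbar = 1.055e-34       # J s
k_B  = 1.381e-23       # J/K
m    = 4.0 * 1.66e-27  # kg, mass of a 4He atom
rho  = 145.0           # kg/m^3, approximate density of liquid 4He (assumed)

n = rho / m            # number density N/V
# tau_E = (2 pi hbar^2 / m) * (n / 2.612)^(2/3), converted to kelvin:
T_E = (2.0 * math.pi * hbar**2 / m) * (n / 2.612)**(2.0 / 3.0) / k_B
print(T_E)             # ~3.1 K, compared with the observed lambda point at 2.17 K
</code></pre>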
<p>All of your particles are in the same state, and this becomes a macroscopic
quantum state. Very fun to see. Basically, in the same way as in
electromagnetism, where the classical field amplitude of a plane-wave mode goes
as the square root of the number of photons, here we have a macroscopic wave
function whose amplitude goes as the square root of the number of particles.</p>
<p>With this quantum effect, you can have vortices which are quantized: they
carry a certain amount of angular momentum. You have the equivalent of the two-slit
experiment, where basically you have liquid helium go through two slots,
and it diffracts exactly like in Young's double-slit experiment. You have
basically all the interference phenomena.</p>
<p>The most dramatic macroscopic property is superfluidity. Not only dramatic,
it is a pain for experimentalists working at low temperature. Basically
what is happening is that the atoms are not subject to any kind of force
from the wall. They begin to flow on the wall as if it had no roughness
(explanation forthcoming!). It makes the helium flow with no viscosity over
the surface and go through cracks. One of the problems of experimentalists
working at low temperature is that something can be essentially leak-proof
above the Landau point (2.17K), but once you cross that threshold, bang!
the thing begins to leak like a sieve.</p>
<p>And of course at 2.17K this is not something you can just go and look at;
you'd have to warm up and try to understand where the leak could have come
from, redo the solder, get back down, and maybe nine times out of ten, this
thing is leaking again. That's why low-temperature physics is sometimes called
slow-temperature physics. It takes a lot of tries to fix a system which is
leaking.</p>
<p>To give you an idea of what is going on, I would like to ask you the following
question: when is energy transfer maximized? When the two masses are equal;
easy to show via conservation of energy and conservation of
momentum. Important consequence: if <mathjax>$M \ll m$</mathjax>, then <mathjax>$\frac{1}{2}mv^2 \to
0$</mathjax>. In one dimension (you can of course generalize to several dimensions),
with a particle of mass M, <mathjax>$E = \frac{p^2}{2M}$</mathjax>. I have to conserve energy
and momentum, so the dispersion relationship of my particle of mass <mathjax>$m$</mathjax> can
be expressed graphically by the intersection of two parabolas. If <mathjax>$m = M$</mathjax>,
the curves have the same width, so energy transfer is maximized. If m is
infinite, this is flat: I am not losing any energy.</p>
<p>We have a large effective <mathjax>$m$</mathjax>, but the analogy breaks down: the system is
very soft, so there is no way to transfer a lot of momentum. When we send a
little ball into superfluid liquid helium, it does not lose energy: it keeps
going as if it were in a vacuum.</p>
<p>If your velocity is large enough, you can lose energy to phonons. In liquid
helium, there are also quantized oscillations. You have a system with
excitations, and there are phonons. If I am below the tangent, there is no
way I can have phonons (i.e. if travelling below velocity of sound). Only
if I am above the speed of sound can I lose energy. It will only lose
energy to phonons, and not to kicking of the system. Can emit excitations
(phonons) if velocity large enough. This may remind you of Cherenkov
radiation. This is a phenomenon remarkably similar to that. If a particle
goes through a medium faster than the local velocity of light (smaller than c
because of the refractive index), then you will emit light. Same thing: if you
go above the velocity of sound, you will emit phonons.</p>
<p>So a lot of interesting physics; you can do the calculations, but the graph
is good enough to show what is happening.</p>
<p>By the way, there is one small experiment, which is somewhat interesting:
if you put a little pipe going through the surface of liquid helium in a
container with glass walls, and you begin to pump at 4K, and the
temperature of the helium goes down. Suddenly at 2.17K, you have a fountain
of liquid helium coming out of your tube. Very cool. Another thing that
happens is that the helium rises up on the sides and heats up to 2.17K
where it evaporates. It goes through cracks.</p>
<p>In the mid-1990s, there were interesting successful attempts by two groups
to artificially create Bose-Einstein condensation: one at NIST/JILA in
Boulder, and one at MIT.</p>
<p>This is an example where they were trapping atoms in the form of a ring,
which you can observe rotating. This is just the spectacular demonstration
that the particles are totally coherent.</p>
<p>Any way: how do we do that? What you need to make a Bose-Einstein
condensate: low temperature (need to cool atoms) and high density (of the
order of the quantum density). So how do you cool atoms? Having them bounce
on the wall is not a very efficient way of cooling them.</p>
<p>The breakthrough came from what was called laser cooling: suppose I have an
atom that I want to cool. This atom is going in many directions. Let's
choose an absorption line of this atom which has some resonance and
frequency. Instead of sending a laser at the resonance frequency, let's
send a laser slightly below this frequency. What is happening? This is if I
were in the rest frame of the atom. If the atom is moving towards the
laser, in this rest frame, it sees the frequency of the laser slightly
blue-shifted, so it absorbs the laser more, and it will emit the photons
over <mathjax>$4\pi$</mathjax> after a while, and it has lost kinetic energy. If it goes away
from the laser, it will scatter less (it will see the frequency
red-shifted). The net result is if I am sending laser light from all
directions, I will tend to cool my atoms and decrease their energy.</p>
<p>We can use the same idea to trap the atoms: put a magnetic field on the sides:
the resonance frequency changes, which obliges the particles to stay in the
region of zero magnetic field.</p>
<p>In practice this is a little more complex than this: you cannot make a
magnetic field that looks like an infinite square well, but you can have a
rotating magnetic field, so every time the particles want to go out, they
will see the magnetic field (whose energy will be higher: particles are
slow since they have been cooled). Two groups in our department are doing
that as their main research.</p>
<p><a name='32'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Fermi-Dirac, Bose-Einstein, Phase transitions: April 18, 2012</h1>
<p>Reasoning why chemical potential does not vary much at low temperatures: it
looks fairly rectangular under normal circumstances. When we have a finite
temperature, we have some rounding of the distribution.</p>
<p>The occupation number of holes: if I am a distance <mathjax>$\delta$</mathjax> below <mathjax>$\mu$</mathjax>,
it equals the occupation number of electrons a distance <mathjax>$\delta$</mathjax> above
<mathjax>$\mu$</mathjax>. The two are equal because of this symmetry. On the other hand, that's
not what we want to have: we want to multiply by the density of states. We have to plot
<mathjax>$f(\epsilon, \tau)$</mathjax> and integrate over that. In two dimensions,
<mathjax>$D(\epsilon)$</mathjax> is constant. So this is the same graph, except now we have
<mathjax>$D(\epsilon)$</mathjax> on our axis. At a distance <mathjax>$\delta$</mathjax> from the chemical
potential, now the <em>number</em> of holes is equal to the <em>number</em> of electrons
(as opposed to occupation number: we've taken into account density of
states, so this is for the entire system).</p>
<p>Because of this symmetry, the integral from 0 to infinity of <mathjax>$f(\epsilon, \tau)
D(\epsilon) d\epsilon$</mathjax> is exactly equal to <mathjax>$\int_0^\mu D(\epsilon)
d\epsilon$</mathjax>. This is known to be <mathjax>$\int_0^{\epsilon_F} D(\epsilon)
d\epsilon$</mathjax>. Slightly different from what happens in 3 dimensions:
<mathjax>$D(\epsilon) \propto \sqrt{\epsilon}$</mathjax>, so my function <mathjax>$fD$</mathjax> looks different
because I am losing less on the hole side than I am gaining on the electron side,
so I have to reduce the chemical potential a little bit as <mathjax>$\tau$</mathjax> rises above 0.</p>
<p>Don't confuse occupation number (which is symmetric) with the number (which
is not symmetric in general). Symmetric only in two-dimensional case
because <mathjax>$D(\epsilon)$</mathjax> constant.</p>
<p>Let me, then, finish rapidly what I wanted to say on Fermi-Dirac and
Bose-Einstein. We were speaking about this very nice experiment: BEC atoms,
cooling in a trap. Now becoming routine, but before were very difficult to
do with atoms. And now people are doing that with molecules; they are
making artificial crystals; there is a whole industry. They take atoms and
arrange them in a particular fashion and potential. Now that the technology
of cooling the atoms and trapping them is well understood, there is a lot
of physics happening.</p>
<p>Graph corresponds to spatial density of atoms. Claim: not a great
discovery; just a technical feat. Superconductivity: in low-temperature
superconductors, you have electrons pairing in Cooper pairs for phonon
interactions.</p>
<p>Condensation theory is a bad approximation: interactions between Cooper
pairs are important. The temperature at which superconductivity appears is
much smaller than you would naively compute. Similar effects: zero
resistance (as in superfluidity), vortices: quantization of flux, phase
shift effects: all the <mathjax>$n^2$</mathjax> behavior. Superconductor with two junctions is
equivalent to Young's double-slit experiment. Very similar properties; very
important devices.</p>
<p><mathjax>$^3He$</mathjax> is spin <mathjax>$\frac{1}{2}$</mathjax>, and you have pairing of the spins to create a
spin of 1 and produce superfluidity. Then, because the pairs carry spin, you
have very strange effects: magnetic properties.</p>
<p>This is a very important effect in condensed-matter physics. An emergent
phenomenon: completely different behavior at low temperature. Surprising:
not that low a temperature for sufficiently dense systems.</p>
<p>Energy density for both bosons and fermions goes as <mathjax>$T^4$</mathjax>. Useful when
considering early universe when considering expansion.</p>
<p>Pressure: we have never actually computed the force per unit area from
particles striking the wall. Once again, define your independent variables when
taking partial derivatives. What is constant is the energy and the number of
particles (if we want to use <mathjax>$p = \tau \pderiv{\sigma}{V}$</mathjax>).</p>
<p>The force per unit area on the wall can be readily computed in the
following way: I am considering a small area on the wall <mathjax>$dA$</mathjax>. Now, if I
have a particle coming in (I will assume, by the way, because of the
symmetry, that the angle of reflection is the same as that of incidence), the
force is merely <mathjax>$\pderiv{p}{t}$</mathjax>, i.e. what is the change of momentum per
unit time? And the pressure will be the force divided by <mathjax>$dA$</mathjax>. I will have
to compute this: <mathjax>$\frac{1}{dA\,\Delta t}\int 2p\cos\theta\, v\Delta t\, dA
\cos\theta\, n(p) p^2 dp\, d\Omega = 2\pi \int \cos^2\theta\, d\cos\theta \int 2pv
n(p)p^2dp$</mathjax>. The <mathjax>$\theta$</mathjax> integral gives me <mathjax>$\frac{1}{3}$</mathjax>, and what we have
is that my pressure is <mathjax>$P = \frac{4\pi}{3} \int pv\, n(p) p^2 dp$</mathjax>. If you are
nonrelativistic, <mathjax>$pv$</mathjax> is just <mathjax>$mv^2 =
2\epsilon$</mathjax>, so <mathjax>$P = \frac{2}{3}\frac{U}{V}$</mathjax>. If you are (ultra)relativistic, <mathjax>$pv$</mathjax> is just
<mathjax>$pc = \epsilon$</mathjax>, and the pressure is therefore <mathjax>$\frac{1}{3}\frac{U}{V}$</mathjax>. And check
with our various results: this is the same pressure as the thermodynamic
definition <mathjax>$\tau\pderiv{\sigma}{V}\Big|_{U,N}$</mathjax></p>
<p>Explanation for why we have pressure for Fermi-Dirac even at zero
temperature: I have to stack up my states in energy space, and I have to
have states that are high velocity even at zero temperature. That's one of
the interpretations of the pressure of a Fermi-Dirac gas.</p>
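<p>Putting this together with the degenerate Fermi gas (a short aside, using the
standard zero-temperature result <mathjax>$U = \frac{3}{5}N\epsilon_F$</mathjax>, which is not
rederived here):</p>
<p><mathjax>$$P(\tau = 0) = \frac{2}{3}\frac{U}{V} = \frac{2}{5}\frac{N}{V}\epsilon_F \neq 0$$</mathjax></p>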
<p>Phase transitions: system in contact with a reservoir, but not necessarily in
equilibrium. What is minimized? The Landau free energy. We have seen this: the free
energy is not defined because <mathjax>$\tau$</mathjax> is not defined. The energy is not minimized
because the system is constantly kicked by thermal fluctuations.</p>
<p><a name='33'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Phase transitions: April 23, 2012</h1>
<p>Would like to finish phase transitions if possible. The modern way of
looking at phase transitions involves the Landau description: consider the
Landau free energy <mathjax>$F_L = U_s - \tau_R \sigma_s$</mathjax> and the Landau free enthalpy
<mathjax>$G_L = U_s - \tau_R\sigma_s + p_R V_s$</mathjax>. The first one is used when considering
constant volume, and the second one is used at constant pressure.</p>
<p>Generally speaking, you will look at the dependence of <mathjax>$U_s, \sigma_s, V_s$</mathjax>
on an order parameter <mathjax>$\xi$</mathjax>, and we are looking at equilibrium, which is
obtained at the minimum.</p>
<p>You may be somewhat confused by the fact that you cannot define the state
of a system by just one parameter. We must actually also minimize with
respect to all the other variables.</p>
<p>This minimization usually comes in through the expression of <mathjax>$\sigma_s
(\xi)$</mathjax>. When you speak of <mathjax>$\sigma_s(\xi)$</mathjax>, usually the energy depends directly
on <mathjax>$\xi$</mathjax>, whereas <mathjax>$\sigma$</mathjax> depends on probabilities. You will maximize
<mathjax>$\sigma_s$</mathjax> at some given <mathjax>$\xi$</mathjax>.</p>
<p>When I was speaking of ferromagnetism, at one point I was changing in the
expression of the entropy <mathjax>$\frac{mB}{\tau} \to \tanh^{-1}\parens{\frac{M}
{nm}}$</mathjax>, I was already doing this minimization.</p>
<p>The net result is that if I plot F as a function of the magnetization, at
high temperature the magnetization wants to be zero (since that is the
minimum of the Landau free energy); as the temperature goes down, this curve
deforms until suddenly, at a critical temperature (the Curie temperature), it
develops a minimum at nonzero magnetization, and the equilibrium magnetization
becomes nonzero.</p>
<p>If I plot the magnetization as a function of the temperature, it is zero
above this temperature. This is a second-order phase transition, and you
move smoothly from <mathjax>$m=0$</mathjax> to <mathjax>$m \neq 0$</mathjax>. There is no discontinuity in
<mathjax>$m$</mathjax>. This is very different from the case in which we go from gas to
liquid. Continuous evolution: the thing that is discontinuous is the first
derivative.</p>
<p>Classical gases: what are we missing in our description (point-like
particles which just scatter when they make contact with each other)?</p>
<p>Issues: there is a limit to compressibility (we are not taking into account
the volume of the particles). Interaction forces (long-distance): attractive
Van der Waals forces (polarization due to fluctuations induces polarizations in
other nearby particles). So how do these forces look? I have a very strong
repulsive force when the atoms are in contact and a weaker attractive force
(potential falling off as <mathjax>$\frac{1}{r^6}$</mathjax>) when they are sufficiently far apart.</p>
<p><mathjax>$V \to V - Nb$</mathjax> (where <mathjax>$b$</mathjax> is the volume of the atom). So this is the
approximation: instead of a very steep repulsive force, we simply exclude the
volume of the atoms. For the attractive force, I will treat it in a similar
manner to what we did for magnetism: a mean field approximation. <mathjax>$\avg{U} \to \avg{U} -
\avg{\phi}\frac{N(N-1)}{2}$</mathjax>. I will say that <mathjax>$U = U_K - \frac{N^2 a}{V}$</mathjax>
because the average of <mathjax>$\phi$</mathjax> is <mathjax>$\frac{\int \phi d^3 r}{V} \equiv
\frac{2a}{V}$</mathjax>. I will make this approximation: there is an attractive
contribution that goes as <mathjax>$N^2$</mathjax>.</p>
<p>We want to compute <mathjax>$G_L$</mathjax>. I have <mathjax>$U_s = U_K - \frac{N^2 a}{V}$</mathjax>, and, going
back to the counting of states, <mathjax>$\sigma_s = N \bracks{\log \parens{\expfrac{m
U_K}{3\pi\hbar^2 N}{3/2} \frac{V - Nb}{N}} + \frac{5}{2}} \equiv N \log \frac{n_Q}{n} +
\frac{5}{2}N$</mathjax> (the Sackur-Tetrode formula, with the excluded volume).</p>
<p>Note that <mathjax>$U_K$</mathjax> is not necessarily <mathjax>$\frac{3}{2} N\tau_R$</mathjax>: I am not yet in
equilibrium with the reservoir, so there is no real way to define the system's
temperature.</p>
<p>What is the pressure? There is a critical pressure <mathjax>$\frac{a}{27b^2}$</mathjax> (don't
ask me; just result of calculation) above which things behave normally:
<mathjax>$G_L$</mathjax> normally, and as temperature goes down, <mathjax>$G_L$</mathjax> as a function of <mathjax>$V_s$</mathjax>
moves downward. But below this limit, I start to develop two minima: moves
up and to the left as temperature goes down.</p>
<p><mathjax>$\pderiv{G_L}{V_s} = 0$</mathjax> gives us the Van der Waals equation of state,
<mathjax>$\parens{p_R + \frac{N^2a}{V_s^2}}\parens{V_s - Nb} = N\tau_R$</mathjax>. I have
already defined <mathjax>$p_c = \frac{a}{27b^2}$</mathjax>, and <mathjax>$\tau_c = \frac{8a}{27b}$</mathjax>.</p>
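<p>Where those critical values come from (a short worked step, not spelled out in
the lecture: the critical point is where the isotherm has an inflection with zero
slope):</p>
<p><mathjax>$$p = \frac{N\tau}{V - Nb} - \frac{N^2 a}{V^2}, \qquad
\frac{\partial p}{\partial V}\bigg|_\tau = \frac{\partial^2 p}{\partial V^2}\bigg|_\tau = 0
\implies V_c = 3Nb, \quad \tau_c = \frac{8a}{27b}, \quad p_c = \frac{a}{27b^2}$$</mathjax></p>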
<p>At this critical temperature, it begins to develop a certain inflection
point, and the liquid/gas relationship emerges.</p>
<p>If you do this calculation numerically, it is not easy to plot: there is a
big difference between the volume of the gas and the volume of the liquid for
different values of the pressure compared to the critical pressure.</p>
<p>At high pressure compared to the critical pressure, you stay always in the
gas phase. As you change the temperature, the minimum moves and the volume
just changes; nothing special happens. If you are below the critical pressure,
at high temperature the volume is large and you see a minimum at large volume;
as the temperature drops, a second minimum develops: the liquid. That is what
is happening.</p>
<p>Transition from liquid to gas as we increase the temperature: it does not
go by itself: there is a potential barrier between the two phases: liquid
<mathjax>$\to$</mathjax> gas needs a wall or a dust particle (creating a bubble takes work).</p>
<p>Even if the gas has a smaller free enthalpy, I still have to overcome this
potential barrier (we are stuck in the liquid if nothing else
happens). Takes work to create a bubble.</p>
<p>Meta-stable states: superheated liquid or supercooled vapor: need surfaces
for transition to occur.</p>
<p>That is, you stay stuck unless you increase the temperature high enough that
there are no more local minima; at that point the transition happens extremely
brutally.</p>
<p><a name='34'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Phase transitions, Cosmology: April 25, 2012</h1>
<p>What I wanted to do was speak about the final. You have the right to have
four pages now (single-sided) of notes. As usual, my advice is to rewrite
your notes because this class is more about concepts and how they relate to
each other than formulae. As should be quite obvious from the midterms, if
you take the wrong formula and apply to a situation, the result is
unpredictable.</p>
<p>I think we all agree that 8am is not advisable. So what I can propose is a
review session either on Wednesday 4-6, Friday 10-12, or Friday 2-4 (Alex's
is on Thursday 1-3pm in 9 Lewis). I will focus the review on chemical
potential, since we have not really seen this before.</p>
<p>Strong preference for Wednesday.</p>
<p>So let's look at phase transitions. We were looking at this question of how
the system goes from liquid to gas as we increase the temperature. The
thing I wanted to draw your attention to is that for this kind of first-order
phase transition, the behavior is not continuous: there is a discontinuity
because there are two minima in <mathjax>$G_L$</mathjax>. Because there is a potential barrier
between the two minima, the system can be stuck in one of the states. It is
not true that if you heat (pure) water to about 100 degrees Celsius, it
will necessarily boil. It can be stuck in a metastable state of superheated
liquid. It will only boil because of defects.</p>
<p>Important in things like bubble chambers and particle detectors (important for
the detection of dark matter). The system can stay metastable for minutes. This
is fairly characteristic of what we call first-order transitions.</p>
<p>Chemical potential as a function of <mathjax>$p, \tau, N$</mathjax> has no dependence on
<mathjax>$N$</mathjax>. We showed that <mathjax>$G = N\mu(p,\tau)$</mathjax>.</p>
<p>Entropy of liquid is lower than that of gas with same parameters. Related
to the fact that there are fewer degrees of freedom, so smaller number of
states. Using that the Gibbs free energies are the same, <mathjax>$\Delta H = LN &gt;
0$</mathjax>, where <mathjax>$L$</mathjax> is the latent heat per particle.</p>
<p>Coexistence: you can follow the separation between the liquid and gas as a
function of pressure and temperature. When the Landau free enthalpy is
equal between the systems, you are on this locus where gas and liquid
coexist, and of course it stops when you reach critical pressure and
critical temperature, which we call the critical point. This is of course
of intense interest to us.</p>
<p>There is a very famous formula derived in the mid-nineteenth century by
Clausius and Clapeyron, which is very simple. Clearly we have
<mathjax>$G_L(p(\tau),\tau) = G_g(p(\tau),\tau)$</mathjax> (equation at the coexistence
line). Now, taking the total derivative with respect to <mathjax>$\tau$</mathjax>, we get
<mathjax>$\pderiv{G_L}{p}\deriv{p}{\tau} + \pderiv{G_L}{\tau} = \pderiv{G_g}
{p}\deriv{p}{\tau} + \pderiv{G_g}{\tau}$</mathjax>. So what are these terms?</p>
<p><mathjax>$dG = -\sigma d\tau + Vdp + \mu dN$</mathjax>, so <mathjax>$\pderiv{G}{p} = V$</mathjax>,
<mathjax>$\pderiv{G}{\tau} = -\sigma$</mathjax>. Thus we can solve: <mathjax>$\deriv{p}{\tau} =
\frac{\sigma_g - \sigma_l}{V_g - V_l} = \frac{1}{\tau}\frac{L}{v_g - v_l}$</mathjax>,
where <mathjax>$v_g \equiv \frac{V_g}{N}$</mathjax>. If you use that <mathjax>$v_l \ll v_g \sim
\frac{\tau}{p}$</mathjax>, then this is roughly <mathjax>$\frac{pL}{\tau^2}$</mathjax>, or <mathjax>$p \sim
\exp\parens{-\frac{L}{\tau}}$</mathjax>. If you plot the log of the vapor pressure
against <mathjax>$\frac{1}{\tau}$</mathjax>, you have a straight line, whose slope is (minus)
the latent heat per particle. Very
good approximation for water (given the number of assumptions we have made)
and ice (since we can do the same thing between solid and liquid). Also an
excellent approximation for <mathjax>$^4\mathrm{He}$</mathjax>.</p>
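<p>As an illustration of that straight line (a sketch; the two water vapor-pressure
points, roughly 70 kPa at 90 °C and 101 kPa at 100 °C, are assumed inputs, not from
the lecture):</p>
<pre><code>import math

k_B = 1.381e-23           # J/K
# Two nearby points on the water liquid-gas coexistence line (approximate, assumed):
T1, p1 = 363.15, 70.1e3   # 90 C
T2, p2 = 373.15, 101.3e3  # 100 C

# ln p = const - L/(k_B T)  =>  L = k_B * ln(p2/p1) / (1/T1 - 1/T2)
L = k_B * math.log(p2 / p1) / (1.0 / T1 - 1.0 / T2)
print(L / 1.602e-19, "eV per molecule")  # ~0.43 eV, close to the known ~0.42 eV
</code></pre>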
<p>That's basically all regarding phase transitions. These arise from
correlations between particles: no phase transitions with ideal gases. The
method used here is the mean field approximation as a first-order. Two
types: first order (coexistence, latent heat, metastability) and second
order (continuous transformation -- discontinuity in derivative). In first
order, there is a critical point. Very important in modern statistical
mechanics.</p>
<h1>Cosmology</h1>
<p>Let me dissipate ambiguities regarding presence on final. The details of
what I will tell you will not be on the final. But the kind of principles
that I am applying (the thermal physics and statistical mechanics) are
clearly on the final: these are the things we have spoken about in
considerable detail over the last 14 weeks.</p>
<p>What I would like to speak to you about is basically the thermal evolution
of the universe. We have something (we call this the Big Bang). Big
explosion: a lot of unknown particle physics at the beginning. At about a
tenth of a nanosecond in, what we see is fairly well defined. A few hundred
thousand years in gives us the microwave background.</p>
<p>How do we know that the universe is expanding? We can measure the Hubble
recession of distant galaxies. To a first approximation, their velocity is
proportional to their distance. There is essentially 80 years of research
where we have learned to measure distances, account for local velocities,
and more.</p>
<p>The second thing (which we have spoken about) is that we have observed this
background radiation (about 3K). The final thing (which I will speak about
in more detail on Friday) is that in the early universe we observe not only
protons and electrons, but that primordial helium and deuterium have been
formed -- and in order to understand how these things happened, we need a very
hot phase.</p>
<p>Best way to think about this expanding universe (which is mind-boggling
because everything is changing) is to divide out the expansion. We have
something called the scale parameter a(t) and go from the physical
coordinate to the comoving coordinate, where we take this expansion away.</p>
<p>Now, there is something that we call the Friedmann equation: the sum of
kinetic energy and potential energy is constant. We can compute this constant
in general relativity; it is related to the curvature of the universe. If the
universe is flat, the constant is zero.</p>
<p>For all practical purposes, we believe this all started from a phase
transition: inflation. This second-order phase transition led to
exponential expansion. What is going on? We assume that we have a field
which we have never seen (the order parameter -- the inflaton). The same
formalism that we've used, except now in quantum mechanics: instead of
classical variables, we now have quantum fields. As the temperature decreases,
this begins to develop a second minimum. This will induce a phase
transition. The system feels that its energy is not equal to zero, which leads
to an exponential increase of the scale parameter. We call that inflation. We
believe that in the space of less than a tenth of a nanosecond, the universe
has expanded by something like 60 e-foldings. For all practical purposes, this
is what we call the big bang.</p>
<p><a name='35'></a></p>
<h1>Physics 112: Statistical Mechanics</h1>
<h1>Cosmology: April 27, 2012</h1>
<p>Thermal history of the universe. How thermodynamics / statistical physics
is applied to the field. Before I do that: let me remind you that next week
we will have review sessions Wed 4-6 in 325 Leconte and Th 1-3 9 Lewis.</p>
<p>What I am going to do is finish talking about inflation, then speak of the
evolution of temperature as a function of time, and finally give you an
idea about nucleosynthesis: how the elements (which have nuclei) were
formed. This will give us the opportunity to speak about three important
aspects of the course: phase transitions, more general arguments about
evolution of entropy, and the mass-action law.</p>
<p>For instance: looking at a proton and an electron, we get hydrogen. Or a
neutron and a proton: deuterium. Or deuterium and a proton: Helium-3.</p>
<p>The universe is expanding in a homogeneous and isotropic manner. The
physical coordinates are related to comoving coordinates by a certain
expansion factor. This is interesting: it relates the speed of expansion to
the energy density.</p>
<p>Did speak on Wednesday regarding what happens in the early universe: phase
transition (postulate). We believe that we had a phase transition, where
what was happening as a function of order parameter, the Landau free energy
developed a minimum, and suddenly the universe wants to go to this
point. When it does that, it discovers that it has a nonzero energy
density, and it begins to expand.</p>
<p>On the order of 60 e-foldings.</p>
<p>So: why do we need something like that? Need to justify cosmic microwave
background. Remember this is the radiation from the plasma in the early
universe at a given temperature. Recombined into hydrogen, so universe
became transparent.</p>
<p>Things were close (in causal contact). Then things were put very far
apart. In GR, there is no problem with having the space expand faster than
the speed of light.</p>
<p>That was the main reason for inflation: some reason for extremely fast
expansion of universe, which then settles down.</p>
<p>Space flat, so no worry regarding initial conditions. Quantum fluctuations
are frozen in and expand with space. Will provide seed for large scale
structure.</p>
<p>If you plot the power spectrum of density fluctuations as a function of
spatial frequency <mathjax>$k$</mathjax> (just a Fourier transform), you get a power spectrum
that looks roughly parabolic (with a maximum), and we understand the
shape. If we measure the microwave background on this plot and extrapolate
from the expansion factor, it links perfectly with what we measure in the
galaxies in terms of structure. That was the great excitement about the
cosmic microwave background when we first measured these fluctuations: they
were right where we needed them to be.</p>
<p>We have no real mechanism for this field. Cosmology points to physics at
much higher energy: <mathjax>$10^{16}$</mathjax> GeV. Best accelerators are <mathjax>$10^4$</mathjax> GeV. One of
reasons for switching to cosmology.</p>
<p>What can we test? Measure the polarization of the microwave background: see
the imprint of primordial gravitational waves. This is very much related to
discussions in the media on multiverses.</p>
<p>What is constant in this comoving sphere during the expansion? The entropy
-- no heat transfer. The energy cannot be constant: the sphere is working
against the rest of the universe. There is this pressure. The volume
increases, <mathjax>$-pdV$</mathjax> acts, and so the energy inside decreases by <mathjax>$pdV$</mathjax>. The
entropy on the other hand should not decrease: universe is isotropic and
homogeneous.</p>
<p>No generation of entropy if there are no first order phase transitions
(which would mean irreversibility).</p>
<p>This tells us that the entropy per unit comoving volume has to be constant,
and <mathjax>$T \propto \frac{1}{a(t)}$</mathjax>. Remember: we did show that the energy density for
relativistic particles (which dominated the entropy during the early
universe) goes as <mathjax>$T^4$</mathjax>. There is a factor corresponding to the degrees of
freedom, which is 1 per polarization for bosons and <mathjax>$\frac{7}{8}$</mathjax> per
polarization for fermions. So the entropy density <mathjax>$\frac{\sigma}{V} \propto g^* T^3$</mathjax>.
That is related to the number of relativistic particles.</p>
<p>So what is the entropy per comoving volume? We have <mathjax>$S_{com} \propto a^3 g^*
T^3$</mathjax>, and requiring this to be constant gives us <mathjax>$T \sim
\frac{1}{a(t)}$</mathjax> (for fixed <mathjax>$g^*$</mathjax>). Same result as in GR.</p>
<p>The reason why the temperature goes as <mathjax>$\frac{1}{a(t)}$</mathjax> is related to the
Doppler shift. As the universe expands, the wavelength of the relativistic
particles is stretched out; as the wavelength increases, the frequency
decreases, and therefore the temperature decreases.</p>
<p>If there is no change in the degrees of freedom, <mathjax>$T \propto \frac{1}{a(t)}$</mathjax>. In
that graph, we used the fact that we can compute (from first principles) the
temperature of recombination of hydrogen to be ~3000K. The universe was then
about 1000x smaller than it is now.</p>
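<p>A one-line check of that factor (a sketch; it assumes today's CMB temperature of
about 2.7 K together with the ~3000 K recombination temperature quoted above):</p>
<pre><code># T is proportional to 1/a(t), so the linear expansion factor since recombination is:
T_rec, T_now = 3000.0, 2.725   # K (recombination vs. today's CMB, assumed values)
print(T_rec / T_now)           # ~1100: the universe was indeed ~1000x smaller
</code></pre>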
<p>If the number of degrees of freedom is changing, you have a kink in the
temperature evolution.</p>
<p>With high enough temperature compared to binding energy, the product,
for all practical purposes, does not exist.</p>
<p>Indeed, in this case, the chemical potentials add.</p>
<p>Problem: running out of time. Let me try to tell a little bit about how we
discuss the formation of atoms in the early universe. Neutrons, protons,
electrons. At the early part (before about 3 minutes in the age of the
universe), the energy was too high, and the neutrons plus protons could
not form deuterium. When the temperature dropped, deuterium was able to
form, and we had that. Then we could form Helium-3, Helium-4, etc. We have
to be a little careful: when we have two charged particles, we have a
positive (Coulomb) potential barrier from the EM forces between the particles.</p>
<p>Let me show just one thing: we can use this kind of argument to follow the
evolution of the density of the various components. In the space of a few
moments, you are forming all of the light nuclei in the universe. One can try
to measure the amount of deuterium and so on. You arrive at the fact that
ordinary matter is only a tiny part of what we observe in the universe.</p>
<p>By the way: cosmic microwave background gives exactly the same results with
totally different physics.</p>
<p>Conclusion: what I wanted to conclude was the link between nuclear physics
at small scale and the universe at large scale. Now attempting to explore
links between particle physics / quantum gravity and the universe.</p>
<p>Inflation: origin of small-scale quantum structure.</p></div></div>