# steveWang/Notes



EE 120: Signals and Systems

January 17, 2012.

• Instructor: Babak Ayazifar
• email: ayazifar@eecs.berkeley.edu
• Best way to contact him.
• If time-sensitive, make the subject line explosive. Literally.
• Does his best to respond. So that's that.
• Office: 517 Cory

• Three midterms.
• MT1: 20%. Date: Feb 14. Week of drop date (almost certainly)
• MT2: 25%. Date: TBA. Percentage tentative (alt. 20%)
• MT3: 25%. Date: Last lecture.
• Homework: 15%, drop lowest two scores.
• 4-6 homeworks.
• Work in groups of 3-5 people.
• Each member of group turns in separate document.
• Must constitute primarily own work.
• Write own name at top, names of collaborators beneath.
• Tentatively due Tuesdays so Babak can have OH on Monday evenings.
• Pop quizzes (often obvious when): 15%. Drop lowest one.
• Possible research paper project. 10% => MT2, MT3 weigh 20% each.
• Just get feet wet with journals.
• Same for everybody.

Since the course is not HW-heavy, there are weeks we don't have homework. So get together with groups and plow through old exams.

The other thing you can do is look at a couple of books. Most of them share this title: Signals & Systems. One of them is by Oppenheim, Willsky, Nawab. It's an expensive book, actually -- don't recommend you go out and buy it unless you can find one printed overseas. Excellent problems -- mostly MIT exams. He taught out of this book previously. (Both books mentioned here are second editions.)

There's another one with the same title by Hwei P. Hsu (Schaum's Outline) -- great for self-study because every problem is solved.

Basic Properties of Systems

• Linearity
• $x\to[F]\to y$

if:

$x_1\to[F]\to y_1$ $x_2\to[F]\to y_2$

then:

$\alpha x_1\to[F]\to\alpha y_1$ (scaling / homogeneity property) $x_1+x_2\to[F]\to y_1+y_2$ (additivity property)

Equivalently, the two properties combine into a single superposition condition. F is linear iff:

($\forall x_1,x_2\in X$ and $\forall \alpha_1,\alpha_2$: $\alpha_1 x_1+\alpha_2x_2\to[F] \to\alpha_1y_1+\alpha_2y_2$),

(where:

$x \in X$ input signal space

$y \in Y$ output signal space

$y = F(x)$

$y(t)$ output at time t

$y(n)$ output at sample $n$)

• example: resistor and stuff. Ohm's law. $\vec{J} = \sigma \vec{E}$ and whatnot.

• Making the system time-variant does not destroy linearity.
• ZIZO.
• converse not true.
• contrapositive/logical equiv: $\lnot \text{ZIZO} \implies \lnot \text{L}$.
• Modulation
• Parallel interconnection of linear systems produces a multiple-input linear system.
• Strictly speaking, this is a lie. It's not really parallel.
• Two-point moving average:
• Time Invariance
• $\hat{x}(t) = x(t-T) \implies \hat{y}(t) = y(t-T)$ $\forall x\in X, \forall T \in \mathbb{R}$

Hidden assumption: X is closed under shifts (i.e. $x\in X \implies \hat{x}\in X$).

• Time variance occurs, e.g., when something that isn't an input is time-dependent.
• Causality
• Basically, no dependence on future values.
• If two inputs are identical up to some point, then their outputs must also be identical up to that same point.
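These properties can be spot-checked numerically. Below is a minimal sketch (NumPy assumed; the two-point moving average and a squaring system are my illustrative examples, not from the lecture) testing superposition:

```python
import numpy as np

# Hypothetical sketch: numerically spot-check superposition for the
# two-point moving average y(n) = (x(n) + x(n-1))/2.
def moving_avg(x):
    x = np.asarray(x, dtype=float)
    xs = np.concatenate(([0.0], x[:-1]))  # x(n-1), zero initial condition
    return (x + xs) / 2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
a1, a2 = 2.5, -1.3

lhs = moving_avg(a1 * x1 + a2 * x2)
rhs = a1 * moving_avg(x1) + a2 * moving_avg(x2)
assert np.allclose(lhs, rhs)  # superposition holds: the system is linear

# A squaring system fails the scaling (homogeneity) test, so it is not linear:
square = lambda x: np.asarray(x) ** 2
assert not np.allclose(square(2 * x1), 2 * square(x1))
```

A finite random signal is enough to falsify linearity but only suggests (not proves) it; the algebraic argument above is the real proof.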

EE 120: Signals and Systems

January 19, 2012.

More on causality

(proof by contradiction of previous example)

(input zero up to a certain point, but the output is nonzero before said point. output preceding input. In itself, it does not tell non-causality, but if you know the system is either linear or time-invariant, then you know it can't be causal.)

L and C implies not P

If the system is linear AND causal, output cannot precede the input.

contrapositive: P implies not (L and C) = not L or not C.

for time invariance, we don't have to insist that the input is zero up to a point; it just has to be (any) constant. Output also doesn't have to be zero; it just has to be constant (not necessarily the same constant, obviously).

(TI and C) implies not P contrapositive: P implies not TI or not C.

anti-causality is defined as the exact opposite of causality. Everything we've said about causality, the inequalities change sign.

Bounded-input bounded-output (BIBO) stability.

if $\exists B_x < \infty$ s.t. $\abs{x(n)} \le B_x\ \forall n$, then $\exists B_y < \infty$ s.t. $\abs{y(n)} \le B_y\ \forall n$

every FIR (finite-duration impulse response filter) is BIBO stable.

LTI (linear time-invariant) systems

$x(n)$ can in general be decomposed into a linear combination of impulses (Kronecker deltas). $x(n) = \sum_{m=-\infty}^\infty x(m)\delta(n-m)$. (sifting property)

If you know the response of the LTI system to the unit impulse, you know the response to all discrete-time inputs -- decompose it into unit impulses.

convolution sum. $$x(n) = \sum_m x(m) \delta(n-m) \\ y(n) = \sum_m x(m)f(n-m) \\ y(n) = (x * f)(n) \\ y(n) = \sum_{m=-\infty}^\infty x(m)f(n-m) = \sum_{k=-\infty}^\infty f(k)x(n-k) \\ = (f*x)(n) \\ \therefore x * f = f * x$$
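The convolution sum can be implemented directly from the definition; a sketch (NumPy assumed; the signals are arbitrary illustrative values) confirming commutativity:

```python
import numpy as np

# Sketch: the convolution sum y(n) = sum_m x(m) f(n-m), computed directly
# from the definition for finite-length signals (zero elsewhere).
def conv(a, b):
    b = np.asarray(b, dtype=float)
    n = len(a) + len(b) - 1
    y = np.zeros(n)
    for m, am in enumerate(a):
        y[m:m + len(b)] += am * b   # a(m) contributes a shifted copy of b
    return y

x = np.array([1.0, 2.0, 3.0])
f = np.array([0.5, 0.5])            # two-point moving average impulse response

assert np.allclose(conv(x, f), conv(f, x))         # x * f = f * x
assert np.allclose(conv(x, f), np.convolve(x, f))  # matches numpy's convolve
```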

You can reverse the roles of the impulse response and the input signal.

Let $x(n) = e^{i\omega n}$. Then $y(n) = F(\omega)e^{i\omega n}$, where $F(\omega) = \sum_k f(k)e^{-i\omega k}$ is the frequency response of this LTI system.

Converges nicely iff the system is BIBO-stable. The frequency response of the system will be a very smooth function of omega.

BIBO stability for LTI systems

An LTI system F is BIBO stable iff its impulse response converges (absolutely summable in discrete case or integrable in continuous case).

IF the impulse response is absolutely summable, then F is BIBO stable (every bounded input produces a bounded output).

$$y(n) = \sum_k f(k)x(n-k) \text{ (I/O relation for an LTI system)} \\ \abs{y(n)} = \abs{\sum_k f(k)x(n-k)} \le \sum_k \abs{f(k)}\abs{x(n-k)} \\ \abs{x(n)} \le B_x\ \forall n \implies \abs{y(n)} \le \left(\sum_k \abs{f(k)}\right)B_x$$

F is BIBO stable $\implies \sum_n \abs{f(n)} < \infty$

contrapositive. I.e., we can find at least one bounded input that produces an unbounded output.

(Fudge something to cancel out signs so that the sum at $n=0$ becomes an absolute sum over all integers: choose the bounded input $x(n) = f^*(-n)/\abs{f(-n)}$ (and $0$ where $f(-n)=0$). This is the time-reversed, sign-matched version of the impulse response -- an auto-correlation, i.e. the convolution of a signal with a flipped copy of itself. Then $y(0) = \sum_k f(k)x(-k) = \sum_k \abs{f(k)}$, which diverges.)

$\frac{f(k)f^*(k)}{\abs{f(k)}} = \frac{\abs{f(k)}^2}{\abs{f(k)}} = \abs{f(k)}$

IIR Filters

$y(n) = \alpha y(n-1) + x(n)$. causal, $y(-1) = 0$.

(geometric sum; BIBO stability depends on the magnitude of $\alpha$ being less than 1.)
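A quick sketch of the recursion (illustrative $\alpha = 0.8$ and horizon $N = 20$; NumPy assumed) confirming the geometric impulse response and the geometric sum:

```python
import numpy as np

# Sketch: the causal first-order recursion y(n) = a*y(n-1) + x(n) with
# y(-1) = 0 has impulse response h(n) = a**n for n >= 0 (a geometric series).
def first_order(x, a):
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return np.array(y)

a, N = 0.8, 20                      # illustrative values; |a| < 1 => BIBO stable
impulse = np.zeros(N)
impulse[0] = 1.0
h = first_order(impulse, a)
assert np.allclose(h, a ** np.arange(N))

# |a| < 1: absolutely summable; partial sum approaches 1/(1-a),
# with the truncation error exactly a**N / (1-a).
assert abs(np.sum(np.abs(h)) - 1 / (1 - a)) < a ** N / (1 - a) + 1e-9
```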

EE 120: Signals and Systems

January 24, 2012.

LTI Systems and Frequency Response

Freq. response: $H(\omega)$ (continuous-time?)

Will learn a set of systems for which the sum doesn't converge, so we'll need new transforms: the Laplace transform (continuous time) and the Z transform (discrete time)

Two-point moving average:

$h(n) = (\delta(n) + \delta(n-1))/2$

$H(\omega) = (1 + e^{-i\omega})/2 = e^{-i\omega /2}(e^{i\omega /2} + e^{-i\omega /2})/2 = e^{-i\omega/2}\cos(\omega/2)$

frequency response of discrete-time systems is periodic: adding a multiple of $2\pi$ will not change the result.

$2\pi$-periodicity of discrete-time LTI frequency responses. This means, naturally, that we don't have to plot our functions everywhere; rather, we only need to worry about a single period.

(looks like $\abs{\cos(\frac{\omega}{2})}$)
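A numerical check of this frequency response and its $2\pi$-periodicity (NumPy assumed):

```python
import numpy as np

# Sketch: the two-point moving average h = [1/2, 1/2] has frequency
# response H(w) = exp(-i w/2) cos(w/2); verify the magnitude on a grid.
h = np.array([0.5, 0.5])
w = np.linspace(-np.pi, np.pi, 201)
H = h[0] + h[1] * np.exp(-1j * w)           # sum_k h(k) e^{-iwk}
assert np.allclose(np.abs(H), np.abs(np.cos(w / 2)))

# 2*pi-periodicity of any discrete-time frequency response:
assert np.allclose(H, h[0] + h[1] * np.exp(-1j * (w + 2 * np.pi)))
```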

eigenfunction property of discrete LTI systems -- $e^{i\omega n} \to H(\omega)e^{i\omega n}$

$H^*(\omega) = \sum h^*(k)e^{i\omega k}$

conjugate symmetry: $H^*(\omega) = H(-\omega)$. (generalization of "even" functions)

if $h(n) \in \mathbb{R}\ \forall n$, then for input $\cos(\omega n)$: $y(n) = \Re\{H(\omega)e^{i\omega n}\} = \abs{H(\omega )}\cos(\omega n+\angle H(\omega))$

$x\to H\to y$

$h(n) = \alpha^n u(n)$ (its running sum $\sum_{k=0}^{n}\alpha^k = \frac{1-\alpha^{n+1}}{1-\alpha}$ is the step response)

Ways to determine $H(\omega)$:

Method 1:

$H(\omega) = \sum_n h(n)e^{-i\omega n} = \sum_{n=0}^\infty (\alpha e^{-i\omega})^n$. Use the usual formula for a geometric series, since this converges.

Method 2:

Use the eigenfunction property of the complex exponential (valid provided the frequency response is defined). Let $x(n) \equiv e^{i\omega n}$ (a pure tone).

$y(n) = H(\omega)e^{i\omega n}$, so $y(n-1) = H(\omega)e^{i\omega n}e^{-i\omega}$. Substituting into the recursion: $H(\omega)e^{i\omega n} = \alpha H(\omega)e^{i\omega n}e^{-i\omega} + e^{i\omega n}$, hence $H(\omega) = 1/(1-\alpha e^{-i\omega})$.

How we can plot the magnitude response

Eliminate all negative complex exponentials: $H(\omega) = e^{i\omega}/(e^{i\omega} - \alpha)$. $\alpha$ is inside the unit circle; $e^{i\omega}$ is a point on the unit circle, so we can represent each with a vector.

Consider graphically.

EE 120: Signals and Systems

January 26, 2012.

numerator is 1, denominator is length of vector $e^{i\omega} - \alpha$. Task: plot ratio as $\omega$ varies.

Extreme values: $1/(1-\alpha)$ at $\omega = 0$, $1/(1+\alpha)$ at $\omega = \pm\pi$.

frequency response curve should be monotonically increasing from $-\pi$ to 0 and decreasing from 0 to $\pi$. [ talk about inflection points, concavity, etc. Second derivatives. ]

Low-pass filter for $0<\alpha<1$. If we understand the geometry for this particular case, we can answer design-oriented questions. One question is, what if I want to sharpen the peak of this low-pass filter?

• Answer: have $\alpha$ approach 1 (but still keep it inside the unit circle). Can't get too close -- the real world has noise. Also, we're in trouble if $\alpha$ jumps onto or outside of the unit circle. An algebraic argument is available as well.

Can I make a high-pass filter out of this?

• Yes. Take $\alpha$ to $-1 < \alpha < 0$.

• By taking $\alpha$ to be negative, we have the equivalent of a phase shift by $\pi$.

Sharper high-pass filter?

• Bring $\alpha$ closer to -1.
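The geometric picture can be sanity-checked numerically; a sketch with illustrative $\alpha$ values (NumPy assumed):

```python
import numpy as np

# Sketch: magnitude response of H(w) = 1/(1 - a e^{-iw}).
# For 0 < a < 1 the peak is at w = 0 with value 1/(1-a);
# the minimum at w = pi is 1/(1+a).
def Hmag(w, a):
    return 1 / np.abs(1 - a * np.exp(-1j * w))

a = 0.5                                           # illustrative value
assert np.isclose(Hmag(0.0, a), 1 / (1 - a))
assert np.isclose(Hmag(np.pi, a), 1 / (1 + a))

w = np.linspace(0, np.pi, 100)
assert np.all(np.diff(Hmag(w, a)) <= 1e-12)       # decreasing on [0, pi]
assert Hmag(0.0, 0.9) > Hmag(0.0, 0.5)            # a -> 1 sharpens the peak

# Negative alpha flips the picture: the peak moves to w = pi (high-pass).
assert np.isclose(Hmag(np.pi, -a), 1 / (1 - a))
```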

What if we want the filter to peak at an arbitrary frequency, say $\omega_0 = \pi/4$?

• Place $\alpha$ along the ray at angle $\theta = \omega_0$ (i.e. give the pole a phase of $\omega_0$).

(peak magnitude: $1/(1-\abs{\alpha})$)

observation: this filter is a complex filter (i.e. one whose impulse response $h(n)$ is complex valued). This has its uses but is not always desirable.

$h(n) = \alpha^n u(n) = R^n e^{i\omega_0n}u(n)$, where $\alpha = Re^{i\omega_0}$

$g(n) = h(n)e^{i\omega_0n} \implies G(\omega) = H(\omega -\omega _0)$

Example:

LCCDE: linear constant coefficient difference equation

$y(n) = \alpha y(n-N) + x(n)$. N is a positive integer, not necessarily 1. $y(-N) = \dots = y(-1) = 0$

$(\abs{\alpha} < 1)$

$$h_{N}(n) = \alpha h_{N}(n-N) + \delta(n) \implies h_N(n) = \begin{cases}\alpha^{n/N} & n \ge 0 \text{ and } N \mid n \\ 0 & \text{otherwise}\end{cases} \\ H_{N}(\omega) = \sum_{k=0}^\infty\alpha^{k}e^{-i\omega Nk} = \sum_{k=0}^\infty(\alpha e^{-i\omega N})^k = \frac{1}{1-\alpha e^{-i\omega N}} \\ \abs{H_{N}(\omega)} = \abs{\frac{e^{i\omega N}}{e^{i\omega N} - \alpha}}$$

Substitute $\omega' \equiv \omega N$, then use a change of variables (after plotting) to recover the initial axis and relabel it and whatnot.

Or use some arbitrary argument that it's a contracted version by going back to $H(\omega)$. Roughly equivalent either way.

Graph of $\abs{H(2\omega)}$, for $0<\alpha <1$.

(No more graphs.)

Comb filter. (picks out frequencies at multiples of $2\pi/N$)
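A numerical sketch of the comb filter's peaks and valleys (illustrative $N = 4$, $\alpha = 0.9$; NumPy assumed):

```python
import numpy as np

# Sketch: comb filter H_N(w) = 1/(1 - a e^{-iwN}) peaks where e^{-iwN} = 1,
# i.e. at w = 2*pi*k/N, and dips halfway between, where e^{-iwN} = -1.
N, a = 4, 0.9                                     # illustrative values
Hmag = lambda w: 1 / np.abs(1 - a * np.exp(-1j * w * N))

peaks = 2 * np.pi * np.arange(N) / N
assert np.allclose(Hmag(peaks), 1 / (1 - a))              # maxima
assert np.allclose(Hmag(peaks + np.pi / N), 1 / (1 + a))  # minima between
```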

design-oriented analysis. layers like an onion. they fit together very well.

Analog system

$g(t) = e^{-\alpha t}u(t), \Re\{\alpha\} > 0$

$G(\omega) = \int g(t)e^{-i\omega t}dt = \frac{1}{\alpha + i\omega} = \frac{1}{i\omega - (-\alpha)}$

$\abs{G(\omega)} = \frac{1}{\sqrt{\omega ^2 + \alpha ^2}}$ (for real $\alpha$)

EE 120: Signals and Systems

January 31, 2012.

Homework should be out today. Yes, there are homeworks in this class.

analog system procedure for figuring out frequency response of systems.

Fourier analysis today. Some of this will be review, but we'll dive more deeply into the linear algebra in this class.

• Fourier Analysis "A way of decomposing signals into their constituent frequencies. Kind of like the way a prism splits light into its components. Fourier analysis gives us the tools we need to analyze signals."
• Periodic [ in time domain. ]
• Discrete time, discrete time Fourier series.
• Continuous time, continuous time Fourier series.
• Aperiodic
• Discrete-time Fourier transform.
• Continuous-time Fourier transform

We can put blocks around these. Discrete-time signals are periodic in the frequency domain.

Before I do that, I want to review an abstraction that we use for signals (and the way we look at signals) as vectors.

Periodic DT Signals

$x(n+p) = x(n)\ \forall n$, for some $p \in \{1,2,3,4,...\}$

p is called the period of x. The smallest p for which this is true is called the fundamental period.

With discrete-time signals that are periodic, you always have to find out the period before you can talk about frequencies. $\omega_0\equiv2\pi/p$ is the fundamental frequency.

Let's say we have a signal defined as ...4/2/4/2/4/2...

We can abstract and represent this signal as a cartesian vector. A euclidean vector. Namely, I can go to p-space and draw $\mathbf{x}$. I basically take the values in that one period and stack them up. $x=[4, 2]$. I can ask you a couple of questions: what is the representation of this vector in terms of two canonical vectors $\psi_0=[1,0]$, $\psi_1=[0,1]$? $\ket{x} = 4\ket{\psi_0} + 2\ket{\psi_1}$. We do this via projection. Or a change of basis. $[4,2] = 4[1,0] + 2[0,1]$. Inner products and stuff: the coefficients are $\braket{x}{\psi_0}$ and $\braket{x}{\psi_1}$.

$\braket{x}{\psi_0} = (x_0\psi_0 + x_1\psi_1)^T\psi_0 = x_0(\psi_0)^T\psi_0 + x_1(\psi_1)^T\psi_0 = x_0(\psi_0)^T\psi_0 + 0$.

$x^T\psi_0 = x_0\psi_0^T\psi_0$

$x_0 = x^T\psi_0/(\psi_0^T\psi_0)$

Just use inner product with normalized basis vectors.

In this case, it was easy to find out what the coefficients were. I can repose the question by changing the $\psi_k$. I'm going to rotate the basis.

Basically, a non-normalized "sign" basis: $\psi_0 = [1,1]$, $\psi_1 = [1,-1]$. The coefficients are $[3, 1]$, since $[4,2] = 3[1,1] + 1[1,-1]$.

[ talk about how we only need two orthogonal vectors, since our fundamental period is 2. Don't need a third vector. ]

Now, there's something special about these two frequencies we found.

Harmonics. Integer multiples of the fundamentals. Number of terms equals period. Certainly what this toy example is indicating.

Now the question is, how do we find these coefficients? Oh wait, we've already solved that problem.

INTERMISSION

I'm going to recast what we just did in matrix-vector form. We said that x as a vector can be represented as a linear combination of $\psi_0$ and $\psi_1$. I chose the period over the interval 0 to 1. I can do the same thing with $\psi_0$ and $\psi_1$.

This is just the matrix for the FT. I have $x = x_0\psi_0 + x_1\psi_1 = F[x_0,x_1]$, where $F = \begin{bmatrix}1&1\\1&-1\end{bmatrix} = [\psi_0, \psi_1]$. This is a 2x2 matrix.

I can write this as $x = \Psi X$. Solve for X: left-multiply by the inverse Fourier transform matrix. $F^{-1} = \frac{1}{p}F^\dagger$.
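The matrix-vector view can be checked numerically for any $p$; a sketch with $p = 4$ and an arbitrary illustrative signal (NumPy assumed):

```python
import numpy as np

# Sketch: the p x p Fourier matrix F with columns psi_k(n) = e^{i k w0 n},
# w0 = 2*pi/p, satisfies F^{-1} = (1/p) F^dagger (conjugate transpose).
p = 4                                               # illustrative period
n, k = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
F = np.exp(1j * 2 * np.pi * k * n / p)              # F[n, k] = psi_k(n)

assert np.allclose(F.conj().T @ F, p * np.eye(p))   # column orthogonality

x = np.array([4.0, 2.0, 1.0, 3.0])                  # arbitrary signal
X = (F.conj().T @ x) / p                            # analysis: X = (1/p) F† x
assert np.allclose(F @ X, x)                        # synthesis recovers x
```

For $p = 2$ this $F$ reduces to $\begin{bmatrix}1&1\\1&-1\end{bmatrix}$, matching the matrix in the text.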

INTERMISSION

Once you have complex numbers, you can no longer use the plain dot product. We now need the inner product $\braket{a}{b} = a^Tb^*$. It fails otherwise, since without the conjugate, $\braket{x}{x} \neq \abs{x}^2$.

$x$ is $p$-periodic.

$\vec{x} = [x(0), ... , x(p-1)]$. I have $\psi_k \perp \psi_l$ for $k \neq l$, where $\braket{\psi_k}{\psi_l} = p\cdot\delta_{kl}$.

Proof of orthogonality of $\psi_k$, $\psi_l$, $k \neq l$. Geometric sum, not very interesting.

$x$ is the linear combination of our $\psi_k$.

$\braket{x}{\psi_l} = X_l\braket{\psi_l}{\psi_l} \implies X_l = \frac{1}{p} \braket{x}{\psi_l}$.

synthesis equation, analysis equation:

$x(n) = \sum_k X_k e^{ik\omega_0n}$ $X_k = \frac{1}{p}\sum_n x(n) e^{-ik\omega_0n}$
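A sketch computing the DTFS of the ...4, 2, 4, 2... example directly from the analysis equation (NumPy assumed):

```python
import numpy as np

# Sketch: DTFS of the period-2 signal ..., 4, 2, 4, 2, ... (p = 2, w0 = pi).
# Analysis: X_k = (1/p) sum_n x(n) e^{-i k w0 n}; expect X = [3, 1],
# matching the coefficients found by projection in the text.
p = 2
x = np.array([4.0, 2.0])
w0 = 2 * np.pi / p
X = np.array([np.sum(x * np.exp(-1j * k * w0 * np.arange(p))) / p
              for k in range(p)])
assert np.allclose(X, [3.0, 1.0])

# Synthesis recovers the samples:
xr = np.array([np.sum(X * np.exp(1j * np.arange(p) * w0 * n))
               for n in range(p)])
assert np.allclose(xr, x)
```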

EE 120: Signals and Systems

February 2, 2012.

DTFS

$x(n) = \sum_k X_k e^{ik\omega_0n}$. The complex exponentials form a basis for $\mathbb{C}^p$. The way we define inner products for signals is exactly as you'd imagine: $\sum_n \psi_k(n) \psi^*_l(n)$. Frequency periodicity: $\psi_{k+p} \equiv \psi_k$. Only in the discrete-time periodic case are our functions guaranteed to be periodic in both time and frequency.

$x(n) = \cos(n)$ is not periodic in discrete time $\implies$ no DTFS.

Evidently there will be a quiz on Tue, since pset is due on Wed.

Consider what it means to send discrete-time periodic signals through LTI systems.

EE 120: Signals and Systems

February 7, 2012.

CTFS:

periodic if $x(t+p) = x(t)$ for some $p \in \mathbb{R}$, $p > 0$. The smallest positive $p$ is called the fundamental period. Fundamental frequency $\omega_0 \equiv \frac{2\pi}{p}$.

Can also write $p = 2\pi/\omega_0$. In discrete-time, writing it this way was dangerous, since depending on $\omega_0$, $p$ may or may not be an integer.

For discrete-time signals, the constant signal was periodic with $p=1$: $x(n) = 1\ \forall n \in \mathbb{Z}$.

What about the signal $x(t) = 1 \forall t \in R$? The fundamental period is undefined: any $p>0$ can serve as a period.

So there are subtleties in each story. In the discrete-time story, there were some sinusoids that looked periodic but weren't, and the constant signal has no fundamental period in continuous-time.

We're going to jump immediately into the Fourier series. He said you can decompose any continuous periodic signal as a linear combination of complex exponentials that are related to each other by virtue of being at frequencies that are integer multiples of the fundamental frequency.

$x(t) = \sum X_k e^{ik\omega_0t} = \sum X_k\psi_k$.

We know the procedure for finding the kth coefficient. Before we go there, there's something you ought to pay attention to in this expression. When I draw a typical periodic signal, when I look at one period, how many points do I have? Uncountably infinite. Also range is potentially a set of uncountably many values. So this is a bold claim: we can represent these with a countable number of eigenfunctions.

Unlike the discrete-time story, this equality will not always be a pointwise equality. There are different gradations of convergence. Whenever you have an infinite sum, you have to worry about convergence in the back of your head. For well-behaved signals, the left and the right converge, and this is true for every t. The less well-behaved signals will no longer hold pointwise. Strange things happen, e.g. Gibbs phenomenon.

You'll have a reasonable understanding of Fourier series. We're not going to worry too much about convergence in this class. The only time it doesn't arise is in the discrete-time Fourier series.

Claim: Fourier analysis works. One path we can take is for you to take my word for it. Or we could prove it. Since last time was hilarious, we're going to take this for granted, for now. Assume orthogonality of $\psi_k$ for some definition of the inner product. $\psi_k = \exp(ik\omega_0 t)$.

I am now going to determine $X_l$. Take the inner product of $x$ with $\psi_l$.

The procedure is exactly the same. We're just swapping out our definition of inner product. Exploit the orthogonality.

For discrete-time p-periodic signals, we defined the inner product as $\braket{f}{g} = \sum fg^*$. Guess what the continuous-time inner product is for $p$-periodic signal!

And if they're non-periodic, we'll do the same, but over all time.

Show that our eigenfunctions are orthogonal.

Synthesis equation: $x(t) = \sum_k X_k \exp(ik\omega_0 t)$ Analysis equation: $X_k = \frac{1}{p}\int_{\avg{p}} x(t)\exp(-ik\omega_0 t)dt$.

How do I show that $\braket{\psi_k}{\psi_l} = 0 (k\neq l)$? Just evaluate the integral. We get an exponential with period p, integrated over a period p? Looks like 0 to me.

Example: $x(t) = \cos(\pi t/3) = (\exp(i\pi t/3) + \exp(-i\pi t/3))/2$, so $X_1 = X_{-1} = \frac{1}{2}$ (here $\omega_0 = \pi/3$).

$q(t) = \sum_\ell\delta(t-\ell p) = \sum_k Q_k \exp(ik\omega_0t)$ $\delta(t) = \frac{du(t)}{dt}$

Poisson's identity. $\sum\delta(t-\ell p) = \frac{1}{p}\sum\exp(ik\omega_0t)$

$R_k = \frac{1}{p}\int_{\avg{p}} r(t)\exp(-ik\omega_0t)dt$. For a pulse train of width $\Delta$ and height $\frac{1}{\Delta}$: $R_k = \frac{1}{p}$ if $k=0$, else $\frac{\sin(k\omega_0\Delta/2)}{k\pi\Delta}$ (which $\to \frac{1}{p}$ as $\Delta \to 0$).

What happens if I want to approximate a signal that has finite energy? What should the coefficients $\alpha_k$ be?

orthogonal projection! Least squares!

EE 120: Signals and Systems

February 9, 2012.

Discrete-time Fourier transform

Discrete aperiodic signals. The DTFT can also handle discrete-time periodic signals, provided that we make use of Dirac deltas in the frequency domain. You all should remember the frequency response of a discrete-time LTI system: $H(\omega) = \sum_\ell h(\ell)e^{-i\omega\ell}$ [DTFT analysis equation]. It turns out this is the DTFT of the impulse response.

How to go from frequency domain to time domain, i.e. how to derive impulse response from frequency response.

Our goal is to determine h from H. That's what we want to do. And I'm going to do this in two ways, both of which rely on things we've already done. So there's nothing new that I'm going to go through.

Recall: With discrete LTI systems: frequency response has fundamental period $2\pi$. $H(\omega+2\pi)=H(\omega)$. We already possess the mathematical machinery to handle periodic continuous variables (i.e. we've seen this in the CTFS). Then, our continuous variable was time. Now, it's frequency.

Recall CTFS. $x(t)$. $p$: fundamental period. $\omega_0 = \frac{2\pi}{p}$: fundamental frequency. We said we can express x as a linear combination of complex exponentials that are harmonics (integer multiples) of the fundamental frequency. We had an expression for these: $X_k = \frac{1}{p}\int x(t)e^{-ik\omega_0 t}dt$.

I'm going to draw parallels now with the current scenario: $x\to H$. $t\to\omega$. $p\to2\pi$. $\omega_0=\frac{2\pi}{p}\to\Omega_0 = \frac{2\pi}{2\pi}=1$. $\therefore X_k = h(-k)$.

Let $\ell\equiv-k$ in the analysis equation: $H(\omega)=\sum_k h(-k)e^{ik\Omega_0\omega} = \sum_k h(-k)e^{ik\omega}$.

Our coefficients are the $h(-k) = \frac{1}{2\pi} \int_{\avg{2\pi}} H(\omega) e^{-ik\omega} d\omega$. This is exactly parallel to the previous expression.

$h(\ell) = \frac{1}{2\pi}\int H(\omega)e^{i\ell\omega}d\omega$. Synthesis equation.

DTFT equations: $H(\omega) = \sum h(n)e^{-i\omega n}$. $h(n) = \frac{1}{2\pi}\int H(\omega)e^{i\omega n}d\omega$.
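A numerical sketch of this DTFT pair for the familiar $h(n) = \alpha^n u(n)$ (illustrative $\alpha = 0.6$; the synthesis integral is approximated by a Riemann sum over one period; NumPy assumed):

```python
import numpy as np

# Sketch: for h(n) = a^n u(n), |a| < 1, the DTFT is H(w) = 1/(1 - a e^{-iw}).
# Check the synthesis equation h(n) = (1/2pi) int H(w) e^{iwn} dw by
# sampling one period [-pi, pi) on M equally spaced points.
a, M = 0.6, 4096                                  # illustrative values
w = -np.pi + 2 * np.pi * np.arange(M) / M
H = 1 / (1 - a * np.exp(-1j * w))

for n in range(5):
    # (1/2pi) * dw * sum, with dw = 2pi/M, reduces to sum/M:
    h_n = (H * np.exp(1j * w * n)).sum().real / M
    assert abs(h_n - a ** n) < 1e-9
```

For a periodic integrand, the equally spaced Riemann sum is extremely accurate, which is why such a tight tolerance works here.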

$h(n) = \int \frac{d\omega H(\omega)}{2\pi}e^{i\omega n}$. Linear combination of complex exponentials. $H(\omega)$ is a measure of the contribution of frequency $\omega$ to the function $h$.

For now, we're working with the universe of functions of the continuous variable $\omega$ which happen to be periodic with period $2\pi$.

Ideal discrete-time Low-pass Filter

Impulse response $h(n) = \frac{1}{2\pi}\int H(\omega)e^{i\omega n}d\omega = \frac{1}{2\pi}\int_{-A}^{A} B e^{i\omega n}d\omega = \frac{B}{\pi n} \sin(An)$ (passband gain $B$ for $\abs{\omega} < A$).

EE 120: Signals and Systems

February 16, 2012.

Discrete-Time Fourier Transform: continued

Recall the ideal low-pass filter. In the frequency domain, it had some discontinuities. In the time domain, its impulse response was NOT absolutely summable. However, it is square-summable. We have names for signals that behave in these particular ways.

$\ell_1$ signals: absolutely summable. $\sum \abs{x(n)} < \infty$ (abuse of notation, but useful at that).

$\ell_2$ signals: Square-summable. $\sum \abs{x(n)}^2 < \infty$. Finite energy.

For discrete-time, if a signal is $\ell_1$, it is $\ell_2$. Converse not true.

$\sum_n\abs{h(n)} = \sum_{n=0}^{\infty} \abs{\alpha}^n = \frac{1}{1-\abs{\alpha}} < \infty$. $h \in \ell_1 \implies \abs{H(\omega)}$ finite and $H$ continuous (here, in fact, smooth).

If $x \notin \ell_1$ but $x \in \ell_2$, you run into a bit of a problem: you cannot use the analysis equation directly, since the summation will not converge. But we can define the Fourier transform in that case to be the limit as $N \to \infty$ of $\sum_{n=-N}^{N} x(n)e^{-i\omega n}$. (For $\ell_1$ signals, by contrast, the DTFT is continuous, though not necessarily smooth / infinitely differentiable.)

Turns out we get convergence in energy of the two signals. It's just for you to bear in mind, since you have an infinite sum. The most well-behaved functions are $\ell_1$. They have nice DTFTs. Next level up in terms of misbehavior is $\ell_2$. Fourier transforms cannot be obtained through analysis equation, but can be reverse-engineered or otherwise. DTFTs have discontinuities.

We have yet another level up in terms of misbehavior: signals of "slow growth" (including zero growth). Examples: $x(n) = 1$, $x(n) = n$, $x(n) = e^{i\omega_0 n}$. Basically, these are signals that grow no faster than polynomially in time (and signals that neither grow nor decay). Notice $\{\text{slow growth}\} \cap (\ell_1 \cup \ell_2) = \emptyset$. Cannot use the analysis equation, but can use the synthesis equation. And their DTFTs have Dirac deltas.

Example:

$x(n) = e^{i\omega_0n}$. $X(\omega)$? $\alpha\delta(\omega-\omega_0)$. Strictly speaking, we'd have to write the spectrum as a sum: we have a delta every $2\pi$, starting at $\omega_0$. But that's not particularly interesting, since that looks messy. I'm just interested in the interval here, as long as we know that it's $2\pi$-periodic.

To get our $\alpha$, we can and will use our synthesis equation: $\frac{\alpha}{2\pi}\int \delta(\omega-\omega_0) e^{i\omega n}d\omega = \frac{\alpha}{2\pi} e^{i\omega_0n} = e^{i\omega_0n} \implies \alpha \equiv 2\pi$.

Similarly, for $x(n) = \cos(\omega_0n)$: $X(\omega) = \pi\delta(\omega-\omega_0) + \pi\delta(\omega+\omega_0)$ ($2\pi$-periodic, again)

For signals that belong to none of the previous categories, we have to learn a new transform -- Z-transform! Z-transform! Coming in March, to theaters near you.

Why do $\ell_1$ signals have finite DTFTs?

Triangle inequality, following directly from the analysis equation: $\abs{H(\omega)} \le \sum_n \abs{h(n)} < \infty$.

INTERMISSION

DTFT Properties

Time-shifting

If I have a signal that's Fourier-transformed? What do I get in the frequency domain if I shift the original signal?

We get a phase shift: if we shift x(n) by N, our new $X(\omega)$ is multiplied by $e^{-i\omega N}$.
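A numerical spot-check of the time-shifting property on a short illustrative signal (NumPy assumed):

```python
import numpy as np

# Sketch: shifting x(n) right by N multiplies the DTFT by e^{-iwN}.
def dtft(x, n0, w):
    # X(w) = sum_n x(n) e^{-iwn}, for a finite signal starting at index n0
    n = n0 + np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

x = np.array([1.0, -2.0, 3.0])        # arbitrary illustrative signal
w = np.linspace(-np.pi, np.pi, 64)
N = 4                                  # shift amount

X = dtft(x, 0, w)
X_shift = dtft(x, N, w)                # same samples, delayed by N
assert np.allclose(X_shift, np.exp(-1j * w * N) * X)
```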

Example / curveball

$H(\omega) = e^{-i\omega/2}$, which yields $h(n) = \frac{(-1)^{n+1}}{\pi(n-\frac{1}{2})}$

Remember: convolution and multiplication are duals over the time and frequency domains.

Not too long from now, we are going to start studying the sampling theorem. One way to make sense of a half-sample delay is to map $x(n)$ to a sequence of samples in continuous time, spaced $T$ seconds apart ($T$ is the sampling period). With an interpolation scheme that we'll learn about, I can shift left or right by an arbitrary amount. So in continuous time, I take this signal and shift it to the right by $T/2$ seconds, take samples spaced $T$ apart, then convert it back into Kronecker deltas.

This filter is called, for the reason described, a half-sample delay. You essentially have to interpolate between the frequencies in discrete time and sample the halfway points.

What happens if I multiply in the time domain by a complex exponential (modulation)? The frequency domain is convolved with said exponential.

Using the analysis equation: for $q(n) = e^{i\omega_0 n}x(n)$, we have $Q(\omega) = \sum_n x(n) e^{-i (\omega - \omega_0) n} = X(\omega-\omega_0)$.

Multiplying in the time domain by $e^{i\omega_0 n}$ results in a frequency shift.

Multiplying by a complex exponential yields a shift in the other domain.

There is one property that is distinct, and that is the modulation property (multiplication property).

$q(n) = x(n)y(n) \iff Q(\omega)$. Not an ordinary convolution.

$Q(\omega) = \sum x(n)y(n)e^{-i\omega n}$

In particular, for $x(n)$, use synthesis equation to replace $x(n)$ with $X(\xi)$ (where $\xi$ is a dummy variable).

$$Q(\omega) = \frac{1}{2\pi} \sum_n\int_\avg{2\pi} X(\xi)e^{i\xi n}d\xi\, y(n) e^{-i\omega n} = \frac{1}{2\pi} \int_\avg{2\pi}d\xi\, X(\xi)\sum_n y(n)e^{-i(\omega-\xi)n} = \frac{1}{2\pi} \int_\avg{2\pi}d\xi\, X(\xi)Y(\omega-\xi)$$

What's special about this? It's only over a value of $2\pi$. This is a very special kind of convolution. We call this a circular convolution of $x$ and $y$. So $x(n)y(n) \iff \frac{1}{2\pi} (X*Y)(\omega)$.

Think of two functions plotted in terms of $\lambda$. One function, let's say, is a square pulse wave thingy.

easy way of carrying out circular convolution: take one of the two functions, keep one replica (e.g. the one from $-\pi$ to $\pi$), and then do a regular convolution with the other function. advice: choose the flipped + shifted signal for elimination of replicas.
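On a finite frequency grid, the multiplication property turns into the DFT identity $\mathrm{DFT}(x \cdot y) = \frac{1}{M}\,(\mathrm{DFT}(x) \circledast \mathrm{DFT}(y))$, where $\circledast$ is circular convolution over the $M$ grid points (the discrete stand-in for $\frac{1}{2\pi}$ times the $2\pi$-periodic convolution). A sketch with random illustrative signals (NumPy assumed):

```python
import numpy as np

# Sketch: circular convolution in frequency <-> multiplication in time,
# checked on an M-point grid via the DFT.
def circular_conv(A, B):
    M = len(A)
    return np.array([np.sum(A * B[(k - np.arange(M)) % M]) for k in range(M)])

rng = np.random.default_rng(1)
M = 16
x, y = rng.standard_normal(M), rng.standard_normal(M)
X, Y = np.fft.fft(x), np.fft.fft(y)

lhs = np.fft.fft(x * y)              # spectrum of the pointwise product
rhs = circular_conv(X, Y) / M        # (1/M) circular convolution of spectra
assert np.allclose(lhs, rhs)
```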

EE 120: Signals and Systems

February 21, 2012.

Circular convolution wrap-up

$h(n) = \frac{B}{\pi n} \sin(An) \fourier H(\omega) = B[u(\omega+A)-u(\omega-A)]$ for $\abs{\omega} \le \pi$. (ideal low-pass filter and its impulse response)

What if I had a filter whose frequency response $G(\omega)$ is basically $H$ (circularly) convolved with itself (with some extra factors)? Description: a triangle ramping from $0$ at $\omega = -2A$ up to $2AB^2$ at $\omega = 0$, then back down to $0$ at $\omega = 2A$.

• Recall: To perform circular convolution:
• Keep one function fixed (in terms of a dummy frequency variable)
• Keep only one period of the other signal.
• Perform regular convolution.

• Example: $(Q * R)(\omega)$.

• Plot on a dummy-variable axis $Q(\lambda)$ with its replicas.
• Plot R only in one period (zero out all replicas).
• Flip \& slide $R$: $R(\omega-\lambda)$ as a function of $\lambda$.

What's the brute-force way to find the impulse response? Synthesis equation.

Inspection method? It looks like it's $2\pi h^2(n) = \frac{2B^2}{\pi n^2}\sin^2(An)$ (circular convolution must yield the $2\pi$ scalar)

• Recall: $q(n)r(n) \fourier \frac{1}{2\pi}(Q*R)(\omega)$.

• Note: this filter is BIBO stable, and its Fourier transform is continuous.

Now, we can spend a lot of time on the various properties of the DTFT (what happens when you time-reverse a signal, shift, multiplication properties, modulation, etc.), and this'll probably show up in a problem set.

As the last example on the DTFT: $f(n) = e^{i\omega_0n}h(n)$. Let one of these be a sinusoid. What do you do?

It's effectively shifted by $\omega_0$, after the math clears.

Intermission

Tentative midterm date: March 13.

Continuous-Time Fourier Transform

$X(\omega)$: roughly the frequency response in continuous-time.

$$X(\omega) = \infint x(t)e^{-i\omega t}dt \\ x(t) = \frac{1}{2\pi} \int X(\omega)e^{i\omega t}d\omega$$

Slight shorthand: $X = \infint x(\tau)\phi_\tau d\tau$, where $\phi_\tau(\omega) \defequals e^{-i\omega \tau}$.

To determine $x(t)$: (assume for now that $\braket{\phi_\tau}{\phi_t} = 2\pi\delta(\tau - t)$)

$\braket{X}{\phi_t} \defequals \infint X(\omega) \phi_t^*(\omega) d\omega$, so $\braket{X}{\phi_t} = \int x(\tau)\braket{\phi_\tau}{\phi_t}d\tau$.

NB: Inner products are in the frequency domain.

• Recall: In the discrete-time case, we found that for $\braket{\phi_m}{\phi_n}$, we got $2\pi\delta_{mn}$. Turns out, we have exactly the same relationship in the continuous-time case.
• In the continuous-time case, we have $2\pi\delta(\tau-t)$.
• Recall Poisson's identity: $\sum \delta(t-\ell p) = \frac{1}{p}\sum \exp(ik\omega_0t)$.

Now we are desperate to have $\infint e^{i\omega t}d\omega = 2\pi\delta(t)$. Which means we must have $\delta(t) \equiv \frac{1}{2\pi} \int e^{i\omega t}d\omega$. A new definition for $\delta(t)$. Mutual orthogonality for the CTFT takes on this nasty form.

Plausibility argument: The right-hand side is proportional to $\int e^{i\omega t}d\omega = \int \cos(\omega t)d\omega + i\int \sin(\omega t)d\omega$. The imaginary part is guaranteed to be 0 (odd integrand). By the same argument, the cosine portion is 0 if $t \neq 0$. Otherwise, we're integrating 1 over all reals, so we have $\infty$. The exponentials constructively interfere at $t = 0$ but destructively interfere at $t \neq 0$. Therefore the right-hand side must be proportional to $\delta(t)$. The claim is that the constant of proportionality is $2\pi$. The hard part is to recognize this is actually a Dirac delta.

Intermission

Steam coming out of our heads, evidently. Remember, this is the only barrier to studying the CTFT.

One way to show $\delta(t) \equiv \frac{1}{2\pi} \int e^{i\omega t}d\omega$: this method you will see, hear, etc. in engineering contexts for figuring out something that doesn't converge, since Riemann integrals don't really work here. $\delta_\Delta(t) = \frac{1}{2\pi} \int e^{i\omega t}e^{-\Delta\abs{\omega}}d\omega$. The extra factor is supposed to tame the function such that it is now integrable (take $\Delta > 0$). Multiply what you've got with something that makes the product converge, then take the limit as said function goes to 1. In other words, I am perturbing the problem slightly to make it stable and taking the limit as the perturbation goes to 0.

NB: half-width, half-maximum: where we're at half the width of our function and also half the maximum value.

Perturbation theory! $\alpha \defequals$ perturbation parameter, chosen such that the area is 1. Result of the integral: $\delta_\Delta(t) = \frac{\alpha\Delta}{\pi} \frac{1}{\Delta^2 + t^2}$. Turns out $\alpha=1$. This is a Cauchy probability density function.

(names for the Dirac delta: generalized function, distribution. Look up the theory of distributions)

EE 120: Signals and Systems

February 23, 2012.

Continuous-time Fourier transform

Recall:

$$X(\omega) = \int_{-\infty}^\infty x(t)e^{-i\omega t}dt \\ x(t) = \frac{1}{2\pi}\int_{-\infty}^\infty X(\omega)e^{i\omega t}d\omega$$

We developed the inverse Fourier transform using a bizarre identity for which we established a way to derive it (i.e. perturbation theory).

Example: Ideal low-pass filter. In the continuous-time story, you do not assume any periodicity. Figure out the impulse response of this filter.

$$h(t) = \frac{1}{2\pi}\int_{-A}^A Be^{i\omega t}d\omega = \frac{B}{\pi t}\sin(At)$$

(valid for $t = 0$ as well, in the limit)

Okay. Now, how would you plot this? Maximum of $\frac{AB}{\pi}$ at $t=0$, zeroes at $\frac{k\pi}{A}$ for all nonzero $k \in \mathbb{Z}$.

$$H(\omega)\bigg|_{\omega=0} = \int h(t) e^{-i\omega t}dt\bigg|_{\omega=0} = \int h(t)dt = H(0) = B$$

The largest triangle has the same area as the integral over all space.

Another question. Let $B=1$. What is $\lim_{A\to\infty} h(t)$? Yet another Dirac delta (yet another Dirac delta definition).
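The perturbed delta $\delta_\Delta(t) = \frac{\Delta}{\pi(\Delta^2 + t^2)}$ can be sanity-checked numerically: unit area for every $\Delta > 0$, concentrating at $t = 0$ as $\Delta \to 0$ (NumPy assumed; the grid and $\Delta$ values are illustrative):

```python
import numpy as np

# Sketch: the Cauchy density d_D(t) = D / (pi (D^2 + t^2)) -- the perturbed
# delta with alpha = 1 -- integrates to 1 and narrows as D shrinks.
def d(t, D):
    return D / (np.pi * (D * D + t * t))

t = np.linspace(-1000.0, 1000.0, 400001)
dt = t[1] - t[0]
for D in (1.0, 0.1):
    area = np.sum(d(t, D)) * dt
    assert abs(area - 1.0) < 1e-2        # heavy tails beyond |t|=1000 are tiny

assert d(0.0, 0.01) > d(0.0, 0.1)        # taller and narrower as D -> 0
```

Note the slow $1/t^2$ tails: the area converges, but only just, which is why the tolerance above is loose.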
Take$g(t)$as a block. What is$G(t)$? It is a sinc function. If you dilate in the time domain, you squish the frequency domain, and vice versa. $$\delta(t) \fourier 1 \\ 1 \fourier 2\pi\delta(\omega) \\ e^{i\omega_0t} \fourier 2\pi\delta(\omega-\omega_0) \\ \cos(\omega_0t) \fourier \pi(\delta(\omega-\omega_0) + \delta(\omega+\omega_0)) \\ \sin(\omega_0t) \fourier i\pi(\delta(\omega+\omega_0) - \delta(\omega-\omega_0)) \\ x(t) \fourier X(\omega) \\ x(t-T) \fourier e^{-i\omega T}X(\omega) \\ e^{i\omega_0t}x(t) \fourier X(\omega - \omega_0)$$ Multiplication in Time $$h(t) = f(t)g(t) \fourier H(\omega) \\ f(t) \fourier F(\omega) \\ g(t) \fourier G(\omega)$$ Since transforms are not periodic, we have a regular convolution with an extra$1/2\pi$term.$H(\omega) = \frac{1}{2\pi}(F * G)(\omega)$. Reasoning for convolution leading to multiplication in frequency domain: cascade two systems, choose input to be complex exponential. This is an eigenfunction, so we have the output response being$F(\omega)G(\omega)$. Now choose input to be an impulse. Output is$(f*g)(t)$. INTERMISSION Amplitude modulation I'm going to start what looks like a new phase of this course, but it's hardly anything new and unpredictable. If you understand the CTFT and its fundamental properties (what we've talked about so far), you should have no problem understanding amplitude modulation. One of the things you've experienced by now is that the atmosphere is quite unforgiving to audible frequencies. Your voice only transmits over some short distance. Obvious solution is to shift spectrum to a much higher frequency range. The way we do it is to multiply by either a complex exponential or a sine/cosine (called a carrier signal). Carries signal of interest. So$Y(\omega) = \frac{1}{2\pi}(X * C)(\omega) = X(\omega - \omega_0)$. What happens is that$\omega_0$is usually large enough to make transmission through the atmosphere possible. 
Ignoring all degradation going through the atmosphere: to retrieve the original signal, multiply by a complex exponential at the negative of the carrier frequency. Frequencies about zero: baseband frequencies. This shifts the spectrum of the received signal back to baseband, along with a copy at $-2\omega_0$ which would completely garble the signal if kept. Apply a low-pass filter at the end to capture the baseband spectrum, and scale by two to recover the original signal, complete with amplitude.

Final observations: we first of all made the assumption that the received signal is the same as the transmitted signal. There is a whole area of communication theory that deals with deterioration. The other assumption is that the oscillator at the receiver can generate the exact same frequency as the transmitter oscillator, and at the same phase.

EE 120: Signals and Systems

February 28, 2012.

AM Continued

(review of what we just did)

Recall: there are two major assumptions that we made. First of all, no transmission degradation. Second of all, the receiver has the exact same phase and frequency as the transmitter.

Question: what if $\hat{c}(t) = \cos(\omega_0t + \theta)$? (still keeping the assumption that we can somehow match the frequency; new assumption: the phase is constant and not time-varying)

Thoughts: if the phase is off by $\pi/2$, then you lose your signal entirely. $r(t) = \frac{1}{2}\cos(\theta)x(t) + \frac{1}{2} x(t)\cos(2\omega_0t + \theta)$. If $\theta$ is relatively small (compared to $\pi/2$), then we are safe, since we recover our original signal after low-pass filtering. However, we lose our signal as $\cos\theta\to0$.

Note: MT2 date: Tues, 13 Mar 2012.

Also: when $\theta=0$, this is referred to as synchronous demodulation (transmitter and receiver in sync).

Instead of sending $y(t)$ into a low-pass filter, what we can do is send it through a diode followed by a parallel RC circuit. This is one way to do asynchronous demodulation. This is technically cheating, since we assume that our signal is entirely positive. However, we can simply apply a DC offset if we know the bounds on our values.
Suppose$|x(t)| \le A \forall t$. Then, transmit$\hat{x}(t) \equiv x(t) + A$. Why is this method of transmitting$\hat{x}(t)$called AM with large carrier? We're actually also transmitting the carrier:$\hat{x}(t) \cos(\omega_0 t) = x(t)\cos(\omega_0 t) =A\cos(\omega_0 t)$. If$\abs{x(t)} \le K$, we want$K < A$. In fact,$\frac{K}{A}$is referred to as the percent modulation or modulation index. One thing you should know is that there is redundancy in information double side-band suppressed carrier. Frequency Division Multiplexing Each player is allocated a piece of real estate along the frequency axis. Quadrature multiplexing The way we can do this is by exploiting the orthogonality of cosine and sine. What's being transmitted is the sum of the two. EE 120: Signals and Systems March 1, 2012. Sampling of CT Signals Now we have to differentiate between frequency in radians per second ($\omega$), and frequency in radians per sample ($\Omega$). Our sampling interval we will represent with$T_s$, our sampling period. There are associated with these boundary blocks ($C \to D$,$D \to C$) periods.$T_r$: reconstruction period.$T_r$and$T_s$may or may not be the same. The new part really is sampling theory. So let's begin. Let's say I have a continuous-time signal$x_c(t)$. We sample periodically.$x_d(0)=x_c(0)$.$x_d(1)=x_c(T_s)$. Basically,$x_d(n)=x_c(nT_s)$. A basic question is whether or not it is possible to reconstruct the original signal? In general, the answer is no. There must be (we hope so, at least) a set of conditions that guarantees recoverability. These actually happen to be sufficient conditions. That is the whole subject of the Sampling theorem, which we will develop. So let me open up this first box, the process of sampling. We can model the process of sampling this signal as one that involves multiplying the original input signal by an impulse train, where the impulses are separated$T_s$second apart. 
Remember: multiplying a function that is locally continuous with an impulse is equivalent to rescaling the impulse. Once you extract $x_q$, there's a block we're not going to worry about that converts Dirac deltas to Kr\"onecker deltas.

Believe it or not, there is nothing new here. What happens to the spectrum of $x_c$ as it is multiplied by the impulse train? The fundamental frequency of the impulse train is $\frac{2\pi}{T_s}$; it's also called, in this context, our sampling frequency $\omega_s$.

Considering the CTFT of periodic signals, we have a bunch of uniform impulses separated by $\omega_s$, each of strength $\omega_s$. What happens when $T_s$ gets smaller? The sampling frequency increases, and so we have stronger (and more separated) impulses as our spectrum. The CTFT of the impulse train is simply $2\pi\sum_k Q_k\delta(\omega-k\omega_s)$.

Multiplication in the time domain is convolution in the frequency domain, so we can consider the replicated triangles. So what happens? In order for these triangles to be recoverable, we need $2A \le \omega_s$, or $A \le \frac{\omega_s}{2}$. $\omega_s$ must be large enough that adjacent triangles do not overlap (and this is, indeed, the crux of the sampling theorem) -- in this case, we can certainly recover our original signal by applying a low-pass filter.

The low-pass filtering is an interpolation operation -- we're interpolating between adjacent values of the signal. (Explanation: you've got your samplings of the signal, so we've actually got a set of impulses. We're applying a low-pass filter to this and filling in the missing portions of the signal. Remember that the impulse response of our ideal low-pass filter is the sinc, and so we're taking a linear combination of a countably infinite set of sincs.)

Whittaker-Nyquist-Kotelnikov-Shannon Sampling Theorem (1915, 1928, 1933, 1949)

Most frequently known as the Shannon sampling theorem. This is the set of sufficient conditions for recovering a signal from a sequence of samples.

• $x_c$ is band-limited.
Namely,$\abs{X_c(\omega)} = 0 (\abs{\omega} > A)$. • Sampling rate$\omega_s$is large enough such that$2A \le \omega_s$There is no universal definition of bandwidth. In some circles, the highest frequency A is called the bandwidth. In other circles, the frequency's footprint 2A is called the bandwidth. Does not really matter. As mentioned, nothing mentioned today is new; we've just used very basic properties of the CTFT of periodic signals and the convolution property. Thoughts What if$\omega_s < 2A$? (by the way,$2A$is called the Nyquist rate). There has been a big push lately. Look up sub-Nyquist in a group of papers. There are many people trying to figure out how to sample at a lower-than-Nyquist rate. So what happens if$\omega_s = 2A$? Our triangles are tangent. In the ideal case, this is still acceptable, but it is impossible to build an ideal low-pass filter, so it doesn't actually matter. Oversampling: you exceed the Nyquist rate by a certain amount to compensate. Past a certain amount of oversampling, it doesn't really mean anything. If$\omega_s < 2A$, our triangles overlap, and so the spectrum is significantly different. Higher frequencies get folded into lower frequencies. This is called aliasing: the artifact of folding of higher frequencies into lower frequencies. Once you have that, you have irrecoverably lost your original signal. In image processing, we have anti-aliasing (our next topic). Notice that if the signal is band-limited, we can make$\omega_s$large enough such that we do not get overlap. If$x_c$is not band-limited, then no matter how high we make$\omega_s$, we're going to have some overlap. There's one thing we can do in that case, but that's a compromise. We call this: Anti-aliasing The idea is we discard frequencies above a certain value. With aliasing, the frequency region is distorted, which is reflected everywhere in the time domain. This is generally not good, unless it's really small and you can ignore it. 
Anti-alias filtering is a preprocessing stage: $x_c(t) \to \text{LPF} \to q(t)$. Namely, apply a low-pass filter (with cutoffs at $-\frac{\omega_s}{2}$, $\frac{\omega_s}{2}$) before sampling. We eliminate the aliasing we know would otherwise happen. The result is that you don't lose quite as much of the high-frequency content, and there is no more aliasing.

So why anti-alias first instead of allowing it to alias, if you lose information either way?

• Anti-aliasing allows for the preservation of more of the original signal. The set of signals that does not produce aliasing with a fixed $\omega_s$ is the set of signals band-limited to $\left(-\frac{\omega_s}{2},\frac{\omega_s}{2}\right)$.

(talk about Parseval's theorem)

Carriage-wheel effect

Aliasing is what causes a phenomenon that some of you may have noticed: when on the highway, if you stare at the wheels of a car passing by, they can seem to be moving much more slowly, in the opposite direction. This used to show up on PhD qualifying exams at MIT. You may also find this described as the carriage-wheel effect: since the frame rate of old movies was 24 frames per second, carriage wheels looked to be moving backward.

Mark a point on the unit circle, rotating at rate $\omega_0$. I strobe this wheel at a rate $\omega_s$ (strobing it is like sampling it). Say $\omega_s$ happens to be $\frac{3}{2} \omega_0$ (so not exceeding the Nyquist rate $2\omega_0$). If I strobe exactly at $\omega_0$, the point looks stationary. The only way to capture the motion properly is to sample at $2\omega_0$ and higher. But I'm not doing that.

The spectrum of the rotating point is a single Dirac impulse of strength $2\pi$ centered at $\omega_0$. The spectrum $Q(\omega)$ of the sampling signal is an impulse train with impulses separated by $\omega_s$. When we convolve the two and apply a low-pass filter, we have just one remaining frequency, at $\omega_0-\omega_s = -\frac{\omega_0}{2}$.

EE 120: Signals and Systems

March 6, 2012.
Sampling Cont'd

We are still in the first of three blocks (where we take a continuous-time signal and create a discrete-time signal). This end-to-end system is effectively a continuous-time system. This kind of processing we refer to (for obvious reasons) as discrete-time processing of continuous-time signals. The opposite is also possible, where you start out with a discrete-time signal, process in continuous time, and spit out a discrete-time signal.

Last time, we opened up the first box ($C \to D$). We didn't even talk about the entire box -- there's still some stuff to discuss. So, consider an impulse train. We then take this through a Dirac $\to$ Kr\"onecker block to produce $x_d(n)$.

Question: how is $X_q(\omega)$ related to $X_d(\Omega)$? Etc.; work with moving between the two coordinates. All of this assumes that there has been no aliasing. For us to have had no aliasing, remember the Nyquist sampling theorem.

Considerations of LTI for $Y_c(\omega) \equiv X_c\parens{\frac{\omega T_r}{T_s}}$: the only way your end-to-end system will have an LTI equivalent is if you fulfill the conditions of the Nyquist sampling theorem (no aliasing), and your reconstruction period is the same as your sampling period. Then you know that your output equals $\frac{G}{T} H_d\parens{\omega T} X_c(\omega)$. The equivalent LTI filter is simply $\frac{G}{T} H_d\parens{\omega T}$.

EE 120: Signals and Systems

March 8, 2012.

We spoke about sampling, the impulse train, and the conditions under which the signal can be recovered -- if it's band-limited, and the sampling frequency is high enough, we can recover it with a low-pass filter. We drew the spectrum, and we showed that if we sampled fast enough, we could recover the original signal. What I want to do now is talk about what is going on in the time domain. I am sending the (nonuniformly weighted) impulse train into some low-pass filter, and I want to see what $r(t)$ is.
So the first thing I want you to do is tell me what the impulse response of this filter is: $h(t) = \sinc\parens{\frac{\omega_s t}{2\pi}}$.

When you convolve the sinc with these impulses, what do you get? Notice the alignment. Since $x_q(t) = \sum_n x_c(nT_s)\delta(t-nT_s)$, we have $r(t) = \sum_n x_c(nT_s) h(t-nT_s)$.

So let's draw this. (Basic premise: interpolation; the further an impulse is from the center, the weaker its effect. Also, the zero crossings of each shifted sinc occur at the locations of all the other impulses. The scaling is the strength of the impulse.)

We're only going to get one signal: only one will be obtained from this interpolation scheme. Remember, $\sinc \alpha = \frac{\sin \pi \alpha}{\pi\alpha}$. Thus the expression for $r(t)$ can be rewritten as $\sum_n x_c(nT_s) \sinc\parens{\frac{\omega_s(t-nT_s)}{2\pi}}$.

This is a loaded expression. Remember how, when we started Fourier series, we expanded a signal in terms of orthogonal basis functions? This is an orthogonal expansion: the coefficients are the values of the signal at the sample points, and the shifted sincs are the orthogonal functions.

Now, if you look at your problem set, there are a couple of problems you ought to pay attention to. One: the Fourier transform preserves orthogonality, i.e. if two signals are mutually orthogonal in the time domain, they're mutually orthogonal in the frequency domain. You use that later to show that these shifted sincs are orthogonal: you do not show this in the time domain. That's where you use the orthogonality-preserving property of the continuous-time Fourier transform.

What does this mean? A band-limited signal $x$ has an orthogonal expansion in terms of shifted sincs. These are not arbitrary sincs: $T_s$, the sampling period, enters this expression. It's very much tied to the bandwidth of the original signal. Yet another context in which orthogonal expansions show up.

You can consider $h_0(t)$ to be the unshifted $\sinc\parens{\frac{\omega_s t}{2\pi}} = h(t)$, with $h_n(t) = h_0(t - nT_s)$. These functions, we claim, are mutually orthogonal.
Namely,$\braket{h_k}{h_l} = \delta_{kl}$. Here, you use Parseval's theorem, since you do not want to integrate$\sinc^2$. Called band-limited interpolation, since you do not get the original signal if either it wasn't band-limited, or you didn't sample fast enough (which is where aliasing occurs). Another view of sampling as an orthogonal expansion: Another way to look at this is to start in the frequency domain, where I have my spectrum of my signal. If you remember, Fourier series expansion is not just useful for representing periodic signals, but also for finite-duration functions (in this case, it's finite in frequency). You can think of these phantom replications of this triangle. In other words, I can create a periodic extension of this triangle, which is my$X_c(\omega)$, if I create this periodic extension in such a way that these triangles touch exactly. I can write$X_c(\omega) \equiv \sum_{k\in \mathbb{Z}} X_k e^{ikT_s \omega}$. We refer to$T_0 = 2\pi/2A$as our fundamental "frequency" of the periodic extension of$X_c(\omega)$. (this has units of seconds, in fact). This expression is only valid for the range$\abs{\omega} \le A$. It is zero outside of said range. You can do this for any finite-duration function, since this can be thought of as one period of a periodic signal. What I'm going to do is go back to the time-domain function using the CTFT inverse formula. I'm looking at a band-limited function, so the integral doesn't actually go from$-\infty$to$\infty$, but rather from$-A$to$A$. And in this range, you know that$X_c$is equal to the Fourier series expansion. So, exchanging the integral and the summation (as we do so very often), and evaluating the integral, we get a linear combination of sinc functions. It turns out these coefficients are the values of the signal at the sampled points. That isn't obvious yet. After some algebraic manipulations, we can finally see that this whole thing is just$x_c(kT_s)$. 
EE 120: Signals and Systems

March 13, 2012.

$\mathcal{Z}$ Transform

Something we've been brushing under the carpet for a while is the set of signals for which the Fourier transform is not defined. The Z transform is defined for discrete-time LTI systems (it is the discrete analogue of the Laplace transform).

We know that the output is simply the convolution of the input with the impulse response. If we said that $h(n) = \alpha^n u(n)$, where $\abs{\alpha} > 1$, then we know that this system has no frequency response. However, behavior can be well-defined even though you can't say anything about it in the frequency domain. For instance: if $x$ is of finite duration, then you have a finite number of terms in the convolution corresponding to the output signal. You can thus talk about the output of this system for such an input: the convolution is perfectly well-defined, and $y(n)$ is finite for all $n$ (and thus well-defined).

Another situation: what happens if $x$ is right-sided (i.e. $x(n) = 0$ for $n < N$ -- it may also have zero values to the right of $N$, but to the left, it is zero)? Note that causal signals are a subset of right-sided signals: if $N$ is anything smaller than zero, you are looking at a right-sided but not necessarily causal signal. In the convolution of two right-sided signals (or two left-sided signals, even!), only finitely many terms contribute to each output sample. Therefore in these cases (input and impulse response both right-sided) we can define our output anywhere (i.e. finite at every $n$, but not necessarily bounded).

Finally, if $x$ is bounded and $h$ is absolutely summable, then the system is BIBO-stable -- $y$ is finite, so it is also bounded.

$\ell^\infty = \set{ x : \mathbb{Z} \mapsto \mathbb{C} : \abs{x(n)} \le B_x < \infty}$

Recall that $\ell^1 = \set{ h : \mathbb{Z} \mapsto \mathbb{C} : \sum_n \abs{h(n)} \le B_h < \infty }$

For the DTFT, we applied $e^{i\omega n}$ to our system and observed that our output was $H(\omega)e^{i\omega n}$, where we defined $H(\omega) = \sum_n h(n)e^{-i\omega n}$.
We're now going to relax the constraint that $\abs{e^{i\omega n}} = 1$, i.e. that $\omega \in \mathbb{R}$. Now let $x(n) = z^n\ (z \in \mathbb{C})$. So if I apply this signal to the system, what am I going to get?

$$y(n) = \sum_m h(m)x(n-m) = \sum_m h(m)z^{n-m} = z^n\sum_m h(m)z^{-m} \\ z^{-n} y(n) = \sum_m h(m)z^{-m} \equiv \hat{H}(z)$$

This is called the transfer function of the system. The transfer function for us will either be the Z-transform (if discrete-time) or the Laplace transform (if continuous-time). For now we're stuck with the Z-transform.

Notice the similarity of the format of these expressions. The main difference is that now I'm allowed to veer away from the unit circle. This is an infinite sum, so just as with the Fourier transform, we must worry about convergence. With Z transforms and Laplace transforms, we can't get away from convergence. Associated with this sort of expression is what we call a region of convergence (RoC): basically, the region in the complex plane for which this sum converges. We're going to brush aside a lot of subtleties regarding convergence.

$R_h$ is the region in the complex plane (i.e. the set of $z$) such that $\sum_m \abs{h(m)z^{-m}} < \infty$. If the kernel of this sum is absolutely summable, we say that we are in the region of convergence; the values of $z$ for which this is true make up the region of convergence.

I can take any discrete-time function and talk about its Z-transform, just as I can talk about the Fourier transform of any function. So what if I'm looking at $x(n) = \delta(n)$? $\hat{X}(z) = 1$ -- we only have one value in our sum. $R_x = \mathbb{C}$; in other words, $0 \le \abs{z}$.

$$\delta(n-1) \ztrans z^{-1}\:\: (R_h = \set{z : 0 < \abs{z}}) \\ \delta(n+1) \ztrans z\:\: (R_h = \set{z : 0 \le \abs{z} < \infty})$$

Now, the two-point moving average: $\frac{1}{2}\parens{\delta(n) + \delta(n-1)} \ztrans \frac{1 + z^{-1}}{2} = \frac{z + 1}{2z}\ (R_h = \set{z : 0 < \abs{z}})$.
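For a finite-duration signal like the two-point moving average, the transform is just a polynomial in $z^{-1}$ that can be evaluated anywhere except $z = 0$. A small sketch (ours):

```python
import numpy as np

# A sketch (ours): the z-transform of a finite-duration signal is a
# polynomial in z^{-1}.  Two-point moving average: H^(z) = (1 + z^{-1})/2.
h = [0.5, 0.5]

def z_transform(h, z):
    return sum(h[n] * z ** (-n) for n in range(len(h)))

dc = z_transform(h, 1.0)                   # z = 1, i.e. w = 0 on the unit circle
nyq = z_transform(h, np.exp(1j * np.pi))   # z = e^{i pi}, i.e. w = pi
print(dc, abs(nyq))   # averaging passes DC and kills the highest frequency
```

Evaluating on the unit circle $z = e^{i\omega}$ recovers the frequency response, which is the connection to the DTFT made later in the notes.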
Note that if this had been an anti-causal two-point moving average, we'd include 0 and exclude infinity.

All of the signals so far are finite-duration signals (FIR filters). The region of convergence of a function that has a finite region of support is the entire complex plane, with the possible exception of zero, infinity, or both. (Example of both: the three-point moving average centered at zero.)

You may have noticed already that the regions of convergence so far have a particular form: each is bounded by a radius, and each transform is rational in $z$. That's not always the case, but it is for most of the signals we'll work with. There's a nice accounting between numerator and denominator that allows you to determine where the region of convergence is.

Ex: $h(n) = \alpha^n u(n)$, $\alpha \equiv \frac{3}{2}$. $\hat{H}(z) = \sum_{n=0}^\infty \parens{\frac{\alpha}{z}}^n = \frac{1}{1 - \alpha z^{-1}} = \frac{z}{z - \alpha}$. RoC: $\set{z : \abs{z} > \frac{3}{2}}$.

When we talk about the z-transform, you can't just give an expression; you must also provide a region of convergence. One without the other is an incomplete picture.

Plotting the region of convergence: in this case, just draw a dotted circle (radius not included), and shade its exterior.

This is not a proof, but notice that we've got a causal signal, and a region of convergence that is outside of some circle (and extends to infinity). Roots of the denominator are called poles of the system; roots of the numerator are called zeroes. This system therefore has one zero and one pole. It turns out that for right-sided functions, the RoC is always outside the radius defined by the outermost pole.

One thing I want you to pay attention to is the following: the angle of the pole makes no difference in the region of convergence (ever!). When you look at $\hat{H}(z) = \sum_n h(n)z^{-n}$ and replace $z = Re^{i\omega}$, you notice that this is $\sum_n h(n)R^{-n}e^{-i\omega n}$, and $e^{-i\omega n}$ plays no role in whether or not the kernel is absolutely summable.
So the region in the complex plane where this sum converges is independent of $\omega$. I could have made $\alpha$ a complex number at the same radius, and the region of convergence would have been exactly the same. It is the magnitude of $\alpha$ that matters.

Ex: $g(n) = -\alpha^n u(-(n+1))$. Notice that this is a left-sided signal. $\hat{G}(z) = -\sum_{n=-\infty}^{-1} \parens{\frac{\alpha}{z}}^n = -\sum_{n^\prime=1}^\infty \parens{\frac{\alpha}{z}}^{-n^\prime} = \frac{z}{z - \alpha}$ ($\set{z : \abs{z} < \abs{\alpha}}$).

This is exactly the same expression in $z$, but the region of convergence is different. This is why we are compelled to always specify the region of convergence: two very different expressions in time yield the same expression for their z-transforms, and the difference lies entirely in the regions of convergence. Just as with right-sided functions, the RoC for left-sided functions is always inside the radius defined by the innermost pole.

Monologuing

With frequency response and Fourier transforms, we all knew what we were trying to do: decompose a signal into its constituent frequencies. There is no such notion for the Z-transform. But, for example, the whole idea of stabilizing an unstable system by placing it in a feedback configuration requires the Z-transform.

Consider $\alpha^n u(n)$. Here we have not specified whether $\alpha$ is inside or outside the unit circle; the expression for the transform is exactly the same either way.

Take the first case, where $\abs{\alpha} < 1$. The region of convergence is outside the circle of radius $\abs{\alpha}$, so it strictly includes the unit circle, and we can consider the DTFT as well. When that is true, there is a very simple relationship between the z-transform and the Fourier transform: we can evaluate the z-transform on the unit circle, i.e. at $z = e^{i\omega}$. It is because of this that some people consider the z-transform to be a generalization of the Fourier transform.
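The fact that the same closed form $\frac{z}{z-\alpha}$ is only meaningful inside its RoC can be seen numerically. A sketch (ours), using the causal example with $\alpha = \frac{3}{2}$:

```python
# A sketch (ours): the causal series sum_{n>=0} a^n z^{-n} converges to
# z/(z - a) only when |z| > |a|; elsewhere its partial sums blow up,
# even though the closed-form expression looks the same.
a = 1.5

def causal_partial_sum(z, N):
    return sum((a / z) ** n for n in range(N))

inside_roc = causal_partial_sum(2.0, 200)   # |z| = 2 > 3/2: in the RoC
print(inside_roc, 2.0 / (2.0 - a))          # both ~ 4.0
print(causal_partial_sum(1.0, 60))          # |z| = 1 < 3/2: diverging
```

On the unit circle ($z = 1$) the partial sums grow geometrically, which is the numerical face of "no DTFT exists for $\abs{\alpha} > 1$".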
However, there are functions that have a Fourier transform but no z-transform. You also know that when the RoC contains the unit circle, the time-domain function must be absolutely summable. Away from the unit circle, the z-transform looks like the Fourier transform of $R^{-n}h(n)$; the point of $R^{-n}$ is to tame the function.

If $\alpha$ is outside the unit circle, no such relationship exists between the z-transform and the Fourier transform, simply because there is no Fourier transform.

Now consider the anti-causal case, $-\alpha^n u(-(n+1))$. If $\alpha$ happens to be within the unit circle, the function has no Fourier transform. But if $\alpha$ is outside the unit circle, then the function does have a Fourier transform.

So that's the relationship between the z-transform and the Fourier transform: if the region of convergence contains the unit circle, then you can equate them.

If $h(n) \ztrans \hat{H}(z)$, then $h(n-1) \ztrans \frac{\hat{H}(z)}{z}$. Similarly, $h(n+1) \ztrans z\hat{H}(z)$.

There is a difference between bounded and unbounded regions of convergence. We have a few minutes, so let me talk about the distinctions between causal signals and right-sided signals (and also anticausal / left-sided ones).

So let's say we take a right-sided but not causal signal. The RoC is still outside of radius $\abs{\alpha}$, but now you have to exclude $\infty$.

Similarly, for left-sided signals, you'd then exclude 0.

EE 120: Signals and Systems

March 15, 2012.

More on the $\mathcal{Z}$-transform

$$h(n) \ztrans \hat{H}(z) = \infsum{n} h(n) z^{-n} \\ R_h = \text{region of convergence} \\ \defequals \set{z \in \cplx \middle| \sum_n \abs{h(n) z^{-n}} < \infty}$$

For signals of finite duration, the region of convergence is the entire complex plane, minus possibly $r=0$ and $r=\infty$.

Example which was causal, $x(n) = \alpha^n u(n)$. $\hat{X}(z) = \frac{1}{1 - \alpha z^{-1}}$, $\abs{\alpha} < \abs{z}$ (i.e. outside the circle).

We also had an anticausal example, $q(n) = -\alpha^n u(-n-1)$. $\hat{Q}(z) = \frac{1}{1 - \alpha z^{-1}}$, $\abs{z} < \abs{\alpha}$ (i.e. inside the circle).

Furthermore, we discussed that a Fourier transform existed if and only if the unit circle was contained in the region of convergence.

Notice that the RoC of a causal system was outside, all the way to infinity, while the RoC of an anticausal was inside, all the way to zero.

We further learned that causal signals are a subset of right-sided signals, and anti-causal signals are a subset of left-sided signals.

So what happens if we shift our signal, i.e. $r(n) = x(n+1)$?

$\hat{R}(z) = z\hat{X}(z)$. This is a simple example of what we call the time-shift property. You can guess what happens when we shift by an arbitrary integer: $x(n-N) \ztrans z^{-N}\hat{X}(z)$. Note that $r$ is no longer causal, but it is still right-sided.

Notice that now the transform blows up at infinity, so our region of convergence is now $R_r: \abs{\alpha} < \abs{z} < \infty$. The set of right-sided signals is a strict superset of the set of causal signals.

This is the difference between the z-transform of right-sided signals and that of causal signals.

Similarly, with a left-sided signal, we would exclude the origin from the RoC.

There's also a simple way of showing why causal signals are outside of some radius of convergence.

Let x be causal. Its z-transform starts from the earliest possible point, i.e. $n = 0$. $\hat{X}(z) = x(0) + \frac{x(1)}{z} + ...$.

If $\abs{z} = R_1 \in R_x$, I want you to argue why $\abs{z} = R_2$, where $R_1 < R_2$ is also in the RoC. Reasoning: with larger radii, we have smaller values in our absolute sum.

Right-sided signals: almost identical, except we have a finite number of elements on the left, and so infinity must be excluded.

Once you find the radius at which the sum converges, everything else outside also converges.

Similar argument for anti-causal and left-sided signals.

So now let's combine these.

Example:

$g(n) = \parens{\frac{1}{2}}^n u(n) - \parens{\frac{3}{2}}^n u(-n-1)$

As done before, we have $\frac{1}{1 - \alpha z^{-1}}$ for both values of "$\alpha$". Thus our region of convergence is $\frac{1}{2} < \abs{z} < \frac{3}{2}$ (superposition tells us the corresponding z-transform is $\frac{2z}{2z - 1} + \frac{2z}{2z - 3} = \frac{2z(z-1)}{(z-\frac{1}{2}) (z-\frac{3}{2})}$).

As you can see, two-sided signals have annular regions for their RoCs.
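The two-sided transform just computed can be sanity-checked numerically. A sketch (ours): truncate both series at a point $z = 1.2$ inside the annulus $\frac{1}{2} < \abs{z} < \frac{3}{2}$ and compare against the closed form.

```python
# A numeric check (ours) of the two-sided example
# g(n) = (1/2)^n u(n) - (3/2)^n u(-n-1) at z = 1.2, inside the annulus.
z, N = 1.2, 2000
right = sum(0.5 ** n * z ** (-n) for n in range(N))   # (1/2)^n u(n) part
left = -sum((z / 1.5) ** m for m in range(1, N))      # anticausal part, m = -n
closed_form = 2 * z * (z - 1) / ((z - 0.5) * (z - 1.5))
print(right + left, closed_form)   # both ~ -16/7 = -2.2857...
```

Both series converge at this $z$ precisely because $\abs{z} > \frac{1}{2}$ (for the causal part) and $\abs{z} < \frac{3}{2}$ (for the anticausal part) hold simultaneously.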

Reason for zeroes: if I were to ask you to find the inverse of the system, what would you do? Let's say this represents distortion, and you want to undo the distortion.

Also comes into play when you want to plot the frequency response.

Let's do another

Example:

$h(n) = \expfrac{3}{2}{n}u(n) - \expfrac{1}{2}{n} u(-n-1)$. Now we've got nothing: the first term requires $\abs{z} > \frac{3}{2}$ and the second requires $\abs{z} < \frac{1}{2}$, so there is no overlap between the two regions, and there is neither a Z-transform nor a region of convergence. (We would have the same expression, but it doesn't hold anywhere.)

Intermission

Time Shift Property

$x(n-N) \to z^{-N}\hat{X}(z)$. What does this do to the region of convergence? It can potentially eliminate infinity (if N positive) or zero (if N negative), but not both.

Convolution Property

If $h \equiv f \star g$, then $\hat{H}(z) = \hat{F}(z)\hat{G}(z)$. A simple way to show this is by cascading the filters and feeding in $z^n$ (instead of a complex exponential) -- this is identical to the eigenfunction property of LTI systems. And what's the RoC? $R_h \supseteq (R_f \cap R_g)$ (it could be bigger if pole-zero cancellation occurs).
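For finite sequences the convolution property is easy to check directly, since both transforms are polynomials. A sketch (ours; the sequences and test point are arbitrary):

```python
import numpy as np

# A sketch (ours): the z-transform of a convolution of finite sequences
# equals the product of their z-transforms, checked at a test point.
f = [1.0, 2.0, 3.0]
g = [0.5, -1.0]
h = np.convolve(f, g)

def zt(seq, z):
    return sum(c * z ** (-n) for n, c in enumerate(seq))

z = 0.8 + 0.3j
print(zt(h, z), zt(f, z) * zt(g, z))   # equal up to rounding
```

This is the same identity that makes polynomial multiplication and sequence convolution interchangeable.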

Think of these poles as dam (damn?) walls.

If we put this system in cascade with another one such that $q(n) = \delta(n) - \frac{1}{2}\delta(n-1)$, $\hat{Q}(z) = \frac{z-\frac{1}{2}} {z}$. Since this is a FIR filter, $R_q = \cplx - \set{0}$.

$\hat{A}(z) = \hat{G}(z)\hat{Q}(z) = \frac{2(z-1)}{z - \frac{3}{2}}$. We get double pole-zero cancellation, in fact ($\hat{Q}$'s zero at $\frac{1}{2}$ cancels the pole there, and $\hat{Q}$'s pole at $0$ cancels a zero at the origin), so $R_a = \set{z \middle| \abs{z} < \frac{3}{2}}$.

Time-reversal

$x(n) \ztrans \hat{X}(z)$. $x(-n) \ztrans\ ?$ Do a variable substitution, and then you see that everywhere you had $z$, you now have $z^{-1}$. Thus $x(-n) \ztrans \hat{X}\parens{\frac{1}{z}}$. When you correlate this with the Fourier-transform story, we got a frequency reversal in the frequency domain. Locations of poles and zeroes map to their inverses: a pole or zero at $z_0$ moves to $\frac{1}{z_0}$.

Multiplication by a complex exponential

Presume $$g(n) \ztrans \hat{G}(z) \\ h(n) = z_0^n g(n) \ztrans \hat{H}(z) = ?$$

$\hat{G}(\frac{z}{z_0})$, after the dust clears. If $p_0$ is a pole of $\hat{G}$, it moves to $z_0p_0$.

Z-transform of the unit step? $\frac{1}{1 - z^{-1}}$, where $1 < \abs{z}$. This is a perfect example of why the z-transform is not a strict superset of the Fourier transform -- that only happens when the unit circle is strictly part of the RoC. Otherwise you can't evaluate the expression there.

Z-transform of the tone burst (suddenly-applied cosine wave)? We've done this (albeit in parts).

Note that radius of convergence isn't changing.

Will leave it to you to figure out what the transform of $r^n \cos(\omega_0 n) u(n)$ is.

EE 120: Signals and Systems

March 22, 2012.

Upsampling property

$$x(n) \mapsto \uparrow N \mapsto y(n) = \begin{cases} x(n/N) & \text{if } n \equiv 0 \pmod{N} \\ 0 & \text{otherwise} \end{cases}$$

i.e. we have the same values, but now interspersed with more zeroes. Take the axis and dilate by three. So see if you can come up with an expression for the Z-transform of the upsampled signal. We should just have $\hat{X}(z^N)$.
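To see the $\hat{X}(z^N)$ claim concretely, here is a small check with an arbitrary finite-length signal and an arbitrary upsampling factor:

```python
import numpy as np

# Upsampling by N: y(n) = x(n/N) when N divides n, else 0.
# Check numerically that Y(z) = X(z^N) at an arbitrary test point z0.
N = 3
x = np.array([1.0, 2.0, -1.0, 0.5])
y = np.zeros(len(x) * N)
y[::N] = x                              # insert N-1 zeros between samples

z0 = 0.9 * np.exp(1j * 0.4)
X = lambda z: sum(c * z**(-n) for n, c in enumerate(x))
Y = sum(c * z0**(-n) for n, c in enumerate(y))
assert abs(Y - X(z0**N)) < 1e-12
```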

This should not surprise you. When you upsampled in the time domain, what happened in frequency? We contracted in the frequency domain. You get that even from here. If I remind you of an example we did eons ago, $y(n) = \alpha y(n-1) + x(n)$ had a frequency response of $G(\omega) = \frac{1}{1 - \alpha e^{-i\omega}}$. If I change this to the parameters of $y(n) = \alpha y(n-N) + x(n)$, $H(\omega) = \frac{1}{1 - \alpha e^{-i\omega N}}$. But if you compare $g(n) = \alpha^n u(n)$ with $h(n) = \alpha^{n/N} u(n)$ for $n$ a multiple of $N$ (and zero elsewhere) -- which is just $g$ upsampled by $N$ -- we've already seen this.

So when you upsample by $N$, you have the $z$ raised to the $N^{th}$ power.

What's the RoC? Should bring up two more questions: what happens to the poles and zeroes? We take the $N^{th}$ root of everything (i.e. the inverse function), so everything moves closer to the unit circle in magnitude. (rationale: $z_p^N = p \implies z_p$ is an $N^{th}$ root of $p$. We get $N$ times as many poles, in fact, since $p$ has $N$ distinct $N^{th}$ roots. Ibid for zeros.)

Going back to the question of the region of convergence for y: if the RoC for x is $R_1 < \abs{z} < R_2$, the RoC for y is $R_1 < \abs{z}^N < R_2$, i.e. $R_1^{1/N} < \abs{z} < R_2^{1/N}$, so $R_y = R_x^{1/N}$.

So let's do the example given earlier: $y(n) = \alpha y(n-N) + x(n)$. $\hat{H}_1(z) = \frac{1}{1 - \alpha z^{-1}}$. $\hat{H}_4 = \frac{1}{1 - \alpha z^{-4}}$. Draw pole-zero diagrams, region of convergence?

(note that we've got degeneracy -- multiplicity. Must denote with a number in parentheses if you've got multiplicity greater than 2; if multiplicity is 2, you can use a double-circle or double-x).

Differentiation

Another property that's actually very important is differentiation in Z. So suppose you've transformed $x \ztrans \hat{X}(z)$. What signal corresponds to $\deriv{\hat{X}}{z}$? Working it out, $n x(n) \ztrans -z\deriv{\hat{X}}{z}$.

Example:

$g(n) = n\alpha^n u(n) \ztrans \hat{G}(z) = ?$

Apply the differentiation property to $\alpha^n u(n) \ztrans \frac{1}{1 - \alpha z^{-1}}$: we get $\hat{G}(z) = \frac{\alpha z^{-1}}{\parens{1 - \alpha z^{-1}}^2}$, $\abs{\alpha} < \abs{z}$. If you want to make the intermediate result look like this form, just multiply top and bottom by $z^{-2}$. Very important point: extension to higher derivatives.
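A sketch verifying the differentiation-in-$z$ result for $n\alpha^n u(n)$ by summing the series directly ($\alpha$ and $z_0$ are arbitrary, chosen with $\abs{z_0} > \abs{\alpha}$ so we are inside the region of convergence):

```python
import cmath

# Check:  n * alpha^n * u(n)  <->  alpha z^{-1} / (1 - alpha z^{-1})^2.
alpha = 0.6
z0 = 1.1 * cmath.exp(0.3j)              # arbitrary point with |z0| > alpha

# Truncated series sum vs. the closed form from the property.
series = sum(n * alpha**n * z0**(-n) for n in range(2000))
closed = (alpha / z0) / (1 - alpha / z0)**2
assert abs(series - closed) < 1e-9
```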

So what happens as we increase this? What does this mean?

We can decompose any rational Z-transform into a linear combination of lower-order terms. Fundamental theorem of algebra. Proposition: suppose we've got a transfer function. We've got a numerator over a denominator. We can factor the numerator and denominator. You also learned that whenever you do this, you can break apart the ratio in terms of a sum.

Note that this starts breaking when you have degeneracy (i.e. systems with duplicate poles). So from this qualitative argument, it should not surprise you if I tell you that the only way you can get a rational Z-transform is if the system is the sum of one-sided exponentials multiplied by some polynomial.

We'd also have to include the left-sided versions of these.

We can make a general statement: a Z-transform expression $\hat{X}(z)$ is rational iff x(n) is a linear combination of terms $n^k \alpha^n u(n)$, $n^k\beta^n u(-n)$. Shifted versions will certainly also work.

Using partial fractions is one of the methods of doing an inverse transform. We're not going to learn a formal inverse Z-transform; we're just going to use various heuristics (not unlike solving differential equations).

In general, the inverse z-transform requires a contour integral (complex analysis) and thus is not required in this class.

Now, if you believe this, we've got several things: $n^k\alpha^n u(n)$, LCCDEs, and rational Z-transforms. They form a family.

LCCDEs and Rational Z Transforms

Suppose I've got an input, an impulse response, and an output. You know the output is a convolution of x and h, so $\hat{Y} = \hat{H}\hat{X}$, which means the transfer function of an LTI system is the ratio of the transform of the output to the transform of the input (for LTI systems).

Frequency response of the filter gives you the Fourier transform of the output.

We can write our difference equation as $\sum_{k=0}^N a_k y(n-k) = \sum_{m=0}^M b_m x(n-m)$. We've seen this.

One way to get the transfer function is to take the z-transforms of both sides. If they're equal in the time domain, their z-transforms must also be equal in the frequency domain. Time-shift property. Just considering the ratio $\hat{H} \equiv \frac{\hat{Y}}{\hat{X}}$, we have our transfer function.

Familiarize yourself with going from the LCCDE to the transfer function by inspection.
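One way to build that fluency is to run the difference equation directly and compare against the transfer function read off by inspection. A minimal sketch (the helper name `lccde` and the second-order coefficients are made up for the example):

```python
# For sum_k a_k y(n-k) = sum_m b_m x(n-m) (causal, initially at rest),
# the transfer function by inspection is
#   H(z) = (sum_m b_m z^{-m}) / (sum_k a_k z^{-k}).
a = [1.0, -5/6, 1/6]            # y(n) - 5/6 y(n-1) + 1/6 y(n-2) = x(n)
b = [1.0]

def lccde(x, b, a):
    """Run the difference equation directly (system initially at rest)."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = sum(b[m] * x[n - m] for m in range(len(b)) if n - m >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# Impulse response by recursion; its (truncated) Z-transform at a test
# point should match B(z0)/A(z0) read off by inspection.
h = lccde([1.0] + [0.0] * 99, b, a)
z0 = 1.5                        # outside the outermost pole (causal RoC)
Hz = sum(c * z0**(-n) for n, c in enumerate(h))
ratio = sum(c * z0**(-m) for m, c in enumerate(b)) / \
        sum(c * z0**(-k) for k, c in enumerate(a))
assert abs(Hz - ratio) < 1e-9
```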

Now, for the end of the lecture: irrational Z-transform.

Example

This is a standard example in practically any signal-processing book you'll find. $\hat{X}(z) = \log(1 + \alpha z^{-1})$. Determine $x(n)$.

(differentiation property)

($\frac{(-1)^{n-1} \alpha^n}{n}u(n-1)$)

You can also use (to check) Taylor expansion centered at 1: $\log(1 + \lambda) = \sum_{n=1}^\infty \frac{(-1)^{n+1} \lambda^n}{n}$.
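A quick check of this pair by partial sums (with an arbitrary $\alpha$ and an evaluation point inside the RoC $\abs{z} > \abs{\alpha}$):

```python
import math

# Check  x(n) = (-1)^(n-1) alpha^n / n, n >= 1   against
# X(z) = log(1 + alpha z^{-1}), using a truncated series.
alpha = 0.5
z0 = 1.2                          # |z0| > |alpha|, inside the RoC

partial = sum((-1)**(n - 1) * alpha**n / n * z0**(-n) for n in range(1, 500))
assert abs(partial - math.log(1 + alpha / z0)) < 1e-12
```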

EE 120: Signals and Systems

April 3, 2012.

We finished talking about the relationship between LCCDEs and Z-transforms. A little later -- either next lecture or next week -- we're going to talk about how to solve Z-transforms this way. Convenience: convolutions turn into multiplications (which are almost always easier to do than convolutions). Z-transforms turn difference equations into algebraic expressions.

Properties:

Initial Value Theorem

If you have a causal discrete-time signal x (which means $x(n) = 0, n < 0$), and suppose I'm looking at its Z-transform. $\hat{X}(z) = \sum_{n=0}^\infty x(n)z^{-n} = x(0) + \frac{x(1)}{z} + \frac{x(2)}{z^2} + ...$. $\lim_{z\to\infty} \hat{X}(z) = x(0)$, ergo the name initial-value theorem. Simple to prove; sometimes helpful when you have a rational function or some other signal which you happen to know is causal. You can also massage this expression to get the other values (e.g. subtract $x(0)$, multiply by $z$, and take the limit again to recover $x(1)$).
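A numerical illustration with a toy causal signal (the signal here is made up for the example):

```python
# Initial-value theorem sketch: for a causal x, X(z) -> x(0) as z -> infinity.
# Toy causal signal: x(n) = 3 * (0.8)^n u(n), so x(0) = 3 and
# X(z) = 3 / (1 - 0.8 z^{-1}).
def X(z):
    return 3.0 / (1 - 0.8 / z)

# Evaluate at increasingly large |z|; the values approach x(0) = 3.
vals = [X(10.0**k) for k in range(1, 6)]
assert abs(vals[-1] - 3.0) < 1e-4
```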

Obv. does not work for FT: in that case, $z$ always on unit circle.

Dancing around inverse Z-transforms: $x(n) = \frac{1}{2\pi i} \oint \hat{X}(z) z^{n-1} dz$ (contour integral). We are not going to use this (i.e. forget about it).

The ways we invert:

• Inspection

If I ask you what the inverse transform is of $\frac{1}{1 - \alpha z^{-1}}$, we know the result, depending on whether $\abs{\alpha}$ is greater or smaller than $\abs{z}$.

Now consider $\hat{X}(z) = \frac{1}{3}z - \frac{1}{2} + 2z^{-1}$. We can decompose this FIR into its contributing values.

• Power Series Method

We can use the equivalent power series and just transform it term by term. (we could also get some of these via time-reversal and inspection).

• Long Division

Recall rational transforms correspond to functions that have difference equations.

Suppose $\hat{G}(z) = \frac{z}{z-1}$. Doing the long division the usual way (i.e. by $z-1$) will yield the causal signal $u(n)$. Doing long division a different way (i.e. by $1-z$) will yield the anticausal signal $-u(-n-1)$.

The point is that you have flexibility with respect to how you do the long divison, and each of them will give you a different corresponding one-sided signal.
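Both expansions can be checked numerically in their respective regions of convergence:

```python
# G(z) = z/(z-1) expands two ways, depending on how you do the long division:
#   dividing by (z - 1):   1 + z^{-1} + z^{-2} + ...   -> causal u(n)
#   dividing by (-1 + z):  -z - z^2 - z^3 - ...        -> anticausal -u(-n-1)
z1 = 2.0                                            # |z| > 1: causal expansion
causal = sum(z1**(-n) for n in range(200))          # terms of u(n)
assert abs(causal - z1 / (z1 - 1)) < 1e-12

z2 = 0.5                                            # |z| < 1: anticausal expansion
anticausal = sum(-(z2**(-n)) for n in range(-1, -200, -1))   # terms of -u(-n-1)
assert abs(anticausal - z2 / (z2 - 1)) < 1e-12
```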

Recall: a signal cannot be causal if its transfer function diverges as $z \to \infty$. More precisely, the number of samples to the left of the origin corresponds to the order of growth of the transfer function (as a polynomial in $z$).

Rational Transform Pole-Zero Book-keeping

Suppose I have a transfer function $\hat{H}(z) = \frac{A(z)}{B(z)}$, where $A$ is $M^{th}$ order and $B$ is $N^{th}$ order. If $M < N$, H is strictly proper. If $M \le N$, H is proper. And if $M > N$, H is improper.

For $M < N$, there are $N$ finite poles (counting multiplicities), $M$ finite zeros, and $N-M$ zeros at infinity.

For $M = N$, there are $M = N$ finite poles, $M = N$ finite zeros. No activity at infinity.

For $M > N$, there are $M$ finite zeros, $N$ finite poles, and $M-N$ poles at infinity.

In any of these cases, the number of poles equals the number of zeros. The difference is always what is happening at infinity.

Back to power series

$\hat{F}(z) = e^{\frac{1}{z}}$. Just use the Taylor series for the exponential function. Falls into place.

Partial Fraction Expansion

Again we are speaking of a rational Z-transform. You've studied partial fractions in calculus. It's no different here, really. However, this partial fraction must be proper. If not, just do long division until it's in that form.

Case I: Simple Poles

Best case is when all finite poles are simple (i.e. order 1).

(remember: causal signal means that the RoC must be outside the outermost pole.)

So what happens if I ignore the causality constraint and instead add the constraint that the signal is BIBO stable? We get a different equation, which is now a two-sided signal, and nothing blows up.

EE 120: Signals and Systems

April 5, 2012.

Partial Fraction Expansions (Cont'd)

The case of multiple poles:

Example: $\hat{G}(z) = \frac{z^2}{\parens{z-\frac{1}{2}}^2(z-2)}$.

Remember, double-pole at $\frac{1}{2}$ (for this, we can do the double-cross), and a pole at 2. How many possible regions of convergence can we have? 3.

Now let's try to find the impulse response. Before you can do that, you need to break this rational transfer function into a combination of first and second-order terms. We're not going to carry this all the way to the impulse response, but we are going to break it up.

Any time you have a multiple pole, you have to write the expansion in terms of the first order term plus the second order term (or $Az + B$ in the numerator).

The trick at the end is to differentiate and evaluate at the multiple pole.

Assume causal system. Determine $g(n)$. Linear combination of the inverse Z-transforms, since this is a linear operator. Differentiation property to derive inverse transform of second term.

Steady-State and Transient Decomposition of DT-LTI System Responses

We're going to talk in this course about two ways of decomposing the responses of DT systems. One of these is decomposing into transient (which dies out in the long term) and steady-state (long-term dominant) components.

Starting with a causal system. First-order IIR filter, single pole at the origin. Since $\alpha$ is inside the unit circle, this system is BIBO stable.

Simple question: here's your system h, it's got some impulse response, I apply a step function to it.

Once again, remember that partial fraction expansion requires that you have a strictly proper fraction. You may need to pull out some of your zeros to make the fraction strictly proper (or do long division).

The end result is $y(n) = \frac{1}{1-\alpha} u(n) - \frac{\alpha}{1 - \alpha} \alpha^n u(n)$. The first term is our steady-state response. It does not go away with time. The second term is the transient response. It decays to zero as $n$ grows, since $\abs{\alpha} < 1$.

Thus we can decompose any signal into a transient portion and a steady-state portion.
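A simulation sketch of this decomposition (with an arbitrary $\abs{\alpha} < 1$): run the recursion with a step input and compare against the closed form above.

```python
# Step response of y(n) = alpha*y(n-1) + x(n), |alpha| < 1, system at rest:
#   y(n) = 1/(1-alpha)  -  alpha/(1-alpha) * alpha^n,   n >= 0.
# First term: steady state.  Second: transient, dying out as n grows.
alpha = 0.7
N = 60
y, prev = [], 0.0
for n in range(N):
    prev = alpha * prev + 1.0            # x(n) = u(n) = 1 for n >= 0
    y.append(prev)

steady = 1 / (1 - alpha)
closed = [steady - (alpha / (1 - alpha)) * alpha**n for n in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(y, closed))
assert abs(y[-1] - steady) < 1e-6        # transient has died out
```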

If a system is BIBO-stable and causal, all of its poles are inside the unit circle.

Question: a BIBO-stable and causal DT-LTI system has all its poles inside the unit circle. Why is that?

The RoC must be outside of the outermost pole (result of causality). It is also BIBO stable, so it must include the unit circle. Thus the outermost pole must be inside the unit circle, so all other poles must also be inside the unit circle.

Transient is any part that dies out at infinity, while steady-state is anything that is either steady or growing. Notions of dominance (separate but connected closely to transient/steady-state analysis). Dominance is tied to the long-term behavior (i.e. which term dominates when $n$ large?)

Back to the original setup: $x(n) = u(n) \implies y(n) = \frac{\alpha} {\alpha-1} \alpha^n u(n) + \frac{1}{1-\alpha} u(n)$. What if $x(n) = 1$ (for all time)?

Since this system has settled, we have just the steady-state component.

We want to get to the continuous-time story. What if the input signal is a constant signal 1? We get as output our steady-state result. What this means is that the system is unable to distinguish between the constant signal 1 and the unit step which kicks in at $n=0$ if you wait long enough.

All this talk about complex exponentials was really just a discussion regarding steady-state. Non-stable pole (always one on the unit circle), which is going to dominate the response.

Now that you have this connection, you don't have to solve for the coefficient (if the system is BIBO-stable). All you have to do is figure out the transient portion.

Deferring ZIZO until next week.

Example

Consider the causal system described by $y(n) = \frac{5}{6}y(n-1) - \frac{1}{6} y(n-2) + x(n)$. Determine the unit step response of the system.

Transform everything. Remember that $\hat{H}(z) = \frac{\hat{Y}}{\hat{X}}$.

(linear algebra approach:

Looks like $\lambda^n$.

Thus, assuming $y(n) = \lambda^n$, the homogeneous equation gives $\lambda^n = \frac{5}{6}\lambda^{n-1} - \frac{1}{6}\lambda^{n-2}$.

$$\lambda^2 = \frac{5}{6}\lambda - \frac{1}{6} \\ \lambda^2 - \frac{5}{6}\lambda + \frac{1}{6} = 0 \\ \lambda = \frac{1}{2}, \frac{1}{3}$$

These are our eigenvalues. Some linear combination of these two exponentials will match our initial conditions (i.e. $y(0) = 1$, $y(-1) = y(-2) = 0$). That is, $y(n) = a_0 \parens{\frac{1}{2}}^n + a_1 \parens{\frac{1}{3}}^n$ )
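A numerical check of this example (note the characteristic roots come out to $\frac{1}{2}$ and $\frac{1}{3}$), plus the steady-state value $\hat{H}(1) = \frac{1}{1 - \frac{5}{6} + \frac{1}{6}} = 3$ that the unit step response should settle to:

```python
import numpy as np

# Characteristic equation of y(n) = 5/6 y(n-1) - 1/6 y(n-2) + x(n):
#   lambda^2 - 5/6 lambda + 1/6 = 0, with roots 1/2 and 1/3.
roots = sorted(np.roots([1.0, -5/6, 1/6]).real)
assert abs(roots[0] - 1/3) < 1e-12 and abs(roots[1] - 1/2) < 1e-12

# Unit-step response settles to H(1) = 1 / (1 - 5/6 + 1/6) = 3.
y = [0.0, 0.0]                       # y(-2), y(-1) (system at rest)
for n in range(200):
    y.append(5/6 * y[-1] - 1/6 * y[-2] + 1.0)
assert abs(y[-1] - 3.0) < 1e-9
```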

EE 120: Signals and Systems

April 10, 2012.

Zero-State and Zero-Input Responses

Alternate decomposition of systems that is based on whether the system is initially at rest or not. If initially at rest, what is response due to initial conditions, and what is response to impulse?

Example believed to be in the textbook: system described by $y(n) - 0.9 y(n-1) = x(n)$. Causal.

If the system is not at rest, technically it is not LTI. Not nonlinear, though. It's what we call an incrementally-linear system. What distinguishes these from linear systems is that these have non-zero intercepts.

Turning off the input, all you've got is some nonzero initial condition. Figure out the response as time goes forward. This is called the zero-input response of the system (turning off the input).

What if $y(-1) = 0$, and $x(n) = u(n)$? We've got a geometric series! Or do it the z-transform style; do the transform as we've been doing for the past few weeks, and causality will tell you the rest. Output is $HX$. We know how to do partial fractions and stuff. Talk about damn walls.

This response is called the zero-state response ($y_{ZSR}$), meaning the initial state (set of initial conditions) is zero.

So you have the zero-input response plus the zero-state response as yet another decomposition of your system.
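This decomposition is easy to verify by simulation, since the recursion is linear in the pair (initial state, input). A sketch with arbitrary numbers:

```python
# y(n) = 0.9 y(n-1) + x(n), causal, with nonzero initial condition y(-1).
# Total response = zero-input response (x = 0, y(-1) as given)
#                + zero-state response (y(-1) = 0, x as given).
def run(y_init, x):
    y, prev = [], y_init
    for xn in x:
        prev = 0.9 * prev + xn
        y.append(prev)
    return y

x = [1.0] * 50                       # x(n) = u(n)
y_init = 2.0                         # y(-1) = 2, arbitrary nonzero state

total = run(y_init, x)
zir = run(y_init, [0.0] * 50)        # zero-input response
zsr = run(0.0, x)                    # zero-state response
assert all(abs(t - (a + b)) < 1e-12 for t, a, b in zip(total, zir, zsr))
```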

Now we're learning about the contributions of the nonzero initial state. We did this by splitting the response. There is actually a way to figure out the total response using transforms: there is a transform method. Main point of today.

Transform Method to get the Total Response

So, the method begins by looking at the difference equation, e.g. $y(n) = 0.9 y(n-1) + x(n)$. I'm going to use the Lee & Varaiya method, and then we'll look at it another very related method (the unilateral Z-transform).

For starters, multiply each side by $u(n)$. So we have $y(n)u(n) = 0.9y(n-1)u(n) + x(n)u(n)$.

Then take the Z-transform of both sides (using the definition of the Z-transform). Note that this Z-transform looks very much like the z-transform you've seen up until now, except that it starts at zero and goes up to infinity. This is called the unilateral z-transform of $y$.

For any causal signal, the unilateral transform is the same as the bilateral z-transform.

With the unilateral transform, you can do it all in one go.

Laplace Transform

$\hat{X}(s) \defequals \int_{-\infty}^{\infty} x(t) e^{-st} dt$, $s = \sigma + i\omega$. Just as with the Z-transform, we do not use an inverse transform formula. We're going to use similar methods.

Why do we even bother with this? For reasons similar to why we justified the Z-transform, we need a comparable transform for continuous-time systems.

Notice how the integral is actually the Fourier transform of the perturbed ("tamed") function $x(t)e^{-\sigma t}$. The region of convergence is determined by $\sigma = \mathrm{Re}(s)$ alone -- $\omega$ plays no role in convergence.

In continuous time, there is a very nice correspondence between sidedness of signals and the RoC in continuous-time. Easier to remember.

Once again, causality means that the RoC extends all the way to (and includes!) infinity.

Notice that in this case, the RoC contains the $i\omega$ axis. (Conjecture, since not yet proven in this class) As in the z-transform, this is because $x(t)$ is a stable signal. That is, it is absolutely integrable. The proof is fairly trivial: $\int\abs{x(t)e^{-i\omega t}} dt = \int \abs{x(t)} dt < \infty$.

Transform pairs!

$$\renewcommand{\Re}{\mathrm{Re}} e^{-at} u(t) \ltrans \frac{1}{s+a} (-\Re(a) < \Re(s)) \\ -e^{-at} u(-t) \ltrans \frac{1}{s+a} (\Re(s) < -\Re(a))$$
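A crude Riemann-sum check of the first pair, with arbitrary real $a$ and $s$ inside the region of convergence:

```python
import math

# Riemann-sum check of  e^{-at} u(t)  <->  1/(s+a)  for Re(s) > -Re(a).
a, s = 1.0, 2.0                      # real values for simplicity
dt, T = 1e-4, 20.0                   # step and truncation; tail is negligible
n = int(T / dt)
integral = sum(math.exp(-(a + s) * (k * dt)) for k in range(n)) * dt
assert abs(integral - 1 / (s + a)) < 1e-3
```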

EE 120: Signals and Systems

April 17, 2012.

Differentiation property:

$$x(t) \ltrans \hat{X}(s) \\ \dot{x}(t) \ltrans s\hat{X}(s)$$

LCCDEs

$y^{(N)}(t) + ... + a_1 y^{(1)}(t) + a_0 y(t) = b_M x^{(M)}(t) + ... + b_0 x(t)$. What I want you to do is apply the differentiation property to find the transfer function of this. (x-coefficients polynomial divided by y-coefficients polynomial)

$\frac{\sum_m b_m s^m}{\sum_n a_n s^n}$.

Going back to a series-RC circuit powered by a voltage source, we have $z(t) = \frac{x(t) - y(t)}{R}$, $C\dot{y}(t) = z(t)$. So $RC\dot{y}(t) + y(t) = x(t)$. The transfer function therefore is $\frac{1}{RCs + 1}$. The other way is to plug in $e^{st}$ and invoke the eigenfunction property.

Inverting this signal yields $\frac{1}{RC}e^{-t/(RC)}u(t)$. That is the impulse response of the system.

So that was differentiation in time. There is differentiation in the s-domain.

Differentiation in s

$$x(t) \ltrans \hat{X}(s) \\ -t x \ltrans \deriv{\hat{X}}{s}$$

$x(t) = \frac{1}{2\pi i} \oint \hat{X}(s) e^{st} ds$

$te^{-at}u(t) \ltrans \frac{1}{(s+a)^2}$.

Conjecture: terms of the form $t^n e^{-at} u(t)$ and their anticausal counterparts are the only kinds that can be combined (subject to matching RoCs) to produce rational transforms. This means that the impulse response of any rational transfer function must be the sum of these terms.

In differential equations, you studied simple and multiple roots (which correspond to simple/multiple poles in our vernacular).

$s + 1 + 1/(s+3)$

(unit doublet)

on one side, you have delta, step, ramp, quadratic. On the other side, you've got a doublet, second derivative of delta, etc. Delta is $u_0(t)$, doublet is $u_1(t)$, step is $u_{-1}(t)$, etc.

If not strictly proper, you have a polynomial in $s$ left over.

Method 1: non-transform method

If delta goes into the system, what comes out? $h$. If the unit step goes in, we get $u*h$.

Method 2: transform method

partial fractions and stuff. Consistent with result of method 1.

Integration in time/transform domain

Just relabel variables, and it becomes self-evident.

$$x(t) \ltrans \hat{X}(s) \\ \int x dt^\prime \ltrans \frac{1}{s}\hat{X}(s)$$

Steady-State & Transient Response of LTI Systems

Exactly the same as expected. Note that the second one dies out because of the pole of the system.

With BIBO-stable system, input pole to right of rightmost pole of system dominates output.

EE 120: Signals and Systems

April 19, 2012.

Let's talk a bit about a causal BIBO-stable system. Which is usually the case with practical applications. Has a rational transfer function, so usually ratio of two polynomials in $s$. Not going to be too concerned about zeros of system, so we'll write the factored denominator in terms of the poles of the system.

Assume all poles are simple. All poles are in left half-plane. Also, assume transfer function is strictly proper.

To this system, I apply a one-sided (causal) complex exponential signal. What is the output?

transforms and multiplications.

Eigenfunction property (plus other stuff?!).

True for any BIBO-stable function: you can evaluate the Laplace transform on the $i\omega$ axis and get the Fourier transform for that particular $\omega$.

What happens to all the terms involving the Rs? These, collectively, compose your transient response. The last term (result from input)? Doesn't die out. Steady-state.

What this says is that the system cannot distinguish between $e^{i\omega_0 t}$ and its truncated cousin $e^{i\omega_0 t}u(t)$ if we wait long enough: i.e. transients become insignificant. Only portion of response that remains is the one corresponding to $e^{i\omega_0 t}$. Notice that the pole of the input is to the right of the rightmost pole of the system.

Important: all poles of the system are in the left half-plane, and the pole of the input is on the $i\omega$ axis, which means it's to the right of the rightmost pole (and of course the system is causal). Therefore the pole of the input will dominate the response.

Eigenfunction property applies to steady-state solution. Can also extend to sinusoids.

Likely a good time to move to the unilateral Laplace transform and how we can use it to solve ordinary LDEs.

Unilateral Laplace Transform & linear, constant-coefficient differential equations with non-zero initial conditions

Whenever you have nonzero initial conditions, you need to truncate. Trick used: multiply by unit step, then take Laplace transform. Effectively the same as taking unilateral Laplace transform.

$\hat{\mathcal{X}}(s) = \int_{0^-}^\infty x(t) e^{-st} dt$. A lot of textbooks only deal with the unilateral transform because they're interested in causal systems. As are we, in this context.

If I am looking at the unilateral Laplace transform of $\dot{x}$, one additional term appears. If we integrate by parts, we can see what this term must be. In the bilateral case, we evaluated $uv$ at both infinities. The second term (i.e. $\int v\,du$) required that this product evaluate to zero at the infinities -- otherwise the integral would not converge.

In the unilateral case, we therefore have an additional term: $-x(0^-)$.

Zero-state, zero-input method. Remember: different from transient and steady-state. Best not to think of these at the same time.

Method 2: use unilateral Laplace transform.

Note that if a signal is causal, its unilateral Laplace transform is the same as its bilateral Laplace transform.

EE 120: Signals and Systems

April 24, 2012.

DC Motor Control

Application of what we've been studying. Way to review and test fluency with material. We've got a DC motor whose model is some second order linear differential equation. We've got applied torque and damping. Moment of inertia of rotor and whatever's hooked up to it.

Transfer function?

Feedback to stabilize the system. Place this in proportional feedback configuration: only other thing in the feedback system is K, which is a scalar. Integrator: of form $\frac{1}{s}$. What's the transfer function? Characteristic polynomial from differential equations.

K must be positive for BIBO stability.

If roots complex, guaranteed stability. Real part of each pole is $-\frac{D}{2M} < 0$.

Oscillations you get when you have complex poles. Underdamping, critical damping, overdamping. Robustness discussion.

Bode plots!

We've got two building blocks:

$$\hat{F}_I(s) = 1 + \frac{s}{\omega_0} \\ F_I(\omega) = \hat{F}_I(i\omega)$$

Asymptotic plot of $20\log\abs{F_I}$ and $\angle F_I$. The horizontal axis is a logarithmic axis.

What happens when $\omega$ is very small? Asymptotically zero (0 dB). At higher frequencies, $\omega$ is large, so the imaginary part will dominate. And so when you take its magnitude, you gain 20 dB every time you increase the frequency by a factor of 10 -- the slope is 20 dB/decade. $\omega_0$ is your corner frequency (named for obvious reasons); the corner frequency is also the 3dB point. One of the foundational blocks for frequency responses on logarithmic scales. The phase asymptote rises at 45 degrees per decade (use $\frac{\omega_0}{10}$ and $10\omega_0$ to determine where each asymptote takes over).
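Both numbers are easy to confirm numerically (the corner frequency here is an arbitrary choice):

```python
import math

# F_I(omega) = 1 + i*omega/omega_0.  At the corner frequency omega = omega_0,
# |F_I| = sqrt(2), i.e. the magnitude is up by ~3 dB; a decade further up,
# the curve has climbed by ~20 dB (the asymptotic slope).
omega_0 = 100.0

def mag_db(omega):
    return 20 * math.log10(abs(complex(1, omega / omega_0)))

assert abs(mag_db(omega_0) - 3.0103) < 1e-3        # ~3 dB at the corner
assert abs(mag_db(100 * omega_0) - mag_db(10 * omega_0) - 20.0) < 0.05
```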

This building block is called a regular zero. The term is not widely used; Babak learned it from a circuits professor at Caltech (R.D. Middlebrook).

The second building block is $\frac{s}{\omega_0}$. Simple zero.

Claim: all expressions with real roots can be written as combinations of these two.

Inverted zero: $1 + \frac{\omega_0}{s}$