<p><a name='1'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>January 17, 2012.</h2>
<ul>
<li>Babak Ayazifar</li>
<li>email: ayazifar@eecs.berkeley.edu</li>
<li>Best way to contact.<ul>
<li>If time-sensitive, make subject line explosive. Literally.</li>
<li>Does his best to respond. So that's that.</li>
</ul>
</li>
<li>Office: 517 Cory</li>
</ul>
<h2>Grading:</h2>
<ul>
<li>Three midterms.</li>
<li>MT1: 20%. Date: Feb 14. Week of drop date (almost certainly)</li>
<li>MT2: 25%. Date: TBA. Percentage tentative (alt. 20%)</li>
<li>MT3: 25%. Date: Last lecture.</li>
<li>Homework: 15%, drop lowest two scores.</li>
<li>4-6 homeworks.</li>
<li>Work in groups of 3-5 people.</li>
<li>Each member of group turns in separate document.<ul>
<li>Must constitute primarily own work.</li>
<li>Write own name at top, names of collaborators beneath.</li>
</ul>
</li>
<li>Tentatively due Tuesdays so Babak can have OH on Monday evenings.</li>
<li>Pop quizzes (often obvious when): 15%. Drop lowest one.</li>
<li>Possible research paper project. 10% =&gt; MT2, MT3 weigh 20% each.</li>
<li>Just get feet wet with journals.</li>
<li>Same for everybody.</li>
</ul>
<p>Since the course is not HW-heavy, there are weeks we don't have
homework. So get together with groups and plow through old exams.</p>
<p>The other thing you can do is look at a couple of books. Most of them
are with this title: Signals &amp; Systems. One of them is by Oppenheim,
Willsky, Nawab. This one's an expensive book, actually. Don't
recommend you go out and buy it unless you can find one printed
overseas. Excellent problems -- mostly MIT exams. Taught out of this
book previously. (both are second-edition)</p>
<p>There's another one with the same title by Hwei P. Hsu (Schaum's
Outline) -- great for self-study because every problem is solved.</p>
<h1>Basic Properties of Systems</h1>
<ul>
<li>Linearity</li>
<li>
<p><mathjax>$x\to[F]\to y$</mathjax></p>
<p>if:</p>
<p><mathjax>$x_1\to[F]\to y_1$</mathjax>
<mathjax>$x_2\to[F]\to y_2$</mathjax></p>
<p>then:</p>
<p><mathjax>$\alpha x_1\to[F]\to\alpha y_1$</mathjax> (scaling / homogeneity property)
<mathjax>$x_1+x_2\to[F]\to y_1+y_2$</mathjax> (additivity property)</p>
<p>equivalently, combining the two (superposition):</p>
<p>(<mathjax>$\forall x_1,x_2\in X,\ \forall\alpha_1,\alpha_2:\quad \alpha_1 x_1+\alpha_2x_2\to[F]
\to\alpha_1y_1+\alpha_2y_2$</mathjax>),</p>
<p>(where:</p>
<p><mathjax>$x \in X$</mathjax> input signal space</p>
<p><mathjax>$y \in Y$</mathjax> output signal space</p>
<p><mathjax>$y = F(x)$</mathjax></p>
<p><mathjax>$y(t)$</mathjax> output at time t</p>
<p><mathjax>$y(n)$</mathjax> output at sample n)</p>
</li>
<li>
<p>example: resistor and stuff. Ohm's law. <mathjax>$\vec{J} = \sigma \vec{E}$</mathjax> and
    whatnot.</p>
<ul>
<li>Making the system time-variant does not destroy linearity.</li>
<li>ZIZO: zero input implies zero output (L <mathjax>$\implies$</mathjax> ZIZO).</li>
<li>converse not true.</li>
<li>contrapositive/logical equiv: <mathjax>$\lnot \text{ZIZO} \implies \lnot
    \text{L}$</mathjax>.</li>
</ul>
</li>
<li>Modulation</li>
<li>Parallel interconnection of linear systems produces a
    multiple-input linear system.<ul>
<li>Strictly speaking, this is a lie. It's not really parallel.</li>
</ul>
</li>
<li>Two-point moving average:<ul>
<li>Scaling and additivity properties.</li>
</ul>
</li>
<li>Time Invariance</li>
<li>
<p><mathjax>$\hat{x}(t) = x(t-T) \implies \hat{y}(t) = y(t-T)$</mathjax>
    <mathjax>$\forall x\in X, \forall T \in \mathbb{R}$</mathjax></p>
<p>Hidden assumption: X is closed under shifts (i.e. <mathjax>$x\in X \implies
\hat{x}\in X$</mathjax>).</p>
<ul>
<li>Time variance occurs, e.g., when something that isn't an input is
  time-dependent.</li>
</ul>
</li>
<li>Causality<ul>
<li>Basically, no dependence on future values.</li>
<li>If two inputs are identical up to some point, then their outputs must
  also be identical up to that same point.</li>
</ul>
</li>
</ul>
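<p>(A quick numerical sanity check of these properties, as a sketch: Python with
numpy assumed, and the two-point moving average from above is my choice of test
system; the inputs and scalars are arbitrary.)</p>
<pre><code>import numpy as np

def two_pt_avg(x):
    # two-point moving average: y(n) = (x(n) + x(n-1)) / 2, with x(-1) = 0
    x_prev = np.concatenate(([0.0], x[:-1]))
    return 0.5 * (x + x_prev)

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
a1, a2 = 2.0, -3.0

# linearity (superposition): F(a1 x1 + a2 x2) = a1 F(x1) + a2 F(x2)
assert np.allclose(two_pt_avg(a1 * x1 + a2 * x2),
                   a1 * two_pt_avg(x1) + a2 * two_pt_avg(x2))

# time invariance: shifting the input by N samples shifts the output by N
N = 3
x_shifted = np.concatenate((np.zeros(N), x1[:-N]))
y_shifted = np.concatenate((np.zeros(N), two_pt_avg(x1)[:-N]))
assert np.allclose(two_pt_avg(x_shifted), y_shifted)
</code></pre>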
<p><a name='2'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>January 19, 2012.</h2>
<h2>More on causality</h2>
<p>(proof by contradiction of previous example)</p>
<p>(input zero up to a certain point, but the output is nonzero before said
point: the output precedes the input. By itself, this does not establish
non-causality, but if you know the system is either linear or time-invariant,
then you know it can't be causal.)</p>
<p>Let P denote "the output precedes the input." Then (L and C) implies not P:</p>
<p>If the system is linear AND causal, the output cannot precede the input.</p>
<p>contrapositive: P implies not (L and C) = not L or not C.</p>
<p>for time invariance, we don't have to insist that the input is zero up to a
point; it just has to be (any) constant. Output also doesn't have to be
zero; it just has to be constant (not necessarily the same constant,
obviously).</p>
<p>(TI and C) implies not P
contrapositive: P implies not TI or not C.</p>
<p>anti-causality is defined as the exact opposite of causality. Everything
we've said about causality, the inequalities change sign.</p>
<h2>Bounded-input bounded-output (BIBO) stability.</h2>
<p>if <mathjax>$\abs{x(n)} \le B_x\ \forall n\ (\exists B_x &lt; \infty)$</mathjax>, then <mathjax>$\exists\ 0 &lt; B_y &lt;
\infty \text{ s.t. } \abs{y(n)} \le B_y\ \forall n$</mathjax></p>
<p>every FIR (finite-duration impulse response filter) is BIBO stable.</p>
<h2>LTI (linear time-invariant) systems</h2>
<p><mathjax>$x(n)$</mathjax> can in general be decomposed into a linear combination of
impulses (Kronecker deltas): <mathjax>$x(n) = \sum_{m=-\infty}^\infty
x(m)\delta(n-m)$</mathjax>. (sifting property)</p>
<p>If you know the response of the LTI system to the unit impulse, you know
the response to all discrete-time inputs -- decompose it into unit
impulses.</p>
<p>convolution sum.
<mathjax>$$
x(n) = \sum_m x(m) \delta(n-m)
\\ y(n) = \sum_m x(m)f(n-m)
\\ y(n) = (x * f)(n)
\\ y(n) = \sum_{m=-\infty}^\infty x(m)f(n-m) = \sum_{k=-\infty}^\infty f(k)x(n-k)
\\ = (f*x)(n)
\\ \therefore x * f = f * x
$$</mathjax>.</p>
<p>You can reverse the roles of the impulse response and the input signal.</p>
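<p>(A minimal sketch of the convolution sum, assuming numpy; the particular
finite-duration signals are arbitrary choices, zero outside the samples shown.)</p>
<pre><code>import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])   # input
f = np.array([0.5, 0.5])              # impulse response (two-point average)

# y(n) = sum_m x(m) f(n-m): the convolution sum
y1 = np.convolve(x, f)
# reversing the roles gives the same output (commutativity)
y2 = np.convolve(f, x)
assert np.allclose(y1, y2)
print(y1)                             # [ 0.5  1.5  1.  -0.5 -0.5]
</code></pre>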
<p>Let <mathjax>$x(n) = e^{i\omega n}$</mathjax>. Then <mathjax>$y(n) = F(\omega)e^{i\omega n}$</mathjax>, where <mathjax>$F(\omega) = \sum_k f(k)e^{-i\omega
k}$</mathjax> is the frequency response of this LTI system.</p>
<p>Converges nicely iff the system is BIBO-stable. The frequency response of
the system will be a very smooth function of omega.</p>
<h2>BIBO stability for LTI systems</h2>
<p>An LTI system F is BIBO stable iff its impulse response converges
(absolutely summable in discrete case or integrable in continuous case).</p>
<p>IF the impulse response is absolutely summable, then F is BIBO stable
(every bounded input produces a bounded output).</p>
<p><mathjax>$$y(n) = \sum_k f(k)x(n-k) \text{ (I/O relation for an LTI system)}
\\ \abs{y(n)} = \abs{\sum_k f(k)x(n-k)} \le \sum_k \abs{f(k)}\abs{x(n-k)}
\le \left(\sum_k \abs{f(k)}\right)B_x \quad (\text{since } \abs{x(n)} \le B_x\ \forall n)$$</mathjax></p>
<p>F is BIBO stable <mathjax>$\implies \sum_n \abs{f(n)} &lt; \infty$</mathjax></p>
<p>Converse, via the contrapositive: if <mathjax>$\sum_n \abs{f(n)} = \infty$</mathjax>, we can find
at least one bounded input that produces an unbounded output.</p>
<p>(fudge something to cancel out signs, so the output accumulates the absolute
sum over all integers @ n=0: convolve the signal with its time-reversed,
sign-normalized self -- an auto-correlation. Then <mathjax>$y(0) = \sum_k f(k)x(-k) =
\sum_k \abs{f(k)}$</mathjax>, which diverges.)</p>
<p><mathjax>$\frac{f(n)^2}{\abs{f(n)}} = \abs{f(n)}^2/\abs{f(n)} = \abs{f(n)}$</mathjax></p>
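<p>(A sketch of that construction, assuming numpy; the non-summable impulse
response <mathjax>$f(n) = \frac{1}{n+1} u(n)$</mathjax> is my illustrative choice. The bounded
input built from the time-reversed signs makes <mathjax>$y(0)$</mathjax> equal the absolute sum,
which grows without bound.)</p>
<pre><code>import numpy as np

for M in (10, 1000, 100000):
    n = np.arange(M)
    f = 1.0 / (n + 1)          # harmonic series: not absolutely summable
    x_rev = np.sign(f)         # x(-k) = sign(f(k)): a bounded input, |x| = 1
    y0 = np.dot(f, x_rev)      # y(0) = sum_k f(k) x(-k) = sum_k |f(k)|
    print(M, y0)               # grows like ln(M): unbounded output
</code></pre>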
<h2>IIR Filters</h2>
<p><mathjax>$y(n) = \alpha y(n-1) + x(n)$</mathjax>. causal, <mathjax>$y(-1) = 0$</mathjax>.</p>
<p>(geometric sum; BIBO stability depends on the magnitude of <mathjax>$\alpha$</mathjax> being
less than 1.)</p>
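<p>(A direct simulation of this recursion, as a sketch assuming numpy: the impulse
response comes out as <mathjax>$\alpha^n u(n)$</mathjax>, geometric, so it decays when the
magnitude of <mathjax>$\alpha$</mathjax> is below 1 and blows up otherwise.)</p>
<pre><code>import numpy as np

def first_order_iir(x, alpha):
    # y(n) = alpha y(n-1) + x(n), causal, initially at rest (y(-1) = 0)
    y = np.zeros(len(x))
    prev = 0.0
    for n, xn in enumerate(x):
        prev = alpha * prev + xn
        y[n] = prev
    return y

impulse = np.zeros(8)
impulse[0] = 1.0
for alpha in (0.5, 1.5):
    print(alpha, first_order_iir(impulse, alpha))   # h(n) = alpha**n
</code></pre>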
<p><a name='3'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>January 24, 2012.</h2>
<h1>LTI Systems and Frequency Response</h1>
<p>Freq. response: <mathjax>$H(\omega)$</mathjax>
(continuous-time?)</p>
<p>Will learn a set of systems for which the sum doesn't converge, so we'll need a
new transform: the Laplace transform (and, in discrete time, the Z transform).</p>
<h2>Two-point moving average:</h2>
<p><mathjax>$h(n) = (\delta(n) + \delta(n-1))/2$</mathjax></p>
<p><mathjax>$H(\omega) = (1 + e^{-i\omega})/2 = e^{-i\omega /2}(e^{i\omega /2} +
     e^{-i\omega /2})/2 = e^{-i\omega/2}\cos(\omega/2)$</mathjax></p>
<p>frequency response of discrete-time systems is periodic: adding a multiple
of <mathjax>$2\pi$</mathjax> will not change the result.</p>
<p><mathjax>$2\pi$</mathjax>-periodicity of discrete-time LTI frequency responses. This means,
naturally, that we don't have to plot our functions everywhere; rather, we
only need to worry about a single period.</p>
<p>(looks like <mathjax>$\abs{\cos(\frac{\omega}{2})}$</mathjax>)</p>
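<p>(A numerical check of that magnitude response, as a sketch assuming numpy.)</p>
<pre><code>import numpy as np

w = np.linspace(-np.pi, np.pi, 1001)
H = 0.5 * (1 + np.exp(-1j * w))        # H(w) = (1 + e^{-iw}) / 2
assert np.allclose(np.abs(H), np.abs(np.cos(w / 2)))
# the remaining factor is the linear-phase term e^{-iw/2}:
assert np.allclose(H, np.exp(-1j * w / 2) * np.cos(w / 2))
</code></pre>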
<p>eigenfunction property of discrete LTI systems -- <mathjax>$e^{i\omega} \to
H(\omega)e^{i\omega}$</mathjax></p>
<p><mathjax>$H^*(\omega) = \sum h^*(k)e^{i\omega k}$</mathjax></p>
<p>conjugate symmetry: <mathjax>$H^*(\omega) = H(-\omega)$</mathjax>. (generalization of "even"
functions)</p>
<p>if <mathjax>$h(n) \in \mathbb{R}\ \forall n$</mathjax> and <mathjax>$x(n) = \cos(\omega n)$</mathjax>, then <mathjax>$y(n) = \Re\{H(\omega)e^{i\omega n}\} =
\abs{H(\omega)}\cos(\omega n+\angle H(\omega))$</mathjax></p>
<p><mathjax>$x\to H\to y$</mathjax></p>
<p><mathjax>$h(n) = \alpha^n u(n)$</mathjax>, <mathjax>$\abs{\alpha} &lt; 1$</mathjax></p>
<p>Ways to determine <mathjax>$H(\omega)$</mathjax>:</p>
<h2>Method 1:</h2>
<p><mathjax>$H(\omega) = \sum h(n)e^{-i\omega n} = \sum (\alpha e^{-i\omega})^n$</mathjax>. Use
usual formula for geometric series, since this converges.</p>
<h2>Method 2:</h2>
<p>Use the eigenfunction property of the complex exponential (this assumes the
frequency response is defined). Let <mathjax>$x(n) \equiv e^{i\omega n}$</mathjax> (a pure tone).</p>
<p><mathjax>$y(n) = H(\omega)e^{i\omega n}$</mathjax>, so <mathjax>$y(n-1) = H(\omega)e^{i\omega n}e^{-i\omega}$</mathjax>.
Substituting into the recursion and solving:
<mathjax>$H(\omega) = 1/(1-\alpha e^{-i\omega})$</mathjax>.</p>
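<p>(The two methods agree; a sketch assuming numpy, comparing the closed form
against a truncated version of the Method-1 geometric sum. The value of
<mathjax>$\alpha$</mathjax> and the truncation length are arbitrary choices.)</p>
<pre><code>import numpy as np

alpha = 0.8
w = np.linspace(-np.pi, np.pi, 257)

# Method 1: truncate the geometric sum  sum_n (alpha e^{-iw})^n
n = np.arange(200)[:, None]            # column of sample indices
H_sum = np.sum((alpha * np.exp(-1j * w)) ** n, axis=0)

# Method 2 (eigenfunction property) gives the closed form directly
H_closed = 1.0 / (1.0 - alpha * np.exp(-1j * w))

assert np.allclose(H_sum, H_closed, atol=1e-6)
</code></pre>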
<h2>How we can plot the magnitude response</h2>
<p>Write <mathjax>$H(\omega) = e^{i\omega}/(e^{i\omega} - \alpha)$</mathjax> to eliminate the
negative complex exponential. <mathjax>$\alpha$</mathjax> is inside the unit
circle. <mathjax>$e^{i\omega}$</mathjax> is a point on the unit circle, so we can represent it
with a vector.</p>
<p>Consider graphically.</p>
<p><a name='4'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>January 26, 2012.</h2>
<p>numerator is 1, denominator is the length of the vector <mathjax>$e^{i\omega} -
\alpha$</mathjax>. Task: plot the ratio as <mathjax>$\omega$</mathjax> varies.</p>
<p>extreme values: <mathjax>$\abs{H(\pm\pi)} = 1/(1+\alpha)$</mathjax>, <mathjax>$\abs{H(0)} = 1/(1-\alpha)$</mathjax>.</p>
<p>frequency response curve should be monotonically increasing from <mathjax>$-\pi$</mathjax> to
0 and decreasing from 0 to <mathjax>$\pi$</mathjax>. [ talk about inflection points, concavity,
etc. Second derivatives. ]</p>
<h1>Low-pass filter for <mathjax>$0&lt;\alpha&lt;1$</mathjax></h1>
<p>If we understand the geometry for this particular case, we can answer
design-oriented questions. One question is, what if I want to sharpen the peak
of this low-pass filter?</p>
<ul>
<li>
<p>Answer: move <mathjax>$\alpha$</mathjax> toward the unit circle (but still keep it
    inside). Can't get too close -- the real world has noise. Also, in trouble
    if <mathjax>$\alpha$</mathjax> jumps onto or outside of the unit circle. An algebraic
    argument is available, as well.</p>
</li>
</ul>
<h1>Can I make a high-pass filter out of this?</h1>
<ul>
<li>
<p>Yes. Take <mathjax>$\alpha$</mathjax> to <mathjax>$-1 &lt; \alpha &lt; 0$</mathjax>.</p>
</li>
<li>
<p>By taking <mathjax>$\alpha$</mathjax> to be negative, we have the equivalent of a phase
    shift by <mathjax>$\pi$</mathjax>.</p>
</li>
</ul>
<h1>Sharper high-pass filter?</h1>
<ul>
<li>Bring <mathjax>$\alpha$</mathjax> closer to -1.</li>
</ul>
<h1>What if we want the filter to peak at an arbitrary frequency, say <mathjax>$\omega_0 = \pi/4$</mathjax>?</h1>
<ul>
<li>Place <mathjax>$\alpha$</mathjax> along the ray at angle <mathjax>$\theta = \omega_0$</mathjax>, i.e. <mathjax>$\alpha = R e^{i\omega_0}$</mathjax>.</li>
</ul>
<p>peak magnitude: <mathjax>$\abs{H(\omega_0)} = 1/(1-\abs{\alpha})$</mathjax></p>
<p>observation: this filter is a complex filter (i.e. one whose impulse
  response <mathjax>$h(n)$</mathjax> is complex valued). This has its uses but is not always
  desirable.</p>
<p><mathjax>$h(n) = \alpha^n u(n) = R^n e^{i\omega_0n} u(n)$</mathjax>, where <mathjax>$\alpha = Re^{i\omega_0}$</mathjax></p>
<p><mathjax>$g(n) = h(n)e^{i\omega_0n} \implies G(\omega) = H(\omega -\omega _0)$</mathjax></p>
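<p>(A sketch of that modulation property in action, assuming numpy: the pole
<mathjax>$\alpha = R e^{i\omega_0}$</mathjax> with my arbitrary <mathjax>$R = 0.9$</mathjax> puts the peak of the
magnitude response at <mathjax>$\omega_0 = \pi/4$</mathjax>.)</p>
<pre><code>import numpy as np

R, w0 = 0.9, np.pi / 4
w = np.linspace(-np.pi, np.pi, 513)

def H(w, a):
    # frequency response of h(n) = a^n u(n)
    return 1.0 / (1.0 - a * np.exp(-1j * w))

alpha = R * np.exp(1j * w0)            # complex pole at angle w0
G = H(w, alpha)                        # g(n) = h(n) e^{i w0 n}, h(n) = R^n u(n)
assert np.allclose(G, H(w - w0, R))    # G(w) = H(w - w0)
print(w[np.argmax(np.abs(G))])         # approximately pi/4
</code></pre>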
<h2>Example:</h2>
<p>LCCDE: linear constant coefficient difference equation</p>
<p><mathjax>$y(n) = \alpha y(n-N) + x(n)$</mathjax>. N is a positive integer, not necessarily 1.
At rest: <mathjax>$y(-N) = \cdots = y(-1) = 0$</mathjax></p>
<p><mathjax>$(\abs{\alpha} &lt; 1)$</mathjax></p>
<p><mathjax>$$h_{N}(n) = \alpha h_{N}(n-N) + \delta(n) =
\begin{cases}\alpha^{n/N} &amp; n \ge 0 \text{ and } N \mid n\\ 0 &amp; \text{otherwise}\end{cases}
\\ H_{N}(\omega) = \sum_{k=0}^\infty\alpha^{k}e^{-i\omega Nk}
= \sum_k(\alpha e^{-i\omega N})^k = 1/(1-\alpha e^{-i\omega N})
\\ \abs{H_{N}(\omega)} = \abs{e^{i\omega N}/(e^{i\omega N} - \alpha)}
$$</mathjax></p>
<p>Substitute <mathjax>$\Omega \equiv \omega N$</mathjax>, then use a change of variables (after
plotting) to recover the initial basis and label axes and whatnot.</p>
<p>Or use some arbitrary argument that it's a contracted version by going back
to <mathjax>$H(\omega)$</mathjax>. Roughly equivalent either way.</p>
<p>Graph of <mathjax>$\abs{H(2\omega)}$</mathjax>, for <mathjax>$0&lt;\alpha &lt;1$</mathjax>.</p>
<p>(No more graphs.)</p>
<p>Comb filter. (pick off values that are multiples of <mathjax>$\pi/N$</mathjax>)</p>
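<p>(A sketch of the comb structure, assuming numpy: <mathjax>$\abs{H_N(\omega)}$</mathjax> reaches
its maximum <mathjax>$1/(1-\alpha)$</mathjax> at every multiple of <mathjax>$2\pi/N$</mathjax> and its minimum
<mathjax>$1/(1+\alpha)$</mathjax> halfway between; <mathjax>$\alpha$</mathjax> and <mathjax>$N$</mathjax> are arbitrary choices.)</p>
<pre><code>import numpy as np

alpha, N = 0.9, 4
k = np.arange(N)

w_teeth = 2 * np.pi * k / N                    # e^{-i w N} = 1 here
H_teeth = 1.0 / (1.0 - alpha * np.exp(-1j * w_teeth * N))
assert np.allclose(np.abs(H_teeth), 1.0 / (1.0 - alpha))

w_troughs = w_teeth + np.pi / N                # e^{-i w N} = -1 here
H_troughs = 1.0 / (1.0 - alpha * np.exp(-1j * w_troughs * N))
assert np.allclose(np.abs(H_troughs), 1.0 / (1.0 + alpha))
</code></pre>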
<p>design-oriented analysis. layers like an onion. they fit together very
well.</p>
<h2>Analog system</h2>
<p><mathjax>$g(t) = e^{-\alpha t}u(t), \Re{\alpha} &gt; 0$</mathjax></p>
<p><mathjax>$G(\omega) = \int g(t)e^{-i\omega t}dt
   \\ = 1/(\alpha + i\omega ).
   \\ = 1/(i\omega - (-\alpha ))$</mathjax></p>
<p>(magnitude of the denominator: <mathjax>$\sqrt{\omega ^2 + \alpha ^2}$</mathjax>, so <mathjax>$\abs{G(\omega)} = 1/\sqrt{\omega^2 + \alpha^2}$</mathjax>)</p>
<p><a name='5'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>January 31, 2012.</h2>
<p>Homework should be out today. Yes, there are homeworks in this class.</p>
<p>analog system procedure for figuring out frequency response of
systems.</p>
<p>Fourier analysis today. Some of this will be review, but we'll dive
more deeply into the linear algebra in this class.</p>
<p>Roadmap:</p>
<ul>
<li>Fourier Analysis
   "A way of decomposing signals into their constituent frequencies.
   Kind of like the way a prism splits light into its components.
   Fourier analysis gives us the tools we need to analyze signals."</li>
<li>Periodic [ in time domain. ]<ul>
<li>Discrete time, discrete time Fourier series.</li>
<li>Continuous time, continuous time Fourier series.</li>
</ul>
</li>
<li>Aperiodic<ul>
<li>Discrete-time Fourier transform.</li>
<li>Continuous-time Fourier transform
   We can put blocks around these. Discrete-time signals are periodic
   in the frequency domain.</li>
</ul>
</li>
</ul>
<p>Before I do that, I want to review an abstraction that we use for
   signals (and the way we look at signals) as vectors.</p>
<h1>Periodic DT Signals</h1>
<p><mathjax>$\exists p \in \{1,2,3,4,\ldots\} \text{ s.t. } x(n+p) = x(n)\ \forall n$</mathjax></p>
<p>p is called the period of x. The smallest p for which this is true is
called the fundamental period.</p>
<p>With discrete-time signals that are periodic, you always have to find out
the period before you can talk about frequencies. <mathjax>$\omega_0\equiv2\pi/p$</mathjax> is
the fundamental frequency.</p>
<p>Let's say we have a signal defined as ...4/2/4/2/4/2...</p>
<p>We can abstract and represent this signal as a cartesian vector. A
euclidean vector. Namely, I can go to p-space and draw <mathjax>$\mathbf{x}$</mathjax>. I
basically take the values in that one period and stack them
up. <mathjax>$x=[4, 2]$</mathjax>. I can ask you a couple of questions: what the
representation of this vector is in terms of two canonical vectors:
<mathjax>$\psi_0=[1,0]$</mathjax>, <mathjax>$\psi_1=[0,1]$</mathjax>. <mathjax>$\ket{x} = 4\ket{\psi_0} +
2\ket{\psi_1}$</mathjax>. We do this via projection. Or a change of basis. <mathjax>$[4,2] =
4[1,0] + 2[0,1]$</mathjax>. Inner products and stuff. <mathjax>$\braket{x}{\psi_0} +
\braket{x}{\psi_1}$</mathjax>.</p>
<p><mathjax>$\braket{x}{\psi_0} = (x_0\psi_0 + x_1\psi_1)^T\psi_0 = x_0(\psi_0)^T\psi_0
+ x_1(\psi_1)^T\psi_0 = x_0\psi_0^T\psi_0 + 0$</mathjax>.</p>
<p><mathjax>$x^T\psi_0 = x_0\psi_0^T\psi_0$</mathjax></p>
<p><mathjax>$x_0 = x^T\psi_0/(\psi_0^T\psi_0)$</mathjax></p>
<p>Just use inner product with normalized basis vectors.</p>
<p>In this case, it was easy to find out what the coefficients were. I can
repose the question by changing the <mathjax>$\psi_k$</mathjax>. I'm going to rotate the basis.</p>
<p>Basically, a non-normalized sinusoidal basis: <mathjax>$\psi_0 = [1,1]$</mathjax>, <mathjax>$\psi_1 = [1,-1]$</mathjax>;
the coefficients are [3, 1].</p>
<p>[ talk about how we only need two orthogonal vectors, since our fundamental
  period is 2. Don't need a third vector. ]</p>
<p>Now, there's something special about these two frequencies we found.</p>
<p>Harmonics. Integer multiples of the fundamental frequency. The number of terms
equals the period. Certainly what this toy example is indicating.</p>
<p>Now the question is, how do we find these coefficients? Oh wait, we've
already solved that problem.</p>
<h1>INTERMISSION</h1>
<p>I'm going to recast what we just did in matrix-vector form. We said that x
as a vector can be represented as a linear combination of <mathjax>$\psi_0$</mathjax> and
<mathjax>$\psi_1$</mathjax>. I chose the period over the interval 0 to 1. I can do the same
thing with <mathjax>$\psi_0$</mathjax> and <mathjax>$\psi_1$</mathjax>.</p>
<p>This is just the matrix for the FT. I have <mathjax>$x = x_0\psi_0 + x_1\psi_1 =
F[x_0,x_1]$</mathjax>, where <mathjax>$F = \begin{bmatrix}1&amp;1\\1&amp;-1\end{bmatrix} =
[\psi_0, \psi_1]$</mathjax>. This is a 2x2 matrix.</p>
<p>I can write this as <mathjax>$x = \Psi X$</mathjax> (here <mathjax>$\Psi = F$</mathjax>). Solve for X: left-multiply
by the inverse Fourier transform matrix, <mathjax>$F^{-1} = \frac{1}{p}F^\dagger$</mathjax>.</p>
<h1>INTERMISSION</h1>
<p>Once you have complex numbers, you can no longer use the plain dot product. We
now need the inner product <mathjax>$\braket{a}{b} = a^Tb^*$</mathjax>. The dot product fails otherwise,
since without the conjugate <mathjax>$\braket{x}{x} \neq \abs{x}^2$</mathjax>.</p>
<p><mathjax>$x$</mathjax> is <mathjax>$p$</mathjax>-periodic.</p>
<p><mathjax>$\vec{x} = [x(0), ... , x(p-1)]$</mathjax>. I have <mathjax>$\psi_k \perp \psi_l$</mathjax> (<mathjax>$k \neq l$</mathjax>), where
<mathjax>$\braket{\psi_k}{\psi_l} = p\cdot\delta_{kl}$</mathjax>.</p>
<p>Proof of orthogonality of <mathjax>$\psi_k$</mathjax>, <mathjax>$\psi_l$</mathjax>, <mathjax>$k \neq l$</mathjax>. Geometric sum,
not very interesting.</p>
<p><mathjax>$x$</mathjax> is the linear combination of our <mathjax>$\psi_k$</mathjax>.</p>
<p><mathjax>$\braket{x}{\psi_l} = X_l\braket{\psi_l}{\psi_l} \implies X_l =
\frac{1}{p} \braket{x}{\psi_l}$</mathjax>.</p>
<p>synthesis equation, analysis equation:</p>
<p><mathjax>$x(n) = \sum_{k=\avg{p}} X_k e^{ik\omega_0n}$</mathjax>
<mathjax>$X_k = \frac{1}{p}\sum_{n=\avg{p}} x(n) e^{-ik\omega_0n}$</mathjax></p>
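<p>(A sketch of the analysis/synthesis pair on the running ...4,2,4,2... example,
assuming numpy; as computed above, the coefficients come out to <mathjax>$X_0 = 3$</mathjax>,
<mathjax>$X_1 = 1$</mathjax>.)</p>
<pre><code>import numpy as np

x = np.array([4.0, 2.0])           # one period of ...4,2,4,2...; p = 2
p = len(x)
w0 = 2 * np.pi / p

# analysis: X_k = (1/p) sum_n x(n) e^{-i k w0 n}
k = np.arange(p)[:, None]
n = np.arange(p)[None, :]
X = (x * np.exp(-1j * k * w0 * n)).sum(axis=1) / p
print(X.real)                      # [3. 1.]

# synthesis: x(n) = sum_k X_k e^{i k w0 n} recovers the signal
x_back = (X[None, :] * np.exp(1j * n.T * w0 * k.T)).sum(axis=1)
assert np.allclose(x_back.real, x)
</code></pre>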
<p><a name='6'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 2, 2012.</h2>
<h1>DTFS</h1>
<p><mathjax>$x(n) = \sum X_k e^{ik\omega_0n}$</mathjax>. The complex exponentials form a basis
for <mathjax>$\mathbb{C}^p$</mathjax>. The way we define inner products for signals is exactly
as you'd imagine. <mathjax>$\sum \psi_k(n) \psi^*_l(n)$</mathjax>. Frequency periodicity:
<mathjax>$\psi_{k+p} \equiv \psi_k$</mathjax>. Only in discrete-time periodic case are our
functions guaranteed to be both periodic in time as well as frequency.</p>
<p><mathjax>$x(n) = \cos(n)$</mathjax> is not periodic in discrete time <mathjax>$\implies$</mathjax> no DTFS.</p>
<p>Evidently there will be a quiz on Tue, since pset is due on Wed.</p>
<p>Consider what it means to send discrete-time periodic signals through
LTI systems.</p>
<p><a name='7'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 7, 2012.</h2>
<p>CTFS:</p>
<p>periodic if <mathjax>$\exists p &gt; 0 \text{ s.t. } x(t+p) = x(t)\ \forall t$</mathjax>. Smallest positive <mathjax>$p$</mathjax> is
called the fundamental period. Fundamental frequency <mathjax>$\omega_0 \equiv
\frac{2\pi}{p}$</mathjax>.</p>
<p>Can also write <mathjax>$p = 2\pi/\omega_0$</mathjax>. In discrete-time, writing it this way
was dangerous, since depending on <mathjax>$\omega_0$</mathjax>, <mathjax>$p$</mathjax> may or may not be an
integer.</p>
<p>For discrete-time signals, the constant signal <mathjax>$x(n) = 1\ \forall n \in
\mathbb{Z}$</mathjax> was periodic with <mathjax>$p=1$</mathjax>.</p>
<p>What about the signal <mathjax>$x(t) = 1\ \forall t \in \mathbb{R}$</mathjax>? The fundamental period is
undefined: any <mathjax>$p&gt;0$</mathjax> can serve as a period.</p>
<p>So there are subtleties in each story. In the discrete-time story, there
were some sinusoids that looked periodic but weren't, and the constant
signal has no fundamental period in continuous-time.</p>
<p>We're going to jump immediately into the Fourier series. He said you can
decompose any continuous periodic signal as a linear combination of complex
exponentials that are related to each other by virtue of being at
frequencies that are integer multiples of the fundamental frequency.</p>
<p><mathjax>$x(t) = \sum X_k e^{ik\omega_0t} = \sum X_k\psi_k$</mathjax>.</p>
<p>We know the procedure for finding the kth coefficient. Before we go there,
there's something you ought to pay attention to in this expression. When I
draw a typical periodic signal, when I look at one period, how many points
do I have? Uncountably infinite. Also range is potentially a set of
uncountably many values. So this is a bold claim: we can represent these
with a countable number of eigenfunctions.</p>
<p>Unlike the discrete-time story, this equality will not always be a
pointwise equality. There are different gradations of convergence. Whenever
you have an infinite sum, you have to worry about convergence in the back
of your head. For well-behaved signals, the left and the right converge,
and this is true for every t. The less well-behaved signals will no longer
hold pointwise. Strange things happen, e.g. Gibbs phenomenon.</p>
<p>You'll have a reasonable understanding of Fourier series. We're not going
to worry too much about convergence in this class. The only time it doesn't
arise is in the discrete-time Fourier series.</p>
<p>Claim: Fourier analysis works. One path we can take is for you to take my
word for it. Or we could prove it. Since last time was hilarious, we're
going to take this for granted, for now. Assume orthogonality of <mathjax>$\psi_k$</mathjax>
for some definition of the inner product. <mathjax>$\psi_k = \exp(ik\omega_0 t)$</mathjax>.</p>
<p>I am now going to determine <mathjax>$X_l$</mathjax>. Take the inner product of <mathjax>$x$</mathjax> with
<mathjax>$\psi_l$</mathjax>.</p>
<p>The procedure is exactly the same. We're just swapping out our definition
of inner product. Exploit the orthogonality.</p>
<p>For discrete-time p-periodic signals, we defined the inner product as
<mathjax>$\braket{f}{g} = \sum fg^*$</mathjax>. Guess what the continuous-time inner product
is for <mathjax>$p$</mathjax>-periodic signal!</p>
<p>And if they're non-periodic, we'll do the same, but over all time.</p>
<p>Show that our eigenfunctions are orthogonal.</p>
<p>Synthesis equation: <mathjax>$x(t) = \sum X_k \exp(ik\omega_0 t)$</mathjax>
Analysis equation: <mathjax>$X_k = \frac{1}{p}\int_{\avg{p}} x(t)\exp(-ik\omega_0 t)\,dt$</mathjax>.</p>
<p>How do I show that <mathjax>$\braket{\psi_k}{\psi_l} = 0 (k\neq l)$</mathjax>? Just evaluate
the integral. We get an exponential with period p, integrated over a period
p? Looks like 0 to me.</p>
<p>Example: <mathjax>$x(t) = \cos(\pi t/3) = (\exp(i\pi t/3) + \exp(-i\pi
t/3))/2$</mathjax>, so <mathjax>$X_1 = X_{-1} = \frac{1}{2}$</mathjax> (all other <mathjax>$X_k$</mathjax> are zero).</p>
<p><mathjax>$q(t) = \sum\delta(t-\ell p) = \sum Q_k \exp(ik\omega_0t)$</mathjax>
<mathjax>$\delta(t) = \deriv{u(t)}{t}$</mathjax></p>
<p>Poisson's identity. <mathjax>$\sum\delta(t-\ell p) = \frac{1}{p}\sum\exp(ik\omega_0t)$</mathjax></p>
<p><mathjax>$R_k = \frac{1}{p}\int_{\avg{p}} r(t)\exp(-ik\omega_0t)\,dt = \frac{1}{p}$</mathjax> if <mathjax>$k=0$</mathjax>, else
<mathjax>$\frac{1}{p}\frac{\sin(k\omega_0\Delta/2)}{k\omega_0\Delta/2}$</mathjax> (for a unit-area pulse train of
width <mathjax>$\Delta$</mathjax>; every <mathjax>$R_k \to \frac{1}{p}$</mathjax> as <mathjax>$\Delta \to 0$</mathjax>, consistent with Poisson's identity)</p>
<p>What happens if I want to approximate a signal that has finite energy? What
should the coefficients <mathjax>$\alpha_k$</mathjax> be?</p>
<p>orthogonal projection! Least squares!</p>
<p><a name='8'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 9, 2012.</h2>
<h1>Discrete-time Fourier transform</h1>
<p>Discrete aperiodic signals. The DTFT can also handle discrete-time periodic
signals, provided that we make use of Dirac deltas in the frequency
domain. You all should remember that the frequency response of a
discrete-time LTI system is <mathjax>$H(\omega) = \sum h(\ell)e^{-i\omega\ell}$</mathjax> (<mathjax>$\star$</mathjax>)
[DTFT analysis equation]. It turns out this is the DTFT of the impulse
response.</p>
<p>How to go from frequency domain to time domain, i.e. how to derive impulse
response from frequency response.</p>
<p>Our goal is to determine h from H. That's what we want to do. And I'm going
to do this in two ways, both of which rely on things we've already done. So
there's nothing new that I'm going to go through.</p>
<p>Recall: With discrete LTI systems: frequency response has fundamental
period <mathjax>$2\pi$</mathjax>. <mathjax>$H(\omega+2\pi)=H(\omega)$</mathjax>. We already possess the
mathematical machinery to handle periodic continuous variables (i.e. we've
seen this in the CTFS). Then, our continuous variable was time. Now, it's
frequency.</p>
<p>Recall CTFS. <mathjax>$x(t)$</mathjax>. <mathjax>$p$</mathjax>: fundamental period. <mathjax>$\omega_0 = \frac{2\pi}{p}$</mathjax>:
fundamental frequency. We said we can express x as a linear combination of
complex exponentials that are harmonics (integer multiples) of the
fundamental frequency. We had an expression for these. <mathjax>$X_k =
\frac{1}{p}\int_{\avg{p}} x(t)e^{-ik\omega_0 t}\,dt$</mathjax>.</p>
<p>I'm going to draw parallels now with the current scenario: <mathjax>$x\to
H$</mathjax>. <mathjax>$t\to\omega$</mathjax>. <mathjax>$p\to2\pi$</mathjax>. <mathjax>$\omega_0=\frac{2\pi}{p}\to\Omega_0 =
\frac{2\pi}{2\pi}=1$</mathjax>. <mathjax>$\therefore X_k = h(-k)$</mathjax>.</p>
<p>Let <mathjax>$k\equiv-\ell$</mathjax> in equation (<mathjax>$\star$</mathjax>). <mathjax>$H(\omega)=\sum
h(-k)e^{ik\Omega_0\omega}$</mathjax>.</p>
<p>Our coefficients are the <mathjax>$h(-k) = \frac{1}{2\pi} \int_{\avg{2\pi}} H(\omega)
e^{-ik\omega} d\omega$</mathjax>. This is exactly parallel to the previous
expression.</p>
<p><mathjax>$h(\ell) = \frac{1}{2\pi}\int H(\omega)e^{i\ell\omega}d\omega$</mathjax>. Synthesis
equation.</p>
<p>DTFT equations:
<mathjax>$H(\omega) = \sum h(n)e^{-i\omega n}$</mathjax>.
<mathjax>$h(n) = \frac{1}{2\pi}\int H(\omega)e^{i\omega n}d\omega$</mathjax>.</p>
<p><mathjax>$h(n) = \int \frac{d\omega H(\omega)}{2\pi}e^{i\omega n}$</mathjax>. Linear combination of
complex exponentials. <mathjax>$H(\omega)$</mathjax> is a measure of the contribution of
frequency <mathjax>$\omega$</mathjax> to the function <mathjax>$h$</mathjax>.</p>
<p>For now, we're working with the universe of functions of the continuous
variable <mathjax>$\omega$</mathjax> which happen to be periodic with period <mathjax>$2\pi$</mathjax>.</p>
<h1>Ideal discrete-time Low-pass Filter</h1>
<p>Impulse response <mathjax>$h(n) = \frac{1}{2\pi}\int H(\omega)e^{i\omega n}d\omega$</mathjax>.</p>
<p><mathjax>$h(n) = \frac{B}{\pi n} \sin(An)$</mathjax> for <mathjax>$n \neq 0$</mathjax> (with <mathjax>$h(0) = \frac{AB}{\pi}$</mathjax>),
for a passband of height <mathjax>$B$</mathjax> over <mathjax>$\abs{\omega} \le A$</mathjax>.</p>
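<p>(A sketch verifying this numerically, assuming numpy: evaluate the synthesis
integral by a Riemann sum over one period and compare against the closed form;
my choices <mathjax>$A = \pi/3$</mathjax>, <mathjax>$B = 2$</mathjax> are arbitrary.)</p>
<pre><code>import numpy as np

A, B = np.pi / 3, 2.0
w = np.linspace(-np.pi, np.pi, 20001)
H = B * np.less_equal(np.abs(w), A)        # ideal LPF over one period

for n in (1, 2, 5):
    # synthesis equation: h(n) = (1/2pi) integral of H(w) e^{iwn} dw
    h_n = np.sum(H * np.exp(1j * w * n)) * (w[1] - w[0]) / (2 * np.pi)
    print(n, h_n.real, B / (np.pi * n) * np.sin(A * n))   # agree to ~3 decimals
</code></pre>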
<p><a name='9'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 16, 2012.</h2>
<h2>Discrete-Time Fourier Transform: continued</h2>
<p>Recall the ideal low-pass filter. In the frequency domain, it had some
discontinuities. In the time domain, it was NOT absolutely summable. However,
it is square-summable. We have names for signals that behave in these
particular ways.</p>
<p><mathjax>$\ell_1$</mathjax> signals: absolutely summable. <mathjax>$\sum \abs{x(n)} &lt; \infty$</mathjax> (abuse of
notation, but useful at that).</p>
<p><mathjax>$\ell_2$</mathjax> signals: Square-summable. <mathjax>$\sum \abs{x(n)}^2 &lt; \infty$</mathjax>. Finite
energy.</p>
<p>For discrete-time, if a signal is <mathjax>$\ell_1$</mathjax>, it is <mathjax>$\ell_2$</mathjax>. Converse not
true.</p>
<p><mathjax>$\sum_n\abs{h(n)} = \sum_{n=0}^{\infty} \abs{\alpha}^n =
\frac{1}{1-\abs{\alpha}} &lt; \infty$</mathjax>. <mathjax>$h \in \ell_1 \implies
\abs{H(\omega)}$</mathjax> finite and <mathjax>$H$</mathjax> smooth.</p>
<p>If <mathjax>$x \notin \ell_1$</mathjax> but <mathjax>$x \in \ell_2$</mathjax>, you run into a bit of a problem:
cannot use analysis equation, since summation will not converge. But we can
define the Fourier transform in that case to be the limit as <mathjax>$N \to \infty$</mathjax>
of <mathjax>$\sum_{n=-N}^{N} x(n)e^{-i\omega n}$</mathjax>. (For <mathjax>$\ell_1$</mathjax> signals, DTFTs are
continuous, though not necessarily smooth / infinitely differentiable.)</p>
<p>Turns out we get convergence in energy of the two signals. It's just for
you to bear in mind, since you have an infinite sum. The most well-behaved
functions are <mathjax>$\ell_1$</mathjax>. They have nice DTFTs. Next level up in terms of
misbehavior is <mathjax>$\ell_2$</mathjax>. Fourier transforms cannot be obtained through
analysis equation, but can be reverse-engineered or otherwise. DTFTs have
discontinuities.</p>
<p>We have yet another level up in terms of misbehavior. We have signals of
"slow growth" (or zero growth). Examples: <mathjax>$x(n) = 1$</mathjax>, <mathjax>$x(n) = n$</mathjax>, <mathjax>$x(n) =
e^{i\omega_0 n}$</mathjax>. Basically, these are signals that grow no faster than
polynomially in time (and signals that neither grow nor decay). Notice
that the set of these signals <mathjax>$\cap\ (\ell_1 \cup \ell_2) = \emptyset$</mathjax>. Cannot
use the analysis equation, but can use the synthesis equation. And their DTFTs
have Dirac deltas.</p>
<p>Example:</p>
<p><mathjax>$x(n) = e^{i\omega_0n}$</mathjax>. What is <mathjax>$X(\omega)$</mathjax>? Claim: <mathjax>$\alpha\delta(\omega-\omega_0)$</mathjax>. Strictly
speaking, we'd have to write the spectrum as a sum: we have a delta every
<mathjax>$2\pi$</mathjax>, starting at <mathjax>$\omega_0$</mathjax>. But that's not particularly interesting,
since that looks messy. I'm just interested in the interval here. As long
as we know that it's <mathjax>$2\pi$</mathjax>-periodic.</p>
<p>To get our <mathjax>$\alpha$</mathjax>, we can and will use our synthesis equation:
<mathjax>$\frac{\alpha}{2\pi}\int \delta(\omega-\omega_0) e^{i\omega
n}d\omega = \frac{\alpha}{2\pi} e^{i\omega_0n} = e^{i\omega_0n} \implies
\alpha \equiv 2\pi$</mathjax>.</p>
<p>Similarly, <mathjax>$\cos(\omega_0 n)$</mathjax> has DTFT <mathjax>$\pi\delta(\omega-\omega_0) + \pi\delta(\omega+\omega_0)$</mathjax> (<mathjax>$2\pi$</mathjax>-periodic,
again)</p>
<p>For signals that belong to none of the previous categories, we have to
learn a new transform -- Z-transform! Z-transform! Coming in March, to
theaters near you.</p>
<p>Why do <mathjax>$\ell_1$</mathjax> signals have finite DTFTs?</p>
<p>triangle inequality, following directly from the analysis equation.</p>
<h1>INTERMISSION</h1>
<h2>DTFT Properties</h2>
<h1>Time-shifting</h1>
<p>If I have a signal that's Fourier-transformed? What do I get in the
frequency domain if I shift the original signal?</p>
<p>We get a phase shift: if we shift x(n) by N, our new <mathjax>$X(\omega)$</mathjax> is
multiplied by <mathjax>$e^{-i\omega N}$</mathjax>.</p>
<h2>Example / curveball</h2>
<p><mathjax>$H(\omega) = e^{-i\omega/2}$</mathjax>,
which yields <mathjax>$h(n)= \frac{(-1)^{n+1}}{\pi(n-\frac{1}{2})}$</mathjax></p>
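<p>(A sketch checking that impulse response numerically, assuming numpy:
<mathjax>$h(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-i\omega/2} e^{i\omega n} d\omega
= \frac{\sin(\pi(n-1/2))}{\pi(n-1/2)}$</mathjax>, which matches the closed form above.)</p>
<pre><code>import numpy as np

w = np.linspace(-np.pi, np.pi, 40001)
H = np.exp(-1j * w / 2)                    # half-sample delay

for n in (-1, 0, 1, 2):
    h_n = np.sum(H * np.exp(1j * w * n)) * (w[1] - w[0]) / (2 * np.pi)
    closed = (-1.0) ** (n + 1) / (np.pi * (n - 0.5))
    print(n, h_n.real, closed)             # the two columns agree
</code></pre>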
<p>Remember: convolution and multiplication are duals over the time and
frequency domains.</p>
<p>Not too long from now, we are going to start studying the sampling
theorem. One way to consider this as a half-delayed sample of X is to think
of X as having samples. We map this to a sequence of samples in continuous
time, split apart by T seconds (sampling period). Interpolation scheme that
we'll learn about, etc. I can shift to the left or right by an arbitrary
amount. So in continuous time, I take this signal and I shift it to the
right by T/2 seconds. So I take whatever this was, take samples spaced T
apart, then convert it back into Kronecker deltas.</p>
<p>This filter is called, for the reason described, a half-sample delay. You
essentially have to interpolate between the samples in discrete time
and resample at the halfway points.</p>
<p>What happens if I multiply in the time domain by a complex exponential
(modulation)? The frequency domain is convolved with said exponential.</p>
<p>Using the analysis equation, if <mathjax>$q(n) = e^{i\omega_0 n}x(n)$</mathjax>, we have <mathjax>$Q(\omega) = \sum x(n) e^{-i (\omega -
\omega_0) n} = X(\omega-\omega_0)$</mathjax>.</p>
<h1>Multiplying the time domain by <mathjax>$e^{i\omega_0 n}$</mathjax> results in a frequency shift.</h1>
<p>Multiplying by a complex exponential yields a shift in the other domain.</p>
<p>There is one property that is distinct, and that is the modulation
property (multiplication property).</p>
<p><mathjax>$q(n) = x(n)y(n) \iff Q(\omega)$</mathjax>. Not an ordinary convolution.</p>
<p><mathjax>$Q(\omega) = \sum x(n)y(n)e^{-i\omega n}$</mathjax></p>
<p>In particular, for <mathjax>$x(n)$</mathjax>, use synthesis equation to replace <mathjax>$x(n)$</mathjax> with
<mathjax>$X(\xi)$</mathjax> (where <mathjax>$\xi$</mathjax> is a dummy variable).</p>
<p><mathjax>$$
Q(\omega) = \frac{1}{2\pi} \sum_n\int_\avg{2\pi}
X(\xi)e^{i\xi n}\,d\xi\ y(n) e^{-i\omega n}
= \frac{1}{2\pi} \int_\avg{2\pi}d\xi\, X(\xi)\sum_n e^{-i(\omega-\xi)n}y(n)
= \frac{1}{2\pi} \int_\avg{2\pi}d\xi\, X(\xi)Y(\omega-\xi)
$$</mathjax></p>
<p>What's special about this? It's only over a value of <mathjax>$2\pi$</mathjax>. This is a very
special kind of convolution. We call this a circular convolution of <mathjax>$x$</mathjax> and
<mathjax>$y$</mathjax>. So <mathjax>$x(n)y(n) \iff \frac{1}{2\pi} (X*Y)(\omega)$</mathjax>.</p>
<p>Think of two functions plotted in terms of <mathjax>$\lambda$</mathjax>. One function, let's
say, is a square pulse wave thingy.</p>
<p>easy way of carrying out circular convolution: take one of the two
functions, keep one replica (e.g. the one from <mathjax>$-\pi$</mathjax> to <mathjax>$\pi$</mathjax>), and then
do a regular convolution with the other function. advice: choose the
flipped + shifted signal for elimination of replicas.</p>
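<p>(A sketch of this recipe, assuming numpy: circularly convolve a <mathjax>$2\pi$</mathjax>-periodic
square pulse with itself by folding the shift back into the base period, which
is exactly the wraparound of the replicas. The half-width 1 is an arbitrary
choice; the result is the expected triangle <mathjax>$\max(0, 2-\abs{\omega})$</mathjax> on one
period.)</p>
<pre><code>import numpy as np

M = 2048
lam = np.linspace(-np.pi, np.pi, M, endpoint=False)   # one period's grid
dlam = 2 * np.pi / M

def pulse(l):
    # one period of the square pulse, half-width 1
    return np.less_equal(np.abs(l), 1.0).astype(float)

def circ_conv(Q_samples, R_fun, w):
    # (Q (*) R)(w) = integral over one period of Q(l) R(w - l) dl,
    # with w - l folded back into [-pi, pi): the periodic replicas
    shift = np.mod(w - lam + np.pi, 2 * np.pi) - np.pi
    return np.sum(Q_samples * R_fun(shift)) * dlam

Q = pulse(lam)
for w in (0.0, 1.0, 2.5):
    print(w, circ_conv(Q, pulse, w))       # about 2, 1, 0: the triangle
</code></pre>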
<p><a name='10'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 21, 2012.</h2>
<h1>Circular convolution wrap-up</h1>
<p><mathjax>$h(n) = \frac{B}{\pi n} \sin(An) \fourier H(\omega) = B(u(\omega+A)-u(\omega-A))$</mathjax>.
(ideal low-pass filter and its impulse response)</p>
<p>What if I had a filter whose frequency response <mathjax>$G(\omega)$</mathjax> is basically the
signal (circularly) convolved with itself (with some extra factors)?
description: a triangle ramping up from <mathjax>$0$</mathjax> at <mathjax>$-2A$</mathjax> to <mathjax>$2AB^2$</mathjax> at <mathjax>$0$</mathjax>, then back down
to <mathjax>$0$</mathjax> at <mathjax>$2A$</mathjax>.</p>
<ul>
<li>Recall: To perform circular convolution:<ul>
<li>Keep one function fixed (in terms of a dummy frequency variable).</li>
<li>Keep only one period of the other signal.</li>
<li>Perform regular convolution.</li>
</ul>
</li>
<li>Example: <mathjax>$(Q * R)(\omega)$</mathjax>.<ul>
<li>Plot on a dummy-variable axis <mathjax>$Q(\lambda)$</mathjax> with its replicas.</li>
<li>Plot R only in one period (zero out all replicas).</li>
<li>Flip &amp; slide <mathjax>$R$</mathjax>: <mathjax>$R(\omega-\lambda)$</mathjax>.</li>
</ul>
</li>
</ul>
<p>What's the brute-force way to find the impulse response? Synthesis
equation.</p>
<p>Inspection method? It looks like it's <mathjax>$g(n) = 2\pi h^2(n) = \frac{2B^2}{\pi
n^2}\sin^2(An)$</mathjax> (the circular convolution carries a <mathjax>$\frac{1}{2\pi}$</mathjax> factor, hence the <mathjax>$2\pi$</mathjax> scalar)</p>
<ul>
<li>
<p>Recall: <mathjax>$q(n)r(n) \fourier \frac{1}{2\pi} (Q*R)(\omega)$</mathjax>.</p>
</li>
<li>
<p><em>Note: this filter is BIBO stable, and its Fourier transform is
  continuous.</em></p>
</li>
</ul>
<p>Now, we can spend a lot of time on the various properties of the DTFT (what
happens when you time-reverse a signal, shift, multiplication properties,
modulation, etc.), and this'll probably show up in a problem set.</p>
<p>As the last example on the DTFT: <mathjax>$f(n) = e^{i\omega_0n}h(n)$</mathjax>. Let one of
these be a sinusoid. What do you do?</p>
<p>It's effectively shifted by <mathjax>$\omega_0$</mathjax>, after the math clears.</p>
<h1>Intermission</h1>
<p>Tentative midterm date: March 13.</p>
<h1>Continuous-Time Fourier Transform</h1>
<p><mathjax>$X(\omega)$</mathjax>: roughly the frequency response in continuous-time.</p>
<p><mathjax>$$X(\omega) = \infint x(t)e^{-i\omega t}dt
\\ x(t) = \frac{1}{2\pi} \int X(\omega)e^{i\omega t}d\omega$$</mathjax></p>
<p>Slight shorthand: <mathjax>$X = \infint x(\tau)\phi_\tau d\tau$</mathjax>, where <mathjax>$\phi_\tau(\omega)
\defequals e^{-i\omega \tau}$</mathjax>.</p>
<p>To determine <mathjax>$x(t)$</mathjax>: (assume for now that <mathjax>$\braket{\phi_\tau}{\phi_t} =
2\pi\delta(t-\tau)$</mathjax>)</p>
<p><mathjax>$\braket{X}{\phi_t} \defequals \infint X(\omega) \phi_t^*(\omega) d\omega$</mathjax>.</p>
<p><mathjax>$\braket{X}{\phi_t} = \infint x(\tau)\braket{\phi_\tau}{\phi_t}d\tau = 2\pi x(t)$</mathjax>, which recovers the synthesis equation.</p>
<p><em>NB: Inner products are in frequency domain.</em></p>
<ul>
<li>Recall:</li>
<li>In the discrete-time case, we found that for <mathjax>$\braket{\phi_m}{\phi_n}$</mathjax>,
    we got <mathjax>$2\pi\delta_{mn}$</mathjax>. Turns out, we have exactly the same
    relationship in the continuous-time case.</li>
<li>
<p>In the continuous-time case, we have <mathjax>$2\pi\delta(m-n)$</mathjax>.</p>
</li>
<li>
<p>Recall Poisson's identity: <mathjax>$\sum \delta(t-\ell p) = \frac{1}{p}\sum
  \exp(ik\omega_0t)$</mathjax>.</p>
</li>
</ul>
<p>Now we are desperate to have <mathjax>$\infint e^{i\omega t}d\omega =
2\pi\delta(t)$</mathjax>. Which means we must have <mathjax>$\delta(t) \equiv \frac{1}{2\pi}
\int e^{i\omega t}d\omega$</mathjax>. New definition for <mathjax>$\delta(x)$</mathjax>.</p>
<p>Mutual orthogonality of the CTFT takes on this nasty form.</p>
<h2>Plausibility argument:</h2>
<p>The right-hand side is proportional to <mathjax>$\int e^{i\omega t}d\omega = \int
\cos(\omega t)d\omega + i\int \sin(\omega t)d\omega$</mathjax>. Imaginary part is
guaranteed to be 0. By the same argument, the cosine portion is 0 if <mathjax>$t
\neq 0$</mathjax>. Otherwise, we're integrating 1 over all reals, so we have
<mathjax>$\infty$</mathjax>.</p>
<p>Constructively interfere at <mathjax>$t = 0$</mathjax>, but destructively interfere at <mathjax>$t \neq
0$</mathjax>.</p>
<p>Therefore the right-hand side must be proportional to <mathjax>$\delta(t)$</mathjax>.</p>
<p>The claim is that the constant of proportionality is <mathjax>$2\pi$</mathjax>. The hard part is
to recognize this is actually a Dirac delta.</p>
<h1>Intermission</h1>
<p>Steam coming out of our heads, evidently. Remember, this is the only
barrier to studying the DTFT.</p>
<h1>One way to show <mathjax>$\delta(t) \equiv \frac{1}{2\pi} \int e^{i\omega t}d\omega$</mathjax></h1>
<p>This method you will see, hear, etc. in engineering contexts for figuring
out something that doesn't converge, since Riemann integrals don't really
work here.</p>
<p><mathjax>$\delta_\Delta(t) = \frac{1}{2\pi} \int e^{i\omega t}e^{-\Delta\abs{\omega}}d\omega$</mathjax>. This
product is supposed to tame the function such that it is now integrable.</p>
<p>(take <mathjax>$\Delta &gt; 0$</mathjax>)</p>
<p>Multiply what you've got with something that makes the product
converge. Then take the limit as said function goes to 1?</p>
<p>In other words, I am perturbing the problem slightly to make it stable and
taking the limit as stability goes to 0.</p>
<p>NB: half-width at half-maximum: the offset <mathjax>$t = \pm\Delta$</mathjax> at which the
function falls to half of its maximum value.</p>
<p>Perturbation theory!</p>
<p><mathjax>$\alpha \defequals$</mathjax> normalization constant, chosen such that the area is 1
(<mathjax>$\Delta$</mathjax> is the perturbation parameter).</p>
<p>Result of the integral: <mathjax>$\delta_\Delta(t) = \frac{\alpha\Delta}{\pi}
\frac{1}{\Delta^2 + t^2}$</mathjax>. Turns out <mathjax>$\alpha=1$</mathjax>.</p>
<p>This is a Cauchy probability density function.</p>
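<p>(A sketch checking this numerically, assuming numpy: for each <mathjax>$\Delta$</mathjax>, the
Cauchy kernel keeps unit area while its peak <mathjax>$\frac{1}{\pi\Delta}$</mathjax> grows, so
it concentrates at the origin; the truncation of the integration range is an
arbitrary numerical choice.)</p>
<pre><code>import numpy as np

t = np.linspace(-200.0, 200.0, 2000001)
dt = t[1] - t[0]
for Delta in (1.0, 0.1, 0.01):
    d = (Delta / np.pi) / (Delta ** 2 + t ** 2)
    # area stays about 1 (heavy tails account for the small deficit);
    # the peak 1/(pi Delta) grows, and d(t = Delta) is half the peak
    print(Delta, np.sum(d) * dt, d.max())
</code></pre>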
<p>(names for dirac delta: generalized function, distribution. Look up theory
 of distributions)</p>
<p><a name='11'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 23, 2012.</h2>
<h2>Continuous-time Fourier transform</h2>
<p>Recall:
<mathjax>$$X(\omega) = \int_{-\infty}^\infty x(t)e^{-i\omega t}dt
\\ x(t) = \frac{1}{2\pi}\int_{-\infty}^\infty X(\omega)e^{i\omega t}d\omega$$</mathjax></p>
<p>We developed the inverse Fourier transform using a bizarre identity, which we
derived via a perturbative argument.</p>
<p>Example: Ideal low-pass filter. In the continuous-time story, you do not
assume any periodicity. Figure out the impulse response of this filter.</p>
<p><mathjax>$$h(t) = \frac{1}{2\pi}\int_{-A}^A Be^{i\omega t}d\omega = \frac{B}{\pi t}\sin(At)$$</mathjax> (valid
for <mathjax>$t = 0$</mathjax> as well, in the limit: <mathjax>$h(0) = \frac{AB}{\pi}$</mathjax>)</p>
<p>Okay. Now, how would you plot this? maximum of <mathjax>$\frac{AB}{\pi}$</mathjax>, zeroes at
<mathjax>$\frac{k\pi}{A} \forall k \in \mathbb{Z}$</mathjax>.</p>
<p><mathjax>$$H(\omega) = \int h(t) e^{-i\omega t}dt\bigg|_{\omega=0} = \int h(t)dt =
H(0) = B$$</mathjax>.</p>
<p>Largest triangle has the same area as the integral over all space.</p>
<p>Another question. Let <mathjax>$B=1$</mathjax>. What is <mathjax>$\lim_{A\to\infty} h(t)$</mathjax>? Yet another
Dirac delta (yet another Dirac delta definition).</p>
<p>Take <mathjax>$g(t)$</mathjax> as a block. What is <mathjax>$G(\omega)$</mathjax>? It is a sinc function.</p>
<p>If you dilate in the time domain, you squish the frequency domain, and vice
versa.
<mathjax>$$
\delta(t) \fourier 1
\\ 1 \fourier 2\pi\delta(\omega)
\\ e^{i\omega_0t} \fourier 2\pi\delta(\omega-\omega_0)
\\ \cos(\omega_0t) \fourier \pi(\delta(\omega-\omega_0) + \delta(\omega+\omega_0))
\\ \sin(\omega_0t) \fourier i\pi(\delta(\omega+\omega_0) - \delta(\omega-\omega_0))
\\ x(t) \fourier X(\omega)
\\ x(t-T) \fourier e^{-i\omega T}X(\omega)
\\ e^{i\omega_0t}x(t) \fourier X(\omega - \omega_0)
$$</mathjax></p>
<h2>Multiplication in Time</h2>
<p><mathjax>$$h(t) = f(t)g(t) \fourier H(\omega)
\\ f(t) \fourier F(\omega)
\\ g(t) \fourier G(\omega)$$</mathjax></p>
<p>Since transforms are not periodic, we have a regular convolution with an
extra <mathjax>$1/2\pi$</mathjax> term. <mathjax>$H(\omega) = \frac{1}{2\pi}(F * G)(\omega)$</mathjax>.</p>
<p>Reasoning for convolution leading to multiplication in frequency domain:
cascade two systems, choose input to be complex exponential. This is an
eigenfunction, so we have the output response being <mathjax>$F(\omega)G(\omega)$</mathjax>. Now
choose input to be an impulse. Output is <mathjax>$(f*g)(t)$</mathjax>.</p>
<h2>INTERMISSION</h2>
<h2>Amplitude modulation</h2>
<p>I'm going to start what looks like a new phase of this course, but it's
hardly anything new and unpredictable. If you understand the CTFT and its
fundamental properties (what we've talked about so far), you should have no
problem understanding amplitude modulation.</p>
<p>One of the things you've experienced by now is that the atmosphere is quite
unforgiving to audible frequencies. Your voice only transmits over some
short distance.</p>
<p>Obvious solution is to shift spectrum to a much higher frequency range. The
way we do it is to multiply by either a complex exponential or a
sine/cosine (called a carrier signal). Carries signal of interest. So
<mathjax>$Y(\omega) = \frac{1}{2\pi}(X * C)(\omega) = X(\omega - \omega_0)$</mathjax>. What
happens is that <mathjax>$\omega_0$</mathjax> is usually large enough to make transmission
through the atmosphere possible.</p>
<p>Ignoring all degradation going through the atmosphere, to retrieve the
original signal, multiply by the negative carrier frequency (complex
exponential). Frequencies about zero: baseband frequencies. Shifted
spectrum of received signal back to baseband.</p>
<p>aliasing; completely garbles signal.</p>
<p>Apply low-pass filter at end to capture baseband spectrum. Scale by two to
recover original signal, complete with amplitude.</p>
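<p>(A sketch of the whole chain, assuming numpy; the message, carrier frequency,
and FFT brick-wall low-pass filter are my stand-ins, not anything specific from
lecture.)</p>
<pre><code>import numpy as np

fs = 10000.0                                # sample rate (Hz), 1 second of signal
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)   # message
carrier = np.cos(2 * np.pi * 1000 * t)      # carrier far above the message band

y = x * carrier                             # transmitted AM signal
r = y * carrier                             # synchronous demodulation
# r = x/2 + (x/2) cos(2 w0 t); low-pass keeps only the baseband copy
R = np.fft.rfft(r)
f = np.fft.rfftfreq(len(r), 1 / fs)
R[np.greater(f, 200.0)] = 0                 # cutoff between message band and 2 w0
x_hat = 2 * np.fft.irfft(R, n=len(r))       # scale by two recovers the message
print(np.max(np.abs(x_hat - x)))            # tiny reconstruction error
</code></pre>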
<p>Final observations: we first of all made an assumption that the received
signal is the same as the transmitted signal. A whole area of communication
theory deals with deterioration. The other assumption is that the
oscillator at the receiver can generate the exact same frequency as the
transmitter oscillator, and at the same phase.</p>
<p><a name='12'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>February 28, 2012.</h2>
<h1>AM Continued</h1>
<p>(review of what we just did)</p>
<h2>Recall:</h2>
<p>There are two major assumptions that we made. First of all, no
transmission degradation. Second of all, the receiver has the exact
same phase and frequency as the transmitter.</p>
<h2>Question:</h2>
<p>What if <mathjax>$\hat{c}(t) = \cos(\omega_0t + \theta)$</mathjax>? (still keeping the
assumption that we can somehow match the frequency. New assumption: phase
is constant and not time-varying)</p>
<h2>Thoughts:</h2>
<p>If the phase is off by <mathjax>$\pi/2$</mathjax>, then you lose your signal entirely. <mathjax>$r(t) =
\frac{1}{2}\cos(\theta)x(t) + \frac{1}{2} x(t)\cos(2\omega_0t +
\theta)$</mathjax>. If <mathjax>$\theta$</mathjax> is relatively small (compared to <mathjax>$\pi/2$</mathjax>), then we
are safe, since we have our original signal. However, we lose our signal
as <mathjax>$\cos\theta\to0$</mathjax>.</p>
<p>Note:
MT2 date: Tues, 13 Mar 2012.</p>
<p>Also: when <mathjax>$\theta=0$</mathjax>, this is referred to as synchronous demodulation
(transmitter and receiver in sync).</p>
<p>Instead of sending <mathjax>$y(t)$</mathjax> into a low-pass filter, what we do is send it
through the diode parallel RC circuit. This is one way to do asynchronous
demodulation. This is technically cheating, since we assume that our signal
is entirely positive. However, we can simply apply a DC offset if we know
the bounds on our values.</p>
<p>Suppose <mathjax>$|x(t)| \le A \forall t$</mathjax>. Then, transmit <mathjax>$\hat{x}(t) \equiv x(t) +
A$</mathjax>.</p>
<p>Why is this method of transmitting <mathjax>$\hat{x}(t)$</mathjax> called AM with large
carrier? We're actually also transmitting the carrier: <mathjax>$\hat{x}(t)
\cos(\omega_0 t) = x(t)\cos(\omega_0 t) + A\cos(\omega_0 t)$</mathjax>.</p>
<p>If <mathjax>$\abs{x(t)} \le K$</mathjax>, we want <mathjax>$K &lt; A$</mathjax>. In fact, <mathjax>$\frac{K}{A}$</mathjax> is referred
to as the percent modulation or modulation index.</p>
<p>One thing you should know is that there is redundant information in
double-sideband suppressed-carrier transmission.</p>
<h1>Frequency Division Multiplexing</h1>
<p>Each player is allocated a piece of real estate along the frequency axis.</p>
<h2>Quadrature multiplexing</h2>
<p>The way we can do this is by exploiting the orthogonality of cosine and
sine. What's being transmitted is the sum of the two.</p>
<p><a name='13'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>March 1, 2012.</h2>
<h1>Sampling of CT Signals</h1>
<p>Now we have to differentiate between frequency in radians per second
(<mathjax>$\omega$</mathjax>), and frequency in radians per sample (<mathjax>$\Omega$</mathjax>). Our sampling
interval we will represent with <mathjax>$T_s$</mathjax>, our sampling period.</p>
<p>There are associated with these boundary blocks (<mathjax>$C \to D$</mathjax>, <mathjax>$D \to C$</mathjax>)
periods. <mathjax>$T_r$</mathjax> : reconstruction period. <mathjax>$T_r$</mathjax> and <mathjax>$T_s$</mathjax> may or may not be
the same.</p>
<p>The new part really is sampling theory.</p>
<p>So let's begin.</p>
<p>Let's say I have a continuous-time signal <mathjax>$x_c(t)$</mathjax>. We sample
periodically. <mathjax>$x_d(0)=x_c(0)$</mathjax>. <mathjax>$x_d(1)=x_c(T_s)$</mathjax>. Basically,
<mathjax>$x_d(n)=x_c(nT_s)$</mathjax>. A basic question is whether or not it is possible to
reconstruct the original signal. In general, the answer is no.</p>
<p>There must be (we hope so, at least) a set of conditions that guarantees
recoverability. These actually happen to be sufficient conditions. That is
the whole subject of the Sampling theorem, which we will develop.</p>
<p>So let me open up this first box, the process of sampling. We can model the
process of sampling this signal as one that involves multiplying the
original input signal by an impulse train, where the impulses are separated
<mathjax>$T_s$</mathjax> seconds apart. Remember: multiplying a function that is locally
continuous with an impulse is equivalent to rescaling the impulse.</p>
<p>Once you extract <mathjax>$x_q$</mathjax>, there's a block we're not going to worry about that
converts Dirac deltas to Kronecker deltas. Believe it or not, there is
nothing new here.</p>
<p>What happens to the spectrum of <mathjax>$x_c$</mathjax> as it is multiplied by the impulse
train?</p>
<p>Our fundamental frequency of this signal is <mathjax>$\frac{2\pi}{T_s}$</mathjax>. It's also
called, in this context, our sampling frequency.</p>
<p>Considering the CTFT of periodic signals, we have a bunch of uniform
impulses separated by <mathjax>$\omega_s$</mathjax>, each of strength
<mathjax>$\omega_s$</mathjax>. What happens when <mathjax>$T_s$</mathjax> gets smaller? The sampling frequency
increases, and so we have stronger (and more separated) impulses as our
spectrum.</p>
<p>Our CTFT is simply <mathjax>$2\pi\sum_k Q_k\delta(\omega-k\omega_s)$</mathjax>. Multiplication
in the time domain is convolution in the frequency domain, so the spectrum of
<mathjax>$x_c$</mathjax> (the triangles) gets replicated at every multiple of <mathjax>$\omega_s$</mathjax>.</p>
<p>So what happens? In order for these triangles to be recoverable, we need <mathjax>$2A
\le \omega_s$</mathjax>, or <mathjax>$A \le \frac{\omega_s}{2}$</mathjax>. <mathjax>$\omega_s$</mathjax> must be large
enough that adjacent triangles do not overlap (and this is, indeed, the
crux of the sampling theorem) -- in this case, we can certainly recover our
original signal by applying a low-pass filter. </p>
<p>The low-pass filtering is an interpolation operation -- we're interpolating
between adjacent values of the signal.</p>
<p>(explanation: You've got your samplings of the signal, so we've actually
got a set of impulses. We're applying a low-pass filter on this and filling
in the missing portions of the signal. Remember that our ideal low-pass
filter is the sinc, and so we're taking the linear combination of a
countably infinite set of sincs)</p>
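<p>Here is a small numerical sketch of that interpolation (assuming a
band-limited input sampled above the Nyquist rate; the frequencies and grid
below are arbitrary choices). It rebuilds the signal between samples as a
linear combination of shifted sincs:</p>
<pre><code>import numpy as np

Ts = 0.01                                   # sampling period, so fs/2 = 50 Hz
n = np.arange(-200, 201)                    # sample indices (truncated sum)
f0 = 12.0                                   # tone well below 50 Hz
x_samples = np.cos(2 * np.pi * f0 * n * Ts)

t = np.linspace(-0.5, 0.5, 1001)            # dense grid between samples
# r(t) = sum_n x(n Ts) sinc((t - n Ts)/Ts); np.sinc(a) = sin(pi a)/(pi a),
# the same convention as in lecture.
r = np.array([np.sum(x_samples * np.sinc((ti - n * Ts) / Ts)) for ti in t])

print(np.max(np.abs(r - np.cos(2 * np.pi * f0 * t))))  # small (truncation aside)
</code></pre>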
<h1>Whittaker-Nyquist-Kotelnikov-Shannon Sampling Theorem</h1>
<p>(1915, 1928, 1933, 1949)</p>
<p>Most frequently known as the Shannon Sampling theorem. This is the set of
sufficient conditions for recovering a signal from a sequence of samples.</p>
<ul>
<li><mathjax>$x_c$</mathjax> is band limited. Namely, <mathjax>$\abs{X_c(\omega)} = 0 (\abs{\omega} &gt;
   A)$</mathjax>.</li>
<li>Sampling rate <mathjax>$\omega_s$</mathjax> is large enough such that <mathjax>$2A \le \omega_s$</mathjax></li>
</ul>
<p>There is no universal definition of bandwidth. In some circles, the highest
frequency <mathjax>$A$</mathjax> is called the bandwidth. In other circles, the spectral
footprint <mathjax>$2A$</mathjax> is called the bandwidth. It does not really matter which
convention you adopt.</p>
<p>As mentioned, nothing mentioned today is new; we've just used very basic
properties of the CTFT of periodic signals and the convolution property.</p>
<h2>Thoughts</h2>
<p>What if <mathjax>$\omega_s &lt; 2A$</mathjax>? (by the way, <mathjax>$2A$</mathjax> is called the Nyquist
rate). There has been a big push lately: look up sub-Nyquist sampling in the
literature. There are many people trying to figure out how to sample at a
lower-than-Nyquist rate.</p>
<p>So what happens if <mathjax>$\omega_s = 2A$</mathjax>? Our triangles are tangent. In the ideal
case, this is still acceptable, but it is impossible to build an ideal
low-pass filter, so it doesn't actually matter.</p>
<p>Oversampling: you exceed the Nyquist rate by a certain amount to
compensate. Past a certain point, additional oversampling doesn't buy you
anything.</p>
<p>If <mathjax>$\omega_s &lt; 2A$</mathjax>, our triangles overlap, and so the spectrum is
significantly different.</p>
<p>Higher frequencies get folded into lower frequencies. This is called
aliasing: the artifact of folding of higher frequencies into lower
frequencies. Once you have that, you have irrecoverably lost your original
signal.</p>
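<p>A quick sketch of this folding (the rates below are arbitrary): a tone above
half the sampling rate produces exactly the same samples as a folded
low-frequency tone, so once sampled, the two are indistinguishable.</p>
<pre><code>import numpy as np

fs = 100.0                          # sampling rate (Hz), so fs/2 = 50 Hz
f_hi = 70.0                         # tone above fs/2
n = np.arange(32)

x_hi = np.cos(2 * np.pi * f_hi * n / fs)
# 70 Hz folds to 100 - 70 = 30 Hz:
x_folded = np.cos(2 * np.pi * (fs - f_hi) * n / fs)

print(np.max(np.abs(x_hi - x_folded)))   # ~0: the samples are identical
</code></pre>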
<p>In image processing, we have anti-aliasing (our next topic).</p>
<p>Notice that if the signal is band-limited, we can make <mathjax>$\omega_s$</mathjax> large
enough such that we do not get overlap. If <mathjax>$x_c$</mathjax> is <strong>not</strong> band-limited,
then no matter how high we make <mathjax>$\omega_s$</mathjax>, we're going to have some
overlap. There's one thing we can do in that case, but that's a
compromise. We call this:</p>
<h2>Anti-aliasing</h2>
<p>The idea is we discard frequencies above a certain value. With aliasing, the frequency
region is distorted, which is reflected everywhere in the time domain. This
is generally not good, unless it's really small and you can ignore it.</p>
<p>Anti-alias filtering is a preprocessing stage. <mathjax>$x_c(t) \to LPF \to
q(t)$</mathjax>. Namely, apply a low-pass filter (with cutoffs @ <mathjax>$-\frac{\omega_s}
{2}$</mathjax>, <mathjax>$\frac{\omega_s}{2}$</mathjax>) before sampling. We eliminate the aliasing we
know will happen (in the event that we don't do this).</p>
<p>The result is that you don't lose quite as high of frequencies. There is no
more aliasing.</p>
<p>So why anti-alias first instead of allowing it to alias, if you lose
information either way?</p>
<ul>
<li>Anti-aliasing allows for the preservation of more of the original
   signal.</li>
</ul>
<p>The set of signals that does not produce aliasing with a fixed <mathjax>$\omega_s$</mathjax>
are those that are band-limited by <mathjax>$\left(-\frac{\omega_s}{2},\frac{\omega_s}
{2}\right)$</mathjax>.</p>
<p>(talk about Parseval's theorem)</p>
<h2>Carriage-wheel effect</h2>
<p>Aliasing is what causes the phenomenon that some of you may have noticed:
when on the highway, and you stare at the wheels of the car passing by you,
and they seem to be moving much slower in the opposite direction. Used to
show up at PhD qualifying exams at MIT. You may also find this described as
the carriage-wheel effect. Since the frame rate of old movies was 24 frames
per second, the wheels looked to be moving backward.</p>
<p>Mark a point on the unit circle. I strobe this wheel at a rate <mathjax>$\omega_s$</mathjax>
(strobing it is like sampling it). So I've got this strobe gun with a
sampling rate of <mathjax>$\omega_s$</mathjax>. <mathjax>$\omega_s$</mathjax> happens to be <mathjax>$\frac{3}{2}
\omega_0$</mathjax> (so not exceeding the Nyquist rate). If I sample exactly at
<mathjax>$\omega_0$</mathjax>, the point looks stationary. So the only way to capture the
motion properly is to capture at <mathjax>$2\omega_0$</mathjax> and higher. But I'm not doing
that.</p>
<p>Its spectrum is a single Dirac impulse of strength <mathjax>$2\pi$</mathjax> centered at
<mathjax>$\omega_0$</mathjax>. The sampling signal <mathjax>$Q(\omega)$</mathjax> will have an impulse train
separated by <mathjax>$\omega_s$</mathjax>. When we convolve this and apply a low-pass filter,
we have just one remaining frequency at <mathjax>$\omega_0-\omega_s = -\frac
{\omega_0}{2}$</mathjax>.</p>
<p><a name='14'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>March 6, 2012.</h2>
<h1>Sampling Cont'd</h1>
<p>We are still in the first of three blocks (where we take a continuous-time
signal and create a discrete-time signal).</p>
<p>This end-to-end system is effectively a continuous-time system.</p>
<p>This kind of processing we refer to (for obvious reasons) as discrete-time
processing of continuous-time signals. The opposite is also possible, where
you start out with a discrete-time signal, process in continuous-time, and
spit out a discrete-time signal.</p>
<p>Last time, we opened up the first box (<mathjax>$C \to D$</mathjax>). We didn't even talk
about the entire box -- there's still some stuff to discuss.</p>
<p>So, consider an impulse train. We then take this through a Dirac <mathjax>$\to$</mathjax>
Kr\"onecker block to produce <mathjax>$x_d(n)$</mathjax>.</p>
<p>Question: How is <mathjax>$X_q(\omega)$</mathjax> related to <mathjax>$X_d(\Omega)$</mathjax>?</p>
<p>(Work through moving between coordinates; the upshot is <mathjax>$X_d(\Omega) =
X_q\parens{\frac{\Omega}{T_s}}$</mathjax>, i.e. <mathjax>$\Omega = \omega T_s$</mathjax>.)</p>
<p>All of this assumes that there has been no aliasing. For us to have had no
aliasing, remember the Nyquist sampling theorem.</p>
<p>Considerations of LTI for <mathjax>$Y_c(\omega) \equiv X_c\parens{\frac{\omega
T_r}{T_s}}$</mathjax>.</p>
<p>The only way your end-to-end system will have an LTI-equivalent is if you
fulfill the conditions of the Nyquist sampling theorem (no aliasing), and
your reconstruction period is the same as your sampling period.</p>
<p>Then you know that your output is equal to <mathjax>$\frac{G}{T} H_d\parens{ \omega T}
X_c(\omega)$</mathjax>. The equivalent LTI filter is simply <mathjax>$\frac{G}{T} H_d \parens{
\omega T}$</mathjax></p>
<p><a name='15'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>March 8, 2012.</h2>
<p>Spoke about sampling, impulse train, conditions under which signal can be
recovered -- if band-limited, and frequency high enough, we can recover
with a low-pass filter.</p>
<p>We drew the spectrum, and we showed that if we sampled fast enough, we
could recover the original signal.</p>
<p>What I want to do now is talk about what is going on in the time domain. I
am sending the nonuniform impulse train into some low-pass filter, and I
want to see what <mathjax>$r(t)$</mathjax> is. So the first thing I want you to do is tell me
what the impulse response of this filter is. <mathjax>$h(t) = \sinc\parens{ \frac{
\omega_s t}{2\pi}}$</mathjax>.</p>
<p>When you convolve the sinc with these impulses, what do you get? Notice the
alignment. Since <mathjax>$x_q(t) = \sum x_c(nT_s)\delta(t-nT_s)$</mathjax>, <mathjax>$r(t) = \sum x_c
(nT_s) h(t-nT_s)$</mathjax>. So let's draw this. (basic premise: interpolation;
further from center means impulse has weaker effect. Also, zero crossings
occur at locations of all other impulses. Scaling is strength of impulse.)
We're only going to get one signal. Only one will be obtained from this
interpolation scheme.</p>
<p>Remember, <mathjax>$\sinc \alpha = \frac{\sin \pi \alpha}{\pi\alpha}$</mathjax>.</p>
<p>Thus the expression for <mathjax>$r(t)$</mathjax> can be rewritten as <mathjax>$\sum x_c (nT_s)
\sinc(\frac{\omega_s(t-nT_s)}{2\pi})$</mathjax>. This is a loaded
expression. Remember how when we started Fourier series, how we expand a
signal in terms of orthogonal basis functions? This is an orthogonal
expansion. The values of the signals at the sample points; these shifted
sincs are the orthogonal functions.</p>
<p>Now, if you look at your problem set, there are a couple of problems you
ought to pay attention to. One: Fourier transform preserves orthogonality,
i.e. if two signals are mutually orthogonal in the time domain, they're
mutually orthogonal in the frequency domain. You use that later to show
that these shifted sincs are orthogonal: you do not show this in the time
domain. That's where you use orthogonality-preserving property of the
continuous-time Fourier transform.</p>
<p>What does this mean? A band-limited signal <mathjax>$x$</mathjax> has an orthogonal expansion
in terms of shifted sincs. These are not arbitrary sincs: <mathjax>$T_s$</mathjax>, the
sampling period, enters this expression. It's very related to the bandwidth
of the original signal. Yet another context in which orthogonal expansions
show up.</p>
<p>You can essentially consider <mathjax>$h_0(t)$</mathjax> as the unshifted <mathjax>$\sinc (\frac
{\omega_s t}{2\pi}) = h(t)$</mathjax>. <mathjax>$h_n(t) = h_0(t - nT_s)$</mathjax>. These functions, we
claim, are mutually orthogonal. Namely, <mathjax>$\braket{h_k}{h_l} =
\delta_{kl}$</mathjax>. Here, you use Parseval's theorem, since you do not want to
integrate <mathjax>$\sinc^2$</mathjax>. Called band-limited interpolation, since you do not
get the original signal if either it wasn't band-limited, or you didn't
sample fast enough (which is where aliasing occurs).</p>
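<p>A rough numerical check of that claim (a sanity check on a truncated grid,
not a proof, and not a substitute for the Parseval argument): the inner
products of distinct shifted sincs come out near zero, and each sinc has unit
energy.</p>
<pre><code>import numpy as np

Ts = 1.0
dt = 0.01
t = np.arange(-400, 400, dt)        # wide, fine grid to approximate integrals

def h(k):
    # h_k(t) = sinc((t - k Ts)/Ts), with np.sinc(a) = sin(pi a)/(pi a)
    return np.sinc((t - k * Ts) / Ts)

for k, l in [(0, 0), (0, 1), (0, 2), (1, 3)]:
    inner = np.sum(h(k) * h(l)) * dt    # Riemann approximation of the integral
    print(k, l, round(inner, 2))        # ~1.0 when k == l, ~0.0 otherwise
</code></pre>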
<h2>Another view of sampling as an orthogonal expansion:</h2>
<p>Another way to look at this is to start in the frequency domain, where I
have my spectrum of my signal. If you remember, Fourier series expansion is
not just useful for representing periodic signals, but also for
finite-duration functions (in this case, it's finite in frequency). You can
think of these phantom replications of this triangle. In other words, I can
create a periodic extension of this triangle, which is my <mathjax>$X_c(\omega)$</mathjax>, if
I create this periodic extension in such a way that these triangles touch
exactly.</p>
<p>I can write <mathjax>$X_c(\omega) \equiv \sum_{k\in \mathbb{Z}} X_k e^{ikT_s
\omega}$</mathjax>. We refer to <mathjax>$T_0 = 2\pi/2A$</mathjax> as our fundamental "frequency" of the
periodic extension of <mathjax>$X_c(\omega)$</mathjax>. (this has units of seconds, in
fact). This expression is only valid for the range <mathjax>$\abs{\omega} \le A$</mathjax>. It
is zero outside of said range.</p>
<p>You can do this for any finite-duration function, since this can be thought
of as one period of a periodic signal.</p>
<p>What I'm going to do is go back to the time-domain function using the CTFT
inverse formula. I'm looking at a band-limited function, so the integral
doesn't actually go from <mathjax>$-\infty$</mathjax> to <mathjax>$\infty$</mathjax>, but rather from <mathjax>$-A$</mathjax> to
<mathjax>$A$</mathjax>. And in this range, you know that <mathjax>$X_c$</mathjax> is equal to the Fourier series
expansion. So, exchanging the integral and the summation (as we do so very
often), and evaluating the integral, we get a linear combination of sinc
functions.</p>
<p>It turns out these coefficients are the values of the signal at the sampled
points. That isn't obvious yet. After some algebraic manipulations, we can
finally see that this whole thing is just <mathjax>$x_c(kT_s)$</mathjax>.</p>
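<p>A sketch of that algebra, using the conventions above (with <mathjax>$T_s = \pi/A$</mathjax>,
so the triangles touch exactly):</p>
<p><mathjax>$$
x_c(t) = \frac{1}{2\pi}\int_{-A}^{A} X_c(\omega) e^{i\omega t} d\omega
= \sum_k X_k \frac{1}{2\pi}\int_{-A}^{A} e^{i(t + kT_s)\omega} d\omega
= \sum_k X_k \frac{A}{\pi} \sinc\parens{\frac{A(t + kT_s)}{\pi}}
$$</mathjax></p>
<p>Setting <mathjax>$t = mT_s$</mathjax> and using <mathjax>$AT_s = \pi$</mathjax>, the sinc vanishes except when
<mathjax>$k = -m$</mathjax>, where it equals one. So <mathjax>$X_k = T_s x_c(-kT_s)$</mathjax>: up to the scale
factor <mathjax>$T_s$</mathjax> and an index reversal, the Fourier series coefficients are
exactly the samples of the signal.</p>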
<p><a name='16'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>March 13, 2012.</h2>
<h1><mathjax>$\mathcal{Z}$</mathjax> Transform</h1>
<p>Something we've been brushing under the carpet for a while is the set of
signals in which the Fourier transform is not defined.</p>
<p>So the Z transform is defined for discrete-time LTI systems. (discrete
analogue of the Laplace transform)</p>
<p>We know that the output is simply the convolution of the input with the
impulse response.</p>
<p>If we said that <mathjax>$h(n) = \alpha^n u(n)$</mathjax>, where <mathjax>$\abs{\alpha} &gt; 1$</mathjax>, then we
know that this signal has no frequency response. However, behavior can be
well-defined even though you can't say anything about it in the frequency
domain.</p>
<p>For instance: if x is of finite duration, then you have a finite number of
terms in the convolution corresponding to the output signal. You can thus
talk about the output of this system for such an impulse: convolution is
perfectly well-defined, and <mathjax>$y(n)$</mathjax> is finite for all n (and thus
well-defined).</p>
<p>Another situation: if x is right-sided (i.e. <mathjax>$x(n) = 0$</mathjax> for <mathjax>$n &lt; N$</mathjax> -- may
also have zero values to the right of <mathjax>$N$</mathjax>, but to the left, it is zero),
what happens?</p>
<p>Note that causal signals are a subset of right-sided signals. If <mathjax>$N$</mathjax> is
anything smaller than zero, you are looking at a right-sided but not
necessarily causal signal.</p>
<p>In the convolution of two right-sided signals (or two left-sided signals,
even!), finitely many terms contribute. Therefore in these cases (input and
impulse response) we can define our output anywhere (i.e. finite, but not
necessarily bounded).</p>
<p>Finally, if x is bounded and h is absolutely summable, then the system is
BIBO-stable -- <mathjax>$y$</mathjax> is finite, so it is also bounded.</p>
<p><mathjax>$\ell^\infty = \set{ x : \mathbb{Z} \mapsto \mathbb{C} : \abs{x(n)} \le B_x &lt;
\infty}$</mathjax></p>
<p>Recall that <mathjax>$\ell^1 = \set{ h : \mathbb{Z} \mapsto \mathbb{C} : \sum
\abs{h(n)} \le B_h &lt; \infty }$</mathjax></p>
<p>For DTFT, we applied <mathjax>$e^{i\omega n}$</mathjax> to our system and observed that our
output was <mathjax>$H(\omega)e^{i\omega n}$</mathjax>, where we defined <mathjax>$H(\omega) = \sum_n
h(n)e^{-i\omega n}$</mathjax>.</p>
<p>We're going to now relax the constraint that <mathjax>$\abs{e^{i\omega n}} = 1$</mathjax>,
i.e. <mathjax>$\omega \in \mathbb{R}$</mathjax>. Now let <mathjax>$x(n) = z^n \ (z \in \mathbb{C})$</mathjax>. So
if I apply this signal to the system, what am I going to get?</p>
<p><mathjax>$$
y(n) = \sum_m h(m)x(n-m) = \sum_m h(m)z^{n-m} = z^n\sum_m h(m)z^{-m}
\\ z^{-n} y(n) = \sum_m h(m)z^{-m} \equiv \hat{H}(z)
$$</mathjax></p>
<p>This is called the transfer function of the system.</p>
<p>The transfer function for us will either be the Z-transform (if
discrete-time) or the Laplace transform (if continuous-time). For now we're
stuck with the Z-transform.</p>
<p>Notice the similarity of the format of these expressions. The main
difference is that now, I'm allowed to veer away from the unit circle. This
is an infinite sum, so just as with the Fourier transform, we must worry
about convergence.</p>
<p>With Z transforms and Laplace transforms, we can't get away from
convergence. Associated with this sort of expression is what we call a
region of convergence (RoC). Basically, region in the complex plane for
which this sum converges.</p>
<p>Going to brush aside a lot of subtleties regarding convergence. <mathjax>$R_h$</mathjax> is
the region in the complex plane (i.e. the set of <mathjax>$z$</mathjax>) such that <mathjax>$\sum_m
\abs{h(m)z^{-m}} &lt; \infty$</mathjax>. If the kernel of this sum is absolutely
summable, we say that we are in the region of convergence. The values of
<mathjax>$z$</mathjax> for which this is true make up the region of convergence.</p>
<p>I can take any discrete-time function and talk about its Z-transform. Just
as with the Fourier-transform, I can talk about the FT of any function.</p>
<p>So what if I'm looking at <mathjax>$x(n) = \delta(n)$</mathjax>? <mathjax>$\hat{X}(z) = 1$</mathjax> -- we only
have one value in our sum. <mathjax>$R_x = \mathbb{C}$</mathjax>. In other words, <mathjax>$0 \le
\abs{z}$</mathjax>.</p>
<p><mathjax>$$
\delta(n-1) \ztrans z^{-1}\:\: (R_h = \set{z : 0 &lt; \abs{z}})
\\ \delta(n+1) \ztrans z\:\: (R_h = \set{z : 0 \le \abs{z} &lt; \infty})
$$</mathjax></p>
<p>Now, two-point moving average: <mathjax>$\frac{1}{2}\parens{\delta(n) + \delta(n-1)}
\ztrans \frac{1 + z^{-1}}{2} = \frac{z + 1}{2z}\ (R_h = \set{z : 0 &lt;
\abs{z}})$</mathjax>. Note that if this had been an anti-causal two-point moving
average, we'd include 0 and exclude infinity.</p>
<p>All of these signals so far are finite-duration signals (FIR filters).</p>
<p><strong>The region of convergence of a function that has finite support is
  the entire complex plane, with the possible exception of zero, infinity,
  or both.</strong></p>
<p>Example of both: three-point moving average, centered at zero.</p>
<p>You may have noticed already that the regions of convergence have had a
particular shape so far: all of them are bounded by circles, and the
transforms are rational in <mathjax>$z$</mathjax>. That's not always the case, but it covers
most of the signals we'll work with. There's a nice accounting between
numerator and denominator that allows you to determine where the region of
convergence is.</p>
<h2>Ex: <mathjax>$h(n) = \alpha^n u(n)$</mathjax>, <mathjax>$\alpha \equiv \frac{3}{2}$</mathjax></h2>
<p><mathjax>$\hat{H}(z) = \sum_{n=0}^\infty \parens{\frac{3}{2z}}^n = \frac{1}{1 -
\alpha z^{-1}} = \frac{z}{z - \alpha}$</mathjax>. RoC: <mathjax>$\set{z : \abs{z} &gt;
\frac{3}{2}}$</mathjax></p>
<p>When we talk about the z-transform, you can't just give an expression; you
also must provide a region of convergence. One without the other is an
incomplete picture.</p>
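<p>A quick numerical illustration of why the RoC matters here (a sanity check,
not a proof): the terms <mathjax>$\abs{h(n)z^{-n}}$</mathjax> depend only on <mathjax>$\abs{z}$</mathjax>, and
their sum settles to the closed form outside radius <mathjax>$\frac{3}{2}$</mathjax> but blows
up inside it.</p>
<pre><code>import numpy as np

alpha = 1.5
n = np.arange(200)

for z_mag in (2.0, 1.0):
    partial = np.sum((alpha / z_mag) ** n)   # sum of |h(n) z^{-n}|
    print(z_mag, partial)

# At |z| = 2 the sum settles near 4, matching z/(z - alpha) = 2/0.5 = 4
# on the positive real axis; at |z| = 1 the partial sums diverge.
</code></pre>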
<h2>Plotting region of convergence:</h2>
<p>In this case, just draw a dotted circle (radius not included), and shade
its exterior.</p>
<p>This is not a proof, but notice that we've got a causal signal, and a
region of convergence that is outside of some circle (and extends to
infinity). Roots of the denominator are called <strong>poles</strong> of the
system. Roots of the numerator are called <strong>zeroes</strong>. Therefore this system
has one zero and one pole.</p>
<p>It turns out that for right-sided functions, the RoC is always to the
outside of the radius defined by the outermost pole.</p>
<p>One thing I want you to pay attention to is the following: the angle of the
pole makes no difference in the region of convergence (ever!). When you
look at <mathjax>$\hat{H}(z) = \sum_n h(n)z^{-n}$</mathjax> and replace <mathjax>$z = Re^{i\omega}$</mathjax>,
you notice that this is <mathjax>$\sum h(n)R^{-n}e^{-i\omega n}$</mathjax>. <mathjax>$e^{-i\omega n}$</mathjax>
plays no role in whether or not the kernel is absolutely summable.</p>
<p>So the region in the complex plane where this sum is convergent is
independent of <mathjax>$\omega$</mathjax>.</p>
<p>I could have made this <mathjax>$\alpha$</mathjax> a complex number on the same radius, and
the region of convergence would have been exactly the same. It is the
magnitude of <mathjax>$\alpha$</mathjax> that is important.</p>
<h2>Ex: <mathjax>$g(n) = -\alpha^n u(-(n+1))$</mathjax></h2>
<p>Notice that this is a left-sided signal.</p>
<p><mathjax>$\hat{G}(z) = -\sum_{-\infty}^{-1} \parens{\frac{\alpha}{z}}^n =
-\sum_1^\infty \parens{\frac{\alpha}{z}}^{-n^\prime} = \frac{z}{z -
\alpha}$</mathjax> (<mathjax>$\set{z : \abs{\alpha} &gt; \abs{z}}$</mathjax>)</p>
<p>This is exactly the same expression in z, but the region of convergence is
different. This is why we are compelled to always consider the region of
convergence.</p>
<p>So two very different expressions in time yield the same expression in
their z-transforms, but the difference is in their radii of convergence.</p>
<p>Just as with right-sided functions, the RoC for left-sided functions is
always inside the radius defined by the innermost pole.</p>
<h1>Monologuing</h1>
<p>With frequency response and Fourier transforms, we all knew what we were
trying to do. We were trying to decompose a signal into its constituent
frequencies. There is no such notion for the Z-transform. The whole idea of
stabilizing an unstable system when placing a system in a feedback
configuration requires the Z-transform.</p>
<p>Consider <mathjax>$\alpha^n u(n)$</mathjax>.</p>
<p>In this case, we have not specified whether <mathjax>$\alpha$</mathjax> is inside or outside
the unit circle. The expression is exactly the same.</p>
<p>Let's take the first case, where <mathjax>$\abs{\alpha} &lt; 1$</mathjax>. The region of
convergence is outside the circle of radius <mathjax>$\alpha$</mathjax>. We could consider the
DTFT then. This is a case where the region of convergence strictly includes
the unit circle. If that is true, then there is a very simple relationship
between the z-transform and the Fourier transform: we can evaluate the
z-transform on the unit circle, i.e. <mathjax>$z = e^{i\omega}$</mathjax>. It is because of
this that some people consider the z-transform to be a generalization of
the Fourier transform. However, there are functions for which we have a
Fourier transform but no z-transform.</p>
<p>You also know that when the RoC contains the unit circle, the time-domain
function must be absolutely summable: the z-transform looks like the
Fourier transform of <mathjax>$R^{-n}h(n)$</mathjax>. The point of <mathjax>$R^{-n}$</mathjax> is to tame the
function.</p>
<p>If <mathjax>$\alpha$</mathjax> is outside the unit circle, no such relationship exists between
the z-transform and Fourier transform, simply because there is no Fourier
transform.</p>
<p>Now let's consider the anti-causal case: <mathjax>$-\alpha^n u(-(n+1))$</mathjax>.</p>
<p>If <mathjax>$\alpha$</mathjax> happens to be within the unit circle, the function has no
Fourier transform. But if <mathjax>$\alpha$</mathjax> is outside the unit circle, then
the function has a Fourier transform.</p>
<p>So that's the relationship between the z-transform and the Fourier
transform: if the region of convergence contains the unit circle, then you
can equate them.</p>
<p>If <mathjax>$h(n) \ztrans \hat{H}(z)$</mathjax>, then <mathjax>$h(n-1) \ztrans \frac{\hat{H}(z)}
{z}$</mathjax>. Similarly, <mathjax>$h(n+1) \ztrans z\hat{H}(z)$</mathjax>.</p>
<p>There is a difference between bounded and unbounded regions of
convergence. </p>
<p>We have a few minutes, so let me talk about the distinctions between causal
signals and right-sided signals (and also anticausal / left-sided).</p>
<p>So let's say we take a right-sided but not causal signal. Now the RoC is
outside of radius <mathjax>$\alpha$</mathjax>, but now you have to exclude <mathjax>$\infty$</mathjax>.</p>
<p>Similarly, for left-sided signals, you'd then exclude 0.</p>
<p><a name='17'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>March 15, 2012.</h2>
<h1>More on the <mathjax>$\mathcal{Z}$</mathjax>-transform</h1>
<p><mathjax>$$
h(n) \ztrans \hat{H}(z) = \infsum{n} h(n) z^{-n}
\\ R_h = \text{region of convergence}
\\ \defequals \set{z \in \cplx \middle| \sum_n \abs{h(n) z^{-n}} &lt; \infty}
$$</mathjax></p>
<p>For signals of finite duration, the region of convergence is the entire
complex plane, minus possibly <mathjax>$r=0$</mathjax> and <mathjax>$r=\infty$</mathjax>.</p>
<p>Example which was causal, <mathjax>$x(n) = \alpha^n u(n)$</mathjax>. <mathjax>$\hat{X}(z) = \frac{1}{1
- \alpha z^{-1}}$</mathjax>, <mathjax>$\abs{\alpha} &lt; \abs{z}$</mathjax> (i.e. outside the circle).</p>
<p>We also had an anticausal example, <mathjax>$q(n) = -\alpha^n u(-n-1)$</mathjax>. <mathjax>$\hat{Q}(z)
= \frac{1}{1 - \alpha z^{-1}}$</mathjax>, <mathjax>$\abs{z} &lt; \abs{\alpha}$</mathjax> (i.e. inside the
circle).</p>
<p>Furthermore, we discussed that a Fourier transform existed if and only if
the unit circle was contained in the region of convergence.</p>
<p>Notice that the RoC of a causal system was outside, all the way to
infinity, while the RoC of an anticausal was inside, all the way to zero.</p>
<p>We further learned that causal signals are a subset of right-sided signals,
and anti-causal signals are a subset of left-sided signals.</p>
<p>So what happens if we shift our signal, i.e. <mathjax>$r(n) = x(n+1)$</mathjax>?</p>
<p><mathjax>$\hat{R}(z) = z\hat{X}(z)$</mathjax>. This is a simple example of what we call
the time-shift property. You can guess what happens when we shift by an
arbitrary integer: <mathjax>$x(n-N) \ztrans z^{-N} \hat{X}(z)$</mathjax>. Note that <mathjax>$r$</mathjax> is no
longer causal, but it is still right-sided.</p>
<p>Notice that now the transform blows up at infinity, so our region of
convergence is now <mathjax>$R_r: \abs{\alpha} &lt; \abs{z} &lt; \infty$</mathjax>. The set of
right-sided signals is a strict superset of the set of causal signals.</p>
<p>This is the difference between the z-transform of right-sided signals and
that of causal signals.</p>
<p>Similarly, with a left-sided signal, we would exclude the origin from the
RoC.</p>
<p>There's also a simple way of showing why causal signals are outside of some
radius of convergence.</p>
<p>Let x be causal. Its z-transform starts from the earliest possible point,
i.e. <mathjax>$n = 0$</mathjax>. <mathjax>$\hat{X}(z) = x(0) + \frac{x(1)}{z} + ... $</mathjax>.</p>
<p>If <mathjax>$\abs{z} = R_1 \in R_x$</mathjax>, I want you to argue why <mathjax>$\abs{z} = R_2$</mathjax>, where
<mathjax>$R_1 &lt; R_2$</mathjax> is also in the RoC. Reasoning: with larger radii, we have
smaller values in our absolute sum.</p>
<p>Right-sided signals: almost identical, except we have a finite number of
elements on the left, and so infinity must be excluded.</p>
<p>Once you find the radius at which the sum converges, everything else
outside also converges.</p>
<p>Similar argument for anti-causal and left-sided signals.</p>
<p>So now let's combine these.</p>
<h2>Example:</h2>
<p><mathjax>$g(n) = \parens{\frac{1}{2}}^n u(n) - \parens{\frac{3}{2}}^n u(-n-1)$</mathjax></p>
<p>As done before, we have <mathjax>$\frac{1}{1 - \alpha z^{-1}}$</mathjax> for both values of
"<mathjax>$\alpha$</mathjax>". Thus our region of convergence is <mathjax>$\frac{1}{2} &lt; \abs{z} &lt;
\frac{3}{2}$</mathjax> (superposition tells us the corresponding z-transform is
<mathjax>$\frac{2z}{2z - 1} + \frac{2z}{2z - 3} = \frac{2z(z-1)}{(z-\frac{1}{2})
(z-\frac{3}{2})}$</mathjax>).</p>
<p>As you can see, two-sided signals have annular regions for their RoCs.</p>
<p>Reason for zeroes: if I were to ask you to find the inverse of the system,
what would you do? Let's say this represents distortion, and you want to
undo the distortion.</p>
<p>Also comes into play when you want to plot the frequency response.</p>
<p>Let's do another</p>
<h2>Example:</h2>
<p><mathjax>$h(n) = \expfrac{3}{2}{n}u(n) - \expfrac{1}{2}{n} u(-n-1)$</mathjax>. Now we've got
<strong>nothing</strong> -- there is no overlap between the two regions
(<mathjax>$\abs{z} &gt; \frac{3}{2}$</mathjax> and <mathjax>$\abs{z} &lt; \frac{1}{2}$</mathjax>), so there is
neither a Z-transform nor a region of convergence. (we would have the same
expression, but this doesn't hold anywhere.)</p>
<h2>Intermission</h2>
<h1>Time Shift Property</h1>
<p><mathjax>$x(n-N) \to z^{-N}\hat{X}(z)$</mathjax>. What does this do to the region of
convergence? It can potentially eliminate infinity (if N positive) or zero
(if N negative), but not both.</p>
<h1>Convolution Property</h1>
<p>If you have <mathjax>$h \equiv f \star g$</mathjax>, then <mathjax>$\hat{H}(z) =
\hat{F}(z)\hat{G}(z)$</mathjax>. A simple way to show this is by cascading filters and
feeding in (instead of a complex exponential) <mathjax>$z^n$</mathjax> -- this is identical to
the eigenfunction property of LTI systems. And what's the RoC? <mathjax>$R_h \supseteq
(R_f \cap R_g)$</mathjax> (it could be bigger if pole-zero cancellation occurs)</p>
<p>Think of these poles as dam (damn?) walls.</p>
<p>If we put this system in cascade with another one such that <mathjax>$q(n) =
\delta(n) - \frac{1}{2}\delta(n-1)$</mathjax>, <mathjax>$\hat{Q}(z) = \frac{z-\frac{1}{2}}
{z}$</mathjax>. Since this is an FIR filter, <mathjax>$R_q = \cplx - \set{0}$</mathjax>.</p>
<p><mathjax>$\hat{A}(z) = \hat{G}(z)\hat{Q}(z) = \frac{2(z-1)}{z - \frac{3}{2}}$</mathjax> . We
get two pole-zero cancellations, in fact, so <mathjax>$R_a = \set{z \middle| \abs{z} &lt;
\frac{3}{2}}$</mathjax>.</p>
<h1>Time-reversal</h1>
<p><mathjax>$x(n) \ztrans \hat{X}(z)$</mathjax>. <mathjax>$x(-n) \ztrans ?$</mathjax> Do a variable substitution,
and then you see that everywhere you had <mathjax>$z$</mathjax>, it's now a <mathjax>$z^{-1}$</mathjax>. Thus
<mathjax>$x(-n) \ztrans \hat{X}(\frac{1}{z})$</mathjax>. When you correlate this with the
Fourier-transform story, we got a frequency reversal in the frequency
domain. Locations of poles and zeroes map to their inverses: a pole or zero
at <mathjax>$z_0$</mathjax> moves to <mathjax>$\frac{1}{z_0}$</mathjax>.</p>
<h1>Multiplication by a complex exponential</h1>
<p>Presume
<mathjax>$$
g(n) \ztrans \hat{G}(z)
\\ h(n) = z_0^n g(n) \ztrans \hat{H}(z) = ?
$$</mathjax></p>
<p><mathjax>$\hat{G}(\frac{z}{z_0})$</mathjax>, after the dust clears. If <mathjax>$p_0$</mathjax> is a pole of
<mathjax>$\hat{G}$</mathjax>, it moves to <mathjax>$z_0p_0$</mathjax>.</p>
<p>Z-transform of the unit step? <mathjax>$\frac{1}{1 - z^{-1}}$</mathjax>, where <mathjax>$1 &lt;
\abs{z}$</mathjax>. This is a perfect example of why the z-transform is not a strict
superset of the Fourier transform -- that only happens when the unit circle
is strictly part of the RoC. Otherwise you can't evaluate the expression
there.</p>
<p>Z-transform of the tone burst (suddenly-applied cosine wave)? We've done
this (albeit in parts).</p>
<p>Note that radius of convergence isn't changing.</p>
<p>Will leave it to you to figure out what the transform of <mathjax>$r^n \cos(\omega_0
n) u(n)$</mathjax> is.</p>
<p><a name='18'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>March 22, 2012.</h2>
<h1>Upsampling property</h1>
<p><mathjax>$$
x(n) \mapsto \uparrow N \mapsto y(n) = \begin{cases}
x(n/N) &amp; \text{if } n \equiv 0 \pmod N
\\ 0 &amp; \text{otherwise}
\end{cases}
$$</mathjax></p>
<p>i.e. we have the same values, but now interspersed with more zeroes. Take
the axis and dilate by three. So see if you can come up with an expression
for the Z-transform of the upsampled signal. We should just have
<mathjax>$\hat{X}(z^N)$</mathjax>.</p>
<p>This should not surprise you. When you upsampled in the time domain, what
happened in frequency? We contracted in the frequency domain. You get that
even from here. If I remind you of an example we did eons ago, <mathjax>$y(n) =
\alpha y(n-1) + x(n)$</mathjax> had a frequency response of <mathjax>$G(\omega) = \frac{1}{1 -
\alpha e^{-i\omega}}$</mathjax>. If I change this to the parameters of <mathjax>$y(n) = \alpha
y(n-N) + x(n)$</mathjax>, <mathjax>$H(\omega) = \frac{1}{1 - \alpha e^{-i\omega N}}$</mathjax>. But if you
compare <mathjax>$g(n) = \alpha^n u(n)$</mathjax> with <mathjax>$h(n) = \alpha^{n/N} u(n)$</mathjax> (supported
on multiples of <mathjax>$N$</mathjax>), we've already seen this.</p>
<p>So when you upsample, you have the <mathjax>$z$</mathjax> raised to the <mathjax>$N^{th}$</mathjax> power.</p>
<p>What's the RoC? Should bring up two more questions: what happens to the
poles and zeroes? We take the <mathjax>$N^{th}$</mathjax> root of everything (i.e. the inverse
function), so everything moves closer to 1. (rationale: <mathjax>$z_p^N = p \implies
z_p = ?$</mathjax> We get <mathjax>$N$</mathjax> times as many poles, in fact, since we have <mathjax>$N$</mathjax> roots
of <mathjax>$p$</mathjax>; likewise for zeros.)</p>
<p>Going back to the question for the region of convergence for y: if the RoC
for x is <mathjax>$R_1 &lt; \abs{z} &lt; R_2$</mathjax>, the RoC for y is <mathjax>$R_1 &lt; \abs{z}^N &lt; R_2$</mathjax>,
so <mathjax>$R_y = R_x^{1/N}$</mathjax>.</p>
<p>So let's do the example given earlier: <mathjax>$y(n) = \alpha y(n-N) + x(n)$</mathjax>.
<mathjax>$\hat{H}_1(z) = \frac{1}{1 - \alpha z^{-1}}$</mathjax>. <mathjax>$\hat{H}_4 = \frac{1}{1 -
\alpha z^{-4}}$</mathjax>. Draw pole-zero diagrams, region of convergence?</p>
<p>(note that we've got degeneracy -- multiplicity. Must denote with a number
in parentheses if you've got multiplicity greater than 2; if multiplicity
is 2, you can use a double-circle or double-x).</p>
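<p>A small numerical sketch of the <mathjax>$N=4$</mathjax> case (the numbers are arbitrary): the
impulse response is <mathjax>$\alpha^{n/4}$</mathjax> at multiples of 4 and zero elsewhere, and
its transform matches <mathjax>$\hat{H}_1(z^4)$</mathjax> at a test point inside the RoC.</p>
<pre><code>import numpy as np

a, N, L = 0.5, 4, 400
h = np.zeros(L)
h[::N] = a ** np.arange(L // N)     # h(n) = a^(n/N) when N divides n, else 0

z = 1.2 * np.exp(1j * 0.7)          # test point well outside radius a**(1/N)
H = np.sum(h * z ** (-np.arange(L)))            # truncated transform sum
print(np.isclose(H, 1 / (1 - a * z ** (-N))))   # True: H1 evaluated at z^N
</code></pre>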
<h1>Differentiation</h1>
<p>Another property that's actually very important is differentiation in Z. So
suppose you've transformed <mathjax>$x \ztrans \hat{X}(z)$</mathjax>. What is <mathjax>$\deriv{\hat{X}}
{z}$</mathjax>? <mathjax>$-z\deriv{\hat{X}}{z} \ztrans n x(n)$</mathjax>.</p>
<h2>Example:</h2>
<p><mathjax>$g(n) = n\alpha^n u(n) \ztrans \hat{G}(z) = ?$</mathjax></p>
<p>If you want to make this look like the original form, just multiply top and
bottom by <mathjax>$z^{-2}$</mathjax>. Very important point: extension to higher derivatives.</p>
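<p>Carrying that example through (a sketch, using the property above together
with the known pair <mathjax>$\alpha^n u(n) \ztrans \frac{1}{1 - \alpha z^{-1}}$</mathjax>):</p>
<p><mathjax>$$
\hat{G}(z) = -z \frac{d}{dz} \parens{\frac{1}{1 - \alpha z^{-1}}}
= -z \cdot \frac{-\alpha z^{-2}}{\parens{1 - \alpha z^{-1}}^2}
= \frac{\alpha z^{-1}}{\parens{1 - \alpha z^{-1}}^2},
\quad \abs{\alpha} &lt; \abs{z}
$$</mathjax></p>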
<p>So what happens as we increase this? What does this mean?</p>
<p>We can decompose any rational z transform into a linear composition of
lower-order terms. Fundamental theorem of algebra. Proposition: suppose
we've got a transfer function. We've got a numerator over a denominator. We
can factor the numerator and denominator. You also learned that whenever
you do this, you can break apart the ratio in terms of a sum.</p>
<p>Note that this starts breaking when you have degeneracy (i.e. systems with
duplicate poles). So from this qualitative argument, it should not surprise
you if I tell you that the only way you can get a rational Z-transform is
if the system is the sum of one-sided exponentials multiplied by
some polynomial.</p>
<p>We'd also have to include the left-sided versions of these.</p>
<p>We can make a general statement: a Z-transform expression <mathjax>$\hat{X}(z)$</mathjax> is
rational iff <mathjax>$x(n)$</mathjax> is a linear combination of terms <mathjax>$n^k \alpha^n u(n)$</mathjax>,
<mathjax>$n^k\beta^n u(-n-1)$</mathjax>. Shifted versions will certainly also work.</p>
<p>Using partial fractions is one of the methods of doing an inverse
transform. We're not going to learn a formal inverse Z-transform; we're
just going to use various heuristics (not unlike solving differential
equations).</p>
<p>In general, the inverse z-transform requires a contour integral (complex
analysis) and thus is not required in this class.</p>
<p>Now, if you believe this, we've got several things: <mathjax>$n^k\alpha^n u(n)$</mathjax>,
LCCDEs, and rational Z-transforms. They form a family.</p>
<h1>LCCDEs and Rational Z Transforms</h1>
<p>Suppose I've got an input, an impulse response, and an output. You know
the output is the convolution of <mathjax>$x$</mathjax> and <mathjax>$h$</mathjax>, so <mathjax>$\hat{Y} = \hat{H}\hat{X}$</mathjax>,
which means the transfer function of an LTI system is the ratio of the
transform of the output to the transform of the input.</p>
<p>Frequency response of the filter gives you the Fourier transform of the
output.</p>
<p>We can write our difference equation as <mathjax>$\sum_{k=0}^N a_k y(n-k) =
\sum_{m=0}^M b_m x(n-m)$</mathjax>. We've seen this.</p>
<p>One way to get the transfer function is to take the z-transforms of both
sides. If they're equal in the time domain, their z-transforms must also be
equal in the frequency domain. Time-shift property. Just considering the
ratio <mathjax>$\hat{H} \equiv \frac{\hat{Y}}{\hat{X}}$</mathjax>, we have our transfer
function.</p>
<p>Familiarize yourself with going from the LCCDE to the transfer function by
inspection.</p>
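<p>For instance (a made-up example to illustrate the pattern): each delay of
<mathjax>$k$</mathjax> samples contributes a factor <mathjax>$z^{-k}$</mathjax>, so you can read the transfer
function straight off the coefficients:</p>
<p><mathjax>$$
y(n) - \frac{5}{6} y(n-1) + \frac{1}{6} y(n-2) = x(n) - x(n-1)
\implies \hat{H}(z) = \frac{1 - z^{-1}}{1 - \frac{5}{6} z^{-1} + \frac{1}{6} z^{-2}}
$$</mathjax></p>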
<p>Now, for the end of the lecture: irrational Z-transform.</p>
<h2>Example</h2>
<p>This is a standard example in practically any signal-processing book you'll
find. <mathjax>$\hat{X}(z) = \log\parens{1 + \alpha z^{-1}}$</mathjax>. Determine <mathjax>$x(n)$</mathjax>.</p>
<p>(differentiation property)</p>
<p>(<mathjax>$\frac{(-1)^{n-1} \alpha^n}{n}u(n-1)$</mathjax>)</p>
<p>You can also use (to check) Taylor expansion centered at 1: <mathjax>$\log(1 +
\lambda) = \sum_{n=1}^\infty \frac{(-1)^{n+1} \lambda^n}{n}$</mathjax>.</p>
<p><a name='19'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 3, 2012.</h2>
<p>We finished talking about the relationship between LCCDEs and
Z-transforms. A little later -- either next lecture or next week -- we're
going to talk about how to solve Z-transforms this way. Convenience:
convolutions turn into multiplications (which are almost always easier to
do than convolutions). Z-transforms turn difference equations into
algebraic expressions.</p>
<p>Properties:</p>
<h1>Initial Value Theorem</h1>
<p>If you have a causal discrete-time signal x (which means <mathjax>$x(n) = 0, n &lt;
0$</mathjax>), and suppose I'm looking at its Z-transform. <mathjax>$X(z) = \sum_{n=0}^\infty
x(n)z^{-n} = x(0) + \frac{x(1)}{z} + \frac{x(2)}{z^2} +
...$</mathjax>. <mathjax>$\lim_{z\to\infty} \hat{X}(z) = x(0)$</mathjax>, ergo the name initial-value
theorem. Simple to prove; sometimes helpful when you have a rational
function or some other signal which you happen to know is causal. You can
also massage this expression to get the other values.</p>
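<p>A quick sketch on a familiar pair: for the causal signal <mathjax>$x(n) = \alpha^n
u(n)$</mathjax>,</p>
<p><mathjax>$$
\lim_{z\to\infty} \frac{1}{1 - \alpha z^{-1}} = 1 = x(0),
\qquad
x(1) = \lim_{z\to\infty} z\parens{\hat{X}(z) - x(0)}
= \lim_{z\to\infty} \frac{\alpha}{1 - \alpha z^{-1}} = \alpha
$$</mathjax></p>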
<p>Obviously, this does not work for the FT: in that case, <mathjax>$z$</mathjax> is always on the unit circle.</p>
<p>Dancing around inverse Z-transforms: <mathjax>$x(n) = \frac{1}{2\pi i} \oint
\hat{X}(z) z^{n-1} dz$</mathjax> (contour integral). We are not going to use this
(i.e. forget about it).</p>
<h1>The ways we invert:</h1>
<ul>
<li>Inspection</li>
</ul>
<p>If I ask you what the inverse transform is of <mathjax>$\frac{1}{1 - \alpha
z^{-1}}$</mathjax>, we know the result, depending on whether <mathjax>$\abs{\alpha}$</mathjax> is
greater or smaller than <mathjax>$\abs{z}$</mathjax>.</p>
<p>Now consider <mathjax>$\hat{X}(z) = \frac{1}{3}z - \frac{1}{2} + 2z^{-1}$</mathjax>. We can
decompose this FIR into its contributing values.</p>
<ul>
<li>Power Series Method</li>
</ul>
<p>We can use the equivalent power series and just transform it term by term.
(we could also get some of these via time-reversal and inspection).</p>
<ul>
<li>Long Division</li>
</ul>
<p>Recall rational transforms correspond to functions that have difference
equations.</p>
<p>Suppose <mathjax>$\hat{G}(z) = \frac{z}{z-1}$</mathjax>. Doing the long division the usual way
(i.e. by <mathjax>$z-1$</mathjax>) will yield the causal signal <mathjax>$u(n)$</mathjax>. Doing long division a
different way (i.e. by <mathjax>$1-z$</mathjax>) will yield the anticausal signal <mathjax>$-u(-n-1)$</mathjax>.</p>
<p>The point is that you have flexibility with respect to how you do the long
division, and each choice will give you a different corresponding one-sided
signal.</p>
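<p>Concretely, for the example above (a sketch; each arrangement of the divisor
generates one of the two one-sided expansions):</p>
<p><mathjax>$$
\frac{z}{z-1} = 1 + z^{-1} + z^{-2} + \cdots \implies g(n) = u(n) \quad (1 &lt; \abs{z})
\\ \frac{z}{-1+z} = -z - z^2 - z^3 - \cdots \implies g(n) = -u(-n-1) \quad (\abs{z} &lt; 1)
$$</mathjax></p>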
<p>Recall: a signal cannot be causal if its transfer function diverges as
<mathjax>$z \to \infty$</mathjax>. However, the number of samples to the left of the origin
corresponds to the order of growth of the transfer function (as a polynomial
in <mathjax>$z$</mathjax>).</p>
<h1>Rational Transform Pole-Zero Book-keeping</h1>
<p>Suppose I have a transfer function <mathjax>$\hat{H}(z) = \frac{A(z)}{B(z)}$</mathjax>, where
<mathjax>$A$</mathjax> is <mathjax>$M^{th}$</mathjax> order, and <mathjax>$B$</mathjax> is <mathjax>$N^{th}$</mathjax> order. If <mathjax>$M &lt; N$</mathjax>, H is strictly
proper. If <mathjax>$M \le N$</mathjax>, H is proper. And if <mathjax>$M &gt; N$</mathjax>, H is improper.</p>
<p>For <mathjax>$M &lt; N$</mathjax>, there are <mathjax>$N$</mathjax> finite poles (counting multiplicities), <mathjax>$M$</mathjax>
finite zeros, and <mathjax>$N-M$</mathjax> zeros at infinity.</p>
<p>For <mathjax>$M = N$</mathjax>, there are <mathjax>$M = N$</mathjax> finite poles, <mathjax>$M = N$</mathjax> finite zeros. No
activity at infinity.</p>
<p>For <mathjax>$M &gt; N$</mathjax>, there are <mathjax>$M$</mathjax> finite zeros, <mathjax>$N$</mathjax> finite poles. and <mathjax>$M-N$</mathjax> poles
at infinity.</p>
<p>In any of these cases, the number of poles equals the number of zeros. The
difference is always what is happening at infinity.</p>
<h1>Back to power series</h1>
<p><mathjax>$\hat{F}(z) = e^{\frac{1}{z}}$</mathjax>. Just use the Taylor series for the
exponential function. Falls into place.</p>
<h1>Partial Fraction Expansion</h1>
<p>Again we are speaking of a rational Z-transform. You've studied partial
fractions in calculus. It's no different here, really. However, this
partial fraction must be proper. If not, just do long division until it's
in that form.</p>
<h2>Case I: Simple Poles</h2>
<p>Best case is when all finite poles are simple (i.e. order 1).</p>
<p>(remember: causal signal means that the RoC must be outside the outermost
pole.)</p>
<p>So what happens if I ignore the causality constraint and instead add the
constraint that the signal is BIBO stable? We get a different equation,
which is now a two-sided signal, and nothing blows up.</p>
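<p>A sketch with made-up numbers, to see both readings of the same expression:</p>
<p><mathjax>$$
\hat{H}(z) = \frac{1}{\parens{1 - \frac{1}{2}z^{-1}}\parens{1 - 2z^{-1}}}
= \frac{-1/3}{1 - \frac{1}{2}z^{-1}} + \frac{4/3}{1 - 2z^{-1}}
$$</mathjax></p>
<p>Reading it as causal (RoC <mathjax>$2 &lt; \abs{z}$</mathjax>) gives <mathjax>$h(n) =
\parens{-\frac{1}{3}\expfrac{1}{2}{n} + \frac{4}{3} 2^n} u(n)$</mathjax>, which blows
up. Reading it as BIBO stable (RoC <mathjax>$\frac{1}{2} &lt; \abs{z} &lt; 2$</mathjax>) gives the
two-sided <mathjax>$h(n) = -\frac{1}{3}\expfrac{1}{2}{n} u(n) - \frac{4}{3} 2^n
u(-n-1)$</mathjax>, and nothing blows up.</p>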
<p><a name='20'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 5, 2012.</h2>
<h1>Partial Fraction Expansions (Cont'd)</h1>
<h2>The case of multiple poles:</h2>
<p>Example: <mathjax>$\hat{G}(z) = \frac{z^2}{\parens{z-\frac{1}{2}}^2(z-2)}$</mathjax>.</p>
<p>Remember, double-pole at <mathjax>$\frac{1}{2}$</mathjax> (for this, we can do the
double-cross), and a pole at 2. How many possible regions of convergence
can we have? 3.</p>
<p>Now let's try to find the impulse response. Before you can do that, you
need to break this rational transfer function into a combination of first
and second-order terms. We're not going to carry this all the way to the
impulse response, but we are going to break it up.</p>
<p>Any time you have a multiple pole, you have to write the expansion in terms
of the first order term plus the second order term (or <mathjax>$Az + B$</mathjax> in the
numerator).</p>
<p>The trick at the end is to differentiate and evaluate at the multiple pole.</p>
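<p>For this example, the expansion works out as follows (a sketch worth
verifying; expanding <mathjax>$\frac{\hat{G}(z)}{z}$</mathjax> keeps each term in a form with a
known inverse):</p>
<p><mathjax>$$
\frac{\hat{G}(z)}{z} = \frac{z}{\parens{z-\frac{1}{2}}^2 (z-2)}
= \frac{-8/9}{z-\frac{1}{2}} + \frac{-1/3}{\parens{z-\frac{1}{2}}^2} + \frac{8/9}{z-2}
$$</mathjax></p>
<p>The <mathjax>$\parens{z-\frac{1}{2}}^2$</mathjax> coefficient comes from evaluating
<mathjax>$\frac{z}{z-2}$</mathjax> at <mathjax>$z = \frac{1}{2}$</mathjax>; the <mathjax>$z-2$</mathjax> coefficient from
evaluating <mathjax>$\frac{z}{\parens{z-\frac{1}{2}}^2}$</mathjax> at <mathjax>$z=2$</mathjax>; and the first
coefficient from differentiating <mathjax>$\frac{z}{z-2}$</mathjax> and evaluating at <mathjax>$z =
\frac{1}{2}$</mathjax> (the trick just mentioned).</p>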
<p>Assume causal system. Determine <mathjax>$g(n)$</mathjax>. Linear combination of the inverse
Z-transforms, since this is a linear operator. Differentiation property to
derive inverse transform of second term.</p>
<h1>Transient, steady-state systems</h1>
<h2>Steady-State and Transient Decomposition of DT-LTI System Responses</h2>
<p>We're going to talk in this course about two ways of decomposing the
responses of DT systems. One of these is decomposing into transient (which
dies out in the long term) and steady-state (long-term dominant)
components.</p>
<p>Starting with a causal system. First-order IIR filter, single pole at
<mathjax>$\alpha$</mathjax> (and a zero at the origin). Since <mathjax>$\alpha$</mathjax> is inside the unit
circle, this system is BIBO stable.</p>
<p>Simple question: here's your system h, it's got some impulse response, I
apply a step function to it.</p>
<p>Once again, remember that partial fraction expansion <em>requires</em> that you
have a strictly proper fraction. You may need to pull out some of your
zeros to make the fraction strictly proper (or do long division).</p>
<p>The end result is the transform of <mathjax>$y(n) = \frac{1}{1-\alpha} u(n) -
\frac{\alpha}{1 - \alpha} \alpha^n u(n)$</mathjax>. The first term is our
steady-state response. It does not go away with time. The second term is
the transient response: it dies out as <mathjax>$n$</mathjax> grows, since <mathjax>$\abs{\alpha} &lt;
1$</mathjax>.</p>
<p>Thus we can decompose any signal into a transient portion and a
steady-state portion.</p>
<p><strong>If a system is BIBO-stable <em>and</em> causal, all of its poles are inside the
unit circle.</strong></p>
<p>Question: a BIBO-stable and causal DT-LTI system has all its poles inside
the unit circle. Why is that?</p>
<p>The RoC must be outside of the outermost pole (result of causality). It is
also BIBO stable, so it must include the unit circle. Thus the outermost
pole must be inside the unit circle, so all other poles must also be inside
the unit circle.</p>
<p>Transient is any part that dies out at infinity, while steady-state is
anything that is either steady or growing. Notions of dominance (separate
but connected closely to transient/steady-state analysis). Dominance is
tied to the long-term behavior (i.e. which term dominates when <mathjax>$n$</mathjax> large?)</p>
<p>Back to the original setup: <mathjax>$x(n) = u(n) \implies y(n) = \frac{\alpha}
{\alpha-1} \alpha^n u(n) + \frac{1}{1-\alpha} u(n)$</mathjax>. What if <mathjax>$x(n) = 1$</mathjax>
(for all time)?</p>
<p>Since this system has settled, we have just the steady-state component.</p>
<p>We want to get to the continuous-time story. What if the input signal is a
constant signal 1? We get as output our steady-state result. What this
means is that the system is unable to distinguish between the constant
signal 1 and the unit step which kicks in at <mathjax>$n=0$</mathjax> if you wait long
enough.</p>
<p>All this talk about complex exponentials was really just a discussion
regarding steady-state. The input contributes a non-stable pole (one on the
unit circle), which is going to dominate the response.</p>
<p>Now that you have this connection, you don't have to solve for the
coefficient (if the system is BIBO-stable). All you have to do is figure
out the transient portion.</p>
<p>Deferring ZIZO until next week.</p>
<h2>Example</h2>
<p>Consider the causal system described by <mathjax>$y(n) = \frac{5}{6}y(n-1) -
\frac{1}{6} y(n-2) + x(n)$</mathjax>. Determine the unit step response of the system.</p>
<p>Transform everything. Remember that <mathjax>$\hat{H}(z) = \frac{\hat{Y}}{\hat{X}}$</mathjax>.</p>
<p>(linear algebra approach:</p>
<p>Solutions look like <mathjax>$\lambda^n$</mathjax>.</p>
<p>Thus <mathjax>$\lambda^n = \frac{5}{6}\lambda^{n-1} - \frac{1}{6}\lambda^{n-2}$</mathjax>.</p>
<p><mathjax>$$
\lambda^2 = \frac{5}{6}\lambda - \frac{1}{6}
\\ \lambda^2 - \frac{5}{6}\lambda + \frac{1}{6} = 0
\\ \lambda = \frac{1}{2}, \frac{1}{3}
$$</mathjax></p>
<p>These are our eigenvalues. Some linear combination of these two
exponentials will match our initial conditions (i.e. <mathjax>$y(0) = 1$</mathjax>, <mathjax>$y(-1) =
y(-2) = 0$</mathjax>). That is, the homogeneous part is <mathjax>$y = a_0 \parens{\frac{1}{2}}^n + a_1
\parens{\frac{1}{3}}^n$</mathjax> )</p>
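<p>A quick numerical check of this example (a sketch; the closed form below
comes from the partial-fraction route, with steady-state value <mathjax>$\hat{H}(1) =
3$</mathjax>):</p>
<pre><code>import numpy as np
from scipy.signal import lfilter

# Step response of y(n) = (5/6) y(n-1) - (1/6) y(n-2) + x(n).
n = np.arange(20)
b, a = [1.0], [1.0, -5/6, 1/6]                   # transfer function coeffs
y = lfilter(b, a, np.ones_like(n, dtype=float))  # unit-step input

y_closed = 3 - 3 * (1/2) ** n + (1/3) ** n       # from partial fractions
print(np.allclose(y, y_closed))                  # True
</code></pre>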
<p><a name='21'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 10, 2012.</h2>
<h1>Zero-State and Zero-Input Responses</h1>
<p>Alternate decomposition of systems that is based on whether the system is
initially at rest or not. If initially at rest, what is response due to
initial conditions, and what is response to impulse?</p>
<p>Example believed to be in the textbook: system described by <mathjax>$y(n) -
0.9 y(n-1) = x(n)$</mathjax>. Causal.</p>
<p>If the system is not at rest, technically it is not LTI. Not nonlinear,
though. It's what we call an incrementally-linear system. What
distinguishes these from linear systems is that these have non-zero
intercepts.</p>
<p>Turning off the input, all you've got is some nonzero initial
condition. Figure out the response as time goes forward. This is called the
zero-input response of the system (turning off the input).</p>
<p>What if <mathjax>$y(-1) = 0$</mathjax>, and <mathjax>$x(n) = u(n)$</mathjax>? We've got a geometric series! Or do
it the z-transform style; do the transform as we've been doing for the past
few weeks, and causality will tell you the rest. Output is <mathjax>$HX$</mathjax>. We know
how to do partial fractions and stuff. Talk about damn walls.</p>
<p>This response is called the zero-state response (<mathjax>$y_{ZSR}$</mathjax>), meaning the
initial state (set of initial conditions) is zero.</p>
<p>So you have the zero-input response plus the zero-state response as yet
another decomposition of your system.</p>
<p>Now we're learning about the contributions of the nonzero initial state. We
did this by splitting the response. There is actually a way to figure out
the total response using transforms: there is a transform method. Main
point of today.</p>
<h1>Transform Method to get the Total Response</h1>
<p>So, the method begins by looking at the difference equation, e.g. <mathjax>$y(n) =
0.9 y(n-1) + x(n)$</mathjax>. I'm going to use the Lee &amp; Varaiya method, and then
we'll look at another, very related method (the unilateral
Z-transform).</p>
<p>For starters, multiply each side by <mathjax>$u(n)$</mathjax>. So we have <mathjax>$y(n)u(n) =
0.9y(n-1)u(n) + x(n)u(n)$</mathjax>.</p>
<p>Then take the Z-transform of both sides (using the definition of the
Z-transform). Note that this Z-transform looks very much like the
z-transform you've seen up until now, except that it starts at zero and
goes up to infinity. This is called the unilateral z-transform of <mathjax>$y$</mathjax>.</p>
<p>For any causal signal, the unilateral transform is the same as the
bilateral z-transform.</p>
<p>With the unilateral transform, you can do it all in one go.</p>
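<p>A sketch of the computation for this example, writing <mathjax>$\hat{\mathcal{Y}}(z)$</mathjax>
for the unilateral transform: the shifted term picks up the initial condition,
since <mathjax>$\sum_{n=0}^\infty y(n-1)z^{-n} = y(-1) +
z^{-1}\hat{\mathcal{Y}}(z)$</mathjax>. Then</p>
<p><mathjax>$$
\hat{\mathcal{Y}}(z) = 0.9\parens{y(-1) + z^{-1}\hat{\mathcal{Y}}(z)} + \hat{\mathcal{X}}(z)
\implies \hat{\mathcal{Y}}(z)
= \underbrace{\frac{0.9\, y(-1)}{1 - 0.9 z^{-1}}}_{\text{zero-input}}
+ \underbrace{\frac{\hat{\mathcal{X}}(z)}{1 - 0.9 z^{-1}}}_{\text{zero-state}}
$$</mathjax></p>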
<h1>Laplace Transform</h1>
<p><mathjax>$\hat{X}(s) \defequals \int_{-\infty}^{\infty} x(t) e^{-st} dt$</mathjax>, <mathjax>$s =
\sigma + i\omega$</mathjax>. Just as with the Z-transform, we do not use an inverse
transform formula. We're going to use similar methods.</p>
<p>Why do we even bother with this? For reasons similar to why we justified
the Z-transform, we need a comparable transform for continuous-time
systems.</p>
<p>Notice how the integral is actually the Fourier transform of the perturbed
("tamed") function <mathjax>$x(t)e^{-\sigma t}$</mathjax>. The region of convergence is
determined by <mathjax>$\sigma = \mathrm{Re}(s)$</mathjax> -- <mathjax>$\omega$</mathjax> plays no role in
convergence.</p>
<p>In continuous time, there is a very nice correspondence between the
sidedness of signals and the RoC. Easier to remember.</p>
<p>Once again, causality means that the RoC extends all the way to (and
includes!) infinity.</p>
<p>Notice that in this case, the RoC contains the <mathjax>$i\omega$</mathjax> axis. (Conjecture,
since not yet proven in this class) As in the z-transform, this is because
<mathjax>$x(t)$</mathjax> is a stable signal. That is, it is absolutely integrable. The proof
is fairly trivial: <mathjax>$\int\abs{x(t)e^{-i\omega t}} dt = \int \abs{x(t)} dt &lt;
\infty$</mathjax>.</p>
<p>Transform pairs!</p>
<p><mathjax>$$
\renewcommand{\Re}{\mathrm{Re}}
e^{-at} u(t) \ltrans \frac{1}{s+a} (-\Re(a) &lt; \Re(s))
\\ -e^{-at} u(-t) \ltrans \frac{1}{s+a} (\Re(s) &lt; -\Re(a))
$$</mathjax></p>
<p><a name='23'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 17, 2012.</h2>
<h2>Differentiation property:</h2>
<p><mathjax>$$
x(t) \ltrans \hat{X}(s)
\\ \dot{x}(t) \ltrans s\hat{X}(s)
$$</mathjax></p>
<p>LCCDEs</p>
<p><mathjax>$y^{(N)}(t) + ... + a_1 y^{(1)}(t) + a_0 y(t) = b_M x^{(M)}(t) + ... + b_0 x(t)$</mathjax>. What
I want you to do is apply the differentiation property to find the transfer
function of this. (x-coefficients-polynomial divided by
y-coefficients-polynomial)</p>
<p><mathjax>$\frac{\sum_m b_m s^m}{\sum_n a_n s^n}$</mathjax>.</p>
<p>Going back to a series-RC circuit powered by a voltage source, we have
<mathjax>$z(t) = \frac{x(t) - y(t)}{R}$</mathjax>, <mathjax>$C\dot{y}(t) = z(t)$</mathjax>. So <mathjax>$RC\dot{y}(t) +
y(t) = x(t)$</mathjax>. The transfer function therefore is <mathjax>$\frac{1}{RCs +
1}$</mathjax>. The other way to see this is to plug in <mathjax>$e^{st}$</mathjax> and use the
eigenfunction property.</p>
<p>Inverting this signal yields <mathjax>$\frac{1}{RC}e^{-t/(RC)}u(t)$</mathjax>. That is the
impulse response of the system.</p>
<p>So that was differentiation in time. There is differentiation in the
s-domain. </p>
<h2>Differentiation in s</h2>
<p><mathjax>$$
x(t) \ltrans \hat{X}(s)
\\ -t x \ltrans \deriv{\hat{X}}{s}
$$</mathjax></p>
<p><mathjax>$x(t) = \frac{1}{2\pi i} \oint \hat{X}(s) e^{st} ds$</mathjax></p>
<p><mathjax>$te^{-at}u(t) \ltrans \frac{1}{(s+a)^2}$</mathjax>.</p>
<p>Conjecture: terms of the form <mathjax>$t^n e^{-at} u(t)$</mathjax> and their anticausal
counterparts are the only kinds that can be combined (subject to matching
RoCs) to produce rational transforms. This means that the impulse response
of any rational transfer function must be the sum of these terms.</p>
<p>In differential equations, you studied simple and multiple roots (which
correspond to simple/multiple poles in our vernacular).</p>
<p>Example: <mathjax>$\hat{H}(s) = s + 1 + \frac{1}{s+3}$</mathjax> -- the <mathjax>$s$</mathjax> term corresponds
to a unit doublet.</p>
<p>On one side, you have the delta, step, ramp, quadratic; on the other side,
you've got the doublet, the second derivative of the delta, etc. The delta is
<mathjax>$u_0(t)$</mathjax>, the doublet is <mathjax>$u_1(t)$</mathjax>, the step is <mathjax>$u_{-1}(t)$</mathjax>, etc.</p>
<p>If the transfer function is not strictly proper, you have a polynomial part
in <mathjax>$s$</mathjax>, which corresponds to these singularity functions.</p>
<h2>Method 1: non-transform method</h2>
<p>If delta goes into the system, what comes out? <mathjax>$h$</mathjax>. If the unit step goes
in, we get <mathjax>$u*h$</mathjax>.</p>
<h2>Method 2: transform method</h2>
<p>partial fractions and stuff. Consistent with result of method 1.</p>
<h2>Integration in time/transform domain</h2>
<p>Just relabel variables, and it becomes self-evident.</p>
<p><mathjax>$$
x(t) \ltrans \hat{X}(s)
\\ \int_{-\infty}^t x(t^\prime)\, dt^\prime \ltrans \frac{1}{s}\hat{X}(s)
$$</mathjax></p>
<h1>Steady-State &amp; Transient Response of LTI Systems</h1>
<p>Exactly the same as expected. Note that the second one dies out because of
the pole of the system.</p>
<p>With BIBO-stable system, input pole to right of rightmost pole of system
dominates output.</p>
<p><a name='24'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 19, 2012.</h2>
<p>Transient/Steady-State Wrap up</p>
<p>Let's talk a bit about a causal BIBO-stable system. Which is usually the
case with practical applications. Has a rational transfer function, so
usually ratio of two polynomials in <mathjax>$s$</mathjax>. Not going to be too concerned
about zeros of system, so we'll write the factored denominator in terms of
the poles of the system.</p>
<p>Assume all poles are simple. All poles are in left half-plane. Also, assume
transfer function is strictly proper.</p>
<p>To this system, I apply a one-sided (causal) complex exponential
signal. What is the output?</p>
<p>transforms and multiplications.</p>
<p>Eigenfunction property (plus other stuff?!).</p>
<p>True for any BIBO-stable function: you can evaluate the Laplace transform
on the <mathjax>$i\omega$</mathjax> axis and get the Fourier transform for that particular
<mathjax>$\omega$</mathjax>.</p>
<p>What happens to all the terms involving the Rs? These, collectively,
compose your transient response. The last term (result from input)? Doesn't
die out. Steady-state.</p>
<p>What this says is that the system cannot distinguish between <mathjax>$e^{i\omega_0
t}$</mathjax> and its truncated cousin <mathjax>$e^{i\omega_0 t}u(t)$</mathjax> if we wait long enough:
i.e. transients become insignificant. Only portion of response that remains
is the one corresponding to <mathjax>$e^{i\omega_0 t}$</mathjax>. Notice that the pole of the
input is to the right of the rightmost pole of the system.</p>
<p>Important: all poles of the system are in the left half-plane, and the pole
of the input is on the <mathjax>$i\omega$</mathjax> axis, which means it's to the right of the
rightmost pole (and of course the system is causal). Therefore the pole of
the input will dominate the response.</p>
<p>Eigenfunction property applies to steady-state solution. Can also extend to
sinusoids.</p>
<p>Likely a good time to move to the unilateral Laplace transform and how we
can use it to solve ordinary LDEs.</p>
<h1>Unilateral Laplace Transform &amp; Linear, Constant-Coefficient Differential Equations with Non-Zero Initial Conditions</h1>
<p>Whenever you have nonzero initial conditions, you need to truncate. Trick
used: multiply by unit step, then take Laplace transform. Effectively the
same as taking unilateral Laplace transform.</p>
<p><mathjax>$\hat{\mathcal{X}}(s) = \int_{0^-}^\infty x(t) e^{-st} dt$</mathjax>. A lot of
textbooks only deal with the unilateral transform because they're
interested in causal systems. As are we, in this context.</p>
<p>If I am looking at the unilateral Laplace transform of <mathjax>$\dot{x}$</mathjax>, one
additional term appears. If we integrate by parts, we can see what this
term must be. In the bilateral case, we evaluated <mathjax>$uv$</mathjax> at both
infinities. The second term (i.e. <mathjax>$\int v\,du$</mathjax>) required that this product
evaluate to zero at the infinities -- otherwise the integral would not
converge.</p>
<p>In the unilateral case, we therefore have an additional term: <mathjax>$-x(0^-)$</mathjax>.</p>
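<p>Explicitly, the differentiation property of the unilateral transform is</p>
<p><mathjax>$$
\dot{x}(t) \ltrans s \hat{\mathcal{X}}(s) - x(0^-)
$$</mathjax></p>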
<p>Zero-state, zero-input method. Remember: it is <strong>different</strong> from the transient/steady-state
decomposition. Best not to think of the two at the same time.</p>
<p>Method 2: use unilateral Laplace transform.</p>
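<p>A minimal sketch, using a hypothetical first-order equation <mathjax>$\dot{y}(t) + a y(t) = x(t)$</mathjax> with initial condition <mathjax>$y(0^-) = y_0$</mathjax>: taking unilateral transforms and solving,</p>
<p><mathjax>$$
s \hat{\mathcal{Y}}(s) - y_0 + a \hat{\mathcal{Y}}(s) = \hat{\mathcal{X}}(s)
\\ \hat{\mathcal{Y}}(s) = \underbrace{\frac{y_0}{s + a}}_{\text{zero-input}} + \underbrace{\frac{\hat{\mathcal{X}}(s)}{s + a}}_{\text{zero-state}}
$$</mathjax></p>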
<p>Note that if a signal is causal, its unilateral Laplace transform is the
same as its bilateral Laplace transform.</p>
<p><a name='25'></a></p>
<h1>EE 120: Signals and Systems</h1>
<h2>April 24, 2012.</h2>
<h1>DC Motor Control</h1>
<p>An application of what we've been studying, and a way to review and test fluency
with the material. We've got a DC motor whose model is some second-order linear
differential equation: applied torque and damping, plus the moment of
inertia of the rotor and whatever's hooked up to it.</p>
<p>Transfer function?</p>
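<p>A sketch, with hypothetical symbols (<mathjax>$M$</mathjax> the moment of inertia, <mathjax>$D$</mathjax> the damping coefficient, <mathjax>$x$</mathjax> the applied torque, <mathjax>$\theta$</mathjax> the shaft angle):</p>
<p><mathjax>$$
M \ddot{\theta}(t) + D \dot{\theta}(t) = x(t) \implies \hat{H}(s) = \frac{1}{s(Ms + D)}
$$</mathjax></p>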
<p>We use feedback to stabilize the system. Place it in a proportional feedback
configuration: the only other thing in the feedback system is <mathjax>$K$</mathjax>, which is a
scalar gain. The plant itself contains an integrator -- the <mathjax>$\frac{1}{s}$</mathjax> factor. What's the closed-loop transfer function?
Its denominator is the characteristic polynomial familiar from differential equations.</p>
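<p>Continuing the sketch above, the closed-loop transfer function and its poles are</p>
<p><mathjax>$$
\frac{K \hat{H}(s)}{1 + K \hat{H}(s)} = \frac{K}{Ms^2 + Ds + K}, \qquad s = \frac{-D \pm \sqrt{D^2 - 4MK}}{2M}
$$</mathjax></p>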
<p><mathjax>$K$</mathjax> must be positive for BIBO stability.</p>
<p>If the roots are complex, stability is guaranteed: the real part of each pole is
<mathjax>$-\frac{D}{2M} &lt; 0$</mathjax>.</p>
<p>You get oscillations when you have complex poles: underdamping, critical
damping, overdamping. Plus a discussion of robustness.</p>
<h1>Bode plots!</h1>
<p>We've got two building blocks:</p>
<p><mathjax>$$
\hat{F}_I(s) = 1 + \frac{s}{\omega_0}
\\ F_I(\omega) = \hat{F}_I(i\omega)
$$</mathjax></p>
<p>Asymptotic plot of <mathjax>$20\log\abs{F_I}$</mathjax> and <mathjax>$\angle F_I$</mathjax>. The horizontal axis
is a logarithmic axis.</p>
<p>What happens when <mathjax>$\omega$</mathjax> is very small? The magnitude is asymptotically zero
(in dB). At higher frequencies, <mathjax>$\omega$</mathjax> is large, so the imaginary part dominates, and
the magnitude gains 20 dB every time <mathjax>$\omega$</mathjax> increases by a factor of
10 -- the slope is 20 dB/decade. <mathjax>$\omega_0$</mathjax> is your corner frequency (named for
obvious reasons); it is also the 3 dB point. This is one of the foundational blocks
for frequency responses on logarithmic scales. The phase rises at 45 degrees per decade,
from a decade below the corner to a decade above it (use the factor-of-10 rule to decide
which term dominates).</p>
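<p>To summarize the asymptotes (the exact value at the corner is <mathjax>$20\log\sqrt{2} \approx 3$</mathjax> dB):</p>
<p><mathjax>$$
20\log\abs{F_I(\omega)} \approx \begin{cases} 0 &amp; \omega \ll \omega_0 \\ 20 \log\frac{\omega}{\omega_0} &amp; \omega \gg \omega_0 \end{cases}
\qquad
\angle F_I(\omega) \approx \begin{cases} 0 &amp; \omega &lt; \frac{\omega_0}{10} \\ 45^\circ \left(1 + \log_{10}\frac{\omega}{\omega_0}\right) &amp; \frac{\omega_0}{10} \le \omega \le 10\omega_0 \\ 90^\circ &amp; \omega &gt; 10\omega_0 \end{cases}
$$</mathjax></p>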
<p>This building block is called a regular zero. The terminology is not widely used; Babak
learned it from a circuits professor at Caltech (R.D. Middlebrook).</p>
<p>The second building block is <mathjax>$\frac{s}{\omega_0}$</mathjax>. Simple zero.</p>
<p>Claim: all expressions with real roots can be written as combinations of
these two.</p>
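<p>For instance (a made-up transfer function with real roots, <mathjax>$a, b &gt; 0$</mathjax>):</p>
<p><mathjax>$$
\frac{s + a}{s + b} = \frac{a}{b} \cdot \frac{1 + s/a}{1 + s/b}
$$</mathjax></p>
<p>a constant gain times a regular zero over a regular pole.</p>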
<p>Inverted zero: <mathjax>$1 + \frac{\omega_0}{s}$</mathjax></p>