# publicsteveWang/Notes


EE 120: Signals and Systems

January 17, 2012.

• BABAK
• email: AYAZIFAR@EECS.BERKELEY.EDU
• Best way to contact.
• If time-sensitive, make subject line explosive. Literally.
• Does his best to respond. So that's that.
• Office: 517 Cory

• Three midterms.
• MT1: 20%. Date: Feb 14. Week of drop date (almost certainly)
• MT2: 25%. Date: TBA. Percentage tentative (alt. 20%)
• MT3: 25%. Date: Last lecture.
• Homework: 15%, drop lowest two scores.
• 4-6 homeworks.
• Work in groups of 3-5 people.
• Each member of group turns in separate document.
• Must constitute primarily own work.
• Write own name at top, names of collaborators beneath.
• Tentatively due Tuesdays so Babak can have OH on Monday evenings.
• Pop quizzes (often obvious when): 15%. Drop lowest one.
• Possible research-paper project worth 10%; if so, MT2 and MT3 weigh 20% each.
• Just get feet wet with journals.
• Same for everybody.

Since the course is not homework-heavy, there are weeks we don't have homework. So get together with your group and plow through old exams.

The other thing you can do is look at a couple of books. Most of them are with this title: Signals & Systems. One of them is by Oppenheim, Willsky, Nawab. This one's an expensive book, actually. Don't recommend you go out and buy it unless you can find one printed overseas. Excellent problems -- mostly MIT exams. Taught out of this book previously. (Both are second editions.)

There's another one with the same title by Hwei P. Hsu (Schaum's Outline) -- great for self-study because every problem is solved.

Basic Properties of Systems

• Linearity
  • Notation: $x\to[F]\to y$ means the system $F$ maps input $x$ to output $y$.
  • If $x_1\to[F]\to y_1$ and $x_2\to[F]\to y_2$, then $F$ is linear iff:
    • $\alpha x_1\to[F]\to\alpha y_1$ (scaling / homogeneity property)
    • $x_1+x_2\to[F]\to y_1+y_2$ (additivity property)
  • Equivalently (superposition): $\forall x_1,x_2\in X$ and all scalars $\alpha_1,\alpha_2$: $\alpha_1 x_1+\alpha_2 x_2\to[F]\to\alpha_1 y_1+\alpha_2 y_2$.
  • (Here $x\in X$, the input signal space; $y\in Y$, the output signal space; $y=F(x)$; $y(t)$ is the output at time $t$, $y(n)$ the output at sample $n$.)
  • Example: a resistor and the like, via Ohm's law, $\vec{J}=\sigma\vec{E}$ and whatnot.
  • Changing the system into a time-variant system doesn't make it lose linearity.
  • ZIZO: a linear system maps the zero input to the zero output.
    • The converse is not true.
    • Contrapositive (logical equivalent): $\lnot\text{ZIZO}\implies\lnot\text{L}$.
  • Modulation.
  • A parallel interconnection of linear systems produces a multiple-input linear system. (Strictly speaking, calling it "parallel" is a lie.)
  • Two-point moving average: satisfies the scaling and additivity properties.
• Time invariance
  • $\hat{x}(t)=x(t-T)\implies\hat{y}(t)=y(t-T)\ \forall x\in X,\ \forall T\in\mathbb{R}$.
  • Hidden assumption: $X$ is closed under shifts, i.e. $x\in X\implies\hat{x}\in X$.
  • Time variance occurs, e.g., when something that isn't an input is time-dependent.
• Causality
  • Basically, no dependence on future values.
  • If two inputs are identical up to some point, then their outputs must also be identical up to that same point.

EE 120: Signals and Systems

January 19, 2012.

More on causality (proof by contradiction of the previous example): suppose the input is zero up to a certain point, but the output is nonzero before that point -- the output precedes the input. In itself, this does not establish non-causality, but if you also know the system is either linear or time-invariant, then you know it can't be causal.

L (linear) and C (causal) implies not P (output precedes input): if the system is linear AND causal, the output cannot precede the input. Contrapositive: P implies not (L and C) = not L or not C.

For time invariance, we don't have to insist that the input is zero up to a point; it just has to be (any) constant.
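The scaling, additivity, and shift properties above can be checked numerically for the two-point moving average -- a quick sanity-check sketch (the helper name and test signals are made up for illustration):

```python
import random

def moving_avg(x):
    """Two-point moving average: y(n) = (x(n) + x(n-1)) / 2, with x(-1) = 0."""
    return [(x[n] + (x[n - 1] if n > 0 else 0)) / 2 for n in range(len(x))]

random.seed(0)
x1 = [random.uniform(-1, 1) for _ in range(8)]
x2 = [random.uniform(-1, 1) for _ in range(8)]
a, b = 2.5, -1.5

# Superposition: F(a*x1 + b*x2) == a*F(x1) + b*F(x2).
lhs = moving_avg([a * u + b * v for u, v in zip(x1, x2)])
rhs = [a * u + b * v for u, v in zip(moving_avg(x1), moving_avg(x2))]
assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))

# Time invariance: delaying the input by one sample delays the output by one.
shifted = moving_avg([0.0] + x1)
assert all(abs(p - q) < 1e-12 for p, q in zip(shifted[1:], moving_avg(x1)))
```

Of course, passing on random inputs is evidence rather than proof -- the algebraic argument above is the real one.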
The output also doesn't have to be zero; it just has to be constant (not necessarily the same constant, obviously). (TI and C) implies not P. Contrapositive: P implies not TI or not C.

Anti-causality is defined as the exact opposite of causality: everything we've said about causality holds with the inequalities reversed in sign.

Bounded-input bounded-output (BIBO) stability:
$$\exists B_x<\infty \text{ s.t. } \abs{x(n)}\le B_x\ \forall n \implies \exists B_y<\infty \text{ s.t. } \abs{y(n)}\le B_y\ \forall n$$

Every FIR (finite-duration impulse response) filter is BIBO stable.

LTI (linear time-invariant) systems

$x(n)$ can in general be decomposed into a linear combination of impulses (Kr\"onecker deltas): $x(n)=\sum_{m=-\infty}^\infty x(m)\delta(n-m)$ (sifting property). If you know the response of an LTI system to the unit impulse, you know its response to all discrete-time inputs -- decompose the input into unit impulses. Convolution sum:
$$y(n)=\sum_{m=-\infty}^\infty x(m)f(n-m)=(x*f)(n)=\sum_{k=-\infty}^\infty f(k)x(n-k)=(f*x)(n)\qquad\therefore\ x*f=f*x$$
You can reverse the roles of the impulse response and the input signal.

Let $x(n)=e^{i\omega n}$. Then $y(n)=\left(\sum_k f(k)e^{-i\omega k}\right)e^{i\omega n}=F(\omega)e^{i\omega n}$: $F(\omega)$ is the frequency response of this LTI system. The sum converges nicely iff the system is BIBO stable, in which case the frequency response is a very smooth function of $\omega$.

BIBO stability for LTI systems

An LTI system $F$ is BIBO stable iff its impulse response converges (absolutely summable in the discrete case, absolutely integrable in the continuous case).

If the impulse response is absolutely summable, then $F$ is BIBO stable (every bounded input produces a bounded output):
$$y(n)=\sum_k f(k)x(n-k)\ \text{(I/O relation for an LTI system)}$$
$$\abs{y(n)}=\abs{\sum_k f(k)x(n-k)}\le\sum_k\abs{f(k)}\abs{x(n-k)}\le\left(\sum_k\abs{f(k)}\right)B_x\qquad(\abs{x(n)}\le B_x\ \forall n)$$

Conversely, $F$ BIBO stable $\implies\sum_n\abs{f(n)}<\infty$. Contrapositive: if the impulse response is not absolutely summable, we can find at least one bounded input that produces an unbounded output. (Fudge the input to cancel out the signs, so that the output at $n=0$ is the absolute sum over all integers: convolve the signal with its time-reversed conjugate -- an auto-correlation, the convolution of a signal with itself. Each term becomes $f(n)f^*(n)/\abs{f(n)}=\abs{f(n)}^2/\abs{f(n)}=\abs{f(n)}$.)

IIR filters

$y(n)=\alpha y(n-1)+x(n)$, causal, $y(-1)=0$. (The impulse response is a geometric sequence, so BIBO stability depends on the magnitude of $\alpha$ being less than 1.)

EE 120: Signals and Systems

January 24, 2012.

LTI Systems and Frequency Response

Frequency response: $H(\omega)$. We will later meet systems for which the sum doesn't converge, so we'll need a new transform: the Laplace (in discrete time, Z) transform.

Two-point moving average: $h(n)=(\delta(n)+\delta(n-1))/2$.
$$H(\omega)=\frac{1+e^{-i\omega}}{2}=e^{-i\omega/2}\,\frac{e^{i\omega/2}+e^{-i\omega/2}}{2}=e^{-i\omega/2}\cos(\omega/2)$$

The frequency response of a discrete-time system is periodic: adding a multiple of $2\pi$ to $\omega$ does not change the result. Because of this $2\pi$-periodicity of discrete-time LTI frequency responses, we don't have to plot our functions everywhere; we only need to worry about a single period. (The magnitude looks like $\abs{\cos(\frac{\omega}{2})}$.)

Eigenfunction property of discrete LTI systems: $e^{i\omega n}\to H(\omega)e^{i\omega n}$.

$H^*(\omega)=\sum_k h^*(k)e^{i\omega k}$. Conjugate symmetry: if $h(n)\in\mathbb{R}\ \forall n$, then $H^*(\omega)=H(-\omega)$ (a generalization of "even" functions), and the input $\cos(\omega n)$ yields $y(n)=\Re\{H(\omega)e^{i\omega n}\}=\abs{H(\omega)}\cos(\omega n+\angle H(\omega))$.

For the first-order system above, $x\to H\to y$ with $h(n)=\alpha^n u(n)$. Ways to determine $H(\omega)$:

Method 1: $H(\omega)=\sum_n h(n)e^{-i\omega n}=\sum_{n\ge0}(\alpha e^{-i\omega})^n$. Use the usual formula for a geometric series, since this converges.

Method 2: Use the eigenfunction property of the complex exponential (assuming the frequency response is defined). Let $x(n)\equiv e^{i\omega n}$ (a pure tone); then $y(n)=H(\omega)e^{i\omega n}$.
$$y(n-1)=H(\omega)e^{i\omega n}e^{-i\omega}\implies H(\omega)=\frac{1}{1-\alpha e^{-i\omega}}$$

How can we plot the magnitude response? Eliminate the negative complex exponentials by writing $H(\omega)=e^{i\omega}/(e^{i\omega}-\alpha)$. $\alpha$ lies inside the unit circle, and $e^{i\omega}$ is a point on the unit circle, so we can represent it with a vector. Consider this graphically.

EE 120: Signals and Systems

January 26, 2012.

The numerator has magnitude 1; the denominator is the length of the vector $e^{i\omega}-\alpha$. Task: plot this ratio as $\omega$ varies. At $\omega=0$ the magnitude is $1/(1-\alpha)$; at $\omega=\pi$ it is $1/(1+\alpha)$. The magnitude response should be monotonically increasing from $-\pi$ to $0$ and decreasing from $0$ to $\pi$. [Talk about inflection points, concavity, etc. Second derivatives.] For $0<\alpha<1$ this is a low-pass filter.

If we understand the geometry for this particular case, we can answer design-oriented questions. One question is: what if I want to sharpen the peak of this low-pass filter?

• Answer: move $\alpha$ toward the unit circle (but still keep it inside). Can't get too close -- the real world has noise. Also, we're in trouble if $\alpha$ jumps onto or outside the unit circle. An algebraic argument is available as well.

Can I make a high-pass filter out of this?

• Yes: take $-1<\alpha<0$. By taking $\alpha$ to be negative, we have the equivalent of a phase shift by $\pi$.

A sharper high-pass filter?

• Bring $\alpha$ closer to $-1$.

What if we want the filter to peak at an arbitrary frequency, say $\omega_0=\pi/4$?

• Place $\alpha$ along the ray $\theta=\omega_0$; the peak value is $1/(1-\abs{\alpha})$.
• Observation: this filter is a complex filter (i.e. one whose impulse response $h(n)$ is complex-valued): $h(n)=\alpha^n u(n)=R^n e^{i\omega_0 n}u(n)$ for $\alpha=Re^{i\omega_0}$. This has its uses but is not always desirable.
• In general, $g(n)=h(n)e^{i\omega_0 n}\implies G(\omega)=H(\omega-\omega_0)$.

Example LCCDE (linear constant-coefficient difference equation): $y(n)=\alpha y(n-N)+x(n)$, where $N$ is a positive integer, not necessarily 1, with initial rest $y(-N)=\cdots=y(-1)=0$ and $\abs{\alpha}<1$.
$$h_N(n)=\alpha h_N(n-N)+\delta(n)=\begin{cases}\alpha^{n/N}&N\mid n,\ n\ge0\\0&\text{otherwise}\end{cases}$$
$$H_N(\omega)=\sum_{k=0}^\infty\alpha^k e^{-i\omega Nk}=\sum_{k=0}^\infty(\alpha e^{-i\omega N})^k=\frac{1}{1-\alpha e^{-i\omega N}}\qquad\abs{H_N(\omega)}=\abs{\frac{e^{i\omega N}}{e^{i\omega N}-\alpha}}$$

Substitute $\omega'\equiv\omega N$, plot, then change variables back to recover the initial axis and label it accordingly. Or use some argument that it's a contracted version by going back to $H(\omega)$. Roughly equivalent either way. (The graph is $\abs{H(N\omega)}$, e.g. $\abs{H(2\omega)}$ for $N=2$, $0<\alpha<1$. No more graphs.) This is a comb filter: it picks off frequencies that are multiples of $2\pi/N$.

Design-oriented analysis: layers like an onion; they fit together very well.

Analog system: $g(t)=e^{-\alpha t}u(t)$, $\Re\{\alpha\}>0$.
$$G(\omega)=\int g(t)e^{-i\omega t}\,dt=\frac{1}{\alpha+i\omega}=\frac{1}{i\omega-(-\alpha)}\qquad\abs{G(\omega)}=\frac{1}{\sqrt{\omega^2+\alpha^2}}\ (\alpha\in\mathbb{R})$$

EE 120: Signals and Systems

January 31, 2012.

Homework should be out today. Yes, there are homeworks in this class. (Same analog-system procedure for figuring out the frequency response of systems.) Fourier analysis today. Some of this will be review, but we'll dive more deeply into the linear algebra in this class.

Roadmap:

• Fourier Analysis: "A way of decomposing signals into their constituent frequencies. Kind of like the way a prism splits light into its components. Fourier analysis gives us the tools we need to analyze signals."
  • Periodic [in the time domain]
    • Discrete time: discrete-time Fourier series (DTFS).
    • Continuous time: continuous-time Fourier series (CTFS).
  • Aperiodic
    • Discrete-time Fourier transform (DTFT).
    • Continuous-time Fourier transform (CTFT).

We can put blocks around these. Discrete-time signals are periodic in the frequency domain. Before that, I want to review an abstraction that we use for signals (and the way we look at signals): as vectors.

Periodic DT Signals

$x(n+p)=x(n)\ \forall n$, for some $p\in\{1,2,3,4,\ldots\}$. Such a $p$ is called a period of $x$.
The smallest such $p$ is called the fundamental period. With discrete-time signals that are periodic, you always have to find out the period before you can talk about frequencies. $\omega_0\equiv2\pi/p$ is the fundamental frequency.

Let's say we have a signal defined as $\ldots,4,2,4,2,4,2,\ldots$. We can abstract and represent this signal as a Cartesian (Euclidean) vector: go to $p$-space and draw $\mathbf{x}$ by taking the values in one period and stacking them up: $\vec{x}=[4,2]$. What is the representation of this vector in terms of the two canonical basis vectors $\psi_0=[1,0]$, $\psi_1=[0,1]$? $\vec{x}=4\psi_0+2\psi_1$. We find the coefficients via projection -- a change of basis -- using inner products:
$$x^T\psi_0=(x_0\psi_0+x_1\psi_1)^T\psi_0=x_0\psi_0^T\psi_0+x_1\psi_1^T\psi_0=x_0\psi_0^T\psi_0\implies x_0=\frac{x^T\psi_0}{\psi_0^T\psi_0}$$
Just use the inner product with normalized basis vectors.

In this case, it was easy to find out what the coefficients were. I can repose the question by changing the $\psi_k$: rotate the basis to the (non-normalized) "sign" basis $\psi_0=[1,1]$, $\psi_1=[1,-1]$. Then the coefficients of $\vec{x}$ are $[3,1]$: $\vec{x}=3\psi_0+1\psi_1$.

[Talk about how we only need two orthogonal vectors, since our fundamental period is 2. We don't need a third vector.]

Now, there's something special about these two frequencies we found: harmonics, integer multiples of the fundamental. The number of terms equals the period -- certainly what this toy example is indicating. Now the question is, how do we find these coefficients? Oh wait, we've already solved that problem.

INTERMISSION

I'm going to recast what we just did in matrix-vector form. We said that $x$ as a vector can be represented as a linear combination of $\psi_0$ and $\psi_1$. (I chose the period over the interval 0 to 1.) I can do the same thing with $\psi_0$ and $\psi_1$ stacked as columns -- this is just the matrix for the FT.
I have
$$\vec{x}=X_0\psi_0+X_1\psi_1=F\begin{bmatrix}X_0\\X_1\end{bmatrix}\qquad F=\begin{bmatrix}1&1\\1&-1\end{bmatrix}=[\psi_0\ \psi_1]$$
This is a 2x2 matrix. I can write this as $x=\Psi X$. Solve for $X$ by left-multiplying by the inverse Fourier transform: $F^{-1}=\frac{1}{p}F^\dagger$.

INTERMISSION

Once you have complex numbers, you can no longer use the dot product. We now need the inner product $\braket{a}{b}=a^Tb^*$. The unconjugated form fails, since then $\braket{x}{x}\neq\abs{x}^2$.

Let $x$ be $p$-periodic, $\vec{x}=[x(0),\ldots,x(p-1)]$, and $\psi_k(n)=e^{ik\omega_0 n}$. I have $\psi_k\perp\psi_l$, where $\braket{\psi_k}{\psi_l}=p\,\delta_{kl}$. (Proof of orthogonality of $\psi_k,\psi_l$ for $k\neq l$: a geometric sum, not very interesting.)

$x$ is a linear combination of our $\psi_k$:
$$\braket{x}{\psi_l}=X_l\braket{\psi_l}{\psi_l}\implies X_l=\frac{1}{p}\braket{x}{\psi_l}$$

Synthesis equation and analysis equation:
$$x(n)=\sum_{k=\avg{p}}X_k e^{ik\omega_0 n}\qquad\qquad X_k=\frac{1}{p}\sum_{n=\avg{p}}x(n)e^{-ik\omega_0 n}$$
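The $p=2$ toy example can be checked end-to-end with the DTFS equations -- a small numerical sketch using only the standard library:

```python
import cmath

# DTFS for the p = 2 example x = [4, 2]: psi_k(n) = exp(i k w0 n), w0 = 2*pi/p.
p = 2
w0 = 2 * cmath.pi / p
x = [4.0, 2.0]

# Analysis: X_k = (1/p) * sum_n x(n) e^{-i k w0 n}.
X = [sum(x[n] * cmath.exp(-1j * k * w0 * n) for n in range(p)) / p
     for k in range(p)]
assert abs(X[0] - 3) < 1e-12   # DC coefficient: (4 + 2)/2
assert abs(X[1] - 1) < 1e-12   # alternating component: (4 - 2)/2

# Synthesis recovers x exactly: x(n) = sum_k X_k e^{i k w0 n}.
x_rec = [sum(X[k] * cmath.exp(1j * k * w0 * n) for k in range(p))
         for n in range(p)]
assert all(abs(x_rec[n] - x[n]) < 1e-9 for n in range(p))
```

The coefficients $[3,1]$ are exactly the ones found with the sign basis above, since for $p=2$ the DTFS basis vectors are $[1,1]$ and $[1,-1]$.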

EE 120: Signals and Systems

February 2, 2012.

DTFS

$x(n) = \sum X_k e^{ik\omega_0n}$. The complex exponentials form a basis for $\mathbb{C}^p$. The way we define inner products for signals is exactly as you'd imagine: $\braket{\psi_k}{\psi_l} = \sum_n \psi_k(n)\psi^*_l(n)$. Frequency periodicity: $\psi_{k+p} \equiv \psi_k$. Only in the discrete-time periodic case are our functions guaranteed to be periodic in both time and frequency.
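The orthogonality relation $\braket{\psi_k}{\psi_l}=p\,\delta_{kl}$ (and the frequency periodicity $\psi_{k+p}\equiv\psi_k$) can be confirmed numerically for a small period -- an illustrative sketch:

```python
import cmath

p = 8
w0 = 2 * cmath.pi / p

def psi(k, n):
    """DTFS basis function psi_k(n) = exp(i k w0 n)."""
    return cmath.exp(1j * k * w0 * n)

def inner(k, l):
    """<psi_k, psi_l> = sum_n psi_k(n) * conj(psi_l(n)) over one period."""
    return sum(psi(k, n) * psi(l, n).conjugate() for n in range(p))

# Orthogonality: p on the diagonal, 0 off the diagonal.
for k in range(p):
    for l in range(p):
        expected = p if k == l else 0
        assert abs(inner(k, l) - expected) < 1e-9

# Frequency periodicity: psi_{k+p}(n) == psi_k(n) for every sample n.
assert all(abs(psi(3 + p, n) - psi(3, n)) < 1e-9 for n in range(p))
```

The off-diagonal sums vanish because each is a full geometric sum of a nontrivial root of unity, exactly the "not very interesting" proof above.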

$x(n) = \cos(n)$ is not periodic in discrete time $\implies$ no DTFS.
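This periodicity test can be brute-forced numerically -- a sketch with a made-up helper that searches for the smallest $p$ with $\Omega p$ an integer multiple of $2\pi$ (i.e. $\Omega/2\pi$ rational):

```python
import math

def fundamental_period(Omega, max_p=100000):
    """Smallest p >= 1 with cos(Omega*(n+p)) = cos(Omega*n) for all n,
    i.e. Omega*p a multiple of 2*pi; None if no period up to max_p exists."""
    for p in range(1, max_p + 1):
        k = Omega * p / (2 * math.pi)
        if abs(k - round(k)) < 1e-9:
            return p
    return None

assert fundamental_period(2 * math.pi / 5) == 5   # cos(2*pi*n/5): p = 5
assert fundamental_period(3 * math.pi) == 2        # cos(3*pi*n): p = 2
assert fundamental_period(1.0) is None             # cos(n): never periodic
```

For $\Omega=1$, $p/2\pi$ never lands on an integer, which is exactly why $\cos(n)$ has no DTFS.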

Evidently there will be a quiz on Tue, since pset is due on Wed.

Consider what it means to send discrete-time periodic signals through LTI systems.

EE 120: Signals and Systems

February 7, 2012.

CTFS:

Periodic if $x(t+p) = x(t)\ \forall t$ for some $p \in \mathbb{R}$, $p > 0$. The smallest positive $p$ is called the fundamental period. Fundamental frequency $\omega_0 \equiv \frac{2\pi}{p}$.

Can also write $p = 2\pi/\omega_0$. In discrete time, writing it this way was dangerous, since depending on $\omega_0$, $p$ may or may not be an integer.

For discrete-time signals, the constant signal $x(n) = 1\ \forall n \in \mathbb{Z}$ was periodic with fundamental period $p=1$.

What about the signal $x(t) = 1\ \forall t \in \mathbb{R}$? The fundamental period is undefined: any $p>0$ can serve as a period.

So there are subtleties in each story. In the discrete-time story, there were some sinusoids that looked periodic but weren't, and in continuous time the constant signal has no fundamental period.

We're going to jump immediately into the Fourier series. You can decompose any (well-behaved) periodic continuous-time signal as a linear combination of complex exponentials whose frequencies are integer multiples of the fundamental frequency.

$x(t) = \sum X_k e^{ik\omega_0t} = \sum X_k\psi_k$.

We know the procedure for finding the kth coefficient. Before we go there, there's something you ought to pay attention to in this expression. When I draw a typical periodic signal and look at one period, how many points do I have? Uncountably many. The range is also potentially a set of uncountably many values. So this is a bold claim: we can represent these signals with a countable number of eigenfunctions.

Unlike the discrete-time story, this equality will not always be a pointwise equality. There are different gradations of convergence. Whenever you have an infinite sum, you have to worry about convergence in the back of your head. For well-behaved signals, the left and the right converge, and this is true for every $t$. For less well-behaved signals the equality will no longer hold pointwise. Strange things happen, e.g. the Gibbs phenomenon.

You'll have a reasonable understanding of Fourier series. We're not going to worry too much about convergence in this class. The only setting where the issue doesn't arise is the discrete-time Fourier series.

Claim: Fourier analysis works. One path we can take is for you to take my word for it. Or we could prove it. Since last time was hilarious, we're going to take this for granted, for now. Assume orthogonality of $\psi_k$ for some definition of the inner product. $\psi_k = \exp(ik\omega_0 t)$.

I am now going to determine $X_l$. Take the inner product of $x$ with $\psi_l$.

The procedure is exactly the same. We're just swapping out our definition of inner product. Exploit the orthogonality.

For discrete-time $p$-periodic signals, we defined the inner product as $\braket{f}{g} = \sum fg^*$. Guess what the continuous-time inner product is for $p$-periodic signals: $\braket{f}{g} = \int_{\avg{p}} f(t)g^*(t)\,dt$.

And if they're non-periodic, we'll do the same, but over all time.

Show that our eigenfunctions are orthogonal.

Synthesis equation: $x(t) = \sum X_k \exp(ik\omega_0 t)$.
Analysis equation: $X_k = \frac{1}{p}\int_{\avg{p}} x(t)\exp(-ik\omega_0 t)\,dt$.

How do I show that $\braket{\psi_k}{\psi_l} = 0$ for $k\neq l$? Just evaluate the integral: we get a complex exponential that completes a whole number of cycles over the period, integrated over that period. Looks like 0 to me.

Example: $x(t) = \cos(\pi t/3) = \frac{1}{2}\left(e^{i\pi t/3} + e^{-i\pi t/3}\right)$, so $X_1 = X_{-1} = \frac{1}{2}$ (and all other $X_k = 0$).
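The analysis equation can be applied to this example numerically -- a sketch that approximates the integral with a midpoint Riemann sum over one period (the function name and step count are arbitrary):

```python
import cmath
import math

# CTFS coefficients of x(t) = cos(pi*t/3): fundamental period p = 6, w0 = pi/3.
p = 6.0
w0 = 2 * math.pi / p

def Xk(k, N=240):
    """X_k = (1/p) * integral over one period of x(t) e^{-i k w0 t} dt,
    approximated by a midpoint Riemann sum with N subintervals."""
    dt = p / N
    total = 0j
    for n in range(N):
        t = (n + 0.5) * dt
        total += math.cos(math.pi * t / 3) * cmath.exp(-1j * k * w0 * t) * dt
    return total / p

# Euler: cos = (e^{i w0 t} + e^{-i w0 t})/2, so X_1 = X_{-1} = 1/2, rest 0.
assert abs(Xk(1) - 0.5) < 1e-6
assert abs(Xk(-1) - 0.5) < 1e-6
assert abs(Xk(0)) < 1e-6 and abs(Xk(2)) < 1e-6
```

The uniform sum is effectively exact here because the integrand is a trigonometric polynomial, so even a modest $N$ gives machine-precision coefficients.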

The impulse train: $q(t)=\sum_\ell\delta(t-\ell p)=\sum_k Q_k\exp(ik\omega_0 t)$, where $\delta(t)=\deriv{u(t)}{t}$. Poisson's identity:
$$\sum_\ell\delta(t-\ell p)=\frac{1}{p}\sum_k\exp(ik\omega_0 t)$$

For a periodic train $r(t)$ of unit-area pulses of width $\Delta$:
$$R_k=\frac{1}{p}\int_{\avg{p}}r(t)\exp(-ik\omega_0 t)\,dt=\begin{cases}\frac{1}{p}&k=0\\\frac{1}{p}\cdot\frac{\sin(k\omega_0\Delta/2)}{k\omega_0\Delta/2}&k\neq0\end{cases}$$

What happens if I want to approximate a signal that has finite energy? What should the coefficients $\alpha_k$ be? Orthogonal projection! Least squares!

EE 120: Signals and Systems

February 9, 2012.

Discrete-time Fourier transform

Discrete aperiodic signals. The DTFT can also handle discrete-time periodic signals, provided that we make use of Dirac deltas in the frequency domain. You should all remember that the frequency response of a discrete-time LTI system is
$$H(\omega)=\sum_\ell h(\ell)e^{-i\omega\ell}\qquad(\star)\ \text{[DTFT analysis equation]}$$
It turns out this is the DTFT of the impulse response.

How do we go from the frequency domain to the time domain, i.e. derive the impulse response from the frequency response? Our goal is to determine $h$ from $H$. I'm going to do this in two ways, both of which rely on things we've already done; there's nothing new here.

Recall: with discrete LTI systems, the frequency response has fundamental period $2\pi$: $H(\omega+2\pi)=H(\omega)$. We already possess the mathematical machinery to handle periodic functions of a continuous variable -- we've seen this in the CTFS. There, our continuous variable was time. Now, it's frequency.

Recall the CTFS: $x(t)$, fundamental period $p$, fundamental frequency $\omega_0=\frac{2\pi}{p}$. We said we can express $x$ as a linear combination of complex exponentials that are harmonics (integer multiples) of the fundamental frequency, with coefficients
$$X_k=\frac{1}{p}\int_{\avg{p}}x(t)e^{-ik\omega_0 t}\,dt$$

I'm going to draw parallels now with the current scenario: $x\to H$, $t\to\omega$, $p\to2\pi$, $\omega_0=\frac{2\pi}{p}\to\Omega_0=\frac{2\pi}{2\pi}=1$. Let $k\equiv-\ell$ in equation $(\star)$: $H(\omega)=\sum_k h(-k)e^{ik\Omega_0\omega}$. Our coefficients are the
$$h(-k)=\frac{1}{2\pi}\int_{\avg{2\pi}}H(\omega)e^{-ik\omega}\,d\omega$$
This is exactly parallel to the previous expression. Equivalently, $h(\ell)=\frac{1}{2\pi}\int_{\avg{2\pi}}H(\omega)e^{i\ell\omega}\,d\omega$: the synthesis equation.

DTFT equations:
$$H(\omega)=\sum_n h(n)e^{-i\omega n}\qquad\qquad h(n)=\frac{1}{2\pi}\int_{\avg{2\pi}}H(\omega)e^{i\omega n}\,d\omega$$

The synthesis equation writes $h$ as a linear combination of complex exponentials: $H(\omega)$ is a measure of the contribution of frequency $\omega$ to the function $h$. For now, we're working with the universe of functions of the continuous variable $\omega$ which happen to be periodic with period $2\pi$.

Ideal discrete-time low-pass filter

With $H(\omega)=B$ for $\abs{\omega}<A$ (over one period), the impulse response is
$$h(n)=\frac{1}{2\pi}\int_{\avg{2\pi}}H(\omega)e^{i\omega n}\,d\omega=\frac{B}{\pi n}\sin(An)$$

EE 120: Signals and Systems

February 16, 2012.

Discrete-Time Fourier Transform: continued

Recall the ideal low-pass filter. In the frequency domain it has discontinuities; in the time domain its impulse response is NOT absolutely summable. However, it is square-summable. We have names for signals that behave in these particular ways.

$\ell_1$ signals: absolutely summable. $\sum\abs{x(n)}<\infty$ (an abuse of notation, but useful at that).

$\ell_2$ signals: square-summable. $\sum\abs{x(n)}^2<\infty$. Finite energy.

In discrete time, if a signal is $\ell_1$, it is $\ell_2$. The converse is not true.

Example: $\sum_n\abs{h(n)}=\sum_{n=0}^\infty\abs{\alpha}^n=\frac{1}{1-\abs{\alpha}}<\infty$. $h\in\ell_1\implies\abs{H(\omega)}$ finite and $H$ smooth. ($\ell_1$ DTFTs are continuous, though not necessarily smooth / infinitely differentiable.)

If $x\notin\ell_1$ but $x\in\ell_2$, you run into a bit of a problem: you cannot use the analysis equation directly, since the summation will not converge absolutely. But we can define the Fourier transform in that case to be the limit as $N\to\infty$ of $\sum_{n=-N}^N x(n)e^{-i\omega n}$; it turns out we get convergence in energy. It's just for you to bear in mind, since you have an infinite sum. The most well-behaved
functions are $\ell_1$: they have nice DTFTs. The next level up in terms of misbehavior is $\ell_2$: the Fourier transform cannot be obtained through the analysis equation, but can be reverse-engineered or otherwise obtained, and these DTFTs have discontinuities.

We have yet another level up in terms of misbehavior: signals of "slow growth" (or zero growth). Examples: $x(n)=1$, $x(n)=n$, $x(n)=e^{i\omega_0 n}$. Basically, these are signals that grow no faster than polynomially in time (and signals that neither grow nor decay). Notice that the set of these signals $\cap\ (\ell_1\cup\ell_2)=\varnothing$. We cannot use the analysis equation, but we can use the synthesis equation, and their DTFTs have Dirac deltas.

Example: $x(n)=e^{i\omega_0 n}$. $X(\omega)=\alpha\delta(\omega-\omega_0)$. Strictly speaking, we'd have to write the spectrum as a sum: we have a delta every $2\pi$, starting at $\omega_0$. But that's not particularly interesting, since it looks messy; I'm just interested in one interval here, as long as we know the spectrum is $2\pi$-periodic. To get our $\alpha$, we can and will use the synthesis equation:
$$\frac{\alpha}{2\pi}\int\delta(\omega-\omega_0)e^{i\omega n}\,d\omega=\frac{\alpha}{2\pi}e^{i\omega_0 n}=e^{i\omega_0 n}\implies\alpha\equiv2\pi$$

For $x(n)=\cos(\omega_0 n)$: $X(\omega)=\pi\delta(\omega-\omega_0)+\pi\delta(\omega+\omega_0)$ ($2\pi$-periodic, again).

For signals that belong to none of the previous categories, we have to learn a new transform -- the Z-transform! Coming in March, to theaters near you.

Why do $\ell_1$ signals have finite DTFTs? The triangle inequality, applied directly to the analysis equation.

INTERMISSION

DTFT Properties

Time-shifting: if I have a signal that's been Fourier-transformed, what do I get in the frequency domain when I shift the original signal? We get a phase shift: if we shift $x(n)$ by $N$, our new $X(\omega)$ is multiplied by $e^{-i\omega N}$.
Example / curveball: H(\omega) = e^{-i\omega/2}, which yields h(n) = \frac{(-1)^{n+1}}{\pi(n-\frac{1}{2})}.

Remember: convolution and multiplication are duals over the time and frequency domains.

Not too long from now, we are going to start studying the sampling theorem. One way to consider this as a half-delayed version of x is to think of x as having samples. We map it to a sequence of samples in continuous time, spaced T seconds apart (the sampling period), via an interpolation scheme that we'll learn about. I can then shift to the left or right by an arbitrary amount. So in continuous time, I take this signal and shift it to the right by T/2 seconds. Then I take whatever this was, take samples spaced T apart, and convert it back into Kr\"onecker deltas.

This filter is called, for the reason described, a half-sample delay. You essentially have to interpolate between the samples in discrete time and sample the halfway points.

What happens if I multiply in the time domain by a complex exponential (modulation)? Using the analysis equation, the DTFT of e^{i\omega_0 n}x(n) is \sum x(n) e^{-i(\omega - \omega_0) n} = X(\omega-\omega_0). Multiplying the time domain by e^{i\omega_0 n} results in a frequency shift. Multiplying by a complex exponential yields a shift in the other domain.

There is one property that is distinct, and that is the modulation property (multiplication property). q(n) = x(n)y(n) \iff Q(\omega)? Not an ordinary convolution. Q(\omega) = \sum x(n)y(n)e^{-i\omega n}. In particular, for x(n), use the synthesis equation to replace x(n) with an integral over X(\xi) (where \xi is a dummy variable).$$Q(\omega) = \frac{1}{2\pi} \sum_n\int_\avg{2\pi}X(\xi)e^{i\xi n}y(n)\,d\xi\, e^{-i\omega n} = \frac{1}{2\pi} \int_\avg{2\pi}d\xi\, X(\xi)\sum_n e^{-i(\omega-\xi)n}y(n) = \frac{1}{2\pi} \int_\avg{2\pi}d\xi\, X(\xi)Y(\omega-\xi)$$What's special about this? It's only over an interval of 2\pi. This is a very special kind of convolution.
We call this a circular convolution of x and y. So x(n)y(n) \iff \frac{1}{2\pi} (X*Y)(\omega).

Think of two functions plotted in terms of \lambda. One function, let's say, is a square pulse wave thingy. An easy way of carrying out circular convolution: take one of the two functions, keep one replica (e.g. the one from -\pi to \pi), and then do a regular convolution with the other function. Advice: choose the flipped + shifted signal for elimination of replicas.

EE 120: Signals and Systems February 21, 2012.

Circular convolution wrap-up

h(n) = \frac{B}{\pi n} \sin(An) \fourier H(\omega) = B(u(\omega+A)-u(\omega-A)). (ideal low-pass filter and its impulse response)

What if I had a filter whose frequency response G(\omega) is basically the signal (circularly) convolved with itself (with some extra factors)? Description: a ramp going from 0 to 4AB^2 from -2A to 0, then back down to 0.

• Recall: To perform circular convolution:
• Keep one function fixed (in terms of a dummy frequency variable).
• Keep only one period of the other signal.
• Perform regular convolution.
• Example: (Q * R)(\omega).
• Plot on a dummy-variable axis Q(\lambda) with its replicas.
• Plot R only in one period (zero out all replicas).
• Flip \& slide R: R(\lambda-\omega).

What's the brute-force way to find the impulse response? The synthesis equation. The inspection method? It looks like it's g(n) = 2\pi h^2(n) = \frac{2B^2}{\pi n^2}\sin^2(An) (circular convolution must yield the 2\pi scalar).

• Recall: q(n)r(n) \fourier \frac{1}{2\pi} (Q*R)(\omega).
• Note: this filter is BIBO stable, and its Fourier transform is continuous.

Now, we can spend a lot of time on the various properties of the DTFT (what happens when you time-reverse a signal, shift, multiplication properties, modulation, etc.), and this'll probably show up in a problem set.

As the last example on the DTFT: f(n) = e^{i\omega_0 n}h(n). Let one of these be a sinusoid. What do you do? It's effectively shifted by \omega_0, after the math clears.

Intermission

Tentative midterm date: March 13.
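The multiplication property x(n)y(n) \iff \frac{1}{2\pi}(X*Y)(\omega) can be sanity-checked numerically; a quick sketch with two short arbitrary signals (my picks):

```python
import cmath, math

# Numerical check of the multiplication property: the DTFT of q(n) = x(n)y(n)
# equals (1/2pi) times the circular convolution of X and Y over one period.
# x and y are short arbitrary test signals supported on n = 0..3.
x = [1.0, -2.0, 0.5, 3.0]
y = [2.0, 1.0, -1.0, 0.25]

def dtft(sig, w):
    return sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(sig))

def q_via_circular_conv(w, steps=512):
    # (1/2pi) * integral over one period of X(xi) Y(w - xi) dxi
    dxi = 2 * math.pi / steps
    total = 0j
    for k in range(steps):
        xi = -math.pi + (k + 0.5) * dxi
        total += dtft(x, xi) * dtft(y, w - xi) * dxi
    return total / (2 * math.pi)

w0 = 0.7                                      # arbitrary test frequency
direct = dtft([a * b for a, b in zip(x, y)], w0)
indirect = q_via_circular_conv(w0)
assert abs(direct - indirect) < 1e-9
```

Since the integrand is a trigonometric polynomial, the uniform-grid midpoint sum over a full period is essentially exact here.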
Continuous-Time Fourier Transform

X(\omega): roughly the frequency response in continuous time.$$X(\omega) = \infint x(t)e^{-i\omega t}dt\\ x(t) = \frac{1}{2\pi} \int X(\omega)e^{i\omega t}d\omega$$Slight shorthand: X = \infint x(\tau)\phi_\tau \,d\tau, where \phi_\tau(\omega) \defequals e^{-i\omega \tau}.

To determine x(t): (assume for now that \braket{\phi_t}{\phi_\tau} = \delta_{t\tau})

\braket{X}{\phi_t} \defequals \infint X(\omega) \phi_t^*(\omega) d\omega.

\braket{X}{\phi_t} = \int x(\tau)\braket{\phi_\tau}{\phi_t}d\tau

NB: Inner products are in the frequency domain.

• Recall: In the discrete-time case, we found that \braket{\phi_m}{\phi_n} = 2\pi\delta_{mn}. Turns out, we have exactly the same relationship in the continuous-time case.
• In the continuous-time case, we have \braket{\phi_\tau}{\phi_t} = 2\pi\delta(t-\tau).
• Recall Poisson's identity: \sum \delta(t-\ell p) = \frac{1}{p}\sum \exp(ik\omega_0t).

Now we are desperate to have \infint e^{i\omega t}d\omega = 2\pi\delta(t). Which means we must have \delta(t) \equiv \frac{1}{2\pi}\int e^{i\omega t}d\omega. A new definition for \delta(t). Mutual orthogonality of the CTFT basis takes on this nasty form.

Plausibility argument: the right-hand side is proportional to \int e^{i\omega t}d\omega = \int\cos(\omega t)d\omega + i\int \sin(\omega t)d\omega. The imaginary part is guaranteed to be 0. By the same argument, the cosine portion is 0 if t \neq 0. Otherwise, we're integrating 1 over all reals, so we have \infty. The exponentials constructively interfere at t = 0, but destructively interfere at t \neq 0. Therefore the right-hand side must be proportional to \delta(t). The claim is that the constant of proportionality is 2\pi. The hard part is to recognize this is actually a Dirac delta.

Intermission

Steam coming out of our heads, evidently. Remember, this is the only barrier to studying the CTFT.

One way to show \delta(t) \equiv \frac{1}{2\pi} \int e^{i\omega t}d\omega: this is a method you will see, hear, etc.
in engineering contexts for figuring out something that doesn't converge, since Riemann integrals don't really work here.

\delta_\Delta(t) = \frac{1}{2\pi} \int e^{i\omega t}e^{-\Delta\abs{\omega}}d\omega. The extra factor in this product is supposed to tame the function such that it is now integrable. (Take \Delta > 0.)

Multiply what you've got with something that makes the product converge, then take the limit as said function goes to 1. In other words, I am perturbing the problem slightly to make it stable and taking the limit as the perturbation goes to 0.

NB: half-width, half-maximum: where we're at half the width of our function and also half the maximum value.

Perturbation theory! \alpha \defequals the perturbation parameter, chosen such that the area is 1. Result of the integral: \delta_\Delta(t) = \frac{\alpha\Delta}{\pi}\frac{1}{\Delta^2 + t^2}. It turns out \alpha=1. This is a Cauchy probability density function. (Names for the Dirac delta: generalized function, distribution. Look up the theory of distributions.)

EE 120: Signals and Systems February 23, 2012.

Continuous-time Fourier transform

Recall:$$X(\omega) = \int_{-\infty}^\infty x(t)e^{-i\omega t}dt\\ x(t) = \frac{1}{2\pi}\int_{-\infty}^\infty X(\omega)e^{i\omega t}d\omega$$We developed the inverse Fourier transform using a bizarre identity for which we established a way of deriving it (i.e. perturbation theory).

Example: the ideal low-pass filter. In the continuous-time story, you do not assume any periodicity. Figure out the impulse response of this filter.$$h(t) = \frac{1}{2\pi}\int_{-A}^A Be^{i\omega t}d\omega = \frac{B}{\pi t}\sin(At)$$(valid for t = 0 as well, as a limit)

Okay. Now, how would you plot this? A maximum of \frac{AB}{\pi}, and zeroes at \frac{k\pi}{A} for all nonzero k \in \mathbb{Z}.$$H(\omega)\bigg|_{\omega=0} = \int h(t) e^{-i\omega t}dt\bigg|_{\omega=0} = \int h(t)dt = H(0) = B$$The largest triangle has the same area as the integral over all space.

Another question. Let B=1. What is \lim_{A\to\infty} h(t)? Yet another Dirac delta (yet another Dirac delta definition).

Take g(t) as a block. What is G(\omega)?
It is a sinc function. If you dilate in the time domain, you squish the frequency domain, and vice versa.$$\delta(t) \fourier 1\\ 1 \fourier 2\pi\delta(\omega)\\ e^{i\omega_0t} \fourier 2\pi\delta(\omega-\omega_0)\\ \cos(\omega_0t) \fourier \pi(\delta(\omega-\omega_0) + \delta(\omega+\omega_0))\\ \sin(\omega_0t) \fourier i\pi(\delta(\omega+\omega_0) - \delta(\omega-\omega_0))\\ x(t) \fourier X(\omega)\\ x(t-T) \fourier e^{-i\omega T}X(\omega)\\ e^{i\omega_0t}x(t) \fourier X(\omega - \omega_0)$$Multiplication in Time$$h(t) = f(t)g(t) \fourier H(\omega)\\ f(t) \fourier F(\omega)\\ g(t) \fourier G(\omega)$$Since the transforms are not periodic, we have a regular convolution with an extra 1/2\pi term: H(\omega) = \frac{1}{2\pi}(F * G)(\omega).

Reasoning for convolution in time leading to multiplication in the frequency domain: cascade two systems, and choose the input to be a complex exponential. This is an eigenfunction, so the output response is F(\omega)G(\omega). Now choose the input to be an impulse; the output is (f*g)(t).

INTERMISSION

Amplitude modulation

I'm going to start what looks like a new phase of this course, but it's hardly anything new and unpredictable. If you understand the CTFT and its fundamental properties (what we've talked about so far), you should have no problem understanding amplitude modulation.

One of the things you've experienced by now is that the atmosphere is quite unforgiving to audible frequencies. Your voice only transmits over some short distance. The obvious solution is to shift the spectrum to a much higher frequency range. The way we do it is to multiply by either a complex exponential or a sine/cosine (called a carrier signal), which carries the signal of interest. So Y(\omega) = \frac{1}{2\pi}(X * C)(\omega) = X(\omega - \omega_0). What happens is that \omega_0 is usually large enough to make transmission through the atmosphere possible.
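The frequency shift Y(\omega) = X(\omega - \omega_0) that AM relies on can be checked numerically; a sketch using a unit block for x(t) (an arbitrary finite-support choice) and a midpoint Riemann sum for the analysis integral:

```python
import cmath

# Numerical check of the frequency-shift property used by AM: the CTFT of
# e^{i w0 t} x(t) equals X(w - w0). Here x(t) is a unit block on |t| < 1,
# an arbitrary finite-support choice so the integral is easy to compute.
w0 = 5.0

def ctft(f, w, lim=1.0, steps=20000):
    # midpoint Riemann sum of integral_{-lim}^{lim} f(t) e^{-iwt} dt
    dt = 2 * lim / steps
    total = 0j
    for k in range(steps):
        t = -lim + (k + 0.5) * dt
        total += f(t) * cmath.exp(-1j * w * t) * dt
    return total

block = lambda t: 1.0                        # x(t): unit block on |t| < 1
shifted = lambda t: cmath.exp(1j * w0 * t)   # e^{i w0 t} x(t)

for w in (0.0, 1.3, 4.0):
    assert abs(ctft(shifted, w) - ctft(block, w - w0)) < 1e-9
```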
Ignoring all degradation going through the atmosphere, to retrieve the original signal, multiply by the negative carrier frequency (complex exponential). Frequencies around zero: baseband frequencies. This shifts the spectrum of the received signal back to baseband. (Beware aliasing, which completely garbles the signal.)

Apply a low-pass filter at the end to capture the baseband spectrum. Scale by two to recover the original signal, complete with amplitude.

Final observations: we first of all made the assumption that the received signal is the same as the transmitted signal. There is a whole area of communication theory that deals with deterioration. The other assumption is that the oscillator at the receiver can generate the exact same frequency as the transmitter oscillator, and at the same phase.

EE 120: Signals and Systems February 28, 2012.

AM Continued

(review of what we just did) Recall: there are two major assumptions that we made. First of all, no transmission degradation. Second of all, the receiver has the exact same phase and frequency as the transmitter.

Question: What if \hat{c}(t) = \cos(\omega_0t + \theta)? (still keeping the assumption that we can somehow match the frequency; the new assumption is that the phase is constant and not time-varying)

Thoughts: if the phase is off by \pi/2, then you lose your signal entirely. r(t) = \frac{1}{2}\cos(\theta)x(t) + \frac{1}{2} x(t)\cos(2\omega_0t + \theta). If \theta is relatively small (compared to \pi/2), then we are safe, since we have our original signal. However, we lose our signal as \cos\theta\to0.

Note: MT2 date: Tues, 13 Mar 2012.

Also: when \theta=0, this is referred to as synchronous demodulation (transmitter and receiver in sync).

Instead of sending y(t) into a low-pass filter, what we can do is send it through a diode with a parallel RC circuit. This is one way to do asynchronous demodulation. This is technically cheating, since we assume that our signal is entirely positive. However, we can simply apply a DC offset if we know the bounds on our values. Suppose \abs{x(t)} \le A \forall t.
Then, transmit \hat{x}(t) \equiv x(t) + A.

Why is this method of transmitting \hat{x}(t) called AM with large carrier? We're actually also transmitting the carrier: \hat{x}(t)\cos(\omega_0 t) = x(t)\cos(\omega_0 t) + A\cos(\omega_0 t). If \abs{x(t)} \le K, we want K < A. In fact, \frac{K}{A} is referred to as the percent modulation or modulation index.

One thing you should know is that there is redundancy in the information transmitted by double side-band suppressed carrier.

Frequency Division Multiplexing

Each player is allocated a piece of real estate along the frequency axis.

Quadrature multiplexing

The way we can do this is by exploiting the orthogonality of cosine and sine. What's being transmitted is the sum of the two.

EE 120: Signals and Systems March 1, 2012.

Sampling of CT Signals

Now we have to differentiate between frequency in radians per second (\omega) and frequency in radians per sample (\Omega). Our sampling interval we will represent with T_s, our sampling period.

There are periods associated with these boundary blocks (C \to D, D \to C). T_r: the reconstruction period. T_r and T_s may or may not be the same.

The new part really is sampling theory. So let's begin. Let's say I have a continuous-time signal x_c(t). We sample periodically: x_d(0)=x_c(0), x_d(1)=x_c(T_s); basically, x_d(n)=x_c(nT_s). A basic question is whether or not it is possible to reconstruct the original signal. In general, the answer is no.

There must be (we hope so, at least) a set of conditions that guarantees recoverability. These actually happen to be sufficient conditions. That is the whole subject of the sampling theorem, which we will develop.

So let me open up this first box, the process of sampling. We can model the process of sampling this signal as one that involves multiplying the original input signal by an impulse train, where the impulses are separated T_s seconds apart. Remember: multiplying a function that is locally continuous with an impulse is equivalent to rescaling the impulse.
Once you extract x_q, there's a block we're not going to worry about that converts Dirac deltas to Kr\"onecker deltas. Believe it or not, there is nothing new here.

What happens to the spectrum of x_c as it is multiplied by the impulse train? The fundamental frequency of this impulse train is \frac{2\pi}{T_s}. It's also called, in this context, our sampling frequency. Considering the CTFT of periodic signals, we have a bunch of uniform impulses separated by \omega_s, and each of them has strength \omega_s. What happens when T_s gets smaller? The sampling frequency increases, and so we have stronger (and more separated) impulses as our spectrum. The CTFT is simply 2\pi\sum_k Q_k\delta(\omega-k\omega_s).

Multiplication in the time domain is convolution in the frequency domain, so we can consider the triangles (the replicas of the original spectrum). So what happens? In order for these triangles to be recoverable, we need 2A \le \omega_s, or A \le \frac{\omega_s}{2}. \omega_s must be large enough that adjacent triangles do not overlap (and this is, indeed, the crux of the sampling theorem) -- in this case, we can certainly recover our original signal by applying a low-pass filter.

The low-pass filtering is an interpolation operation -- we're interpolating between adjacent values of the signal. (Explanation: you've got your samplings of the signal, so we've actually got a set of impulses. We're applying a low-pass filter on this and filling in the missing portions of the signal. Remember that our ideal low-pass filter is the sinc, and so we're taking a linear combination of a countably infinite set of sincs.)

Whittaker-Nyquist-Kotelnikov-Shannon Sampling Theorem (1915, 1928, 1933, 1949)

Most frequently known as the Shannon sampling theorem. This is the set of sufficient conditions for recovering a signal from a sequence of samples.

• x_c is band-limited. Namely, \abs{X_c(\omega)} = 0 for \abs{\omega} > A.
• The sampling rate \omega_s is large enough such that 2A \le \omega_s.

There is no universal definition of bandwidth.
In some circles, the highest frequency A is called the bandwidth. In other circles, the frequency's footprint 2A is called the bandwidth. It does not really matter.

As mentioned, nothing today is new; we've just used very basic properties of the CTFT of periodic signals and the convolution property.

Thoughts

What if \omega_s < 2A? (By the way, 2A is called the Nyquist rate.) There has been a big push lately; look up "sub-Nyquist" in a group of papers. There are many people trying to figure out how to sample at a lower-than-Nyquist rate.

So what happens if \omega_s = 2A? Our triangles are tangent. In the ideal case, this is still acceptable, but it is impossible to build an ideal low-pass filter, so it doesn't actually matter.

Oversampling: you exceed the Nyquist rate by a certain amount to compensate. Past a certain amount of oversampling, it doesn't really mean anything.

If \omega_s < 2A, our triangles overlap, and so the spectrum is significantly different. Higher frequencies get folded into lower frequencies. This is called aliasing: the artifact of folding higher frequencies into lower frequencies. Once you have that, you have irrecoverably lost your original signal. In image processing, we have anti-aliasing (our next topic).

Notice that if the signal is band-limited, we can make \omega_s large enough such that we do not get overlap. If x_c is not band-limited, then no matter how high we make \omega_s, we're going to have some overlap. There's one thing we can do in that case, but that's a compromise. We call this:

Anti-aliasing

The idea is we discard frequencies above a certain value. With aliasing, the frequency region is distorted, which is reflected everywhere in the time domain. This is generally not good, unless it's really small and you can ignore it. Anti-alias filtering is a preprocessing stage: x_c(t) \to LPF \to q(t). Namely, apply a low-pass filter (with cutoffs @ -\frac{\omega_s}{2}, \frac{\omega_s}{2}) before sampling.
We eliminate the aliasing we know will happen (in the event that we don't do this). The result is that you don't lose quite as high of frequencies; there is no more aliasing.

So why anti-alias first instead of allowing it to alias, if you lose information either way?

• Anti-aliasing allows for the preservation of more of the original signal.

The set of signals that does not produce aliasing with a fixed \omega_s are those that are band-limited to \left(-\frac{\omega_s}{2},\frac{\omega_s}{2}\right). (talk about Parseval's theorem)

Carriage-wheel effect

Aliasing is what causes the phenomenon that some of you may have noticed: when on the highway, you stare at the wheels of the car passing by you, and they seem to be moving much more slowly, in the opposite direction. This used to show up in PhD qualifying exams at MIT. You may also find this described as the carriage-wheel effect: since the frame rate of old movies was 24 frames per second, the wheels looked to be moving backward.

Mark a point on the unit circle. I strobe this wheel at a rate \omega_s (strobing it is like sampling it). So I've got this strobe gun with a sampling rate of \omega_s, and \omega_s happens to be \frac{3}{2}\omega_0 (so not exceeding the Nyquist rate). If I sample exactly at \omega_0, the point looks stationary. So the only way to capture the motion properly is to sample at 2\omega_0 or higher. But I'm not doing that.

The wheel's spectrum is a single Dirac impulse of strength 2\pi centered at \omega_0. The sampling signal Q(\omega) will have an impulse train separated by \omega_s. When we convolve these and apply a low-pass filter, we have just one remaining frequency at \omega_0-\omega_s = -\frac{\omega_0}{2}.

EE 120: Signals and Systems March 6, 2012.

Sampling Cont'd

We are still in the first of three blocks (where we take a continuous-time signal and create a discrete-time signal). This end-to-end system is effectively a continuous-time system.
This kind of processing we refer to (for obvious reasons) as discrete-time processing of continuous-time signals. The opposite is also possible, where you start out with a discrete-time signal, process in continuous time, and spit out a discrete-time signal.

Last time, we opened up the first box (C \to D). We didn't even talk about the entire box -- there's still some stuff to discuss. So, consider an impulse train. We then take this through a Dirac \to Kr\"onecker block to produce x_d(n). Question: How is X_q(\omega) related to X_d(\Omega)? Etc.; work with moving around between coordinates. All of this assumes that there has been no aliasing. For us to have had no aliasing, remember the Nyquist sampling theorem.

Considerations of LTI behavior, for Y_c(\omega) \equiv X_c\parens{\frac{\omega T_r}{T_s}}: the only way your end-to-end system will have an LTI equivalent is if you fulfill the conditions of the Nyquist sampling theorem (no aliasing), and your reconstruction period is the same as your sampling period. Then you know that your output is equal to \frac{G}{T} H_d\parens{\omega T} X_c(\omega). The equivalent LTI filter is simply \frac{G}{T} H_d\parens{\omega T}.

EE 120: Signals and Systems March 8, 2012.

We spoke about sampling, the impulse train, and the conditions under which the signal can be recovered -- if it is band-limited, and the sampling frequency is high enough, we can recover it with a low-pass filter. We drew the spectrum, and we showed that if we sampled fast enough, we could recover the original signal.

What I want to do now is talk about what is going on in the time domain. I am sending the impulse train (with nonuniform strengths) into some low-pass filter, and I want to see what r(t) is. So the first thing I want you to do is tell me what the impulse response of this filter is: h(t) = \sinc\parens{\frac{\omega_s t}{2\pi}}.

When you convolve the sinc with these impulses, what do you get? Notice the alignment. Since x_q(t) = \sum x_c(nT_s)\delta(t-nT_s), r(t) = \sum x_c(nT_s) h(t-nT_s). So let's draw this.
(Basic premise: interpolation; farther from the center means the impulse has a weaker effect. Also, zero crossings occur at the locations of all other impulses. The scaling is the strength of the impulse.) We're only going to get one signal; only one will be obtained from this interpolation scheme.

Remember, \sinc \alpha = \frac{\sin \pi \alpha}{\pi\alpha}. Thus the expression for r(t) can be rewritten as \sum x_c (nT_s)\sinc\parens{\frac{\omega_s(t-nT_s)}{2\pi}}. This is a loaded expression. Remember how, when we started Fourier series, we expanded a signal in terms of orthogonal basis functions? This is an orthogonal expansion: the coefficients are the values of the signal at the sample points, and these shifted sincs are the orthogonal functions.

Now, if you look at your problem set, there are a couple of problems you ought to pay attention to. One: the Fourier transform preserves orthogonality, i.e. if two signals are mutually orthogonal in the time domain, they're mutually orthogonal in the frequency domain. You use that later to show that these shifted sincs are orthogonal: you do not show this in the time domain. That's where you use the orthogonality-preserving property of the continuous-time Fourier transform.

What does this mean? A band-limited signal x has an orthogonal expansion in terms of shifted sincs. These are not arbitrary sincs: T_s, the sampling period, enters this expression. It's very related to the bandwidth of the original signal. Yet another context in which orthogonal expansions show up.

You can essentially consider h_0(t) as the unshifted \sinc\parens{\frac{\omega_s t}{2\pi}} = h(t), with h_n(t) = h_0(t - nT_s). These functions, we claim, are mutually orthogonal: \braket{h_k}{h_l} = \delta_{kl}. Here, you use Parseval's theorem, since you do not want to integrate \sinc^2 directly. This is called band-limited interpolation, since you do not get the original signal if either it wasn't band-limited, or you didn't sample fast enough (which is where aliasing occurs).
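Band-limited interpolation can be sketched numerically: with \omega_s/(2\pi) = 1/T_s, the expansion above becomes \sum x_c(nT_s)\sinc\parens{\frac{t-nT_s}{T_s}}. A quick check with an arbitrary sinusoid and sampling period (my picks, well above the Nyquist rate):

```python
import math

# Sketch of band-limited (sinc) interpolation: reconstruct x_c(t) = cos(wt)
# from its samples x_c(n Ts) via the truncated shifted-sinc expansion.
# w, Ts, and the truncation length are arbitrary; Ts is well below the
# Nyquist limit pi/w, so the sum should closely match cos(wt) off-grid.
def sinc(a):                                  # sinc(a) = sin(pi a) / (pi a)
    return 1.0 if a == 0 else math.sin(math.pi * a) / (math.pi * a)

w = 1.0
Ts = 0.5                                      # w_s = 2 pi / Ts >> 2 w

def reconstruct(t, terms=2000):               # truncated sinc expansion
    return sum(math.cos(w * n * Ts) * sinc((t - n * Ts) / Ts)
               for n in range(-terms, terms + 1))

for t in (0.13, 1.07, 2.71):
    assert abs(reconstruct(t) - math.cos(w * t)) < 1e-2
```

The residual error comes purely from truncating the (infinite) expansion; the full sum recovers the band-limited signal exactly.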
Another view of sampling as an orthogonal expansion: start in the frequency domain, where I have the spectrum of my signal. If you remember, the Fourier series expansion is not just useful for representing periodic signals, but also for finite-duration functions (in this case, finite in frequency). You can think of phantom replications of this triangle. In other words, I can create a periodic extension of this triangle, which is my X_c(\omega), if I create the periodic extension in such a way that the triangles touch exactly.

I can write X_c(\omega) \equiv \sum_{k\in \mathbb{Z}} X_k e^{ikT_s\omega}. We refer to T_0 = \frac{2\pi}{2A} as the fundamental "frequency" of the periodic extension of X_c(\omega) (this has units of seconds, in fact). This expression is only valid for the range \abs{\omega} \le A; it is zero outside of said range. You can do this for any finite-duration function, since it can be thought of as one period of a periodic signal.

What I'm going to do is go back to the time-domain function using the CTFT inverse formula. I'm looking at a band-limited function, so the integral doesn't actually go from -\infty to \infty, but rather from -A to A. And in this range, you know that X_c is equal to its Fourier series expansion. So, exchanging the integral and the summation (as we do so very often), and evaluating the integral, we get a linear combination of sinc functions.

It turns out these coefficients are the values of the signal at the sampled points. That isn't obvious yet. After some algebraic manipulations, we can finally see that this whole thing is just x_c(kT_s).

EE 120: Signals and Systems March 13, 2012.

\mathcal{Z} Transform

Something we've been brushing under the carpet for a while is the set of signals for which the Fourier transform is not defined. So the Z-transform is defined for discrete-time LTI systems.
(the discrete analogue of the Laplace transform)

We know that the output is simply the convolution of the input with the impulse response. If we said that h(n) = \alpha^n u(n), where \abs{\alpha} > 1, then we know that this system has no frequency response. However, behavior can be well-defined even though you can't say anything about it in the frequency domain.

For instance: if x is of finite duration, then you have a finite number of terms in the convolution corresponding to the output signal. You can thus talk about the output of this system for such an input: the convolution is perfectly well-defined, and y(n) is finite for all n (and thus well-defined).

Another situation: if x is right-sided (i.e. x(n) = 0 for n < N -- it may also have zero values to the right of N, but to the left, it is zero), what happens? Note that causal signals are a subset of right-sided signals. If N is anything smaller than zero, you are looking at a right-sided but not necessarily causal signal.

In the convolution of two right-sided signals (or two left-sided signals, even!), only finitely many terms contribute to each output value. Therefore in these cases (input and impulse response) we can define our output anywhere (i.e. finite, but not necessarily bounded).

Finally, if x is bounded and h is absolutely summable, then the system is BIBO-stable -- y is finite, so it is also bounded.

\ell^\infty = \set{ x : \mathbb{Z} \mapsto \mathbb{C} : \abs{x(n)} \le B_x < \infty}

Recall that \ell^1 = \set{ h : \mathbb{Z} \mapsto \mathbb{C} : \sum\abs{h(n)} \le B_h < \infty }

For the DTFT, we applied e^{i\omega n} to our system and observed that the output was H(\omega)e^{i\omega n}, where we defined H(\omega) = \sum_n h(n)e^{-i\omega n}. We're now going to relax the constraint that \abs{e^{i\omega}} = 1, i.e. that \omega \in \mathbb{R}. Now let x(n) = z^n \ (z \in \mathbb{C}).
So if I apply this signal to the system, what am I going to get?$$y(n) = \sum_m h(m)x(n-m) = \sum_m h(m)z^{n-m} = z^n\sum_m h(m)z^{-m}\\ z^{-n} y(n) = \sum_m h(m)z^{-m} \equiv \hat{H}(z)$$This is called the transfer function of the system. The transfer function for us will either be the Z-transform (if discrete-time) or the Laplace transform (if continuous-time). For now we're stuck with the Z-transform.

Notice the similarity of the format of these expressions. The main difference is that now, I'm allowed to veer away from the unit circle. This is an infinite sum, so just as with the Fourier transform, we must worry about convergence. With Z-transforms and Laplace transforms, we can't get away from convergence. Associated with this sort of expression is what we call a region of convergence (RoC): basically, the region in the complex plane for which this sum converges. We're going to brush aside a lot of subtleties regarding convergence. R_h is the region in the complex plane (i.e. the set of z) such that \sum_m \abs{h(m)z^{-m}} < \infty. If the kernel of this sum is absolutely summable, we say that we are in the region of convergence. The values of z for which this is true make up the region of convergence.

I can take any discrete-time function and talk about its Z-transform, just as I can talk about the Fourier transform of any function. So what if I'm looking at x(n) = \delta(n)? \hat{X}(z) = 1 -- we only have one value in our sum. R_x = \mathbb{C}; in other words, 0 \le \abs{z}.$$\delta(n-1) \ztrans z^{-1}\:\: (R_h = \set{z : 0 < \abs{z}})\\ \delta(n+1) \ztrans z\:\: (R_h = \set{z : 0 \le \abs{z} < \infty})$$Now, the two-point moving average: \frac{1}{2}\parens{\delta(n) + \delta(n-1)} \ztrans \frac{1 + z^{-1}}{2} = \frac{z + 1}{2z}\ (R_h = \set{z : 0 < \abs{z}}). Note that if this had been an anti-causal two-point moving average, we'd include 0 and exclude infinity.

All of these signals so far are finite-duration signals (FIR filters).
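The transfer-function derivation above is easy to verify for an FIR impulse response; a quick sketch with arbitrary h and z (my picks):

```python
import cmath

# Check of the eigenfunction derivation: feeding x(n) = z^n through
# convolution with a short FIR impulse response h gives
# y(n) = H_hat(z) * z^n, where H_hat(z) = sum_m h(m) z^{-m}.
# h and z are arbitrary picks; any nonzero z works for an FIR h.
h = [1.0, -0.5, 0.25]
z = 1.3 * cmath.exp(1j * 0.8)

H_hat = sum(hm * z ** (-m) for m, hm in enumerate(h))

for n in range(-2, 5):
    y_n = sum(hm * z ** (n - m) for m, hm in enumerate(h))   # y = h * x
    assert abs(y_n - H_hat * z ** n) < 1e-9
```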
The region of convergence of a function that has a finite region of support is the entire complex plane, with the possible exception of zero, infinity, or both. Example of both: the three-point moving average, centered at zero.

You may have noticed already that the regions of convergence have had a particular shape so far: each is bounded by a circle of some radius, and the transforms are rational in z. This is not always the case, but it holds for most of the signals we'll work with. There's a nice accounting between numerator and denominator that allows you to determine where the region of convergence is.

Ex: h(n) = \alpha^n u(n), \alpha \equiv \frac{3}{2}.

\hat{H}(z) = \sum_{n=0}^\infty \parens{\frac{3}{2z}}^n = \frac{1}{1 - \alpha z^{-1}} = \frac{z}{z - \alpha}. RoC: \set{z : \abs{z} > \frac{3}{2}}.

When we talk about the z-transform, you can't just give an expression; you also must provide a region of convergence. One without the other is an incomplete picture.

Plotting the region of convergence: in this case, just draw a dotted circle (radius not included), and shade its exterior.

This is not a proof, but notice that we've got a causal signal, and a region of convergence that is outside of some circle (and extends to infinity). Roots of the denominator are called poles of the system; roots of the numerator are called zeroes. Therefore this system has one zero and one pole.

It turns out that for right-sided functions, the RoC is always outside the radius defined by the outermost pole.

One thing I want you to pay attention to is the following: the angle of the pole makes no difference in the region of convergence (ever!). When you look at \hat{H}(z) = \sum_n h(n)z^{-n} and replace z = Re^{i\omega}, you notice that this is \sum h(n)R^{-n}e^{-i\omega n}. The factor e^{-i\omega n} plays no role in whether or not the kernel is absolutely summable. So the region in the complex plane where this sum converges is independent of \omega.
I could have made this \alpha a complex number of the same magnitude, and the region of convergence would have been exactly the same. It is the magnitude of \alpha that is important.

Ex: g(n) = -\alpha^n u(-(n+1)). Notice that this is a left-sided signal.

\hat{G}(z) = -\sum_{n=-\infty}^{-1} \parens{\frac{\alpha}{z}}^n = -\sum_{n^\prime=1}^\infty \parens{\frac{\alpha}{z}}^{-n^\prime} = \frac{z}{z - \alpha} \quad (\set{z : \abs{z} < \abs{\alpha}})

This is exactly the same expression in z, but the region of convergence is different. This is why we are compelled to always consider the region of convergence: two very different expressions in time yield the same expression in their z-transforms, but the difference is in their regions of convergence.

Just as with right-sided functions, the RoC for left-sided functions is always inside the radius defined by the innermost pole.

Monologuing

With frequency response and Fourier transforms, we all knew what we were trying to do: we were trying to decompose a signal into its constituent frequencies. There is no such notion for the Z-transform. But the whole idea of stabilizing an unstable system by placing it in a feedback configuration requires the Z-transform.

Consider \alpha^n u(n). In this case, we have not specified whether \alpha is inside or outside the unit circle; the expression is exactly the same. Let's take the first case, where \abs{\alpha} < 1. The region of convergence is outside the circle of radius \abs{\alpha}. We could then consider the DTFT: this is a case where the region of convergence strictly includes the unit circle. If that is true, then there is a very simple relationship between the z-transform and the Fourier transform: we can evaluate the z-transform on the unit circle, i.e. at z = e^{i\omega}. It is because of this that some people consider the z-transform to be a generalization of the Fourier transform. However, there are functions for which we have a Fourier transform but no z-transform.
You also know that when the RoC contains the unit circle, the time-domain function must be absolutely summable: the z-transform looks like the Fourier transform of R^{-n}h(n). The point of R^{-n} is to tame the function.

If \alpha is outside the unit circle, no such relationship exists between the z-transform and Fourier transform, simply because there is no Fourier transform.

Now let's consider the anti-causal case: -\alpha^n u(-(n+1)). If \alpha happens to be within the unit circle, the function has no Fourier transform. But if \alpha is outside the unit circle, then the function has a Fourier transform.

So that's the relationship between the z-transform and the Fourier transform: if the region of convergence contains the unit circle, then you can equate them.

If h(n) \ztrans \hat{H}(z), then h(n-1) \ztrans \frac{\hat{H}(z)}{z}. Similarly, h(n+1) \ztrans z\hat{H}(z).

There is a difference between bounded and unbounded regions of convergence. We have a few minutes, so let me talk about the distinctions between causal signals and right-sided signals (and also anticausal / left-sided). So let's say we take a right-sided but not causal signal. The RoC is still outside of radius \abs{\alpha}, but now you have to exclude \infty. Similarly, for left-sided signals, you'd then exclude 0.

EE 120: Signals and Systems

March 15, 2012.

More on the \mathcal{Z}-transform

$$h(n) \ztrans \hat{H}(z) = \infsum{n} h(n) z^{-n}\\ R_h = \text{region of convergence}\\ \defequals \set{z \in \cplx \middle| \sum_n \abs{h(n) z^{-n}} < \infty}$$

For signals of finite duration, the region of convergence is the entire complex plane, minus possibly r = 0 and r = \infty.

Example (causal): x(n) = \alpha^n u(n). \hat{X}(z) = \frac{1}{1 - \alpha z^{-1}}, \abs{\alpha} < \abs{z} (i.e. outside the circle).

We also had an anticausal example, q(n) = -\alpha^n u(-n-1). \hat{Q}(z) = \frac{1}{1 - \alpha z^{-1}}, \abs{z} < \abs{\alpha} (i.e. inside the circle).
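The "evaluate on the unit circle" relationship is easy to see numerically. A sketch under my own choice of parameters (\alpha = 0.8, so the RoC \abs{z} > 0.8 contains the unit circle): the truncated DTFT sum of \alpha^n u(n) matches \hat{X}(z) at z = e^{i\omega}.

```python
import numpy as np

# When the RoC contains the unit circle, the DTFT is the z-transform
# evaluated at z = e^{i omega}. alpha = 0.8 and w = 1.3 are my own
# illustrative choices; any |alpha| < 1 and any w would do.
alpha = 0.8
w = 1.3
n = np.arange(400)
dtft = np.sum(alpha ** n * np.exp(-1j * w * n))     # truncated DTFT of alpha^n u(n)
on_circle = 1.0 / (1.0 - alpha * np.exp(-1j * w))   # X(z) at z = e^{i w}
print(abs(dtft - on_circle))                        # essentially zero
```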
Furthermore, we discussed that a Fourier transform exists if and only if the unit circle is contained in the region of convergence.

Notice that the RoC of a causal signal is outside, all the way to infinity, while the RoC of an anticausal one is inside, all the way to zero. We further learned that causal signals are a subset of right-sided signals, and anti-causal signals are a subset of left-sided signals.

So what happens if we shift our signal, i.e. r(n) = x(n+1)? \hat{R}(z) = z\hat{X}(z). This is a simple example of what we call the time-shift property. You can guess what happens when we shift by an arbitrary integer: x(n-N) \ztrans z^{-N}\hat{X}(z). Note that r is no longer causal, but it is still right-sided.

Notice that the transform now blows up at infinity, so our region of convergence is R_r: \abs{\alpha} < \abs{z} < \infty. The set of right-sided signals is a strict superset of the set of causal signals. This is the difference between the z-transform of right-sided signals and that of causal signals. Similarly, with a left-sided signal, we would exclude the origin from the RoC.

There's also a simple way of showing why causal signals converge outside of some radius. Let x be causal. Its z-transform starts from the earliest possible point, i.e. n = 0: \hat{X}(z) = x(0) + \frac{x(1)}{z} + .... If \abs{z} = R_1 \in R_x, I want you to argue why \abs{z} = R_2, where R_1 < R_2, is also in the RoC. Reasoning: with larger radii, we have smaller values in our absolute sum.

Right-sided signals: almost identical, except we have a finite number of elements on the left, and so infinity must be excluded. Once you find the radius at which the sum converges, everything else outside also converges. Similar argument for anti-causal and left-sided signals.

So now let's combine these. Example: g(n) = \parens{\frac{1}{2}}^n u(n) - \parens{\frac{3}{2}}^n u(-n-1). As done before, we have \frac{1}{1 - \alpha z^{-1}} for both values of "\alpha".
Thus our region of convergence is \frac{1}{2} < \abs{z} < \frac{3}{2} (superposition tells us the corresponding z-transform is \frac{2z}{2z - 1} + \frac{2z}{2z - 3} = \frac{2z(z-1)}{(z-\frac{1}{2})(z-\frac{3}{2})}). As you can see, two-sided signals have annular regions for their RoCs.

Reason for zeroes: if I were to ask you to find the inverse of the system, what would you do? Let's say this represents distortion, and you want to undo the distortion. Zeroes also come into play when you want to plot the frequency response.

Let's do another example: h(n) = \expfrac{3}{2}{n}u(n) - \expfrac{1}{2}{n} u(-(n+1)). Now we've got nothing: there is no overlap between the two regions, so there is neither a Z-transform nor a region of convergence. (We would have the same expression, but it doesn't hold anywhere.)

Intermission

Time Shift Property

x(n-N) \ztrans z^{-N}\hat{X}(z). What does this do to the region of convergence? It can potentially eliminate infinity (if N positive) or zero (if N negative), but not both.

Convolution Property

If you have h \equiv f \star g, then h \ztrans \hat{H}(z) = \hat{F}(z)\hat{G}(z). A simple way to show this is by cascading filters and feeding in z^n (instead of a complex exponential): this is identical to the eigenfunction property of LTI systems. And what's the RoC? R_h \supseteq (R_f \cap R_g) (it could be bigger if pole-zero cancellation occurs). Think of these poles as dam walls.

If we put this system in cascade with another one such that q(n) = \delta(n) - \frac{1}{2}\delta(n-1), \hat{Q}(z) = \frac{z-\frac{1}{2}}{z}. Since this is an FIR filter, R_q = \cplx - \set{0}. \hat{A}(z) = \hat{G}(z)\hat{Q}(z) = \frac{2(z-1)}{z - \frac{3}{2}}. We get double pole-zero cancellation, in fact, so R_a = \set{z \middle| \abs{z} < \frac{3}{2}}.

Time-reversal

x(n) \ztrans \hat{X}(z). x(-n) \ztrans ? Do a variable substitution, and then you see that everywhere you had z, it's now z^{-1}. Thus x(-n) \ztrans \hat{X}(\frac{1}{z}).
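The time-reversal property can be checked with a short finite signal (the values below are my own toy choice). For x supported on n = 0..3, x(-n) is supported on n = -3..0, and its transform should equal \hat{X}(1/z).

```python
import numpy as np

# Time-reversal check (toy signal of my own): x(-n) <--> X(1/z).
x = np.array([1.0, 2.0, -0.5, 3.0])        # x(0), x(1), x(2), x(3)

def zt(sig, start, z):
    """sum_k sig[k] z^{-(start+k)}: z-transform of a finite signal whose
    first sample sits at time index `start`."""
    n = start + np.arange(len(sig), dtype=float)
    return np.sum(sig * z ** (-n))

z0 = 0.8 * np.exp(1j * 1.1)                 # arbitrary test point
lhs = zt(x[::-1], -3, z0)                   # transform of the reversed signal x(-n)
rhs = zt(x, 0, 1.0 / z0)                    # X(1/z)
print(abs(lhs - rhs))                       # essentially zero
```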
When you correlate this with the Fourier-transform story, we got a frequency reversal in the frequency domain. Locations of poles and zeroes map to their inverses, i.e. a pole or zero at z_0 maps to one at \frac{1}{z_0}.

Multiplication by a complex exponential

Presume

$$g(n) \ztrans \hat{G}(z)\\ h(n) = z_0^n g(n) \ztrans \hat{H}(z) = ?$$

\hat{H}(z) = \hat{G}(\frac{z}{z_0}), after the dust clears. If p_0 is a pole of \hat{G}, it moves to z_0 p_0.

Z-transform of the unit step? \frac{1}{1 - z^{-1}}, where 1 < \abs{z}. This is a perfect example of why the z-transform is not a strict superset of the Fourier transform: that only happens when the unit circle is strictly part of the RoC. Otherwise you can't evaluate the expression there.

Z-transform of the tone burst (suddenly-applied cosine wave)? We've done this (albeit in parts). Note that the radius of convergence isn't changing. Will leave it to you to figure out what the transform of r^n \cos(\omega_0 n) u(n) is.

EE 120: Signals and Systems

March 22, 2012.

Upsampling property

$$x(n) \mapsto \uparrow N \mapsto y(n) = \begin{cases}x(n/N) & \text{if } n \equiv 0 \pmod{N}\\ 0 & \text{otherwise}\end{cases}$$

i.e. we have the same values, but now interspersed with more zeroes. Take the axis and dilate by N (three, say). So see if you can come up with an expression for the Z-transform of the upsampled signal. We should just have \hat{Y}(z) = \hat{X}(z^N).

This should not surprise you. When you upsampled in the time domain, what happened in frequency? We contracted in the frequency domain. You get that even from here. If I remind you of an example we did eons ago, y(n) = \alpha y(n-1) + x(n) had a frequency response of G(\omega) = \frac{1}{1 - \alpha e^{-i\omega}}. If I change this to the parameters of y(n) = \alpha y(n-N) + x(n), then H(\omega) = \frac{1}{1 - \alpha e^{-i\omega N}}. But if you compare g(n) = \alpha^n u(n) with its upsampled version h (h(n) = \alpha^{n/N} for n a multiple of N, zero otherwise), we've already seen this. So when you upsample, z is raised to the N^{th} power.

What's the RoC?
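A quick numerical check of the identity \hat{Y}(z) = \hat{X}(z^N), with a toy signal and test point of my own choosing:

```python
import numpy as np

# Upsampling-by-N check: y(n) = x(n/N) when N | n, else 0, and then
# Y(z) = X(z^N). The signal values and N = 3 are my own illustration.
x = np.array([1.0, -2.0, 0.5, 4.0])
N = 3
y = np.zeros(len(x) * N)
y[::N] = x                          # intersperse N-1 zeros between samples

def zt(sig, z):
    """z-transform of a causal finite signal: sum_n sig[n] z^{-n}."""
    n = np.arange(len(sig), dtype=float)
    return np.sum(sig * z ** (-n))

z0 = 1.2 * np.exp(1j * 0.9)
print(abs(zt(y, z0) - zt(x, z0 ** N)))   # essentially zero: Y(z) = X(z^N)
```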
This should bring up two more questions: what happens to the poles and zeroes? We take the N^{th} root of everything (i.e. the inverse function), so everything moves closer to 1. (Rationale: z_p^N = p \implies z_p = p^{1/N}.) We get N times as many poles, in fact, since we have N roots of p; likewise for zeros.

Going back to the question of the region of convergence for y: if the RoC for x is R_1 < \abs{z} < R_2, the RoC for y is R_1 < \abs{z}^N < R_2, i.e. R_1^{1/N} < \abs{z} < R_2^{1/N}, so R_y = R_x^{1/N}.

So let's do the example given earlier: y(n) = \alpha y(n-N) + x(n). \hat{H}_1(z) = \frac{1}{1 - \alpha z^{-1}}; \hat{H}_4(z) = \frac{1}{1 - \alpha z^{-4}}. Draw the pole-zero diagrams; what is the region of convergence? (Note that we've got degeneracy, i.e. multiplicity. Denote it with a number in parentheses if you've got multiplicity greater than 2; if the multiplicity is 2, you can use a double-circle or double-x.)

Differentiation

Another property that's actually very important is differentiation in z. So suppose you've transformed x \ztrans \hat{X}(z). What does \deriv{\hat{X}}{z} correspond to? It turns out that -z\deriv{\hat{X}}{z} \ztrans n x(n).

Example: g(n) = n\alpha^n u(n) \ztrans \hat{G}(z) = ? If you want to make this look like the original form, just multiply top and bottom by z^{-2}.

Very important point: extension to higher derivatives. So what happens as we increase this? What does this mean? We can decompose any rational z-transform into a linear combination of lower-order terms. Fundamental theorem of algebra. Proposition: suppose we've got a transfer function, a numerator over a denominator. We can factor the numerator and denominator, and you also learned that whenever you do this, you can break apart the ratio into a sum. Note that this starts breaking when you have degeneracy (i.e. systems with duplicate poles). So from this qualitative argument, it should not surprise you if I tell you that the only way you can get a rational Z-transform is if the signal is a sum of one-sided exponentials multiplied by some polynomial.
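Applying the differentiation property to the example above, g(n) = n\alpha^n u(n) should transform to \frac{\alpha z^{-1}}{(1 - \alpha z^{-1})^2}. A numerical check under my own parameter choices (\alpha = 0.6 and a test point with \abs{z} > \abs{\alpha}, so the sums converge):

```python
import numpy as np

# Differentiation-property check for g(n) = n alpha^n u(n):
# claimed transform G(z) = alpha z^{-1} / (1 - alpha z^{-1})^2.
# alpha = 0.6 and z0 are my own illustrative choices.
alpha = 0.6
z0 = 1.4 * np.exp(1j * 0.3)
n = np.arange(300, dtype=float)
series = np.sum(n * alpha ** n * z0 ** (-n))   # direct sum of n alpha^n z^{-n}
closed = (alpha / z0) / (1 - alpha / z0) ** 2
print(abs(series - closed))                    # essentially zero
```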
We'd also have to include the left-sided versions of these. We can make a general statement: a Z-transform expression \hat{X}(z) is rational iff x(n) is a linear combination of terms n^k \alpha^n u(n) and n^k \beta^n u(-(n+1)). Shifted versions will certainly also work.

Using partial fractions is one of the methods of doing an inverse transform. We're not going to learn a formal inverse Z-transform; we're just going to use various heuristics (not unlike solving differential equations). In general, the inverse z-transform requires a contour integral (complex analysis) and thus is not required in this class.

Now, if you believe this, we've got several things: n^k\alpha^n u(n), LCCDEs, and rational Z-transforms. They form a family.

LCCDEs and Rational Z Transforms

Suppose I've got an input, an impulse response, and an output. You know the output is the convolution of x and h, so \hat{Y} = \hat{H}\hat{X}, which means the transfer function of an LTI system is the ratio of the transform of the output to the transform of the input. The frequency response of the filter gives you the Fourier transform of the output.

We can write our difference equation as \sum_{k=0}^N a_k y(n-k) = \sum_{m=0}^M b_m x(n-m). We've seen this. One way to get the transfer function is to take the z-transforms of both sides: if they're equal in the time domain, their z-transforms must also be equal. Time-shift property. Just considering the ratio \hat{H} \equiv \frac{\hat{Y}}{\hat{X}}, we have our transfer function. Familiarize yourself with going from the LCCDE to the transfer function by inspection.

Now, for the end of the lecture: an irrational Z-transform.

Example

This is a standard example in practically any signal-processing book you'll find. \hat{X}(z) = \log(1 + \alpha z^{-1}). Determine x(n). (Use the differentiation property.) Answer: x(n) = \frac{(-1)^{n-1} \alpha^n}{n}u(n-1). You can also check using the Taylor expansion \log(1 + \lambda) = \sum_{n=1}^\infty \frac{(-1)^{n+1} \lambda^n}{n}.
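The claimed inverse of the log example can be verified numerically: summing the coefficients \frac{(-1)^{n-1}\alpha^n}{n} against z^{-n} should reproduce \log(1 + \alpha z^{-1}). A check with my own parameter choices (\alpha = 0.5, z = 2, so \abs{\alpha z^{-1}} < 1 and the series converges):

```python
import numpy as np

# Check of the textbook example X(z) = log(1 + alpha z^{-1}) against its
# claimed inverse x(n) = (-1)^{n-1} alpha^n / n for n >= 1.
# alpha = 0.5 and z0 = 2.0 are my own illustrative choices.
alpha = 0.5
z0 = 2.0
n = np.arange(1, 200, dtype=float)
x = (-1.0) ** (n - 1) * alpha ** n / n       # claimed inverse transform
series = np.sum(x * z0 ** (-n))
print(abs(series - np.log(1 + alpha / z0)))  # essentially zero
```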
EE 120: Signals and Systems

April 3, 2012.

We finished talking about the relationship between LCCDEs and Z-transforms. A little later -- either next lecture or next week -- we're going to talk about how to solve LCCDEs this way. Convenience: convolutions turn into multiplications (which are almost always easier to do than convolutions). Z-transforms turn difference equations into algebraic expressions.

Properties: Initial Value Theorem

If you have a causal discrete-time signal x (which means x(n) = 0 for n < 0), and suppose I'm looking at its Z-transform: \hat{X}(z) = \sum_{n=0}^\infty x(n)z^{-n} = x(0) + \frac{x(1)}{z} + \frac{x(2)}{z^2} + ....

\lim_{z\to\infty} \hat{X}(z) = x(0), ergo the name initial-value theorem. Simple to prove; sometimes helpful when you have a rational function or some other signal which you happen to know is causal. You can also massage this expression to get the other values. Obviously this does not work for the Fourier transform: in that case, z is always on the unit circle.

Dancing around inverse Z-transforms: x(n) = \frac{1}{2\pi i} \oint \hat{X}(z) z^{n-1} dz (contour integral). We are not going to use this (i.e. forget about it). The ways we invert:

• Inspection

If I ask you what the inverse transform of \frac{1}{1 - \alpha z^{-1}} is, we know the result, depending on whether \abs{\alpha} is greater or smaller than \abs{z}. Now consider \hat{X}(z) = \frac{1}{3}z - \frac{1}{2} + 2z^{-1}. We can decompose this FIR signal into its contributing values.

• Power Series Method

We can use the equivalent power series and just transform it term by term. (We could also get some of these via time-reversal and inspection.)

• Long Division

Recall rational transforms correspond to functions that have difference equations. Suppose \hat{G}(z) = \frac{z}{z-1}. Doing the long division the usual way (i.e. by z-1) will yield the causal signal u(n). Doing the long division a different way (i.e. by -1+z) will yield the anticausal signal -u(-n-1).
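The causal direction of the long division can be sketched mechanically. Below is my own small synthetic-division routine (not from the notes): writing \hat{G}(z) = z/(z-1) = 1/(1 - z^{-1}) with coefficients in ascending powers of z^{-1}, repeated division generates the coefficients of the causal inverse, which should be the unit step.

```python
# Long-division sketch (my own implementation): expand G(z) = z/(z-1),
# i.e. 1/(1 - z^{-1}), in powers of z^{-1}. Each quotient coefficient is
# one sample of the causal inverse transform.

def power_series_coeffs(b, a, num_terms):
    """Divide polynomial b by a (both given as coefficient lists in
    ascending powers of z^{-1}, with a[0] != 0); return the first
    num_terms quotient coefficients."""
    b = list(b) + [0.0] * num_terms          # working copy of the remainder
    coeffs = []
    for k in range(num_terms):
        c = b[k] / a[0]                      # next quotient coefficient
        coeffs.append(c)
        for j in range(len(a)):              # subtract c * a * z^{-k}
            if k + j < len(b):
                b[k + j] -= c * a[j]
    return coeffs

print(power_series_coeffs([1.0], [1.0, -1.0], 6))   # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0] -> u(n)
```

Dividing with the denominator coefficients reversed (ascending from the constant term -1) would instead generate the anticausal expansion, mirroring the "different way" of dividing described above.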
The point is that you have flexibility with respect to how you do the long division, and each way will give you a different corresponding one-sided signal. Recall: a signal cannot be causal if its transform diverges at infinity; the number of samples to the left of the origin corresponds to the order of growth of the transform (as a polynomial in z).

Rational Transform Pole-Zero Book-keeping

Suppose I have a transfer function \hat{H}(z) = \frac{A(z)}{B(z)}, where A is M^{th} order and B is N^{th} order. If M < N, H is strictly proper. If M \le N, H is proper. And if M > N, H is improper.

For M < N, there are N finite poles (counting multiplicities), M finite zeros, and N-M zeros at infinity.

For M = N, there are M = N finite poles and M = N finite zeros. No activity at infinity.

For M > N, there are M finite zeros, N finite poles, and M-N poles at infinity.

In any of these cases, the number of poles equals the number of zeros. The difference is always what is happening at infinity.

Back to power series

\hat{F}(z) = e^{\frac{1}{z}}. Just use the Taylor series for the exponential function. Falls into place.

Partial Fraction Expansion

Again we are speaking of a rational Z-transform. You've studied partial fractions in calculus. It's no different here, really. However, the fraction must be proper. If not, just do long division until it's in that form.

Case I: Simple Poles

The best case is when all finite poles are simple (i.e. order 1). (Remember: a causal signal means that the RoC must be outside the outermost pole.)

So what happens if I ignore the causality constraint and instead add the constraint that the signal is BIBO stable? We get a different inverse, which is now a two-sided signal, and nothing blows up.

EE 120: Signals and Systems

April 5, 2012.

Partial Fraction Expansions (Cont'd)

The case of multiple poles. Example: \hat{G}(z) = \frac{z^2}{\parens{z-\frac{1}{2}}^2(z-2)}. Remember: a double pole at \frac{1}{2} (for which we can draw the double-cross), and a pole at 2.
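This multiple-pole expansion can be sketched with scipy's `residuez`, which works on rational functions written in powers of z^{-1}. Dividing the numerator and denominator of \hat{G} by z^3 gives b = [0, 1] over a = (1 - \frac{1}{2}z^{-1})^2(1 - 2z^{-1}); the double pole produces an extra term in (1 - p z^{-1})^2. This is a sketch using scipy's conventions, not the hand method from lecture; the test point is my own choice.

```python
import numpy as np
from scipy.signal import residuez

# Partial fractions for G(z) = z^2 / ((z - 1/2)^2 (z - 2)), rewritten in
# powers of z^{-1}: b = [0, 1], a = (1 - 0.5 z^{-1})^2 (1 - 2 z^{-1}).
b = [0.0, 1.0]
a = np.convolve(np.convolve([1.0, -0.5], [1.0, -0.5]), [1.0, -2.0])
r, p, k = residuez(b, a)
print(np.sort(p))                     # double pole at 0.5, simple pole at 2

# Reconstruct G at a test point from the expansion; repeated poles appear
# as consecutive entries of p with increasing powers of (1 - p z^{-1}).
zi = 1.0 / (1.7 + 0.3j)               # an arbitrary test value of z^{-1}
val, power = 0.0, 1
for i in range(len(p)):
    power = power + 1 if i > 0 and abs(p[i] - p[i - 1]) < 1e-7 else 1
    val += r[i] / (1 - p[i] * zi) ** power
direct = np.polyval(b[::-1], zi) / np.polyval(a[::-1], zi)
print(abs(val - direct))              # essentially zero: expansion matches G(z)
```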
How many possible regions of convergence can we have? 3.

Now let's try to find the impulse response. Before you can do that, you need to break this rational transfer function into a combination of first- and second-order terms. We're not going to carry this all the way to the impulse response, but we are going to break it up. Any time you have a multiple pole, you have to write the expansion in terms of the first-order term plus the second-order term (or with Az + B in the numerator). The trick at the end is to differentiate and evaluate at the multiple pole.

Assume a causal system; determine g(n). Take a linear combination of the inverse Z-transforms, since this is a linear operator; use the differentiation property to derive the inverse transform of the second-order term.

Steady-State and Transient Decomposition of DT-LTI System Responses

We're going to talk in this course about two ways of decomposing the responses of DT systems. One of these is decomposing into transient (which dies out in the long term) and steady-state (long-term dominant) components.

Starting with a causal system: a first-order IIR filter with a single pole at z = \alpha. Since \alpha is inside the unit circle, this system is BIBO stable. Simple question: here's your system h, it's got some impulse response, and I apply a step function to it.

Once again, remember that partial fraction expansion requires that you have a strictly proper fraction. You may need to pull out some of your zeros to make the fraction strictly proper (or do long division).

The end result is y(n) = \frac{1}{1-\alpha} u(n) - \frac{\alpha}{1 - \alpha} \alpha^n u(n). The first term is our steady-state response; it does not go away with time. The second term is the transient response; it dies out as n grows.

Thus we can decompose any such signal into a transient portion and a steady-state portion.

Question: a BIBO-stable and causal DT-LTI system has all its poles inside the unit circle.
Why is that? The RoC must be outside the outermost pole (a result of causality). The system is also BIBO stable, so the RoC must include the unit circle. Thus the outermost pole must be inside the unit circle, and so all other poles must also be inside the unit circle.

Transient is any part that dies out at infinity, while steady-state is anything that is either steady or growing. Notions of dominance (separate from, but closely connected to, transient/steady-state analysis): dominance is tied to the long-term behavior, i.e. which term dominates when n is large?

Back to the original setup: x(n) = u(n) \implies y(n) = \frac{\alpha}{\alpha-1} \alpha^n u(n) + \frac{1}{1-\alpha} u(n). What if x(n) = 1 (for all time)? Since this system has settled, we have just the steady-state component. What this means is that the system is unable to distinguish between the constant signal 1 and the unit step which kicks in at n = 0, if you wait long enough. (We want to get to the continuous-time story eventually, where the same idea holds.)

All this talk about complex exponentials was really just a discussion regarding steady-state. The input's pole (here on the unit circle, not inside it) is the non-decaying one, and it dominates the response. Now that you have this connection, you don't have to solve for the coefficient of the steady-state term (if the system is BIBO-stable); all you have to do is figure out the transient portion.

Deferring ZIZO until next week.

Example

Consider the causal system described by y(n) = \frac{5}{6}y(n-1) - \frac{1}{6} y(n-2) + x(n). Determine the unit step response of the system.

Transform everything. Remember that \hat{H}(z) = \frac{\hat{Y}}{\hat{X}}.

(Linear algebra approach: homogeneous solutions look like \lambda^n, so \lambda^n = \frac{5}{6}\lambda^{n-1} - \frac{1}{6}\lambda^{n-2}.

$$\lambda^2 = \frac{5}{6}\lambda - \frac{1}{6}\\ \lambda^2 - \frac{5}{6}\lambda + \frac{1}{6} = 0\\ \lambda = \frac{1}{2}, \frac{1}{3}$$

These are our eigenvalues. Some linear combination of these two exponentials will give us our initial conditions (i.e.
y(0) = 1, y(-1) = y(-2) = 0). That is, y(n) = a_0 \parens{\frac{1}{2}}^n + a_1 \parens{\frac{1}{3}}^n.)

EE 120: Signals and Systems

April 10, 2012.

Zero-State and Zero-Input Responses

This is an alternate decomposition of system responses, based on whether the system is initially at rest or not: what part of the response is due to the initial conditions, and what part is due to the input?

Example believed to be in the textbook: the causal system described by y(n) - 0.9 y(n-1) = x(n).

If the system is not at rest, technically it is not LTI. Not nonlinear, though: it's what we call an incrementally-linear system. What distinguishes these from linear systems is that they have non-zero intercepts.

Turning off the input, all you've got is some nonzero initial condition; figure out the response as time goes forward. This is called the zero-input response of the system.

What if y(-1) = 0 and x(n) = u(n)? We've got a geometric series! Or do it z-transform style: take the transform as we've been doing for the past few weeks, and causality will tell you the rest. The output is \hat{Y} = \hat{H}\hat{X}; we know how to do partial fractions and the rest (recall the dam-wall picture for poles). This response is called the zero-state response (y_{ZSR}), meaning the initial state (set of initial conditions) is zero.

So you have the zero-input response plus the zero-state response as yet another decomposition of your system's response.

Now we're learning about the contributions of the nonzero initial state. We did this by splitting the response. There is actually a way to figure out the total response using transforms: a transform method. That is the main point of today.

Transform Method to get the Total Response

The method begins by looking at the difference equation, e.g. y(n) = 0.9 y(n-1) + x(n). I'm going to use the Lee & Varaiya method, and then we'll look at another very related method (the unilateral Z-transform).

For starters, multiply each side by u(n). So we have y(n)u(n) = 0.9y(n-1)u(n) + x(n)u(n).
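Before carrying the transform through, the zero-input/zero-state split for y(n) = 0.9 y(n-1) + x(n) can be verified by direct simulation. A sketch of my own (the initial condition y(-1) = 2 and the step input are illustrative choices):

```python
import numpy as np

# Zero-input / zero-state decomposition for y(n) = 0.9 y(n-1) + x(n).
def simulate(x, y_prev):
    """Run the recursion forward from initial condition y(-1) = y_prev."""
    y = []
    for xn in x:
        y_prev = 0.9 * y_prev + xn
        y.append(y_prev)
    return np.array(y)

x = np.ones(20)                              # unit-step input
y_init = 2.0                                 # y(-1) = 2 (my choice)

total = simulate(x, y_init)                  # full response
zir = simulate(np.zeros(20), y_init)         # zero-input: input off, state kept
zsr = simulate(x, 0.0)                       # zero-state: state zeroed, input kept
print(np.max(np.abs(total - (zir + zsr))))   # essentially zero: they superpose
```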
Then take the Z-transform of both sides (using the definition of the Z-transform). Note that this Z-transform looks very much like the z-transform you've seen up until now, except that it starts at zero and goes up to infinity. This is called the unilateral z-transform of y. For any causal signal, the unilateral transform is the same as the bilateral z-transform. With the unilateral transform, you can do it all in one go.

Laplace Transform

\hat{X}(s) \defequals \int_{-\infty}^{\infty} x(t) e^{-st} dt, where s = \sigma + i\omega. Just as with the Z-transform, we do not use an inverse transform formula; we're going to use similar methods.

Why do we even bother with this? For reasons similar to those that justified the Z-transform: we need a comparable transform for continuous-time systems.

Notice that the integral is actually the Fourier transform of the perturbed ("tamed") function x(t)e^{-\sigma t}. The region of convergence is determined by \sigma = \mathrm{Re}(s); \omega plays no role in convergence. In continuous time, there is a very nice correspondence between the sidedness of a signal and its RoC. Easier to remember.

Once again, causality means that the RoC extends all the way to (and includes!) infinity.

Notice that in this case, the RoC contains the i\omega axis. (Conjecture, since not yet proven in this class.) As with the z-transform, this is because x(t) is a stable signal, i.e. absolutely integrable. The proof is fairly trivial: \int\abs{x(t)e^{-i\omega t}} dt = \int \abs{x(t)} dt < \infty.

Transform pairs!

$$\renewcommand{\Re}{\mathrm{Re}} e^{-at} u(t) \ltrans \frac{1}{s+a} \quad (-\Re(a) < \Re(s))\\ -e^{-at} u(-t) \ltrans \frac{1}{s+a} \quad (\Re(s) < -\Re(a))$$

EE 120: Signals and Systems

April 17, 2012.

Differentiation property:

$$x(t) \ltrans \hat{X}(s)\\ \dot{x}(t) \ltrans s\hat{X}(s)$$

LCCDEs

y^{(N)}(t) + ... + a_1 y^{(1)}(t) + a_0 y(t) = b_M x^{(M)}(t) + ... + b_0 x(t). What I want you to do is apply the differentiation property to find the transfer function of this.
(The x-coefficient polynomial divided by the y-coefficient polynomial:) \frac{\sum_m b_m s^m}{\sum_n a_n s^n}.

Going back to a series-RC circuit powered by a voltage source: we have z(t) = \frac{x(t) - y(t)}{R} and C\dot{y}(t) = z(t), so RC\dot{y}(t) + y(t) = x(t). The transfer function therefore is \frac{1}{RCs + 1}. (The other way to get this is to plug in e^{st} and use the eigenfunction property.)

Inverting this transform yields \frac{1}{RC}e^{-t/(RC)}u(t). That is the impulse response of the system.

So that was differentiation in time. There is also differentiation in the s-domain.

Differentiation in s

$$x(t) \ltrans \hat{X}(s)\\ -t x(t) \ltrans \deriv{\hat{X}}{s}$$

x(t) = \frac{1}{2\pi i} \oint \hat{X}(s) e^{st} ds

te^{-at}u(t) \ltrans \frac{1}{(s+a)^2}.

Conjecture: terms of the form t^n e^{-at} u(t) and their anticausal counterparts are the only kinds that can be combined (subject to matching RoCs) to produce rational transforms. This means that the impulse response of any rational transfer function must be a sum of such terms.

In differential equations, you studied simple and multiple roots (which correspond to simple/multiple poles in our vernacular).

Example: s + 1 + \frac{1}{s+3}; the s term corresponds to a unit doublet. On one side, you have the delta, step, ramp, quadratic; on the other side, you've got the doublet, the second derivative of the delta, etc. The delta is u_0(t), the doublet is u_1(t), the step is u_{-1}(t), etc. If the transform is not strictly proper, you have a polynomial in s.

Method 1: non-transform method. If a delta goes into the system, what comes out? h. If the unit step goes in, we get u \star h.

Method 2: transform method. Partial fractions and the rest; consistent with the result of method 1.

Integration in time/transform domain

Just relabel variables, and it becomes self-evident.

$$x(t) \ltrans \hat{X}(s)\\ \int_{-\infty}^t x(t^\prime) dt^\prime \ltrans \frac{1}{s}\hat{X}(s)$$

Steady-State & Transient Response of LTI Systems

Exactly as expected. Note that the second term dies out because of the pole of the system. With a BIBO-stable system, the input pole, being to the right of the rightmost pole of the system, dominates the output.
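The RC example above can be checked numerically: the impulse response of H(s) = \frac{1}{RCs+1} should match \frac{1}{RC}e^{-t/(RC)}u(t). A sketch using scipy's continuous-time tools (RC = 0.5 is an arbitrary value of mine):

```python
import numpy as np
from scipy.signal import TransferFunction, impulse

# Series-RC lowpass: H(s) = 1/(RCs + 1); expected impulse response
# (1/RC) e^{-t/RC} u(t). RC = 0.5 is my own illustrative choice.
RC = 0.5
sys = TransferFunction([1.0], [RC, 1.0])
t, h = impulse(sys, T=np.linspace(0, 3, 200))
print(np.max(np.abs(h - np.exp(-t / RC) / RC)))   # small numerical error only
```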
EE 120: Signals and Systems

April 19, 2012.

Transient/Steady-State Wrap-up

Let's talk a bit about a causal BIBO-stable system, which is usually the case in practical applications. It has a rational transfer function, so it is usually a ratio of two polynomials in s. We're not going to be too concerned about the zeros of the system, so we'll write the factored denominator in terms of the poles of the system.

Assume all poles are simple and all poles are in the left half-plane. Also assume the transfer function is strictly proper.

To this system, I apply a one-sided (causal) complex exponential signal. What is the output? Transforms and multiplications; the eigenfunction property (plus transient terms).

True for any BIBO-stable system: you can evaluate the Laplace transform on the i\omega axis and get the Fourier transform for that particular \omega.

What happens to all the terms involving the residues R_k? These, collectively, compose your transient response. The last term (the one resulting from the input)? It doesn't die out: steady-state.

What this says is that the system cannot distinguish between e^{i\omega_0 t} and its truncated cousin e^{i\omega_0 t}u(t) if we wait long enough, i.e. transients become insignificant. The only portion of the response that remains is the one corresponding to e^{i\omega_0 t}. Notice that the pole of the input is to the right of the rightmost pole of the system.

Important: all poles of the system are in the left half-plane, and the pole of the input is on the i\omega axis, which means it's to the right of the rightmost pole (and of course the system is causal). Therefore the pole of the input will dominate the response.

The eigenfunction property applies to the steady-state solution. This can also be extended to sinusoids.

Likely a good time to move to the unilateral Laplace transform and how we can use it to solve ordinary LDEs.

Unilateral Laplace Transform & linear, constant-coefficient differential equations with non-zero initial conditions

Whenever you have nonzero initial conditions, you need to truncate.
The trick used: multiply by the unit step, then take the Laplace transform. This is effectively the same as taking the unilateral Laplace transform.

\hat{\mathcal{X}}(s) = \int_{0^-}^\infty x(t) e^{-st} dt. A lot of textbooks only deal with the unilateral transform because they're interested in causal systems. As are we, in this context.

If I am looking at the unilateral Laplace transform of \dot{x}, one additional term appears. If we integrate by parts, we can see what this term must be. In the bilateral case, we evaluated uv at both infinities; the second term (i.e. \int v du) required that this product evaluate to zero at the infinities, otherwise the integral would not converge. In the unilateral case, we therefore have an additional term: -x(0^-).

Zero-state, zero-input method. Remember: this is different from transient and steady-state. Best not to think of these at the same time.

Method 2: use the unilateral Laplace transform. Note that if a signal is causal, its unilateral Laplace transform is the same as its bilateral Laplace transform.

EE 120: Signals and Systems

April 24, 2012.

DC Motor Control

An application of what we've been studying; a way to review and test fluency with the material. We've got a DC motor whose model is some second-order linear differential equation, with applied torque, damping, and the moment of inertia of the rotor and whatever's hooked up to it. Transfer function?

Feedback to stabilize the system. Place this in a proportional feedback configuration: the only other thing in the feedback loop is K, which is a scalar. Integrator: of the form \frac{1}{s}. What's the transfer function? Characteristic polynomial from differential equations.

K must be positive for BIBO stability. If the roots are complex, stability is guaranteed: the real part of each pole is -\frac{D}{2M} < 0.

Oscillations are what you get when you have complex poles. Underdamping, critical damping, overdamping. Robustness discussion.

Bode plots!
We've got two building blocks:

$$\hat{F}_I(s) = 1 + \frac{s}{\omega_0}\\ F_I(\omega) = \hat{F}_I(i\omega)$$

Asymptotic plot of 20\log\abs{F_I} and \angle F_I. The horizontal axis is a logarithmic axis.

What happens when \omega is very small? Asymptotically zero (dB). At higher frequencies, \omega is large, so the imaginary part will dominate. And so when you take the magnitude, you gain 20 dB every time you increase the frequency by a factor of 10: the slope is 20 dB/dec. \omega_0 is your corner frequency (named for obvious reasons); the 3 dB point is at the corner frequency. This is one of the foundational building blocks for frequency responses on logarithmic scales. The phase changes at roughly 45 degrees per decade (use a factor of 10 in frequency to determine dominance).

This building block is called a regular zero. The terminology is not widely used; Babak learned it from a circuits professor at Caltech (R.D. Middlebrook).

The second building block is \frac{s}{\omega_0}: a simple zero. Claim: all expressions with real roots can be written as combinations of these two.

Inverted zero: 1 + \frac{\omega_0}{s}.
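The two headline numbers for the regular-zero building block (about 3 dB at the corner, 20 dB/decade well above it) can be confirmed directly. A small check of my own, with an arbitrary corner frequency \omega_0 = 10 rad/s:

```python
import numpy as np

# Bode building block F_I(i w) = 1 + i w/w0: magnitude in dB.
# w0 = 10 rad/s is my own arbitrary choice of corner frequency.
w0 = 10.0

def mag_db(w):
    return 20 * np.log10(np.abs(1 + 1j * w / w0))

print(mag_db(w0))                           # ~3.01 dB at the corner frequency
print(mag_db(100 * w0) - mag_db(10 * w0))   # ~20 dB gained per decade, high-frequency asymptote
```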
