# steveWang / Notes

## cs150.html

sums. (Section 2.7.) Based on the combining theorem, which says that $XA + X\bar{A} = X$. Ideally, every row should have just a single value changing, so I use Gray codes (e.g. 00, 01, 11, 10). Graphical representation!
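The Gray-code ordering above (00, 01, 11, 10) can be generated for any width; a quick Python sketch (mine, not from the notes):

```python
def gray_code(n):
    """Return the n-bit Gray code sequence as integers.

    Consecutive values differ in exactly one bit, which is why K-map
    row/column labels use this ordering: adjacent cells then differ
    in a single variable.
    """
    return [i ^ (i >> 1) for i in range(2 ** n)]

# The 2-bit sequence from the notes: 00, 01, 11, 10
print([format(g, "02b") for g in gray_code(2)])  # → ['00', '01', '11', '10']
```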
### CS 150: Digital Design & Computer Architecture

September 18, 2012
Lab this week you are learning about Chipscope. Chipscope is kind of like what it sounds: it allows you to monitor things happening in the FPGA. One of the interesting things about Chipscope is that it's an FSM monitoring stuff in your FPGA; it also gets compiled down, and it changes the location of everything that goes into your chip. It can actually make your bug go away (e.g. timing bugs).
So. Counters. How do counters work? If I've got a 4-bit counter and I'm counting from 0, what's going on here?
A D-ff with an inverter and an enable line? This is a T-ff (toggle flipflop). That'll get me my first bit, but my second bit is slower: $Q_1$ wants to toggle only when $Q_0$ is 1. With subsequent bits, they want to toggle when all lower bits are 1.
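That toggle rule (each bit flips only when all lower bits are 1) is the whole counter; here is a behavioral Python sketch of one clock tick (names are mine, not from the notes):

```python
def tff_counter_step(q):
    """One clock tick of a 4-bit counter built from T-flipflops.

    Bit 0 toggles every tick; each higher bit toggles only when all
    lower bits are 1 (the enable chain described in the notes).
    Conditions are evaluated on the old state, as real flipflops do.
    """
    out = list(q)
    for i in range(4):
        if all(q[j] for j in range(i)):  # vacuously true for bit 0
            out[i] = 1 - q[i]
    return out

q = [0, 0, 0, 0]  # Q0 is the LSB
for _ in range(5):
    q = tff_counter_step(q)
value = sum(b << i for i, b in enumerate(q))
print(value)  # → 5 after five ticks from 0
```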
Counter with en: enable is tied to the toggle of the first bit. Counter with ld: four input bits, four output bits, clock, load. Then we're going to want to do a counter with ld, en, and rst. Put in logic, etc.
Quite common: ripple carry out (RCO), where we AND $Q[3:0]$ and feed this into the enable of $T_4$.
Ring counter (shift register with one-hot output): if reset is low, I just shift this thing around and make a circular shift register. If it's high, I clear the out bit.
Mobius counter: just a ring counter with a feedback inverter in it. It's just going to take whatever state is in there, and after $n$ clock ticks, it inverts itself. So you have $n$ flipflops, and you get $2n$ states.
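The $2n$-state claim is easy to check by simulation; a small Python sketch of the inverted-feedback shift (my model, not from the notes):

```python
def mobius_step(bits):
    """One tick of a Mobius (Johnson) counter: a shift register whose
    feedback path is the *inverted* last bit."""
    return [1 - bits[-1]] + bits[:-1]

n = 4
state = [0] * n
seen = []
while state not in seen:          # walk the cycle until it repeats
    seen.append(state)
    state = mobius_step(state)
print(len(seen))  # → 8, i.e. 2n states for n = 4 flipflops
```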
And then you've got LFSRs (linear feedback shift registers). Given $N$ flipflops, we know that a straight up or down counter will give us $2^N$ states. It turns out that an LFSR gives you almost that (everything but 0). So why do that instead of an up-counter? This can give you a PRNG. Fun times with Galois fields.
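The "almost $2^N$" claim can be seen with a maximal-length 4-bit LFSR; a Fibonacci-style Python sketch (tap choice $x^4 + x^3 + 1$ is my example, not from the notes):

```python
def lfsr_step(state, taps=(3, 2)):
    """One tick of a 4-bit Fibonacci LFSR.

    Taps at bits 3 and 2 correspond to x^4 + x^3 + 1, a primitive
    polynomial, so the sequence visits every nonzero 4-bit state.
    State must never be 0 (the all-zeros state is a fixed point).
    """
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & 0xF

state, seen = 1, set()
while state not in seen:
    seen.add(state)
    state = lfsr_step(state)
print(len(seen))  # → 15, i.e. 2^4 - 1 states; 0 is excluded
```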
Various uses, seeds, high enough periods (Mersenne twisters are higher).
RAM
Remember: decoder, cell array, $2^n$ rows, $2^n$ word lines, some number of bit lines coming out of that cell array for I/O, with output-enable and write-enable.
When output-enable is low, D goes to high-Z. At some point, some external device starts driving some Din (not from memory). Then I can apply a write pulse (write strobe), which causes our data to be written into the memory at this address location. Whatever was driving it releases, so it goes back to high-impedance, and if we turn output-enable on again, we'll see "Din" from the cell array.
During the write pulse, we need Din stable and the address stable. We use a pulse because we don't want to break things; bad things happen otherwise.
Notice: no clock anywhere. Your FPGA (in particular, the block RAM on the ML505) is a little different in that it has registered inputs (addr & data). First off, it's very configurable; there are all sorts of ways you can set it up. Addr in particular goes into a register, comes out of there, and then goes into a decoder before it goes into the cell array. What comes out of that cell array is a little different too: there's a data-in line that goes into a register, and some separate data-out as well, which can be configured in a whole bunch of different ways so that you can do a bunch of different things.
The important thing is that you can apply your address to those inputs, and it doesn't show up until the rising edge of the clock. There's the option of having either registered or non-registered output (non-registered for this lab).
So now we've got an ALU and RAM, and so we can build some simple datapaths. For sure you're going to see on the final (and most likely the midterm) problems like "given a 16-bit ALU and a 1024x16 sync SRAM, design a system to find the largest unsigned int in the SRAM."
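The cycle-by-cycle idea behind that exam problem can be sketched behaviorally; register and signal names here are mine, not from the notes:

```python
def find_max(sram):
    """Behavioral sketch of the exam problem: an address counter walks
    the synchronous SRAM one read per cycle, the ALU does an unsigned
    compare, and a register holds the running maximum."""
    max_reg = 0                    # running-max register (16-bit, unsigned)
    for addr in range(len(sram)):  # address counter
        data = sram[addr]          # registered SRAM read
        if data > max_reg:         # ALU compare drives the load enable
            max_reg = data
    return max_reg

print(find_max([7, 42, 3, 42, 19]))  # → 42
```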
Demonstration of clock cycles, etc. So what's our FSM look like? Either LOAD or HOLD.
On the homework, it did not say sync SRAM. Will probably change.
### CS 150: Digital Design & Computer Architecture

September 20, 2012
Non-overlapping clocks: n-phase means that you've got n different outputs, with at most one high at any time. There's guaranteed dead time between when one goes low and the next goes high.
K-maps
Finding minimal sum-of-products and product-of-sums expressions for functions.

- On-set: all the ones of a function.
- Implicant: one or more circled ones in the on-set.
- Minterm: the smallest implicant you can have; implicants go up by powers of two in the number of ones they contain.
- Prime implicant: one that can't be combined with another (by circling).
- Essential prime implicant: a prime implicant that contains at least one 1 not in any other prime implicant.
- Cover: any collection of implicants that contains all of the ones in the on-set.
- Minimal cover: one made up of the essential prime implicants plus the minimum number of additional implicants.
Hazards vs. glitches. Glitches are when timing issues result in dips (or spikes) in the output; hazards are when they might happen. Completely irrelevant in synchronous logic.
Project
3-stage pipeline MIPS150 processor. Serial port, graphics accelerator. If we look at the datapath elements, the storage elements, you've got your program counter, your instruction memory, register file, and data memory (Figure 7.1 from the book). If you mix that in with Figure 8.28, which talks about MMIO: that data memory is hooked up to an address and data bus, and if you want to talk to a serial port on a MIPS processor (or an ARM processor, or something like that), you don't address a particular port (it's not like x86). Most ports are memory-mapped. You've actually got an MMIO module that is also hooked up to the address and data bus; for some range of addresses, it's the one that handles reads and writes.
You've got a handful of different modules down here, such as a UART receive module and a UART transmit module. In your project, you'll have your personal computer, which has a serial port on it, and that will be hooked up to your project, which contains the MIPS150 processor. Somehow, you've got to be able to handle characters transmitted in each direction.
UART
Common ground, TX on one side connected to the RX port on the other side, and vice versa. A whole bunch more pins in the various connectors. The basic protocol is called RS232, and it's common (people often refer to it by connector name: DB9, rarely DB25); fortunately, we've moved away from this world and use USB. We'll talk about these other protocols later; some are sync, some async. It was the workhorse for a long time and is still all over the place.
You're going to build the UART receiver/transmitter and the MMIO module that interfaces them, and see when something's coming in from software / hardware. We're going to start out with polling; we will implement interrupts later on in the project (for timing and serial IO on the MIPS processor). That's really the hardcore place where software and hardware meet. People who understand how each interface works and how to use them optimally together are valuable and rare.
What you're doing in Lab 4: there are really two concepts, (1) how does serial / UART work and (2) the ready / valid handshake.
On the MIPS side, you've got some addresses. Anything that starts with FFFF is part of the memory-mapped region. In particular, the first four are mapped to the UART: RX control, RX data, TX control, and TX data.
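The polling structure over those four registers looks like this in software; the concrete addresses and bit positions below are invented for the sketch (the notes only say the region starts with FFFF and names the four registers):

```python
# Hypothetical addresses for the four memory-mapped UART registers.
RX_CTRL, RX_DATA, TX_CTRL, TX_DATA = 0xFFFF0000, 0xFFFF0004, 0xFFFF0008, 0xFFFF000C

# Stand-in for the bus: reads/writes to these addresses hit the MMIO module.
mem = {RX_CTRL: 1, RX_DATA: ord("A"), TX_CTRL: 1, TX_DATA: 0}

def getchar():
    """Spin on the RX control bit (data ready), then read RX data."""
    while not (mem[RX_CTRL] & 1):
        pass
    return mem[RX_DATA]

def putchar(byte):
    """Spin on the TX control bit (transmitter idle), then write TX data."""
    while not (mem[TX_CTRL] & 1):
        pass
    mem[TX_DATA] = byte

putchar(getchar())        # echo: receive one byte, send it back
print(chr(mem[TX_DATA]))  # → A
```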
When you want to send something out the UART, you write the byte -- there's just one bit for the control and one byte for data.
Data goes into some FSM system, and you've got an RX shift register and a TX shift register.
There's one other piece of this, which is that inside of here, the thing interfacing to this IO-mapped module uses this ready bit. If you have two modules, a source and a sink (diagram from the document), the source has some data that it is sending out and tells the sink when the data is valid, and the sink tells the source when it is ready. And there's a shared "clock" (baud rate), so this is a synchronous interface.
- Source presents data.
- Source raises valid.
- When ready & valid on the positive clock edge, both sides know the transaction was successful.
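The handshake rule in those three steps boils down to one predicate per clock edge; a tiny Python sketch with a made-up trace (the signal names follow the notes, the trace is mine):

```python
def transfer(source_valid, sink_ready):
    """Ready/valid rule: a transaction happens on a rising clock edge
    exactly when both valid and ready are high on that edge."""
    return bool(source_valid and sink_ready)

# One hypothetical trace, one entry per rising clock edge.
valid = [0, 1, 1, 1, 0]
ready = [1, 0, 1, 1, 0]
fires = [transfer(v, r) for v, r in zip(valid, ready)]
print(fires.count(True))  # → 2 edges where both sides agree
```

Note that neither side can "consume" the data early: the edge where `valid` is up but `ready` is not (and vice versa) simply transfers nothing.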

Whatever order this happens in, the source is responsible for making sure the data is valid.
HDLC? Takes bytes and puts them into packets, ACKs, etc.
Talk about quartz crystals, resonators. $\pi \cdot 10^7$.
So: before I let you go, parallel load, n bits in, serial out, etc.
### UART, MIPS and Timing

September 25, 2012
Timing: motivation for the next lecture (pipelining). There are a lot of online resources (resources, period) on MIPS. You should have lived and breathed this thing during 61C. For sure, you've got your 61C lecture notes and CS150 lecture notes (both from last semester). There's also the green card (reference), and there's obviously the book. You should have tons of material on the MIPS processor out there.
So, from last time: we talked about a universal asynchronous receiver transmitter. On your homework, I want you to draw a couple of boxes (control and datapath; they exchange signals). The datapath is mostly shift registers. You may be transmitting and receiving at the same time; one may be idle; any mix. Some serial IO lines go to some other system not synchronized with you. We talked about the clock and how much clock accuracy you need: for eight-bit data, you need a couple percent matching. In years past, we've used N64 game controllers as input for the project. All they had was an RC relaxation oscillator. Same format: start bit, two data bits, and stop bit. Data was sent Manchester-coded (0 → 01; 1 → 10). In principle, I can tolerate a 33% error, which is something I can do with an RC oscillator.
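The Manchester mapping mentioned above (0 → 01, 1 → 10) is simple enough to round-trip in a few lines; a Python sketch (function names mine):

```python
def manchester_encode(bits):
    """Manchester coding as in the notes: 0 -> 01, 1 -> 10.
    Every bit carries a transition, which is what lets a sloppy
    RC-oscillator clock stay locked to the data."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def manchester_decode(chips):
    """Invert the mapping, two chips at a time: 10 -> 1, 01 -> 0."""
    return [1 if chips[i] == 1 else 0 for i in range(0, len(chips), 2)]

msg = [1, 0, 1, 1]
print(manchester_decode(manchester_encode(msg)) == msg)  # → True
```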
Also part of the datapath: 8-bit data going in and out. Whatever; it's going to be the MIPS interface. There's a set of memory-mapped addresses on the MIPS, so you can read/write on the serial port. Also some ready/valid stuff up here. Parallel data to/from the MIPS datapath.
MIPS: invented by our own Dave Patterson and John Hennessy from Stanford. They started a company; Kris saw the business plan. It was confidential, but it's now probably safe to talk about. They started off and said they're going to end up getting venture capital, and the VCs are going to take equity, which is going to dilute their equity. Simple solution: don't take venture money? These guys had seen enough of this. By the time they're all done, it would be awesome if they each had 4% of the company, so they set things up so that they started at 4%: they were going to allocate 20% for all of the employees, series A was going to take half, at series B they'd give up a third, and at C, 15%. An interesting bit about MIPS that you didn't learn in 61C.
One of the resources, the green sheet: once you've got this thing, you know a whole bunch about the processor. You know you've got a program counter over here, and you've got a register file in here, and how big it is. Obviously you've got an ALU and some data memory over here, and you know the instruction format. You don't explicitly know that you've got a separate instruction memory (that's a choice you get to make as an implementor); you don't know how many cycles it'll be (or whether it's pipelined, etc). People tend to have separate data and instruction memory for embedded systems, and locally, it looks like separate memories (even on more powerful systems).
We haven't talked yet about what a register file looks like inside. It's not an absolute requirement, but it would be nice if your register file had two read addresses and one write address.
We go from a D-ff, and we know that sticking an enable line on there turns it into a D-ff with enable. Then if I string 32 of these in parallel, I now have a 32-bit register (clocked), with a write-enable on it.
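Putting the pieces together (32 registers, two asynchronous read ports, one synchronous write port, as described above and below), a behavioral Python sketch; the port names and the $zero-stays-zero detail are my additions:

```python
class RegisterFile:
    """Behavioral sketch of the register file from the notes:
    32 registers, two read ports, one write port."""

    def __init__(self):
        self.regs = [0] * 32

    def read(self, ra, rb):
        # Asynchronous reads: values visible without a clock edge.
        return self.regs[ra], self.regs[rb]

    def clock_edge(self, we, wa, wd):
        # Synchronous write: takes effect only on the rising edge, and
        # only when write-enable is high. Register 0 stays 0 (MIPS $zero).
        if we and wa != 0:
            self.regs[wa] = wd

rf = RegisterFile()
rf.clock_edge(we=1, wa=5, wd=123)
print(rf.read(5, 0))  # → (123, 0)
```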
Not going to talk about the ALU today: probably after the midterm.
So now, I've got a set of 32 registers. Considerations of cost: it costs on the order of a hundredth of a cent.
Now I've made my register file. How big is that logic? NAND gates to implement a 5-to-32 decoder.
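The decoder's behavior (5 address bits select exactly one of 32 word lines) is worth pinning down; a Python sketch of the truth function, not a gate-level model:

```python
def decoder_5_to_32(addr):
    """5-to-32 decoder: a 5-bit address selects exactly one of the 32
    word lines (one-hot output), as in the register file's write path."""
    assert 0 <= addr < 32
    return [1 if line == addr else 0 for line in range(32)]

lines = decoder_5_to_32(7)
print(sum(lines), lines.index(1))  # → 1 7  (one-hot: only line 7 is high)
```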
Asynchronous reads; synchronous writes at the rising edge of the clock.
So, now we get back to MIPS review. The MIPS instructions: you've got R/I/J-type instructions. All start with the opcode (same length: 6 bits). A tiny fraction of all 32-bit instructions.
More constraints as we get more stuff. If we then want to constrain this to be a single-cycle processor, then you end up with a pretty clear picture of what you want. The PC doesn't need all 32 bits (the two LSBs are always 0); you can implement the PC with a counter.
The PC goes into instruction memory, and out comes my instruction. If, for example, we want to execute `lw $s0, 12($s3)`, then we look at the green card, and it tells us the RTL.

Adding R-type to the I-type datapath adds three muxes. Not too bad.

### Pipelining

September 27, 2012

Last time, I just mentioned in passing that we will always be reading 32-bit instruction words in this class, but ARM has both 32- and 16-bit instruction sets. MicroMIPS does the same thing.

Optimized for size rather than speed; it will run at 100 MHz (not very good compared to desktop microprocessors made in the same process, which run in the gigahertz range), but it burns 3 mW in $0.06 \text{ mm}^2$. Questions about the power monitor -- you've got a chip that's somehow hanging off of the power plug and manages one way or the other to get a voltage and a current signal. You know the voltage is going to look like a 155 V amplitude sine.

Serial! On your serial line, the thing I want you to play around with is the receiver. We give this to you in the lab, but the thing is, I want you to design the basic architecture.

Start, stop, some bits between. You've got a counter on here that's running at 1024 ticks per bit of input. Eye diagrams.

The notion of factoring state machines. Or you can draw 10000 states if you want.

Something about Kris + scanners: it always ends badly. Will be putting lectures on the course website (and announcing on Piazza). High-level: look at pipelines.

MIPS pipeline

For sure, you should be reading 7.5, if you haven't already. H&H do a great job. It's a slightly different way of looking at pipelines, which is probably inferior, but it's different.

First off, suppose I've got something like my Golden Bear power monitor, and $f = (A+B)C + D$.
It's going to give me an ALU that does addition, an ALU that does multiplication, and then an ALU that does addition again, and that will end up in my output register.

There is a critical path (how fast can I clock this thing?). For now, assume "perfect" fast registers. This, however, is a bad assumption.

So let's talk about propagation delay in registers.

Timing & Delay (H&H 3.5; Fig 3.35-36)

Suppose I have a simple edge-triggered D flipflop. These things come with some specs on the input and output, and in particular, there is a setup time ($t_{\mathrm{setup}}$) and a hold time ($t_{\mathrm{hold}}$).

On the FPGA, these are each like 0.4 ns, whereas in 22nm, these are more like 10 ps.

And then the output is not going to change immediately (it's going to remain constant for some period of time before it changes): $t_{ccq}$ is the minimum time from clock to contamination (change) in Q, and then there's a maximum called $t_{pcq}$, the worst case from clock to stable Q. These are just parameters that you can't control (aside from choosing a different flipflop).

So what do we want to do? We want to combine these flipflops through some combinational logic with some propagation delay ($t_{pd}$) and see what our constraints are going to be on the timing.

Once the output is stable ($t_{pcq}$), it has to go through my combinational logic ($t_{pd}$), and then, counting backwards, I've got $t_{\mathrm{setup}}$; that overall has to be less than my cycle. This tells you how complex your logic can be, and how many pipeline stages you need. Part of the story of selling microprocessors was clock speed. Some of the people who got bachelors in EE cared, but people only really bought the higher clock speeds. So there'd be like 4 NAND gate delays, and that was it.
One of the reasons why Intel machines have such incredibly deep pipelines: everything was cut into pieces so they could have these clock speeds.

So. $t_{pd}$ on your Xilinx FPGA for block RAM, which you care about, is something like 2 ns from clock to data. 32-bit adders are also on the order of 2 ns. What you're likely to end up with is a 50 MHz part. I also have to worry about fast combinational logic -- what happens if, as the rising edge goes high, my new input contaminates and messes up this register before its hold time has passed? We need $t_{ccq} + t_{pd} > t_{hold}$, necessarily, so we need $t_{ccq} > t_{hold}$ for a good flipflop (consider shift registers, where we have basically no propagation delay between stages).

Therefore $t_{pcq} + t_{\mathrm{setup}} + t_{pd} < t_{\mathrm{cycle}}$.

What does this have to do with the flipflop we know about? If we look at the flipflop that we've built in the past (with inverters, controlled buffers, etc), what is $t_{\mathrm{setup}}$? We have several delays; $t_{\mathrm{setup}}$ should ideally let D propagate to X and Y. How long is the hold afterwards? You'd like $D$ to be constant for an inverter delay (so that it can stop having an effect). That's pretty stable. $t_{hold}$ is something like the delay of an inverter (if you want to be really safe, you'd say twice that number). $t_{pcq}$: assuming we have valid setup, the D value will be sitting on Y, and we've got two inverter delays; $t_{ccq}$ is also two inverter delays.

A good midterm-like question for you: if I have a flipflop with some characteristic setup and hold time, and I put a delay of 1 ps on the input and call this a new flipflop, how does that change any of these things? It can make $t_{hold}$ negative. How do I add more delay? Just add more inverters in the front. Hold time can in fact go negative. There's a lot of 141-style stuff in here that you can play with.
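Those two constraints turn directly into arithmetic; a Python sketch using the ballpark numbers from the notes (~2 ns block-RAM clock-to-data, ~2 ns through a 32-bit adder, ~0.4 ns setup; illustrative only, since routing and other delays are ignored):

```python
def max_clock_mhz(t_pcq_ns, t_pd_ns, t_setup_ns):
    """Setup constraint from the notes: t_pcq + t_setup + t_pd < t_cycle.
    Returns the fastest legal clock in MHz for the given delays."""
    t_cycle_ns = t_pcq_ns + t_pd_ns + t_setup_ns
    return 1000.0 / t_cycle_ns

def hold_ok(t_ccq_ns, t_cd_ns, t_hold_ns):
    """Hold constraint: the earliest the next input can change
    (t_ccq plus the logic's contamination delay) must exceed t_hold."""
    return t_ccq_ns + t_cd_ns > t_hold_ns

print(round(max_clock_mhz(2.0, 2.0, 0.4)))  # → 227
```

Real parts land much lower (the notes say ~50 MHz) once routing and the rest of the path are counted; the point is only that the cycle time is the sum of the three terms.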
Given that, you have to deal with the fact that you've got this propagation time and the setup time. Cost of pipelined registers.

Critical path time, various calculations.

## cs_h195.html

### CS H195: Ethics with Harvey

September 17, 2012

Lawsuit to get records about the NSA's surveillance information.

Video games affecting people, evidently.

The government subpoenaed Twitter to give up people's tweets.

Records can be subpoenaed in a court case, etc. We'll see how this plays out. Today, in today's Daily Cal: UCB suing big companies. Universities do research, etc. Back in the day, core memory meant people paid money to IBM and MIT. Berkeley holds a bunch of patents. Non-software seems reasonable.

Important point: the burst of genius is very rarely true. Enabling technologies have reached the point of making things feasible. The usual story about inventions. The single-use flash bulb in a camera came before the sustainable light bulb. The steam engine. Some inventions aren't like that; some really do just come to somebody (velcro, xerography), with nobody else working on them. More often, everyone is thinking about this stuff.

IP. A patent is the right to develop an invention, to produce things dependent on an invention. Copyright is not about invention; it's about creative and artistic works. And there, if you have an idea and write about it, other people are allowed to use your ideas, but not your words. Trademark: you know what it is; you can register one; people are not allowed to use it in ways that might confuse people. You can in principle make a vacuum cleaner called "Time". How close do things have to be to raise a lawsuit? There was a lawsuit between Apple Computer and Apple Records.
Apple Computer later did get into music, which caused a later round of battling.

Personal likeness: I can't take a picture of you and publish it, with certain exceptions. Most important for famous people. Funny rules: newsworthiness -- news photographers are allowed to take pictures of newsworthy people.

Trade secrets: if a company has secrets, and you are a competing company, you may not send a spy to extract these secrets.

House ownership. There are places where people have had houses for millennia. Patents and copyrights are not like that: they are not a right. Those things are bargains between creators and society. The purpose to society is that these eventually belong to the public. One of the readings talks about a different history of patents, quoting Italian legal scholars, and if that's correct, patents were supposed to be permanent ownership. Why might it be good to society? There used to be people who made new inventions. Guilds. Hard to join, and you would be a slave for a while. The master would teach the apprentice the trade, and the advantage was that it reduced competition. The trouble is that there is a long history of things people used to be able to do that we can't anymore. Textbook example: Stradivarius violins.

Nonetheless, nobody knows how Stradivarius made violins. Stories about how to make paints of particular colors. That's what the patent system is trying to avoid: describe how the invention works so that someone in the field can create it. By making this disclosure, you are given a limited-term exclusive right to make these.

The thing is, sooner or later, your technology is going to be obsolete. It's to your advantage to have a clear legal statement.

Patent treaties. It used to be that if you invented something important, you'd hire a bunch of lawyers.

Until recently, software was not patentable. AT&T wanted to patent the setuid bit.
In those days, you could not patent any math or software or algorithms.

Patents stifling innovation in the field. When you file a patent application -- let's say they deny the patent. You would like to fall back on trade secrecy. Patent applications are secret until approved. Startups doomed. It wouldn't matter if the term were short compared to the innovation cycle of the industry.

Another thing in the Constitution is that treaties take precedence over domestic laws.

So let's talk about copyrights! So. Nobody says let's do away with copyright altogether. Copyright (at its worst) is less socially harmful than patents because it's so specific. Again, copyrights are a bargain. It started in Britain, between the King and the printers. The printers wanted the exclusive right to the things they printed; the King wanted the printers to be censors. Originally it was not authors who had copyright, but the publisher. Often creators of rights will sell the rights to publishers.

This is where computers come in. How do you sell to the world? You used to need a big company with facilities to create copies and distribute them widely. Self-publishing makes work available to everyone. Important: it's rarely the author who complains about copyrights; it's usually publishers.

There's always been piracy, but it was limited historically by analog media losing information when copying.

The term of copyright has gotten longer and longer. There was a lawsuit about the most recent extension: in effect, it makes copyright permanent, against the Constitution. Ironic, because copyright law as it is now would have made much of what made Disney rich copyrighted. There are a lot of exceptions to copyright law. Fair use: e.g., you cannot write a Harry Potter novel, but you can write a Harry Potter parody. Famous case: Gone with the Wind, about how wonderful life was for the owners of slaves.
Someone wrote a book (a retelling from a slave's point of view); it was ruled fair use (political commentary, protected by free speech).

Stallman actually invented a system that has 5 different categories of work. Even Stallman doesn't say to ditch copyright. Hardly any musicians make any money selling music, because their contracts say that they make a certain percentage of net proceeds. The way musicians survive is concerts and, ironically, selling concert CDs. Stallman says to make music players have a money button that sends money directly to the musician.

### CS H195: Ethics with Harvey

September 24, 2012

A vastly oversimplified picture of moral philosophy. It leaves out a lot.

So Socrates famously says "to know the good is to desire the good", by which he means that if you really understand what's in your own interest, it's going to turn out to be the right thing. Counter-intuitive, since we've probably encountered situations in which we think what's good for us isn't good for the rest of the community.

They ended up convicting Socrates, and he was offered the choice between exile from Athens and death -- he chose death because he felt that he could not exist outside of his own community. His most famous student was Plato, who started an Academy (Socrates just wandered around, living hand to mouth) and took in students (one of whom was Aristotle). If you're scientists or engineers, you've been taught to make fun of Aristotle, since he said that heavier objects fall faster than light objects, and famously, Galileo took two objects, dropped them, and they hit the ground at the same time.

It's true that some of the things Aristotle said about the physical world have turned out not to be right.
But it's important to understand that, in terms of the physical world, he did not have the modern idea of trying to make a universal theory that explains everything.

Objects falling in an atmosphere with friction behaving differently from planets orbiting the sun? Perfectly fine with Aristotle.

One of the things Aristotle knew? When you see a plate of donuts, you know perfectly well that it's just carbs and fat and you shouldn't eat them, but you do anyway. Socrates explains that as "you don't really know through and through that it is bad for you", and Aristotle doesn't like that explanation: knowing what to do and actually doing it are two different things. He took that in two directions: the action syllogism (transitivity), extended so that the conclusion of a syllogism can be an action. That part is not important to us; what is important to us is that he introduces the idea of virtues. A virtue is not an understanding of what's right, but a habit -- like a good habit you get into.

Aristotle lists a bunch of virtues, and in all cases he describes a virtue as a midpoint between two extremes (e.g. courage between cowardice and foolhardiness, or honesty as a middle ground between dishonesty and saying too much).

You'd better have good habits, since you don't have time in real crises to think. So Aristotle's big on habits. And he says that you learn the virtues through being a member of a community and through the role you play in that community. He lived in a time when people inherited roles a lot. The argument goes a little like this. What does it mean to be a good person? Hard question. What does it mean to be a good carpenter? Much easier: a good carpenter builds stuff that holds together and looks nice, etc. What are the virtues that lead to being a good carpenter? Also easy: patience, care, measurement, honesty, etc. Much easier than what makes a good person.
Aristotle's going to say that the virtues of being a good person are precisely the virtues you learn in social practices from people older than you who are masters of the practice. One remnant of that in modern society is martial arts instruction. When you go to a martial arts school and say you want to learn, one of the first things you learn is respect for your instructor, and you're supposed to live your life in a disciplined way; you're not learning skills so much as habits. That's like what Aristotle would say about any practice. There's not so much of that today: when you're learning to be a computer scientist, there isn't a lot of instruction in "here are the habits that make you a (morally) good computer scientist".

Kant was not a communitarian: he was more of a "we can figure out the right answer to ethical dilemmas" person. He has an axiom system, just like in mathematics: with a small number of axioms, you can prove things. He claims just one axiom, which he describes in multiple ways.

Categorical imperative number one: treat people as ends, not means. This is the grown-up version of the golden rule. Contracts are all right as long as both parties have their needs met and the exchange is not too unequal.

Second version: universalizability. An action is good if it is universalizable. That means: if everybody did it, would it work? The textbook example is "you shouldn't tell lies". The only reason telling lies works is that people usually tell the truth, and so people are predisposed to thinking that a statement is probably true. If everyone told lies, then we'd be predisposed to disbelieve statements, and lying would no longer be effective.

There's a third one, which BH can never remember, which is much less important. Kant goes on to prove theorems to resolve moral dilemmas.

A problem for Kant: A runs past you into the house.
B comes up with a gun and asks you where A is. Kant suggests something along the lines of misleading B.

Axiomatic: resolve ethical problems through logic, by proving what you want to do. Very popular among engineers, mainly through the work of Rawls, who talks about the veil of ignorance. You have to imagine yourself looking at life on Earth, not knowing in what social role you're going to be born. Rawls thinks that from this perspective, you have to root for the underdog when situations come up, because in any particular thing that comes up, the harm to the rich person is going to be less than the gains of the poor person (in terms of total wealth, total needs). You're going to worry about being on the side of the underdog, etc. There's more to Rawls: taking into account how things affect all the different constituencies.

Another descendant of Plato is the utilitarians. One of the reasons it's important for you to understand this chart: when you don't think about it too hard, you use utilitarian principles, which is sometimes bad. Utilitarians talk about the greatest good for the greatest number.

Back to something from this class: what if I illegally download some movie? Is that okay? How much do I benefit, and how much is the movie-maker harmed? Not from principled arguments, which is what Kant wants you to do, but from nuts and bolts: who benefits how much, each way.

Putting that a different way, Kantians are interested in what motivates your action, why you did it. Utilitarians are interested in the result of your action. One thing that makes utilitarianism hard is that you have to guess at what will probably happen.

Now I want to talk to you about MacIntyre. I gave you a lot of reading, probably the hardest reading in the course. He talks like a philosopher. He uses "desert" as what you deserve (the noun of deserve).
Life-changing for BH when he came across MacIntyre; passing it on to you as a result.

He starts by asking you to imagine an aftermath in which science is blamed and destroyed. A thousand years later, some people digging through the remains of our culture read about this word "science": it's all about understanding how the physical world works, and they want to revive the practice. They dig up books by scientists, read and memorize bits of them, analyze them, have discussions. The people who do this call themselves scientists, because they're studying science.

We, from our perspective, would say that isn't science at all -- you don't just engage with books; you engage with the physical world through experiments. Those imagined people a millennium from now have lost the practice. They think they're following a practice, but they have no idea what it's really like. MacIntyre argues that this is us with ethics.

The equivalent of that catastrophe, according to MacIntyre, is Kant. Kant, really more than anyone else, brought the modern era into being. Why? Because in the times before Kant, a lot of arguments, not only about ethics but also about the physical world, were resolved by religious authority. Decisions were made based on someone's interpretation of the Bible, e.g.

Kant claims to be a Christian, but he thinks the way we understand God's will is by applying the categorical imperative. Instead of asking a priest what to do, we reason it out. We don't ask authorities; we work it out. Also, he starts this business of ethical dilemmas. Everybody in the top half of the chart talks in terms of the good life. Even Socrates, who thinks you can know what to do, talks about the good life too. So ethics is not about "what do I do in this situation right now", but rather the entirety of one's life and what it means to live a good life.
Kant and Mill have no sense of life as a flow; rather, moments of decision. This is what MacIntyre calls the ethical equivalent of WW3: at that point, we lost the thread, since we stopped talking about the good life. Now, it wasn't an unmitigated disaster, since it gave us modern liberal society -- liberal not in the American sense of voting for Democrats, but in the sense that your life goals are up to you as an individual, and the role of society is to build infrastructure and to keep people from getting in each other's way, stopping people from interfering with one another. I can, say, have some sexual practice different from yours. That was a long time coming. In our particular culture, the only thing that's bad is having sex with children, as far as I can tell -- as long as it doesn't involve you messing up someone else's life, e.g. rape. As long as it involves two (or more?) consenting adults, that's okay.

MacIntyre says that there are things that came with Kant that mean we can't just turn back to being Aristotelian. The people who lived the good life were male Athenian citizens. They had wives who weren't eligible, and they had slaves who did most of the grunt work. Male Athenian citizens could spend their time walking around chatting with Socrates because they were supported by slavery. And nobody wants to go back to that. There's no real way to go back to being Aristotelian without giving up modern civil rights.

So. One of the things I really like about MacIntyre is the example of wanting to teach a child how to play chess, but the child isn't particularly interested. He is, however, interested in candy. You say: every time you play with me, I'll give you a piece of candy. If you win, two pieces. And I will play in a way that's difficult but still possible for you to beat me. So, MacIntyre says, this child is now motivated to play, and to play well.
But he's also motivated to cheat, if he can get away with it. So let's say this arrangement goes on for some time, and the kid gets better at it. What you hope is that the child reaches a point where the game is valuable in itself: he or she sees playing chess as rewarding (as an intellectual challenge). When that happens, cheating becomes self-defeating.

While the child is motivated by external goods (rewards, money, fame, whatever), the child is not part of the community of practice. But once the game becomes important -- the internal benefits motivate him -- then he does feel like part of the community. There's a huge chess community with complicated infrastructure, ratings, etc. That's a community with a practice, and it has virtues (some of which may be unique to chess, but maybe not -- e.g. planning ahead). Honesty, of course; patience; personal improvement.

And the same is true of most things human beings do. Not everything. MacIntyre raises the example of advertising. What are the virtues of this practice? Well: appealing to people in ways they don't really see; suggesting things that aren't quite true without saying them. He lists several virtues that advertising people have, and these virtues don't generalize. They're not part of being a good person; they're not even compatible with being a good person. So advertising is different from the virtues of normal practices.

Having advertising writers is one of the ways in which MacIntyre thinks we've lost the thread. The reason we have them is that we hold up in our society the value of furthering your own ambition and getting rich -- not getting rich by doing something that's good anyway, but just getting rich. That's an external motivation rather than an internal one.

We talk about individuals pursuing their own ends.
We glorify -- take as an integral part of our society -- individuals pursuing their own ends. In a modern understanding of ethics, you approach each new situation as if you've never done anything before. You don't learn from experience; you learn from rules. The result may be the same in each individual situation, but it leads to you thinking differently. You don't think about building good habits in this context.

A lot of you probably exercise (unlike me). Maybe you do it because it's fun, but maybe you also do it because it only gets harder as you get older, and you should get in the habit to keep it up. In that area, you build habits. But for writing computer programs, we tell you about rules (don't have concurrency violations), and, I guess implicitly, we say that taking 61B is good for you because you learn to write bigger programs. Still true -- it's still a practice with virtues.

Two things. First, that sort of professional standard of work is a pretty narrow ethical issue: they don't teach you to worry about, say, the privacy implications for third parties. Second, when people say they have an ethical dilemma, they think about it as a single decision. A communitarian would reject all that ethical-dilemma framing. Dilemmas have bad outcomes regardless of what you choose. Consider Greek tragedies. When Oedipus finds himself married to his mother, it's game over: a whole series of bad things happens to him, and there's not much he can do about it on an incident-by-incident basis. The problem is a fatal flaw in his character early on (along with some ignorance), and no system of ethics is going to lead Oedipus out of this trap. What you have to do is try not to get into traps, and you do that through prudence and honesty and so on.

Classic dilemma: Heinz is a guy whose wife has a fatal disease that can be cured by an expensive drug, but Heinz is poor.
So he goes to the druggist and says that he can't afford to pay for this drug, but his wife is going to die without it; the druggist says no. So Heinz considers breaking into the drugstore at night and stealing the drug so his wife can live. What should he do, and why? According to the literature, there's no right answer; what matters is your reason.

I'm going to get this wrong, but it's something like this. Stage one: your immediate needs are what matter. Yes, he should steal it, because it's his wife; or no, he shouldn't steal it, because he'd go to prison. Stage two: something like worrying about consequences to individuals -- it might hurt the druggist, or it might hurt his wife. Stage three: something like "well, I have a closer relationship to my wife than to the druggist; I care more about my wife, so I should steal it". Stage four: it's against the law, and I shouldn't break the law. Stage five: like stage three, generalized to the larger community: how much will it hurt my wife not to get the drug? A lot. How much will it hurt the druggist if I steal it? Some money. Stage six is based not on the laws of the community, but rather on the standards of the community. Odd-numbered stages are about specific people; even-numbered stages are about society and rules (from "I'll be punished if I do it", to "it's the law", to "it's what people expect of me").

Right now I'm talking about the literature of moral psychology: people go through these stages (different ways of thinking). The question posed is not "how do people behave", but rather "how should people behave".

This is modern ethical reasoning: take some situation that has no right answer, and split hairs to find a right answer somehow.

Talk about flying: there's a checklist for novices.
Instructors don't use this list: eventually, you get to where you're looking at the entire dashboard at once, and things that aren't right jump out at you.

Another example: take a bunch of chess pieces, put them on the board, have someone look at it for a minute, take the pieces away, and ask the person to reconstruct the board position. Non-chess players are terrible at this (unsurprisingly); chess grandmasters can do it if the position came out of a real game; if you place the pieces randomly, they're just as bad as the rest of us. They're not looking at individual pieces; they're looking at the board holistically (clusters of pieces that interact with each other).

The relevance of this to ethics: we don't always know why we do things. It's very rare that we have the luxury to figure out either what the categorical imperative tells us or what the utilitarian calculation says. Usually we just do something.

BH has weaknesses. He would be stronger if his education had been less about thinking things through and more about doing the right thing.

Our moral training is full of "Shalt Not"s. There's a lot more in the Bible about what not to do than about what to do or how to live the good life (that part of the Bible -- it gets better). We also have laws. They hardly ever say you have to do something (aside from paying taxes); mostly they say what you can't do. They never say how to live the good life. BH thinks that serves us ill. You have to make decisions. Often, what you do is different from what you say you should do.

EE 221A: Linear System Theory

August 23, 2012

Administrivia

Prof.
Claire Tomlin (tomlin@eecs). 721 Sutardja Dai Hall. Somewhat tentative office hours on schedule: T 1-2, W 11-12. http://inst.eecs.berkeley.edu/~ee221a

GSI: Insoon Yang (iyang@eecs). Insoon's office hours: M 1:30-2:30, Th 11-12.

Homeworks typically due on Thursday or Friday.

Intro

Bird's eye view of modeling in engineering + design vs. in science.

then your model will work as expected. Simulation gives you system behavior for a certain set of parameters. Very different, but they complement each other. Analyze simpler models, simulate more complex models.

Linear Algebra

Functions and their properties.

Fields, vector spaces, properties and subspaces.

(Note regarding notation: $\Re^+$ means non-negative reals, as does

Cartesian product: $\{(x,y) \vert x \in X \land y \in Y\}$ (set of ordered n-tuples)

Functions and Vector Spaces

August 28, 2012

OH: M/W 5-6, 258 Cory

Today: beginning of the course: review of lin. alg. topics needed for the

the main things we're going to do is look at properties of linear functions and their representation as multiplication by matrices.

Vector Spaces and Linearity

August 30, 2012

From last time

Subspaces, bases, linear dependence/independence, linearity.
One of the

(if the nullspace only contains the zero vector, we say it is trivial)

$$\mathcal{A}(x_0) = \mathcal{A}(x_1) \iff x_1 - x_0 \in N(\mathcal{A})$$

Matrix Representation of Linear Maps

September 4, 2012

Today

Matrix multiplication as a representation of a linear map; change of basis

From analysis: the supremum is the least upper bound (the smallest $x$ such that $x \ge y \; \forall y \in S$).

Guest Lecture: Induced Norms and Inner Products

September 6, 2012

Induced norms of matrices

The reason that we're going to start talking about induced norms: today

Basically, you get after all this computation that $b_2 = \frac{1}{12} t - \frac{1}{24}$. Same construction for $b_3$.

Singular Value Decomposition & Introduction to Differential Equations

September 11, 2012

Reviewing the adjoint: suppose we have two vector spaces $U, V$; as we did with norms, let us associate a field that is either $\Re$ or

infinitely many points of discontinuity.

Next time we'll talk about Lipschitz continuity.

Existence and Uniqueness of Solutions to Differential Equations

September 13, 2012

Section this Friday only, 9:30-10:30, Cory 299.

Today: existence and uniqueness of solutions to differential equations.

necessarily in the space. Example: any continued fraction.

To show (1), we'll show that this sequence $\{x_m\}$ that we constructed is a Cauchy sequence in a Banach space. Interestingly, it matters what norm you choose.
Proof of Existence and Uniqueness Theorem

September 18, 2012

Today:

• proof of the existence and uniqueness theorem
• [if time] introduction to dynamical systems

The first couple of weeks were review, to build up basic concepts that we'll be drawing upon throughout the course. Either today or Thursday we will launch into linear system theory.

Recall where we were last time. We had the fundamental theorem of differential equations, which says the following: given a differential equation $\dot{x} = f(x,t)$ with initial condition $x(t_0) = x_0$, where $x(t) \in \Re^n$, etc., if $f(\cdot, t)$ is Lipschitz continuous and $f(x, \cdot)$ is piecewise continuous, then there exists a unique solution to the differential equation / initial condition pair (some function $\phi(t)$) wherever you can take the derivative (the solution may not be differentiable everywhere: it loses differentiability at the points where $f$ is discontinuous in time).

We spent quite a lot of time discussing Lipschitz continuity. The job is usually to test both conditions; the first one requires work. We described a popular candidate function by looking at the mean value theorem and applying it to $f$: a norm of the Jacobian of $f$ provides a candidate Lipschitz function, if it works.

We also described local Lipschitz continuity, and often, when using a norm of the Jacobian, that's fairly easy to show.

Important point to recall: a norm of the Jacobian of $f$ provides a candidate Lipschitz function.

Another important point: we can use any norm we want, so we can be creative in our choice of norm when looking for a better bound.

We started our proof last time, and we talked a little about the structure of the proof.
We are going to proceed by constructing a sequence of functions, then show (1) that it converges to a solution, and then show (2) that this solution is unique.

Proof of Existence

We construct the sequence of functions as follows: $x_{m+1}(t) = x_0 + \int_{t_0}^t f(x_m(\tau), \tau) d\tau$. Here we're dealing with an arbitrary interval from $t_1$ to $t_2$, with $t_0 \in [t_1, t_2]$. We want to show that this sequence is a Cauchy sequence, and we're going to rely on our knowledge that the space these functions live in is a Banach space (hence the sequence converges to something in the space).

We have to put a norm on this space of functions, so we'll use the infinity norm. We're not going to prove that the space is complete, but rather state that it's a Banach space. If we show that this is a Cauchy sequence, then the limit of that Cauchy sequence exists in the space. The reason that's interesting is that it's this limit that provides a candidate solution for the differential equation.

We will then prove that this limit satisfies the DE/IC pair. That is adequate to show existence. We'll then go on to prove uniqueness.

Our immediate goal is to show that this sequence is Cauchy, i.e. that $\mag{x_{m+p} - x_m} \to 0$ as $m$ gets large.

First let us look at the difference between $x_{m+1}$ and $x_m$. These are just functions of time, and we can compute $\mag{x_{m+1} - x_m} = \mag{\int_{t_0}^t \parens{f(x_m, \tau) - f(x_{m-1}, \tau)} d\tau}$. Using the fact that $f$ is Lipschitz continuous, this is $\le \int_{t_0}^t k(\tau)\mag{x_m(\tau) - x_{m-1}(\tau)} d\tau$. The Lipschitz function $k$ is piecewise continuous, so it has a supremum on this interval: let $\bar{k}$ be the supremum of $k$ over the whole interval $[t_1, t_2]$. This means that we can rewrite the inequality as $\mag{x_{m+1} - x_m} \le \bar{k} \int_{t_0}^t \mag{x_m(\tau) - x_{m-1}(\tau)} d\tau$.
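As an aside, the iteration just written down is the classical Picard iteration, and it can be sketched numerically. This is a hedged illustration only, not from the lecture: the scalar example $\dot{x} = -x$, the grid size, and the trapezoidal quadrature are my own choices.

```python
import numpy as np

def picard_iterates(f, x0, t0, t1, n_steps=200, n_iters=8):
    """Successive approximations x_{m+1}(t) = x0 + integral from t0 to t of
    f(x_m(tau), tau) dtau, on a grid over [t0, t1], using cumulative
    trapezoidal quadrature for the integral."""
    ts = np.linspace(t0, t1, n_steps)
    xs = [np.full_like(ts, x0, dtype=float)]  # x_0(t) is the constant function x0
    for _ in range(n_iters):
        g = f(xs[-1], ts)  # integrand evaluated along the previous iterate
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(ts))))
        xs.append(x0 + integral)
    return ts, xs

# Example: dx/dt = -x, x(0) = 1, whose exact solution is exp(-t).
ts, xs = picard_iterates(lambda x, t: -x, x0=1.0, t0=0.0, t1=1.0)
sup_err = np.max(np.abs(xs[-1] - np.exp(-ts)))  # infinity-norm distance to exp(-t)
```

For this particular example each iterate is a Taylor partial sum of $e^{-t}$, so the sup-norm error shrinks factorially with the iteration count, in line with the $\frac{M\parens{\bar{k}\abs{t-t_0}}^m}{m!}$ bound.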
Now we have a bound that relates the distance between $x_{m+1}$ and $x_m$ to the distance between $x_m$ and $x_{m-1}$. By induction, you can relate the distance between subsequent elements to distances further down the sequence.

Let us do two things: sort out the integral on the right-hand side, then look at arbitrary elements beyond an index.

We know that $x_1(t) = x_0 + \int_{t_0}^t f(x_0, \tau) d\tau$, and that $\mag{x_1 - x_0} \le \int_{t_0}^{t} \mag{f(x_0, \tau)} d\tau \le \int_{t_1}^{t_2} \mag{f(x_0, \tau)} d\tau \defequals M$. From the above inequalities, $\mag{x_2 - x_1} \le M \bar{k}\abs{t - t_0}$. Now I can look at the general bounds: $\mag{x_3 - x_2} \le \frac{M\bar{k}^2 \abs{t - t_0}^2}{2!}$, and in general, $\mag{x_{m+1} - x_m} \le \frac{M\parens{\bar{k} \abs{t - t_0}}^m}{m!}$.

What I've been doing up to now is looking at a particular value of $t$, with $t_1 < t < t_2$; if we look at $x_{m+1} - x_m$ as a function, we need a function norm.

So try to relate this to the norm $\mag{x_{m+1} - x_m}_\infty$. Can what we've done so far give us a bound on the difference between two functions? It can, because the infinity norm of a function is the maximum value that the function assumes (the maximum vector norm over all points $t$ in the interval we're interested in). If we let $T$ be the length of our larger interval, $t_2 - t_1$, we can use the previous pointwise result: if the pointwise norm is less than a bound for all relevant $t$, then its maximum value must also be less than that bound, so $\mag{x_{m+1} - x_m}_\infty \le \frac{M\parens{\bar{k} T}^m}{m!}$.

That gets us on the road we want to be on, since we now have a bound on the function norm. We can now go back to where we started: given an index $m$, we can construct a bound on all later elements in the sequence.

$$\mag{x_{m+p} - x_m}_\infty = \mag{x_{m+p} - x_{m+p-1} + x_{m+p-1} - \ldots - x_m}_\infty = \mag{\sum_{k=0}^{p-1} (x_{m+k+1} - x_{m+k})}_\infty \le M \sum_{k=0}^{p-1} \frac{(\bar{k}T)^{m+k}}{(m+k)!}$$

We recall a few things from undergraduate calculus: the Taylor expansion of the exponential function, and the inequality $(m+k)! \ge m!\,k!$.

With these, we can say that $\mag{x_{m+p} - x_m}_\infty \le M\frac{(\bar{k}T)^m}{m!} e^{\bar{k} T}$. What we'd like to show is that this can be made arbitrarily small as $m$ gets large. We study this bound as $m \to \infty$, and we recall (e.g. via the Stirling approximation) that the factorial grows faster than the exponential. That is enough to show that $\{x_m\}_0^\infty$ is Cauchy. Since it lives in a Banach space (which we do not prove, since that is beyond our scope), it converges to a function (call it $x^\ell$) in the same space.

Now we just need to show that the limit $x^\ell$ solves the differential equation (and the initial condition). Go back to the sequence that determines $x^\ell$: $x_{m+1} = x_0 + \int_{t_0}^t f(x_m, \tau) d\tau$. We've proven that this sequence converges to $x^\ell$. What we want to show is that $\int_{t_0}^t f(x_m, \tau) d\tau \to \int_{t_0}^t f(x^\ell, \tau) d\tau$, which would be immediate if $f$ were continuous. It is clear that $x^\ell$ satisfies the initial condition by the construction of the sequence, but we need to show that it satisfies the differential equation. Conceptually, this is probably more difficult than what we've just done (establishing bounds, Cauchy sequences): thinking about what that limit function is and what it means for it to satisfy the differential equation.

Now you can basically use some of the machinery we've been using all along to show this: the difference between these two integrals goes to $0$ as $m$ gets large.
$$\mag{\int_{t_0}^t \parens{f(x_m, \tau) - f(x^\ell, \tau)} d\tau}
\\ \le \int_{t_0}^t k(\tau) \mag{x_m - x^\ell} d\tau \le \bar{k}\mag{x_m - x^\ell}_\infty T
\\ \le \bar{k} M e^{\bar{k} T} \frac{(\bar{k} T)^m}{m!}T
$$

Thus $x^\ell$ solves the DE/IC pair, and the solution is $\Phi = x^\ell$, i.e. $\dot{x}^\ell(t) = f(x^\ell(t), t) \; \forall t \in [t_1, t_2] - D$ and $x^\ell(t_0) = x_0$.

To show that this solution is unique, we will use the Bellman-Gronwall lemma, which is very important. It is used ubiquitously when you want to show that two functions of time are equal to each other: it is the candidate mechanism for doing that.

Bellman-Gronwall Lemma

Let $u, k$ be real-valued, positive, piecewise continuous functions of time, and let $c_1 \ge 0$ and $t_0 \ge 0$ be constants. Then: if $u(t) \le c_1 + \int_{t_0}^t k(\tau)u(\tau) d\tau$, then $u(t) \le c_1 e^{\int_{t_0}^t k(\tau) d\tau}$.

Proof (of B-G)

Take $t > t_0$ WLOG.

$$U(t) \defequals c_1 + \int_{t_0}^t k(\tau) u(\tau) d\tau
\\ u(t) \le U(t) \implies \dot{U}(t) = k(t)u(t) \le k(t)U(t)
\\ \deriv{}{t}\parens{U(t)e^{-\int_{t_0}^t k(\tau) d\tau}} = \parens{\dot{U}(t) - k(t)U(t)}e^{-\int_{t_0}^t k(\tau) d\tau} \le 0 \text{ (then integrate this derivative, noting that } U(t_0) = c_1 \text{)}
\\ u(t) \le U(t) \le c_1 e^{\int_{t_0}^t k(\tau) d\tau}
$$

Using this to prove uniqueness of DE/IC solutions

Here is how we're going to use the lemma to prove uniqueness. We have the solution we constructed, $\Phi$, and someone else gives us a solution $\Psi$, constructed via a different method. We show that these must be equal. Since they're both solutions, they both satisfy the DE/IC pair; write each in integral form and take the norm of the difference.
$$\mag{\Phi - \Psi} \le \bar{k} \int_{t_0}^t \mag{\Phi - \Psi} d\tau \quad \forall t_0, t \in [t_1, t_2]$$

From the Bellman-Gronwall lemma (applied with $c_1 = 0$), we can rewrite this inequality as $\mag{\Phi - \Psi} \le c_1 e^{\bar{k}(t - t_0)}$. Since $c_1 = 0$, this norm is less than or equal to 0. By positive definiteness, the norm must equal 0, and so the two functions are equal to each other.

Reverse time differential equation

We think about time as monotonic (either increasing or decreasing, usually increasing). Suppose instead that time is decreasing, and we have $\dot{x} = f(x,t)$: we can explore existence and uniqueness going backwards in time. Suppose we have a time variable $\tau$ which runs backwards from $t_0$, defined by $\tau \defequals t_0 - t$, and define the solution to the differential equation backwards in time as $z(\tau) = x(t)$ for $t < t_0$. Deriving the reverse-time derivative, the equation is just $-f$; we use $\bar{f}$ to represent this function: $\deriv{}{\tau}z = -\deriv{}{t}x = -f(x, t) = -f(z, t_0 - \tau) \defequals \bar{f}(z, \tau)$.

If I solve the reverse time differential equation, I get a corresponding backwards solution. Concluding statement: we can think about solutions forwards and backwards in time. Existence of a unique solution forward in time implies existence of a unique solution backward in time (and vice versa). A consequence: solutions can't cross themselves in time-invariant systems.

Introduction to dynamical systems

September 20, 2012

Suppose we have equations $\dot{x} = f(x, u, t)$, $\fn{f}{\Re^n \times \Re^{n_i} \times \Re_+}{\Re^n}$, and $y = h(x, u, t)$, $\fn{h}{\Re^n \times \Re^{n_i} \times \Re_+}{\Re^{n_o}}$. We define $n_i$ as the dimension of the input space, $n_o$ as the dimension of the output space, and $n$ as the dimension of the state space.
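The reverse-time remark above can be illustrated numerically: integrate forward from $t_0$, then integrate the reverse-time equation back from the endpoint, and (up to discretization error) you recover the initial state. A rough forward-Euler sketch; the particular dynamics and step count are made-up choices for illustration, not from the lecture.

```python
import numpy as np

def euler(f, x0, t0, t1, n=20000):
    """Forward-Euler sketch of dx/dt = f(x, t) from t0 to t1.
    When t1 < t0 the step h is negative, i.e. we integrate in reverse time."""
    ts = np.linspace(t0, t1, n)
    h = ts[1] - ts[0]
    x = x0
    for t in ts[:-1]:
        x = x + h * f(x, t)
    return x

f = lambda x, t: -x + np.sin(t)        # hypothetical example dynamics
x1 = euler(f, x0=1.0, t0=0.0, t1=2.0)  # forward in time to t = 2
x0_rec = euler(f, x1, t0=2.0, t1=0.0)  # reverse time: recover x(0)
```

With exact solutions the recovery would be exact; that is the uniqueness statement read backwards in time.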
We've looked at the form, and if we specify a particular $\bar{u}(t)$ over some time interval of interest, then we can plug it into the right-hand side of this differential equation. Typically we do not supply a particular input; but, thinking about solutions to this differential equation, for now let's suppose that it's specified.

Suppose we have some feedback function of the state. If $u$ is specified, then as long as the resulting right-hand side satisfies the conditions of the existence and uniqueness theorem, we have a differential equation we can solve.

Another example: instead of a differential equation (which corresponds to continuous time), we might have a difference equation (which corresponds to discrete time).

Example: a dynamic system represented by an LRC circuit. One practical way to define the state $x$ is as a vector of the elements whose derivatives appear in our differential equation. Not formal, but practical for this example.

Notions of discretizing.

What is a dynamical system?

As discussed in the first lecture, we consider time $\tau$ to be a privileged variable. Based on our definition of time, the inputs and outputs are all functions of time.

Now we're going to define a dynamical system as a 5-tuple: $(\mathcal{U}, \Sigma, \mathcal{Y}, s, r)$ (input space, state space, output space, state transition function, output map).

We define the input space as the set of input functions over time into an input set $U$, i.e. $\mathcal{U} = \{\fn{u}{\tau}{U}\}$; typically $U = \Re^{n_i}$.

We also define the output space as the set of output functions over time into an output set $Y$, i.e. $\mathcal{Y} = \{\fn{y}{\tau}{Y}\}$; typically $Y = \Re^{n_o}$.

$\Sigma$ is our state space: defined not as a set of functions, but as the actual state space.
Typically $\Sigma = \Re^n$, and we can then think about the function $x(t) \in \Sigma$; $\fn{x}{\tau}{\Sigma}$ is called the state trajectory.

$s$ is called the state transition function because it defines how the state changes in response to time, the initial state, and the input: $\fn{s}{\tau \times \tau \times \Sigma \times \mathcal{U}}{\Sigma}$. Usually we write this as $x(t_1) = s(t_1, t_0, x_0, u)$, where $u$ is the function $u(\cdot) |_{t_0}^{t_1}$. This is important: it is coming towards how we define state. The only things you need to get to the state at the new time are the initial state, the inputs, and the dynamics.

Finally, we have the output map (sometimes called the readout map) $r$: $\fn{r}{\tau \times \Sigma \times U}{Y}$. That is, we can think about $y(t) = r(t, x(t), u(t))$. There's something fundamentally different between $r$ and $s$: $s$ depends on the function $u$, whereas $r$ depends only on the current value of $u$ at a particular time.

$s$ captures dynamics, while $r$ is static. Remark: $s$ has dynamics (memory) -- it depends on previous times -- whereas $r$ is static: everything it depends on is at the current time (memoryless).

In order to be a dynamical system, this five-tuple must satisfy two axioms:

• The state transition axiom: $\forall t_1 \ge t_0$, given $u, \tilde{u}$ that are equal to each other over a particular time interval, the state transition functions must be equal over that interval, i.e. $s(t_1, t_0, x_0, u) = s(t_1, t_0, x_0, \tilde{u})$. This requires that $s$ not depend on the input outside of the time interval of interest.

• The semigroup axiom: suppose you start a system at $t_0$ and evolve it to $t_2$, and you're considering the state. You have an input $u$ defined over the whole time interval.
If you were to look at an intermediate point $t_1$ and compute the state at $t_1$ via the state transition function, then we can split our time interval into two intervals and compute the result either way. Stated as the following: $s(t_2, t_1, s(t_1, t_0, x_0, u), u) = s(t_2, t_0, x_0, u)$.

When we talk about a dynamical system, it has to satisfy these two axioms.

Response function

Since we're interested in the outputs and not the states, we can define what we call the response map. It's not considered part of the definition of a dynamical system because it can be easily derived: it's the composition of the state transition function and the readout map, i.e. $y(t) = r(t, x(t), u(t)) = r(t, s(t, t_0, x_0, u), u(t)) \defequals \rho(t, t_0, x_0, u)$. This is an important function because it is used to define properties of a dynamical system. Why is that? We've said that states are somehow mysterious -- not something we typically care about; typically we care about the outputs. Thus we define properties like linearity and time invariance via the response map.

Time Invariance

We define a time-shift operator $\fn{T_\tau}{\mathcal{U}}{\mathcal{U}}$, $\fn{T_\tau}{\mathcal{Y}}{\mathcal{Y}}$, by $(T_\tau u)(t) \defequals u(t - \tau)$; namely, the value of $T_\tau u$ at $t$ is that of the old signal at $t-\tau$.

A time-invariant (dynamical) system is one in which the input space and output space are closed under $T_\tau$ for all $\tau$, and $\rho(t, t_0, x_0, u) = \rho(t + \tau, t_0 + \tau, x_0, T_\tau u)$.

Linearity

A linear dynamical system is one in which the input, state, and output spaces are all linear spaces over the same field $\mathbb{F}$, and the response map $\rho$ is a linear map of $\Sigma \times \mathcal{U}$ into $\mathcal{Y}$.
This is a strict requirement: you have to check that the response map satisfies these conditions. A question that comes up: why do we define linearity of a dynamical system in terms of linearity of the response and not of the state transition function? It goes back to a system being intrinsically defined by its inputs and outputs. There are often many different ways to define the states, and typically we can't see all of them. It's accepted that when we talk about a system and think about its I/O relations, it makes sense to define linearity in terms of this memory function of the system, as opposed to the state transition function.

Let's make a few remarks about this: zero-input response and zero-state response. If we look at the zero element in our spaces (so we have a zero vector), then superposition implies that the response at time $t$ is equal to the zero-state response (the response given that we started at the zero state) plus the zero-input response (the response to the zero input).

That is: $\rho(t, t_0, x_0, u) = \rho(t, t_0, \theta_x, u) + \rho(t, t_0, x_0, \theta_u)$ (from the definition of linearity).

The second remark is that the zero-state response is linear in the input, and similarly, the zero-input response is linear in the state.

One more property of dynamical systems before we finish: equivalence (a property derived from the definition). Take two dynamical systems $D = (U, \Sigma, Y, s, r)$ and $\tilde{D} = (U, \bar{\Sigma}, Y, \bar{s}, \bar{r})$. We say $x_0 \in \Sigma$ is equivalent to $\tilde{x}_0 \in \bar{\Sigma}$ at $t_0$ if $\forall t \ge t_0$, $\rho(t, t_0, x_0, u) = \tilde{\rho}(t, t_0, \tilde{x}_0, u)$ for every input $u$. If this holds for every $x_0$ (paired with some $\tilde{x}_0$, and vice versa), the two systems are equivalent.
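The zero-state / zero-input decomposition can be checked numerically. Here is a minimal sketch (not from the notes: the scalar system, constants, and test input are my own choices), using the closed-form response of $\dot{x} = ax + bu$:

```python
import math

# Scalar LTI system xdot = a*x + b*u, y = x. Its response map has the
# closed form rho(t, t0, x0, u) = e^{a(t-t0)} x0 + int_{t0}^t e^{a(t-s)} b u(s) ds.
# We evaluate the integral with the trapezoid rule and check the
# zero-state / zero-input decomposition. All constants are assumed values.

def rho(t, t0, x0, u, a=-0.5, b=2.0, steps=2000):
    """Response of xdot = a x + b u via variation of constants."""
    h = (t - t0) / steps
    integral = 0.0
    for k in range(steps + 1):
        s = t0 + k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid weights
        integral += w * math.exp(a * (t - s)) * b * u(s)
    return math.exp(a * (t - t0)) * x0 + h * integral

u = lambda s: math.sin(3 * s)
full = rho(2.0, 0.0, 1.5, u)
zero_state = rho(2.0, 0.0, 0.0, u)               # start at theta_x = 0
zero_input = rho(2.0, 0.0, 1.5, lambda s: 0.0)   # apply theta_u = 0
assert abs(full - (zero_state + zero_input)) < 1e-9
```

Because the integral is linear in $u$ and the decomposition is additive, the assertion holds to within floating-point error; doubling the input likewise doubles the zero-state response.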
Linear time-varying systems

September 25, 2012

Recall that the state transition function gives the state at the current time as a function of the initial time, initial state, and inputs. Suppose you have a differential equation; how do you acquire the state transition function? Solve the differential equation.

For a general dynamical system, there are different ways to get the state transition function. This is an instantiation of a dynamical system, and we're going to get the state transition function by solving the differential equation / initial condition pair.

We're going to call $\dot{x}(t) = A(t)x(t) + B(t)u(t)$ a vector differential equation (VDE) with initial condition $x(t_0) = x_0$.

So that requires us to think about solving that differential equation. Do a dimension check to make sure we know the dimensions of the matrices: $x \in \Re^n$, so $A \in \Re^{n \times n}$. We could define the matrix function $A$, which takes intervals of the real line and maps them over to matrices. As a function, $A$ is a piecewise-continuous matrix function in time; the entries are piecewise-continuous scalars in time. We would like to get at the state transition function; to do that, we need to solve the differential equation.

Let's assume for now that $A, B, U$ are given (part of the system definition).

Piecewise continuity is the easy part; we can use the induced norm of $A$ for a Lipschitz condition. Since this induced norm is piecewise continuous in time, this is a fine bound, and therefore $f$ is globally Lipschitz continuous.

We're going to back off for a bit and introduce the state transition matrix, as background for solving the VDE. We're going to introduce a matrix differential equation, $\dot{X} = A(t)X$ (where $A(t)$ is the same as before).
I'm going to define $\Phi(t, t_0)$ as the solution to the matrix differential equation (MDE) for the initial condition $\Phi(t_0, t_0) = 1_{n \times n}$. That is, $\Phi$ is the solution to the $n \times n$ MDE when my differential equation starts out at the identity matrix.

Let's first talk about properties of this matrix $\Phi$ just from the definition we have.

1. If you go back to the vector differential equation and drop the term that depends on $u$ (either consider $B$ to be 0, or the input to be 0), the solution of $\dot{x} = A(t)x(t)$ is given by $x(t) = \Phi(t, t_0)x_0$.
2. The semigroup property, so called since it's reminiscent of the semigroup axiom: $\Phi(t, t_0) = \Phi(t, t_1)\Phi(t_1, t_0) \;\forall t, t_0, t_1 \in \Re^+$.
3. $\Phi^{-1}(t, t_0) = \Phi(t_0, t)$.
4. $\text{det}\,\Phi(t, t_0) = \exp\parens{\int_{t_0}^t \text{tr}\parens{A(\tau)} d\tau}$.

Here let's talk about some machinery we can now invoke when we want to show that two functions of time are equal to each other when they're both solutions to a differential equation. You can simply show, by the existence and uniqueness theorem (assuming it applies), that they satisfy the same initial condition and the same differential equation. That's an important point, and we tend to use it a lot.

(i.e. when faced with showing that two functions of time are equal to each other, you can show that they both satisfy the same initial condition and the same differential equation [as long as the differential equation satisfies the hypotheses of the existence and uniqueness theorem])

Obvious, but good to state.

Note: the initial condition doesn't have to be the initial condition given; it just has to hold at one point in the interval.
Pick your point in time judiciously.

Proof of (2): check $t = t_1$. (3) follows directly from (2). (4) you can look at if you want; it gives you a way to compute $\Phi(t, t_0)$. We've introduced a matrix differential equation and an abstract solution.

Consider (1). $\Phi(t, t_0)$ is a map that takes the initial state and transitions it to the new state. Thus we call $\Phi$ the state transition matrix because of what it does to the states of this vector differential equation: it transfers them from their initial value to their final value, and it transfers them through matrix multiplication.

Let's go back to the original differential equation. Claim: the solution to that differential equation has the following form: $x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)u(\tau) d\tau$. Proof: we can use the same machinery. If someone gives you a candidate solution, you can easily show that it is the solution.

Recall the Leibniz rule, which we'll state in general as follows: $\pderiv{}{z} \int_{a(z)}^{b(z)} f(x, z) dx = \int_{a(z)}^{b(z)} \pderiv{}{z}f(x, z) dx + \pderiv{b}{z} f(b, z) - \pderiv{a}{z} f(a, z)$.

$$
\dot{x}(t) = A(t) \Phi(t, t_0) x_0 + \int_{t_0}^t
\pderiv{}{t} \parens{\Phi(t, \tau)B(\tau)u(\tau)} d\tau +
\pderiv{t}{t}\parens{\Phi(t, t)B(t)u(t)} - \pderiv{t_0}{t}\parens{\ldots}
\\ = A(t)\Phi(t, t_0)x_0 + \int_{t_0}^t A(t)\Phi(t,\tau)B(\tau)u(\tau)d\tau + B(t)u(t)
\\ = A(t)\parens{\Phi(t, t_0) x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)
u(\tau) d\tau} + B(t) u(t) = A(t)x(t) + B(t)u(t)
$$

$x(t) = \Phi(t,t_0)x_0 + \int_{t_0}^t \Phi(t,\tau)B(\tau)u(\tau) d\tau$ is good to remember.

Not surprisingly, it depends on the input function over an interval of time.
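The variation-of-constants formula above can be sanity-checked numerically. A sketch under assumed constants (not from the notes): in the scalar case with constant input, $\Phi(t,s) = e^{a(t-s)}$ gives a closed form we can compare against brute-force Euler integration of the ODE:

```python
import math

# For xdot = a x + b u with constant input u(t) = u0, the formula
# x(t) = Phi(t,0) x0 + int_0^t Phi(t,s) b u0 ds reduces to
# x(t) = e^{a t} x0 + (b u0 / a)(e^{a t} - 1). All constants are assumed.
a, b, u0, x0, T = -1.0, 0.5, 4.0, 1.0, 3.0

closed_form = math.exp(a * T) * x0 + (b * u0 / a) * (math.exp(a * T) - 1.0)

x, steps = x0, 100000
h = T / steps
for _ in range(steps):
    x += h * (a * x + b * u0)   # forward-Euler step of xdot = a x + b u0

assert abs(x - closed_form) < 1e-3
```

With 100000 Euler steps the two values agree to roughly the Euler method's $O(h)$ error, which is well inside the tolerance.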
The differential equation is changing over time; therefore the system itself is time-varying. There is in general no way that it will be time-invariant, since the equation that defines its evolution is changing. You test time-invariance or time-variance through the response map. But is it linear? You have the state transition function, so we can compute the response function (recall: the readout map composed with the state transition function) and ask whether this is a linear map.

Linear time-invariant systems

September 27, 2012

Last time, we talked about the time-varying differential equation, and we expressed it as $R(\cdot) = \bracks{A(\cdot), B(\cdot), C(\cdot), D(\cdot)}$. We used the state transition matrix to show that the solution is given by $x(t) = \Phi(t, t_0) x_0 + \int_{t_0}^t \Phi(t, \tau) B(\tau) u(\tau) d\tau$. The integrand contains the state transition matrix, and we haven't talked about how we would compute this matrix. In general, computing the state transition matrix is hard. But there's one important class where the computation becomes much simpler than usual: that where the system does not depend on time.

Linear time-invariant case: $\dot{x} = Ax + Bu$, $y = Cx + Du$, $x(t_0) = x_0$. It does not matter at what time we start. Typically, WLOG, we use $t_0 = 0$ (we can't do this in the time-varying case).

Aside: Jacobian linearization

In practice, it is generally the case that no one presents you with a model that looks like this. Usually, you derive this (usually nonlinear) model through physics and whatnot. What can I do to come up with a linear representation of that system? What is typically done is an approximation technique called Jacobian linearization.

So suppose someone gives you a nonlinear system and an output equation, and you want to come up with some linear representation of the system.
Two points of view: we could look at the system, apply a particular input, and solve the differential equation ($u^0(t) \mapsto x^0(t)$, the nominal input and nominal solution). That would result in a solution (a state trajectory, in general). Now suppose that we for some reason want to perturb that input ($u^0(t) + \delta u(t)$, the perturbed input), where $\delta u$ is in general a small perturbation. What this results in is a new state trajectory, which we'll write as $x^0(t) + \delta x(t)$, the perturbed solution.

Now we can derive from that what we call the Jacobian linearization. If we apply the nominal input, the solution satisfies $\dot{x}^0 = f(x^0, u^0, t)$, with $x^0(t_0) = x_0$.

For the perturbed input, $\dot{x}^0 + \dot{\delta x} = f(x^0 + \delta x, u^0 + \delta u, t)$, where $(x^0 + \delta x)(t_0) = x_0 + \delta x_0$. Now I'm going to look at these two and perform a Taylor expansion about the nominal input and solution. Thus $f(x^0 + \delta x, u^0 + \delta u, t) = f(x^0, u^0, t) + \pderiv{}{x} f(x, u, t)\vert_{(x^0, u^0)}\delta x + \pderiv{}{u}f(x,u,t)\vert_{(x^0, u^0)} \delta u + \text{higher order terms}$ (recall that we also called $\pderiv{}{x}$ by the name $D_1$, i.e. the derivative with respect to the first argument).

What I've done is expand the right-hand side of the differential equation. Thus $\dot{\delta x} = \pderiv{}{x} f(x, u, t)\vert_{(x^0, u^0)} \delta x + \pderiv{}{u} f(x, u, t)\vert_{(x^0, u^0)}\delta u + \text{h.o.t.}$ If $\delta u, \delta x$ are small, then the higher-order terms are approximately zero and can be dropped, which gives us an approximate first-order linear differential equation. This gives us a linear time-varying approximation of the dynamics of this perturbation vector, in response to a perturbation input.
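The perturbation equations suggest computing the Jacobians by finite differences. A sketch, assuming a toy system $f(x, u) = (x_2, \sin x_1 + u)$ of my own choosing (a normalized pendulum-like system), linearized about the origin where the analytic answer is $A = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$, $B = \begin{bmatrix}0 \\ 1\end{bmatrix}$:

```python
import math

# Finite-difference Jacobian linearization of xdot = f(x, u) about (x0, u0).
# The dynamics f and the step size eps are assumed values, not from the notes.
def f(x, u):
    return [x[1], math.sin(x[0]) + u]

def jacobians(f, x0, u0, eps=1e-6):
    n = len(x0)
    fx0 = f(x0, u0)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x0)
        xp[j] += eps
        fp = f(xp, u0)
        for i in range(n):
            A[i][j] = (fp[i] - fx0[i]) / eps   # column j: df/dx_j
    B = [(fi - f0i) / eps for fi, f0i in zip(f(x0, u0 + eps), fx0)]
    return A, B

A, B = jacobians(f, [0.0, 0.0], 0.0)
assert abs(A[0][1] - 1.0) < 1e-4 and abs(A[1][0] - 1.0) < 1e-4
assert abs(B[1] - 1.0) < 1e-4
```

Forward differences give $O(\epsilon)$ accuracy, which is plenty here since $\cos(0) = 1$ is the exact $A_{21}$ entry.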
That's what the Jacobian linearization gives you: the perturbation away from the nominal (we linearized about a bias point).

Consider $A(t)$ to be the Jacobian matrix with respect to $x$, and $B(t)$ to be the Jacobian matrix with respect to $u$. Remember that this is an approximation, and if your system is really nonlinear and you perturb the system a lot (stray too far from the bias point), then this linearization may cease to hold.

Linear time-invariant systems

We are motivated by the fact that we have a solution to the time-varying equation which depends on the state transition matrix, and the state transition matrix is right now an abstract thing we don't have a way of computing. Let's go to a more specific class of systems: that where $A, B, C, D$ do not depend on time. We know that this system is linear (we don't know yet that it is time-invariant; we have to find the response function and show that it satisfies the definition of a time-invariant system), so this still requires proof.

Since these don't depend on time, we can use some familiar tools (e.g. Laplace transforms), remembering what taking the Laplace transform of a derivative is. Denote by $\hat{x}(s)$ the Laplace transform of $x(t)$. The Laplace transform of the system is therefore $s\hat{x}(s) - x_0 = A\hat{x}(s) + B\hat{u}(s)$; $\hat{y}(s) = C\hat{x}(s) + D\hat{u}(s)$ (the output equation contains no derivative). The first equation becomes $(sI - A)\hat{x}(s) = x_0 + B\hat{u}(s)$, and we'll leave the second equation alone.

Let's first consider $\dot{x} = Ax$, $x(0) = x_0$. I could have done the same thing, except my right-hand side doesn't depend on $B$: $(sI - A)\hat{x}(s) = x_0$.
Let's leave that for a second and come back to it, and make the following claim: the state transition matrix for $\dot{x} = Ax$, $x(t_0) = x_0$ is $\Phi(t,t_0) = e^{A(t-t_0)}$, which is called the matrix exponential, defined as $e^{A(t-t_0)} = I + A(t-t_0) + \frac{A^2(t-t_0)^2}{2!} + \ldots$ (the Taylor expansion of the exponential function).

We just need to show that this, using the definitions we had last day, is indeed the state transition matrix for that system. We could go back to the definition of the state transition matrix for the system, or we could go back to the state transition function for the vector differential equation.

From last time, we know that the solution to $\dot{x} = A(t)x$, $x(t_0) = x_0$ is given by $x(t) = \Phi(t, t_0)x_0$; here, we are claiming that $x(t) = e^{A(t - t_0)} x_0$, where $x(t)$ is the solution to $\dot{x} = Ax$ with initial condition $x_0$.

First show that it satisfies the vector differential equation: $\dot{x} = \pderiv{}{t}\exp\parens{A(t-t_0)} x_0 = \parens{0 + A + A^2(t - t_0) + \ldots}x_0 = A\parens{I + A(t-t_0) + \frac{A^2}{2!}(t-t_0)^2 + \ldots} x_0 = Ae^{A(t-t_0)} x_0 = Ax(t)$, so it satisfies the differential equation. Checking the initial condition, we get $e^{A \cdot 0}x_0 = I x_0 = x_0$. We've shown that this represents a solution to this time-invariant differential equation; by the existence and uniqueness theorem, it is the solution.

Through this proof, we've shown a couple of things: the derivative of the matrix exponential, and its value at $t - t_0 = 0$. So now let's go back and reconsider its infinite series representation and classify some of its other properties.

Properties of the matrix exponential

1. $e^0 = I$
2. $e^{A(t+s)} = e^{At}e^{As}$
3. $e^{(A+B)t} = e^{At}e^{Bt}$ iff $\comm{A}{B} = 0$
4. $\parens{e^{At}}^{-1} = e^{-At}$ (these properties hold in general whether you're looking at $t$ or $t - t_0$).
5. $\deriv{e^{At}}{t} = Ae^{At} = e^{At}A$ (i.e. $\comm{e^{At}}{A} = 0$).
6. Suppose $X(t) \in \Re^{n \times n}$ with $\dot{X} = AX$, $X(0) = I$; then the solution of this matrix differential equation and initial condition pair is given by $X(t) = e^{At}$. Proof in the notes; very similar to what we just did (a more general proof that the state transition matrix is just given by the matrix exponential).

Calculating $e^{At}$, given $A$

What this is now useful for is making the state transition concept more concrete. It's still a little abstract, since we're still considering the exponential of a matrix.

The first point is that using the infinite series representation to compute $e^{At}$ is in general hard. It would be doable if you knew $A$ were nilpotent ($A^k = 0$ for some $k \in \mathbb{Z}$), but that's not always the case, and it would not be feasible for $k$ large.

The way one usually computes the state transition matrix $e^{At}$ is as follows. Recall: $\dot{X}(t) = AX(t)$, with $X(0) = I$. We know from what we've done before (property 6) that $X(t) = e^{At}$. We also know that $(sI - A)\hat{X}(s) = I$, so $\hat{X}(s) = (sI - A)^{-1}$. That tells me that $e^{At} = \mathcal{L}^{-1}\parens{(sI - A)^{-1}}$. That gives us a way of computing $e^{At}$, assuming we have a way to compute a matrix's inverse and an inverse Laplace transform. This is what people usually do, and most algorithms approach the problem this way. It is generally hard to compute the inverse and the inverse Laplace transform.

(This requires proof regarding why $sI - A$ always has an inverse, and why that inverse is the Laplace transform of $e^{At}$.)

Cleve Moler started LINPACK (the linear algebra package that became the engine behind MATLAB).
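As a sanity check on the series definition (an illustrative sketch; the particular $2 \times 2$ matrix is an assumed example), a nilpotent $A$ makes the series terminate, so we can evaluate it exactly:

```python
# Truncated power series e^{At} = I + At + (At)^2/2! + ... for a nilpotent
# matrix (A^2 = 0), where the series stops after the linear term, giving
# e^{At} = I + A t exactly. The matrix A is an assumed example.
A = [[0.0, 1.0],
     [0.0, 0.0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=20):
    """Truncated power series for e^{At}; exact here since A is nilpotent."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term (At)^k / k!
    for k in range(1, terms):
        term = mat_mul(term, [[a * t / k for a in row] for row in A])
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

t = 2.5
E = expm_series(A, t)
assert E == [[1.0, t], [0.0, 1.0]]
```

The same matrix reappears shortly via the inverse-Laplace route, which yields the same $\begin{bmatrix}1 & t \\ 0 & 1\end{bmatrix}$.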
He is famous in computational linear algebra; see his paper "Nineteen dubious ways to compute the matrix exponential". It is actually a hard problem in general, involving the factoring of $n$-th degree polynomials.

If we were to consider our simple nilpotent case, we compute $sI - A = \begin{bmatrix}s & -1 \\ 0 & s\end{bmatrix}$. We can immediately write down its inverse as $\begin{bmatrix}\frac{1}{s} & \frac{1}{s^2} \\ 0 & \frac{1}{s}\end{bmatrix}$. The inverse Laplace transform takes no work; it's simply $\begin{bmatrix}1 & t \\ 0 & 1\end{bmatrix}$.

In the next lecture (and the next series of lectures) we will be talking about the Jordan form of a matrix. We have a way to compute $e^{At}$: we'll write $A = TJT^{-1}$. In its simplest case, $J$ is diagonal. Either way, all of the work is in exponentiating $J$; you still end up doing something that's the inverse Laplace transform of $(sI - J)^{-1}$.

We've shown that for a linear TI system $\dot{x} = Ax + Bu$, $y = Cx + Du$ ($x(0) = x_0$), we have $x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)} Bu(\tau) d\tau$. We proved it last time, but you can check that this satisfies the differential equation and initial condition.

From that, you can compute the response function and show that it's time-invariant. Let's conclude today's class with a planar inverted pendulum: call the angle of rotation away from the vertical $\theta$, with mass $m$, length $\ell$, and torque $\tau$. The equation of motion is $m\ell^2 \ddot{\theta} - mg\ell \sin \theta = \tau$. Perform Jacobian linearization about the trivial trajectory in which the pendulum is straight up ($\theta \equiv 0$, measured from the vertical). Therefore $\delta\theta = \theta$, and the linearization is $m\ell^2 \ddot{\theta} - mg\ell\theta = \tau$. With $u = \frac{\tau}{m\ell^2}$ and $\Omega^2 = \frac{g}{\ell}$, this becomes $\dot{x}_1 = x_2$ and $\dot{x}_2 = \Omega^2 x_1 + u$.
With output $y = \theta = x_1$, the system is $\dot{x}_1 = x_2$, $\dot{x}_2 = \Omega^2 x_1 + u$, $y = x_1$. Stabilization of the system via feedback by considering poles of the Laplace transform, etc.: $\frac{\hat{y}}{\hat{u}} = \frac{1}{s^2 - \Omega^2} = G(s)$ (the plant).

In general, it's not a good idea to cancel an unstable pole and then use feedback. In the notes, this is some controller $K(s)$. If we look at the open-loop transfer function ($K(s)G(s) = \frac{1}{s(s+\Omega)}$), then $u = \frac{s-\Omega}{s}\bar{u}$, so $\dot{u} = \dot{\bar{u}} - \Omega\bar{u}$ (assume zero initial conditions on $u, \bar{u}$). If we define a third state variable now, $x_3 = \bar{u} - u$, then that tells us that $\dot{x}_3 = \Omega \bar{u}$. Here, I have $A = \begin{bmatrix} 0 & 1 & 0 \\ \Omega^2 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}$, $B = \begin{bmatrix}0 \\ 1 \\ \Omega\end{bmatrix}$, $C = \begin{bmatrix}1 & 0 & 0\end{bmatrix}$, $D = 0$. We're out of time today, but we'll solve this at the beginning of Tuesday's class.

Solve for $x(t) = \begin{bmatrix}x_1 & x_2 & x_3\end{bmatrix}^T$. We have a few approaches:

* Using $A,B,C,D$: compute $y(t) = Ce^{At} x_0 + C\int_0^t e^{A(t - \tau)}Bu(\tau) d\tau$. In doing that, we'll need to compute $e^{At}$, and then we have this expression for general $u$: suppose you supply a step input.
* Suppose $\bar{u} = -y = -Cx$. Therefore $\dot{x} = Ax + B(-Cx) = (A - BC)x$. We have a new $A_{CL} = A - BC$, and we can exponentiate this instead.

This foreshadows later material, when we think about control; it introduces the standard notion of feedback for stabilizing systems. Using our newfound knowledge of the state transition matrix for TI systems (how to compute it), see how to compute the response, and see what MATLAB is doing.

fa2012/cs150/10.md
CS 150: Digital Design & Computer Architecture
==============================================
September 20, 2012
------------------

Non-overlapping clocks. n-phase means that you've got n different outputs, and at most one is high at any time. There is a guaranteed dead time between when one goes low and the next goes high.

K-maps
------
Finding minimal sum-of-products and product-of-sums expressions for functions. **On-set**: all the ones of a function; **implicant**: one or more circled ones in the on-set; a **minterm** is the smallest implicant you can have, and implicants go up by powers of two in the number of ones they contain; a **prime implicant** can't be combined with another (by circling); an **essential prime implicant** is a prime implicant that contains at least one one not in any other prime implicant. A **cover** is any collection of implicants that contains all of the ones in the on-set, and a **minimal cover** is one made up of essential prime implicants and the minimum number of other implicants.

Hazards vs. glitches. Glitches are when timing issues result in dips (or spikes) in the output; hazards are when they might happen. Both are completely irrelevant in synchronous logic.

Project
-------
3-stage pipeline MIPS150 processor. Serial port, graphics accelerator. If we look at the datapath elements, the storage elements, you've got your program counter, your instruction memory, register file, and data memory. Figure 7.1 from the book. If you mix that in with figure 8.28, which talks about MMIO, that data memory has an address and data bus that it's hooked up to, and if you want to talk to a serial port on a MIPS processor (or an ARM processor, or something like that), you don't address a particular port (it's not like x86). Most ports are memory-mapped.
You've actually got an MMIO module that is also hooked up to the address and data bus. For some range of addresses, it's the one that handles reads and writes.

You've got a handful of different modules down here, such as a UART receive module and a UART transmit module. In your project, you'll have your personal computer, which has a serial port on it, and that will be hooked up to your project, which contains the MIPS150 processor. Somehow, you've got to be able to handle characters transmitted in each direction.

UART
----
Common ground, TX on one side connected to the RX port on the other side, and vice versa. There are a whole bunch more pins on the various connectors. The basic protocol is called RS232, and it's common (people often refer to it by connector name: DB9, or rarely DB25); fortunately, we've moved away from this world and use USB. We'll talk about these other protocols later; some are synchronous, some asynchronous. RS232 was the workhorse for a long time and is still all over the place.

You're going to build the UART receiver/transmitter and the MMIO module that interfaces them, and see when something's coming in from software / hardware. We're going to start out with polling; we will implement interrupts later on in the project (for timing and serial IO on the MIPS processor). That's really the hardcore place where software and hardware meet. People who understand how each interface works and how to use them optimally together are valuable and rare.

In Lab 4, there are really two concepts: (1) how serial / UART works and (2) the ready / valid handshake.

On the MIPS side, you've got some addresses. Anything that starts with FFFF is part of the memory-mapped region. In particular, the first four addresses are mapped to the UART: they are RX control, RX data, TX control, and TX data.
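A sketch of the serial framing involved (assuming standard 8N1 framing: idle-high line, a low start bit, eight data bits sent LSB-first, and a high stop bit; the notes don't pin down these details):

```python
# 8N1 UART framing sketch. Assumptions: low start bit, LSB-first data,
# high stop bit -- the conventional RS232-style frame.
def uart_frame(byte):
    """Serialize one byte into the 10 line samples of an 8N1 frame."""
    data = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data + [1]                     # start bit, data, stop bit

def uart_unframe(bits):
    """Recover the byte, checking the start and stop bits."""
    assert bits[0] == 0 and bits[9] == 1, "bad framing"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = uart_frame(0x41)        # ASCII 'A'
assert frame == [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert uart_unframe(frame) == 0x41
```

A real receiver would oversample each of these ten bit cells (e.g. sampling mid-cell) rather than receive them as a clean list.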
When you want to send something out the UART, you write the byte -- there's just one bit for the control and one byte for data.

Data goes into some FSM system, and you've got an RX shift register and a TX shift register.

There's one other piece of this, which is that the thing interfacing to this IO-mapped module uses a ready bit. If you have two modules, a source and a sink (diagram from the document), the source has some data that it is sending out and tells the sink when the data is valid, and the sink tells the source when it is ready. And there's a shared "clock" (baud rate), so this is a synchronous interface.

* source presents data
* source raises valid
* when ready & valid on posedge clock, both sides know the transaction was
  successful.

Whatever order this happens in, the source is responsible for making sure the data is valid.

HDLC? Takes bytes and puts them into packets, ACKs, etc.

Talk about quartz crystals, resonators. $\pi \cdot 10^7$.

So: before I let you go: parallel load, n bits in, serial out, etc.

fa2012/cs150/11.md

UART, MIPS and Timing
=====================
September 25, 2012
------------------

Timing: motivation for the next lecture (pipelining). There are lots of online resources (resources, period) on MIPS. You should have lived and breathed this thing during 61C. For sure, you've got your 61C lecture notes and CS150 lecture notes (both from last semester). There's also the green card (reference), and there's obviously the book. You should have tons of material on the MIPS processor out there.

So, from last time: we talked about a universal asynchronous receiver/transmitter. On your homework, I want you to draw a couple of boxes (control and datapath; they exchange signals). The datapath is mostly shift registers.
You may be transmitting and receiving at the same time; one side may be idle; any mix. There are some serial IO lines going to some other system not synchronized with you. We talked about the clock and how much clock accuracy you need: for eight-bit frames, you need the two clock rates to match to within a couple percent. In years past, we've used N64 game controllers as input for the project. All they had was an RC relaxation oscillator. They had the same format: start bit, two data bits, and stop bit. The data was sent Manchester-coded (0 -> 01; 1 -> 10). In principle, I can tolerate something like a 33% error, which is something I can do with an RC oscillator.

Also part of the datapath: 8-bit data going in and out. Whatever it is, it's going to be the MIPS interface: a set of memory-mapped addresses on the MIPS, so you can read/write the serial port. Also some ready/valid stuff up here. Parallel data to/from the MIPS datapath.

MIPS: invented by our own Dave Patterson and John Hennessy from Stanford. They started a company; Kris saw the business plan. It was confidential, but it's now probably safe to talk about. They started off knowing they were going to end up getting venture capital, and the VCs were going to take equity, which was going to dilute their own. The simple solution: don't take venture money. But these guys had seen enough of this. By the time they were all done, it would be awesome if they each had 4% of the company, so they set things up so that they started at 4%: they were going to allocate 20% for all of the employees, series A was going to take half, at series B they'd give up a third, and at series C, 15%. An interesting bit about MIPS that you didn't learn in 61C.

One of the resources, the green sheet: once you've got this thing, you know a whole bunch about the processor. You know you've got a program counter over here, and you've got a register file in here, and how big it is. Obviously you've got an ALU and some data memory over here, and you know the instruction format.
You don't explicitly know that you've got a separate instruction memory (that's a choice you get to make as an implementor); you don't know how many cycles it'll take (or whether it's pipelined, etc). People tend to have separate data and instruction memory for embedded systems, and locally, it looks like separate memories (even on more powerful systems).

We haven't talked yet about what a register file looks like inside. There's no absolute requirement on the register file, but it would be nice if yours had two read addresses and one write address.

We go from a D-ff, and we know that sticking an enable line on there lets us turn this into a D-ff with enable. Then if I string 32 of these in parallel, I now have a 32-bit register (clocked), with a write-enable on it.

We're not going to talk about the ALU today: probably after the midterm.

So now, I've got a set of 32 registers. Considerations of cost: each costs on the order of a hundredth of a cent.

Now I've made my register file. How big is that logic? NAND gates to implement a 5-to-32 decoder.

Asynchronous reads; writes are synchronous, at the rising edge of the clock.

So, now we get back to MIPS review. Among the MIPS instructions, you've got R/I/J-type instructions. All start with the opcode (same length: 6 bits). These are a tiny fraction of all possible 32-bit instructions.

More constraints as we get more stuff. If we then want to constrain this to be a single-cycle processor, then you end up with a pretty clear picture of what you want. The PC doesn't need 32 bits (the two LSBs are always 0); you can implement the PC with a counter.

The PC goes into instruction memory, and out comes my instruction. If, for example, we want to execute `lw $s0, 12($s3)`, then we look at the green card, and it tells us the RTL.

Adding R-type to the I-type datapath adds three muxes. Not too bad.
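A behavioral sketch of such a register file (my own Python model, not project code), with two asynchronous read ports, one synchronous write port, and register 0 hardwired to zero as on MIPS:

```python
# Behavioral model of a 32 x 32-bit register file: reads are combinational
# (no clock), writes take effect only at the rising clock edge, and $0 is
# hardwired to zero. This is an illustrative sketch, not HDL.
class RegisterFile:
    def __init__(self, n=32, width_mask=0xFFFFFFFF):
        self.regs = [0] * n
        self.mask = width_mask

    def read(self, ra, rb):
        """Two asynchronous read ports."""
        return self.regs[ra], self.regs[rb]

    def clock_edge(self, we, wa, wd):
        """Synchronous write on the rising clock edge."""
        if we and wa != 0:              # writes to $0 are ignored
            self.regs[wa] = wd & self.mask

rf = RegisterFile()
rf.clock_edge(we=True, wa=16, wd=0xDEADBEEF)   # write $s0
rf.clock_edge(we=True, wa=0, wd=123)           # ignored: $0 stays zero
a, b = rf.read(16, 0)
assert a == 0xDEADBEEF and b == 0
```

The two read addresses and one write address mirror what an R-type instruction needs in a single cycle: read `rs` and `rt`, write `rd`.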
fa2012/cs150/12.md
Pipelining
==========
September 27, 2012
------------------
Last time, I just mentioned in passing that we will always be reading 32-bit instruction words in this class, but ARM has both 32- and 16-bit instruction sets. MicroMIPS does the same thing.

It's optimized for size rather than speed; it will run at 100 MHz (not very good compared to desktop microprocessors made in the same process, which run in the gigahertz range), but it burns 3 mW in $0.06 \text{mm}^2$. Questions about the power monitor -- you've got a chip that's somehow hanging off of the power plug and manages one way or the other to get a voltage and a current signal. You know the voltage is going to look like a sinusoid with about 155 V amplitude.

Serial! For your serial line, the thing I want you to play around with is the receiver. We give this to you in the lab, but I want you to design the basic architecture.

Start, stop, some bits in between. You've got a counter on here that's running at 1024 ticks per bit of input. Eye diagrams.

The notion of factoring state machines. Or you can draw 10000 states if you want.

Something about Kris + scanners: it always ends badly. I will be putting lectures on the course website (and will announce this on Piazza). At a high level, today we look at pipelines.

MIPS pipeline
-------------

For sure, you should be reading 7.5, if you haven't already. H&H do a great job. It's a slightly different way of looking at pipelines, which is probably inferior, but it's different.

First off, suppose I've got something like my Golden Bear power monitor, and $f = (A+B)C + D$. It's going to give me an ALU that does addition, an ALU that does multiplication, and then an ALU that does addition again, and that will end up in my output register.

There is a critical path (how fast can I clock this thing?). For now, assume "perfect" fast registers.
This, however, is a bad assumption.

So let's talk about propagation delay in registers.

Timing & Delay (H&H 3.5; Fig 3.35, 3.36)
----------------------------------------
Suppose I have a simple edge-triggered D flipflop. These things come with
some specs on the input and output; in particular, there is a setup time
($t_{\mathrm{setup}}$) and a hold time ($t_{\mathrm{hold}}$).

On the FPGA, these are each something like 0.4 ns, whereas in a 22nm
process, they are more like 10 ps.

And then the output is not going to change immediately (it's going to
remain constant for some period of time before it changes): $t_{ccq}$ is
the minimum time from clock to contamination (change) in Q, and $t_{pcq}$
is the maximum (worst-case) time from clock to stable Q. These are just
parameters that you can't control (aside from choosing a different
flipflop).

So what do we want to do? We want to combine these flipflops through some
combinational logic with some propagation delay ($t_{pd}$) and see what our
constraints are going to be on the timing.

Once the output is stable ($t_{pcq}$), it has to go through my
combinational logic ($t_{pd}$), and then, counting backwards, I've got
$t_{\mathrm{setup}}$, and that overall has to be less than my cycle. This
tells you how complex the logic can be, and how many pipeline stages you
need. Part of the story of selling microprocessors was clock speed. Some of
the people who got bachelors in EE cared, but most people only really
bought the higher clock speeds. So there'd be like 4 NAND-gate delays per
stage, and that was it. This is one of the reasons why Intel machines have
such incredibly deep pipelines: everything was cut into pieces so they
could have these clock speeds.

So. $t_{pd}$ on your Xilinx FPGA for block RAM, which you care about, is
something like 2 ns from clock to data. 32-bit adders are also on the order
of 2 ns.
What you're likely to end up with is a 50 MHz part. I also have to worry
about fast combinational logic -- what happens if, as the rising edge goes
high, my new input contaminates and messes up the next register before its
hold time has elapsed? To avoid this we need $t_{ccq} + t_{pd} >
t_{\mathrm{hold}}$, and since the combinational delay can be essentially
zero (consider shift registers, where there is basically no logic between
stages), a good flipflop needs $t_{ccq} > t_{\mathrm{hold}}$.

Therefore $t_{pcq} + t_{\mathrm{setup}} + t_{pd} < t_{\mathrm{cycle}}$.

What does this have to do with the flipflop we know about? If we look at
the flipflop that we've built in the past (with inverters, controlled
buffers, etc.), what is $t_{\mathrm{setup}}$? We have several delays;
$t_{\mathrm{setup}}$ should ideally allow D to propagate to X and Y. How
long is the hold afterwards? You'd like $D$ to be constant for an inverter
delay (so that it can stop having an effect). That's pretty stable.
$t_{\mathrm{hold}}$ is something like the delay of an inverter (if you want
to be really safe, you'd say twice that number). For $t_{pcq}$, assuming we
had a valid setup, the D value will be sitting on Y, and we've got two
inverter delays; $t_{ccq}$ is also two inverter delays.

A good midterm-like question for you: if I have a flipflop with some
characteristic setup and hold time, and I put a delay of 1 ps on the input
and call this a new flipflop, how does that change any of these parameters?
It can make $t_{\mathrm{hold}}$ negative. How do I add more delay? Just add
more inverters at the front. Hold time can in fact go negative. There's a
lot of 141-style stuff in here that you can play with.

Given that, you have to deal with the fact that you've got this propagation
time and the setup time. That's the cost of pipeline registers.

Critical path time, various calculations.
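The two timing constraints can be sanity-checked numerically. A small Python sketch using the ballpark figures quoted above (my arithmetic on rough lecture numbers, not datasheet values):

```python
# Checking the two register-timing constraints numerically.
# The ns figures are the rough FPGA ballpark numbers from above, not datasheet values.
def max_clock_mhz(t_pcq, t_pd, t_setup):
    """Setup constraint: t_pcq + t_pd + t_setup < t_cycle (all in ns)."""
    return 1000.0 / (t_pcq + t_pd + t_setup)

def hold_ok(t_ccq, t_pd_min, t_hold):
    """Hold constraint: t_ccq + t_pd(min) > t_hold (all in ns)."""
    return t_ccq + t_pd_min > t_hold

# Block RAM clock-to-data (~2 ns) feeding a 32-bit adder (~2 ns), with
# ~0.4 ns setup: this single stage alone bounds the clock near 227 MHz;
# a real design with routing and more logic lands much lower (e.g. ~50 MHz).
print(round(max_clock_mhz(2.0, 2.0, 0.4)))  # 227

# Shift-register case: no logic between stages, so t_pd(min) is ~0.
# With t_ccq merely equal to t_hold the check fails, which is why a
# good flipflop needs t_ccq strictly greater than t_hold.
print(hold_ok(0.4, 0.0, 0.4))  # False
```

Note that the hold check contains no clock period: a hold violation cannot be fixed by slowing the clock, only by changing the logic or the flipflops.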
2  fa2012/cs150/3.md
 ... ... @@ -1,5 +1,5 @@ 1 1  CS 150: Digital Design & Computer Architecture 2 -=============================================== 2 +============================================== 3 3  August 28, 2012 4 4  --------------- 5 5  
2  fa2012/cs150/4.md
 ... ... @@ -1,5 +1,5 @@ 1 1  CS 150: Digital Design & Computer Architecture 2 -=============================================== 2 +============================================== 3 3  August 30, 2012 4 4  --------------- 5 5  
2  fa2012/cs150/5.md
 ... ... @@ -1,5 +1,5 @@ 1 1  CS 150: Digital Design & Computer Architecture 2 -=============================================== 2 +============================================== 3 3  September 4, 2012 4 4  ----------------- 5 5  
2  fa2012/cs150/6.md
 ... ... @@ -1,5 +1,5 @@ 1 1  CS 150: Digital Design & Computer Architecture 2 -=============================================== 2 +============================================== 3 3  September 6, 2012 4 4  ----------------- 5 5  
2  fa2012/cs150/7.md
 ... ... @@ -1,5 +1,5 @@ 1 1  CS 150: Digital Design & Computer Architecture 2 -=============================================== 2 +============================================== 3 3  September 11, 2012 4 4  ------------------ 5 5  
2  fa2012/cs150/8.md
 ... ... @@ -1,5 +1,5 @@ 1 1  CS 150: Digital Design & Computer Architecture 2 -=============================================== 2 +============================================== 3 3  September 13, 2012 4 4  ------------------ 5 5  
83  fa2012/cs150/9.md
@@ -0,0 +1,83 @@
CS 150: Digital Design & Computer Architecture
==============================================
September 18, 2012
------------------

In lab this week you are learning about Chipscope. Chipscope is kind of
what it sounds like: it allows you to monitor things happening in the
FPGA. One of the interesting things about Chipscope is that since it's an
FSM monitoring stuff in your FPGA, it also gets compiled down, and it
changes the placement of everything that goes into your chip. It can
actually make your bug go away (e.g. timing bugs).

So. Counters. How do counters work? If I've got a 4-bit counter and I'm
counting from 0, what's going on here?

A D-ff with an inverter and an enable line? This is a T-ff (toggle
flipflop). That'll get me my first bit, but my second bit is slower: $Q_1$
wants to toggle only when $Q_0$ is 1. Subsequent bits want to toggle when
all lower bits are 1.

Counter with en: enable is tied to the toggle of the first bit. Counter
with ld: four input bits, four output bits, clock, load. Then we're going
to want a counter with ld, en, and rst. Put in logic, etc.

Quite common: ripple carry out (RCO), where we AND $Q[3:0]$ together and
feed this into the enable of $T_4$.

Ring counter (shift register with a one-hot output): if reset is low, I
just shift this thing around and make a circular shift register. If high, I
clear the out bit.

Möbius counter: just a ring counter with a feedback inverter in it. It
takes whatever state is in there, and after $n$ clock ticks, it inverts
itself. So you have $n$ flipflops, and you get $2n$ states.

And then you've got LFSRs (linear feedback shift registers). Given $N$
flipflops, we know that a straight up- or down-counter will give us $2^N$
states. It turns out that an LFSR gives you almost that: $2^N - 1$ states,
every state except all-zeros. So why do that instead of an up-counter?
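A sketch of what the LFSR buys you: a 4-bit Fibonacci LFSR in Python. The tap positions and bit ordering here are my assumptions (one standard maximal-length configuration), not something fixed in lecture:

```python
def lfsr_states(seed=0b0001, taps=(3, 2), width=4):
    """Enumerate the states of a Fibonacci LFSR until the seed recurs.
    taps are 0-indexed bit positions XORed together to form the feedback bit.
    This tap choice is an assumed maximal-length configuration for width 4."""
    state, seen = seed, []
    while True:
        seen.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << width) - 1)  # shift in feedback
        if state == seed:                   # cycle closed
            return seen

states = lfsr_states()
print(len(states))  # 15 == 2**4 - 1: every 4-bit state except 0000
```

The feedback path is just a couple of XOR gates, versus a full carry chain for a binary up-counter, which is the usual argument for LFSRs as cheap nearly-full-period counters.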
This can give you a PRNG. Fun times with Galois fields.

Various uses, seeds, high enough periods (Mersenne twisters are higher).

RAM
---
Remember: a decoder, a cell array with $2^n$ rows and $2^n$ word lines, and
some number of bit lines coming out of that cell array for I/O, with
output-enable and write-enable.

When output-enable is low, D goes to high-Z. At some point, some external
device starts driving some Din (not from memory). Then I can apply a write
pulse (write strobe), which causes our data to be written into the memory
at this address location. Whatever was driving it releases, so it goes back
to high-impedance, and if we turn output-enable on again, we'll see "Din"
from the cell array.

During the write pulse, we need Din stable and the address stable. We have
a pulse because we don't want to break things. Bad things happen otherwise.

Notice: no clock anywhere. Your FPGA (in particular, the block RAM on the
ML505) is a little different in that it has registered inputs (addr &
data). First off, it's very configurable; there are all sorts of ways you
can set it up. Addr in particular goes into a register, comes out of there,
and then goes into a decoder before it goes into the cell array. What comes
out of that cell array is a little different also: there's a data-in line
that goes into a register, and a data-out as well that's separate and can
be configured in a whole bunch of different ways so that you can do a bunch
of different things.

The important thing is that you can apply your address to those inputs, and
it doesn't show up until the rising edge of the clock. There's the option
of having either registered or non-registered output (non-registered for
this lab).

So now we've got an ALU and RAM. And so we can build some simple
datapaths.
For sure you're going to see on the final (and most likely the midterm)
problems like "given a 16-bit ALU and a 1024x16 sync SRAM, design a system
to find the largest unsigned int in the SRAM."

Demonstration of clock cycles, etc. So what does our FSM look like? Either
LOAD or HOLD.

The homework did not say sync SRAM. Will probably change.
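A behavioral Python sketch of that exam problem (the register names and the LOAD/HOLD framing are my reading of the lecture, not a given solution): step through one SRAM address per clock, and load the max register only when the ALU's unsigned compare says the current word is larger.

```python
def find_max(sram):
    """Cycle-by-cycle model: one sync-SRAM read per clock, plus a 'max'
    register whose load-enable is driven by the ALU's unsigned compare."""
    assert len(sram) == 1024 and all(0 <= w < 2**16 for w in sram)
    max_reg = 0                       # max register, reset to 0
    for addr in range(1024):          # address counter walks the SRAM
        word = sram[addr]             # registered read (one cycle)
        load = word > max_reg         # ALU compare decides LOAD vs HOLD
        if load:
            max_reg = word            # LOAD state: capture new maximum
    return max_reg

mem = [i * 37 % 65536 for i in range(1024)]  # arbitrary test pattern
print(find_max(mem))  # 37851 (= 1023 * 37)
```

In hardware the same structure is an address counter, the SRAM, a comparator in the ALU, and a registered max value; the two FSM states just gate the register's enable.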
394  fa2012/cs150/cs150.md
 @@ -197,7 +197,7 @@ stuff. 197 197   198 198   199 199  CS 150: Digital Design & Computer Architecture 200 -=============================================== 200 +============================================== 201 201  August 28, 2012 202 202  --------------- 203 203   @@ -297,7 +297,7 @@ and a maxterm is a sum containing every input variable or its complement. 297 297   298 298   299 299  CS 150: Digital Design & Computer Architecture