Add more days of notes.

commit 17a7744d88946ab2ad0ad69e6217cec45791fa7d 1 parent f839a8a
Steve Wang authored
146 cs150.html
@@ -594,7 +594,151 @@
594 594 sums. (Section 2.7). Based on the combining theorem, which says that <mathjax>$XA +
595 595 X\bar{A} = X$</mathjax>. Ideally: every row should just have a single value
596 596 changing. So, I use Gray codes. (e.g. 00, 01, 11, 10). Graphical
597   -representation!</p></div><div class='pos'></div>
  597 +representation!</p>
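A quick Python sketch of the Gray-code property (the `i ^ (i >> 1)` construction is the standard one, not something from lecture):

```python
# The sequence above (00, 01, 11, 10) is the 2-bit reflected Gray code.
# A standard construction is gray(i) = i ^ (i >> 1); adjacent codes
# (including the wrap-around) differ in exactly one bit, which is why
# Gray codes order the rows and columns of a K-map.

def gray(i):
    return i ^ (i >> 1)

codes = [gray(i) for i in range(4)]
print([format(c, "02b") for c in codes])  # ['00', '01', '11', '10']

for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1  # exactly one bit changes per step
```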
  598 +<p><a name='9'></a></p>
  599 +<h1>CS 150: Digital Design &amp; Computer Architecture</h1>
  600 +<h2>September 18, 2012</h2>
  601 +<p>In lab this week you are learning about Chipscope. Chipscope is kind
  602 +of like what it sounds: it allows you to monitor things happening in the
  603 +FPGA. One of the interesting things about Chipscope is that it's an FSM
  604 +monitoring stuff in your FPGA; it also gets compiled down, and it changes
  605 +the location of everything that goes into your chip. It can actually make
  606 +your bug go away (e.g. timing bugs).</p>
  607 +<p>So. Counters. How do counters work? If I've got a 4-bit counter and I'm
  608 +counting from 0, what's going on here?</p>
  609 +<p>D-ff with an inverter and enable line? This is a T-ff (toggle
  610 +flipflop). That'll get me my first bit, but my second bit is slower. <mathjax>$Q_1$</mathjax>
  611 +wants to toggle only when <mathjax>$Q_0$</mathjax> is 1. With subsequent bits, they want to
  612 +toggle when all lower bits are 1.</p>
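The toggle rule above can be sketched behaviorally (in Python rather than Verilog): bit 0 toggles every cycle, and each higher bit toggles only when all lower bits are 1.

```python
# Behavioral sketch of a 4-bit synchronous counter built from T flip-flops:
# the first toggle enable is tied high, and each subsequent stage's enable
# is ANDed with the lower bit, so bit k toggles iff bits 0..k-1 are all 1.

def tick(q):
    """One clock edge. q is a list of 4 bits, LSB first."""
    nxt = q[:]
    enable = True                        # enable into the first T flip-flop
    for k in range(4):
        if enable:
            nxt[k] ^= 1                  # T flip-flop toggles when enabled
        enable = enable and q[k] == 1    # propagate enable while lower bits are 1
    return nxt

state = [0, 0, 0, 0]
seen = []
for _ in range(16):
    seen.append(sum(bit << i for i, bit in enumerate(state)))
    state = tick(state)
print(seen)  # 0 through 15 in order
```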
  613 +<p>Counter with en: enable is tied to the toggle of the first bit. Counter
  614 +with ld: four input bits, four output bits. Clock. Load. Then we're going
  615 +to want to do a counter with ld, en, rst. Put in logic, etc.</p>
  616 +<p>Quite common: ripple carry out (RCO), where we AND <mathjax>$Q[3:0]$</mathjax> and feed this
  617 +into the enable of <mathjax>$T_4$</mathjax>.</p>
  618 +<p>Ring counter (shift register with one-hot output): if reset is low, I just
  619 +shift this thing around and make a circular shift register. If high, I clear
  620 +the out bit.</p>
  621 +<p>Mobius counter: just a ring counter with a feedback inverter in it. Just
  622 +going to take whatever state in there, and after n clock ticks, it inverts
  623 +itself. So you have <mathjax>$n$</mathjax> flipflops, and you get <mathjax>$2n$</mathjax> states.</p>
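A minimal sketch of that state count: a Johnson (Möbius) counter is just a circular shift register with the last output inverted on its way back in.

```python
# Möbius (Johnson) counter sketch: shift the register, feeding the
# inverted last bit back into the front. n flip-flops yield 2n distinct
# states before the sequence repeats.

def johnson_states(n):
    state, seen = (0,) * n, []
    while state not in seen:
        seen.append(state)
        state = (1 - state[-1],) + state[:-1]  # inverted feedback into the front
    return seen

states = johnson_states(4)
print(len(states))  # 8 states from 4 flip-flops, i.e. 2n
```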
  624 +<p>And then you've got LFSRs (linear feedback shift registers). Given N
  625 +flipflops, we know that a straight up or down counter will give us <mathjax>$2^N$</mathjax>
  626 +states. Turns out that an LFSR gives you almost that (<mathjax>$2^N - 1$</mathjax>; never 0). So why do
  627 +that instead of an up-counter? This can give you a PRNG. Fun times with
  628 +Galois fields.</p>
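A sketch of a maximal-length LFSR (a Fibonacci-style one; the tap choice below is a standard primitive polynomial, not one given in lecture):

```python
# 4-bit Fibonacci LFSR sketch. The feedback XORs the two high bits,
# which realizes the primitive polynomial x^4 + x + 1, so the register
# cycles through all 2^4 - 1 nonzero states (the all-zeros state is a
# fixed point and is excluded).

def lfsr_states(seed=0b0001):
    state, seen = seed, []
    while state not in seen:
        seen.append(state)
        feedback = ((state >> 3) ^ (state >> 2)) & 1  # XOR of top two bits
        state = ((state << 1) | feedback) & 0xF       # shift left, insert feedback
    return seen

states = lfsr_states()
print(len(states))  # 15 == 2^4 - 1
```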
  629 +<p>Various uses, seeds, high enough periods (Mersenne twisters are higher).</p>
  630 +<h2>RAM</h2>
  631 +<p>Remember, decoder, cell array, <mathjax>$2^n$</mathjax> rows, <mathjax>$2^n$</mathjax> word lines, some number of
  632 +bit lines coming out of that cell array for I/O with output-enable and
  633 +write-enable.</p>
  634 +<p>When output-enable is low, D goes to high-Z. At some point, some external
  635 +device starts driving some Din (not from memory). Then I can apply a write
  636 +pulse (write strobe), which causes our data to be written into the memory
  637 +at this address location. Whatever was driving it releases, so it goes back
  638 +to high-impedance, and if we turn output-enable on again, we'll see "Din" from
  639 +the cell array.</p>
  640 +<p>During the write pulse, we need Din stable and address stable. We have a
  641 +pulse because we don't want to break things. Bad things happen.</p>
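The port behavior described above can be modeled behaviorally (class and method names here are made up for illustration):

```python
# Behavioral sketch of the asynchronous SRAM port: with output-enable
# low the chip stops driving D (high-Z), and a write pulse latches Din
# at the current address.

HIGH_Z = "Z"  # stand-in for a high-impedance bus

class AsyncSram:
    def __init__(self, depth):
        self.mem = [0] * depth
    def read(self, addr, output_enable):
        # OE deasserted: the data bus floats.
        return self.mem[addr] if output_enable else HIGH_Z
    def write_pulse(self, addr, din):
        # Address and Din must be stable for the whole strobe.
        self.mem[addr] = din

ram = AsyncSram(16)
ram.write_pulse(3, 0xAB)
print(ram.read(3, output_enable=True))   # 171 (0xAB)
print(ram.read(3, output_enable=False))  # Z
```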
  642 +<p>Notice: no clock anywhere. Your FPGA (in particular, the block ram on the
  643 +ML505) is a little different in that it has registered input (addr &amp;
  644 +data). First off, very configurable. All sorts of ways you can set this up,
  645 +etc. Addr in particular goes into a register, and from there into a
  646 +decoder before it reaches the cell array. What comes out of the cell
  647 +array is a little different too: there's a data-in line that goes into
  648 +a register, and a separate data-out as well, which can be configured in
  649 +a whole bunch of different ways so that you can do a bunch of different
  650 +things.</p>
  651 +<p>The important thing is that you can apply your address to those inputs, and
  652 +it doesn't show up until the rising edge of the clock. There's the option
  653 +of having either registered or non-registered output (non-registered for
  654 +this lab).</p>
  655 +<p>So now we've got an ALU and RAM. And so we can build some simple
  656 +datapaths. For sure you're going to see on the final (and most likely the
  657 +midterm) problems like "given a 16-bit ALU and a 1024x16 sync SRAM, design
  658 +a system to find the largest unsigned int in the SRAM."</p>
  659 +<p>Demonstration of clock cycles, etc. So what's our FSM look like? Either
  660 +LOAD or HOLD.</p>
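A behavioral sketch of that exam-style datapath (Python standing in for the hardware): an address counter walks the SRAM while a register implements the FSM's LOAD/HOLD choice.

```python
# Find-the-largest-unsigned-int datapath sketch: one SRAM read per
# clock, an ALU comparison, and a max register that LOADs when the new
# word is bigger and HOLDs otherwise.

import random

sram = [random.randrange(2**16) for _ in range(1024)]  # simulated 1024x16 SRAM

max_reg = 0
for addr in range(len(sram)):
    data = sram[addr]        # registered read for this address
    if data > max_reg:       # ALU compare drives the FSM
        max_reg = data       # LOAD
                             # else: HOLD
print(max_reg == max(sram))  # True
```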
  661 +<p>On homework, did not say sync SRAM. Will probably change.</p>
  662 +<p><a name='10'></a></p>
  663 +<h1>CS 150: Digital Design &amp; Computer Architecture</h1>
  664 +<h2>September 20, 2012</h2>
  665 +<p>Non-overlapping clocks. n-phase means that you've got n different outputs,
  666 +and at most one high at any time. Guaranteed dead time between when one
  667 +goes low and next goes high.</p>
  668 +<h2>K-maps</h2>
  669 +<p>Finding minimal sum-of-products and product-of-sums expressions for
  670 +functions. <strong>On-set</strong>: all the ones of a function; <strong>implicant</strong>: one or
  671 +more circled ones in the onset; a <strong>minterm</strong> is the smallest implicant you
  672 +can have, and they go up by powers of two in the number of things you can
  673 +have; a <strong>prime implicant</strong> can't be combined with another (by circling);
  674 +an <strong>essential prime implicant</strong> is a prime implicant that contains at
  675 +least one one not in any other prime implicant. A <strong>cover</strong> is any
  676 +collection of implicants that contains all of the ones in the on-set, and a
  677 +<strong>minimal cover</strong> is one made up of essential prime implicants and the
  678 +minimum number of implicants.</p>
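As a concrete (made-up) example of a cover: for f(a,b,c) with on-set {0, 1, 6, 7}, the two essential prime implicants a'b' and ab form a minimal cover, which can be checked exhaustively:

```python
# Verify that the minimal sum-of-products a'b' + ab matches the on-set
# {0, 1, 6, 7} of a 3-variable function, by checking all 8 input rows.

from itertools import product

def f(a, b, c):
    # on-set: minterms 0 (000), 1 (001), 6 (110), 7 (111)
    return (a, b, c) in [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]

def cover(a, b, c):
    # a'b' covers minterms 0 and 1; ab covers 6 and 7; c drops out of both
    return (not a and not b) or (a and b)

for a, b, c in product([0, 1], repeat=3):
    assert bool(f(a, b, c)) == bool(cover(a, b, c))
print("cover matches on-set")
```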
  679 +<p>Hazards vs. glitches. Glitches are when timing issues result in dips (or
  680 +spikes) in the output; a hazard is the possibility that such a glitch
  681 +might happen. Completely irrelevant in synchronous logic.</p>
  682 +<h2>Project</h2>
  683 +<p>3-stage pipeline MIPS150 processor. Serial port, graphics accelerator. If
  684 +we look at the datapath elements, the storage elements, you've got your
  685 +program counter, your instruction memory, register file, and data
  686 +memory. Figure 7.1 from the book. If you mix that in with figure 8.28,
  687 +which talks about MMIO, that data memory, there's an address and data bus
  688 +that this is hooked up to, and if you want to talk to a serial port on a
  689 +MIPS processor (or an ARM processor, or something like that), you don't
  690 +address a particular port (not like x86). Most ports are
  691 +memory-mapped. Actually got a MMIO module that is also hooked up to the
  692 +address and data bus. For some range of addresses, it's the one that
  693 +handles reads and writes.</p>
  694 +<p>You've got a handful of different modules down here such as a UART receive
  695 +module and a UART transmit module. In your project, you'll have your
  696 +personal computer that has a serial port on it, and that will be hooked up
  697 +to your project, which contains the MIPS150 processor. Somehow, you've got
  698 +to be able to handle characters transmitted in each direction.</p>
  699 +<h2>UART</h2>
  700 +<p>Common ground, TX on one side connected to RX port on other side, and vice
  701 +versa. Whole bunch more in different connectors. Basic protocol is called
  702 +RS232, which is common (people often refer to it by its connector name:
  703 +DB9, or rarely DB25); fortunately, we've moved away from this world and use USB. We'll
  704 +talk about these other protocols later, some sync, some async. Workhorse
  705 +for long time, still all over the place.</p>
  706 +<p>You're going to build the UART receiver/transmitter and MMIO module that
  707 +interfaces them. See when something's coming in from software /
  708 +hardware. Going to start out with polling; we will implement interrupts
  709 +later on in the project (for timing and serial IO on the MIPS
  710 +processor). That's really the hardcore place where software and hardware
  711 +meet. People who understand how each interface works and how to use those
  712 +optimally together are valuable and rare people.</p>
  713 +<p>In what you're doing in Lab 4, there are really two concepts: (1) how
  714 +serial / UART works and (2) the ready / valid handshake.</p>
  715 +<p>On the MIPS side, you've got some addresses. Anything that starts with FFFF
  716 +is part of the memory-mapped region. In particular, the first four are
  717 +mapped to the UART: they are RX control, RX data, TX control, and TX data.</p>
  718 +<p>When you want to send something out the UART, you write the byte -- there's
  719 +just one bit for the control and one byte for data.</p>
  720 +<p>Data goes into some FSM system, and you've got an RX shift register and a
  721 +TX shift register.</p>
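The framing the TX shift register emits can be sketched as follows (this assumes the common 8N1 format: start bit low, eight data bits LSB-first, stop bit high; parity and other widths are omitted):

```python
# 8N1 UART frame sketch: a start bit (0), eight data bits sent LSB-first,
# and a stop bit (1). The idle line is high, so the low start bit marks
# the beginning of a frame for the receiver.

def frame_byte(byte):
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first on the wire
    return [0] + data_bits + [1]

bits = frame_byte(0x41)  # ASCII 'A' = 0b01000001
print(bits)  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```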
  722 +<p>There's one other piece of this, which is that inside of here, the thing
  723 +interfacing to this IO-mapped module uses this ready bit. If you have two
  724 +modules: a source and a sink (diagram from the document), the source has
  725 +some data that it is sending out, tells the sink when the data is valid, and
  726 +the sink tells the source when it is ready. And there's a shared "clock"
  727 +(baud rate), and this is a synchronous interface.</p>
  728 +<ul>
  729 +<li>source presents data</li>
  730 +<li>source raises valid</li>
  731 +<li>when ready &amp; valid on posedge clock, both sides know the transaction was
  732 + successful.</li>
  733 +</ul>
  734 +<p>Whatever order this happens in, source is responsible for making sure data
  735 +is valid.</p>
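The transaction rule above can be sketched cycle by cycle (the valid/ready timing patterns below are made up for illustration):

```python
# Ready/valid handshake sketch: a transfer happens only on a clock edge
# where both valid and ready are high; either side can stall the other.

source_data = [7, 8, 9]
received = []
idx = 0
valid_pattern = [1, 1, 0, 1, 1]   # when the source asserts valid (made up)
ready_pattern = [0, 1, 1, 1, 0]   # when the sink asserts ready (made up)

for valid, ready in zip(valid_pattern, ready_pattern):
    has_data = idx < len(source_data)
    if valid and ready and has_data:   # both high on this posedge: transfer
        received.append(source_data[idx])
        idx += 1
print(received)  # [7, 8]
```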
  736 +<p>HDLC? Takes bytes and puts them into packets, ACKs, etc.</p>
  737 +<p>Talk about quartz crystals, resonators. <mathjax>$\pi \cdot 10^7$</mathjax>.</p>
  738 +<p>So: before I let you go, parallel load, n bits in, serial out, etc.</p>
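A sketch of that parting question (parallel load, n bits in, serial out), with made-up names:

```python
# Parallel-in, serial-out (PISO) shift register sketch: load all n bits
# at once, then shift one bit out per clock, filling with zeros behind.

class Piso:
    def __init__(self, n):
        self.bits = [0] * n
    def load(self, bits):
        self.bits = list(bits)      # parallel load of all n bits
    def shift(self):
        out = self.bits.pop(0)      # serial out, first-loaded bit first
        self.bits.append(0)         # shift a 0 in at the far end
        return out

sr = Piso(4)
sr.load([1, 0, 1, 1])
out_bits = [sr.shift() for _ in range(4)]
print(out_bits)  # [1, 0, 1, 1]
```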
  739 +<p><a name='11'></a></p>
  740 +<h1>CS 150: Digital Design &amp; Computer Architecture</h1>
  741 +<h2>September 25, 2012</h2></div><div class='pos'></div>
598 742 <script src='mathjax/unpacked/MathJax.js?config=default'></script>
599 743 <script type="text/x-mathjax-config">
600 744 MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
338 cs_h195.html
@@ -382,7 +382,343 @@
382 382 <p><a name='3'></a></p>
383 383 <h1>CS H195: Ethics with Harvey</h1>
384 384 <h2>September 17, 2012</h2>
385   -<p>Lawsuit to get records about NSA's surveillance information.</p></div><div class='pos'></div>
  385 +<p>Lawsuit to get records about NSA's surveillance information.</p>
  386 +<p>Video games affecting people, evidently.</p>
  387 +<p>Government subpoenaed Twitter to give people tweets.</p>
  388 +<p>Records can be subpoenaed in a court case, etc. We'll see how this plays
  389 +out. Today, in today's Daily Cal, UCB suing big companies. Universities do
  390 +research, etc. Back in the day, core memory meant people paid money to IBM
  391 +and MIT. Berkeley holds a bunch of patents. Non-software seems reasonable.</p>
  392 +<p>Important point: the burst of genius is very rarely true. Enabling
  393 +technologies have reached the point of making things feasible. Usual story
  394 +about inventions. Flash bulb in a camera, single-use: before sustainable
  395 +light bulb. Steam engine. Some inventions aren't like that. Some really do
  396 +just come to somebody (velcro, xerography). Nobody else was working on
  397 +that. More often, everyone is thinking about this stuff.</p>
  398 +<p>IP. A patent is the right to develop an invention, to produce things
  399 +dependent on an invention. Copyright is not about invention, it's about
  400 +creative and artistic works. And there, if you have an idea and write about
  401 +it, other people are allowed to use your ideas, not your words. Trademark,
  402 +you know what it is; you can register one; people are not allowed to use it
  403 +in ways that might confuse people. You can in principle make a vacuum
  404 +cleaner called "time". How close do things have to be to raise a lawsuit?
  405 +Lawsuit about Apple Computer vs Apple Records. Apple Computer later did
  406 +get into the music business, which caused a later round of battling.</p>
  407 +<p>Personal likeness, I can't take a picture of you and publish it with
  408 +certain exceptions. Most important for famous people. Funny rules:
  409 +newsworthy, and news photographers are allowed to take pictures of
  410 +newsworthy people.</p>
  411 +<p>Trade secrets: if a company has secrets, and you are a competing company,
  412 +you may not send a spy to extract these secrets.</p>
  413 +<p>House ownership. There are houses where people have had houses for
  414 +millennia. Patents and copyrights are not like that: not a right. Those
  415 +things are bargains between creators and society. Purpose to society is
  416 +that these eventually belong to the public. One of the readings talks about
  417 +a different history of patents quoting Italian legal scholars, and if
  418 +correct, patents were supposed to be permanent ownership. Why might it be
  419 +good to society? Used to be people who made new inventions. Guilds. Hard to
  420 +join, and you would be a slave for a while. Master would teach apprentice
  421 +the trade, and advantage was that it reduced competition. Trouble was that
  422 +there is a long history of things people used to be able to do that we
  423 +can't anymore. Textbook example: Stradivarius violins.</p>
  424 +<p>Nonetheless, nobody knows how Stradivarius made violins. Stories about how
  425 +to make paints of particular colors. What the patent system is trying to
  426 +avoid. Describe how invention works so someone in the field can create
  427 +it. By making this disclosure, you are given a limited-term exclusive right
  428 +to make these.</p>
  429 +<p>The thing is, sooner or later, your technology is going to be obsolete. To
  430 +your advantage to have a clear legal statement.</p>
  431 +<p>Patent treaties. Used to be that if you invented something important, you'd
  432 +hire a bunch of lawyers.</p>
  433 +<p>Until recently, software was not patentable. ATT wanted to patent the
  434 +SETUID bit. In those days, you could not patent any math or software or
  435 +algorithm.</p>
  436 +<p>Patents stifling innovation in the field. When you file a patent
  437 +application. Let's say you deny the patent. You would like to fall back on
  438 +trade secrecy. Patent applications are secret until approved. Startups
  439 +doomed. Wouldn't matter if term were short compared to innovation cycle of
  440 +the industry.</p>
  441 +<p>Another thing in the Constitution is that treaties take precedence over
  442 +domestic laws.</p>
  443 +<p>So let's talk about copyrights! So. Nobody says let's do away with
  444 +copyright altogether. Copyright (at its worst) is less socially harmful
  445 +than patents because it's so specific. Again, copyrights are a
  446 +bargain. Started in Britain between the King and printers. Printers wanted
  447 +exclusive right to things they printed. King wanted printers to be
  448 +censors. Originally not authors who had copyright, but the publisher. Often
  449 +creators of rights will sell the rights to publishers.</p>
  450 +<p>This is where computers come in. How to sell to world? Used to need big
  451 +company with facilities to create copies and widely
  452 +distribute. Self-publish: work available to everyone. Important: rarely
  453 +author who complains about copyrights. Usually publishers.</p>
  454 +<p>There's always been piracy, but limited historically by analog media losing
  455 +information when copying.</p>
  456 +<p>Term of copyright has gotten longer and longer. Lawsuit about this about
  457 +the most recent extension. In effect, making permanent copyright, against
  458 +constitution. Ironic because, under today's copyright law, much of what
  459 +made Disney rich would itself have been copyrighted. Lot of exceptions to
  460 +copyright law. Fair use. e.g. cannot write a Harry Potter novel, but can
  461 +write a Harry Potter parody. Famous case: Gone with the Wind. About how
  462 +wonderful life was for the owners of slaves. Someone wrote a book
  463 +(retelling from slave's point of view); ruled as fair use (political
  464 +commentary, protected by free speech).</p>
  465 +<p>Stallman actually invented a system that has 5 different categories of
  466 +work. Even Stallman doesn't say to ditch copyright. Hardly any musicians
  467 +make any money selling music because their contracts say that they make a
  468 +certain percentage of net proceeds. The way musicians survive is concerts,
  469 +and ironically, selling concert CDs. Stallman says to make music players
  470 +have a money button and send money directly to the musician.</p>
  471 +<p><a name='4'></a></p>
  472 +<h1>CS H195: Ethics with Harvey</h1>
  473 +<h2>September 24, 2012</h2>
  474 +<p>Vastly oversimplified picture of moral philosophy. Leaves out a lot.</p>
  475 +<p>So Socrates says famously "to know the good is to desire the good", by
  476 +which he means that if you really understand what's in your own interest,
  477 +it's going to turn out to be the right thing. Counter-intuitive, since
  478 +we've probably encountered situations in which we think what's good for us
  479 +isn't good for the rest of the community.</p>
  480 +<p>Ended up convicting Socrates, and he was offered the choice between exile
  481 +from Athens and death -- chose death because he felt that he could not
  482 +exist outside of his own community. His most famous student was Plato, who
  483 +started an Academy (Socrates just wandered around, living hand to mouth), took
  484 +in students (one of whom was Aristotle). If you're scientists or engineers,
  485 +you've been taught to make fun of Aristotle, since he said that heavier
  486 +objects fall faster than light objects, and famously, Galileo took two
  487 +objects, dropped them, and they hit the ground at the same time.</p>
  488 +<p>It's true that some of the things Aristotle said about the physical world
  489 +have turned out not to be right. But it's important to understand that, in
  490 +terms of the physical world, he did not have the modern idea of trying to
  491 +make a universal theory that explained everything.</p>
  492 +<p>Objects falling in atmosphere with friction different from behavior of
  493 +planets orbiting sun? Perfectly fine with Aristotle.</p>
  494 +<p>One of the things Aristotle knew? When you see a plate of donuts, you know
  495 +perfectly well that it's just carbs and fat and you shouldn't eat them, but
  496 +you do anyway. Socrates explains that as "you don't really know through and
  497 +through that it is bad for you", and Aristotle doesn't like that
  498 +explanation. Knowing what to do and actually doing it are two different
  499 +things. Took that in two directions: action syllogism (transitivity),
  500 +extended so that conclusion of the syllogism can be an action. Not
  501 +important to us: important to us is that he introduces the idea of
  502 +virtues. A virtue is not an understanding of what's right, but a habit --
  503 +like a good habit you get into.</p>
  504 +<p>Aristotle lists a bunch of virtues, and in all cases he describes it as a
  505 +midpoint between two extremes (e.g. courage between cowardice and
  506 +foolhardiness, or honesty as a middle ground between dishonesty and saying
  507 +too much).</p>
  508 +<p>Better have good habits, since you don't have time in real crises to
  509 +think. So Aristotle's big on habits. And he says that you learn the virtues
  510 +through being a member of a community and through the role you play in that
  511 +community. He lived in a time when people inherited roles a lot. The argument
  512 +goes a little like this. What does it mean to be a good person? Hard
  513 +question. What does it mean to be a good carpenter? Much easier. A good
  514 +carpenter builds stuff that holds together and looks nice, etc. What are
  515 +the virtues that lead to being a good carpenter? Also easy: patience, care,
  516 +measurement, honesty, etc. Much easier than what's a good
  517 +person.</p>
  518 +<p>Aristotle's going to say that the virtues of being a good person are
  519 +precisely the virtues you learn in social practices from people older than
  520 +you who are masters of the practice. One remnant of that in modern society
  521 +is martial arts instruction. When you go to a martial arts school and say
  522 +you want to learn, one of the first things you learn is respect for your
  523 +instructor, and you're supposed to live your life in a disciplined way, and
  524 +you're not learning skills so much as habits. Like what Aristotle'd say
  525 +about any practice. Not so much of that today: when you're learning to be a
  526 +computer scientist, there isn't a lot of instruction in "here are the
  527 +habits that make you a (morally) good computer scientist".</p>
  528 +<p>Kant was not a communitarian: was more of "we can figure out the right
  529 +answer to ethical dilemmas." He has an axiom system, just like in
  530 +mathematics: with small numbers of axioms, you can prove things. Claims
  531 +just one axiom, which he describes in multiple ways.</p>
  532 +<p>Categorical imperative number one: treat people as ends, not means. This is
  533 +the grown-up version of the golden rule. Contracts are all right as long as
  534 +both parties have their needs met and exchange is not too unequal.</p>
  535 +<p>Second version: universalizability. An action is good if it is
  536 +universalizable. That means, if everybody did it, would it work? Textbook
  537 +example is "you shouldn't tell lies". The only reason telling lies works is
  538 +because people usually tell the truth, and so people are predisposed to
  539 +thinking that it's usually true. If everyone told lies, then we'd be
  540 +predisposed to disbelieve statements. Lying would no longer be effective.</p>
  541 +<p>There's a third one which BH can never remember which is much less
  542 +important. Kant goes on to prove theorems to resolve moral dilemmas.</p>
  543 +<p>Problem from Kant: A runs past you into the house. B comes up with a gun
  544 +and asks you where A is. Kant suggests something along the lines of
  545 +misleading B.</p>
  546 +<p>Axiomatic, resolve ethical problems through logic and proving what you want
  547 +to do. Very popular among engineers, mainly for the work of Rawls, who
  548 +talks about the veil of ignorance. You have to imagine yourself, looking at
  549 +life on Earth, and not knowing in what social role you're going to be
  550 +born. Rawls thinks that from this perspective, you have to root for the
  551 +underdog when situations come up, because in any particular thing that
  552 +comes up, harm to the rich person is going to be less than the gains of the
  553 +poor person (in terms of total wealth, total needs). Going to worry about
  554 +being on side of underdog, etc. More to Rawls: taking into account how
  555 +things affect all different constituencies.</p>
  556 +<p>Another descendant of Plato are utilitarians. One of the reasons it's
  557 +important for you to understand this chart: when you don't think about it
  558 +too hard, you use utilitarian principles, which is sometimes
  559 +bad. Utilitarians talk about the greatest good for the greatest number.</p>
  560 +<p>Back to something from this class: what if I illegally download some movie?
  561 +Is that okay? How much do I benefit, and how much is the movie-maker
  562 +harmed? Not from principled arguments, which is what Kant wants you to do,
  563 +but from nuts and bolts, who benefits how much, each way.</p>
  564 +<p>Putting that in a different fashion, Kantians are interested in what
  565 +motivates your action, why you did it. Utilitarians are interested in the
  566 +result of your action. One thing that makes utilitarian hard is that you
  567 +have to guess as to what probably will happen.</p>
  568 +<p>Now I want to talk to you about MacIntyre. Gave you a lot of reading,
  569 +probably hardest reading in the course. Talks like a philosopher. Uses
  570 +"desert" as what you deserve (the noun of deserve). Life-changing for BH when he
  571 +came across MacIntyre; passing it on to you as a result.</p>
  572 +<p>He starts by saying to imagine an aftermath in which science is blamed and
  573 +destroyed. A thousand years later, some people digging through the remains
  574 +of our culture read about this word science, and it's all about
  575 +understanding how the physical world works, and they want to revive this
  576 +practice. Dig up books by scientists, read and memorize bits of them,
  577 +analyze, have discussions. The people who do this call themselves
  578 +scientists because they're studying science.</p>
  579 +<p>We from our perspective would say that isn't science at all -- you don't
  580 +just engage with books, but rather engage with the physical world through
  581 +experiments. Those imagined guys from a millennium from now have lost the
  582 +practice. They think they're following a practice, but they have no idea
  583 +what it's like. MacIntyre argues this is us with ethics.</p>
  584 +<p>Equivalent to WW3 according to MacIntyre is Kant. Kant really, more than
  585 +anyone else, brought into being the modern era. Why? Because in the times
  586 +prior to Kant, a lot of arguments not only about ethics but also by the
  587 +physical world were resolved by religious authority. Decisions made based
  588 +on someone's interpretation of the bible, e.g.</p>
  589 +<p>Kant claims to be a Christian, but he thinks the way we understand God's
  590 +will is by applying the categorical imperative. Instead of asking a priest
  591 +what to do, we reason it out. We don't ask authorities, we work it out.
  592 +Also, he starts this business of ethical dilemmas. Everybody in the top
  593 +half of the world talks in terms of the good life. Even Socrates, who
  594 +thinks you can know what to do, talks about the good life, too. So ethics
  595 +is not about "what do I do in this situation right now", but rather the
  596 +entirety of one's life and what it means to live a good life.</p>
  597 +<p>Kant and Mill: no sense of life as a flow; rather, moments of
  598 +decisions. What MacIntyre calls the ethical equivalent of WW3: at that
  599 +point, we lost the thread, since we stopped talking about the good
  600 +life. Now, it wasn't an unmitigated disaster, since it gives us -- the
  601 +modern liberal society, not in the American sense of voting for democrats,
  602 +but in the sense that your life goals are up to you as an individual, and
  603 +the role of society is to build infrastructure and to avoid getting in
  604 +people's way, not stopping people from doing things. I can, say, have some sexual practice
  605 +different from yours. So that was a long time coming. Now, in our
  606 +particular culture, the only thing that's bad is having sex with children,
  607 +as far as I can tell -- as long as it doesn't involve you messing up
  608 +someone else's life, e.g. rape. As long as it involves two (or more?)
  609 +consenting adults, that's okay.</p>
  610 +<p>MacIntyre says that there are things that came up with Kant that we can't
  611 +just turn back to being Aristotlean. The people who lived the good life
  612 +were male Athenian citizens. They had wives who weren't eligible, and they
  613 +had slaves who did most of the grunt work. And so male Athenian citizens
  614 +could spend their time walking around chatting with Socrates because they
  615 +were supported by slavery. And nobody wants to go back to that. No real way
  616 +to go back to being Aristotlean without giving up modern civil rights.</p>
  617 +<p>So. One of the things I really like about MacIntyre is the example of
  618 +wanting to teach a child how to play chess, but he's not particularly
  619 +interested. He is, however, interested in candy. You say, every time you
  620 +play with me, I'll give you a piece of candy. If you win, two pieces. Will
  621 +play in a way that's difficult but possible to beat me. So, MacIntyre says
  622 +this child is now motivated to play and to play well. But he's also
  623 +motivated to cheat, if he can get away with it. So let's say this
  624 +arrangement goes on for some time, and the kid gets better at it. What you
  625 +hope is that the child reaches a point where the game is valuable to
  626 +itself: he or she sees playing chess as rewarding (as an intellectual
  627 +challenge). When that happens, cheating becomes self-defeating.</p>
  628 +<p>While the child is motivated by external goods (rewards, money, fame,
  629 +whatever), then the child is not part of the community of practice. But
  630 +once the game becomes important (the internal benefits motivate him), then
  631 +he does feel like part of the community. Huge chess community with
  632 +complicated infrastructure with rating, etc. And that's a community with
  633 +practice, and it has virtues (some of which are unique to chess, but maybe
  634 +not -- e.g. planning ahead). Honesty, of course; patience; personal
  635 +improvement.</p>
  636 +<p>And the same is true with most things that human beings do. Not
  637 +everything. MacIntyre raises the example of advertising. What are the
  638 +virtues of this practice? Well, appealing to people in ways that they don't
  639 +really see; suggesting things that aren't quite true without saying
  640 +them. He lists several virtues that advertising people have, and these
  641 +virtues don't generalize. Not part of being a good person; not even
  642 +compatible with being a good person. So different from virtues of normal
  643 +practices.</p>
  644 +<p>Having advertising writers is one of the ways in which MacIntyre thinks
  645 +we've just lost the thread. The reason we have them is that we hold up in
  646 +our society the value of furthering your own ambition and getting rich, and
  647 +not getting rich by doing something that's good anyway, but just getting
  648 +rich. That's an external motivation rather than an internal one.</p>
  649 +<p>We talk about individuals pursuing their own ends. We glorify -- take as an
  650 +integral part of our society -- as individuals pursuing their own ends. In
  651 +a modern understanding of ethics, you approach each new situation as if
  652 +you've never done anything. You don't learn from experience; you learn from
  653 +rules. The result may be the same for each intermediate situation, but it
  654 +leads to you thinking differently. You don't think about building good
  655 +habits in this context.</p>
  656 +<p>A lot of you probably exercise (unlike me). Maybe you do it because it's
  657 +fun, but maybe you also do it because it only gets harder as you get older,
  658 +and you should get in the habit to keep it up. In that area, you get into
  659 +habits. But writing computer programs, we tell you about rules (don't have
  660 +concurrency violations), and I guess implicitly, we say that taking 61B is
  661 +good for you because you learn to write bigger programs. Still true --
  662 +still a practice with virtues.</p>
  663 +<p>Two things: that sort of professional standard of work is a pretty narrow
  664 +ethical issue. They don't teach you to worry about the privacy implications
  665 +of third parties. Also, when people say they have an ethical dilemma, they
  666 +think about it as a decision. A communitarian would reject all that ethical
  667 +dilemma stuff. Dilemmas will have bad outcomes regardless. Consider Greek
  668 +tragedies. When Oedipus finds himself married to his mother, it's like game
  669 +over. Whole series of bad things that happen to him. Not much he can do
  670 +about it on an incident by incident basis. Problem is a fatal flaw in his
  671 +character early on (as well as some ignorance), and no system of ethics is
  672 +going to lead Oedipus out of this trap. What you have to do is try not to get
  673 +into traps, and you do that through prudence and honesty and whatnot.</p>
  674 +<p>Classic dilemma: Heinz is a guy whose wife has a fatal disease that can be
  675 +cured by an expensive drug, but Heinz is poor. So he goes to the druggist,
  676 +explains that he can't afford to pay for this drug but that his wife is going
  677 +to die without it, and the druggist says no. So Heinz is considering breaking into the
  678 +drugstore at night and stealing the drug so his wife can live. What should
  679 +he do and why? According to the literature, there's no right answer. What
  680 +matters is your reason.</p>
  681 +<p>I'm going to get this wrong, but it's something like this. Stage one: your
  682 +immediate needs are what matter. Yes, he should steal it, because it's his
  683 +wife, or no, he shouldn't steal it, because he should go to prison. Stage
  684 +two: something like worrying about consequences to individuals. Might hurt
  685 +druggist or might hurt his wife. Stage three: something like "well, I have
  686 +a closer relationship to my wife than the druggist; I care more about my
  687 +wife, so I should steal it". Stage four: it's against the law, and I
  688 +shouldn't break the law. Stage five: like stage three, generalized to
  689 +larger community: how much will it hurt my wife not to get the drug? A
  690 +lot. How much will it hurt the druggist if I steal it? Some money. Stage
  691 +six, based not on laws of community, but rather on the standards of the
  692 +community. Odd-numbered stages are about specific people. Even-numbered
  693 +stages are about society and rules (ranging from punishment if I do it, to
  694 +it's the law, to it's what people expect of me).</p>
  695 +<p>Right now I'm talking about the literature of moral psychology: people go
  696 +through these stages (different ways of thinking). Question posed is not
  697 +"how do people behave", but rather "how should people behave".</p>
  698 +<p>This is modern ethical reasoning. Take some situation that has no right
  699 +answer, and split hairs about finding a right answer somehow.</p>
  700 +<p>Talk about flying: checklist for novices. Instructors don't use this list:
  701 +eventually, you get to where you're looking at the entire dashboard at
  702 +once, and things that aren't right jump out at you.</p>
  703 +<p>Another example: take a bunch of chess pieces, put them on the board, get
  704 +someone to look at it for a minute, and take the pieces away, and ask the
  705 +person to reconstruct the board position. Non-chess players are terrible
  706 +(unsurprisingly); chess grandmasters can do it if it came out of a real
  707 +game; if you put it randomly, they're just as bad as the rest of
  708 +us. They're not looking at individual pieces; they're looking at the board
  709 +holistically (clusters of pieces that interact with each other).</p>
  710 +<p>Relevance of this to ethics: we don't always know why we do things. Very
  711 +rare that we have the luxury to figure out either what the categorical
  712 +imperative tells us or what the utilitarian approach would say. Usually we just do something.</p>
  713 +<p>BH with weaknesses. Would be stronger if his education was less about
  714 +thinking things through and more about doing the right thing.</p>
  715 +<p>Our moral training is full of "Shalt Not"s. Lot more in the Bible about
  716 +what not to do than what to do or how to live the good life (that part of
  717 +the Bible -- gets better). We also have these laws. Hardly ever say you
  718 +have to do something (aside from paying taxes). Mostly say what you can't
  719 +do. Never say how to live the good life. BH thinks that serves us ill. Have
  720 +to make decisions. Often, what you do is different from what you say you
  721 +should do.</p></div><div class='pos'></div>
386 722 <script src='mathjax/unpacked/MathJax.js?config=default'></script>
387 723 <script type="text/x-mathjax-config">
388 724 MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
393 ee221a.html
@@ -852,7 +852,398 @@
852 852 necessarily in the space. Example: any continued fraction.</p>
853 853 <p>To show (1), we'll show that this sequence <mathjax>$\{x_m\}$</mathjax> that we constructed is
854 854 a Cauchy sequence in a Banach space. Interestingly, it matters what norm
855   -you choose.</p></div><div class='pos'></div>
  855 +you choose.</p>
  856 +<p><a name='8'></a></p>
  857 +<h1>EE 221A: Linear System Theory</h1>
  858 +<h2>September 18, 2012</h2>
  859 +<p>Today:</p>
  860 +<ul>
  861 +<li>proof of existence and uniqueness theorem.</li>
  862 +<li>[ if time ] introduction to dynamical systems.</li>
  863 +</ul>
  864 +<p>First couple of weeks of review to build up basic concepts that we'll be
  865 +drawing upon throughout the course. Either today or Thursday we will launch
  866 +into linear system theory.</p>
  867 +<p>We're going to recall where we were last time. We had the fundamental
  868 +theorem of differential equations, which said the following: if we had a
  869 +differential equation, <mathjax>$\dot{x} = f(x,t)$</mathjax>, with initial condition <mathjax>$x(t_0) =
  870 +x_0$</mathjax>, where <mathjax>$x(t) \in \Re^n$</mathjax>, etc, if <mathjax>$f( \cdot , t)$</mathjax> is Lipschitz
  871 +continuous, and <mathjax>$f(x, \cdot )$</mathjax> is piecewise continuous, then there exists a
  872 +unique solution to the differential equation / initial condition pair (some
  873 +function <mathjax>$\phi(t)$</mathjax>) wherever you can take the derivative (may not be
  874 +differentiable everywhere: loses differentiability on the points where
  875 +discontinuities exist).</p>
  876 +<p>We spent quite a lot of time discussing Lipschitz continuity. Job is
  877 +usually to test both conditions; first one requires work. We described a
  878 +popular candidate function by looking at the mean value theorem and
  879 +applying it to <mathjax>$f$</mathjax>: a norm of the Jacobian of <mathjax>$f$</mathjax> provides a candidate
  880 +Lipschitz function.</p>
  881 +<p>We also described local Lipschitz continuity, and often, when using a norm
  882 +of the Jacobian, that's fairly easy to show.</p>
  883 +<p>Important point to recall: a norm of the Jacobian of <mathjax>$f$</mathjax> provides a
  884 +candidate Lipschitz function.</p>
  885 +<p>Another important thing to say here is that we can use any norm we want, so
  886 +we can be creative in our choice of norm when looking for a better bound.</p>
  887 +<p>We started our proof last day, and we talked a little about the structure
  888 +of the proof. We are going to proceed by constructing a sequence of
  889 +functions, then show (1) that it converges to a solution, then show (2)
  890 +that it is unique.</p>
  891 +<h2>Proof of Existence</h2>
  892 +<p>We are going to construct this sequence of functions as follows:
  893 +<mathjax>$x_{m+1}(t) = x_0 + \int_{t_0}^t f(x_m(\tau), \tau) d\tau$</mathjax>. Here we're dealing with
  894 +an arbitrary interval from <mathjax>$t_1$</mathjax> to <mathjax>$t_2$</mathjax>, with <mathjax>$t_0 \in [t_1, t_2]$</mathjax>. We
  895 +want to show that this sequence is a Cauchy sequence, and we're going to
  896 +rely on our knowledge that the space these functions are defined in is a
  897 +Banach space (hence this sequence converges to something in the space).</p>
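<p>To make the construction concrete, here is a minimal numerical sketch (mine, not from the lecture) of the Picard iteration for the scalar example <mathjax>$\dot{x} = -x$</mathjax>, <mathjax>$x(0) = 1$</mathjax>, approximating the integral with the trapezoidal rule on a fixed time grid:</p>

```python
import numpy as np

def picard_iterates(f, x0, t_grid, num_iters):
    # Picard sequence x_{m+1}(t) = x0 + int_{t0}^t f(x_m(tau), tau) dtau,
    # on a grid starting at t0, using the trapezoidal rule for the integral
    xs = [np.full_like(t_grid, x0, dtype=float)]   # x_0(t) = x0, the constant function
    for _ in range(num_iters):
        prev = xs[-1]
        integrand = np.array([f(x, t) for x, t in zip(prev, t_grid)])
        increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t_grid)
        integral = np.concatenate(([0.0], np.cumsum(increments)))
        xs.append(x0 + integral)
    return xs

# dx/dt = -x, x(0) = 1, whose exact solution is exp(-t)
t = np.linspace(0.0, 1.0, 201)
iterates = picard_iterates(lambda x, tau: -x, 1.0, t, 12)
err = float(np.max(np.abs(iterates[-1] - np.exp(-t))))
```

The iterates rapidly approach the exact solution <mathjax>$e^{-t}$</mathjax>, and the gap between successive iterates shrinks at each step.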
  898 +<p>We have to put a norm on this space of functions, so we'll use the infinity
  899 +norm. Not going to prove it, but rather state that it's a Banach space. If we
  900 +show that this is a Cauchy sequence, then the limit of that Cauchy sequence
  901 +exists in the space. The reason that's interesting is that it's this limit
  902 +that provides a candidate solution for this differential equation.</p>
  903 +<p>We will then prove that this limit satisfies the DE/IC pair. That is
  904 +adequate to show existence. We'll then go on to prove uniqueness.</p>
  905 +<p>Our immediate goal is to show that this sequence is Cauchy; that is, we
  906 +should show that <mathjax>$\mag{x_{m+p} - x_m} \to 0$</mathjax> as <mathjax>$m$</mathjax> gets large, for every <mathjax>$p$</mathjax>.</p>
  907 +<p>First let us look at the difference between <mathjax>$x_{m+1}$</mathjax> and <mathjax>$x_m$</mathjax>. These are just
  908 +functions of time, and we can compute this. <mathjax>$\mag{x_{m+1} - x_m} =
  909 +\mag{\int_{t_0}^t (f(x_m, \tau) - f(x_{m-1}, \tau)) d\tau}$</mathjax>. Use the fact that <mathjax>$f$</mathjax>
  910 +is Lipschitz continuous, and so this is <mathjax>$\le \int_{t_0}^t k(\tau)\mag{x_m(\tau) -
  911 +x_{m-1}(\tau)} d\tau$</mathjax>. The Lipschitz function <mathjax>$k$</mathjax> is piecewise continuous, so it
  912 +has a supremum on this interval. Let <mathjax>$\bar{k}$</mathjax> be the supremum of <mathjax>$k$</mathjax> over
  913 +the whole interval <mathjax>$[t_1, t_2]$</mathjax>. This means that we can take this
  914 +inequality and rewrite it as <mathjax>$\mag{x_{m+1} - x_m} \le \bar{k} \int_{t_0}^t
  915 +\mag{x_m(\tau) - x_{m-1}(\tau)} d\tau$</mathjax>. Now we have a bound that relates
  916 +the distance between one pair of consecutive elements to the distance between
  917 +the previous pair; by iterating this, we can relate the distance between
  918 +subsequent elements to distances further back.</p>
  919 +<p>Let us do two things: sort out the integral on the right-hand-side, then
  920 +look at arbitrary elements beyond an index.</p>
  921 +<p>We know that <mathjax>$x_1(t) = x_0 + \int_{t_0}^t f(x_0, \tau) d\tau$</mathjax>, and that <mathjax>$\mag{x_1
  922 +- x_0} \le \int_{t_0}^{t} \mag{f(x_0, \tau)} d\tau \le \int_{t_1}^{t_2}
  923 + \mag{f(x_0, \tau)} d\tau \defequals M$</mathjax>. From the above inequalities,
  924 + <mathjax>$\mag{x_2 - x_1} \le M \bar{k}\abs{t - t_0}$</mathjax>. Now I can look at general
  925 + bounds: <mathjax>$\mag{x_3 - x_2} \le \frac{M\bar{k}^2 \abs{t - t_0}^2}{2!}$</mathjax>. In general,
  926 + <mathjax>$\mag{x_{m+1} - x_m} \le \frac{M\parens{\bar{k} \abs{t - t_0}}^m}{m!}$</mathjax>.</p>
  927 +<p>If we look at the norm of <mathjax>$x_{m+1} - x_m$</mathjax> as a function of time, that is going
  928 +to be a function norm. What I've been doing up to now is look at a particular value <mathjax>$t_1 &lt; t
  929 +&lt; t_2$</mathjax>.</p>
  930 +<p>Try to relate this to the norm <mathjax>$\mag{x_{m+1} - x_m}_\infty$</mathjax>. Can what we've
  931 +done so far give us a bound on the difference between two functions? We
  932 +can, because the infinity norm of a function is the maximum value that the
  933 +function assumes (maximum vector norm for all points <mathjax>$t$</mathjax> in the interval
  934 +we're interested in). If we let <mathjax>$T$</mathjax> be the difference between our larger
  935 +bound <mathjax>$t_2 - t_1$</mathjax>, we can use the previous result on the pointwise norm,
  936 +then a bound on the function norm has to be less than the same
  937 +bound, i.e. if a pointwise norm function is less than this bound for all
  938 +relevant <mathjax>$t$</mathjax>, then its max value must be less than this bound.</p>
  939 +<p>That gets us on the road we want to be, since that now gets us a bound. We
  940 +can now go back to where we started. What we're actually interested in is
  941 +given an index <mathjax>$m$</mathjax>, we can construct a bound on all later elements in the
  942 +sequence.</p>
  943 +<p><mathjax>$\mag{x_{m+p} - x_m}_\infty = \mag{x_{m+p} - x_{m+p-1} + x_{m+p-1} - \ldots -
  944 +x_m}_\infty = \mag{\sum_{k=0}^{p-1} (x_{m+k+1} - x_{m+k})}_\infty \le M \sum_{k=0}^{p-1}
  945 +\frac{(\bar{k}T)^{m+k}}{(m+k)!}$</mathjax>.</p>
  946 +<p>We're going to recall a few things from undergraduate calculus: Taylor
  947 +expansion of the exponential function and <mathjax>$(m+k)! \ge m!k!$</mathjax>.</p>
  948 +<p>With these, we can say that <mathjax>$\mag{x_{m+p} - x_m}_\infty \le
  949 +M\frac{(\bar{k}T)^m}{m!} e^{\bar{k} T}$</mathjax>. What we'd like to show is that this
  950 +can be made arbitrarily small as <mathjax>$m$</mathjax> gets large. We study this bound as <mathjax>$m
  951 +\to \infty$</mathjax>, and we recall that we can use the Stirling approximation,
  952 +which shows that factorial grows faster than the exponential function. That
  953 +is enough to show that <mathjax>$\{x_m\}_0^\infty$</mathjax> is Cauchy. Since it is in a
  954 +Banach space (not proving, since beyond our scope), it converges to
  955 +something in the space to a function (call it <mathjax>$x^\ell$</mathjax>) in the same
  956 +space.</p>
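<p>A quick sanity check (with made-up constants <mathjax>$M = 1$</mathjax>, <mathjax>$\bar{k} = 2$</mathjax>, <mathjax>$T = 1$</mathjax>, not from the lecture) that this bound really does go to zero as <mathjax>$m$</mathjax> grows:</p>

```python
import math

def cauchy_tail_bound(M, kbar, T, m):
    # The bound ||x_{m+p} - x_m||_inf <= M (kbar T)^m e^{kbar T} / m! from the notes
    return M * (kbar * T) ** m * math.exp(kbar * T) / math.factorial(m)

bounds = [cauchy_tail_bound(1.0, 2.0, 1.0, m) for m in range(31)]
# m! eventually dominates (kbar T)^m, so the bound decays to zero
```

The ratio of consecutive bounds is <mathjax>$\bar{k}T/(m+1)$</mathjax>, which is eventually less than 1 no matter how large <mathjax>$\bar{k}T$</mathjax> is.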
  957 +<p>Now we just need to show that the limit <mathjax>$x^\ell$</mathjax> solves the differential
  958 +equation (and initial condition). Let's go back to the sequence that
  959 +determines <mathjax>$x^\ell$</mathjax>. <mathjax>$x_{m+1} = x_0 + \int_{t_0}^t f(x_m, \tau)
  960 +d\tau$</mathjax>. We've proven that this limit converges to <mathjax>$x^\ell$</mathjax>. What we want to
  961 +show is that if we evaluate <mathjax>$f(x^\ell, t)$</mathjax>, then <mathjax>$\int_{t_0}^t f(x_m, \tau)
  962 +\to \int_{t_0}^t f(x^\ell, \tau) d\tau$</mathjax>. Would be immediate if we had that
  963 +the function were continuous. Clear that it satisfies initial condition by
  964 +the construction of the sequence, but we need to show that it satisfies the
  965 +differential equation. Conceptually, this is probably more difficult than
  966 +what we've just done (establishing bounds, Cauchy sequences). Thinking
  967 +about what that function limit is and what it means for it to satisfy that
  968 +differential equation.</p>
  969 +<p>Now, you can basically use some of the machinery we've been using all along
  970 +to show this. Difference between these goes to <mathjax>$0$</mathjax> as <mathjax>$m$</mathjax> gets large.</p>
  971 +<p><mathjax>$$\mag{\int_{t_0}^t (f(x_m, \tau) - f(x^\ell, \tau)) d\tau}
  972 +\\ \le \int_{t_0}^t k(\tau) \mag{x_m - x^\ell} d\tau \le \bar{k}\mag{x_m - x^\ell}_\infty T
  973 +\\ \le \bar{k} M e^{\bar{k} T} \frac{(\bar{k} T)^m}{m!}T
  974 +$$</mathjax></p>
  975 +<p>Thus <mathjax>$x^\ell$</mathjax> solves the DE/IC pair. A solution <mathjax>$\Phi$</mathjax> is <mathjax>$x^\ell$</mathjax>,
  976 +i.e. <mathjax>$\dot{x}^\ell(t) = f(x^\ell(t), t) \forall t \in [t_1, t_2] \setminus D$</mathjax> and <mathjax>$x^\ell(t_0) =
  977 +x_0$</mathjax>.</p>
  978 +<p>To show that this solution is unique, we will use the Bellman-Gronwall
  979 +lemma, which is very important. Used ubiquitously when you want to show
  980 +that functions of time are equal to each other: candidate mechanism to do
  981 +that.</p>
  982 +<h2>Bellman-Gronwall Lemma</h2>
  983 +<p>Let <mathjax>$u, k$</mathjax> be real-valued positive piece-wise continuous functions of time,
  984 +and we'll have a constant <mathjax>$c_1 \ge 0$</mathjax> and <mathjax>$t_0 \ge 0$</mathjax>. If we have such
  985 +constants and functions, then the following is true: if <mathjax>$u(t) \le c_1 +
  986 +\int_{t_0}^t k(\tau)u(\tau) d\tau$</mathjax>, then <mathjax>$u(t) \le c_1 e^{\int_{t_0}^t
  987 +k(\tau) d\tau}$</mathjax>.</p>
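<p>A small numerical check of the lemma (my own example, with arbitrary constants): for constant <mathjax>$k$</mathjax>, the function <mathjax>$u(t) = c_1 e^{k(t - t_0)}$</mathjax> satisfies the hypothesis with equality, so the conclusion is tight:</p>

```python
import numpy as np

# Equality case of Bellman-Gronwall: u(t) = c1 * exp(k (t - t0)) satisfies
# u(t) = c1 + int_{t0}^t k u(tau) dtau, so the bound in the lemma holds with equality.
c1, k, t0 = 2.0, 0.7, 0.0
t = np.linspace(t0, 3.0, 2001)
u = c1 * np.exp(k * (t - t0))
integrand = k * u
# cumulative trapezoidal approximation of int_{t0}^t k u dtau
cumint = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
rhs = c1 + cumint
gap = float(np.max(np.abs(u - rhs)))
```

Any <mathjax>$u$</mathjax> below this extremal function on the left-hand side stays below the exponential bound on the right.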
  988 +<h2>Proof (of B-G)</h2>
  989 +<p><mathjax>$t &gt; t_0$</mathjax> WLOG.</p>
  990 +<p><mathjax>$$U(t) = c_1 + \int_{t_0}^t k(\tau) u(\tau) d\tau
  991 +\\ u(t) \le U(t), \quad \dot{U}(t) = k(t)u(t) \le k(t)U(t)
  992 +\\ \deriv{}{t}\parens{U(t)e^{-\int_{t_0}^t k(\tau) d\tau}} = \parens{\dot{U}(t) - k(t)U(t)}e^{-\int_{t_0}^t k(\tau) d\tau} \le 0 \text{ (then integrate this derivative, noting that } U(t_0) = c_1 \text{)}
  993 +\\ u(t) \le U(t) \le c_1 e^{\int_{t_0}^t k(\tau) d\tau}
  994 +$$</mathjax></p>
  996 +<h2>Using this to prove uniqueness of DE/IC solutions</h2>
  997 +<p>Here is how we're going to use the B-G lemma to prove uniqueness.</p>
  998 +<p>We have a solution that we constructed <mathjax>$\Phi$</mathjax>, and someone else gives us a
  999 +solution <mathjax>$\Psi$</mathjax>, constructed via a different method. Show that these must
  1000 +be equivalent. Since they're both solutions, they have to satisfy the DE/IC
  1001 +pair. Take the norm of the difference between the differential equations.</p>
  1002 +<p><mathjax>$$\mag{\Phi - \Psi} \le \bar{k} \int_{t_0}^t \mag{\Phi - \Psi} d\tau \forall
  1003 +t_0, t \in [t_1, t_2]$$</mathjax></p>
  1004 +<p>From the Bellman-Gronwall Lemma, we can rewrite this inequality as
  1005 +<mathjax>$\mag{\Phi - \Psi} \le c_1 e^{\bar{k}(t - t_0)}$</mathjax>. Since <mathjax>$c_1 = 0$</mathjax>, this
  1006 +norm is less than or equal to 0. By positive definiteness, this norm must
  1007 +be equal to 0, and so the functions are equal to each other.</p>
  1008 +<h2>Reverse time differential equation</h2>
  1009 +<p>We think about time as monotonic (either increasing or decreasing, usually
  1010 +increasing). Suppose that time is decreasing, and that we have <mathjax>$\dot{x} =
  1011 +f(x,t)$</mathjax>; we want to explore existence and uniqueness going
  1012 +backwards in time. Suppose we had a time variable <mathjax>$\tau$</mathjax> which goes from
  1013 +<mathjax>$t_0$</mathjax> backwards, defined as <mathjax>$\tau \defequals t_0 - t$</mathjax>. We want to define
  1014 +the solution to that differential equation backwards in time as <mathjax>$z(\tau) =
  1015 +x(t_0 - \tau)$</mathjax> for <mathjax>$t &lt; t_0$</mathjax>. Derive what the reverse-time derivative is: the equation
  1016 +is just <mathjax>$-f$</mathjax>; we're going to use <mathjax>$\bar{f}$</mathjax> to represent this
  1017 +function (<mathjax>$\deriv{}{\tau}z = -\deriv{}{t}x = -f(x, t) = -f(z, t_0 - \tau)
  1018 +\defequals \bar{f}(z, \tau)$</mathjax>).</p>
  1019 +<p>This equation, if I solve the reverse time differential equation, we'll
  1020 +have some corresponding backwards solution. Concluding statement: can think
  1021 +about solutions forwards and backwards in time. Existence of unique
  1022 +solution forward in time means existence of unique solution backward in
  1023 +time (and vice versa). You can't have solutions crossing themselves in
  1024 +time-invariant systems.</p>
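<p>Here is a sketch of this idea (mine, not the lecture's): integrate an arbitrary scalar ODE forward with RK4, then integrate the reverse-time equation (here reversing from the endpoint rather than from <mathjax>$t_0$</mathjax>) and recover the initial state:</p>

```python
import numpy as np

def rk4(f, x0, ts):
    # classic fourth-order Runge-Kutta integration of dx/dt = f(x, t) over the grid ts
    xs = [float(x0)]
    for t, t_next in zip(ts[:-1], ts[1:]):
        h, x = t_next - t, xs[-1]
        k1 = f(x, t)
        k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(x + h * k3, t + h)
        xs.append(x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(xs)

f = lambda x, t: -x + np.sin(t)              # arbitrary forward dynamics
x0, t0, t1 = 1.0, 0.0, 2.0
x_fwd = rk4(f, x0, np.linspace(t0, t1, 401))

# reverse time: z(sigma) = x(t1 - sigma) satisfies dz/dsigma = -f(z, t1 - sigma)
fbar = lambda z, sigma: -f(z, t1 - sigma)
z = rk4(fbar, x_fwd[-1], np.linspace(0.0, t1 - t0, 401))
recovered_x0 = z[-1]
```

Running the reversed equation from the forward endpoint lands back on the initial condition, illustrating the forward/backward correspondence.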
  1025 +<p><a name='9'></a></p>
  1026 +<h1>EE 221A: Linear System Theory</h1>
  1027 +<h2>September 20, 2012</h2>
  1028 +<p>Introduction to dynamical systems. Suppose we have equations <mathjax>$\dot{x} =
  1029 +f(x, u, t)$</mathjax>, <mathjax>$\fn{f}{\Re^n \times \Re^{n_i} \times \Re_+}{\Re^n}$</mathjax> and <mathjax>$y = h(x,
  1030 +u, t)$</mathjax>, <mathjax>$\fn{h}{\Re^n \times \Re^{n_i} \times \Re_+}{\Re^{n_o}}$</mathjax>. We define <mathjax>$n_i$</mathjax> as
  1031 +the dimension of the input space, <mathjax>$n_o$</mathjax> as the dimension of the output space,
  1032 +and <mathjax>$n$</mathjax> as the dimension of the state space.</p>
  1033 +<p>We've looked at the form, and if we specify a particular <mathjax>$\bar{u}(t)$</mathjax> over some
  1034 +time interval of interest, then we can plug this into the right hand side
  1035 +of this differential equation. Typically we do not supply a particular
  1036 +input. Thinking about solutions to this differential equation, for now,
  1037 +let's suppose that it's specified.</p>
  1038 +<p>Suppose we have some feedback function of the state. If <mathjax>$u$</mathjax> is specified,
  1039 +as long as <mathjax>$\bar{f}$</mathjax> satisfies the conditions for the existence and
  1040 +uniqueness theorem, we have a differential equation we can solve.</p>
  1041 +<p>Another example: instead of differential equation (which corresponds to
  1042 +continuous time), we have a difference equation (which corresponds to
  1043 +discrete time).</p>
  1044 +<p>Example: dynamic system represented by an LRC circuit. One practical way to
  1045 +define the state <mathjax>$x$</mathjax> is as a vector of elements whose derivatives appear in
  1046 +our differential equation. Not formal, but practical for this example.</p>
  1047 +<p>Notions of discretizing.</p>
  1048 +<h2>What is a dynamical system?</h2>
  1049 +<p>As discussed in first lecture, we consider time <mathjax>$\Tau$</mathjax> to be a privileged
  1050 +variable. Based on our definition of time, the inputs and outputs are all
  1051 +functions of time.</p>
  1052 +<p>Now we're going to define a <strong>dynamical system</strong> as a 5-tuple: <mathjax>$(\mathcal{U},
  1053 +\Sigma, \mathcal{Y}, s, r)$</mathjax> (input space, state space, output space, state
  1054 +transition function, output map).</p>
  1055 +<p>We define the <strong>input space</strong> as the set of input functions over time to an
  1056 +input set <mathjax>$U$</mathjax> (i.e. <mathjax>$\mathcal{U} = \{\fn{u}{\Tau}{U}\}$</mathjax>); typically, <mathjax>$U =
  1057 +\Re^{n_i}$</mathjax>.</p>
  1058 +<p>We also define the <strong>output space</strong> as the set of output functions over time to
  1059 +an output set <mathjax>$Y$</mathjax> (i.e. <mathjax>$\mathcal{Y} = \{\fn{y}{\Tau}{Y}\}$</mathjax>). Typically, <mathjax>$Y
  1060 += \Re^{n_o}$</mathjax>.</p>
  1061 +<p><mathjax>$\Sigma$</mathjax> is our <strong>state space</strong>. Not defined as the function, but the actual
  1062 +state space. Typically, <mathjax>$\Sigma = \Re^n$</mathjax>, and we can go back and think
  1063 +about the function <mathjax>$x(t) \in \Sigma$</mathjax>. <mathjax>$\fn{x}{\Tau}{\Sigma}$</mathjax> is called the
  1064 +state trajectory.</p>
  1065 +<p><mathjax>$s$</mathjax> is called the <strong>state transition function</strong> because it defines how the
  1066 +state changes in response to time and the initial state and the
  1067 +input. <mathjax>$\fn{s}{\Tau \times \Tau \times \Sigma \times U }{\Sigma}$</mathjax>. Usually
  1068 +we write this as <mathjax>$x(t_1) = s(t_1, t_0, x_0, u)$</mathjax>, where <mathjax>$u$</mathjax> is the function
  1069 +<mathjax>$u(\cdot) |_{t_0}^{t_1}$</mathjax>. This is important: coming towards how we define
  1070 +state. Only things you need to get to state at the new time are the initial
  1071 +state, inputs, and dynamics.</p>
  1072 +<p>Finally, we have this <strong>output map</strong> (sometimes called the readout map)
  1073 +<mathjax>$r$</mathjax>. <mathjax>$\fn{r}{\Tau \times \Sigma \times U}{Y}$</mathjax>. That is, we can think about
  1074 +<mathjax>$y(t) = r(t, x(t), u(t))$</mathjax>. There's something fundamentally different
  1075 +between <mathjax>$r$</mathjax> and <mathjax>$s$</mathjax>. <mathjax>$s$</mathjax> depended on the function <mathjax>$u$</mathjax>, whereas <mathjax>$r$</mathjax> only
  1076 +depended on the current value of <mathjax>$u$</mathjax> at a particular time.</p>
  1077 +<p><mathjax>$s$</mathjax> captures dynamics, while <mathjax>$r$</mathjax> is static. Remark: <mathjax>$s$</mathjax> has dynamics
  1078 +(memory) -- things that depend on previous time, whereas <mathjax>$r$</mathjax> is static:
  1079 +everything it depends on is at the current time (memoryless).</p>
  1080 +<p>In order to be a dynamical system, we need to satisfy two axioms: a
  1081 +dynamical system is a five-tuple with the following two axioms:</p>
  1082 +<ul>
  1083 +<li>The <strong>state transition axiom</strong>: <mathjax>$\forall t_1 \ge t_0$</mathjax>, given <mathjax>$u, \tilde{u}$</mathjax>
  1084 + that are equal to each other over a particular time interval, the state
  1085 + transition functions must be equal over that interval, i.e. <mathjax>$s(t_1, t_0,
  1086 + x_0, u) = s(t_1, t_0, x_0, \tilde{u})$</mathjax>. Requires us to not have
  1087 + dependence on the input outside of the time interval of interest.</li>
  1088 +<li>The <strong>semigroup axiom</strong>: suppose you start a system at <mathjax>$t_0$</mathjax> and evolve it to
  1089 + <mathjax>$t_2$</mathjax>, and you're considering the state. You have an input <mathjax>$u$</mathjax> defined
  1090 + over the whole time interval. If you were to look at an intermediate
  1091 + point <mathjax>$t_1$</mathjax>, and you computed the state at <mathjax>$t_1$</mathjax> via the state transition
  1092 + function, we can split our time interval into two intervals, and we can
  1093 + compute the result any way we like. Stated as the following: <mathjax>$s(t_2, t_1,
  1094 + s(t_1, t_0, x_0, u), u) = s(t_2, t_0, x_0, u)$</mathjax>.</li>
  1095 +</ul>
  1096 +<p>When we talk about a dynamical system, we have to satisfy these axioms.</p>
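<p>Both axioms are easy to check concretely for a toy discrete-time system, where the state transition function just iterates a difference equation (an illustrative sketch, not from the lecture):</p>

```python
def step(x, u_t):
    # one step of a toy scalar difference equation: x(t+1) = 0.5 x(t) + u(t)
    return 0.5 * x + u_t

def s(t1, t0, x0, u):
    # state transition function: iterate the difference equation from t0 to t1
    x = x0
    for t in range(t0, t1):
        x = step(x, u(t))
    return x

u = lambda t: float(t % 3)
x0, t0, t1, t2 = 2.0, 0, 4, 9

# semigroup axiom: going t0 -> t2 directly matches stopping at t1 on the way
direct = s(t2, t0, x0, u)
via_t1 = s(t2, t1, s(t1, t0, x0, u), u)

# state transition axiom: inputs agreeing on [t0, t1) give the same state at t1
u_alt = lambda t: u(t) if t < t1 else 99.0
axiom_holds = s(t1, t0, x0, u) == s(t1, t0, x0, u_alt)
```

The semigroup check holds exactly here because both evaluations perform the same sequence of steps in the same order.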
  1097 +<h2>Response function</h2>
  1098 +<p>Since we're interested in the outputs and not the states, we can define
  1099 +what we call the <strong>response map</strong>. It's not considered part of the definition
  1100 +of a dynamical system because it can be easily derived.</p>
  1101 +<p>It's the composition of the state transition function and the readout map,
  1102 +i.e. <mathjax>$y(t) = r(t, x(t), u(t)) = r(t, s(t, t_0, x_0, u), u(t)) \defequals
  1103 +\rho(t, t_0, x_0, u)$</mathjax>. This is an important function because it is used to
  1104 +define properties of a dynamical system. Why is that? We've said that
  1105 +states are somehow mysterious. Not something we typically care about:
  1106 +typically we care about the outputs. Thus we define properties like
  1107 +linearity and time invariance.</p>
  1108 +<h2>Time Invariance</h2>
  1109 +<p>We define a time-shift operator <mathjax>$\fn{T_\tau}{\mathcal{U}}{\mathcal{U}}$</mathjax>,
  1110 +<mathjax>$\fn{T_\tau}{\mathcal{Y}}{\mathcal{Y}}$</mathjax>. <mathjax>$(T_\tau u)(t) \defequals u(t -
  1111 +\tau)$</mathjax>. Namely, the value of <mathjax>$T_\tau u$</mathjax> is that of the old signal at
  1112 +<mathjax>$t-\tau$</mathjax>.</p>
  1113 +<p>A <strong>time-invariant</strong> (dynamical) system is one in which the input space and
  1114 +output space are closed under <mathjax>$T_\tau$</mathjax> for all <mathjax>$\tau$</mathjax>, and <mathjax>$\rho(t, t_0,
  1115 +x_0, u) = \rho(t + \tau, t_0 + \tau, x_0, T_\tau u)$</mathjax>.</p>
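<p>For instance (a sketch of mine using an arbitrary stable scalar LTI system, whose variation-of-constants response formula is assumed), the definition can be checked numerically: shifting the input by <mathjax>$\tau$</mathjax> and shifting both time arguments leaves the response unchanged:</p>

```python
import numpy as np

def rho(t, t0, x0, u, a=-0.4, n=4001):
    # response of the time-invariant scalar system dx/dt = a x + u, y = x,
    # using trapezoidal quadrature for the convolution integral
    taus = np.linspace(t0, t, n)
    integrand = np.exp(a * (t - taus)) * u(taus)
    zs = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(taus)))
    return np.exp(a * (t - t0)) * x0 + zs

u = lambda s: np.cos(2.0 * s)
tau = 0.7
u_shifted = lambda s: u(s - tau)     # (T_tau u)(t) = u(t - tau)

y1 = rho(3.0, 1.0, 0.8, u)
y2 = rho(3.0 + tau, 1.0 + tau, 0.8, u_shifted)
# time invariance: shifting the input and both time arguments leaves rho unchanged
```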
  1116 +<h2>Linearity</h2>
  1117 +<p>A <strong>linear</strong> dynamical system is one in which the input, state, and output
  1118 +spaces are all linear spaces over the same field <mathjax>$\mathbb{F}$</mathjax>, and the
  1119 +response map <mathjax>$\rho$</mathjax> is a linear map of <mathjax>$\Sigma \times \mathcal{U}$</mathjax> into
  1120 +<mathjax>$\mathcal{Y}$</mathjax>.</p>
  1121 +<p>This is a strict requirement: you have to check that the response map
  1122 +satisfies these conditions. Question that comes up: why do we define
  1123 +linearity of a dynamical system in terms of linearity of the response and
  1124 +not the state transition function? Goes back to a system being
  1125 +intrinsically defined by its inputs and outputs. Often states, you can have
  1126 +many different ways to define states. Typically we can't see all of
  1127 +them. It's accepted that when we talk about a system and think about its
  1128 +I/O relations, it makes sense that we define linearity in terms of this
  1129 +memory function of the system, as opposed to the state transition function.</p>
  1130 +<p>Let's just say a few remarks about this: <strong>zero-input response</strong>,
  1131 +<strong>zero-state response</strong>. If we look at the zero element in our spaces (so
  1132 +we have a zero vector), then we can take our superposition, which implies
  1133 +that the response at time <mathjax>$t$</mathjax> is equal to the zero-state response, which is
  1134 +the response, given that we started at the zero state, plus the zero input
  1135 +response.</p>
  1136 +<p>That is: <mathjax>$\rho(t, t_0, x_0, u) = \rho(t, t_0, \theta_x, u) + \rho(t, t_0,
  1137 +x_0, \theta_u)$</mathjax> (from the definition of linearity).</p>
  1138 +<p>The second remark is that the zero-state response is linear in the input,
  1139 +and similarly, the zero-input response is linear in the state.</p>
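<p>A numerical sketch of this decomposition (my own example: the scalar linear system <mathjax>$\dot{x} = ax + u$</mathjax>, <mathjax>$y = x$</mathjax>, whose variation-of-constants response formula is assumed rather than derived here):</p>

```python
import numpy as np

def response(t, t0, x0, u, a=-0.5, n=2001):
    # rho(t, t0, x0, u) for the scalar linear system dx/dt = a x + u, y = x,
    # with trapezoidal quadrature for the zero-state integral
    taus = np.linspace(t0, t, n)
    integrand = np.exp(a * (t - taus)) * u(taus)
    zero_state = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(taus)))
    zero_input = np.exp(a * (t - t0)) * x0
    return zero_input + zero_state

u = lambda tau: np.sin(3.0 * tau)
full = response(2.0, 0.0, 1.5, u)
zs = response(2.0, 0.0, 0.0, u)                               # zero-state response
zi = response(2.0, 0.0, 1.5, lambda tau: np.zeros_like(tau))  # zero-input response
# superposition: full response = zero-state response + zero-input response
```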
  1140 +<p>One more property of dynamical systems before we finish: <strong>equivalence</strong> (a
  1141 +property derived from the definition). Take two dynamical systems <mathjax>$D = (U,
  1142 +\Sigma, Y, s, r), \tilde{D} = (U, \bar{\Sigma}, Y, \bar{s}, \bar{r})$</mathjax>. A state <mathjax>$x_0
  1143 +\in \Sigma$</mathjax> is equivalent to <mathjax>$\tilde{x}_0 \in \bar{\Sigma}$</mathjax> at <mathjax>$t_0$</mathjax> if <mathjax>$\forall t
  1144 +\ge t_0, \rho(t, t_0, x_0, u) = \tilde{\rho}(t, t_0, \tilde{x}_0, u)$</mathjax> for every input <mathjax>$u$</mathjax>. If
  1145 +every <mathjax>$x_0$</mathjax> has some equivalent <mathjax>$\tilde{x}_0$</mathjax> (and vice versa), the two systems are equivalent.</p>
  1146 +<p><a name='10'></a></p>
  1147 +<h1>EE 221A: Linear System Theory</h1>
  1148 +<h2>September 25, 2012</h2>
  1149 +<h2>Linear time-varying systems</h2>
  1150 +<p>Recall that the state transition function gives the state as a function of the current
  1151 +time, the initial state, the initial time, and the inputs. Suppose you have a
  1152 +differential equation; how do you acquire the state transition function?
  1153 +Solve the differential equation.</p>
  1154 +<p>For a general dynamical system, there are different ways to get the state
  1155 +transition function. This is an instantiation of a dynamical system, and
  1156 +we're going to get the state transition function by solving the
  1157 +differential equation / initial condition pair.</p>
  1158 +<p>We're going to call <mathjax>$\dot{x}(t) = A(t)x(t) + B(t)u(t)$</mathjax> a vector
  1159 +differential equation with initial condition <mathjax>$x(t_0) = x_0$</mathjax>.</p>
  1160 +<p>So that requires us to think about solving that differential equation. Do a
  1161 +dimension check, to make sure we know the dimensions of the matrices. <mathjax>$x
  1162 +\in \Re^n$</mathjax>, so <mathjax>$A \in \Re^{n \times n}$</mathjax>. We could define the matrix
  1163 +function <mathjax>$A$</mathjax>, which takes points of the real line and maps them over to
  1164 +matrices. As a function, <mathjax>$A$</mathjax> is a piecewise-continuous matrix function in
  1165 +time.</p>
  1166 +<p>The entries are piecewise-continuous scalars in time. We would like to get
  1167 +at the state transition function; to do that, we need to solve the
  1168 +differential equation.</p>
  1169 +<p>Let's assume for now that <mathjax>$A, B, U$</mathjax> are given (part of the system
  1170 +definition).</p>
  1171 +<p>Piece-wise continuous is trivial; we can use the induced norm of <mathjax>$A$</mathjax> for a
  1172 +Lipschitz condition. Since this induced norm is piecewise-continuous in
  1173 +time, this is a fine bound. Therefore <mathjax>$f$</mathjax> is globally Lipschitz continuous.</p>
  1174 +<p>We're going to back off for a bit and introduce the state transition
  1175 +matrix. Background for solving the VDE. We're going to introduce a matrix
  1176 +differential equation, <mathjax>$\dot{X} = A(t) X$</mathjax> (where <mathjax>$A(t)$</mathjax> is same as before).</p>
  1177 +<p>I'm going to define <mathjax>$\Phi(t, t_0)$</mathjax> as the solution to the matrix
  1178 +differential equation (MDE) for the initial condition <mathjax>$\Phi(t_0, t_0) =
  1179 +1_{n \times n}$</mathjax>. That is, I'm going to define <mathjax>$\Phi$</mathjax> as the solution to the <mathjax>$n
  1180 +\times n$</mathjax> matrix differential equation when it starts out at the identity
  1181 +matrix.</p>
  1182 +<p>Let's first talk about properties of this matrix <mathjax>$\Phi$</mathjax> just from the
  1183 +definition we have.</p>
  1184 +<ul>
  1185 +<li>If you go back to the vector differential equation, and let's just drop
  1186 + the term that depends on <mathjax>$u$</mathjax> (either consider <mathjax>$B$</mathjax> to be 0, or the input
  1187 + to be 0), the solution of <mathjax>$\dot{x} = A(t)x(t)$</mathjax> is given by <mathjax>$x(t) =
  1188 + \Phi(t, t_0)x_0$</mathjax>.</li>
  1189 +<li>This is what we call the semigroup property, since it's reminiscent of
  1190 + the semigroup axiom. <mathjax>$\Phi(t, t_0) = \Phi(t, t_1) \Phi(t_1, t_0) \forall
  1191 + t, t_0, t_1 \in \Re^+$</mathjax></li>
  1192 +<li><mathjax>$\Phi^{-1}(t, t_0) = \Phi(t_0, t)$</mathjax>.</li>
  1193 +<li><mathjax>$\text{det} \Phi(t, t_0) = \exp\parens{\int_{t_0}^t \text{tr} \parens{A
  1194 + (\tau)} d\tau}$</mathjax>.</li>
  1195 +</ul>
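<p>As a sanity check (not part of the lecture), the semigroup, inverse, and Jacobi-Liouville properties can be verified numerically in the time-invariant case, where <mathjax>$\Phi(t, t_0) = e^{A(t - t_0)}$</mathjax>. The matrix below is an arbitrary choice for illustration:</p>

```python
import math

# 2x2 helpers in pure Python, enough to check the properties numerically.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, terms=40):
    # Truncated Taylor series for exp(M); fine for the small matrices here.
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][p] * M[p][j] for p in range(2)) / k
                 for j in range(2)] for i in range(2)]
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv(M):
    d = det(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def close(X, Y, tol=1e-8):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

A = [[0.0, 1.0], [-2.0, -3.0]]   # arbitrary constant A

def phi(t, t0):
    # Time-invariant case: Phi(t, t0) = exp(A (t - t0))
    return expm([[a * (t - t0) for a in row] for row in A])

t0, t1, t = 0.0, 0.7, 1.5
assert close(phi(t, t0), mul(phi(t, t1), phi(t1, t0)))      # semigroup property
assert close(inv(phi(t, t0)), phi(t0, t))                   # inverse property
trace_A = A[0][0] + A[1][1]
assert abs(det(phi(t, t0)) - math.exp(trace_A * (t - t0))) < 1e-8  # Jacobi-Liouville
```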
  1196 +<p>Now let's talk about some machinery we can invoke when
  1197 +we want to show that two functions of time are equal to each other when
  1198 +they're both solutions to the differential equation. You can simply show by
  1199 +the existence and uniqueness theorem (assuming it applies) that they
  1200 +satisfy the same initial condition and the same differential
  1201 +equation. That's an important point, and we tend to use it a lot.</p>
  1202 +<p>(i.e. when faced with showing that two functions of time are equal to each
  1203 +other, you can show that they both satisfy the same initial condition and
  1204 +the same differential equation [as long as the differential equation
  1205 +satisfies the hypotheses of the existence and uniqueness theorem])</p>
  1206 +<p>Obvious, but good to state.</p>
  1207 +<p>Note: the initial condition doesn't have to be the initial condition given;
  1208 +it just has to hold at one point in the interval. Pick your point in time
  1209 +judiciously.</p>
  1210 +<p>Proof of (2): check <mathjax>$t=t_1$</mathjax>. (3) follows directly from (2). (4) you can
  1211 +look at if you want. Gives you a way to compute <mathjax>$\Phi(t, t_0)$</mathjax>. We've
  1212 +introduced a matrix differential equation and an abstract solution.</p>
  1213 +<p>Consider (1). <mathjax>$\Phi(t, t_0)$</mathjax> is a map that takes the initial state and
  1214 +transitions to the new state. Thus we call <mathjax>$\Phi$</mathjax> the <strong>state transition
  1215 +matrix</strong> because of what it does to the states of this vector differential
  1216 +equation: it transfers them from their initial value to their final value,
  1217 +and it transfers them through matrix multiplication.</p>
  1218 +<p>Let's go back to the original differential equation. Claim that the
  1219 +solution to that differential equation has the following form: <mathjax>$x(t) =
  1220 +\Phi(t, t_0)x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)u(\tau) d\tau$</mathjax>. Proof:
  1221 +we can use the same machinery. If someone gives you a candidate solution,
  1222 +you can easily show that it is the solution.</p>
  1223 +<p>Recall the Leibniz rule, which we'll state in general as follows:
  1224 +<mathjax>$\pderiv{}{z} \int_{a(z)}^{b(z)} f(x, z) dx = \int_{a(z)}^{b(z)}
  1225 +\pderiv{}{z}f(x, z) dx + \pderiv{b}{z} f(b, z) - \pderiv{a}{z} f(a, z)$</mathjax>.</p>
  1226 +<p><mathjax>$$
  1227 +\dot{x}(t) &amp; = A(t) \Phi(t, t_0) x_0 + \int_{t_0}^t
  1228 +\pderiv{}{t} \parens{\Phi(t, \tau)B(\tau)u(\tau)} d\tau +
  1229 +\pderiv{t}{t}\parens{\Phi(t, t)B(t)u(t)} - \pderiv{t_0}{t}\parens{...}
  1230 +\\ &amp; = A(t)\Phi(t, t_0)x_0 + \int_{t_0}^t A(t)\Phi(t,\tau)B(\tau)u(\tau)d\tau + B(t)u(t)
  1231 +\\ &amp; = A(t)\Phi(t, t_0) x_0 + A(t)\int_{t_0}^t \Phi(t, \tau)B(\tau)
  1232 +u(\tau) d\tau + B(t) u(t)
  1233 +\\ &amp; = A(t)\parens{\Phi(t, t_0) x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)
  1234 +u(\tau) d\tau} + B(t) u(t)
  1235 +$$</mathjax></p>
  1236 +<p><mathjax>$x(t) = \Phi(t,t_0)x_0 + \int_{t_0}^t \Phi(t,\tau)B(\tau)u(\tau) d\tau$</mathjax> is
  1237 +good to remember.</p>
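<p>A quick numerical check of this formula (a sketch, not from the lecture): in the scalar time-invariant case, <mathjax>$\Phi(t, t_0) = e^{a(t - t_0)}$</mathjax>, so the variation-of-constants formula can be compared against direct numerical integration of the ODE. All constants and the input below are arbitrary choices:</p>

```python
import math

# Scalar case (n = 1): x' = a x + b u(t), so Phi(t, t0) = exp(a (t - t0)).
a, b, x0, t0, T = -1.5, 2.0, 1.0, 0.0, 2.0
def u(t):
    return math.sin(3.0 * t)

N = 200_000
h = (T - t0) / N

# Variation-of-constants formula, with the integral done by the trapezoid rule
integral = 0.0
for k in range(N):
    ta, tb = t0 + k * h, t0 + (k + 1) * h
    integral += 0.5 * h * (math.exp(a * (T - ta)) * b * u(ta)
                           + math.exp(a * (T - tb)) * b * u(tb))
x_formula = math.exp(a * (T - t0)) * x0 + integral

# Direct forward-Euler integration of the same ODE on the same grid
x = x0
for k in range(N):
    t = t0 + k * h
    x += h * (a * x + b * u(t))

assert abs(x - x_formula) < 1e-3   # the two solutions agree
```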
  1238 +<p>Not surprisingly, it depends on the input function over an interval of
  1239 +time.</p>
  1240 +<p>The differential equation is changing over time; therefore the system
  1241 +itself is time-varying. In general it will not be time-invariant,
  1242 +since the equation that defines its evolution is changing. You test
  1243 +time-invariance or time-variance through the response map. But is it
  1244 +linear? You have the state transition function, so we can compute the
  1245 +response function (recall: readout map composed with the state transition
  1246 +function) and ask if this is a linear map.</p></div><div class='pos'></div>
856 1247 <script src='mathjax/unpacked/MathJax.js?config=default'></script>
857 1248 <script type="text/x-mathjax-config">
858 1249 MathJax.Hub.Register.StartupHook("TeX Jax Ready",function () {
96 fa2012/cs150/10.md
... ... @@ -0,0 +1,96 @@
  1 +CS 150: Digital Design & Computer Architecture
  2 +==============================================
  3 +September 20, 2012
  4 +------------------
  5 +
  6 +Non-overlapping clocks. n-phase means that you've got n different outputs,
  7 +and at most one high at any time. Guaranteed dead time between when one
  8 +goes low and next goes high.
  9 +
  10 +K-maps
  11 +------
  12 +Finding minimal sum-of-products and product-of-sums expressions for
  13 +functions. **On-set**: all the ones of a function; **implicant**: one or
  14 +more circled ones in the onset; a **minterm** is the smallest implicant you
  15 +can have, and they go up by powers of two in the number of things you can
  16 +have; a **prime implicant** can't be combined with another (by circling);
  17 +an **essential prime implicant** is a prime implicant that contains at
  18 +least one one not in any other prime implicant. A **cover** is any
  19 +collection of implicants that contains all of the ones in the on-set, and a
  20 +**minimal cover** is one made up of essential prime implicants and the
  21 +minimum number of implicants.
  22 +
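The circling step can be sketched as phase 1 of the Quine-McCluskey algorithm (a minimal sketch, not a full minimizer): repeatedly merge implicants that differ in exactly one literal, via the combining theorem $XA + X\bar{A} = X$; anything that never merges is a prime implicant.

```python
# Phase 1 of Quine-McCluskey: merge implicants that differ in exactly one
# bit; whatever never merges is a prime implicant. '-' marks a dropped
# variable (a bigger circle on the K-map).
def prime_implicants(minterms, nbits):
    terms = {format(m, "0{}b".format(nbits)) for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for s in terms:
            for t in terms:
                diff = [i for i in range(nbits) if s[i] != t[i]]
                if len(diff) == 1 and "-" not in (s[diff[0]], t[diff[0]]):
                    merged.add(s[:diff[0]] + "-" + s[diff[0] + 1:])
                    used.update((s, t))
        primes |= terms - used
        terms = merged
    return primes

# f(a, b, c) = sum of minterms 1, 3, 5, 7 reduces to the single literal c:
assert prime_implicants({1, 3, 5, 7}, 3) == {"--1"}
```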
  23 +Hazards vs. glitches. Glitches are when timing issues result in dips (or
  24 +spikes) in the output; hazards are the possibility that such glitches might
  25 +occur. Completely irrelevant in synchronous logic.
  26 +
  27 +Project
  28 +-------
  29 +3-stage pipeline MIPS150 processor. Serial port, graphics accelerator. If
  30 +we look at the datapath elements, the storage elements, you've got your
  31 +program counter, your instruction memory, register file, and data
  32 +memory. Figure 7.1 from the book. If you mix that in with figure 8.28,
  33 +which talks about MMIO, that data memory, there's an address and data bus
  34 +that this is hooked up to, and if you want to talk to a serial port on a
  35 +MIPS processor (or an ARM processor, or something like that), you don't
  36 +address a particular port (not like x86). Most ports are
  37 +memory-mapped. Actually got a MMIO module that is also hooked up to the
  38 +address and data bus. For some range of addresses, it's the one that
  39 +handles reads and writes.
  40 +
  41 +You've got a handful of different modules down here such as a UART receive
  42 +module and a UART transmit module. In your project, you'll have your
  43 +personal computer that has a serial port on it, and that will be hooked up
  44 +to your project, which contains the MIPS150 processor. Somehow, you've got
  45 +to be able to handle characters transmitted in each direction.
  46 +
  47 +UART
  48 +----
  49 +Common ground, TX on one side connected to RX port on other side, and vice
  50 +versa. Whole bunch more in different connectors. Basic protocol is called
  51 +RS232, common (people often refer to it by connector name: DB9, or rarely
  52 +DB25); fortunately, we've moved away from this world and use USB. We'll
  53 +talk about these other protocols later, some sync, some async. Workhorse
  54 +for long time, still all over the place.
  55 +
  56 +You're going to build the UART receiver/transmitter and MMIO module that
  57 +interfaces them. See when something's coming in from software /
  58 +hardware. Going to start out with polling; we will implement interrupts
  59 +later on in the project (for timing and serial IO on the MIPS
  60 +processor). That's really the hardcore place where software and hardware
  61 +meet. People who understand how each interface works and how to use those
  62 +optimally together are valuable and rare people.
  63 +
  64 +What you're doing in Lab 4, there's really two concepts of (1) how does
  65 +serial / UART work and (2) ready / valid handshake.
  66 +
  67 +On the MIPS side, you've got some addresses. Anything that starts with FFFF
  68 +is part of the memory-mapped region. In particular, the first four are
  69 +mapped to the UART: they are RX control, RX data, TX control, and TX data.
  70 +
  71 +When you want to send something out the UART, you write the byte -- there's
  72 +just one bit for the control and one byte for data.
  73 +
  74 +Data goes into some FSM system, and you've got an RX shift register and a
  75 +TX shift register.
  76 +
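The framing those shift registers implement is standard 8N1: one start bit (0), eight data bits LSB first, one stop bit (1). A sketch in Python for illustration (not the Verilog you'll write):

```python
# 8N1 framing: one start bit (0), eight data bits LSB first, one stop bit (1).
def tx_frame(byte):
    bits = [0]                                   # start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits += [1]                                  # stop bit
    return bits

def rx_frame(bits):
    assert bits[0] == 0 and bits[9] == 1         # check start/stop bits
    byte = 0
    for i in range(8):                           # shift the data bits back in
        byte |= bits[1 + i] << i
    return byte

assert rx_frame(tx_frame(0x41)) == 0x41          # 'A' survives the round trip
```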
  77 +There's one other piece of this, which is that inside of here, the thing
  78 +interfacing to this IO-mapped module uses this ready bit. If you have two
  79 +modules: a source and a sink (diagram from the document), the source has
  80 +some data that is sending out, tells the sink when the data is valid, and
  81 +the sink tells the source when it is ready. And there's a shared "clock"
  82 +(baud rate), and this is a synchronous interface.
  83 +
  84 +* source presents data
  85 +* source raises valid
  86 +* when ready & valid on posedge clock, both sides know the transaction was
  87 + successful.
  88 +
  89 +Whatever order this happens in, source is responsible for making sure data
  90 +is valid.
  91 +
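A toy model of that handshake (hypothetical simulation, not the lab's actual interface): a word transfers on a clock edge exactly when the source's valid and the sink's ready are both high.

```python
# Ready/valid handshake: a transaction fires on a clock edge exactly when
# valid (from the source) and ready (from the sink) are both high.
def simulate(source_data, ready_per_cycle):
    pending, received = list(source_data), []
    for ready in ready_per_cycle:
        valid = len(pending) > 0     # source asserts valid while it has data
        if valid and ready:          # both sides know the transaction happened
            received.append(pending.pop(0))
    return received

# The sink stalls (ready low) for two cycles; nothing is lost or duplicated.
assert simulate([0xCA, 0xFE, 0x42], [1, 0, 0, 1, 1, 1]) == [0xCA, 0xFE, 0x42]
```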
  92 +HDLC? Takes bytes and puts into packets, ACKs, etc.
  93 +
  94 +Talk about quartz crystals, resonators. $\pi \cdot 10^7$.
  95 +
  96 +So: before I let you go, parallel load, n bits in, serial out, etc.
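That parallel-load, serial-out shift register can be sketched like so (illustrative Python; shifting MSB first is a design choice, not the only option):

```python
# Parallel-in, serial-out (PISO) shift register: load n bits at once,
# then shift one bit out per clock tick.
class PISO:
    def __init__(self, n):
        self.n, self.reg = n, 0

    def load(self, value):
        self.reg = value & ((1 << self.n) - 1)

    def shift(self):
        bit = (self.reg >> (self.n - 1)) & 1     # serial-out is the MSB
        self.reg = (self.reg << 1) & ((1 << self.n) - 1)
        return bit

p = PISO(4)
p.load(0b1011)
assert [p.shift() for _ in range(4)] == [1, 0, 1, 1]
```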
5 fa2012/cs150/11.md
... ... @@ -0,0 +1,5 @@
  1 +CS 150: Digital Design & Computer Architecture
  2 +==============================================
  3 +September 25, 2012
  4 +------------------
  5 +
2  fa2012/cs150/3.md
... ... @@ -1,5 +1,5 @@
1 1 CS 150: Digital Design & Computer Architecture
2   -===============================================
  2 +==============================================
3 3 August 28, 2012
4 4 ---------------
5 5
2  fa2012/cs150/4.md
... ... @@ -1,5 +1,5 @@
1 1 CS 150: Digital Design & Computer Architecture
2   -===============================================
  2 +==============================================
3 3 August 30, 2012
4 4 ---------------
5 5
2  fa2012/cs150/5.md
... ... @@ -1,5 +1,5 @@
1 1 CS 150: Digital Design & Computer Architecture
2   -===============================================
  2 +==============================================
3 3 September 4, 2012
4 4 -----------------
5 5
2  fa2012/cs150/6.md
... ... @@ -1,5 +1,5 @@
1 1 CS 150: Digital Design & Computer Architecture
2   -===============================================
  2 +==============================================
3 3 September 6, 2012
4 4 -----------------
5 5
2  fa2012/cs150/7.md
... ... @@ -1,5 +1,5 @@
1 1 CS 150: Digital Design & Computer Architecture
2   -===============================================
  2 +==============================================
3 3 September 11, 2012
4 4 ------------------
5 5
2  fa2012/cs150/8.md
... ... @@ -1,5 +1,5 @@
1 1 CS 150: Digital Design & Computer Architecture
2   -===============================================
  2 +==============================================
3 3 September 13, 2012
4 4 ------------------
5 5
83 fa2012/cs150/9.md
... ... @@ -0,0 +1,83 @@
  1 +CS 150: Digital Design & Computer Architecture
  2 +==============================================
  3 +September 18, 2012
  4 +------------------
  5 +
  6 +Lab this week you are learning about chipscope. Chipscope is kinda like
  7 +what it sounds: allows you to monitor things happening in the FPGA. One of
   8 +the interesting things about Chipscope is that it's an FSM monitoring stuff
   9 +in your FPGA: it also gets compiled down, and it changes the location of
  10 +everything that goes into your chip. It can actually make your bug go away
  11 +(e.g. timing bugs).
  12 +
  13 +So. Counters. How do counters work? If I've got a 4-bit counter and I'm
  14 +counting from 0, what's going on here?
  15 +
  16 +D-ff with an inverter and enable line? This is a T-ff (toggle
  17 +flipflop). That'll get me my first bit, but my second bit is slower. $Q_1$
  18 +wants to toggle only when $Q_0$ is 1. With subsequent bits, they want to
  19 +toggle when all lower bits are 1.
  20 +
  21 +Counter with en: enable is tied to the toggle of the first bit. Counter
  22 +with ld: four input bits, four output bits. Clock. Load. Then we're going
  23 +to want to do a counter with ld, en, rst. Put in logic, etc.
  24 +
  25 +Quite common: ripple carry out (RCO), where we AND $Q[3:0]$ and feed this
  26 +into the enable of $T_4$.
  27 +
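The toggle chain can be simulated directly (a sketch, not lab code): bit $i$ toggles exactly when all lower bits are 1, which is what the cascaded T-flipflop enables implement.

```python
# 4-bit counter from toggle flip-flops: bit i toggles when all lower
# bits are 1 (q[0] is the LSB; its enable is tied high).
def tick(q):
    new = q[:]
    carry = 1                      # LSB always toggles
    for i in range(4):
        if carry:
            new[i] ^= 1
        carry = carry and q[i]     # next stage toggles only if all lower bits were 1
    return new

state = [0, 0, 0, 0]
seen = []
for _ in range(16):
    seen.append(sum(b << i for i, b in enumerate(state)))
    state = tick(state)
assert seen == list(range(16))     # counts 0..15, then wraps
```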
   28 +Ring counter (shift register with one-hot output): if reset is low, I just
   29 +shift this thing around and make a circular shift register. If high, I clear
   30 +the out bit.
  31 +
  32 +Mobius counter: just a ring counter with a feedback inverter in it. Just
  33 +going to take whatever state in there, and after n clock ticks, it inverts
  34 +itself. So you have $n$ flipflops, and you get $2n$ states.
  35 +
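A quick simulation confirms the $2n$-state claim (illustrative Python, not lab code):

```python
# Mobius (Johnson) counter: an n-bit shift register whose feedback is the
# inverted last bit. n flip-flops give 2n distinct states, not 2^n.
def johnson_states(n):
    state, seen = (0,) * n, []
    while state not in seen:
        seen.append(state)
        state = (state[-1] ^ 1,) + state[:-1]   # shift, feeding back the inverted output
    return seen

assert len(johnson_states(4)) == 8   # 4 flip-flops, 8 states
```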
  36 +And then you've got LFSRs (linear feedback shift registers). Given N
  37 +flipflops, we know that a straight up or down counter will give us $2^N$
   38 +states. Turns out that an LFSR gives you almost that (not 0). So why do
  39 +that instead of an up-counter? This can give you a PRNG. Fun times with
  40 +Galois fields.
  41 +
  42 +Various uses, seeds, high enough periods (Mersenne twisters are higher).
  43 +
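A minimal 4-bit LFSR sketch (illustrative Python; the taps come from the primitive polynomial $x^4 + x^3 + 1$, which gives a maximal-length sequence):

```python
# 4-bit Fibonacci LFSR with taps from x^4 + x^3 + 1: cycles through all
# 15 nonzero states before repeating, and never reaches the all-zero state.
def lfsr_step(s):
    fb = ((s >> 3) ^ (s >> 2)) & 1   # XOR of the two tap bits
    return ((s << 1) | fb) & 0xF

state, seen = 0b0001, set()
while state not in seen:
    seen.add(state)
    state = lfsr_step(state)
assert len(seen) == 15 and 0 not in seen   # 2^4 - 1 states, as noted above
```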
  44 +RAM
  45 +---
  46 +Remember, decoder, cell array, $2^n$ rows, $2^n$ word lines, some number of
  47 +bit lines coming out of that cell array for I/O with output-enable and
  48 +write-enable.
  49 +
  50 +When output-enable is low, D goes to high-Z. At some point, some external
  51 +device starts driving some Din (not from memory). Then I can apply a write
  52 +pulse (write strobe), which causes our data to be written into the memory
  53 +at this address location. Whatever was driving it releases, so it goes back
   54 +to high-impedance, and if we turn output-enable on again, we'll see "Din"
  55 +the cell array.
  56 +
  57 +During the write pulse, we need Din stable and address stable. We have a
  58 +pulse because we don't want to break things. Bad things happen.
  59 +
   60 +Notice: no clock anywhere. Your FPGA (in particular, the block RAM on the
   61 +ML505) is a little different in that it has registered inputs (addr &
   62 +data). First off, it's very configurable: all sorts of ways you can set
   63 +this up. Addr in particular goes into a register, and from there into a
   64 +decoder before it reaches the cell array. What comes out of the cell array
   65 +is a little different, too: there's a data-in line that goes into a
   66 +register, and a separate data-out that can be configured in a whole bunch
   67 +of different ways.
  69 +
  70 +The important thing is that you can apply your address to those inputs, and
  71 +it doesn't show up until the rising edge of the clock. There's the option
  72 +of having either registered or non-registered output (non-registered for
  73 +this lab).
  74 +
  75 +So now we've got an ALU and RAM. And so we can build some simple
  76 +datapaths. For sure you're going to see on the final (and most likely the
  77 +midterm) problems like "given a 16-bit ALU and a 1024x16 sync SRAM, design
  78 +a system to find the largest unsigned int in the SRAM."
  79 +
  80 +Demonstration of clock cycles, etc. So what's our FSM look like? Either
  81 +LOAD or HOLD.
  82 +
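A cycle-level sketch of that max-search datapath (illustrative Python that abstracts away the one-cycle read latency of the synchronous SRAM):

```python
# Max-search datapath: an address counter, a max register with a LOAD/HOLD
# enable, and an ALU compare deciding which, one SRAM read per cycle.
def find_max(sram):
    max_reg = 0                      # unsigned, so 0 is a safe initial value
    for addr in range(len(sram)):    # address counter increments each cycle
        data = sram[addr]            # synchronous SRAM read
        if data > max_reg:           # ALU compare drives the enable
            max_reg = data           # LOAD
        # else: HOLD
    return max_reg

assert find_max([3, 17, 5, 17, 2]) == 17
```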
  83 +On homework, did not say sync SRAM. Will probably change.
205 fa2012/cs150/cs150.md
@@ -197,7 +197,7 @@ stuff.
197 197 <a name='3'></a>
198 198
199 199 CS 150: Digital Design & Computer Architecture
200   -===============================================
  200 +==============================================
201 201 August 28, 2012
202 202 ---------------
203 203
@@ -297,7 +297,7 @@ and a maxterm is a sum containing every input variable or its complement.
297 297 <a name='4'></a>
298 298
299 299 CS 150: Digital Design & Computer Architecture
300   -===============================================
  300 +==============================================
301 301 August 30, 2012
302 302 ---------------
303 303
@@ -355,7 +355,7 @@ de Morgan's law: "bubble-pushing".
355 355 <a name='5'></a>
356 356
357 357 CS 150: Digital Design & Computer Architecture
358   -===============================================
  358 +==============================================
359 359 September 4, 2012
360 360 -----------------
361 361
@@ -467,7 +467,7 @@ can make FSMs.
467 467 <a name='6'></a>
468 468
469 469 CS 150: Digital Design & Computer Architecture
470   -===============================================
  470 +==============================================
471 471 September 6, 2012
472 472 -----------------
473 473
@@ -539,7 +539,7 @@ Next time: more MIPS, memory.
539 539 <a name='7'></a>
540 540