# steveWang/Notes

### cs150.html
sums. (Section 2.7). Based on the combining theorem, which says that $XA + X\bar{A} = X$. Ideally, every row should have just a single value changing. So, I use Gray codes (e.g. 00, 01, 11, 10). Graphical representation!
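The single-bit-change property is easy to see generatively. As a sketch (Python, with a hypothetical `gray_codes` helper; the notes give no code), the standard reflect-and-prefix construction produces a sequence where adjacent codes differ in exactly one bit:

```python
def gray_codes(n):
    """Return the n-bit Gray code sequence via reflect-and-prefix."""
    codes = [""]
    for _ in range(n):
        # Prefix the current list with 0, its reflection with 1.
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

print(gray_codes(2))  # ['00', '01', '11', '10']
```

Adjacent entries (including the wraparound) differ in one bit, which is exactly why Gray-coded row/column labels make adjacent K-map cells differ in a single variable.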


CS 150: Digital Design & Computer Architecture


September 18, 2012


Lab this week you are learning about Chipscope. Chipscope is kind of like what it sounds: it allows you to monitor things happening in the FPGA. One of the interesting things about Chipscope is that it's an FSM monitoring stuff in your FPGA; it also gets compiled down, and it changes the location of everything that goes into your chip. It can actually make your bug go away (e.g. timing bugs).

So. Counters. How do counters work? If I've got a 4-bit counter and I'm counting from 0, what's going on here?

A D-ff with an inverter and enable line? This is a T-ff (toggle flipflop). That'll get me my first bit, but my second bit is slower: $Q_1$ wants to toggle only when $Q_0$ is 1. Subsequent bits want to toggle when all lower bits are 1.

Counter with en: enable is tied to the toggle of the first bit. Counter with ld: four input bits, four output bits. Clock. Load. Then we're going to want to do a counter with ld, en, rst. Put in logic, etc.

Quite common: ripple carry out (RCO), where we AND $Q[3:0]$ and feed this into the enable of $T_4$.
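That toggle rule and the RCO can be modeled cycle by cycle. A minimal sketch (a Python stand-in for the hardware; `tick` and the list-of-bits representation are my own assumptions, not from the lab):

```python
def tick(q):
    """One clock edge of a 4-bit counter built from T flipflops.

    q is a list of bits, LSB first. Bit i toggles exactly when all
    lower bits are 1 (vacuously true for bit 0, so it always toggles)."""
    new = q[:]
    for i in range(4):
        if all(q[j] == 1 for j in range(i)):
            new[i] ^= 1
    return new

q = [0, 0, 0, 0]
for _ in range(3):
    q = tick(q)
print(q)            # [1, 1, 0, 0], i.e. the count 3, LSB first
rco = all(q)        # ripple carry out: AND of Q[3:0], high only at 15
```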


Ring counter (shift register with one-hot out): if reset is low, I just shift this thing around and make a circular shift register. If high, I clear the out bit.

Mobius counter: just a ring counter with a feedback inverter in it. It takes whatever state is in there, and after $n$ clock ticks, it inverts itself. So you have $n$ flipflops, and you get $2n$ states.
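Both counters can be sketched as shift updates (illustrative Python, assuming a 3-bit register held as a list; not from the lecture):

```python
def ring_step(state):
    """Ring counter: circular shift, cycling through n one-hot states."""
    return [state[-1]] + state[:-1]

def mobius_step(state):
    """Mobius (Johnson) counter: same shift, but the fed-back bit is inverted."""
    return [1 - state[-1]] + state[:-1]

s = [0, 0, 0]
seen = set()
while tuple(s) not in seen:
    seen.add(tuple(s))
    s = mobius_step(s)
print(len(seen))  # 6 states: 2n for n = 3 flipflops
```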


And then you've got LFSRs (linear feedback shift registers). Given $N$ flipflops, we know that a straight up or down counter will give us $2^N$ states. It turns out that an LFSR gives you almost that (every state but 0). So why use that instead of an up-counter? This can give you a PRNG. Fun times with Galois fields.
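As a concrete sketch (Python; the tap choice is mine, taken from the primitive polynomial $x^4 + x^3 + 1$, not something given in lecture), a 4-bit Fibonacci LFSR visits all $2^4 - 1$ nonzero states before repeating:

```python
def lfsr_step(state):
    """4-bit Fibonacci LFSR: shift left, feeding back the XOR of bits 3 and 2
    (taps from the primitive polynomial x^4 + x^3 + 1)."""
    fb = ((state >> 3) ^ (state >> 2)) & 1
    return ((state << 1) | fb) & 0xF

states = []
s = 0x1
while s not in states:
    states.append(s)
    s = lfsr_step(s)
print(len(states))  # 15 = 2^4 - 1: every state except 0
```

The all-zero state maps to itself, which is why the cycle covers $2^N - 1$ states rather than $2^N$.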


Various uses, seeds, high enough periods (Mersenne twisters are higher).


RAM


Remember: decoder, cell array, $2^n$ rows, $2^n$ word lines, some number of bit lines coming out of that cell array for I/O, with output-enable and write-enable.

When output-enable is low, D goes to high-Z. At some point, some external device starts driving some Din (not from memory). Then I can apply a write pulse (write strobe), which causes our data to be written into the memory at this address location. Whatever was driving it releases, so it goes back to high-impedance, and if we turn output-enable on again, we'll see "Din" from the cell array.

During the write pulse, we need Din stable and the address stable. We use a pulse because we don't want to break things; bad things happen otherwise.

Notice: no clock anywhere. Your FPGA (in particular, the block RAM on the ML505) is a little different in that it has registered inputs (addr & data). First off, it's very configurable; there are all sorts of ways you can set it up. Addr in particular goes into a register, comes out of there, and goes into a decoder before it reaches the cell array. What comes out of the cell array is a little different too: there's a data-in line that goes into a register, and a separate data-out that can be configured in a whole bunch of different ways so that you can do a bunch of different things.

The important thing is that you can apply your address to those inputs, and it doesn't show up until the rising edge of the clock. There's the option of having either registered or non-registered output (non-registered for this lab).

So now we've got an ALU and RAM, and so we can build some simple datapaths. For sure you're going to see on the final (and most likely the midterm) problems like "given a 16-bit ALU and a 1024x16 sync SRAM, design a system to find the largest unsigned int in the SRAM."

Demonstration of clock cycles, etc. So what does our FSM look like? Either LOAD or HOLD.
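A cycle-level sketch of that exam problem (a Python stand-in; `find_max` and the one-read-per-cycle framing are my own assumptions): a register holds the running max, and the FSM either LOADs it or HOLDs it based on the ALU's unsigned compare:

```python
def find_max(sram):
    """Walk every address once; LOAD the max register when the ALU compare
    says the freshly read word is larger, otherwise HOLD."""
    max_reg = 0
    for addr in range(len(sram)):
        data = sram[addr]      # sync SRAM read for this cycle
        if data > max_reg:     # ALU unsigned compare drives the FSM
            max_reg = data     # LOAD
        # else: HOLD -- the register keeps its value
    return max_reg

print(find_max([3, 17, 5, 9]))  # 17
```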


On homework, did not say sync SRAM. Will probably change.


CS 150: Digital Design & Computer Architecture


September 20, 2012


Non-overlapping clocks. n-phase means that you've got $n$ different outputs, with at most one high at any time. There is guaranteed dead time between when one goes low and the next goes high.

K-maps


Finding minimal sum-of-products and product-of-sums expressions for functions. The on-set is all the ones of a function; an implicant is one or more circled ones in the on-set; a minterm is the smallest implicant you can have, and implicants go up by powers of two in the number of ones they contain; a prime implicant can't be combined with another (by circling); an essential prime implicant is a prime implicant that contains at least one 1 not in any other prime implicant. A cover is any collection of implicants that contains all of the ones in the on-set, and a minimal cover is one made up of essential prime implicants and the minimum number of other implicants.
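The circling operation is the combining theorem again: $XA + X\bar{A} = X$ merges two implicants that differ in exactly one variable. A small sketch (Python; `combine` is a hypothetical helper, essentially the first merge step of Quine-McCluskey, not something given in lecture):

```python
def combine(m1, m2):
    """Merge two implicants written over '0', '1', '-' (don't-care).

    If they differ in exactly one position, that variable drops out
    (XA + XA' = X) and is replaced by '-'; otherwise return None."""
    diff = [i for i, (a, b) in enumerate(zip(m1, m2)) if a != b]
    if len(diff) != 1:
        return None
    i = diff[0]
    return m1[:i] + "-" + m1[i + 1:]

print(combine("01", "11"))  # '-1'  : A'B + AB = B
print(combine("00", "11"))  # None  : not adjacent on the K-map
```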


Hazards vs. glitches. Glitches are when timing issues result in dips (or spikes) in the output; hazards are when they might happen. Completely irrelevant in synchronous logic.

Project


3-stage pipeline MIPS150 processor. Serial port, graphics accelerator. If we look at the datapath elements, the storage elements, you've got your program counter, your instruction memory, register file, and data memory. Figure 7.1 from the book. If you mix that in with figure 8.28, which talks about MMIO, that data memory is hooked up to an address and data bus, and if you want to talk to a serial port on a MIPS processor (or an ARM processor, or something like that), you don't address a particular port (it's not like x86). Most ports are memory-mapped. You've actually got an MMIO module that is also hooked up to the address and data bus. For some range of addresses, it's the one that handles reads and writes.

You've got a handful of different modules down here such as a UART receive module and a UART transmit module. In your project, you'll have your personal computer that has a serial port on it, and that will be hooked up to your project, which contains the MIPS150 processor. Somehow, you've got to be able to handle characters transmitted in each direction.

UART


Common ground, TX on one side connected to the RX port on the other side, and vice versa. There's a whole bunch more in different connectors. The basic protocol is called RS232, and it's common (people often refer to it by connector name: DB9, or rarely DB25); fortunately, we've moved away from this world and use USB. We'll talk about these other protocols later, some sync, some async. It was the workhorse for a long time and is still all over the place.

You're going to build the UART receiver/transmitter and the MMIO module that interfaces them, and see when something's coming in from software / hardware. We're going to start out with polling; we will implement interrupts later on in the project (for timing and serial IO on the MIPS processor). That's really the hardcore place where software and hardware meet. People who understand how each interface works and how to use them optimally together are valuable and rare.

In what you're doing in Lab 4, there are really two concepts: (1) how does serial / UART work and (2) the ready / valid handshake.

On the MIPS side, you've got some addresses. Anything that starts with FFFF is part of the memory-mapped region. In particular, the first four are mapped to the UART: they are RX control, RX data, TX control, and TX data.

When you want to send something out the UART, you write the byte -- there's just one bit for the control and one byte for data.

Data goes into some FSM system, and you've got an RX shift register and a TX shift register.

There's one other piece of this, which is that the thing interfacing to this IO-mapped module uses this ready bit. If you have two modules, a source and a sink (diagram from the document), the source has some data that it is sending out and tells the sink when the data is valid, and the sink tells the source when it is ready. And there's a shared "clock" (baud rate), so this is a synchronous interface.

• source presents data
• source raises valid
• when ready & valid on posedge clock, both sides know the transaction was successful

Whatever order this happens in, the source is responsible for making sure the data is valid.
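The rule can be checked cycle by cycle. An illustrative Python sketch (the sequences and the `simulate` name are made up, not from the document): a transfer completes only on a clock edge where ready and valid are both high:

```python
def simulate(valid_seq, ready_seq):
    """Return the cycles on which a ready/valid transaction completes."""
    transfers = []
    for cycle, (v, r) in enumerate(zip(valid_seq, ready_seq)):
        if v and r:
            transfers.append(cycle)
    return transfers

# Source holds valid from cycle 1 on; sink is only ready on cycles 3 and 4.
print(simulate([0, 1, 1, 1, 1], [1, 0, 0, 1, 1]))  # [3, 4]
```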


HDLC? Takes bytes and puts them into packets, ACKs, etc.

Talk about quartz crystals, resonators. $\pi \cdot 10^7$.


So: before I let you go, parallel load, n bits in, serial out, etc.


CS 150: Digital Design & Computer Architecture


September 25, 2012

### cs_h195.html

CS H195: Ethics with Harvey

September 17, 2012


Lawsuit to get records about NSA's surveillance information.


Video games affecting people, evidently.


Government subpoenaed Twitter to give up people's tweets.


Records can be subpoenaed in a court case, etc. We'll see how this plays out. In today's Daily Cal: UCB suing big companies. Universities do research, etc. Back in the day, core memory meant people paid money to IBM and MIT. Berkeley holds a bunch of patents. Non-software seems reasonable.

Important point: the burst of genius is very rarely true. Enabling technologies have reached the point of making things feasible. The usual story about inventions. Flash bulb in a camera, single-use: before the sustainable light bulb. The steam engine. Some inventions aren't like that; some really do just come to somebody (velcro, xerography), when nobody else was working on it. More often, everyone is thinking about this stuff.

IP. A patent is the right to develop an invention, to produce things dependent on an invention. Copyright is not about invention; it's about creative and artistic works. And there, if you have an idea and write about it, other people are allowed to use your ideas, but not your words. Trademark, you know what it is; you can register one, and people are not allowed to use it in ways that might confuse people. You can in principle make a vacuum cleaner called "Time". How close do things have to be to raise a lawsuit? There was a lawsuit about Apple Computer vs Apple Records. Apple Computer later did enter the music business, which caused a later round of battling.

Personal likeness: I can't take a picture of you and publish it, with certain exceptions. Most important for famous people. Funny rules: newsworthiness, and news photographers are allowed to take pictures of newsworthy people.

Trade secrets: if a company has secrets, and you are a competing company, you may not send a spy to extract these secrets.

House ownership: there are houses that people have owned for millennia. Patents and copyrights are not like that: they are not a right. Those things are bargains between creators and society; their purpose to society is that these eventually belong to the public. One of the readings talks about a different history of patents, quoting Italian legal scholars; if correct, patents were supposed to be permanent ownership. Why might it be good to society? There used to be people who made new inventions. Guilds. Hard to join, and you would be a slave for a while. The master would teach the apprentice the trade, and the advantage was that it reduced competition. The trouble was that there is a long history of things people used to be able to do that we can't anymore. Textbook example: Stradivarius violins.

Nonetheless, nobody knows how Stradivarius made violins. Stories about how to make paints of particular colors. This is what the patent system is trying to avoid: describe how the invention works so someone in the field can create it. By making this disclosure, you are given a limited-term exclusive right to make these.

The thing is, sooner or later, your technology is going to be obsolete. It's to your advantage to have a clear legal statement.

Patent treaties. It used to be that if you invented something important, you'd hire a bunch of lawyers.

Until recently, software was not patentable. AT&T wanted to patent the setuid bit. In those days, you could not patent any math or software or algorithm.

Patents stifling innovation in the field: when you file a patent application, let's say the patent is denied. You would like to fall back on trade secrecy. Patent applications are secret until approved. Startups doomed. It wouldn't matter if the term were short compared to the innovation cycle of the industry.

Another thing in the Constitution is that treaties take precedence over domestic laws.

So let's talk about copyrights! So. Nobody says let's do away with copyright altogether. Copyright (at its worst) is less socially harmful than patents because it's so specific. Again, copyrights are a bargain. It started in Britain between the King and the printers. Printers wanted the exclusive right to things they printed; the King wanted printers to be censors. Originally it was not authors who had copyright, but the publisher. Often creators of rights will sell the rights to publishers.

This is where computers come in. How to sell to the world? You used to need a big company with facilities to create copies and distribute them widely. Self-publishing makes work available to everyone. Important: it is rarely the author who complains about copyrights; it's usually publishers.

There's always been piracy, but it was limited historically by analog media losing information when copying.

Stallman actually invented a system that has 5 different categories of work. Even Stallman doesn't say to ditch copyright. Hardly any musicians make any money selling music, because their contracts say that they make a certain percentage of net proceeds. The way musicians survive is concerts, and ironically, selling concert CDs. Stallman says to make music players have a money button and send money directly to the musician.

CS H195: Ethics with Harvey


September 24, 2012


Vastly oversimplified picture of moral philosophy. Leaves out a lot.


So Socrates says famously "to know the good is to desire the good", by which he means that if you really understand what's in your own interest, it's going to turn out to be the right thing. Counter-intuitive, since we've probably encountered situations in which we think what's good for us isn't good for the rest of the community.

They ended up convicting Socrates, and he was offered the choice between exile from Athens and death -- he chose death because he felt that he could not exist outside of his own community. His most famous student was Plato, who started an Academy (Socrates just wandered around, living hand to mouth) and took in students (one of whom was Aristotle). If you're scientists or engineers, you've been taught to make fun of Aristotle, since he said that heavier objects fall faster than light objects, and famously, Galileo took two objects, dropped them, and they hit the ground at the same time.

It's true that some of the things Aristotle said about the physical world have turned out not to be right. But it's important to understand that, in terms of the physical world, he did not have the modern idea of trying to make a universal theory that explained everything.

Objects falling in an atmosphere with friction behaving differently from planets orbiting the sun? Perfectly fine with Aristotle.

One of the things Aristotle knew? When you see a plate of donuts, you know perfectly well that it's just carbs and fat and you shouldn't eat them, but you do anyway. Socrates explains that as "you don't really know through and through that it is bad for you", and Aristotle doesn't like that explanation. Knowing what to do and actually doing it are two different things. He took that in two directions: the action syllogism (transitivity), extended so that the conclusion of the syllogism can be an action. That part is not important to us; what is important is that he introduces the idea of virtues. A virtue is not an understanding of what's right, but a habit -- like a good habit you get into.

Aristotle lists a bunch of virtues, and in all cases he describes a virtue as a midpoint between two extremes (e.g. courage between cowardice and foolhardiness, or honesty as a middle ground between dishonesty and saying too much).

Better have good habits, since you don't have time in real crises to think. So Aristotle's big on habits. And he says that you learn the virtues through being a member of a community and through the role you play in that community. He lived in a time when people inherited roles a lot. The argument goes a little like this. What does it mean to be a good person? Hard question. What does it mean to be a good carpenter? Much easier. A good carpenter builds stuff that holds together and looks nice, etc. What are the virtues that lead to being a good carpenter? Also easy: patience, care, measurement, honesty, etc. Much easier than what's a good person.

Aristotle's going to say that the virtues of being a good person are precisely the virtues you learn in social practices from people older than you who are masters of the practice. One remnant of that in modern society is martial arts instruction. When you go to a martial arts school and say you want to learn, one of the first things you learn is respect for your instructor, and you're supposed to live your life in a disciplined way; you're not learning skills so much as habits. That's like what Aristotle would say about any practice. There's not so much of that today: when you're learning to be a computer scientist, there isn't a lot of instruction in "here are the habits that make you a (morally) good computer scientist".

Kant was not a communitarian; he was more of a "we can figure out the right answer to ethical dilemmas" thinker. He has an axiom system, just like in mathematics: with a small number of axioms, you can prove things. He claims just one axiom, which he describes in multiple ways.

Categorical imperative number one: treat people as ends, not means. This is the grown-up version of the golden rule. Contracts are all right as long as both parties have their needs met and the exchange is not too unequal.

Second version: universalizability. An action is good if it is universalizable. That means: if everybody did it, would it work? The textbook example is "you shouldn't tell lies". The only reason telling lies works is because people usually tell the truth, and so people are predisposed to thinking that a statement is usually true. If everyone told lies, then we'd be predisposed to disbelieve statements. Lying would no longer be effective.

There's a third one, which BH can never remember, that is much less important. Kant goes on to prove theorems to resolve moral dilemmas.

Problem for Kant: A runs past you into the house. B comes up with a gun and asks you where A is. Kant suggests something along the lines of misleading B.

Axiomatic: resolve ethical problems through logic and proving what you want to do. Very popular among engineers, mainly through the work of Rawls, who talks about the veil of ignorance. You have to imagine yourself, looking at life on Earth, not knowing in what social role you're going to be born. Rawls thinks that from this perspective, you have to root for the underdog when situations come up, because in any particular thing that comes up, the harm to the rich person is going to be less than the gains of the poor person (in terms of total wealth, total needs). You're going to worry about being on the side of the underdog, etc. There's more to Rawls: taking into account how things affect all the different constituencies.

Other descendants of Plato are the utilitarians. This is one of the reasons it's important for you to understand this chart: when you don't think about it too hard, you use utilitarian principles, which is sometimes bad. Utilitarians talk about the greatest good for the greatest number.

Back to something from this class: what if I illegally download some movie? Is that okay? How much do I benefit, and how much is the movie-maker harmed? Not from principled arguments, which is what Kant wants you to do, but from nuts and bolts: who benefits how much, each way.

Putting that in a different fashion, Kantians are interested in what motivates your action, why you did it. Utilitarians are interested in the result of your action. One thing that makes the utilitarian approach hard is that you have to guess what will probably happen.

Now I want to talk to you about MacIntyre. Gave you a lot of reading, probably the hardest reading in the course. Talks like a philosopher. Uses "desert" as the noun of "deserve" (what you deserve). MacIntyre was life-changing for BH when he came across him; he's passing it on to you as a result.

He starts by saying to imagine an aftermath in which science is blamed and destroyed. A thousand years later, some people digging through the remains of our culture read about this word "science", and it's all about understanding how the physical world works, and they want to revive this practice. They dig up books by scientists, read and memorize bits of them, analyze, have discussions. The people who do this call themselves scientists because they're studying science.

We, from our perspective, would say that isn't science at all -- you don't just engage with books, but rather engage with the physical world through experiments. Those imagined people a millennium from now have lost the practice. They think they're following a practice, but they have no idea what it's like. MacIntyre argues this is us with ethics.

The equivalent of WW3, according to MacIntyre, is Kant. Kant really, more than anyone else, brought into being the modern era. Why? Because in times prior to Kant, a lot of arguments, not only about ethics but also about the physical world, were resolved by religious authority. Decisions were made based on someone's interpretation of the Bible, e.g.

Kant claims to be a Christian, but he thinks the way we understand God's will is by applying the categorical imperative. Instead of asking a priest what to do, we reason it out. We don't ask authorities; we work it out. Also, he starts this business of ethical dilemmas. Everybody in the top half of the chart talks in terms of the good life. Even Socrates, who thinks you can know what to do, talks about the good life, too. So ethics is not about "what do I do in this situation right now", but rather the entirety of one's life and what it means to live a good life.

Kant and Mill: no sense of life as a flow; rather, moments of decisions. What MacIntyre calls the ethical equivalent of WW3: at that point, we lost the thread, since we stopped talking about the good life. Now, it wasn't an unmitigated disaster, since it gives us the modern liberal society -- not in the American sense of voting for Democrats, but in the sense that your life goals are up to you as an individual, and the role of society is to build infrastructure and to avoid getting in people's way, i.e. stopping people from doing things. I can, say, have some sexual practice different from yours. That was a long time coming. Now, in our particular culture, the only thing that's bad is having sex with children, as far as I can tell -- as long as it doesn't involve you messing up someone else's life, e.g. rape. As long as it involves two (or more?) consenting adults, that's okay.

MacIntyre says that there are things that came up with Kant such that we can't just turn back to being Aristotelian. The people who lived the good life were male Athenian citizens. They had wives who weren't eligible, and they had slaves who did most of the grunt work. And so male Athenian citizens could spend their time walking around chatting with Socrates because they were supported by slavery. And nobody wants to go back to that. There's no real way to go back to being Aristotelian without giving up modern civil rights.

So. One of the things I really like about MacIntyre is the example of wanting to teach a child how to play chess, but he's not particularly interested. He is, however, interested in candy. You say: every time you play with me, I'll give you a piece of candy; if you win, two pieces. I will play in a way that's difficult but possible to beat me. So, MacIntyre says, this child is now motivated to play and to play well. But he's also motivated to cheat, if he can get away with it. So let's say this arrangement goes on for some time, and the kid gets better at it. What you hope is that the child reaches a point where the game is valuable in itself: he or she sees playing chess as rewarding (as an intellectual challenge). When that happens, cheating becomes self-defeating.

While the child is motivated by external goods (rewards, money, fame, whatever), the child is not part of the community of practice. But once the game becomes important (the internal benefits motivate him), then he does feel like part of the community. There's a huge chess community with complicated infrastructure with ratings, etc. And that's a community with a practice, and it has virtues (some of which are unique to chess, but maybe not -- e.g. planning ahead). Honesty, of course; patience; personal improvement.

And the same is true of most things that human beings do. Not everything: MacIntyre raises the example of advertising. What are the virtues of this practice? Well, appealing to people in ways that they don't really see; suggesting things that aren't quite true without saying them. He lists several virtues that advertising people have, and these virtues don't generalize. Not part of being a good person; not even compatible with being a good person. So it's different from the virtues of normal practices.

Having advertising writers is one of the ways in which MacIntyre thinks we've just lost the thread. The reason we have them is that we hold up in our society the value of furthering your own ambition and getting rich -- not getting rich by doing something that's good anyway, but just getting rich. That's an external motivation rather than an internal one.

We talk about individuals pursuing their own ends. We glorify -- take as an integral part of our society -- individuals pursuing their own ends. In a modern understanding of ethics, you approach each new situation as if you've never done anything. You don't learn from experience; you learn from rules. The result may be the same for each intermediate situation, but it leads to you thinking differently. You don't think about building good habits in this context.

A lot of you probably exercise (unlike me). Maybe you do it because it's fun, but maybe you also do it because it only gets harder as you get older, and you should get in the habit to keep it up. In that area, you get into habits. But for writing computer programs, we tell you about rules (don't have concurrency violations), and I guess implicitly, we say that taking 61B is good for you because you learn to write bigger programs. Still true -- still a practice with virtues.

Two things. First, that sort of professional standard of work is a pretty narrow ethical issue; they don't teach you to worry about the privacy implications for third parties. Also, when people say they have an ethical dilemma, they think about it as a decision. A communitarian would reject all that ethical-dilemma stuff. Dilemmas will have bad outcomes regardless. Consider Greek tragedies. When Oedipus finds himself married to his mother, it's game over: a whole series of bad things happen to him, and there's not much he can do about it on an incident-by-incident basis. The problem is a fatal flaw in his character early on (as well as some ignorance), and no system of ethics is going to lead Oedipus out of this trap. What you have to do is try not to get into traps, and you do that through prudence and honesty and whatnot.

Classic dilemma: Heinz is a guy whose wife has a fatal disease that can be cured by an expensive drug, but Heinz is poor. So he goes to the druggist and says that he can't afford to pay for this drug, but his wife is going to die without it; the druggist says no. So Heinz is considering breaking into the drugstore at night and stealing the drug so his wife can live. What should he do and why? According to the literature, there's no right answer. What matters is your reason.

I'm going to get this wrong, but it's something like this. Stage one: your immediate needs are what matter. Yes, he should steal it, because it's his wife; or no, he shouldn't steal it, because he would go to prison. Stage two: something like worrying about consequences to individuals. It might hurt the druggist or might hurt his wife. Stage three: something like "well, I have a closer relationship to my wife than to the druggist; I care more about my wife, so I should steal it". Stage four: it's against the law, and I shouldn't break the law. Stage five: like stage three, generalized to the larger community: how much will it hurt my wife not to get the drug? A lot. How much will it hurt the druggist if I steal it? Some money. Stage six is based not on the laws of the community, but rather on the standards of the community. Odd-numbered stages are about specific people. Even-numbered stages are about society and rules (from "punishment if I do it" to "it's the law" to "it's what people expect of me").

Right now I'm talking about the literature of moral psychology: people go through these stages (different ways of thinking). The question posed is not "how do people behave", but rather "how should people behave".

This is modern ethical reasoning. Take some situation that has no right answer, and split hairs about finding a right answer somehow.

Talk about flying: a checklist for novices. Instructors don't use this list: eventually, you get to where you're looking at the entire dashboard at once, and things that aren't right jump out at you.

Another example: take a bunch of chess pieces, put them on the board, get someone to look at it for a minute, take the pieces away, and ask the person to reconstruct the board position. Non-chess players are terrible (unsurprisingly); chess grandmasters can do it if the position came out of a real game; if you place the pieces randomly, they're just as bad as the rest of us. They're not looking at individual pieces; they're looking at the board holistically (clusters of pieces that interact with each other).

The relevance of this to ethics: we don't always know why we do things. It's very rare that we have the luxury to figure out either what the categorical imperative tells us or what the utilitarian approach recommends. Usually we just do something.

BH, with his weaknesses, would be stronger if his education had been less about thinking things through and more about doing the right thing.

Our moral training is full of "Shalt Not"s. There's a lot more in the Bible about what not to do than what to do or how to live the good life (that part of the Bible -- it gets better). We also have these laws. They hardly ever say you have to do something (aside from paying taxes); they mostly say what you can't do. They never say how to live the good life. BH thinks that serves us ill. We have to make decisions. Often, what you do is different from what you say you should do.

### ee221a.html
necessarily in the space. Example: any continued fraction.

To show (1), we'll show that this sequence $\{x_m\}$ that we constructed is a Cauchy sequence in a Banach space. Interestingly, it matters what norm you choose.

EE 221A: Linear System Theory


September 18, 2012


Today:

• proof of the existence and uniqueness theorem
• [ if time ] introduction to dynamical systems

First couple of weeks of review to build up basic concepts that we'll be
drawing upon throughout the course. Either today or Thursday we will launch
into linear system theory.

We're going to recall where we were last time. We had the fundamental
theorem of differential equations, which said the following: if we have a
differential equation, $\dot{x} = f(x,t)$, with initial condition $x(t_0) =
x_0$, where $x(t) \in \Re^n$, etc., if $f( \cdot , t)$ is Lipschitz
continuous, and $f(x, \cdot )$ is piecewise continuous, then there exists a
unique solution to the differential equation / initial condition pair (some
function $\phi(t)$) wherever you can take the derivative (it may not be
differentiable everywhere: it loses differentiability at the points where
discontinuities exist).

We spent quite a lot of time discussing Lipschitz continuity. The job is
usually to test both conditions; the first one requires work. We described
a popular candidate function by looking at the mean value theorem and
applying it to $f$: a norm of the Jacobian provides a candidate Lipschitz
function, if it works.

We also described local Lipschitz continuity, and often, when using a norm
of the Jacobian, that's fairly easy to show.

Important point to recall: a norm of the Jacobian of $f$ provides a
candidate Lipschitz function.

Another important thing to say here is that we can use any norm we want, so
we can be creative in our choice of norm when looking for a better bound.

We started our proof last time, and we talked a little about the structure
of the proof. We are going to proceed by constructing a sequence of
functions, then show (1) that it converges to a solution, then show (2)
that it is unique.

Proof of Existence

We are going to construct this sequence of functions as follows:
$x_{m+1}(t) = x_0 + \int_{t_0}^t f(x_m(\tau), \tau) d\tau$. Here we're
dealing with an arbitrary interval from $t_1$ to $t_2$, with $t_0 \in [t_1,
t_2]$. We want to show that this sequence is a Cauchy sequence, and we're
going to rely on our knowledge that the space these functions are defined
in is a Banach space (hence this sequence converges to something in the
space).

We have to put a norm on this space of functions, so we'll use the infinity
norm. Not going to prove it, but rather state that it's a Banach space. If
we show that this is a Cauchy sequence, then the limit of that Cauchy
sequence exists in the space. The reason that's interesting is that it's
this limit that provides a candidate solution for this differential
equation.

We will then prove that this limit satisfies the DE/IC pair. That is
adequate to show existence. We'll then go on to prove uniqueness.

Our immediate goal is to show that this sequence is Cauchy, i.e. we should
show that $\mag{x_{m+p} - x_m} \to 0$ as $m$ gets large, uniformly in $p$.

First let us look at the difference between $x_{m+1}$ and $x_m$. These are
just functions of time, and we can compute this: $\mag{x_{m+1} - x_m} =
\mag{\int_{t_0}^t (f(x_m, \tau) - f(x_{m-1}, \tau)) d\tau}$. Use the fact
that $f$ is Lipschitz continuous, and so this is $\le \int_{t_0}^t
k(\tau)\mag{x_m(\tau) - x_{m-1}(\tau)} d\tau$. The Lipschitz function is
piecewise continuous, so it has a supremum on this interval. Let $\bar{k}$
be the supremum of $k$ over the whole interval $[t_1, t_2]$. This means
that we can take this inequality and rewrite it as $\mag{x_{m+1} - x_m} \le
\bar{k} \int_{t_0}^t \mag{x_m(\tau) - x_{m-1}(\tau)} d\tau$. Now we have a
bound that relates the distance between $x_{m+1}$ and $x_m$ to the distance
between $x_m$ and $x_{m-1}$: we can relate the distance between two
subsequent elements to earlier distances by induction.

Let us do two things: sort out the integral on the right-hand side, then
look at arbitrary elements beyond an index.

We know that $x_1(t) = x_0 + \int_{t_0}^t f(x_0, \tau) d\tau$, and that
$\mag{x_1 - x_0} \le \int_{t_0}^{t} \mag{f(x_0, \tau)} d\tau \le
\int_{t_1}^{t_2} \mag{f(x_0, \tau)} d\tau \defequals M$. From the above
inequalities, $\mag{x_2 - x_1} \le M \bar{k}\abs{t - t_0}$. Now I can look
at general bounds: $\mag{x_3 - x_2} \le \frac{M\bar{k}^2 \abs{t -
t_0}^2}{2!}$. In general, $\mag{x_{m+1} - x_m} \le
\frac{M\parens{\bar{k} \abs{t - t_0}}^m}{m!}$.
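
As a sanity check on these bounds, here is a numerical sketch of the Picard
iteration (my own illustration, not from lecture) for $\dot{x} = x$, $x(0)
= 1$ on $[0,1]$, whose true solution is $e^t$; the sup-norm gap between
successive iterates shrinks factorially, exactly as the bound predicts:

```python
# Picard iteration for x' = x, x(0) = 1 on [0, 1] (a sketch; the true
# solution is exp(t)).  Each iterate x_{m+1}(t) = x0 + int_0^t x_m(s) ds
# is computed on a uniform grid with the trapezoid rule.
import math

N = 1000                          # grid points
ts = [i / N for i in range(N + 1)]

def picard_step(x):
    """One Picard iterate: 1 + integral of f(x) = x from 0 to t."""
    out = [1.0]
    acc = 0.0
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        acc += 0.5 * (x[i] + x[i - 1]) * dt   # trapezoid rule
        out.append(1.0 + acc)
    return out

x = [1.0] * (N + 1)               # x_0(t) = x0, the constant initial guess
sup_diffs = []
for m in range(8):
    x_next = picard_step(x)
    sup_diffs.append(max(abs(a - b) for a, b in zip(x_next, x)))
    x = x_next

# sup-norm gaps shrink like (kT)^m / m!, so the sequence is Cauchy
assert all(d2 < d1 for d1, d2 in zip(sup_diffs, sup_diffs[1:]))
# after 8 iterates we are already close to e at t = 1
assert abs(x[-1] - math.e) < 1e-3
```

Here the first few gaps are exactly $1, 1/2!, 1/3!, \dots$, matching
$M(\bar{k}\abs{t - t_0})^m / m!$ with $M = \bar{k} = T = 1$.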

If we look at the norm of $x_{m+1} - x_m$ as a function of time, that is
going to be a function norm. What I've been doing up to now is look at a
particular value $t \in [t_1, t_2]$.

Try to relate this to the norm $\mag{x_{m+1} - x_m}_\infty$. Can what we've
done so far give us a bound on the difference between two functions? We
can, because the infinity norm of a function is the maximum value that the
function assumes (maximum vector norm over all points $t$ in the interval
we're interested in). If we let $T$ be the length of our larger interval,
$t_2 - t_1$, we can use the previous result on the pointwise norm; then a
bound on the function norm has to be less than the same bound, i.e. if a
pointwise norm function is less than this bound for all relevant $t$, then
its max value must be less than this bound.

That gets us on the road we want to be on, since that now gets us a
bound. We can now go back to where we started. What we're actually
interested in is: given an index $m$, we can construct a bound on all later
elements in the sequence.

$\mag{x_{m+p} - x_m}_\infty = \mag{x_{m+p} - x_{m+p-1} + x_{m+p-1} - ... -
x_m}_\infty = \mag{\sum_{k=0}^{p-1} (x_{m+k+1} - x_{m+k})}_\infty \le M
\sum_{k=0}^{p-1} \frac{(\bar{k}T)^{m+k}}{(m+k)!}$.

We're going to recall a few things from undergraduate calculus: the Taylor
expansion of the exponential function and $(m+k)! \ge m!k!$.

With these, we can say that $\mag{x_{m+p} - x_m}_\infty \le
M\frac{(\bar{k}T)^m}{m!} e^{\bar{k} T}$. What we'd like to show is that
this can be made arbitrarily small as $m$ gets large. We study this bound
as $m \to \infty$, and we recall (e.g. from the Stirling approximation)
that the factorial grows faster than the exponential function. That is
enough to show that $\{x_m\}_0^\infty$ is Cauchy. Since it lives in a
Banach space (not proving this, since it's beyond our scope), it converges
to a function (call it $x^\ell$) in the same space.

Now we just need to show that the limit $x^\ell$ solves the differential
equation (and initial condition). Let's go back to the sequence that
determines $x^\ell$: $x_{m+1} = x_0 + \int_{t_0}^t f(x_m, \tau)
d\tau$. We've proven that this sequence converges to $x^\ell$. What we want
to show is that $\int_{t_0}^t f(x_m, \tau) d\tau \to \int_{t_0}^t f(x^\ell,
\tau) d\tau$. This would be immediate if we had continuity. It's clear that
$x^\ell$ satisfies the initial condition by the construction of the
sequence, but we need to show that it satisfies the differential
equation. Conceptually, this is probably more difficult than what we've
just done (establishing bounds, Cauchy sequences): thinking about what that
function limit is and what it means for it to satisfy that differential
equation.

Now, you can basically use some of the machinery we've been using all along
to show this. The difference between these goes to $0$ as $m$ gets large.

$$\mag{\int_{t_0}^t (f(x_m, \tau) - f(x^\ell, \tau)) d\tau}
\\ \le \int_{t_0}^t k(\tau) \mag{x_m - x^\ell} d\tau \le \bar{k}\mag{x_m - x^\ell}_\infty T
\\ \le \bar{k} M e^{\bar{k} T} \frac{(\bar{k} T)^m}{m!}T
$$

Thus $x^\ell$ solves the DE/IC pair. A solution $\Phi$ is $x^\ell$,
i.e. $\dot{x}^\ell(t) = f(x^\ell(t), t) \;\forall t \in [t_1, t_2]
\setminus D$ and $x^\ell(t_0) = x_0$.

To show that this solution is unique, we will use the Bellman-Gronwall
lemma, which is very important. It's used ubiquitously when you want to
show that functions of time are equal to each other: a candidate mechanism
to do that.

Bellman-Gronwall Lemma

Let $u, k$ be real-valued, positive, piecewise-continuous functions of
time, and let $c_1 \ge 0$ and $t_0 \ge 0$ be constants. If we have such
constants and functions, then the following is true: if $u(t) \le c_1 +
\int_{t_0}^t k(\tau)u(\tau) d\tau$, then $u(t) \le c_1 e^{\int_{t_0}^t
k(\tau) d\tau}$.
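
A quick numerical sanity check of the lemma (an illustration of mine, with
$u$, $k$, $c_1$ chosen so the hypothesis holds with equality, and so the
conclusion's bound is attained exactly):

```python
# Numeric sanity check of the Bellman-Gronwall bound (a sketch, not a
# proof).  Take u(t) = exp(2t), k(t) = 2, c1 = 1, t0 = 0: then
# u(t) = c1 + int_0^t k u dtau holds with equality, and the lemma's bound
# c1 * exp(int_0^t k dtau) is attained exactly.
import math

def trapezoid(f, a, b, n=2000):
    """Trapezoid-rule approximation of int_a^b f."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

c1, k = 1.0, 2.0
u = lambda t: math.exp(k * t)

for t in [0.25, 0.5, 1.0]:
    lhs = u(t)
    integral_side = c1 + trapezoid(lambda s: k * u(s), 0.0, t)
    bound = c1 * math.exp(k * t)        # c1 * exp(int_0^t k dtau)
    assert lhs <= integral_side + 1e-6  # hypothesis of the lemma
    assert lhs <= bound + 1e-6          # conclusion of the lemma
```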

Proof (of B-G)

$t > t_0$ WLOG.

$$U(t) = c_1 + \int_{t_0}^t k(\tau) u(\tau) d\tau
\\ u(t) \le U(t)
\\ \dot{U}(t) = k(t)u(t) \le k(t)U(t)
\\ \deriv{}{t}\parens{U(t)e^{-\int_{t_0}^t k(\tau) d\tau}} = \parens{\dot{U}(t) - k(t)U(t)} e^{-\int_{t_0}^t k(\tau) d\tau} \le 0 \text{ (then integrate this derivative, noting that } U(t_0) = c_1\text{)}
\\ u(t) \le U(t) \le c_1 e^{\int_{t_0}^t k(\tau) d\tau}
$$

Using this to prove uniqueness of DE/IC solutions

Here's how we're going to use the B-G lemma to prove uniqueness.

We have a solution that we constructed, $\Phi$, and someone else gives us a
solution $\Psi$, constructed via a different method. Show that these must
be equivalent. Since they're both solutions, they have to satisfy the DE/IC
pair. Take the norm of the difference between the differential equations.

$$\mag{\Phi - \Psi} \le \bar{k} \int_{t_0}^t \mag{\Phi - \Psi} d\tau \quad
\forall t_0, t \in [t_1, t_2]$$

From the Bellman-Gronwall Lemma, we can rewrite this inequality as
$\mag{\Phi - \Psi} \le c_1 e^{\bar{k}(t - t_0)}$. Since here $c_1 = 0$,
this norm is less than or equal to 0. By positive definiteness, this norm
must be equal to 0, and so the functions are equal to each other.

Reverse time differential equation

We think about time as monotonic (either increasing or decreasing, usually
increasing). Suppose instead that time is decreasing, and we have $\dot{x}
= f(x,t)$: let's explore existence and uniqueness going backwards in
time. Suppose we had a time variable $\tau$ which goes from $t_0$
backwards, defined as $\tau \defequals t_0 - t$. We define the solution to
that differential equation backwards in time as $z(\tau) = x(t)$ for $t \le
t_0$. Derive what the reverse-time derivative is: the equation is just
$-f$; we're going to use $\bar{f}$ to represent this function
($\deriv{}{\tau}z = -\deriv{}{t}x = -f(x, t) = -f(z, t_0 - \tau) \defequals
\bar{f}(z, \tau)$).

If I solve the reverse-time differential equation, we'll have some
corresponding backwards solution. Concluding statement: we can think about
solutions forwards and backwards in time. Existence of a unique solution
forward in time means existence of a unique solution backward in time (and
vice versa). You can't have solutions crossing themselves in time-invariant
systems.

EE 221A: Linear System Theory

September 20, 2012

Introduction to dynamical systems. Suppose we have equations $\dot{x} =
f(x, u, t)$, $\fn{f}{\Re^n \times \Re^{n_i} \times \Re_+}{\Re^n}$, and $y =
h(x, u, t)$, $\fn{h}{\Re^n \times \Re^{n_i} \times \Re_+}{\Re^{n_o}}$. We
define $n_i$ as the dimension of the input space, $n_o$ as the dimension of
the output space, and $n$ as the dimension of the state space.

We've looked at the form, and if we specify a particular $\bar{u}(t)$ over
some time interval of interest, then we can plug this into the right-hand
side of this differential equation. Typically we do not supply a particular
input. Thinking about solutions to this differential equation, for now,
let's suppose that it's specified.

Suppose we have some feedback function of the state. If $u$ is specified,
as long as $\bar{f}$ satisfies the conditions for the existence and
uniqueness theorem, we have a differential equation we can solve.

Another example: instead of a differential equation (which corresponds to
continuous time), we have a difference equation (which corresponds to
discrete time).

Example: a dynamic system represented by an LRC circuit. One practical way
to define the state $x$ is as a vector of elements whose derivatives appear
in our differential equation. Not formal, but practical for this example.

Notions of discretizing.

What is a dynamical system?

As discussed in the first lecture, we consider time $\Tau$ to be a
privileged variable. Based on our definition of time, the inputs and
outputs are all functions of time.

Now we're going to define a dynamical system as a 5-tuple: $(\mathcal{U},
\Sigma, \mathcal{Y}, s, r)$ (input space, state space, output space, state
transition function, output map).

We define the input space as the set of input functions over time to an
input set $U$ (i.e. $\mathcal{U} = \{\fn{u}{\Tau}{U}\}$; typically, $U =
\Re^{n_i}$).

We also define the output space as the set of output functions over time to
an output set $Y$ (i.e. $\mathcal{Y} = \{\fn{y}{\Tau}{Y}\}$; typically, $Y
= \Re^{n_o}$).

$\Sigma$ is our state space. Not defined as a function, but the actual
state space. Typically, $\Sigma = \Re^n$, and we can go back and think
about the function $x(t) \in \Sigma$. $\fn{x}{\Tau}{\Sigma}$ is called the
state trajectory.

$s$ is called the state transition function because it defines how the
state changes in response to time and the initial state and the
input. $\fn{s}{\Tau \times \Tau \times \Sigma \times \mathcal{U}}{\Sigma}$.
Usually we write this as $x(t_1) = s(t_1, t_0, x_0, u)$, where $u$ is the
function $u(\cdot)|_{t_0}^{t_1}$. This is important: we're coming towards
how we define state. The only things you need to get to the state at the
new time are the initial state, the inputs, and the dynamics.

Finally, we have this output map (sometimes called the readout map)
$r$: $\fn{r}{\Tau \times \Sigma \times U}{Y}$. That is, we can think about
$y(t) = r(t, x(t), u(t))$. There's something fundamentally different
between $r$ and $s$: $s$ depends on the function $u$, whereas $r$ only
depends on the current value of $u$ at a particular time.

$s$ captures dynamics, while $r$ is static. Remark: $s$ has dynamics
(memory) -- things that depend on previous time -- whereas $r$ is static:
everything it depends on is at the current time (memoryless).

In order to be a dynamical system, we need to satisfy two axioms: a
dynamical system is a five-tuple with the following two axioms:

• The state transition axiom: $\forall t_1 \ge t_0$, given $u, \tilde{u}$
  that are equal to each other over a particular time interval, the state
  transition functions must be equal over that interval, i.e. $s(t_1, t_0,
  x_0, u) = s(t_1, t_0, x_0, \tilde{u})$. Requires us to not have
  dependence on the input outside of the time interval of interest.
• The semigroup axiom: suppose you start a system at $t_0$ and evolve it to
  $t_2$, and you're considering the state. You have an input $u$ defined
  over the whole time interval. If you were to look at an intermediate
  point $t_1$, and you computed the state at $t_1$ via the state transition
  function, we can split our time interval into two intervals, and we can
  compute the result any way we like. Stated as the following: $s(t_2, t_1,
  s(t_1, t_0, x_0, u), u) = s(t_2, t_0, x_0, u)$.

When we talk about a dynamical system, we have to satisfy these axioms.
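
Both axioms can be checked numerically when the state transition function
is known in closed form. This scalar example ($\dot{x} = ax + bu$, with my
own choice of $a$, $b$, and $u$) is a sketch of mine, not from lecture:

```python
# Sketch: check the two axioms for the scalar system x' = a x + b u, whose
# state transition function is known in closed form:
#   s(t1, t0, x0, u) = e^{a(t1-t0)} x0 + int_{t0}^{t1} e^{a(t1-tau)} b u(tau) dtau
import math

a, b = -0.5, 1.0
u = lambda t: math.sin(t)          # an arbitrary input signal

def s(t1, t0, x0, u, n=4000):
    h = (t1 - t0) / n
    integral = 0.0
    for i in range(n):             # midpoint rule for the integral
        tau = t0 + (i + 0.5) * h
        integral += math.exp(a * (t1 - tau)) * b * u(tau) * h
    return math.exp(a * (t1 - t0)) * x0 + integral

x0, t0, t1, t2 = 2.0, 0.0, 1.0, 3.0

# semigroup axiom: evolving t0 -> t2 equals evolving t0 -> t1 -> t2
direct = s(t2, t0, x0, u)
via_t1 = s(t2, t1, s(t1, t0, x0, u), u)
assert abs(direct - via_t1) < 1e-4

# state transition axiom: s only sees u on [t0, t1]
u_tilde = lambda t: u(t) if t <= t1 else 99.0   # differs only after t1
assert abs(s(t1, t0, x0, u) - s(t1, t0, x0, u_tilde)) < 1e-9
```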

Response function

Since we're interested in the outputs and not the states, we can define
what we call the response map. It's not considered part of the definition
of a dynamical system because it can be easily derived.

It's the composition of the state transition function and the readout map,
i.e. $y(t) = r(t, x(t), u(t)) = r(t, s(t, t_0, x_0, u), u(t)) \defequals
\rho(t, t_0, x_0, u)$. This is an important function because it is used to
define properties of a dynamical system. Why is that? We've said that
states are somehow mysterious. Not something we typically care about:
typically we care about the outputs. Thus we define properties like
linearity and time invariance in terms of the response.

Time Invariance

We define a time-shift operator $\fn{T_\tau}{\mathcal{U}}{\mathcal{U}}$,
$\fn{T_\tau}{\mathcal{Y}}{\mathcal{Y}}$, $(T_\tau u)(t) \defequals u(t -
\tau)$. Namely, the value of $T_\tau u$ is that of the old signal at
$t-\tau$.

A time-invariant (dynamical) system is one in which the input space and
output space are closed under $T_\tau$ for all $\tau$, and $\rho(t, t_0,
x_0, u) = \rho(t + \tau, t_0 + \tau, x_0, T_\tau u)$.

Linearity

A linear dynamical system is one in which the input, state, and output
spaces are all linear spaces over the same field $\mathbb{F}$, and the
response map $\rho$ is a linear map of $\Sigma \times \mathcal{U}$ into
$\mathcal{Y}$.

This is a strict requirement: you have to check that the response map
satisfies these conditions. A question that comes up: why do we define
linearity of a dynamical system in terms of linearity of the response and
not the state transition function? It goes back to a system being
intrinsically defined by its inputs and outputs. Often you can have many
different ways to define states, and typically we can't see all of
them. It's accepted that when we talk about a system and think about its
I/O relations, it makes sense that we define linearity in terms of this
memory function of the system, as opposed to the state transition function.

Let's just say a few remarks about this: zero-input response and
zero-state response. If we look at the zero element in our spaces (so
we have a zero vector), then we can take our superposition, which implies
that the response at time $t$ is equal to the zero-state response (the
response, given that we started at the zero state) plus the zero-input
response.

That is: $\rho(t, t_0, x_0, u) = \rho(t, t_0, \theta_x, u) + \rho(t, t_0,
x_0, \theta_u)$ (from the definition of linearity).
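
This decomposition can be seen numerically in the same scalar toy system
$\dot{x} = ax + bu$ (my own example, with the closed-form state transition
function standing in for the response):

```python
# Sketch of the superposition remark: for the linear scalar system
# x' = a x + b u, the response splits into zero-input + zero-state parts.
import math

a, b = -0.5, 1.0

def s(t1, t0, x0, u, n=2000):
    """Closed-form transition e^{a(t1-t0)} x0 + convolution, midpoint rule."""
    h = (t1 - t0) / n
    integral = sum(
        math.exp(a * (t1 - (t0 + (i + 0.5) * h))) * b * u(t0 + (i + 0.5) * h) * h
        for i in range(n))
    return math.exp(a * (t1 - t0)) * x0 + integral

u = lambda t: math.cos(2 * t)
zero = lambda t: 0.0

t0, t, x0 = 0.0, 2.0, 3.0
full       = s(t, t0, x0, u)
zero_state = s(t, t0, 0.0, u)      # started at the zero state
zero_input = s(t, t0, x0, zero)    # driven by the zero input
assert abs(full - (zero_state + zero_input)) < 1e-9
```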

The second remark is that the zero-state response is linear in the input,
and similarly, the zero-input response is linear in the state.

One more property of dynamical systems before we finish: equivalence (a
property derived from the definition). Take two dynamical systems $D = (U,
\Sigma, Y, s, r)$ and $\tilde{D} = (U, \tilde{\Sigma}, Y, \tilde{s},
\tilde{r})$. A state $x_0 \in \Sigma$ is equivalent to $\tilde{x_0} \in
\tilde{\Sigma}$ at $t_0$ if $\forall t \ge t_0$, $\rho(t, t_0, x_0, u) =
\tilde{\rho}(t, t_0, \tilde{x_0}, u)$ $\forall u$. If every $x_0$ has an
equivalent $\tilde{x_0}$ (and vice versa), the two systems are equivalent.

EE 221A: Linear System Theory

September 25, 2012

Linear time-varying systems

Recall that the state transition function is some function of the current
time, initial state, initial time, and inputs. Suppose you have a
differential equation; how do you acquire the state transition function?
Solve the differential equation.

For a general dynamical system, there are different ways to get the state
transition function. This is an instantiation of a dynamical system, and
we're going to get the state transition function by solving the
differential equation / initial condition pair.

We're going to call $\dot{x}(t) = A(t)x(t) + B(t)u(t)$ a vector
differential equation with initial condition $x(t_0) = x_0$.

So that requires us to think about solving that differential equation. Do a
dimension check, to make sure we know the dimensions of the matrices. $x
\in \Re^n$, so $A(t) \in \Re^{n \times n}$. We could define the matrix
function $A$, which takes intervals of the real line and maps them over to
matrices. As a function, $A$ is a piecewise-continuous matrix function in
time.

The entries are piecewise-continuous scalars in time. We would like to get
at the state transition function; to do that, we need to solve the
differential equation.

Let's assume for now that $A$, $B$, and $u$ are given (part of the system
definition).

Piecewise continuity is trivial; we can use the induced norm of $A(t)$ for
a Lipschitz condition. Since this induced norm is piecewise continuous in
time, this is a fine bound. Therefore $f$ is globally Lipschitz continuous.

We're going to back off for a bit and introduce the state transition
matrix, as background for solving the VDE. We're going to introduce a
matrix differential equation, $\dot{X} = A(t) X$ (where $A(t)$ is the same
as before).

I'm going to define $\Phi(t, t_0)$ as the solution to the matrix
differential equation (MDE) for the initial condition $\Phi(t_0, t_0) =
1_{n \times n}$. That is, $\Phi$ is the solution to the $n \times n$ matrix
differential equation when it starts out at the identity matrix.

Let's first talk about properties of this matrix $\Phi$ just from the
definition we have.

• If you go back to the vector differential equation, and let's just drop
  the term that depends on $u$ (either consider $B$ to be 0, or the input
  to be 0), the solution of $\dot{x} = A(t)x(t)$ is given by $x(t) =
  \Phi(t, t_0)x_0$.
• This is what we call the semigroup property, since it's reminiscent of
  the semigroup axiom: $\Phi(t, t_0) = \Phi(t, t_1) \Phi(t_1, t_0) \;
  \forall t, t_0, t_1 \in \Re^+$.
• $\Phi^{-1}(t, t_0) = \Phi(t_0, t)$.
• $\det \Phi(t, t_0) = \exp\parens{\int_{t_0}^t \text{tr} \parens{A
  (\tau)} d\tau}$.
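
A numerical sketch of these properties (the 2x2 $A(t)$ below is my own
arbitrary example): build $\Phi(t, t_0)$ by integrating the MDE from the
identity, then spot-check properties (2)-(4):

```python
# Build Phi(t, t0) for a time-varying A(t) by integrating X' = A(t) X from
# the identity (classical RK4), then check the listed properties.
import math

def A(t):
    return [[0.0, 1.0], [-1.0, -0.1 * t]]   # an arbitrary 2x2 example

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add_scaled(X, Y, c):
    return [[X[i][j] + c * Y[i][j] for j in range(2)] for i in range(2)]

def phi(t, t0, n=2000):
    """Integrate X' = A(t) X, X(t0) = I, with RK4."""
    X = [[1.0, 0.0], [0.0, 1.0]]
    h = (t - t0) / n
    for i in range(n):
        tau = t0 + i * h
        k1 = matmul(A(tau), X)
        k2 = matmul(A(tau + h / 2), add_scaled(X, k1, h / 2))
        k3 = matmul(A(tau + h / 2), add_scaled(X, k2, h / 2))
        k4 = matmul(A(tau + h), add_scaled(X, k3, h))
        X = add_scaled(X, k1, h / 6)
        X = add_scaled(X, k2, h / 3)
        X = add_scaled(X, k3, h / 3)
        X = add_scaled(X, k4, h / 6)
    return X

P20, P21, P10 = phi(2.0, 0.0), phi(2.0, 1.0), phi(1.0, 0.0)

# (2) semigroup: Phi(2,0) = Phi(2,1) Phi(1,0)
S = matmul(P21, P10)
assert all(abs(P20[i][j] - S[i][j]) < 1e-6 for i in range(2) for j in range(2))

# (3) inverse: Phi(0,2) Phi(2,0) = I
E = matmul(phi(0.0, 2.0), P20)
assert all(abs(E[i][j] - (1.0 if i == j else 0.0)) < 1e-6
           for i in range(2) for j in range(2))

# (4) det: tr A(tau) = -0.1 tau, so the integral over [0,2] is -0.2
det = P20[0][0] * P20[1][1] - P20[0][1] * P20[1][0]
assert abs(det - math.exp(-0.2)) < 1e-6
```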

Here let's talk about some machinery we can now invoke when we want to show
that two functions of time are equal to each other when they're both
solutions to the same differential equation. You can simply show by the
existence and uniqueness theorem (assuming it applies) that they satisfy
the same initial condition and the same differential equation. That's an
important point, and we tend to use it a lot.

(i.e. when faced with showing that two functions of time are equal to each
other, you can show that they both satisfy the same initial condition and
the same differential equation [as long as the differential equation
satisfies the hypotheses of the existence and uniqueness theorem])

Obvious, but good to state.

Note: the initial condition doesn't have to be the initial condition given;
it just has to hold at one point in the interval. Pick your point in time
judiciously.

Proof of (2): check $t=t_1$. (3) follows directly from (2). (4) you can
look at if you want. It gives you a way to compute $\Phi(t, t_0)$. We've
introduced a matrix differential equation and an abstract solution.

Consider (1). $\Phi(t, t_0)$ is a map that takes the initial state and
transitions to the new state. Thus we call $\Phi$ the state transition
matrix because of what it does to the states of this vector differential
equation: it transfers them from their initial value to their final value,
and it transfers them through matrix multiplication.

Let's go back to the original differential equation. Claim that the
solution to that differential equation has the following form: $x(t) =
\Phi(t, t_0)x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)u(\tau) d\tau$. Proof:
we can use the same machinery. If someone gives you a candidate solution,
you can easily show that it is the solution.

Recall the Leibniz rule, which we'll state in general as follows:
$\pderiv{}{z} \int_{a(z)}^{b(z)} f(x, z) dx = \int_{a(z)}^{b(z)}
\pderiv{}{z}f(x, z) dx + \deriv{b}{z} f(b(z), z) - \deriv{a}{z} f(a(z),
z)$.

$$\dot{x}(t) = A(t) \Phi(t, t_0) x_0 + \int_{t_0}^t
\pderiv{}{t} \parens{\Phi(t, \tau)B(\tau)u(\tau)} d\tau +
\deriv{t}{t}\parens{\Phi(t, t)B(t)u(t)} - \deriv{t_0}{t}\parens{...}
\\ = A(t)\Phi(t, t_0)x_0 + \int_{t_0}^t A(t)\Phi(t,\tau)B(\tau)u(\tau)d\tau + B(t)u(t)
\\ = A(t)\Phi(t, t_0) x_0 + A(t)\int_{t_0}^t \Phi(t, \tau)B(\tau)
u(\tau) d\tau + B(t) u(t)
\\ = A(t)\parens{\Phi(t, t_0) x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)
u(\tau) d\tau} + B(t) u(t)
$$

$x(t) = \Phi(t,t_0)x_0 + \int_{t_0}^t \Phi(t,\tau)B(\tau)u(\tau) d\tau$ is
good to remember.
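
A numeric check of this formula in the scalar case, where $\Phi(t, \tau) =
e^{a(t - \tau)}$ (again a toy example of my own): the formula should agree
with a direct numerical solve of $\dot{x} = ax + bu(t)$.

```python
# Variation-of-constants check, scalar case: compare the closed-form
# x(t) = Phi(t,t0) x0 + int Phi(t,tau) b u(tau) dtau against an RK4 solve.
import math

a, b = -0.3, 2.0
u = lambda t: math.sin(3 * t)
x0, t0, t1 = 1.0, 0.0, 2.0
n = 4000
h = (t1 - t0) / n

# direct RK4 solve of x' = a x + b u(t)
f = lambda t, x: a * x + b * u(t)
x = x0
for i in range(n):
    t = t0 + i * h
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# the formula, with the integral done by the midpoint rule
integral = sum(
    math.exp(a * (t1 - (t0 + (i + 0.5) * h))) * b * u(t0 + (i + 0.5) * h) * h
    for i in range(n))
formula = math.exp(a * (t1 - t0)) * x0 + integral

assert abs(x - formula) < 1e-4
```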

Not surprisingly, it depends on the input function over an interval of
time.

The differential equation is changing over time, therefore the system
itself is time-varying. There's no way in general that it will be
time-invariant, since the equation that defines its evolution is
changing. You test time-invariance or time-variance through the response
map. But is it linear? You have the state transition function, so we can
compute the response function (recall: readout map composed with the state
transition function) and ask if this is a linear map.

96 fa2012/cs150/10.md
 @@ -0,0 +1,96 @@

CS 150: Digital Design & Computer Architecture
==============================================
September 20, 2012
------------------

Non-overlapping clocks. n-phase means that you've got n different outputs,
and at most one high at any time. Guaranteed dead time between when one
goes low and the next goes high.

K-maps
------
Finding minimal sum-of-products and product-of-sums expressions for
functions. **On-set**: all the ones of a function; **implicant**: one or
more circled ones in the on-set; a **minterm** is the smallest implicant
you can have, and implicants go up by powers of two in the number of ones
they contain; a **prime implicant** can't be combined with another (by
circling); an **essential prime implicant** is a prime implicant that
contains at least one one not in any other prime implicant. A **cover** is
any collection of implicants that contains all of the ones in the on-set,
and a **minimal cover** is one made up of essential prime implicants and
the minimum number of other implicants.

Hazards vs. glitches. Glitches are when timing issues result in dips (or
spikes) in the output; hazards are when they might happen. Completely
irrelevant in synchronous logic.

Project
-------
3-stage pipeline MIPS150 processor. Serial port, graphics accelerator. If
we look at the datapath elements, the storage elements, you've got your
program counter, your instruction memory, register file, and data
memory. Figure 7.1 from the book. If you mix that in with figure 8.28,
which talks about MMIO, that data memory, there's an address and data bus
that this is hooked up to, and if you want to talk to a serial port on a
MIPS processor (or an ARM processor, or something like that), you don't
address a particular port (not like x86). Most ports are
memory-mapped. We've actually got an MMIO module that is also hooked up to
the address and data bus. For some range of addresses, it's the one that
handles reads and writes.

You've got a handful of different modules down here, such as a UART receive
module and a UART transmit module. In your project, you'll have your
personal computer that has a serial port on it, and that will be hooked up
to your project, which contains the MIPS150 processor. Somehow, you've got
to be able to handle characters transmitted in each direction.

UART
----
Common ground, TX on one side connected to the RX port on the other side,
and vice versa. Whole bunch more in different connectors. The basic
protocol is called RS232, common (people often refer to it by connector
name: DB9, rarely DB25); fortunately, we've moved away from this world and
use USB. We'll talk about these other protocols later, some sync, some
async. Workhorse for a long time, still all over the place.

You're going to build the UART receiver/transmitter and the MMIO module
that interfaces them. See when something's coming in from software /
hardware. Going to start out with polling; we will implement interrupts
later on in the project (for timing and serial IO on the MIPS
processor). That's really the hardcore place where software and hardware
meet. People who understand how each interface works and how to use them
optimally together are valuable and rare people.

In Lab 4, there are really two concepts: (1) how does serial / UART work
and (2) the ready / valid handshake.

On the MIPS side, you've got some addresses. Anything that starts with FFFF
is part of the memory-mapped region. In particular, the first four are
mapped to the UART: they are RX control, RX data, TX control, and TX data.

When you want to send something out the UART, you write the byte -- there's
just one bit for the control and one byte for data.

Data goes into some FSM system, and you've got an RX shift register and a
TX shift register.

There's one other piece of this, which is that inside of here, the thing
interfacing to this IO-mapped module uses this ready bit. If you have two
modules, a source and a sink (diagram from the document), the source has
some data that it is sending out, tells the sink when the data is valid,
and the sink tells the source when it is ready. And there's a shared
"clock" (baud rate), and this is a synchronous interface.

* source presents data
* source raises valid
* when ready & valid on posedge clock, both sides know the transaction was
  successful.

Whatever order this happens in, the source is responsible for making sure
the data is valid.

HDLC? Takes bytes and puts them into packets, ACKs, etc.

Talk about quartz crystals, resonators. $\pi \cdot 10^7$.

So: before I let you go: parallel load, n bits in, serial out, etc.
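
The ready/valid handshake above can be sketched in a few lines of Python (a
toy cycle-by-cycle model of my own; the class and signal names are made up
for illustration): a transaction occurs on a clock edge exactly when both
ready and valid are high.

```python
# Toy model of the ready/valid handshake: the source presents data and
# raises valid; the sink raises ready; data transfers only on cycles where
# both are high at the "clock edge" (one loop iteration = one cycle).
class Source:
    """Holds a queue of bytes; presents the head and asserts valid."""
    def __init__(self, data):
        self.queue = list(data)
        self.out = None

    def present(self):
        # source presents data and raises valid whenever it has something
        if self.queue:
            self.out = self.queue[0]
            return True            # valid high
        return False               # valid low

    def consume(self):
        self.queue.pop(0)

data = [0xCA, 0xFE, 0x42]
src = Source(data)
received = []
for cycle in range(8):
    ready = (cycle % 2 == 0)       # sink is only ready every other cycle
    valid = src.present()
    if ready and valid:            # both high on the same edge: transaction
        received.append(src.out)
        src.consume()

assert received == data            # all three bytes transferred, in order
```

Note that the source holds its data stable until the transaction fires,
which is exactly the "source is responsible for making sure the data is
valid" rule above.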
5 fa2012/cs150/11.md
 @@ -0,0 +1,5 @@ +CS 150: Digital Design & Computer Architecture +============================================== +September 25, 2012 +------------------ +
2  fa2012/cs150/3.md
 @@ -1,5 +1,5 @@ CS 150: Digital Design & Computer Architecture -=============================================== +============================================== August 28, 2012 ---------------
2  fa2012/cs150/4.md
 @@ -1,5 +1,5 @@ CS 150: Digital Design & Computer Architecture -=============================================== +============================================== August 30, 2012 ---------------
2  fa2012/cs150/5.md
 @@ -1,5 +1,5 @@ CS 150: Digital Design & Computer Architecture -=============================================== +============================================== September 4, 2012 -----------------
2  fa2012/cs150/6.md
 @@ -1,5 +1,5 @@ CS 150: Digital Design & Computer Architecture -=============================================== +============================================== September 6, 2012 -----------------
2  fa2012/cs150/7.md
 @@ -1,5 +1,5 @@ CS 150: Digital Design & Computer Architecture -=============================================== +============================================== September 11, 2012 ------------------
2  fa2012/cs150/8.md
 @@ -1,5 +1,5 @@ CS 150: Digital Design & Computer Architecture -=============================================== +============================================== September 13, 2012 ------------------
83 fa2012/cs150/9.md
 @@ -0,0 +1,83 @@ +CS 150: Digital Design & Computer Architecture +============================================== +September 18, 2012 +------------------ + +Lab this week you are learning about chipscope. Chipscope is kinda like +what it sounds: allows you to monitor things happening in the FPGA. One of +the interesting things about Chipscope is that it's a FSM monitoring stuff +in your FPGA, it also gets compiled down, and it changes the location of +everything that goes into your chip. It can actually make your bug go away +(e.g. timing bugs). + +So. Counters. How do counters work? If I've got a 4-bit counter and I'm +counting from 0, what's going on here? + +D-ff with an inverter and enable line? This is a T-ff (toggle +flipflop). That'll get me my first bit, but my second bit is slower. $Q_1$ +wants to toggle only when $Q_0$ is 1. With subsequent bits, they want to +toggle when all lower bits are 1. + +Counter with en: enable is tied to the toggle of the first bit. Counter +with ld: four input bits, four output bits. Clock. Load. Then we're going +to want to do a counter with ld, en, rst. Put in logic, etc. + +Quite common: ripple carry out (RCO), where we AND $Q[3:0]$ and feed this +into the enable of $T_4$. + +Ring counter (shift register with one hot out), If reset is low I just +shift this thing around and make a circular shift register. If high, I clear +the out bit. + +Mobius counter: just a ring counter with a feedback inverter in it. Just +going to take whatever state in there, and after n clock ticks, it inverts +itself. So you have $n$ flipflops, and you get $2n$ states. + +And then you've got LFSRs (linear feedback shift registers). Given N +flipflops, we know that a straight up or down counter will give us $2^N$ +states. Turns out that an LFSR give syou almost that (not 0). So why do +that instead of an up-counter? This can give you a PRNG. Fun times with +Galois fields. + +Various uses, seeds, high enough periods (Mersenne twisters are higher). 
RAM
---
Remember: decoder, cell array, $2^n$ rows, $2^n$ word lines, and some
number of bit lines coming out of the cell array for I/O, with
output-enable and write-enable.

When output-enable is low, D goes to high-Z. At some point, some external
device starts driving some Din (not from memory). Then I can apply a write
pulse (write strobe), which causes that data to be written into the memory
at this address location. Whatever was driving the line releases it, so it
goes back to high impedance, and if we assert output-enable again, we'll
see "Din" come back out of the cell array.

During the write pulse, we need Din stable and the address stable. We use a
pulse because we don't want to break things; bad things happen otherwise.

Notice: no clock anywhere. Your FPGA (in particular, the block RAM on the
ML505) is a little different in that it has registered inputs (addr &
data). First off, it's very configurable: there are all sorts of ways you
can set it up. Addr in particular goes into a register, and from there into
a decoder before it reaches the cell array. What comes out of the cell
array is a little different too: there's a data-in line that goes into a
register, and a separate data-out that can be configured in a whole bunch
of different ways so that you can do a bunch of different things.

The important thing is that you can apply your address to those inputs, and
it doesn't show up until the rising edge of the clock. There's the option
of having either registered or non-registered output (non-registered for
this lab).

So now we've got an ALU and RAM, and so we can build some simple
datapaths. For sure you're going to see on the final (and most likely the
midterm) problems like "given a 16-bit ALU and a 1024x16 sync SRAM, design
a system to find the largest unsigned int in the SRAM."

Demonstration of clock cycles, etc. So what does our FSM look like? Either
LOAD or HOLD.

The homework did not say sync SRAM.
Will probably change.
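The exam-style problem above ("find the largest unsigned int in the SRAM")
can be sketched behaviorally. This is a hypothetical software model of the
datapath, not an actual implementation: one SRAM read per "clock", an
unsigned ALU compare, and a result register that the LOAD/HOLD FSM either
loads or holds:

```python
def find_max(sram):
    """Behavioral model of the max-finder datapath: an address counter
    sweeps the SRAM, the ALU compares, and the result register LOADs or
    HOLDs on each cycle."""
    max_reg = 0                      # result register, reset to 0
    for addr in range(len(sram)):    # address counter, one address/cycle
        data = sram[addr]            # synchronous SRAM read
        if data > max_reg:           # ALU unsigned compare drives the FSM
            max_reg = data           # LOAD state: capture new maximum
        # else: HOLD state, register keeps its value
    return max_reg

print(find_max([17, 3, 60000, 42]))   # 60000
```

In hardware this takes one pass of 1024 cycles plus whatever latency the
registered SRAM output adds; the two FSM states correspond directly to the
if/else above.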
205 fa2012/cs150/cs150.md
CS 150: Digital Design & Computer Architecture
==============================================
September 20, 2012
------------------

Non-overlapping clocks.
n-phase means that you've got n different outputs, with at most one high at
any time, and guaranteed dead time between when one goes low and the next
goes high.

K-maps
------
Finding minimal sum-of-products and product-of-sums expressions for
functions. The **on-set** is all the ones of a function; an **implicant**
is one or more circled ones in the on-set; a **minterm** is the smallest
implicant you can have, and implicants go up by powers of two in the number
of ones they cover; a **prime implicant** can't be combined with another
(by circling); an **essential prime implicant** is a prime implicant that
contains at least one one not in any other prime implicant. A **cover** is
any collection of implicants that contains all of the ones in the on-set,
and a **minimal cover** is one made up of essential prime implicants and
the minimum number of other implicants.

Hazards vs. glitches: glitches are when timing issues result in dips (or
spikes) in the output; hazards are when they *might* happen. Completely
irrelevant in synchronous logic.

Project
-------
A 3-stage pipelined MIPS150 processor, with a serial port and a graphics
accelerator. If we look at the datapath elements, the storage elements,
you've got your program counter, your instruction memory, register file,
and data memory (Figure 7.1 from the book). Mix that in with Figure 8.28,
which covers memory-mapped I/O: that data memory is hooked up to an address
and data bus, and if you want to talk to a serial port on a MIPS processor
(or an ARM processor, or something like that), you don't address a
dedicated port (it's not like x86). Most ports are memory-mapped. So
there's an MMIO module also hooked up to the address and data bus; for some
range of addresses, it's the one that handles reads and writes.

You've got a handful of different modules down here, such as a UART receive
module and a UART transmit module.
In your project, you'll have your personal computer, which has a serial
port, hooked up to your project, which contains the MIPS150 processor.
Somehow, you've got to be able to handle characters transmitted in each
direction.

UART
----
Common ground, TX on one side connected to the RX port on the other side,
and vice versa. There's a whole bunch more in other connectors. The basic
protocol is called RS-232, and it's common (people often refer to it by
connector name: DB9, or rarely DB25); fortunately, we've largely moved away
from this world and use USB. We'll talk about other protocols later, some
synchronous, some asynchronous. RS-232 was the workhorse for a long time,
and it's still all over the place.

You're going to build the UART receiver/transmitter and the MMIO module
that interfaces them, and see when something's coming in from software /
hardware. We're going to start out with polling; we will implement
interrupts later on in the project (for timing and serial I/O on the MIPS
processor). That's really the hardcore place where software and hardware
meet. People who understand how each interface works and how to use them
optimally together are valuable and rare.

What you're doing in Lab 4 really involves two concepts: (1) how does
serial / UART work, and (2) the ready/valid handshake.

On the MIPS side, you've got some addresses. Anything that starts with FFFF
is part of the memory-mapped region. In particular, the first four words
are mapped to the UART: RX control, RX data, TX control, and TX data.

When you want to send something out the UART, you write the byte -- there's
just one bit for the control and one byte for the data.

Data goes into some FSM system, and you've got an RX shift register and a
TX shift register.

There's one other piece of this: inside of here, the thing interfacing to
this IO-mapped module uses this ready bit.
If you have two modules, a source and a sink (diagram from the document),
the source has some data it is sending out and tells the sink when the data
is valid, and the sink tells the source when it is ready. There's a shared
"clock" (baud rate), so this is a synchronous interface.

* The source presents data.
* The source raises valid.
* When ready & valid on a positive clock edge, both sides know the
  transaction was successful.

Whatever order this happens in, the source is responsible for making sure
the data is valid.

HDLC? Takes bytes and puts them into packets, ACKs, etc.

Talk about quartz crystals, resonators. $\pi \cdot 10^7$.

So, before I let you go: parallel load, n bits in, serial out, etc.



CS 150: Digital Design & Computer Architecture
==============================================
September 25, 2012
------------------
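Tying back to the ready/valid bullets in the September 20 notes above, here
is a toy cycle-by-cycle model (mine; the waveform values are illustrative,
not from the lecture). A transfer happens exactly on the clock edges where
both ready and valid are high:

```python
def transfers(valid, ready, data):
    """Given per-cycle valid/ready signals and the source's data bus,
    return the items the sink actually accepts (ready & valid cycles)."""
    accepted = []
    for v, r, d in zip(valid, ready, data):
        if v and r:                  # handshake completes this posedge
            accepted.append(d)
        # otherwise: source must hold data stable while valid is high
    return accepted

# source asserts valid on cycles 1-3; sink is only ready on cycles 2-4
valid = [0, 1, 1, 1, 0]
ready = [0, 0, 1, 1, 1]
data  = ['A', 'A', 'A', 'B', '-']
print(transfers(valid, ready, data))   # ['A', 'B']
```

Note that 'A' sits on the bus for two cycles before it is accepted: the
source may not change data while valid is high and ready is low, which is
exactly the "source is responsible" rule above.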
105 fa2012/cs_h195/3.md
EE 221A: Linear System Theory
=============================
September 25, 2012
------------------
Linear time-varying systems
---------------------------
Recall that the state transition function is some function of the current
time with the initial state, initial time, and inputs. Suppose you have a
differential equation; how do you acquire the state transition function?
Solve the differential equation.

For a general dynamical system, there are different ways to get the state
transition function. This is an instantiation of a dynamical system, and
we're going to get the state transition function by solving the
differential equation / initial condition pair.

We're going to call $\dot{x}(t) = A(t)x(t) + B(t)u(t)$ a vector
differential equation with initial condition $x(t_0) = x_0$.

So that requires us to think about solving that differential equation. Do a
dimension check to make sure we know the dimensions of the matrices: $x \in
\Re^n$, so $A \in \Re^{n \times n}$. We can define the matrix function $A$,
which takes intervals of the real line and maps them over to matrices; as a
function, $A$ is a piecewise continuous matrix function in time.

The entries are piecewise continuous scalars in time. We would like to get
at the state transition function; to do that, we need to solve the
differential equation.

Let's assume for now that $A$, $B$, $u$ are given (part of the system
definition).

Piecewise continuity is trivial; for a Lipschitz condition we can use the
induced norm of $A$. Since this induced norm is piecewise continuous in
time, this is a fine bound. Therefore $f$ is globally Lipschitz continuous.

We're going to back off for a bit and introduce the state transition
matrix, as background for solving the VDE. We're going to introduce a
matrix differential equation, $\dot{X} = A(t) X$ (where $A(t)$ is the same
as before).
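As a sanity check (mine, not from the lecture), in the scalar case $n = 1$
this matrix differential equation has a closed-form solution by separation
of variables:

$$\dot{X} = a(t) X, \quad X(t_0) = 1 \quad \Longrightarrow \quad
X(t) = \exp\parens{\int_{t_0}^t a(\tau) \, d\tau}.$$

Since $\int_{t_0}^t = \int_{t_0}^{t_1} + \int_{t_1}^t$, the exponential
factors through any intermediate time $t_1$, and $\text{tr}(a) = a$, so the
composition and determinant properties of the matrix solution below hold
trivially here.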
I'm going to define $\Phi(t, t_0)$ as the solution to the matrix
differential equation (MDE) for the initial condition $\Phi(t_0, t_0) =
1_{n \times n}$: that is, $\Phi$ is the solution of the $n \times n$ matrix
equation when the differential equation starts out at the identity matrix.

Let's first talk about properties of this matrix $\Phi$ just from the
definition we have.

 * If you go back to the vector differential equation and drop the term
   that depends on $u$ (either consider $B$ to be 0, or the input to be 0),
   the solution of $\dot{x} = A(t)x(t)$ is given by $x(t) = \Phi(t,
   t_0)x_0$.
 * The semigroup property, so called since it's reminiscent of the
   semigroup axiom: $\Phi(t, t_0) = \Phi(t, t_1) \Phi(t_1, t_0) \;\forall
   t, t_0, t_1 \in \Re^+$.
 * $\Phi^{-1}(t, t_0) = \Phi(t_0, t)$.
 * $\text{det} \Phi(t, t_0) = \exp\parens{\int_{t_0}^t \text{tr} \parens{A
   (\tau)} d\tau}$.

Here let's talk about some machinery we can invoke when we want to show
that two functions of time are equal to each other and both are solutions
to a differential equation. By the existence and uniqueness theorem
(assuming it applies), you can simply show that they satisfy the same
initial condition and the same differential equation. That's an important
point, and we tend to use it a lot.

(i.e. when faced with showing that two functions of time are equal to each
other, you can show that they both satisfy the same initial condition and
the same differential equation, as long as the differential equation
satisfies the hypotheses of the existence and uniqueness theorem.)

Obvious, but good to state.

Note: the initial condition doesn't have to be the initial condition given;
it just has to hold at one point in the interval. Pick your point in time
judiciously.

Proof of (2): check $t = t_1$. (3) follows directly from (2). (4) you can
look at if you want; it gives you a way to compute $\Phi(t, t_0)$.
We've
introduced a matrix differential equation and an abstract solution.

Consider (1): $\Phi(t, t_0)$ is a map that takes the initial state and
transitions it to the new state. Thus we call $\Phi$ the **state transition
matrix** because of what it does to the states of this vector differential
equation: it transfers them from their initial value to their final value,
and it transfers them through matrix multiplication.

Let's go back to the original differential equation. Claim: the solution to
that differential equation has the following form: $x(t) = \Phi(t, t_0)x_0
+ \int_{t_0}^t \Phi(t, \tau)B(\tau)u(\tau) d\tau$. Proof: we can use the
same machinery. If someone gives you a candidate solution, you can easily
show that it is the solution.

Recall the Leibniz rule, which we'll state in general as follows:
$\pderiv{}{z} \int_{a(z)}^{b(z)} f(x, z) dx = \int_{a(z)}^{b(z)}
\pderiv{}{z} f(x, z) dx + \pderiv{b}{z} f(b, z) - \pderiv{a}{z} f(a, z)$.

$$\dot{x}(t) & = A(t) \Phi(t, t_0) x_0 + \int_{t_0}^t
\pderiv{}{t} \parens{\Phi(t, \tau)B(\tau)u(\tau)} d\tau +
\pderiv{t}{t}\parens{\Phi(t, t)B(t)u(t)} - \pderiv{t_0}{t}\parens{...}
\\ & = A(t)\Phi(t, t_0)x_0 + \int_{t_0}^t A(t)\Phi(t,\tau)B(\tau)u(\tau)d\tau + B(t)u(t)
\\ & = A(t)\Phi(t, t_0) x_0 + A(t)\int_{t_0}^t \Phi(t, \tau)B(\tau)
u(\tau) d\tau + B(t) u(t)
\\ & = A(t)\parens{\Phi(t, t_0) x_0 + \int_{t_0}^t \Phi(t, \tau)B(\tau)
u(\tau) d\tau} + B(t) u(t)
$$

$x(t) = \Phi(t,t_0)x_0 + \int_{t_0}^t \Phi(t,\tau)B(\tau)u(\tau) d\tau$ is
good to remember.

Not surprisingly, it depends on the input function over an interval of
time.

The differential equation is changing over time; therefore the system
itself is time-varying. There's no way in general that it will be
time-invariant, since the equation that defines its evolution is
changing. You test time-invariance or time-variance through the response
map. But is it
You have the state transition function, so we can compute the +response function (recall: readout map composed with the state transition +function) and ask if this is a linear map.
EE 221A: Linear System Theory
=============================
September 18, 2012
------------------

Today:

* proof of the existence and uniqueness theorem.
* [if time] introduction to dynamical systems.

First couple of weeks of review to build up the basic concepts that we'll
be drawing upon throughout the course. Either today or Thursday we will
launch into linear system theory.

Let's recall where we were last time. We had the fundamental theorem of
differential equations, which says the following: given a differential
equation $\dot{x} = f(x,t)$ with initial condition $x(t_0) = x_0$, where
$x(t) \in \Re^n$, etc., if $f(\cdot, t)$ is Lipschitz continuous and $f(x,
\cdot)$ is piecewise continuous, then there exists a unique solution to the
differential equation / initial condition pair (some function $\phi(t)$)
wherever you can take the derivative (it may not be differentiable
everywhere: it loses differentiability at the points where discontinuities
exist).

We spent quite a lot of time discussing Lipschitz continuity. The job is
usually to test both conditions; the first one requires work. We described
a popular candidate function by looking at the mean value theorem and
applying it to $f$: a norm of the Jacobian provides a candidate Lipschitz
function, if it works.

We also described local Lipschitz continuity, and often, when using a norm
of the Jacobian, that's fairly easy to show.

Important point to recall: a norm of the Jacobian of $f$ provides a
candidate Lipschitz function.

Another important thing to say here is that we can use any norm we want, so
we can be creative in our choice of norm when looking for a better bound.

We started our proof last time, and we talked a little about the structure
of the proof. We are going to proceed by constructing a sequence of
functions, then show (1) that it converges to a solution, and (2) that the
solution is unique.
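A quick illustration (mine, not the professor's) of "a norm of the Jacobian
provides a candidate Lipschitz function": for $f(x) = \sin x$ the Jacobian
is $\cos x$, so $L = \sup_x |\cos x| = 1$ works globally, which we can
spot-check on random pairs of points.

```python
import math
import random

def f(x):
    return math.sin(x)      # f'(x) = cos(x), so |f'| <= 1 everywhere

L = 1.0                     # sup-norm of the Jacobian: candidate constant

random.seed(0)
pairs = [(random.uniform(-10, 10), random.uniform(-10, 10))
         for _ in range(1000)]
# Mean value theorem: |f(a) - f(b)| <= L |a - b| for every pair
ok = all(abs(f(a) - f(b)) <= L * abs(a - b) + 1e-12 for a, b in pairs)
print(ok)   # True
```

Of course the random check proves nothing; the mean value theorem is what
actually turns the Jacobian bound into a Lipschitz constant.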
Proof of Existence
------------------
We are going to construct this sequence of functions as follows:
$x_{m+1}(t) = x_0 + \int_0^t f(x_m(\tau), \tau) d\tau$. Here we're dealing
with an arbitrary interval from $t_1$ to $t_2$, with $0 \in [t_1, t_2]$. We
want to show that this sequence is a Cauchy sequence, and we're going to
rely on our knowledge that the space these functions are defined in is a
Banach space (hence the sequence converges to something in the space).

We have to put a norm