
How to Speak Machine: Computational Thinking for the Rest of Us (2019)

John Maeda


▪ I have always been interested in how the computer (which is an object of great complexity) and design (which is traditionally about simplicity) tend to mix poorly together like the proverbial “oil and water.”

▪ Subsequently, that blog turned into a book titled The Laws of Simplicity, which was rapidly translated into fourteen languages.

▪ I wanted to somehow get closer to the essence of design and move away from computers the way I had done once prior in my early career—back in the nineties, when I was a practicing graphic designer in Japan with a mismatched MIT pedigree.

▪ I’d somehow managed to escape the “T” (Technology) of MIT as an engineering student, and then made a U-turn into the thick of it as an MIT Media Lab professor leading the intersection of design and advanced computing technologies.

▪ These included, for instance, appearing before Congress to encourage putting an “A,” for Art, in STEM education, turning it into STEAM, and later launching the “Design in Tech Reports” while working in Silicon Valley at venture capital firm Kleiner Perkins.

▪ So in 2019, when a popular business magazine announced in a headline that I’d said, “In reality, design is not that important,” it did not come as a surprise to me that I would be dragged through the internet mud by all lovers of design.

▪ My words had been taken out of context from a twenty-minute phone interview—and, frankly, when the article came out, I immediately admired the editorial team’s choice of headline as brilliant clickbait.

▪ What stung the most was knowing that few people had really read the entire interview, so the headline was all that stuck in anyone’s mind. To them, I had completely demeaned the work designers do every day. So I needed to be punished.

▪ Here’s the reality: I honestly don’t believe that design is the most important matter today. Instead, I believe we should focus first on understanding computation.

▪ Because when we combine design with computation, a kind of magic results; when we combine business with computation, great financial opportunities can emerge.

▪ What is computation? That’s the question I would get asked anytime I stepped off the MIT campus when I was in my twenties and thirties, and then whenever I left any technology company I worked with in my forties and fifties.

▪ Computation is an invisible, alien universe that is infinitely large and infinitesimally detailed. It’s a kind of raw material that doesn’t obey the laws of physics, and it’s what powers the internet at a level that far transcends the power of electricity.

▪ It’s a ubiquitous medium that experienced software developers and the tech industry control to a degree that threatens the sovereignty of existing nation-states.

▪ Computation is not something you can fully grasp after training in a “learn to code” boot camp, where the mechanics of programming can be easily learned. It’s more like a foreign country with its own culture, its own set of problems, and its own language—but where knowing the language is not enough, let alone if you have only a minimal understanding of it.

▪ There’s been a conscious push in all countries to promote a greater understanding of how computers and the internet work. However, by the time a technology-centered educational program is launched, it is already outdated.

▪ That’s because the pace of progress in computing hasn’t moved at the speed of humans—it’s been moving at the exponential speed of the internet’s evolution.

▪ Back in 1999, when a BBC interviewer made a dismissive comment about the internet, the late musician David Bowie presciently offered an alternate interpretation: “It’s an alien life form . . . and it’s just landed here.”

▪ Since the landing of this alien life form, the world has not been the same—and design as it has conventionally been defined by the Temple of Design no longer feels to me like the foundational language of the products and services worlds.

▪ Instead, it’s ruled by new laws that are governed by the rising Temple of Tech in a way that intrinsically excludes folks who are less technically literate.

▪ A new form of design has emerged: computational design. This kind of design has less to do with the paper, cotton, ink, or steel that we use in everything we physically craft in the real world, and instead has more to do with the bytes, pixels, voice, and AI that we use in everything we virtually craft in the digital world powered by new computing technologies.

▪ It’s the text bubble that pops up on your screen with a message from your loved one, or the perfect photo you shot in the cold rain with your hands trembling and yet which came out perfectly, or the friendly “Here you go, John” that you hear when you ask your smart stereo to play your favorite Bowie tunes.

▪ These new kinds of interactions with our increasingly intelligent devices and surroundings require a fundamental understanding of how computing works to maximize what we can make.

▪ For much of the twentieth century, computation by itself was useful only to the military to calculate missile trajectories. But in the twenty-first century, it is design that has made computation relevant to business and, more so, to our everyday lives.

▪ Design matters a lot when it is leveraged with a deep understanding of computation and the unique set of possibilities it brings.

▪ But achieving an intuitive understanding of an invisible alien universe doesn’t come so easily.

▪ This book is the result of a six-year journey I have traveled away from “pure” design and into the heart of what is impacting design the most: computation.

▪ I will take you on a tour through the minds and cultures of computing machines from how they once existed in a simpler form to how they’ve evolved into the much more complex forms we know today.

▪ Computation brings its share of problems, but most of them have to do with us—how we use it—rather than with the underlying technology itself.

▪ We’ve entered an era in which the computing machines we use today are powered not just by electricity and mathematics, but by our every action and with insights gained in real time as we use them.

▪ I have always believed that being curious is better than being afraid—for when we are curious we get inventive, whereas when we are afraid we get destructive.

▪ But to be honest, I’m just like anyone else—tired, a little lazy, and all too eager to wait for a hero to rise who will protect me and fight for all of us.

▪ There’s a common lack of understanding of what computation fundamentally can and cannot do.

▪ There’s one thing that a computer can do better than any human, animal, or machine in the real world: repetition. It doesn’t get bored and complain if you tell it to count from one to a thousand, or even to a billion.

▪ But a computer running a program, if left powered up, can sit in a loop and run forever, never losing energy or enthusiasm. It’s a metamechanical machine that never experiences surface friction and is never subject to the forces of gravity like a real mechanical machine—so it runs in complete perfection.

▪ This property is the first thing that sets computational machines apart from the living, tiring, creaky, squeaky world.
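
As a small illustration of that tireless repetition, here is a minimal Python sketch (the counting target is arbitrary):

```python
# A computer repeats without boredom: tell it to count and it simply counts.
total = 0
for i in range(1, 1_000_001):   # one to a million; make it a billion if you have a minute
    total += i
print(total)                    # 500000500000

# Left powered up, a program can also loop forever without tiring:
# while True:
#     pass                      # runs "in complete perfection" until stopped or unplugged
```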

▪ There was only one font, and text was only uppercase. You would “navigate” the computer screen by pressing the cursor keys: up, down, left, right. And any functionality that you wanted it to have, you would need to type in a program to create it yourself or type it in line by line from a book or magazine.

▪ Make no mistake: the machine speaks a foreign language, with its own vocabulary and grammar, and it will take you more than reading this short book to speak machine fluently.

▪ I can, however, help you get the gist of computation, and to do so I’ll take an important detour into the essence of what software is all about.

▪ In the old days, he said, you could count on shipping an amazing hardware device that would come packaged with a CD-ROM that you’d easily toss into the trash. But the day was soon coming when you’d be doing the opposite: coveting the software and tossing out the hardware instead. In essence, he was describing the era we live in today, in which there are apps for everything and we depend more on the software than the actual physical devices in which they function.

▪ On the surface, most software has a visual representation that we mistakenly identify as the actual computational machine. What you see on-screen with an app is closer to the fast-food drive-thru sign that you drive up to and speak into—which has nothing inside it except a tethered connection to a bustling food factory just a few car lengths away.

▪ Just as you can learn nothing about how the actual restaurant works by taking apart the drive-thru’s microphone box, the pixels on a computer screen don’t tell us anything about the computational machine that it is connected to.

▪ That’s because when it comes to something that exists in the physical world, you can touch and understand it, to a degree, when you crack it open. Machines in the real world are made up of wires, gears, and hoses that kind of make sense, whereas machines in the digital world are made up of “bits” or “zeroes and ones,” which are completely invisible.

▪ Is software visible? Yes and no. On the one hand, program code is what lies at the heart of software and you can read it, but that’s like confusing the recipe for cake with the cake itself.

▪ The software is what comes alive inside the machine due to the program codes—it’s the cake, not the recipe. This can be a difficult conceptual leap.

▪ Understanding the distinction between software running on a computer within its “mind” versus the actual program code being fed into the computer is useful because it lets you conceptualize what is really happening with computation.

▪ Computing machines can freely imagine within “cyberspace,” a term coined by William Gibson in the novel Neuromancer in 1984, the same year I started at MIT:

▪ Cyberspace, a consensual hallucination, experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts . . . Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights receding . . .

▪ There’s a nether universe that computational machines can easily tap into that precedes the internet, and now because of the internet and the ubiquity of network-enabled devices, that universe has expanded wildly beyond what any of the lucky nerds like me who were there at the beginning could ever have expected.

▪ As you can tell, I’m unusually passionate about this subject, and quite eager for you to understand it with me.

▪ The first computers were not machines, but humans who worked with numbers—a definition that goes back to 1613, when English author Richard Braithwaite described “the best arithmetician that ever breathed” as “the truest computer of times.”

▪ A few centuries later, the 1895 Century Dictionary defined “computer” as follows:

One who computes; a reckoner; a calculator; specifically, one whose occupation is to make arithmetical calculations for mathematicians, astronomers, geodesists, etc. Also spelled computor.

▪ At the beginning and well into the middle of the twentieth century, the word “computer” referred to a person who worked with pencil and paper. There might not have been many such human computers if the Great Depression hadn’t hit the United States.

▪ Try to imagine many rooms filled with hundreds of people with a penchant for doing math, all performing calculations with pencil and paper. You can imagine how bored these people must have been from time to time, and also how they would have needed breaks to eat or use the bathroom or just go home for the evening.

▪ Remember, too, that humans make mistakes sometimes—so someone who showed up to work late after partying too much the night prior might have made a miscalculation or two that day. Put most bluntly, in comparison with the computers we use today, the human computers were comparatively slow, at times inconsistent, and would make occasional mistakes that the digital computer of today would never make.

▪ The idea for the Turing machine arose from Dr. Turing’s seminal 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” which describes a way to use the basic two acts of writing and reading numbers on a long tape of paper, along with the ability to write or read from anywhere along that tape of paper, as a means to describe a working “computing machine.”

▪ Although an actual computing machine could not be built with technology available back then, Turing had invented the ideas that underlie all modern computers. He claimed that such a machine could universally enable any calculation to be performed by storing the programming codes onto the processing tape itself.

▪ This is exactly how all computers work today: the memory that a computer uses to make calculations happen is also used to store the computer codes.
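
To make the tape idea concrete, here is a toy sketch in Python, not Turing's own construction: a head reads and writes symbols on a tape according to a small rule table. The particular rules, which add one to a binary number, are chosen only for illustration.

```python
# A toy "machine" in the spirit of Turing's tape: read a symbol, write a symbol,
# move left or right, repeat. The rule table below increments a binary number.
tape = list("1011")          # the number 11 in binary, written on the tape
head = len(tape) - 1         # start at the rightmost digit
state = "carry"

# (state, symbol) -> (write, move, next_state)
rules = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry is 0, carry continues left
    ("carry", "0"): ("1",  0, "halt"),    # 0 plus carry is 1, done
    ("carry", " "): ("1",  0, "halt"),    # ran off the left edge: write a new 1
}

while state != "halt":
    symbol = tape[head] if head >= 0 else " "
    write, move, state = rules[(state, symbol)]
    if head < 0:
        tape.insert(0, write)
        head = 0
    else:
        tape[head] = write
    head += move

print("".join(tape))  # 1100, i.e., 12 in binary
```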

▪ The prevailing wisdom of the day was that the important work of the ENIAC was the creation of the hardware—that credit being owned by ENIAC inventors John Mauchly and John Presper Eckert. The perceived “lesser” act of programming the computer—performed by a primary team of human computers comprising Frances Elizabeth Snyder Holberton, Frances Bilas Spence, Ruth Lichterman Teitelbaum, Jean Jennings Bartik, Kathleen McNulty Mauchly Antonelli, and Marlyn Wescoff Meltzer—turned out to be essential and vital to the project, and yet the women computers of ENIAC were long uncredited.

▪ Around this time, Gordon Moore, a pioneering engineer in the emerging semiconductor industry, predicted that computing power would double approximately every year, and the so-called Moore’s law was born. And a short two decades later I would be the lucky recipient of a degree at MIT in the field that Hamilton had named, but with computers having become by then many thousands of times more powerful—Moore’s exponential prediction turned out to be right.

▪ Relatedly, a physical metaphor that comes close is a Russian matryoshka doll—a doll that has a smaller but identical version of itself nested inside it, and on and on. That’s because we can say that a matryoshka doll is made from another matryoshka doll, and so on and so forth. But even with the world record matryoshka doll set, you will run out of dolls by the time you pull out the fifty-first one; when it comes to computational matryoshka dolls, there are no specific limits to how deeply they can be nested within each other unless a “base case” is set by the programmer explicitly.

▪ They introduced us to a new concept called “recursion”—which is one of those ideas that you don’t naturally come across in your daily life, because it doesn’t fit how the physical world around us usually works. It’s not just a little strange—it’s extremely strange.

▪ When first learning how to draw a tree, you see that this is reflected in nature. A tree starts with a vertical line that has a few lines popping out of its top. To proceed to draw the tree further, you take each of the lines at the top and repeat the act of adding more lines to pop out of each top. And so forth. In the end you get a tree with a lot of subbranches created by simply using the same method you started with. In other words, a tree branch is composed of tree branches.

▪ You see this played out in the exact opposite direction from the sky into the ground underneath a tree with its root system—so nature paints with recursion quietly and obviously.

▪ Recursion is stylistically different in nature where you define the task to be performed in terms of itself—like laying out the steps to make a large pot of curry on an assembly line where one key ingredient is a smaller pot of curry. You end up with an assembly line that vanishes inside the process of making the smaller pot of curry, which in turn will require a smaller pot of curry, and so forth—you simply disappear inside the thing you’re creating. It’s not a concept for the fainthearted.

▪ The central idea is to express the definition of something with a definition itself, which is a vaguely imaginable idea that doesn’t have a home in the real world but is completely native in the realm of cyberspace.

▪ When programmers say “code is poetry,” they really mean it. Recursion is an unusually compact way to express complex ideas that can be infinite in nature and are deeply paradoxical, like what happens when you try to unpack “GNU.”

▪ Recursion (rĭ-kûr’-zhən) noun. If you still don’t get it, see recursion.
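
A minimal sketch of recursion in Python, following the tree-drawing description above: a branch is defined in terms of smaller branches, and the explicit base case is what keeps the nesting dolls from going on forever.

```python
# A branch is made of smaller branches: the definition refers to itself.
# The "base case" (depth == 0) is the programmer's explicit stopping point.
def draw_branch(depth, indent=0):
    if depth == 0:                           # base case: the smallest doll
        return
    print(" " * indent + "|")                # draw this branch segment
    for _ in range(2):                       # two sub-branches pop out of the top...
        draw_branch(depth - 1, indent + 2)   # ...each drawn the same way

draw_branch(3)
```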

▪ Imagine a job where every few minutes you’re likely to be told, by a computer, that you did something wrong.

▪ These fall into three categories: avoidable (“dumb”) errors, less avoidable errors, and unavoidable errors.

▪ The counter runs unimpeded—much like if you were behind the wheel of a fast car on a road that extended forever and hit the gas pedal. But what if there were a big rock a few miles down the road that—with your stereo blasting and your adrenaline on fire—you could easily fail to notice while speeding? That’s right. Blam! Your car will likely wipe out, and hopefully you’ll have your seat belt on.

▪ The computational process, once initiated, will run at top speed from the very moment it comes alive. And if it were to hit some kind of error, it would immediately stop. The entire world it lived within would vanish in that same instant too.
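
A hedged illustration of that sudden halt: a loop speeding along with a "rock" placed a little way down the road. The divide-by-zero below is just one arbitrary stand-in for an error.

```python
# The counter speeds along unimpeded... until it hits the rock.
for mile in range(1, 1_000_000):
    obstacle = 500 - mile        # a "rock" sits at mile 500
    speed = 100 / obstacle       # at mile 500 this divides by zero
# ZeroDivisionError: the program crashes here, and everything it was doing stops.
```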

▪ If computer programmers became uncontrollably angry each time a piece of software crashed, they wouldn’t get any work done. Because software crashes a lot, you’ll tend to find that people who write software professionally have an unusually high tolerance for catastrophes while also having little tolerance for minor mistakes that could easily be avoided.

▪ The more complex the program or computing system upon which it is running, the more things that can go wrong.

▪ In the early days of computing, many software systems and the hardware on which they ran had kinks in them because they were like experimental aircraft—so many errors were in the “unavoidable” category.

▪ Since it’s so hard to find bugs in software, there’s a strong bias for programmers working collaboratively on a project to stigmatize the easily avoidable “dumb” errors within their team.

▪ Fortunately, today there are a variety of systems and technologies that cut down on software bugs, but it’s simply humanly impossible to make bug-free software.

▪ And it’s important to remember that not all bugs are fatal—many kinds of bugs can live within the software and have no obvious impact on operations.

▪ When it halts, if there’s a software person who can find the bug and make the necessary repair, you’ll be back and looping away.

▪ The harder things to find are the bugs that don’t immediately present themselves as problematic but are quietly rattling away to potentially cause trouble in ways that you will have difficulty diagnosing.

▪ Don’t forget that what you see displayed on a computer screen is only a tiny fraction of what’s actually happening within the invisible world of computing machinery. There are endless streams of digital information that are hurriedly getting processed, both elegantly and inelegantly, behind the scenes powered by loops, and some recursive loops too.

▪ If all this feels a bit difficult to conceptualize, it’s because it illustrates the difference between exponential thinking and linear thinking—linear thinking is how we’re wired.

▪ Exponential growth is native to how the computer works. This is how the amount of computing memory available has evolved. The same can be said about processing power. So when you hear people in Silicon Valley talk about the future, it’s important to remember that they’re not talking about a future that is incrementally different year after year.

▪ They’re constantly on the lookout for exponential leaps—knowing exactly how to take advantage of them because of their fluency in speaking machine.

▪ Let’s next dip into how something so seemingly boring as speaking loops can make exponential sorcery happen within the computer.
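
A tiny sketch of that sorcery in Python: doubling inside a loop leaves linear counting in the dust almost immediately (the number of iterations is arbitrary).

```python
# Linear thinking adds; the machine's exponential growth doubles.
linear, exponential = 0, 1
for year in range(1, 31):
    linear += 1            # one step per year
    exponential *= 2       # Moore's-law-style doubling per year
    print(year, linear, exponential)
# After 30 doublings the exponential column has passed one billion (2**30 = 1073741824)
# while the linear column has only reached 30.
```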

▪ In short, it’s a means to open up spaces that are much larger than the ones that sit in front of us or surround us at the physical scale of our neighborhoods or entire cities. There are literally no limits to how far each dimension can extend, and no limits to how many dimensions can be conjured up with further nesting of loops. This should feel unnatural to those of us who live in the analog world, but it’s just another day inside the computational universe.

▪ Computation has a unique affinity for infinity, and for things that can be allowed to continue forever, which take our normal ideas of scale—big or small—and easily mess with our mind. You may think you are not in control, but you are in complete control when you write the codes and construct the loops to your liking.

▪ There is a certain comfort as you come to realize that, with eventual ease, you can craft infinitely large systems with infinitely fine details. It’s trivial to define a computational means to traverse across a billion users and analyze their individual minutiae using a couple of well-crafted loops.

▪ There’s no real upper bound, nor is there any limit to how fine-grained any computational process can be implemented by whoever constructs the codes. There’s no need to choose “how broad” or “how deep,” because the answer can simply be “both.”
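
A hedged sketch of what a couple of well-crafted loops can look like in shape; the user and detail counts below are placeholders, scaled down so the example runs instantly.

```python
# One loop opens a dimension; nesting a second loop inside opens another.
# Scale the ranges up and the same shape traverses a billion users and
# every bit of minutiae about each of them.
users = range(3)             # stand-in for a billion users
details = range(4)           # stand-in for each user's minutiae

for user in users:           # the "broad" dimension
    for detail in details:   # the "deep" dimension, nested inside
        print(f"user {user}, detail {detail}")
```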

▪ “Complicated” means something that is knowable, and although it may take time, it’s wholly possible to understand. You might just need some good old-fashioned brute force to do it—you’ll get tired in the process, but it’s doable.

▪ A complicated machine (think of the printing press or digital delivery service that brought you this book) is understandable.

▪ “Complex” means something that is not knowable, and even brute force can’t easily tackle it. I say easily because in the twenty-first century the computing power we have access to is absurdly amazing, but it still can’t solve complex problems—yet. A complex machine (think of any human being you have a relationship with) is not understandable.

▪ I always find it necessary to keep this distinction in the foreground, because how we make systems out of computation is generally complicated, but how we humans relate to the computational systems we make has complex effects that we’re still figuring out.

▪ If you think back to the Powers of Ten and loops within loops and the infinite depth of fractals, and the ability to control time and space at the scale of infinity within the precision of fractions of an angstrom, you can see how a person who lives and breathes this invisible world of absolute power and who controls every minute of the day might lose their connection with reality.

▪ The actual machinery of computing is complicated yet understandable; the social impact of this complicated machinery becomes complex when it involves as many humans as it does today.

▪ Complicated situations are ultimately resolvable, but complex situations are entirely different—though we should still try to understand them because they impact ourselves and our fellow human beings.

▪ Keep in mind that composing lines of code is an extremely creative task that intrinsically involves generosity through sharing skills with others—it does not make you a bad person.

▪ Or consider the Manhattan Project scientists who spent the rest of their lives weighing the balance between having invented nuclear weapons to destroy lives versus nuclear medicine to save lives.

▪ But now that computing impacts virtually everyone at the ultrafine level of their daily micromovements and at the scale of the entire world, it is more urgent than ever to know how to speak both machine and humanism.

▪ In 1994, there were 2,738 websites up by the middle of the year, jumping to 10,000 by the end of the year. In 2019, it’s estimated that there are well over a billion websites, with that number continuing to grow.

▪ What would be unthinkable for a human being to do—like searching the entire internet for the best “confused cat” article—is relatively easy to run as computer code. Making that search run quickly is the art and science of it all—and that’s part of the reason why Google is worth so much money as a company—but the logic should make sense to you.

▪ Smaller computers ask bigger computers to do favors for them all the time, like when you try to do something on your mobile device or from a home assistant appliance. If the computer doesn’t have enough horsepower, it can simply kick it off to the network of computers living out there that we now refer to nebulously as “the cloud.”
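
A minimal sketch of a small computer asking a bigger one for a favor. The endpoint URL and payload below are hypothetical placeholders, not any particular vendor's API.

```python
# A small device punts heavy work to a bigger computer "in the cloud".
import json
import urllib.request

request = urllib.request.Request(
    "https://cloud.example.com/transcribe",             # hypothetical cloud service
    data=json.dumps({"audio_id": "clip-42"}).encode(),  # hypothetical payload
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))   # the cloud's answer comes back over the network
```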

▪ It’s important to remember that there’s not just one cloud out there—most of the major technology companies like Google, Amazon, Apple, Alibaba, and Microsoft have clouds that consist of hundreds of thousands to millions of computers that are fully interconnected.

▪ These clouds require so much electrical power that they are often located near hydroelectric dams or solar farms in nondescript, windowless buildings with row upon row of computers placed densely in racks.

▪ Today we’re at a point when holding any digital device is like grasping the tiny tentacle of an infinitely large cybermachine floating in the cloud that can do unnaturally powerful things.

▪ For example, the primary engine of computational power for Netflix, a video streaming service, is Amazon’s cloud, because it is prohibitively expensive for Netflix to build its own and it is cheaper to rent the power in a competitive market.

▪ It’s a relatively new thing that an entire tech company doesn’t need to be built on top of its own computing power but can instead wholly rent it in a fully flexible, on-demand capacity.

▪ The cloud model represents a fundamental shift in how companies can get built, where the raw materials are all completely ethereal, virtual, and invisible. But that doesn’t mean it’s not comprehensible—it’s just complicated. It’s knowable and learnable. And in the back of our minds we need to be wondering what the future implications might be for servicing an entire race of machines to become better collaborators with each other than we ourselves could ever be. I know that my own uneasiness on this matter has made me resolved to devote the remainder of my life to foster teamwork and to partner with fellow human colleagues, because our computational brethren are beating us, exponentially.

▪ However far modern science and techniques have fallen short of their inherent possibilities, they have taught mankind at least one lesson: nothing is impossible.

▪ When you consider the power of loops that prevent computational machines from ever getting tired while accessing an infinite cloud of capabilities, there’s only one word that can describe these nonliving, pseudo life forms: zombies!

▪ The term “neural network” implies two things if we look at the two words separately. Neural: it’s related to the neurons in a brain.

▪ Network: it’s about the interconnections between the neurons.

▪ There’s no actual computer code when it comes to a neural network—there’s just a black box that learns patterns. Inside the black box is a rough mathematical model for how the neurons in a brain are thought to work electrically, which when stimulated the right way can learn patterns by making its own sparks, connections, and correlations with the raw numerical data that it gets fed.

▪ When talking about this sea change, we tend not to use the term “AI,” because it carries some negative connotations from the past. Instead, we prefer two terms to describe this newer kind of artificial intelligence: “machine learning” (ML) and “deep learning” (DL).

▪ Specifically, deep learning is a technique used in machine learning. The traditional approach to creating AI was to teach a computer how to reason through IF-THEN rules.

▪ Deep learning, on the other hand, uses a model of the brain—neural networks in particular—to teach a computer how to think by observing a desired behavior and learning the skill through analyzing repeated behavioral patterns.

▪ DL wasn’t technically feasible due to the lack of sufficiently large amounts of training data and the gargantuan processing power needed for the computer to become a worthy apprentice. Unsurprisingly, Moore’s law has done the magic of bringing us enough computing power to calculate almost anything.

▪ We no longer need to explicitly teach it because it can just as easily teach itself with whatever data it has on hand, and it can reach out to the cloud when it needs more data to get even smarter.
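
A minimal sketch of learning from repeated patterns: a single artificial neuron in Python nudging its connection weights toward a desired behavior (here, the logical AND of two inputs). Real deep learning stacks many layers of such units, but the principle of adjusting weights from examples rather than writing IF-THEN rules is the same.

```python
# One artificial "neuron": weighted inputs, a threshold, and a rule for
# nudging the weights whenever the output is wrong. No explicit IF-THEN
# rules are written for the task; the behavior is learned from examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

for _ in range(20):                      # repeat over the training data
    for x, target in examples:
        error = target - predict(x)      # how wrong was the neuron?
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] once the pattern is learned
```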

▪ Machine learning expert Andrew Ng describes the problem not as one of questioning whether a computer will eventually become awakened as a superior life form, but instead of: “If you’re trying to understand AI’s near-term impact, don’t think ‘sentience.’ Instead think ‘automation on steroids.’”

▪ For example, prior to 2012, the average error rate for image recognition was 28 percent, and for speech recognition it was 26 percent. After machine learning methods began to take hold, the average error rate for image recognition became 7 percent, and for speech recognition 4 percent. If the cloud continues to absorb more data from our activities, and if the zombies take no lunch breaks while they keep copying all of our moves, we’ll eventually not be able to detect an AI versus a human.

▪ This pattern resembles a computer algorithm discovered by renowned computer scientist Stephen Wolfram called “Rule 30” and draws a surprising link between our organic physical reality and the invisible world of computation.

▪ So the seemingly inhuman algorithms that we code on computers may be more human and natural than we’d like to think.

▪ Does nature speak machine? Or can machines speak nature?
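
A small sketch of Wolfram's Rule 30 in Python: each cell in the next row is determined only by the three cells above it, yet the pattern that unfolds looks surprisingly organic. The grid width and row count are arbitrary.

```python
# Rule 30: the next state of a cell depends only on itself and its two neighbors.
# The eight possible neighborhoods map to new cells via the binary digits of 30.
RULE = 30
WIDTH = 31

row = [0] * WIDTH
row[WIDTH // 2] = 1                     # start with a single live cell in the middle

for _ in range(16):
    print("".join("█" if cell else " " for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```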

▪ Because for most of my life, I loved making things. Period. And I didn’t really have to do any talking about it. In fact, the epitome of this mind-set is one of my most well-known artistic works from my early career,

▪ In the mathematical world, the codependence of life was eloquently expressed as a succinct algorithm in the 1970s called “Conway’s Game of Life.” With “Game of Life” in its title, Conway’s “Life” is immediately misleading, because it brings to mind the better-known family board game invented in 1960, which evokes a much more playful experience than the stark grid of black and white squares upon which Conway’s game is played.
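
A minimal sketch of Conway's rules in Python on a small wraparound grid: a live cell survives with two or three live neighbors, an empty cell comes to life with exactly three, and everything else dies or stays empty. The starting "glider" pattern is just one conventional example.

```python
# Conway's "Life": birth with exactly 3 neighbors, survival with 2 or 3.
SIZE = 8
glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}   # a classic self-propelling pattern

def step(live):
    next_live = set()
    for r in range(SIZE):
        for c in range(SIZE):
            neighbors = sum(
                ((r + dr) % SIZE, (c + dc) % SIZE) in live
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if neighbors == 3 or (neighbors == 2 and (r, c) in live):
                next_live.add((r, c))
    return next_live

cells = glider
for _ in range(4):   # watch the pattern crawl across the grid, generation by generation
    print("\n".join("".join("█" if (r, c) in cells else "." for c in range(SIZE)) for r in range(SIZE)))
    print()
    cells = step(cells)
```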

▪ Vernor Vinge wrote a paper in which, in the first line of the abstract, he predicted, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

▪ Vinge coined the term “the Singularity” to describe the moment of “imminent creation by technology of entities with greater than human intelligence.”

▪ He went even further and predicted that by 2045 there would be more computing power than all of the human minds combined on earth.

▪ There is even a Singularity University in Silicon Valley, cofounded by Kurzweil, to investigate the impending future when machines will eventually surpass us. Given that in 2015 a mouse brain was reported to have been simulated on a supercomputer, as Kurzweil predicted would happen, we can wonder what might come of Kurzweil’s prediction for 2023.

▪ The millions of invisible computational loops running somewhere out there in the cloud—without ever slowing down or getting tired despite running at close to infinite scales and infinite levels of detail—have been compounding in strength and capability. The technology industry and investing world have endeavored to fuel the growth of these magnificently powerful systems that no longer resemble the cold, calculating material they’re made of, as they now draw upon the observations of artists and their deep understanding of humans and their environments.

▪ Today I feel an imperative in our need to rethink the implications of computation in the design of new products and services, because we’re at a turning point that will irreversibly impact the future of humankind.

▪ These terms all essentially describe a computational philosophy dedicated to shipping an incomplete product followed by as many revisions as possible, instead of trying to ship a product that is complete.

▪ This is the same way you’d build any normal physical product like a car. But it’s not possible to make revisions to a car after it’s been delivered, unless you issue a product recall, which is both cumbersome and damaging to a company’s reputation.

▪ In the past and even in the present day, assembly lines are designed with the singular goal of reducing the additional cost of each unit, or marginal cost, in order to achieve greater economies of scale and increased profit per unit.

▪ Launching a digital product falls under the once mythical business categorization of “marginal cost close to zero”—which is an exceptional opportunity, because it means there’s no downside to building and distributing hundreds or thousands of extra copies of a digital product. In addition, you never have to deal with carrying inventory, managing its movements, and all the associated costs.

▪ For those with a traditional manufacturing background, it is a considerable conceptual stretch to imagine that such a plum business model could ever exist—but it’s easy to do so when considering what you now know about computation as a raw material that defies our laws of physics.

▪ This unique property of computational products means not only that their production and distribution costs are financially advantageous, but that product development costs can be significantly lowered by making a choice to never ship a finished product. You can always “replace” the product digitally with a brand-new and incrementally improved one, and remove all the financial risk from investing heavily all the way to a finished product.

▪ What’s more, the ability to remotely monitor how people are using these products means their makers can easily nudge the product’s design toward whatever will work best for the end user.

▪ In a world that’s made purely of computation, it’s trivial to ship a change so fast that you often don’t notice that it’s happened. It should be no surprise that quickness is a fundamental aspect of quality in the computational age—“good” is defined by how fast a new feature can get to you and how quickly you can get it to work for you. So especially when we consider the doubling effect of Moore’s law, we should expect all digital products to be twice as fast as they were a little over a year ago and to cost the same, if not even less. Consider how on any given day you are explicitly upgrading your software to “the next version” or waking up to a new version of the system that just last night might have been something quite different.

▪ The computational products you’re most often using become cumulatively better each day, but you might never have noticed the improvements being made because the many incremental changes are compounded over time.

▪ The cloud has made the pursuit of timeless design irrelevant—what matters instead is to be timely. What matters more is to be evolving in real time and never stop. And to seek bliss and satisfaction in a state of always being obsolete.

▪ But what makes software different from the planned obsolescence of automobiles is the computational medium’s intrinsic property of being “always obsolete”—just one update away, with minimal environmental impact, from being transformed into the newer, better version. In fact, the standard Silicon Valley–style expectation is for software products to constantly get improved and upgraded while you’re asleep or even while you’re using them.

▪ The Temple of Design doesn’t rule this century.

▪ If design is reasoned intention, then we need to change our reasoning skills.

▪ But here’s the problem: the traditional world of design doesn’t live in the cloud, but in your beautiful living room. It’s the antique chair that your grandmother gave you. It’s the Swedish flatware that you got as a wedding gift. It’s the French leather case that rests atop the aluminum table you spent more than a few months mulling over. It’s the fashion magazine you flip through while sitting on Grandma’s chair telling you that your striped outfit needs to be thrown into the fire because artist Yayoi Kusama’s polka dots are now all the rage. This isn’t the world of the cloud at all, and yet it’s what we love most about design.

▪ Museums and galleries are filled with these objects, experiences, and highly opinionated prompts that we treasure in a way that can best be described as worshipping the Temple of Design.

▪ The Temple of Design isn’t an actual place, but more of a standard for knowing what great design is. In harsh, concrete terms, it is a cartel run by the museums, galleries, fashion and beauty industry, and arts education institutions worldwide that decree what great design is all about.

▪ Which is . . . what they say is great. And if you can’t tell it is great, then, well, you aren’t a designer. Back when I was trying to figure out design, I fell prey to this conclusion that I wasn’t a designer despite my freshly minted MIT degree, so I went in search of the Temple of Design by simply going to art school for a degree, and then much later actually running one of them.

▪ I can save you some time by informing you that becoming a valid member of the Temple of Design has just two requirements: 1) you need to understand the history of art and design, and 2) you need to think and work like a designer. The former requirement can be met if you read a lot, visit many museums and libraries, and simply put in a few thousand hours of learning. It is painless and eminently pleasurable.

▪ But to actually become a designer, you’ll need to actually design a few things with your own hands. You’re going to make some horrible things, which you won’t fully realize until a trained designer looks at what you’ve done.

▪ Learning the minimal basics of design requires only tens of hours of practice, but being able to excel and make your way as a real-deal designer requires the proverbial ten thousand hours of deliberate practice that anoint you as classically trained.

▪ The Temple of Design has long been able to dictate what making style is all about, because making anything for manufacture has always been a capital-intensive task. The question of what would go to production rested on the knowledge, and network, controlled by the tastemaker hierarchy.

▪ But this reliance on the rulings of a privileged few has long been under siege by the data-gathering and informal networks of rationally minded, non–Temple of Design acolytes.

▪ And the financial and influence capital made available in Silicon Valley, combined with the computational power now available to anybody, means that rationality can prevail and quality design can be had by all for an exceedingly low price—as long as it’s delivered on an electronic device.

▪ We live in an age when you never have to be “out of style” and can always be “in style” because you can always be equipped with the latest and greatest computational product designs—which you can now, in turn, play a key role in helping to determine.

▪ And nothing the Temple of Design can say or do really impacts what happens with computational products today, because, frankly, the Temple of Tech has sped past it in relevance at Moorean speed.

▪ The doctrine of the Bauhaus school was developed in response to the industrial revolution and the newfound ability to manufacture products using machinery and assembly lines in factories. Affordable appliances could finally be produced for the masses, but they were often difficult to use and didn’t fit the décor of consumers’ living spaces. So in the early 1900s the Bauhaus education program introduced superior ways to make products that were usable, desirable, and affordable—but the resulting work was not immediately met with popular acceptance.

▪ Micro-improvements for computational products are constantly refreshed on a schedule as rapid as by-the-second, by-the-day, or by-the-month, where the metric of quality is far more about how often it changes than about maintaining an unchanging product that pretends to be timelessly perfect. This runs counter to the ways of the Temple of Design and its tradition of laboring over decades to consciously evolve what should end up in the history of made objects as “finished” pieces to be worshipped forever.

▪ Instead, today we live in a real-time reality grounded in computational power fed by the clouds that have cast a shadow over the golden facades of the Temple of Design.

▪ We need to fundamentally renounce the traditional notion that design ought to aspire to completeness.

▪ So the new definition of quality is the opposite of the Temple of Design’s: where the Temple of Design prizes a finished product painstakingly crafted with integrity, the Temple of Tech defines quality as an unfinished product flung out into the world and later modified by observing how it survives in the wild.

▪ It’s the opposite of betting big on a brand-name designer vetted by the Temple of Design to get it heroically just right with a grand ceremonial debut. And instead it’s about gathering a team of talented nerds who can take full advantage of an alienlike material with new properties that can transform business.

▪ It requires a seismic shift toward an attitude of being “lean” (eliminating waste and favoring experimentation over perfection) and “agile” (flexibly responding to customers’ changing requirements). Or, to put it in less techie terms, quality is about proudly embracing the attitude of working incrementally and completely underwhelmingly—to send oneself on a never-ending journey of making products that can never make it into the collection of the Victoria and Albert Museum.

▪ For a computational product designer, waking up each morning and asking yourself how you can lower your high standards is an odd way to get started.

▪ Your job as an MVP maker is to reduce your exquisitely grand idea to its most bare-bones possible form, a form that will only vaguely resemble your original vision.

▪ Your goal is to create what is perfectly appropriate for a low-cost prototype—which no self-respecting perfectionist could ever tolerate showing the public.

▪ The last thing you want is to experience the utmost embarrassment when a Temple of Design colleague tells you that you’ve made . . . poop.

▪ But the beauty of sharing an incomplete product with others is the opportunity to share it with folks who are unlike yourself.

▪ That becomes possible by virtue of network connectivity, and the fact that software can be beamed to most any device in the world.

▪ You don’t need to guess anymore, because the answers are out there and immediately available to you by gathering reactions from all over the world.

▪ The beauty of delivering unfinished and incomplete products is that you can always improve them later.

▪ There isn’t a need to finish them right now—or ever.

▪ Look at the internet. Is it finished? Not at all. It keeps evolving with new technologies and advancements coming out practically every day.

▪ Consider how, between the eighties and nineties, digital design underwent a shift from print to internet and a new kind of graphic design job emerged: website creator. Unlike designing a poster or book to a finished outcome, a website was designed and delivered—but then it wasn’t done. There would always be a need to modify or extend it, because Moore’s law was happening.

▪ So it fundamentally changed the idea of visual design as something you could deliver to perfection—it was as if the paper that the final designs were being printed upon would deteriorate just at the moment of publication.

▪ Naturally, the Temple of Design railed against Web design for this very reason—because it prevented their high standards of perfection from ever being achieved.

▪ So the simple reason why there aren’t many websites in museum collections today is that technology is in constant flux, and many art and design experts are waiting for it to . . . stop.

▪ For now, the new kind of product designer learns to become comfortable with the fact that their work might not make it into the history books or be displayed right next to a beautiful chair in an elite museum collection.

▪ In the old days, you were expected to know the correct answer, dub it “finished,” and simply walk away.

▪ Now, you’re open to achieving a more perfect understanding rather than a more perfect product. You’ve come to know that the former is what makes for a truly timeless design in the end because of your willingness to tweak and improve on your formula.

▪ Endup tech companies are often overrun with what is called “technical debt,” which has to do with how software gets built. If the software is built quickly and without an eye to the future, then it accrues a kind of debt that becomes evident when a newer piece of software starts to depend upon an older layer of software that wasn’t designed to last forever.

▪ This happens all the time in the physical world—for instance, when a “startup” bridge is built to span a river in a fast and cheap manner. Before long, this bridge will have become a key piece of infrastructure for a transportation business that hauls goods back and forth over the river.

▪ Because it would be too disruptive and expensive to upgrade the bridge to allow for more weight or to add a few new lanes to alleviate traffic, the problem is one that its users and operators need to simply deal with.

▪ The bridge carries technical debt that could have possibly been resolved if its builders had the time to engineer it to carry a heavier load or to prepare it to easily add extra lanes if its popularity grew.

▪ Whereas in the physical world it’s usually impossible to address any technical debt, in the virtual world it is indeed possible to do so. Why? Because the material is by nature incomplete—it’s always subject to change. That doesn’t mean it’s easy, of course.

▪ The main constraint to addressing technical debt in an endup company is the people who have made the existing running systems—or, to use my bridge analogy, the bridge builders who are hesitant to modify a bridge that’s working fine and well enough.

▪ I’ve worked within, and with, more than a few endups and seen firsthand how the technical debt tends to be managed by valiant, hardworking people who are able to sustain an existing system’s performance, slightly enhance it, or slow down how it atrophies.

▪ The more useful your software is, the more other systems depend on it, the scarier change is. You’re risking more than your piece of the world. You need progressive delivery, careful data migrations. Backwards compatibility: twice as many tests, and special cases in the code. You need to design the whole change.

▪ So I urge you to think carefully about the higher value of incompleteness, and the importance of investing in constant iteration while being ruthlessly unsatisfied with the incomplete product that gets pushed out into the world. And keep in mind that the need to iterate quickly should not serve as an excuse to avoid reevaluating your strategy along the way.

▪ The special people who make computational products are amongst you, but there are no telltale stains on their clothing, calluses on their fingertips, or any other physical evidence such as you might find with people who’ve built railroads, airplanes, or any other similarly massive machines. These software people quietly, and invisibly, move millions of numbers into place to construct software machines that move, process, and transform information at high speeds and at extraordinary scales.

▪ And although speaking machine doesn’t make you into a machine, it will naturally impact your psyche over time as the non-machine-speakers around you can start to appear blissfully clueless about what’s really going on.

▪ When it comes to pushing an idea out to users via the cloud with the kernel of an idea that can be improved over time, it’s important to remember who has the most control over making all that magic happen.

▪ This gap has been exacerbated by the fact that the very nature of business is being transformed by the cloud. In a world where products are no longer finished and incremental improvements are regularly deployed, the relationship with a consumer shifts from buying a product once (owning it) to paying a fee to use it for a set period (renting it).

▪ Whereas in the past we might have paid a few thousand dollars for a “finished” piece of software on a CD-ROM, we now pay a few dollars per month for regular access to a cloud-based service. This shift to a recurring revenue model, or subscription business, has many advantages, which include scalability, predictability, and high customer engagement.

▪ A purely functional approach is no longer enough, and instead a richly experiential one has become table stakes because of the relatively new premium standards that have been set by mass market devices and services—think Apple and Instagram.

▪ The definition of “viable” is debatable and rightly in the eye of the beholder, because the folks who actually build software systems usually come from an engineering background where “V” will signify being reliable and having as few defects as possible.

▪ So “minimally viable” tends to mean prioritizing all available resources on the functional aspects of the product.

▪ Let me linger here a bit longer, because it’s important. The engineers are the ones who can “see” the invisible computational world and who are managing a level of complexity that nobody else can see. They’re also coping with multiple layers of technical debt that will accumulate as they forge forward, which can easily involve not only the bridge that they’re building but also all other bridges that remain interconnected.

▪ Only their tired typing fingers can push back against the many dams of chaos and complexity that abound in a world that doesn’t reward them for their heroic efforts—no smoke from fire, no stains from dirt, no perspiration from manual labor. So when a business perspective drops into the battleground, or when a design perspective floats in from the periphery, it’s important to have empathy for these warriors of cyberspace.

▪ It can only be helpful to understand them, starting with your new awareness of the computational world. Being able to speak a little machine won’t hurt either. When asked how much you speak, you can always coyly respond, “Only a ‘bit.’”

▪ A techie might be fine with a rough, purely functional experience, since their tolerance for discomfort is already high to begin with.

▪ A professional test pilot in an experimental aircraft doesn’t need a cozy place to sit, whereas a passenger on a commercial jet will expect a pillow and a soda—preferably the whole can. To make this point clearer in an MVP-ridden world of computational products that are missing creature comforts, I like to use the term “MVLP,” where the “L” stands for “lovable.”

▪ The word “telemetry” was coined in nineteenth-century France, back when telecommunications technology was first emerging. It described the use of an electrical apparatus that transmitted the snow depth from Mont Blanc, the highest mountain in the Alps, to Paris.

▪ A key factor that made telemetry especially valuable was that it replaced a person with a sensor, removing the need for a human who gathered data manually on the remote end. The sensor data would be automatically read and electrically transmitted one way to a home base that needed to be within range.

▪ Try to imagine what it must have been like to be a scientist in Paris in the late 1800s capable of knowing the snow depth at a location almost four hundred miles away. Telemetry must have felt as magical to the French scientists as the internet first felt when it got started.

▪ Once you realize this is happening, it’s only natural to ask yourself, How do I turn that off?

▪ But it is not possible to enjoy the benefits of the conventional internet without this underlying two-way connectivity—there’s no way to turn it off. This is the nature of networked software systems: they all have an explicit connection to a system running somewhere outside of your own computer.

▪ Your every action can be instrumented and telemetered back to a home base somewhere in the cloud. So instead of being explicitly asked a question about something, like in a pop-up survey, it’s possible to simply monitor your behavior and infer what you might be trying to do. The fact that you linger on a particular image longer than the rest can imply you have an interest in it.
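
A hedged sketch of what such instrumentation can look like from the product's side; the event names, fields, and thresholds below are hypothetical, not any specific company's telemetry scheme.

```python
# Instead of asking a question, just watch: every action becomes an event
# that can be telemetered back to a home base in the cloud.
import time

events = []

def record(event, **details):
    events.append({"event": event, "time": time.time(), **details})
    # In a real product this buffer would be transmitted to the maker's servers.

record("image_viewed", image_id="cat-42", dwell_seconds=9.3)
record("image_viewed", image_id="dog-7", dwell_seconds=0.4)

# Lingering on one image far longer than the rest implies interest in it.
interests = [e["image_id"] for e in events if e.get("dwell_seconds", 0) > 5]
print(interests)   # ['cat-42']
```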

▪ Surprisingly, prior to the 2018 European Union legislation called the General Data Protection Regulation (GDPR), there were few impediments to the way technology companies could collect, process, and share data with third parties—all unbeknownst to users.

▪ At the time of this writing, the United States has no similar consumer-level legislation, so there are fewer restrictions on what companies can do with your information. And in general, when we click on the long legalese telling us terms of service have changed, we’re often giving up some of our rights concerning ownership of the information we’re accessing or creating.

▪ This is all the more alarming when you consider how it’s not just one or two developers at a software company manually analyzing your every action, but instead computational systems that never take lunch breaks or vacations and get down to extreme levels of detail and precision in working to know everything about you.

▪ The computational approach to creating products is powerful and dangerous—that shouldn’t be a surprise to anyone, since power and danger go hand in hand. But the latter concern is frequently overlooked because of what’s often labeled as the “arrogance” of today’s technology leaders.

▪ Reading a message that says, “Welcome back, John!” feels good at first. But it may feel less good when you visit a completely unrelated site for the first time and it enthusiastically welcomes you by your name. You’d feel similarly awkward if you showed up at a restaurant you’d never been to before and a server you’d never met before addressed you by your first name, saying, “John! How’s your new job going?”

▪ Avoiding this situation—where strangers “know” you to a rightfully uncomfortable degree—is a matter of highest urgency for humanity right now. You’ll hear about it in the media regarding our privacy and how to protect ourselves, and it’s only natural to want the invasion of our privacy by machines to stop.

▪ But to ask a computing device to stop gathering information and to stop sharing it with other devices is like wishing away all the magic in your magic wand.

▪ Computational machinery, by its very nature, can and will be instrumented in some way because this is an intrinsic benefit to the paradigm. The level of instrumentation can vary from capturing your every click and keystroke on a device to capturing your three-dimensional location information on earth at every moment.

▪ You may have already heard about how the “cookie” is the basic unit of tracking on the Web. Although they sound completely harmless, cookies are the first sin of the Web while also being one of the reasons internet advertising businesses became so successful during the rise of the internet.

▪ Cookies are little pieces of text that any programmer can “park” inside your browser to later access when you come back to it—that way, the browser remembers what you have already visited, and when.

▪ Incidentally, this basic technical mechanism of leaving cookies-as-text in your browser’s cookie jar also allows services unrelated to the sites you’re visiting to park information about you. These are called “third-party cookies,” and I recommend you disable them in your browser’s settings after reading this.

▪ It’s possible to turn off all cookies on your browser, including first-party cookies, but doing so makes the Web much more cumbersome to navigate. Cookies bring convenience, because they mean you don’t have to remember a password to a service you’ve already logged in to—a cookie gets placed on your computer to mark it fully authorized as “logged in” so you don’t have to go looking for your password somewhere in your pile of notes.
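
A small sketch using Python's standard library of what a cookie actually is under the hood: a named piece of text the server asks the browser to keep and send back later. The cookie name and value here are made up.

```python
# A cookie is just a labeled scrap of text that a site parks in your browser.
from http.cookies import SimpleCookie

# What a server sends to park a cookie in your "cookie jar":
outgoing = SimpleCookie()
outgoing["session_id"] = "abc123"
outgoing["session_id"]["max-age"] = 3600       # remembered for an hour
print(outgoing.output())                       # Set-Cookie: session_id=abc123; Max-Age=3600

# What the browser sends back on the next visit, so the site "remembers" you:
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)            # abc123
```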

▪ And don’t worry, cookies aren’t inherently harmful.

▪ For the foreseeable future, you will be constantly trading computational convenience for digital information that you reveal about yourself.

▪ The more privacy you give up, the more convenience you get. Said differently, when you share information about yourself, you are guaranteed the pleasure of getting what you want instead of feeling the pain of being served incorrectly.

▪ For example, every hotel chain out there is aware that I don’t like to stay in a room next to the elevator. In a similar vein, every airline out there is aware that I prefer an aisle seat. Do I mind that they know this about me? Not at all, because it means my desires are more likely to be met.

▪ There are moments when you explicitly opt in to being instrumented and telemetered, like when a site asks you to reveal your location to it. If every layer of technology were to ask for permission like that, you would likely not be able to use the internet as you know it, because there are a lot of assumed permissions that we’ve already handed over.

▪ Because Gmail has processed all your emails and knows how you might respond to a message, it will automatically suggest a response. That sounds magical too. By giving the cloud companies access to all of our information, we enable them to do wondrous things for us and brew the perfect temperature and quality of tea. The only problem is, what happens if hackers break into Amazon and steal your credit card information, or manage to access all your emails from Google?

▪ The way to mitigate the accepted risks is to understand, and to respect, how computational systems work and what can possibly happen when things go awry. To wish away all of the miraculous conveniences that the computational era has introduced would mean that I couldn’t easily text my mother a heart emoji at any time or work globally while managing to attend my children’s dance performances.

▪ If the average person unlocks their smartphone over one hundred times a day, and there are now billions of handheld computing devices out there, we can figure that the amount of information generated from accessing these devices easily justifies the commonly used term “big data.”

▪ It’s important to listen to what Dr. Regina Dugan shared with the TED audience in 2012: “Nerds change the world. Be nice to nerds.”

▪ Harvard Business Review helped popularize the role of data scientist by declaring it “the sexiest job of the twenty-first century.”

▪ It gave the following definition:

Data scientists’ most basic, universal skill is the ability to write code. This may be less true in five years’ time, when many more people will have the title “data scientist” on their business cards. More enduring will be the need for data scientists to communicate in language that all their stakeholders understand—and to demonstrate the special skills involved in storytelling with data, whether verbally, visually, or—ideally—both.

▪ But it took my being a resident in Silicon Valley to fully understand how much data was available to technology companies, and to more earnestly think about the hugged { code block } sitting at the heart of the nested server/device loop on page 147.

▪ If you’re wondering, like me, why data science wasn’t an available major when you were in college, it’s because the field of statistical analysis used to be only nominally useful, for two reasons: 1) you can get good statistical accuracy only if there is a lot of data, and 2) processing a lot of data by hand is terribly tedious and boring.

▪ But computation is the source of the former and the solution to the latter. It seems almost self-serving that computation is the reason why we have so much data while also being the solution to understanding it.

▪ This ability to communicate a conclusion based upon a high-R-squared and low-p-value model requires not only good communication skills, but also an awareness of the tendency to mistake data-backed inferences for indisputable facts.
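
As a hedged illustration (mine, not the author’s), here is the kind of model summary a data scientist might communicate: an R-squared and a p-value from a simple linear fit using SciPy. The data is invented, and the point is that these numbers describe how well a line fits one sample, not an indisputable fact about people.

```python
from scipy import stats

# Hypothetical data: weekly sessions in a product versus dollars spent.
sessions = [1, 2, 3, 4, 5, 6, 7, 8]
spend = [3.1, 4.2, 6.8, 7.9, 9.5, 12.1, 13.0, 15.2]

fit = stats.linregress(sessions, spend)
print(f"R-squared: {fit.rvalue ** 2:.3f}")  # closeness of fit, between 0 and 1
print(f"p-value:   {fit.pvalue:.4f}")       # chance of a slope this strong if there were no relationship

# A high R-squared and a low p-value make a persuasive chart, but they remain an
# inference from one sample of people, shaped by how the data was collected.
```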

▪ Quantitative information carries a lot of power in an organization of any size—even when it’s wrong—so always keep in mind that data itself doesn’t produce facts or answers. It only paints a raw picture that requires human insight and interpretation, and thus is never going to be 100 percent correct.

▪ The best way to assess a functional design is through a combination of quantitative and qualitative methods. The numbers will tell you what’s going on, and the individual people will help you understand why it’s happening.

▪ If the former is data science, then the latter is data humanism. The science attempts to answer the question, and the humanism makes you ask why it’s relevant to people. The science is what makes you run a quantitative study showing that poor people have a greater tendency to feed their children junk food. The humanism is what makes you listen to disadvantaged parents explain that junk food is one of the few things they can afford to show their kids they love them. Having grown up overweight in an underprivileged family, and later educated to think quantitatively at the finest institutions in the world, I’m reminded how all my smarts aren’t worth as much as listening carefully to people.

▪ Qualitative insights are fueled by carefully structured conversations with human beings just like yourself—serving as a constant reminder of who this work is ultimately meant to serve.

▪ We’ve taken the long road to get to the heart of what makes instrumented systems so exceptionally useful: the ability to easily test out an idea.

▪ By shipping an idea that’s instrumented for data collection into the world, it’s easy to gauge success or failure by just observing how it actually gets used.

▪ But what’s even better than learning whether an idea is good or not, is instead deploying a few distinct variations on the idea simultaneously to learn which is the best direction of the bunch. Maybe a particular aspect of an idea is better than others—this can be revealed by testing variations instead of throwing out an entire idea. It’s like designing a fishing lure by attaching three variations of it to the end of your fishing pole and seeing which one catches the most fish. And then, by using the most effective lure as the basis for your next set of variations, it’s possible to find the next best lure, and so on. The ability to easily test variations on ideas in this manner reduces the risk of random guesses and can only improve your product design.
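
Below is a minimal sketch (my own, with made-up numbers) of the three-lure idea: count how often each variation converts, then check whether the differences are bigger than chance before building the next round of variations on the apparent winner.

```python
from scipy.stats import chi2_contingency

# For each variant: [conversions, non-conversions] among the visitors who saw it.
observed = [
    [120, 880],  # Variant A: 12.0% conversion
    [150, 850],  # Variant B: 15.0% conversion
    [135, 865],  # Variant C: 13.5% conversion
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The variants really do perform differently; iterate from the winner.")
else:
    print("No clear winner yet; gather more data or try bolder variations.")
```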

▪ In the event we got unusually better at guessing how to improve our system (say, 2 percent a day instead of just 1 percent), the compounding becomes even more dramatic, as the arithmetic sketched below shows.
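
A small worked example of that arithmetic (mine, filling in the calculation the highlight points to): compounding a 1 percent or 2 percent daily improvement over a year, and what happens when the daily change runs the other way.

```python
improve_1_percent = 1.01 ** 365  # roughly 37.8x better after a year
improve_2_percent = 1.02 ** 365  # roughly 1,377x better after a year
worsen_1_percent = 0.99 ** 365   # roughly 0.03x: guessing badly compounds too

print(f"1% better every day for a year: {improve_1_percent:,.1f}x")
print(f"2% better every day for a year: {improve_2_percent:,.1f}x")
print(f"1% worse every day for a year:  {worsen_1_percent:.3f}x")
```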

▪ This teaches us what happens if we get in the habit of guessing badly too often.

▪ Putting those concerns aside, it’s exciting to consider how making a thousandfold improvement over the course of a year is an imaginable possibility

▪ Yet I’d caution you to temper your giddiness over wielding the power of compounding thousandfold gains, whether it’s aimed at a noble task, like enabling a retired person to afford their weekly grocery bill, or at the more sinister prospect of changing a voter’s opinion on a candidate. I throw in this cautionary point, as I’ve tried to do throughout this chapter, to reinforce how the cloud is not just a network of computers but a network connecting human beings. It’s certainly easy to forget, so if I sound like I’m preaching, understand that that is not my intent—it’s so I can remind myself in the future when I read these words. I find myself forgetting how easy it is to automate bad behaviors.

▪ So when we’re running experiments on many computing devices via the cloud, we’re running them on real people—not in some invisible, detached computational simulation. For example, in 2014 Facebook shared how it ran an experiment on 689,003 Facebook users in which they were shown either more of their friends’ negative posts than their positive posts, or the reverse. Facebook concluded that it was possible to manipulate a statistically significant number of people’s emotions using this method—in other words, the tests showed the potential to fully automate turning a half million people into an angry mob.

▪ A more winning example of a successful test is the fundraising experiment run on barackobama.com during Barack Obama’s 2008 presidential campaign. Rather than shipping a final website for soliciting donations and then walking away from it, the Obama team ran twenty-four different combinations of buttons and media content to determine that one variation performed 40.6 percent better than the default and could raise $60 million more. The original leaders of this effort for the first Obama campaign went off to start a successful company called Optimizely, which makes it easy for anyone to run such tests. Unsurprisingly, the testing obsession continued in the Obama 2012 fundraising campaign. One example involved testing variations of an email with different subject lines, to learn that one—“The one thing the polls got right . . .”—generated $403,600, whereas another—“I will be outspent”—generated $2,540,866.

▪ When dealing with physical products, the cost of experimenting with variations is still high—although it’s starting to fall in certain categories, like when using three-dimensional printing to rapidly fabricate physical prototypes in plastic or metal with increasingly uncanny accuracy. Testing physical variations on people can involve hefty shipping costs as well as costs due to lost time when moving a prototype around the earth, which three-dimensional printing is just starting to address with the ubiquity of printers making it easy to fabricate objects at remote locations.

▪ The best way to learn how to effectively execute these kinds of “split tests” (also called “A/B tests”) is freely available in the short 2007 research paper “Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO.”

▪ The key point is stated in the paper’s title: don’t listen to the “HiPPO”—or “highest-paid person’s opinion.”

▪ In other words, let rigorously managed data experiments with your customers guide you to an outcome, instead of letting the boss’s opinion overrule your thoughtful work. Keep in mind that a successful variation experiment depends on starting in the right place to begin with—which is usually the choice of the HiPPO

▪ Also remember that having sufficiently divergent variations that make for a worthy test will benefit from tapping into your team’s creativity.

▪ Testing variations on a basic, core idea works best when you are already comparatively close to success. It’s like playing a game of “hot and cold” in search of a hidden object. When you hear “hot”—which means the object is nearby—your best strategy is to take little itsy-bitsy steps and tiptoe around to incrementally make your way to the prize. In contrast, it doesn’t make sense to make a sudden, random leap of faith to jump completely out of range, because you were already close to finding the prize.

▪ Testing ideas that can be instantly delivered to users through the cloud provides an efficient derisking framework when a computational system is thoughtfully instrumented and properly staffed.

▪ At the heart of the testing approach and the success of its adopters, like Amazon, Airbnb, and Netflix, lies the simple customer-centric wisdom of Claude Hopkins. Way back in 1923 he stated:

Almost any question can be answered, cheaply, quickly and finally, by a test campaign. And that’s the way to answer them—not by arguments around a table. Go to the court of last resort—the buyers of your product.

▪ Testing with your customers comes with added costs and risks that need to be weighed against the opportunity costs of not making any modifications or attempted improvements. And testing is only really useful when you have a measurable outcome in mind, like nudging a click or driving a sale to compare with a base case you can measure against.

▪ Be especially careful of finding yourself in a situation where the default cultural response to any idea becomes “Just test it.” On the one hand, this means there’s a culture of being open to new ideas; on the other hand, it can suggest a certain laziness has set in to avoid being thoughtful when faced with contradicting opinions. “Just test it” can easily become an excuse for not investing in the informed efforts needed to start off with a really good idea.

▪ In other words, testing is real work that can be worth thousandfold improvements and millions of dollars—when you do it right.

▪ Whereas the Temple of Design pushed us to ship something complete and as close to perfect as possible—epitomizing the high-stakes way of doing business in the last century—the Temple of Tech teaches us to push out something incomplete and instrumented that significantly lowers the stakes to begin with

▪ It’s logical to blame the technology industry for the universe of computational machinery that permeates every aspect of our lives today. But it’s not their fault alone—it’s your fault too, because you were oblivious to what the Zuckerbergs of the world have long understood as fluent speakers of machine.

▪ Consider how the first car to break the sixty-mile-per-hour limit was a torpedo-shaped electric vehicle called La Jamais Contente—The Never Satisfied—invented by Belgian engineers and tested in France in 1899. If La Jamais Contente were to have evolved at Moore’s law standards, it would have broken the speed of light by 1925.

▪ On the one hand, it can seem ominous, dark, and terrifying if you can’t fathom what it’s capable of doing—which is what most people are starting to feel. On the other hand, we can feel gratitude that data and ethics experts have long been thinking through many of the issues so that we can start to distinguish between ethical and unethical sources of data collection.

▪ We’re still in the infancy of understanding what it means when the data owned by a credit card company can be completely anonymized and sold to a company like Google, to later be deanonymized by simply matching the credit card transactions to location information that’s either broadcast by your smartphone or tagged by information you might share about yourself online.

▪ There are roughly five phases of evolution in the software product industry that culminate in an end state where AI will take care of everything because it will be all-knowing.

▪ We’ve made it through the first three phases and are currently in the middle of the fourth, where human intelligence is blended with computational intelligence in a kind of hybrid—thus dubbed the Centaur era by Nicky Case—to signify the melding of a human intellect with the superior physical stamina of a horse.

▪ Shrink-wrapped Boxes: We shipped software in boxes with tamperproof plastic and shipped updates by the same method.

▪ Shrink-wrap + Download: We made boxed software optionally downloadable online, and we made updates available online.

▪ Software as a Service (SaaS): We moved software into the cloud as a service, and human teams labor to continuously update it.

▪ WE ARE HERE → SaaS by Centaurs: We run software in the cloud that’s constantly being improved by human teams that collaborate with light AIs.

▪ A New Beginning: We will be using software that evolves more rapidly than ever because it’s powered by know-it-all heavy AIs.

▪ Media Lab cofounder Nicholas Negroponte famously posited this future of fully tailored customer care back in the 1970s with his “Daily Me” concept of a newspaper that was completely customized to just what interested you. It’s now a reality in every facet of our digital lives, because all the centaurs have been working hard to automate and constantly improve the hedonic bubbles surrounding our brains.

▪ Case in point, even when we’ve become split-testing experts, we’re likely to only succeed a third of the time, fail a third of the time, and have no impact otherwise.

▪ But in time I came to see the difference between the two terms:

Technologist = I do, because I can.

Humanist = I do, because I care.

▪ The grand task of addressing the impending automation of an imbalanced society is upon us right now, just as we are on the precipice of the Singularity, and it is not too late for us to do something about it. If and when computers fully outpace the intelligence of the entire human race, there will always be certain things that machines will not be able to beat us at doing, and it’s our job as humans to figure them out.

▪ Given that the technology industry has long been fed by the technology education industry, where I’m originally from, I had long assumed that the matters we addressed at the source would make their way downstream

▪ I naively thought that with this statement made public by MIT, and subsequently reported by The New York Times, the great injustice of gender discrimination was henceforth officially banished from the technology world. So I was more than flabbergasted when I arrived in Silicon Valley over a decade later and was introduced to a room full of the “top UX designers in Silicon Valley,” only to find two women present.

▪ As I began to dig deeper into the statistics for tech, I became concerned. I learned that there was only 21 percent representation by women in tech, whereas the overall population of women in the United States is roughly 50 percent—an obvious imbalance.

▪ In 2014, the US Equal Employment Opportunity Commission reported that the high-tech sector employed 7.4 percent African Americans, 8 percent Hispanics, and 14 percent Asian Americans, whereas the average overall representation in the private sector was 14.4 percent for African Americans, 13.9 percent for Hispanics, and 5.8 percent for Asian Americans.

▪ From a systems perspective, we can predict that an imbalance in the tech industry of this magnitude will likely perpetuate without self-correction. Tech companies need to run at full speed to keep up with Moorean time scales, which fosters the pressure to optimize for “culture fit” among potential hires—meaning people who are “just like us.” That way, a new person will take less time to onboard (because they are “like us”), create less day-to-day friction (because they are “like us”), and follow the boss (because they are “like the boss”).

▪ And these people, in turn, will hire more people like themselves—unless there are explicit systemwide interventions and incentives or penalties that can break this cycle. Whether it’s the friends you went to college with who have similar tastes, or the people in your neighborhood who moved there for similar reasons, or your professional circle that’s already sorted itself for maximal camaraderie, our tendency is to reduce friction and choose sameness over difference.

▪ So it shouldn’t surprise anyone that the tech industry is filled with people who are more likely to think alike and come from similar backgrounds, because the need to move fast will always outweigh a slower, considered approach.

▪ But when there is a “we” that defines us, there is a corresponding category of “not like us” that is naturally excluded because “they” think differently and will slow us down. The Temple of Tech is no different from the Temple of Finance or any other specialized profession that seeks to foster its own culture. The boundaries of any temple will nurture safe cultures of like-minded people who prefer to avoid the friction they might feel whenever they’re not with their own tribe.

▪ The difference is that, although we should care about inclusivity in any sphere, the techies exert disproportionate influence as they operate at a whole different Moorean speed and scale.

▪ For example, when an internal hiring tool was programmed by an Amazon team of majority pale male AI experts who’d leveraged past data from hiring decisions by likely majority pale male managers, the computational systems demerited résumés that mentioned attending women’s colleges or used the word “women’s.” So what can easily be pointed to as a “computer program error” needs to be considered as more of a “culture error” if we are to truly prioritize accountability.

▪ With the players in the Temple of Tech running at computational speeds and scale, we can expect the velocity and level of imbalance to be unparalleled—and eventually fully automated. Beyond the social implications of why that’s not a good thing in terms of equity and justice, from an organizational perspective it represents a suboptimal path for achieving breakthrough innovations.

▪ Sara Wachter-Boettcher’s landmark book Technically Wrong documents the many ways the technology industry has unconsciously let the biases of its primarily pale, cisgender male culture impact its products. The result is everything from menstruation apps that refer to users as “girls” to shopping apps that push notifications to women to shop for a Valentine’s Day gift to delight “him.” Or when a popular social media company released a real-time image filter to add slant eyes to any face to approximate an Asian caricature—just months after it had released a real-time image filter to darken a light-skinned face to look black—such unacceptable mistakes result in PR backlash that, while costly, may not result in the hiring investments that are necessary to make better product decisions.

▪ There’s interest in moving from the narrow-mindedness of the “culture fit” approach to instead valuing differences as “culture add”—that is, bringing new voices and ways of thinking into an organization as a positive asset. For all of Google’s diversity and inclusion challenges—as illustrated by the firing of an employee for his internal antidiversity manifesto memo—the company has been steadily investing in a promising area called “product inclusion.”

▪ Sound impossible? Absolutely. But computation enables the impossible, and if we fully harness its capabilities, we humans can re-create the Temple of Tech as one that is welcoming to all.

▪ Given that we can easily deploy computational products that are incomplete and instrumented, we have the opportunity to get back in return a large amount of data to determine how to modify and improve those products.

▪ So with computational products, it’s easy to become biased toward broadly observing the behavior of thousands of users instead of consciously investing time to delve deeply into just a few individuals’ experiences with your products. Why? The simple reason is that it’s so much easier (and thus cheaper) to take the instrumented approach to studying aggregate behavior, due to the computational power available to us today.

▪ By contrast, it’s much more low-tech (and thus more expensive) to study individual people’s behavior through methods developed by anthropologists—in essence, ethnography. It’s a positive sign of customer empathy to share insights after spending time with that one nontechie customer named James who is having a few challenges with your product.

▪ Here’s the problem: the “scientific” response given as numerical data will usually get the majority of affirming nods compared with the stories of James’s challenges and the roadblocks he’s currently facing. That’s because a quantified viewpoint appears like fact—an important signal extracted from the noise—while a qualified viewpoint appears like a noisy customer who just doesn’t “get it” and can be discounted because he falls outside the 7.2 percent.

▪ In actuality neither the aggregate data nor the individual’s story constitutes fact, because both contexts involve people. Human beings are by nature unpredictable, so anything involving predictions of human behavior is ultimately going to be a guess. One kind of guess uses quantitative data and the other uses qualitative data.

▪ We pay big money for high-quality guesses as one of the means to lower risk in decision making, but no guess can give a 100 percent guarantee of success. That’s why it’s a guess, not a fact. And the best way to guess better, as any investor knows all too well, is to create a portfolio of bets so that all the casino chips aren’t placed on only one of the guesses.

▪ This is especially difficult since effortless access to quantitative data is a primary benefit and de facto outcome of the computational era, so to folks who live every day in the future, it may feel like going in the opposite direction.

▪ Furthermore, if it only costs five dollars a month to gather and analyze data from millions of your online customers using your products, it can seem unnecessarily expensive and inefficient to make the investment in working one-on-one with a customer, which can cost hundreds of dollars per month.

▪ To draw on another finance industry analogy, the best investors will not only carefully analyze the funds they are engaged with, but will also fly out and visit the fund managers in person for that extra bit of due diligence. So if the due diligence of the investing world sets the highest standards, then talking with a real customer from time to time makes good business sense.

▪ This is the lesson of good ethnography: to understand a cultural phenomenon, you need to get as close as possible to “first-source” information, instead of relying on second- or thirdhand information.

▪ Furthermore, to truly understand first-source information, you need to invest time in knowing and understanding the cultural context that surrounds it.

▪ But ever since I started to actively work face-to-face with customers—as inspired by Intuit founder Scott Cook’s habit of “going home with the customer,” watching them install and run his software system, a practice he began back when he first launched Quicken—I now fully realize that the time is well worth the investment.

▪ It can be a terribly uncomfortable thing to do, because you will quickly know when you’ve let down a fellow human being with the decisions you’ve made in your product.

▪ For example, I recall how in the nineties Japanese copy machine makers were designing elaborate user interfaces to manage paper jams, only to be blindsided by organizations going paperless as a better way to share information.

▪ I liken this to what we often encounter in customer support as managing a lot of “flat tire” situations: we immediately want to spend all our time making a tire that can’t go flat—or, more often, getting really good at fixing the flat tire. Meanwhile, we can forget to ask where the customer was going in the first place: “What was their destination, and what hopes and dreams were associated with it?”

▪ By starting from the motivation question as the driving force behind thick data, you’ll remain more strategic as you immerse yourself in firsthand information.

▪ Remember that you’re looking for subtle, human details that are impossible to capture with charts and numbers, so try to rely on your ability to smell and feel a situation. Be what AI cannot do.

▪ Thinking like a classical designer, and believing your solution is the one that all will bow down to and adopt. The standards that elite institutions uphold as the cultural compass for the classical design world are underwritten by subjective decisions and invisible wealth networks that facilitate what gets remembered versus forgotten.

▪ The Temple of Design narrative of the “genius designer” is not a reliable pathway to success—it’s seductive, but it’s stupid. Use this nose less.

▪ Computational machines are master copycats and are powered by the quantitative data at their foundations. So we’ll need to pay attention to our completely normal “human nature” of relying on our usual biases, aka “wisdom.” The computational era will easily turn out poorly for all of us if we don’t balance out all of our quantitative data with more qualitative data.

▪ And gather as many observations as you can from people who are unlike yourself—because triangulation works best when you have the most diverse set of sources with which to tune and retune your data-informed guesses.

▪ Automation in Moorean terms is very different from simple machines that wash our clothes or vacuums that scoot about our floors picking up dirt. It’s the Moorean-scale processing network that spans every aspect of our lives that carries the sum total of our past data histories. That thought quickly moves from wonder to concern when we consider how all that data is laden with biases, in some cases spanning centuries

▪ We get crime prediction algorithms that tell us where crime will happen, so officers are sent to neighborhoods where crime has historically been high—that is, underprivileged neighborhoods. And we get crime-sentencing algorithms like COMPAS, which are likely to be harsher on black defendants because they are based on past sentencing data and biases.

▪ Let’s recall again how the new form of artificial intelligence differs from the way it was engineered in the past. Back in the day, we would define different IF-THEN patterns and mathematical formulas, like in a Microsoft Excel spreadsheet, to describe the relation between inputs and outputs. When the inference was wrong, we’d look at the IF-THEN logic that we’d encoded to see if we were missing something and/or we’d look at the mathematical formulas to see if an extra adjustment was needed. But in the new world of machine intelligence, you pour data into the neural networks and then a magic black box gets created: you give it some inputs, and outputs magically appear. You’ve made a machine that is intelligent without having explicitly written any program per se.
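
A minimal sketch of that contrast (my illustration, assuming scikit-learn and an invented loan example): the old IF-THEN logic is spelled out and readable, while the trained model gives answers whose reasoning lives in opaque numbers.

```python
from sklearn.linear_model import LogisticRegression

# Old way: the logic is explicit and inspectable.
def approve_loan_rules(income_k: float, debt_k: float) -> bool:
    if income_k > 50 and debt_k < 10:
        return True
    return False

# New way: pour in past decisions (biases and all) and let the model infer the pattern.
past_applicants = [[60, 5], [30, 20], [80, 2], [25, 15]]  # [income, debt] in thousands
past_decisions = [1, 0, 1, 0]                             # whatever humans decided before

model = LogisticRegression().fit(past_applicants, past_decisions)
print(model.predict([[55, 8]]))       # an answer appears for a new applicant
print(model.coef_, model.intercept_)  # but the "why" is a pile of weights, not IF-THEN statements
```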

▪ Machine learning feeds off the past. So if it hasn’t happened before, it can’t happen in the future—which is why if we keep perpetuating the same behavior, AI will ultimately automate and amplify existing trends and biases. In other words, if AI’s masters are bad, then AI will be bad.

▪ In our machine intelligence era, an apt analogy is how children inevitably copy what their parents do. Oftentimes they can’t help growing up to become like their parents—no matter how hard they try otherwise.

▪ While in the old days solving a complex problem by writing a computer program could take months or years, now machine intelligence can rapidly conjure an equivalent computational machine when simply fed past data from which it can model past behavior.

▪ So rather than imagining that all the data we generate gets printed out onto pieces of paper in a giant room somewhere at Google, with a staff of twenty running around trying to cross-reference all the information, think instead of how machines run loops, get large, and are living. The logical outcome of that computational power—a rising army of billions of zombie automatons—will tirelessly absorb all the information we generate and exponentially improve at copying us. The AIs are not to blame when they do bad things.

▪ With revelations that systems like Facebook are able to alter how an individual behaves, we’re reaching a critical moment in how we want to coexist with computational systems. If we expose ourselves to technology that is programmed to be sexist, misogynistic, homophobic, racist, and so forth, we shouldn’t be surprised to see things like The Wall Street Journal’s “Blue Feed, Red Feed,” which shows you what you will see if Facebook tags you as liberal or conservative.

▪ The rise of computational design, and the incredible business value it has brought with it, was also creating imbalances in ways that were not immediately apparent to the people and companies at the center of it all. The fact that AI à la levure was odorless really bothered me.

▪ Fortunately there’s change afoot, led by inclusive design expert Kat Holmes, with ideas that began when she was at Microsoft that are now spreading across the world via her 2018 book, Mismatch: How Inclusion Shapes Design.

▪ Anything that we feel comfortable with is going to be laden with biases, and because computation has been built by techies, we can expect it to be laden with techie biases.

▪ It’s a common practice in industry to make closed systems, because when successful they provide the invaluable ability to exercise full control.

▪ When launching the first Mac computer in 1984, Apple famously went with a closed computing system that wasn’t easy to extend like the competing Wintel PC standard at the time. As a result, Apple was able to control the entire user experience in ways that no other computing brand could achieve. This strategy of a closed system approach played out again later with Apple’s launch of the iPhone, and the rest is history.

▪ Meanwhile, there was an emerging mobile OS project called Android that chose the unusual path of making all of its computer code (“source code”) openly available. Today, there are more devices powered by Android than Apple’s operating systems.

▪ As of 2019, Apple’s approach has started to weaken, and the company is being forced to participate outside its own closed universe. Are its unfair advantages eroding?

▪ The term “open source software” was coined by Christine Peterson in 1998 as a way to better embody the community values that are inherent in it, as opposed to the prevailing term of the time, “free software,” which connotes lesser quality.

▪ In open source, the software is the community and not just the code.

▪ The advantage of collaboration over cooperation is that mutual benefits result from working together, with all parties making compromises of varying degrees. In the absence of the ability to collaborate, the only recourse for governments to rein in the Temple of Tech today is to attempt to regulate it. Interestingly, if all software by Temple of Techsters were fully open source, then there would be no need for governments to take their current course of actions against them. Why? Because the source code would be inspectable for violations of issues that we are all concerned about today, like knowing what they’re doing with all the data they are gathering about us. It’s harder to do evil when there aren’t opaque walls shutting everyone else out. An open systems approach is an alternative to government regulation, and so I expect we’ll see more of this approach when politicians who can speak machine, like you, get elected.

▪ In a world where everyone seeks to collaborate with each other and inflict no harm, full transparency can mean “sharing is caring” and lasting harmony. However, there will always be a few bad actors looking for ways to manipulate a situation in their favor, for reasons that can only be explained as human nature. So open source is not always the way to go. For example, you would never want to publish all the source code for your personal electronic banking system that can easily access all of your finances. An open source approach might be commendable if such an act of sharing let others make a similar system for themselves, but you can bet that all your money would soon vanish if your source code included sensitive information, like bank account numbers and passwords. Or if Facebook’s algorithms were all open sourced, then an entity with malicious intent could rewrite the timeline code and easily manipulate your timeline. And of course, there will always be competitive advantages to justify why a business would want to keep its code private: to keep its unfair advantages over its rivals.

▪ Nonetheless, businesses are recognizing the value of open source. Microsoft surprised the world with its acquisition of GitHub, the world’s largest community of open source software development. To grasp the magnitude of that acquisition, just ask your programmer friends about it—some may not even know that Microsoft owns GitHub, because Microsoft has chosen not to alter or rebrand how it currently operates.

▪ Besides Android, another example at Google is the Chrome web browser, which runs on an open source engine.

▪ Now anybody can make a web browser on top of it—and even Microsoft has announced it will be switching over to Chrome’s engine. Relatedly, Apple’s web browser Safari is an example of a hybrid open/closed system: it shares its lineage with the same open source engine as Google, but the rest of its code is not accessible to the public.

▪ When using open source in your product, be sure to check the licensing rules—some licenses let you use the code without any restrictions, while others require you to openly share the rest of your code if you use theirs. The former is often referred to as an “MIT license,” which gives you a lot of freedom; the latter is a GNU “GPL license,” which is more about giving others a lot of freedom.
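
As a small, hedged sketch of what checking can look like in practice, the snippet below lists the license each installed Python package declares about itself, using only the standard library. It is a starting point for a license review, not a substitute for reading the actual license terms.

```python
from importlib.metadata import distributions

# Print each installed package alongside the license it declares in its metadata.
for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
    name = dist.metadata["Name"] or "unknown"
    declared_license = dist.metadata["License"] or "not declared"
    print(f"{name}: {declared_license}")
```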

▪ Let’s not forget about the other kind of programming that has less to do with shareable computer codes—I’m talking specifically about AI à la levure. Newer machine intelligence systems aren’t composed of readable computer codes but are instead packaged as opaque black boxes of numbers and data with no clear logical flow.

▪ It’s long been a concern that these methods are so complex that we don’t really know how they work—they’re not legible by human beings because they’re essentially piles of raw numbers.

▪ The severely closed nature of these systems—which inherit biases from the data they are trained on—has set off alarm bells about the need to address their inherent opacity. New work is now emerging on computational ways to inspect these opaque AIs so they behave more like “gray boxes” that might give us more insight into how they work. And if we can’t figure out how they work, there are also efforts under way for AIs to start asking why they’re being told to do something, so they might build the equivalent of a conscience.

▪ Frankly, it’s easy to be terrified of AI as it’s becoming an increasingly common topic in popular media. We’re not far away from hearing about a cleaning robot that refuses to listen to you, or an app-enabled pacemaker that extorts you, or a cybercrime cartel that has replaced your entire online presence with a bot you can’t control—none of these has happened yet, but all are entirely possible with existing technology.

▪ Now more than ever we need to think and work inclusively in order to directly address the imbalances that will be automated if we don’t consciously create new paths.