A brief and quick history of chip design
Brought to you by Electric Square.
Created and Presented by Tom Read Cutting.
Whilst programming is a skill that is becoming increasingly widespread and accessible, the stack that powers our programs (from the silicon to the operating system, the compilers, and everything in between) is becoming increasingly complex and harder to understand. However, the Law of Leaky Abstractions means that understanding the foundations on which computer programs run is important knowledge for any software engineer who wishes to solve problems effectively. It is also just plain interesting.
This presentation focuses on one of the lowest-level components of the stack, one whose design affects the behaviour of programs at the very top: the design of computer chips. Moore's Law seemingly allowed computers to double in power every two years until very recently, as a result of exponentially increasing transistor density. Why did this happen, why has it slowed down, and why is it increasingly difficult for computer programs to take advantage of these theoretical speed increases?
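To make the scale of that doubling concrete, here is a minimal sketch (not from the talk) of the exponential growth that Moore's Law describes. The starting point of roughly 2,300 transistors for the Intel 4004 in 1971 and the idealised two-year doubling period are illustrative assumptions.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Estimate transistor count under an idealised two-year doubling.

    base_count of ~2,300 (Intel 4004, 1971) is an assumed starting point
    used purely for illustration; real growth rates have varied.
    """
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Fifty years of doubling every two years multiplies the starting count by 2^25, taking a few thousand transistors into the tens of billions, which is why exponential density growth dominated chip design for so long.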
Like the software that runs on them, computer chips have complex designs, and those designs have only grown more intricate as the number of transistors available to hardware architects has increased. These complex designs have allowed for faster computers, but how? What are they?
Importantly, what software engineering choices can make the hardware run more effectively?
This presentation is a very high-level crash course on one of the most important and interesting topics in Computer Science: the design of computer chips.