Elements of Computing Systems Chapter 12

Tom Stuart edited this page Jul 22, 2015 · 5 revisions


We began by reading through some of Murray’s commits, paying particular attention to those which highlighted discrepancies between the book’s advice and the actual behaviour of the reference compiler. (For example: the book says push constant 1; neg but the compiler emits push constant 0; not; the book says to add the address and the index, but the compiler adds the index and the address; the book says to set up the that pointer before calculating the value in an array assignment, but the compiler calculates the value first and stores it temporarily.) The general feeling was that it would’ve taken less work to write a “correct” compiler if we’d been given tests for the behaviour of its emitted code (cf. the vm-translator tests) instead of having to make it match the output of the reference compiler.

Suitably overawed by the magnitude of Murray’s achievement, we agreed to merge his commits into the compilation-engine branch and then land the whole compilation engine onto master.


Before starting on the exercises, we discussed a couple of points of confusion in the chapter. Tom confessed to not fully understanding this paragraph on page 251:

This [recursive division] algorithm may be considered suboptimal since each multiplication operation also requires O(n) addition and subtraction operations. However, careful inspection reveals that the product 2 * q * y can be computed without any multiplication. Instead, we can rely on the value of this product in the previous recursion level, and use addition to establish its current value.

Clarification was not immediately forthcoming.

Tom was also unclear about the intended initial state of the heap in the “Improved Memory Allocation Algorithm” on page 254. A moment’s discussion indicated that page 255 clarifies this with freeList = heapBase; freeList.length = heapLength; freeList.next = null, i.e. the initial heap is one huge segment filling the available memory.
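That initial state is easy to sanity-check in Python, modelling RAM as a flat list. (A sketch only: heapBase = 2048 and heapLength = 14336 are the usual Hack values, used here purely for illustration, and the length/next fields are stored at offsets 0 and 1 of the segment.)

```python
# Sketch of the initial free list: the whole heap is one segment.
HEAP_BASE = 2048      # illustrative Hack heap base
HEAP_LENGTH = 14336   # illustrative Hack heap length

ram = [0] * (HEAP_BASE + HEAP_LENGTH)

free_list = HEAP_BASE            # freeList = heapBase
ram[free_list] = HEAP_LENGTH     # freeList.length = heapLength
ram[free_list + 1] = 0           # freeList.next = null (0 in Jack)
```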

We briefly discussed the best way to approach the Chapter 12 project: did we want to implement the entire Jack OS? If not, which parts, and in what order? The group was generally most interested in the memory allocator, which was the first part of the project anyway, so we decided to follow instructions and see how far through the project we got.


We refreshed our understanding of the VM emulator by mucking about with it on the command line. Leo eventually tired of this, and retaliated by writing a makefile to compile the skeleton Memory.jack into Memory.vm and run MemoryTest.tst through the emulator. This gave us a meaningful failure message and allowed us to proceed with the implementation of the Memory class.

We found it easy to implement the “Basic Memory Allocation Algorithm” (pages 253–254) by just incrementing the free pointer in alloc and doing nothing in deAlloc. We considered upgrading to the “Improved Memory Allocation Algorithm” — which actually frees memory when not in use — but there wasn’t enough collective enthusiasm to implement something more complicated before it was required.
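In Python terms the whole “basic” algorithm amounts to little more than this. (A sketch: the heap bounds are the usual Hack ones, and the out-of-memory check is our own addition, not something the book’s basic algorithm performs.)

```python
HEAP_BASE = 2048    # illustrative Hack heap bounds
HEAP_END = 16384

free = HEAP_BASE    # next unused address

def alloc(size):
    """Carve a block off the heap by bumping the free pointer."""
    global free
    block = free
    free = free + size
    if free > HEAP_END:
        raise MemoryError("heap exhausted")  # our addition, for safety
    return block

def de_alloc(obj):
    """Deliberately a no-op: the basic algorithm never reclaims memory."""
    pass
```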

It was also straightforward to write poke and peek by following the “hacking trick” on page 270, namely assigning zero to an Array variable and then using array access to read and write memory directly. This provided a glimpse into Jack’s laid-back typing discipline: method calls rely on the type of a variable in order to decide what code to execute, but other operations (e.g. assignment) don’t care at all.
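Python has no equivalent of pointing an Array variable at address 0, but the effect can be imitated by aliasing a list that stands in for the whole address space — a loose analogue of the trick, not the Jack code itself:

```python
RAM_SIZE = 32768
ram = [0] * RAM_SIZE

memory = ram  # plays the role of the Array variable "assigned" address 0

def peek(address):
    """Read the word at the given address."""
    return memory[address]

def poke(address, value):
    """Write a word directly to the given address."""
    memory[address] = value
```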


After some housekeeping to avoid duplication in the test directories and makefile, we added the skeleton Array implementation and quickly fleshed it out.

This again involved appreciating Jack’s indifference towards types. As page 265 points out, Array.new is unusual in being a function rather than a constructor; it doesn’t automatically allocate an Array instance, it just returns the int address that it gets from Memory.alloc, which the caller then treats as a legitimate Array. In particular this means that the “instance method” dispose is really called directly on an int, so Memory.deAlloc(this) is the appropriate implementation. (We think. It’s hard to tell whether this really works, because our Memory.deAlloc doesn’t do anything.)


As we began running short of time, we decided to pick on divide as the most potentially interesting part of the Math class, particularly given the unanswered question about it that had come up earlier.

That question was how to implement this pseudocode…

divide(x, y):
  // Integer part of x/y, where x >= 0 and y > 0
  if y > x return 0
  q = divide(x, 2 * y)
  if (x - 2 * q * y) < y
    return 2 * q
  else
    return 2 * q + 1

…without performing the multiplication 2 * q * y each time divide is recursively called. The book suggests this can be done with addition (see above) but doesn’t elaborate further.
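Transcribed directly into Python (a sketch that ignores Jack’s 16-bit overflow on 2 * y, which the book also glosses over), the pseudocode does compute the integer quotient:

```python
def divide(x, y):
    """Integer part of x / y, where x >= 0 and y > 0."""
    if y > x:
        return 0
    q = divide(x, 2 * y)        # halve the problem by doubling y
    if (x - 2 * q * y) < y:     # the multiplication we'd like to avoid
        return 2 * q
    else:
        return 2 * q + 1
```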

We decided to investigate by drawing on the whiteboard a kind of stack trace of the values of x, y, q, 2 * q * y, x - (2 * q * y) < y and divide’s result when calculating divide(95, 3):

| x  | y  | q  | 2 * q * y | x - (2 * q * y) < y | divide(x, y) |
|----|----|----|-----------|---------------------|--------------|
| 95 | 3  | 15 | 90        | no                  | 31           |
| 95 | 6  | 7  | 84        | no                  | 15           |
| 95 | 12 | 3  | 72        | no                  | 7            |
| 95 | 24 | 1  | 48        | no                  | 3            |
| 95 | 48 | 0  | 0         | no                  | 1            |
| 95 | 96 |    |           |                     | 0            |

This made it much clearer to us that 2 * q * y at a given level of recursion is the sum of the next level’s y and the next level’s 2 * q * y whenever the next level’s result is odd — as it is at every recursive level of this trace (e.g. 84 = 12 + 72) — and is simply equal to the next level’s 2 * q * y when that result is even, as in the base case. This suggested an alternative pseudocode implementation:

divide(x, y):
  // returns [x / y, (x / y) * y], so the caller's 2 * q * y is
  // exactly the second value handed back by the recursive call
  if y > x return [0, 0]
  q, q2y = divide(x, 2 * y)
  if (x - q2y) < y
    return [2 * q, q2y]
  else
    return [2 * q + 1, q2y + y]
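The scheme can be checked in Python (a sketch: the second component of each returned pair is maintained as result * y, so the recursive call hands back exactly the 2 * q * y its caller needs, and y is added back only on the odd branch):

```python
def divide(x, y):
    """Return (x // y, (x // y) * y); 2 * q * y is tracked by addition."""
    if y > x:
        return (0, 0)
    q, q2y = divide(x, 2 * y)   # q2y == q * (2 * y) == the 2 * q * y we need
    if (x - q2y) < y:
        return (2 * q, q2y)           # result 2q: product is unchanged
    else:
        return (2 * q + 1, q2y + y)   # result 2q + 1: product grows by y
```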

By this time we were tired, triumphant, and half an hour over time, so we decided to stop for the night rather than implement this in Jack. As we left, we expressed reservations about the efficiency of the memory allocations and deallocations required to support the multiple return values in this new divide.



Thanks to Leo and Geckoboard for hosting, and thanks to Murray, whose heroic efforts finally propelled us beyond Chapter 11.