
Modular Patterns and Practices

In this chapter we’ll take a look at some of the latest language features and how we can leverage those in our programs while reducing complexity in the process. We’ll also analyze concrete coding patterns and conventions that can help us develop simple alternatives to otherwise complex problems.

5.1 Leveraging Modern JavaScript

When used judiciously, the latest JavaScript features can be of great help in reducing the amount of code whose sole purpose is to work around language limitations. This increases signal — the amount of valuable information that can be extracted from reading a piece of code — while eliminating boilerplate and repetition.

5.1.1 Template Literals

Before ES6, the JavaScript community came up with half a dozen ways of arriving at multi-line strings: from strings chained with \ escape characters or + concatenation operators, to using Array#join, or resorting to the string representation of comments in a function — all merely for multi-line support.

Further, interpolating variables into a plain string isn’t possible, but that’s easily circumvented by concatenating them with one or more strings.

'Hello ' + name + ', I\'m Nicolás!'

Template literals arrived in ES6 and made multi-line strings a native feature of the language, without the need for any clever hacks in user-space.

Unlike strings, with template literals we can interpolate expressions using a streamlined syntax. They involve less escaping, too, thanks to using backticks instead of single or double quotation marks, which appear more frequently in English text.

`Hello ${ name }, I'm Nicolás!`

Besides these improvements, template literals also offer the possibility of tagged templates. You can prefix the template with a custom function that transforms the template’s output, enabling use cases like input sanitization, formatting, or anything else.

As an illustrative example, the following function could be used for the sanitization use case mentioned above. Any expressions interpolated into a template go through the insane function from a library by the same name, which strips out unsafe bits of HTML — tags, attributes, or whole trees — to keep user-provided strings honest.

import insane from 'insane'

function sanitize(template, ...expressions) {
  // reduce without a seed value: the accumulator starts out as template[0],
  // and i starts at 1, so expressions[i - 1] is the expression that precedes
  // each remaining part of the template
  return template.reduce((accumulator, part, i) => {
    return accumulator + insane(expressions[i - 1]) + part
  })
}

In the following example we embed a user-provided comment as an interpolated expression in a template literal, and the sanitize tag takes care of the rest.

const comment = 'exploit time! <iframe src="http://evil.corp"></iframe>'
const html = sanitize`<div>${ comment }</div>`
console.log(html)
// <- '<div>exploit time! </div>'

Whenever we need to compose a string using data, template literals are a terse alternative to string concatenation. When we want to avoid escaping single or double quotes, template literals can help. The same is true when we want to write multi-line strings.

In every other case — when there’s no interpolation, escaping, or multi-line needs — the choice comes down to a mere matter of style. In the last chapter of Practical Modern JavaScript, "Practical Considerations", I advocated[1] in favor of using template literals in every case. There were a few reasons, but here are the two most important ones: convenience, so that you don’t have to convert a string back and forth between a single-quoted string and a template literal depending on its contents; and consistency, so that you don’t have to stop and think about which kind of quotation mark — single, double, or backtick — to use each time. Template literals may take some time to get accustomed to: we’ve used single-quoted strings for a long time, and template literals have only been around for a little while. You or your team might prefer sticking with single-quoted strings, and that’s perfectly fine too.

Note

When it comes to style choices, you’ll rarely face problems if you let your team come to a consensus about the preferred style choice and later enforce that choice by way of a lint tool like ESLint. It’s entirely valid to stick with single-quoted strings and only use template literals when deemed absolutely necessary, if that’s what most of the team prefers.

Using a tool like ESLint and a continuous integration job to enforce its rules means nobody has to perform the time-consuming job of keeping everyone in line with the house style. When tooling enforces style choices, discussions about those choices won’t crop up as often in discussion threads while contributors are collaborating on units of work.

It’s important to differentiate between purely stylistic choices, which tend to devolve into contentious, time-sinking discussions, and choices where there’s more ground to be covered in the everlasting battle against complexity. While the former may make a codebase subjectively easier to read, or more aesthetically pleasing, it is only through deliberate action that we keep complexity in check. Granted, a consistent style throughout a codebase can help contain complexity, but the exact style is unimportant as long as we enforce it consistently.

5.1.2 Destructuring, Rest, and Spread

The destructuring, rest, and spread features came into effect in ES6. These features accomplish a number of different things, which we’ll now discuss.

Destructuring helps us indicate the fields of an object that we’ll be using to compute the output of a function. In the following example, we destructure a few properties from a ticker variable, and then combine that with a ...details rest pattern containing every property of ticker that we haven’t explicitly named in our destructuring pattern.

const { low, high, ask, ...details } = ticker

When we use destructuring methodically and near the top of our functions, or — even better — in the parameter list, we are making it obvious what the exact contract of our function is in terms of inputs.

Deep destructuring offers the ability to take this one step further, digging as deep as necessary into the structure of the object we’re accessing. Consider the following example, where we destructure the JSON response body with details about an apartment. When we have a destructuring statement like this near the top of a function that’s used to render a view, the aspects of the apartment listing that are needed to render it become self-evident at a glance. In addition, we avoid repetition when accessing property chains like response.contact.name or response.contact.phone.

const {
  title,
  description,
  askingPrice,
  features: {
    area,
    bathrooms,
    bedrooms,
    amenities
  },
  contact: {
    name,
    phone,
    email
  }
} = response

At times, a deeply destructured property name may not make sense outside of its context. For instance, we introduce name to our scope, but it’s the name of the contact for the listing, not to be confused with the name of the listing itself. We can clarify this by giving the contact’s name an alias, like contactName or responseContactName.

const {
  title,
  description,
  askingPrice,
  features: {
    area,
    bathrooms,
    bedrooms,
    amenities
  },
  contact: {
    name: responseContactName,
    phone,
    email
  }
} = response

When using : to alias, it can be difficult at first to remember whether the original name or the aliased name comes first. One helpful way to keep it straight is to mentally replace : with the word "as". That way, name: responseContactName would read as "name as responseContactName".

We can even list the same property twice, if we want to destructure some of its contents while also maintaining access to the object itself. For example, if we wanted to destructure the contact object’s contents, like we do above, but also keep a reference to the whole contact object, we can do the following:

const {
  title,
  description,
  askingPrice,
  features: {
    area,
    bathrooms,
    bedrooms,
    amenities
  },
  contact: responseContact,
  contact: {
    name: responseContactName,
    phone,
    email
  }
} = response

Object spread helps us create a shallow copy of an object using a little native syntax. We can also combine object spread with our own properties, so that we create a copy that also overwrites the values in the original object we’re spreading.

const faxCopy = { ...fax }
const newCopy = { ...fax, date: new Date() }

This allows us to create slightly modified shallow copies of other objects. When dealing with discrete state management, this means we don’t need to resort to Object.assign method calls or utility libraries. While there’s nothing inherently wrong with Object.assign calls, the object spread syntax is easier to internalize: we mentally map its meaning back to Object.assign without even realizing it, and the code becomes easier to read because we’re dealing with less unabstracted knowledge.

Another benefit worth pointing out is that Object.assign() can cause accidents: if we forget to pass an empty object literal as the first argument for this use case, we end up mutating the object. With object spread, there is no way to accidentally mutate anything, since the pattern always acts as if an empty object was passed to Object.assign in the first position.
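
As a quick illustration of the difference, consider the following sketch, using a made-up fax object: passing fax as the first argument to Object.assign mutates it in place, while the spread version always starts from a fresh object literal.

const fax = { id: 24, paper: 'A4' }

// Object.assign mutates its first argument, so fax itself changes here
const oops = Object.assign(fax, { date: new Date() })
console.log(oops === fax)
// <- true

// object spread always copies onto a new object literal, leaving fax intact
const safe = { ...fax, date: new Date() }
console.log(safe === fax)
// <- false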

5.1.3 Striving for Simple const Bindings

If we use const by default, then the need to use let or var can be ascribed to code that’s more complicated than it should be. Striving to avoid those kinds of bindings almost always leads to better and simpler code.

In section 4.2.4 we looked into the case where a let binding is assigned a default value, followed immediately by conditional statements that might change the contents of the variable binding.

let type = 'contributor'
if (user.administrator) {
  type = 'administrator'
} else if (user.roles.includes('edit_articles')) {
  type = 'editor'
}

Most reasons why we may need to use let or var bindings are variants of the above and can be resolved by extracting the assignments into a function where the returned value is then assigned to a const binding. This moves the complexity out of the way, and eliminates the need for looking ahead to see if the binding is reassigned at some point in the code flow later on.

const type = getUserType(user)

function getUserType(user) {
  if (user.administrator) {
    return 'administrator'
  }
  if (user.roles.includes('edit_articles')) {
    return 'editor'
  }
  return 'contributor'
}

A variant of this problem is when we repeatedly assign the result of an operation to the same binding, in order to split it into several lines.

let values = [1, 2, 3, 4, 5]
values = values.map(value => value * 2)
values = values.filter(value => value > 5)
// <- [6, 8, 10]

An alternative would be to avoid reassignment, and instead use chaining, as shown next.

const finalValues = [1, 2, 3, 4, 5]
  .map(value => value * 2)
  .filter(value => value > 5)
// <- [6, 8, 10]

A better approach would be to create new bindings every time, computing their values based on the previous binding, and picking up the benefits of using const in doing so — where we can rest assured that the binding doesn’t change later in the flow.

const initialValues = [1, 2, 3, 4, 5]
const doubledValues = initialValues.map(value => value * 2)
const finalValues = doubledValues.filter(value => value > 5)
// <- [6, 8, 10]

Let’s move on to a more interesting topic: asynchronous code flows.

5.1.4 Navigating Callbacks, Promises, and Asynchronous Functions

JavaScript now offers several options when it comes to describing asynchronous algorithms: the plain callback pattern, promises, async functions, async iterators, async generators, plus any patterns offered by libraries consumed in our applications.

Each solution comes with its own set of strengths and weaknesses:

  1. Callbacks are typically a solid choice, but we often need to get libraries involved when we want to execute our work concurrently

  2. Promises might be hard to understand at first; they offer a few utilities like Promise#all for concurrent work (sketched after this list), but they can be hard to debug under some circumstances

  3. Async functions require a bit of understanding on top of being comfortable with promises, but they’re easier to debug and often result in simpler code, plus they can be interspersed with synchronous functions rather easily as well

  4. Iterators and generators are powerful tools, but there aren’t all that many practical use cases for them, so we must consider whether we’re using them because they fit our needs or just because we can.
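
To make the second and third points a bit more concrete, here’s a minimal sketch, assuming hypothetical promise-returning fetchUser and fetchPosts functions: Promise#all runs both tasks concurrently, while the async function awaits their combined result in code that reads almost like its synchronous counterpart.

// hypothetical promise-returning tasks, stubbed so the sketch can run on its own
const fetchUser = id => Promise.resolve({ id, name: 'Artemisa' })
const fetchPosts = id => Promise.resolve([{ title: 'Modular patterns' }])

async function getProfile(id) {
  // Promise.all starts both tasks at once and settles when both are fulfilled
  const [user, posts] = await Promise.all([
    fetchUser(id),
    fetchPosts(id)
  ])
  return { ...user, posts }
}

getProfile(23)
  .then(profile => console.log(profile))
  .catch(reason => console.error(reason))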

It could be argued that callbacks are the simplest mechanism, although a similar case could be made for promises now that so much of the language is built around them. In any case, consistency should remain the primary driving force behind how we decide which pattern to use. While it’s okay to mix and match a few different patterns, most of the time we should be using the same patterns again and again, so that our team can develop a sense of familiarity with the codebase, instead of having to take a guess whenever encountering an uncharted portion of the application.

Using promises and async functions inevitably involves casting callback-based code into these patterns. In the following example we make up a delay function that returns promises which settle after a provided timeout.

function delay(timeout) {
  const resolver = resolve => {
    setTimeout(() => {
      resolve()
    }, timeout)
  }
  return new Promise(resolver)
}
delay(2000).then(…)

A similar pattern would have to be used to consume functions taking a last argument that’s an error-first callback-style function in Node.js. Starting with Node.js v8.0.0, however, there’s a utility built-in that "promisifies" these callback-based functions so that they return promises.[2]

import { promisify } from 'util'
import { readFile } from 'fs'
const readFilePromise = promisify(readFile)

readFilePromise('./data.json', 'utf8').then(data => {
  console.log(`Data: ${ data }`)
})

There are libraries that do the same for the client side, one such example being bluebird, or we can create our own promisify. In essence, promisify takes the callback-based function we want to use in promise-based flows and returns a different — "promisified" — function. That function returns a promise in which we call the original function, passing along all the provided arguments plus our own callback, where we settle the promise after deciding whether it should be fulfilled or rejected.

// promisify.js
export default function promisify(fn) {
  return (...rest) => {
    return new Promise((resolve, reject) => {
      fn(...rest, (err, result) => {
        if (err) {
          reject(err)
          return
        }
        resolve(result)
      })
    })
  }
}

Using a promisify function, then, would be no different than the earlier example with readFile, except we’d be providing our own promisify implementation.

import promisify from './promisify'
import { readFile } from 'fs'
const readFilePromise = promisify(readFile)

readFilePromise('./data.json', 'utf8').then(data => {
  console.log(`Data: ${ data }`)
})

Casting promises back into a callback-based format is less involved, because we can add reactions to handle both the fulfillment and rejection results, and call done back, passing in the corresponding result where appropriate.

function unpromisify(p, done) {
  p.then(
    data => done(null, data),
    error => done(error)
  )
}
unpromisify(delay(2000), err => {
  //
})

Lastly, when it comes to converting promises to async functions, the language acts as a native compatibility layer, boxing every expression we await on into promises, so there’s no need for any casting at the application level.
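
Continuing with the delay function from before, a brief sketch of that compatibility layer follows: the async function awaits the promise returned by delay directly, and awaiting a plain value works too, because it gets boxed into an already-fulfilled promise.

async function run() {
  await delay(2000)
  const value = await 'ready' // non-promise values are boxed into fulfilled promises
  console.log(value)
  // <- 'ready'
}
run()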

We can apply our guidelines of what constitutes clear code to asynchronous code flows just as well, since there aren’t fundamental differences at play in the way we write these functions. Our focus should be on how these flows are connected together, regardless of whether they’re made up of callbacks, promises, or something else. When plumbing tasks together, one of the main sources of complexity is nesting. When several tasks are nested in a tree-like shape, we might end up with code that’s deeply nested. One of the best solutions to this readability problem is to break our flow into smaller trees, which are consequently shallower. We’ll have to connect these trees together by adding a few extra function calls, but we’ll have removed significant complexity when trying to understand the general flow of operations.
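
As a sketch of that last idea, assuming hypothetical readConfig, connect, and runMigrations steps that take callbacks, we can compare a deeply nested flow with one where every subtree is extracted into a small named function and connected at the top level.

// hypothetical callback-taking steps, stubbed so the sketch can run on its own
const readConfig = next => next({ url: 'localhost' })
const connect = (config, next) => next({ config })
const runMigrations = (db, next) => next()

// nested version: the entire flow lives in a single deep tree
readConfig(config => {
  connect(config, db => {
    runMigrations(db, () => {
      console.log('ready')
    })
  })
})

// flatter version: each subtree becomes a shallow named function
function initialize() {
  readConfig(connectToDatabase)
}
function connectToDatabase(config) {
  connect(config, migrate)
}
function migrate(db) {
  runMigrations(db, () => console.log('ready'))
}
initialize()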

5.2 Composition and Inheritance

Let’s explore how we can improve our application designs beyond what JavaScript offers purely at the language level. In this section we’ll discuss two different approaches to growing parts of a codebase: inheritance, where we scale vertically by stacking pieces of code on top of each other so that we can leverage existing features while customizing others and adding our own; and composition, where we scale our application horizontally by adding related or unrelated pieces of code at the same level of abstraction while keeping complexity to a minimum.

5.2.1 Inheritance through Classes

Up until ES6 introduced first-class syntax for prototypal inheritance to JavaScript, prototypes weren’t a widely used feature in user-land. Instead, libraries offered helper methods that made inheritance simpler, using prototypal inheritance under the hood, but hiding the implementation details from their consumers. Even though ES6 classes look a lot like classes in other languages, they’re syntactic sugar using prototypes under the hood, making them compatible with older techniques and libraries.

The introduction of the class keyword, paired with the React framework originally hailing classes as the go-to way of declaring stateful components, helped spark some love for a pattern that was previously quite unpopular in JavaScript. In the case of React, the base Component class offers lightweight state management methods, while leaving the rendering and lifecycle up to the consumer classes extending Component. When necessary, the consumer can also decide to implement methods such as componentDidMount, which allows for event binding after a component tree is mounted; componentDidCatch, which can be used to trap unhandled exceptions that arise during the component lifecycle; among a variety of other soft interface methods. These optional lifecycle hooks aren’t mentioned anywhere in the base Component class; they are instead confined to the rendering mechanisms of React. In this sense, we note that the Component class stays focused on state management, while everything else is offered up by the consumer.
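
Here’s a minimal sketch of that contract, using a made-up Clock component: the base Component class provides this.state and this.setState, while the optional componentDidMount and componentWillUnmount hooks, along with render, are supplied by the consumer.

import React, { Component } from 'react'

class Clock extends Component {
  constructor(props) {
    super(props)
    this.state = { time: new Date() }
  }
  componentDidMount() {
    // optional hook: start ticking once the component tree is mounted
    this.interval = setInterval(() => {
      this.setState({ time: new Date() })
    }, 1000)
  }
  componentWillUnmount() {
    // optional hook: clean up before the component is removed
    clearInterval(this.interval)
  }
  render() {
    return React.createElement('p', null, this.state.time.toLocaleTimeString())
  }
}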

Inheritance is also useful when there’s an abstract interface to implement and methods to override, particularly when the objects being represented can be mapped to the real world. In practical terms and in the case of JavaScript, inheritance works great when the prototype being extended is a good description of the object extending it: a Car is a Vehicle, but a Car is not a SteeringWheel: the wheel is just one aspect of the car.
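
In class syntax, that relationship might be sketched as follows, with made-up Vehicle and Car classes: Car overrides describe while inheriting everything else, and the steering wheel is modeled as a plain property rather than as an ancestor.

class Vehicle {
  constructor(wheels) {
    this.wheels = wheels
  }
  describe() {
    return `A vehicle with ${ this.wheels } wheels.`
  }
}

class Car extends Vehicle {
  constructor() {
    super(4)
    this.steeringWheel = { material: 'leather' } // an aspect of the car, not a parent
  }
  describe() {
    return `${ super.describe() } It is a car.`
  }
}

console.log(new Car().describe())
// <- 'A vehicle with 4 wheels. It is a car.'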

5.2.2 The Perks of Composition: Aspects and Extensions

With inheritance we can add layers of complexity to an object. These layers are meant to be ordered: we start with the least specific foundational bits of the object and build our way up to the most specific aspects of it. When we write code based on inheritance chains, complexity is spread across the different classes, but lies mostly at the foundational layers which offer a terse API while hiding this complexity away. Composition is an alternative to inheritance. Rather than building objects by vertically stacking functionality, composition relies on stringing together orthogonal aspects of functionality. In this sense, orthogonality means that the bits of functionality we compose together complement each other, but don’t alter one another’s behavior.

One way to compose functionality is additive: we could write extension functions, which augment existing objects with new functionality. In the following code snippet we have a makeEmitter function which adds flexible event handling functionality to any target object, providing them with an .on method, where we can add event listeners to the target object; and an .emit method, where the consumer can indicate a type of event and any number of parameters to be passed to event listeners.

function makeEmitter(target) {
  const listeners = {}

  target.on = (eventType, listener) => {
    if (!(eventType in listeners)) {
      listeners[eventType] = []
    }

    listeners[eventType].push(listener)
  }

  target.emit = (eventType, ...params) => {
    if (!(eventType in listeners)) {
      return
    }

    listeners[eventType].forEach(listener => {
      listener(...params)
    })
  }

  return target
}

const person = makeEmitter({
  name: 'Artemisa',
  age: 27
})

person.on('move', (x, y) => {
  console.log(`${ person.name } moved to [${ x }, ${ y }].`)
})

person.emit('move', 23, 5)
// <- 'Artemisa moved to [23, 5].'

This approach is versatile, helping us add event emission functionality to any object without the need for adding an EventEmitter class somewhere in the prototype chain of the object. This is useful in cases where you don’t own the base class, when the targets aren’t class-based, or when the functionality to be added isn’t meant to be part of every instance of a class: there are persons who emit events and persons that are quiet and don’t need this functionality.

Another way of doing composition, one that doesn’t rely on extension functions, is to use functional aspects instead, without mutating the target object. In the following snippet we do just that: we have an emitters map where we store target objects and map them to the event listeners they have, an onEvent function that associates event listeners to target objects, and an emitEvent function that fires all event listeners of a given type for a target object, passing the provided parameters. All of this is accomplished in such a way that there’s no need to modify the person object in order to have event handling capabilities associated with the object.

const emitters = new WeakMap()

function onEvent(target, eventType, listener) {
  if (!emitters.has(target)) {
    emitters.set(target, new Map())
  }

  const listeners = emitters.get(target)

  if (!listeners.has(eventType)) {
    listeners.set(eventType, [])
  }

  listeners.get(eventType).push(listener)
}

function emitEvent(target, eventType, ...params) {
  if (!emitters.has(target)) {
    return
  }

  const listeners = emitters.get(target)

  if (!listeners.has(eventType)) {
    return
  }

  listeners.get(eventType).forEach(listener => {
    listener(...params)
  })
}

const person = {
  name: 'Artemisa',
  age: 27
}

onEvent(person, 'move', (x, y) => {
  console.log(`${ person.name } moved to [${ x }, ${ y }].`)
})

emitEvent(person, 'move', 23, 5)
// <- 'Artemisa moved to [23, 5].'

Note how we’re using both WeakMap and Map here. Using a plain Map would prevent garbage collection from cleaning things up when target is only being referenced by Map entries, whereas WeakMap allows garbage collection to take place on the objects that make up its keys. Given we usually want to attach events to objects, we can use WeakMap as a way to avoid creating strong references that might end up causing memory leaks. On the other hand, it’s okay to use a regular Map for the event listeners, given those are associated with an event type string.

Let’s move on to deciding whether to use inheritance, extension functions, or functional composition: where each pattern shines, and when to avoid them.

5.2.3 Choosing between Composition and Inheritance

In the real world, you’ll seldom have to use inheritance except when connecting to specific frameworks you depend on, to apply specific patterns such as extending native JavaScript arrays, or when performance is of the utmost necessity. When it comes to performance as a reason for using prototypes, we should highlight the need to test our assumptions and measure different approaches before going all in on a pattern that might not be ideal to work with, for the sake of a performance gain we might not observe.

Decoration and functional composition are friendlier patterns because they aren’t as restrictive. Once you inherit from something, you can’t later choose to inherit from something else, unless you keep adding inheritance layers to your prototype chain. This becomes a problem when several classes inherit from a base class but they then need to branch out while still sharing different portions of functionality. In these cases and many others, using composition is going to let us pick and choose the functionality we need without sacrificing our flexibility.

The functional approach is a bit more cumbersome to implement than simply mutating objects or adding base classes, but it offers the most flexibility. By avoiding changes to the underlying target, we keep objects easy to serialize into JSON, unencumbered by a growing collection of methods, and thus more readily compatible across our codebase.

Furthermore, using base classes makes it a bit hard to reuse the logic at varying insertion points in our prototype chains. Using extension functions, likewise, makes it challenging to add similar methods that support slightly different use cases. Using a functional approach leads to less coupling in this regard, but it could also complicate the underlying implementation of the makeup for objects, making it hard to decipher how their functionality ties in together, tainting our fundamental understanding of how code flows and making debugging sessions longer than they need to be.

As with most things programming, your codebase benefits from a semblance of consistency. Even if you use all three patterns — and others — a codebase that uses half a dozen patterns in equal amounts is harder to understand, work with, and build on, than an equivalent codebase that instead uses one pattern for the vast majority of its code while using other patterns in smaller ways when warranted. Picking the right pattern for each situation and striving for consistency might seem at odds with each other, but this is again a balancing act. The trade-off is between consistency in the grand scale of our codebase versus simplicity in the local piece of code we’re working on. The question to ask is then: are we obtaining enough of a simplicity gain that it warrants the sacrifice of some consistency?

5.3 Code Patterns

Digging a bit deeper and into more specific elements of architecture design, in this section we’ll explore a few of the most common patterns for creating boundaries from which complexity cannot escape, encapsulating functionality, and communicating across these boundaries or application layers.

5.3.1 Revealing Module

The revealing module pattern has become a staple in the world of JavaScript. The premise is simple enough: expose precisely what consumers should be able to access, and avoid exposing anything else. The reasons for this are manifold. Preventing unwarranted access to implementation details reduces the likelihood of your module’s interface being abused for unsupported use cases that might bring headaches to both the module implementer and the consumer alike.

Explicitly avoid exposing methods that are meant to be private, such as a hypothetical _calculatePriceHistory method, which relies on the leading underscore as a way of discouraging direct access and signaling that it should be regarded as private. Avoiding such exposure prevents test code from accessing private methods directly, resulting in tests that make assertions solely about the interface and that can later be referenced as documentation on how to use the interface; it prevents consumers from monkey-patching implementation details, leading to more transparent interfaces; and it often results in cleaner interfaces, because the interface is all there is, and there are no alternative ways of interacting with the module through direct use of its internals.

JavaScript modules are of a revealing nature by default, making it easy for us to follow the revealing pattern of not giving away access to implementation details. Functions, objects, classes, and any other bindings we declare are private unless we explicitly decide to export them from the module.
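
As a brief sketch of that default, consider a made-up pricing module: only getPriceStatistics is exported, while calculatePriceHistory stays an implementation detail that neither consumers nor tests can reach directly.

// pricing.js
const historyLimit = 100

function calculatePriceHistory(prices) {
  // private: not exported, so it can change freely without breaking consumers
  return prices.slice(-historyLimit)
}

export function getPriceStatistics(prices) {
  const history = calculatePriceHistory(prices)
  const average = history.reduce((total, price) => total + price, 0) / history.length
  return { average, history }
}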

When we expose only a thin interface, our implementation can change largely without having an impact on how consumers use the module, nor on the tests that cover the module. As a mental exercise, always be on the lookout for aspects of an interface that should be turned into implementation details and extricated from the interface itself.

5.3.2 Object Factories

Even when using JavaScript modules and following the revealing pattern strictly, we might end up with unintentional sharing of state across our usage of a module. Incidental state might result in unexpected results from an interface: consumers don’t have a complete picture because other consumers are contributing changes to this shared state as well, sometimes making it hard to figure out what exactly is going on in an application.

If we were to move our functional event emitter code snippet, with onEvent and emitEvent, into a JavaScript module, we’d notice that the emitters map is now a lexical top-level binding for that module, meaning all of the module’s scope has access to emitters. This is what we’d want, because that way we can register event listeners in onEvent and fire them off in emitEvent. In most other situations, however, sharing persistent state across public interface methods is a recipe for unexpected bugs.

Suppose we have a calculator module that can be used to make basic calculations through a stream of operations. Even if consumers were supposed to use it synchronously and flush state in one fell swoop, without giving way for a second consumer to taint the state and produce unexpected results, our module shouldn’t rely on consumer behavior to provide consistent results. The following contrived implementation relies on local shared state, and would need consumers to use the module strictly as intended, making all of their add and multiply calls first and calling calculate only once, at the very end.

const operations = []
let state = 0

export function add(value) {
  operations.push(() => {
    state += value
  })
}

export function multiply(value) {
  operations.push(() => {
    state *= value
  })
}

export function calculate() {
  operations.forEach(op => op())
  return state
}

Here’s an example of how consuming the previous module could work.

import { add, multiply, calculate } from './calculator'
add(3)
add(4)
multiply(-2)
calculate() // <- -14

As soon as we tried to append operations in two places, things would start getting out of hand, with the operations array getting bits and pieces of unrelated computations, tainting our calculations.

// a.js
import { add, calculate } from './calculator'
add(3)
setTimeout(() => {
  add(4)
  calculate() // <- 14, an extra 7 because of b.js
}, 100)

// b.js
import { add, calculate } from './calculator'
add(2)
calculate() // <- 5, an extra 3 from a.js

A slightly better approach would get rid of the state variable, and instead pass the state around operation handlers, so that each operation knows the current state, and applies any necessary changes to it. The calculate step would create a new initial state each time, and go from there.

const operations = []

export function add(value) {
  operations.push(state => state + value)
}

export function multiply(value) {
  operations.push(state => state * value)
}

export function calculate() {
  return operations.reduce((result, op) =>
    op(result)
  , 0)
}

This approach presents problems too, however. Even though the state is always reset to 0, we’re treating unrelated operations as if they were all part of a whole, which is still wrong.

// a.js
import { add, calculate } from './calculator'
add(3)
setTimeout(() => {
  add(4)
  calculate() // <- 9, an extra 2 from b.js
}, 100)

// b.js
import { add, calculate } from './calculator'
add(2)
calculate() // <- 5, an extra 3 from a.js

Clearly, our contrived module is poorly designed, as its operations buffer should never be used to drive several unrelated calculations. We should instead expose a factory function that returns an object from its own self-contained scope, where all relevant state is shut off from the outside world. The methods on this object are equivalent to the exported interface of a plain JavaScript module, but state mutations are contained to instances that consumers create.

export function getCalculator() {
  const operations = []

  function add(value) {
    operations.push(state => state + value)
  }

  function multiply(value) {
    operations.push(state => state * value)
  }

  function calculate() {
    return operations.reduce((result, op) =>
      op(result)
    , 0)
  }

  return { add, multiply, calculate }
}

Using the calculator like this is just as straightforward, except that now we can do things asynchronously, and even if other consumers are making computations of their own, each one will have its own state, preventing data corruption.

import { getCalculator } from './calculator'
const { add, multiply, calculate } = getCalculator()
add(3)
add(4)
multiply(-2)
calculate() // <- -14

Even with our two-file example, we wouldn’t have any problems anymore, since each file would have its own atomic calculator.

// a.js
import { getCalculator } from './calculator'
const { add, calculate } = getCalculator()
add(3)
setTimeout(() => {
  add(4)
  calculate() // <- 7
}, 100)

// b.js
import { getCalculator } from './calculator'
const { add, calculate } = getCalculator()
add(2)
calculate() // <- 2

As we just showed, even when using modern language constructs and JavaScript modules, it’s not too hard to create complications through shared state. Thus, we should always strive to contain mutable state as close to its consumers as possible.

5.3.3 Event Emission

We’ve already explored at length the pattern of registering event listeners associated with arbitrary plain JavaScript objects and firing events of any kind, triggering those listeners. Event handling is most useful when we want to have clearly delineated side effects.

In the browser, for instance, we can bind a click event to a specific DOM element. When the click event fires, we might issue an HTTP request, render a different page, start an animation, or play an audio file.
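
A sketch of that kind of side effect, assuming a made-up .buy-button element and /api/purchases endpoint, could be as small as this.

document.querySelector('.buy-button').addEventListener('click', () => {
  // the click's side effect is clearly delineated: it issues an HTTP request
  fetch('/api/purchases', { method: 'POST' })
})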

Events are a useful way of reporting progress whenever we’re dealing with a queue. While processing a queue, we could fire a progress event whenever an item is processed, allowing the UI or any other consumer to render and update a progress indicator or apply a partial unit of work relying on the data processed by the queue.
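
As a sketch, reusing the makeEmitter extension function from the composition section with a made-up processQueue helper, progress could be reported like this.

function processQueue(items, worker) {
  const queue = makeEmitter({ items })
  // defer processing so that consumers get a chance to register listeners first
  setTimeout(() => {
    items.forEach((item, index) => {
      worker(item)
      queue.emit('progress', index + 1, items.length)
    })
    queue.emit('finish')
  }, 0)
  return queue
}

processQueue(['a', 'b', 'c'], item => item.toUpperCase())
  .on('progress', (done, total) => console.log(`${ done } of ${ total } processed`))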

Events also offer a mechanism to provide hooks into the lifecycle of an object. For example, the Angular view rendering framework used event propagation to enable hierarchical communication across separate components. This allowed Angular codebases to keep components decoupled from one another while still being able to react to each other’s state changes and interact.

Having event listeners allowed a component to receive a message, perhaps process it by updating its display elements, and then maybe reply with an event of its own, allowing for rich interaction without necessarily having to introduce another module to act as an intermediary.

5.3.4 Message Passing and the Simplicity of JSON

When it comes to ServiceWorker, web workers, browser extensions, frames, API calls, or WebSocket integrations, we might run into issues if we don’t plan for robust data serialization ahead of time. This is a place where using classes to represent data can break down, because we need a way to serialize class instances into raw data (typically JSON) before sending it over the wire, and, crucially, the recipient needs to decode this JSON back into a class instance. It’s the second part where classes start to fail, since there isn’t a standardized way of reconstructing a class instance from JSON. For example:

class Person {
  constructor(name, address) {
    this.name = name
    this.address = address
  }
  greet() {
    console.log(`Hi! My name is ${ this.name }.`)
  }
}

const rwanda = new Person('Rwanda', '123 Main St')

Although we can easily serialize our rwanda instance with JSON.stringify(rwanda), and then send it over the wire, the code on the other end has no standard way of turning this JSON back into an instance of our Person class, which might have a lot more functionality than merely a greet function. The receiving end might have no business deserializing this data back into the class instance it originated from, but in some cases there’s merit to having an exact replica object back on the other end. For example, to reduce friction when passing messages between a website and a web worker, both sides should be dealing in the same data structure. In such scenarios, simple JavaScript objects are ideal.

JSON — now[3] a subset of the JavaScript grammar — was purpose-built for this use case, where we often have to serialize data, send it over the wire, and deserialize it on the other end. Plain JavaScript objects are a great way to store data in our applications, offer frictionless serialization out of the box, and lead to cleaner data structures because we can keep logic decoupled from the data.

When the language on both the sending and receiving ends is JavaScript, we can share a module with all the functionality that we need around the data structure. This way, we don’t have to worry about serialization, since we’re using plain JavaScript objects and can rely on JSON for the transport layer. We don’t have to concern ourselves with sharing functionality either, because we can rely on the JavaScript module system for that part.
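
A sketch of that arrangement follows, using a made-up person module: the data stays a plain object that survives the JSON round trip unscathed, while the behavior lives in a module both ends can import. The transport between sender and receiver is elided here.

// person.js, shared by both ends of the message channel
export function greet(person) {
  console.log(`Hi! My name is ${ person.name }.`)
}

// sender.js
const rwanda = { name: 'Rwanda', address: '123 Main St' }
const message = JSON.stringify(rwanda)

// receiver.js
import { greet } from './person'
const person = JSON.parse(message)
greet(person)
// <- 'Hi! My name is Rwanda.'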

Armed with a foundation for writing solid modules based on your own reasoning, we now turn the page to operational concerns such as handling application secrets responsibly, making sure our dependencies don’t fail us, taking care of how we orchestrate build processes and continuous integration, and dealing with nuance in state management and the high-stakes decision-making around producing the right abstractions.


1. You can read a blog post I wrote about why template literals are better than strings at: https://mjavascript.com/out/template-literals. Practical Modern JavaScript (O’Reilly, 2017) is the first book in the Modular JavaScript series. You’re currently reading the second book of the same series.
2. Note also that, starting in Node.js v10.0.0, the native fs.promises interface can be used to access promise-based versions of the fs module’s methods.
3. Up until recently, JSON wasn’t — strictly speaking — a proper subset of ECMA-262. A recent proposal has amended the ECMAScript specification to consider bits of JSON that were previously invalid JavaScript to be valid JavaScript. Learn more at: https://mjavascript.com/out/json-subset.