PMP is a Hypermedia API


APIs Are Living, Breathing Organisms

To foster innovation, the PMP API must be designed in a way that enables iterative evolution. At the same time, we have to be careful with any change: there will be hundreds of API consumers, implemented with dozens of client libraries. Any change can end up being backwards- or forwards-incompatible, breaking consumers, interrupting business and establishing a perception of poor reliability. There is no easy way to coordinate a change across hundreds of consumers. Can we ever change anything in such a scenario?

Iterative evolution and reliability often seem to run counter to each other. How do we achieve both?

A problem with most APIs is the tight coupling between the API and its clients ("consumers"). With most APIs, even the ones we call "RESTful" and love and cherish and learn from, consumers know the exact URLs to hit for certain information and the exact JSON, XML, etc. they will have to parse. What happens if we need to change these URLs or alter the output, even slightly? Consumers break, and that is bad.

Many "simple" APIs try to address this with versioning: accumulate enough changes and release a new "major" version of the API when ready. The problem is that legacy API consumers still need to update to the new API format, so even if we keep the old version around, versioning just kicks the proverbial can down the road without addressing the root problem. Versioning also puts a huge burden on the API team to maintain multiple versions. And since consumers have to rewrite their code for each major version, the API cannot change often, so progress slows down.

The root problem is that, in the scenario described above, API clients are too tightly coupled to the API server: the exact semantics of API endpoints and input/output formats are hardcoded into them. That information must be decoupled and moved to the server. It is the server's responsibility to provide both the data and the meta-information about its behavior.

In a non-scientific view, I like to compare this problem to an analogy from object-oriented programming. The C programming language has a versatile data structure: the enum. A lot can be modeled with enums, compared to simple variables, but an enum falls short of being an object: it carries only static data and has no behaviors. The huge leap from enums to objects (and, arguably, from C to C++) was encapsulating data together with behavior.

Hypermedia APIs behave like objects: they carry both data (Hypermedia documents) and behavior (Affordances).
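
To make the analogy concrete, here is a minimal TypeScript sketch; the `StoryState` and `Story` names are purely illustrative. The enum carries only static data, while the class encapsulates data together with behavior -- exactly what a hypermedia document does with its affordances.

```typescript
// A data-only enum next to a class that couples data with behavior.
// All names here are illustrative, not part of any real PMP schema.

enum StoryState { // an enum carries static data and nothing else
  Draft,
  Published,
  Retracted,
}

class Story { // a class carries data plus the behavior that acts on it
  constructor(public title: string, public state: StoryState) {}

  // Behavior travels with the data, the way affordances travel
  // with a hypermedia representation.
  publish(): void {
    this.state = StoryState.Published;
  }
}

const story = new Story("Morning headlines", StoryState.Draft);
story.publish(); // the object itself tells us what we can do with it
```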

This is how Roy Fielding describes Affordances:

When I say Hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user obtains choices and selects actions (slide #50). -- Slide Presentation on REST (Fielding, 2008)

The subject of Hypermedia is very extensive and well beyond the scope of the PMP Architecture documentation. Thankfully, a lot has been written and presented on the subject. My personal favorite is Mike Amundsen's book.

For the purposes of PMP, it suffices to say that by designing the PMP API as a proper Hypermedia API we remove the tight coupling between the consumer and the API, drastically decrease the hardcoded knowledge about API behavior present in the consumer, and give the API significantly more flexibility to evolve.
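
As an illustration of what "data plus behavior" can look like on the wire, here is one possible shape for a hypermedia response. This is a generic, HAL-like sketch, not PMP's actual media type; the URLs and link-relation names below are assumptions.

```typescript
// Illustrative shape for a hypermedia response: payload plus affordances.
// The URLs and rel names are hypothetical, not PMP's actual format.

interface Link {
  rel: string;  // the link relation: the stable part of the contract
  href: string; // the URL, which the server remains free to change
}

interface HypermediaDoc {
  data: Record<string, unknown>; // the payload itself
  links: Link[];                 // the affordances
}

const programDoc: HypermediaDoc = {
  data: { title: "All Things Considered" },
  links: [
    { rel: "self", href: "https://api.example.org/programs/atc" },
    { rel: "episodes", href: "https://api.example.org/programs/atc/episodes" },
  ],
};
```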

What's the catch?

So, if Hypermedia APIs are awesome, why aren't all APIs Hypermedia?

There are many reasons, some having to do with a lack of understanding and evangelism, some with misconceptions, some with corporate lobbying for alternative standards that tried to solve the same problems with different, complicated approaches and have since failed, and everything in between.

One thing to note, however, is that Hypermedia APIs require somewhat smarter consumers than a typical API client that mostly just makes HTTP calls and parses JSON or XML.

In the case of a Hypermedia API, when implementing the consumer (API client) we need to put extra effort into ensuring that API behaviors are not hardcoded and that the consumer does indeed obtain information about affordances from the API itself.
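
Here is a minimal sketch of such a consumer, reusing the illustrative `HypermediaDoc` shape from above: the only URL it hardcodes is the API root, and every other URL is discovered from link relations at runtime.

```typescript
// Fetch and parse a hypermedia document, failing loudly on HTTP errors.
async function fetchDoc(url: string): Promise<HypermediaDoc> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return (await res.json()) as HypermediaDoc;
}

// Look up an affordance by its link relation, never by a hardcoded URL.
function resolve(doc: HypermediaDoc, rel: string): string {
  const link = doc.links.find((l) => l.rel === rel);
  if (!link) throw new Error(`No affordance with rel "${rel}"`);
  return link.href;
}

// Usage: start at the API root, the one URL the consumer is allowed to know.
async function main(): Promise<void> {
  const root = await fetchDoc("https://api.example.org/");
  const episodes = await fetchDoc(resolve(root, "episodes"));
  console.log(episodes.data);
}
```

If the server later moves its episode listing to a new URL, this consumer keeps working as long as the "episodes" link relation is still served.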

Then there's caching. A typical interaction with a Hypermedia API starts by querying the root of the API and discovering its affordances. As with any hypermedia, a certain amount of URL traversal is involved if you are doing something down the chain of hyperlinks.

Let's take an over-simplified example: we want to query all audio from All Things Considered from yesterday. To figure out the endpoint URL for such a call, the API client may need to follow several URLs. It would be extremely inefficient to do so every time you need to fire that specific API query.
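
Sketched with the helpers above, that traversal might look like the following; every rel name and query parameter here is an assumption for illustration only.

```typescript
// Walk the chain of links from the API root down to the query we want.
// Each hop is a network round trip, which is what makes caching attractive.
async function yesterdaysAudio(): Promise<HypermediaDoc> {
  const root = await fetchDoc("https://api.example.org/");
  const programs = await fetchDoc(resolve(root, "programs"));
  const atc = await fetchDoc(resolve(programs, "all-things-considered"));
  const episodesUrl = resolve(atc, "episodes");
  // Three round trips just to learn one URL, before the real query fires.
  return fetchDoc(`${episodesUrl}?date=yesterday&type=audio`);
}
```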

We can look at API affordances much as we look at configuration for programming code: they are meta-information about the system. We use configuration in our programming code all the time! Do we parse configuration every time we need to do something? Not necessarily: if parsing configuration is expensive, we can parse it once and cache the result for a reasonable amount of time. Hypermedia API consumers are expected to do exactly the same thing -- figure out API affordances from the API itself and cache them for a reasonable amount of time to speed up runtime operation.
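
Continuing the sketch, a small TTL cache over resolved affordances plays the role of the parsed-once configuration; the 15-minute TTL is an arbitrary assumption.

```typescript
// Cache resolved affordances and re-discover them only after the TTL
// expires -- the same pattern as parsing configuration once and reusing it.
const TTL_MS = 15 * 60 * 1000; // assume 15 minutes is "reasonable"
const linkCache = new Map<string, { href: string; expires: number }>();

async function cachedResolve(rootUrl: string, rel: string): Promise<string> {
  const key = `${rootUrl}#${rel}`;
  const hit = linkCache.get(key);
  if (hit && hit.expires > Date.now()) return hit.href; // cache hit: no network

  const root = await fetchDoc(rootUrl); // cache miss: re-discover from the API
  const href = resolve(root, rel);
  linkCache.set(key, { href, expires: Date.now() + TTL_MS });
  return href;
}
```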

A properly programmed "smart" Hypermedia API consumer can be about as efficient as a "brute-force" consumer of a non-Hypermedia API, but only if it takes the care and caution to implement all the required caching.

The upside of "smart" consumers, however, is that they last longer. And, well -- they are smarter :)
