An HTTP proxy to improve usage of HTTP APIs
templar


HTTP APIs, they're everywhere. But they have a serious problem: their synchronous nature means that code using them stalls while waiting for a reply.

This means that your app's uptime and reliability are intertwined with those of whatever HTTP APIs you use, especially SaaS ones.

templar helps you control the problem.

It is an HTTP proxy that provides advanced features to help you make better use of, and tame, HTTP APIs.

Installation

Directly via go: go get github.com/vektra/templar/cmd/templar

Linux

Darwin

Windows

Usage

templar functions like an HTTP proxy, allowing you to use your favorite HTTP client to easily send requests through it. HTTP clients vary across languages, but many respect the http_proxy environment variable, which you can set to the address templar is running on.

Nearly all HTTP clients also offer a way to configure the proxy directly; check your client's docs.

HTTPS

Many HTTP APIs, especially those in SaaS products, are available only via HTTPS. This is a good thing, though it makes templar's job a little harder. We don't want the client to use CONNECT, because then templar can't provide any value. So to interact with these APIs, use the X-Templar-Upgrade header: configure your client to talk to the API over plain http but include X-Templar-Upgrade: https, and templar will still manage your requests while talking to the https service!

Examples

Do a request through templar, no timeout, no caching:

curl -x http://localhost:9224 'http://api.openweathermap.org/data/2.5/weather?q=Los+Angeles,CA'

Now add some caching in, caching the value for a minute at a time:

curl -x http://localhost:9224 -H "X-Templar-Cache: eager" -H "X-Templar-CacheFor: 1m" 'http://api.openweathermap.org/data/2.5/weather?q=Los+Angeles,CA'

Features

Timeouts

It's important to use timeouts when accessing a synchronous API like an HTTP endpoint. It's not uncommon for upstream APIs to impose no timeout of their own when fulfilling a request, so one typically needs to be set on the client side. Effective use of timeouts on these APIs will improve the robustness of your own system.

For a great discussion of this, check out Jeff Hodges' thoughts on the topic:

At present, templar does not enforce a default timeout; one must be set per request via the X-Templar-Timeout header. The format is documented below under Duration format.

Request collapsing

Say that your app hits an HTTP endpoint at http://isitawesomeyet.com. When you send those HTTP requests through templar, it will reduce the number of requests to the external service to the bare minimum by combining them. If a request comes in while another request to the same endpoint is still in flight, templar combines the two and serves the same data to both. This improves response times and reduces load on upstream servers.

Caching

Templar can, if requested, cache upstream requests. By setting the X-Templar-Cache header to either fallback or eager, templar will cache responses to the endpoint and serve them back.

fallback will only use the cache if accessing the endpoint times out. eager will serve from the cache first when a value exists, and repopulate from the endpoint when needed.

The X-Templar-CacheFor header controls how long a cached value will be used. See Duration format below for how to specify the time.

There are 4 caches available presently:

  • Memory (the default)
  • Memcache
  • Redis
  • Groupcache

The latter 3 are used only if configured on the command line.

In the future, the plan is to name the caches and allow requests to say which caching backend they'd like to use. Currently they all use the same one.

Stats generation

Tracking which APIs are used and how well they're performing is critical to understanding your system. When requests flow through templar, it can generate metrics about those requests and send them to statsd.

Just specify a statsd host via -statsd and templar will start sending them.

We'll support more metrics backends in the future.

Request categorization

Not all of these features are appropriate for every request; request collapsing, for instance. So templar includes a categorizer to identify requests that it should apply additional handling to. It classifies a request as stateless or stateful. If it is stateless, then features like request collapsing and caching can be used.

By default, only GET requests are treated as stateless. The X-Templar-Category header allows the user to explicitly specify the category. The 2 valid values are stateful and stateless.

Again, a stateless request is subject to the following additional handling:

  • Request collapsing
  • Caching

Duration format

A number of headers take time durations, for instance, 30 seconds. These use a simple "(number)(unit)" format, so for 1 second use 1s, and for 5 minutes use 5m. Supported units are: ns, us, ms, s, m, and h.

Control Headers

Templar uses a number of headers to control how the requests are processed.

X-Templar-Cache

Possible values:

  • eager: Return a value from the cache before checking upstream
  • fallback: Return a value from the cache only if the upstream has issues

X-Templar-CacheFor

When caching, how long to cache the value for. If caching is requested and this isn't set, a default is used.

X-Templar-Cached

Set on responses that are served from the cache.

X-Templar-Category

Possible values:

  • stateless: Process the request as stateless
  • stateful: Process the request as stateful

X-Templar-Timeout

Specifies how long to wait for the response before giving up.

X-Templar-TimedOut

Set to true on a response when the request timed out.

X-Templar-Upgrade

Possible values:

  • https: When connecting to the upstream, switch to https

Future features

  • Automatic caching based on HTTP Expire headers
  • Request throttling
  • Multiple active caching backends
  • Request stream inspection
  • Fire-and-forget requests
  • Return response via AMQP