ARCHITECTURE.md


# Proxy Service Architecture

This directory contains technical documentation and diagrams for developers and operators of the fury proxy service, covering how to run and develop the service as a reliable, scalable, and observable proxy.

The source for many of the diagrams in this documentation can be viewed, edited, and updated using this Miro board.

## Service Workflows

### Proxy

*Proxy Workflow Conceptual Overview*

Clients make requests (e.g. API calls from the Fury webapp, other dApps, CLIs, or scripts) to a public API endpoint URL that maps to a public-facing load balancer in AWS. For any request to an API endpoint (such as the Ethereum RPC API) that can be handled by the proxy service, the load balancer forwards the request to an instance of the proxy service.

The default action the proxy service performs for each request it receives is to proxy the request to the configured URL of a load-balanced set of fury blockchain nodes that can best serve it (e.g. pruning nodes for requests for latest block data, or archive nodes for requests for historical block data).
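The routing decision described above can be sketched in Go. This is an illustrative assumption, not the service's actual implementation: the backend URLs, the "recent" window size, and the function name are all hypothetical.

```go
package main

import "fmt"

// recentBlockWindow is a hypothetical cutoff: heights within this many
// blocks of the chain tip are considered "latest" data.
const recentBlockWindow = 128

// chooseBackend sketches the routing rule: recent blocks go to pruning
// nodes, historical blocks go to archive nodes. URLs are placeholders.
func chooseBackend(requestedHeight, latestHeight int64) string {
	if latestHeight-requestedHeight <= recentBlockWindow {
		return "https://pruning.nodes.example" // pruning nodes serve latest data
	}
	return "https://archive.nodes.example" // archive nodes serve historical data
}

func main() {
	fmt.Println(chooseBackend(1_000_000, 1_000_010)) // recent height → pruning
	fmt.Println(chooseBackend(5, 1_000_010))         // historical height → archive
}
```

In practice this decision requires inspecting the request body (e.g. the block height parameter of a JSON-RPC call), which is exactly the introspection capability described in the next paragraph.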

The proxy functionality provides the foundation for all other proxy service features (e.g. logging, caching) by allowing full introspection and transformation of the original request and ultimate response returned to the client.

### API Observability

*API Observability Workflow Conceptual Overview*

For every request proxied by the proxy service, a request metric is created and stored in a Postgres table; these metrics can be aggregated over a time range to answer ad hoc questions such as:

- what methods take the longest time?
- what methods are called the most frequently?
- what is the distribution of requests per client IP?

## Design Goals

Below are the goals, ordered from most important to least, that developers should keep in mind when adding new features to the proxy service, to ensure that their changes do not prevent the service from doing the job that users (e.g. external clients of the API and operators of that API) depend on it for.

  1. High Availability

Because the proxy service handles every request for a given API endpoint, above all else the service should strive to be available for proxying requests to its configured backend origin(s), failing open or degrading gracefully whenever possible.
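"Failing open" can be sketched as: when a non-essential dependency (e.g. a cache) errors, the request is still proxied to the backend rather than being rejected. The function names and the cache behavior here are hypothetical, purely to illustrate the pattern.

```go
package main

import (
	"errors"
	"fmt"
)

// lookupCache simulates a cache dependency that is currently unavailable;
// the error stands in for a real infrastructure failure.
func lookupCache(key string) (string, error) {
	return "", errors.New("cache unavailable")
}

// handle sketches fail-open behavior: a cache error does not reject the
// request; it simply falls through to proxying the backend.
func handle(key string, proxyToBackend func(string) string) string {
	if cached, err := lookupCache(key); err == nil {
		return cached // cache hit: serve from cache
	}
	// fail open: continue serving rather than surfacing the cache error
	return proxyToBackend(key)
}

func main() {
	resp := handle("eth_blockNumber", func(k string) string {
		return "backend response for " + k
	})
	fmt.Println(resp)
}
```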

  1. Scale Multiplier

The proxy service should be a scalability multiplier (as opposed to a bottleneck), preferring simple and efficient implementations over complex, blocking, or bimodal ones.

  1. Transparency

Since errors in the proxy service have a potentially catastrophic impact on the availability of the API endpoint(s) being proxied, the proxy should be easy to debug, instrument, and monitor.

As John Gruber often notes, it matters not just what your priorities are, but what order they are in. For the proxy service specifically, that dictates that when adding new features, the implementation shouldn't introduce panics or abort the request chain if it encounters an error talking to an infrastructure component or parsing data.

## Service Component Deep Dives

  1. How the middleware stack works
  2. Database migrations
  3. AWS network topology worked example
  4. Request Metrics Database Partitioning
  5. Metric Compaction routine

## Editable Diagrams

The images present in this documentation can be edited and re-exported as changes are made to the proxy service.

## Technology Specific Development Notes

  1. Postgres Development