Proxy Architecture Considerations #78

Closed
arschles opened this issue Mar 22, 2018 · 3 comments
Labels
proposal (A proposal for discussion and possibly a vote), proxy (Work to do on the module proxy)

arschles commented Mar 22, 2018

Athens Proxy Architecture

The Athens repository builds three logical artifacts:

  1. The proxy server
  2. The registry server
  3. The (crude) CLI

This document is focused on discussing the systems architecture and challenges in the proxy.

Registry architecture will be covered in a different document.

Code layout and architecture, dependency management considerations, and discussion on the CLI are out of scope and may be covered in a separate document as well.

The Proxy Server

The proxy server has 2 major responsibilities:

  1. Cache modules locally, in some storage medium (e.g. disk, relational DB, MongoDB...)
  2. On a cache miss, fill the cache from an upstream

Local caching is achieved fairly simply by using existing storage systems. As I write this, we have disk-, memory-, and MongoDB-based storage (see https://github.com/gomods/athens/tree/master/pkg/storage), with relational DB support in progress.
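
To make the shape of that concrete, here's a minimal sketch of what such a storage backend could look like. The interface and names below are illustrative only and may not match what's actually in pkg/storage:

package proxy

import "context"

// Module holds one cached module@version entry: its .info, .mod and .zip payloads.
type Module struct {
    Info []byte // contents of the .info file
    Mod  []byte // contents of the go.mod file
    Zip  []byte // contents of the module zip archive
}

// Backend is a hypothetical interface the proxy could program against; each
// storage medium (disk, memory, MongoDB, ...) would provide an implementation.
type Backend interface {
    // List returns every cached version of module.
    List(ctx context.Context, module string) ([]string, error)
    // Get returns the cached entry for module@version, or an error on a cache miss.
    Get(ctx context.Context, module, version string) (*Module, error)
    // Save writes module@version to the cache. The cache is write-once, so
    // implementations should refuse to overwrite an existing entry.
    Save(ctx context.Context, module, version string, m *Module) error
}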

Challenges come up when we introduce cache misses and the cache filling mechanism. Our current plan on (verbal) record is to do the following when vgo requests a module that isn't in the cache:

  1. Return a 404
    1. This will effectively tell vgo to get the module upstream somewhere (e.g. a VCS or hosted registry)
  2. Start a background process to fetch the module from upstream

We have 2 challenges here:

  1. How to run background jobs
  2. How to serialize cache fills (to prevent a thundering herd)

Running Background Jobs

Just running background jobs in isolation (challenges will come later 😄) is relatively easy. We use the Buffalo framework, and it gives us built-in, pluggable background jobs support.

The two implementations documented on the Buffalo site are in-memory (i.e. a goroutine) and Redis (using gocraft/work). We can use the background jobs interface to submit jobs from the API server and consume them from a long-running worker process.
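
As a rough illustration, registering and enqueueing a cache-fill job through Buffalo's worker package could look something like the sketch below. The job name ("cache_fill") and its arguments are made up for this sketch; only the Register/Perform interface comes from Buffalo:

package proxy

import (
    "log"

    "github.com/gobuffalo/buffalo"
    "github.com/gobuffalo/buffalo/worker"
)

// registerCacheFill registers the handler that the long-running worker
// executes; with the default in-memory worker this runs in a goroutine,
// with the gocraft/work adapter it runs in a separate process fed by Redis.
func registerCacheFill(app *buffalo.App) error {
    return app.Worker.Register("cache_fill", func(args worker.Args) error {
        module, _ := args["module"].(string)
        version, _ := args["version"].(string)
        // here we'd download module@version from the upstream and save it
        log.Printf("filling cache for %s@%s", module, version)
        return nil
    })
}

// enqueueCacheFill submits a cache-fill job from the API server.
func enqueueCacheFill(app *buffalo.App, module, version string) error {
    return app.Worker.Perform(worker.Job{
        Queue:   "default",
        Handler: "cache_fill",
        Args: worker.Args{
            "module":  module,
            "version": version,
        },
    })
}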

Aside from the data storage system, the proxy will have two moving parts (the API server and the background workers). Since this software might be deployed by anyone on their infrastructure, a proxy operator is gonna have to figure out how to deploy the database, API server and background worker (and probably a queueing system, depending on the work queue type) on their own. I ❤️ Kubernetes, so I'd like to initially provide Docker images for our software (the API server and background worker), a Helm chart, and really good documentation on how to easily configure and deploy this thing to any Kubernetes cluster. Over time, I hope other folks will contribute documentation to help others deploy into other environments.

Serializing Cache Fills

Suppose you've just started up an Athens proxy (and everything it needs) and spread the word throughout your company. You have 1000 engineers in your organization and you expect all of them to be heavily using the proxy, so you start 50 API servers and 1000 background workers.

On day 0, all 1000 engineers set up GOPROXY and run vgo get github.com/gorilla/mux. They all get a 404, and vgo correctly downloads the package from GitHub (let's assume everyone has set up their .netrc properly so they don't get rate limited).

On the backend, the proxy has started 1000 background jobs, all fetching the same package from GitHub, and they all race to write it to the database. The problem is compounded along 2 dimensions: the number of engineers running vgo and the number of imports and transitive dependencies in the codebase.

We need to prevent this behavior!

Invariants

To start, I believe we should treat the cache as write-once (i.e. immutable). Once module@vX.Y.Z is written to the cache, it can't be deleted or modified in any way (except by manual operator intervention).

Next, I believe we should aim for these invariants, modulo manual intervention and failure modes (those will be covered later):

  • If N>1 requests come in for module@vX.Y.Z, we should start exactly zero or one background job between t0 (the first request) and tF (the time at which the cache entry is filled)
  • If the cache is already filled at t0, no background job should ever be started to fill module@vX.Y.Z
  • On a cache miss for module@vX.Y.Z, only one background job should ever be started between t0 and tF
  • No background job should ever be started to fill module@vX.Y.Z after tF

In order to maintain these invariants in our proxy, we'll need to coordinate on background jobs. We certainly need to support multi-node deployments (like the 1000 engineer scenario above), so we'll need to distribute the coordination mechanism.

Finally, I believe in adding the absolute least amount of complexity in order to get this job done. My proposal is below.

Distributed Coordination of Background Jobs

The immutable cache helps us here for two reasons:

  1. It speeds up our serialization protocol
  2. It simplifies our serialization protocol & code

Currently, when an API server gets a GET /module/@v/vX.Y.Z.{mod,zip,info}, it checks the cache and returns 404 if module@vX.Y.Z doesn't exist. It also starts up a background cache-fill job to fetch module@vX.Y.Z.

I propose that we keep that behavior. Note that the API server doesn't participate in any concurrency control protocol. I am limiting concurrency control entirely to background jobs. I suggest that we do this because the API is in the critical path of all vgo get operations (in proxy deployments). I want to keep this code as simple as possible.
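
For concreteness, a sketch of that handler behavior (only the .zip route, and with made-up helper names rather than the real Athens code) could look like this:

package proxy

import (
    "context"
    "net/http"

    "github.com/gobuffalo/buffalo"
)

// zipGetter is the one storage operation this handler needs (illustrative).
type zipGetter interface {
    GetZip(ctx context.Context, module, version string) ([]byte, error)
}

// moduleZipHandler serves GET /{module}/@v/{version}.zip: return the cached
// payload on a hit; on a miss, enqueue a background cache fill and return 404
// so vgo falls back to the upstream. Note that no concurrency control happens
// here; that's left entirely to the background workers.
func moduleZipHandler(store zipGetter, enqueue func(module, version string) error) buffalo.Handler {
    return func(c buffalo.Context) error {
        module := c.Param("module")
        version := c.Param("version")

        zip, err := store.GetZip(c, module, version)
        if err != nil {
            _ = enqueue(module, version) // fill the cache in the background
            return c.Error(http.StatusNotFound, err)
        }

        c.Response().Header().Set("Content-Type", "application/zip")
        _, err = c.Response().Write(zip)
        return err
    }
}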


On to background jobs. I propose that we add leases to protect individual module@version cache entries. Here's how that would look (in pseudocode):

if exists_in_db("module@vX.Y.Z") {
	exit()
}
// run_with_lease only runs the function (second parameter) if the lease for 
// "module@vX.Y.Z" was acquired. when the function exits, the lease is 
// given back up. If the lease couldn't be acquired, do nothing
run_with_lease("module@vX.Y.Z", {
	// get module metadata & zipfile from upstream
	module = download_module_from_upstream("module@vX.Y.Z")
	// put all module metadata & zipfile into the cache entry 
	insert_module_to_cache(module)
})
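
In Go, the run_with_lease building block could be sketched roughly like this, against a hypothetical LeaseStore interface (a distributed lock that could be backed by whatever datastore the deployment already runs); every name here is made up for illustration:

package proxy

import (
    "context"
    "time"
)

// LeaseStore is a hypothetical distributed-lock primitive. Acquire returns
// acquired == true (and a release function) only to the single caller that
// now holds the lease on key; everyone else gets acquired == false until the
// lease is released or its TTL expires.
type LeaseStore interface {
    Acquire(ctx context.Context, key string, ttl time.Duration) (release func() error, acquired bool, err error)
}

// runWithLease mirrors the run_with_lease pseudocode: run fn only if the
// lease on key was acquired, and give the lease back when fn returns. If
// someone else holds the lease, do nothing.
func runWithLease(ctx context.Context, leases LeaseStore, key string, ttl time.Duration, fn func(context.Context) error) error {
    release, acquired, err := leases.Acquire(ctx, key, ttl)
    if err != nil {
        return err
    }
    if !acquired {
        // another worker is already filling this cache entry
        return nil
    }
    defer func() { _ = release() }()
    return fn(ctx)
}

One simple way to implement Acquire is an insert into a collection with a unique index on the key (plus an expiry timestamp): whoever wins the insert holds the lease.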

We can then build on this protocol for fetching lists of modules (i.e. handling GET /module/@v/list requests):

if exists_in_db("list:module") {
	exit()
}
versions = []
run_with_lease("list:module", {
	// just get the list of versions from the upstream
	versions = download_versions_from_upstream("module")
	// put the versions list into the cache
	insert_module_list_to_cache(versions)
})
for version in versions {
	// start a cache-fill job (the previous pseudocode)
	enqueue_cache_filler("module@"+version)
}

In either case, if there's a failure, we can release the lease and retry the job. After we hit a maximum number of retries, we should write a "failed" message into the appropriate cache entry (the list or the actual module).
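
For illustration, that retry-then-give-up behavior could be wrapped like this. Retries are shown as an inline loop for brevity (in practice each retry would be a re-enqueued job after releasing the lease), and failureMarker/maxRetries are invented names:

package proxy

import "context"

// failureMarker is a hypothetical hook for recording that a cache entry could
// not be filled (e.g. a "failed" document stored alongside the module entry).
type failureMarker interface {
    MarkFailed(ctx context.Context, key string, lastErr error) error
}

const maxRetries = 5 // illustrative; this would be operator-configurable

// fillWithRetries attempts fill up to maxRetries times; if every attempt
// fails, it writes a "failed" marker into the appropriate cache entry so
// later requests (and operators) can see why the module never arrived.
func fillWithRetries(ctx context.Context, marks failureMarker, key string, fill func(context.Context) error) error {
    var lastErr error
    for attempt := 0; attempt < maxRetries; attempt++ {
        if lastErr = fill(ctx); lastErr == nil {
            return nil
        }
    }
    return marks.MarkFailed(ctx, key, lastErr)
}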

Open Questions

We've implemented an immutable cache here in the proxy, but we also have to consider that modules may be mutated upstream. I've included some example scenarios that could result in unexpected, non-repeatable builds:

Scenario: Version Deleted

  1. At time t0, someone requests mymod@v1.0.0 from the proxy
  2. The proxy returns 404 on the /list request
  3. vgo fetches the module from the upstream
  4. The proxy kicks off the list background job (which then kicks off cache-fill jobs)
  5. At time t1, v1.0.0 is deleted

Result: any environment that has access to the proxy builds properly; any that doesn't won't build

Discussion on whether modules are mutable has begun. Regardless of outcome, I believe that the proxy cache should be immutable, and require explicit intervention by operators to delete or mutate an individual module. This behavior helps deliver repeatable, correct builds to an organization using the proxy.

Scenario: Proxy Has Missing Module Version

  1. At time t0, someone requests mymod@v1.0.0 from the proxy
  2. The proxy returns 404 on the /list request
  3. vgo fetches the module from the upstream
  4. At time t1, the proxy kicks off the list background job
  5. At time t2, the proxy saves v1.0.0 as one of the versions in the versions cache entry
  6. At time t3, v1.0.0 is deleted from the upstream
  7. At time t4, the proxy kicks off the cache-fill job for v1.0.0, and cannot find the version upstream

Result: no observable difference between this and the previous scenario

Scenario: Version Mutated

  1. At time t0, someone requests mymod@v1.0.0 from the proxy
  2. The proxy returns 404
  3. At time t1 vgo properly falls back to the upstream
  4. At time t2, v1.0.0 is modified upstream
  5. At time t3, the cache-fill background job fills the cache with v1.0.0

Result: builds on the local machine use the v1.0.0 code from t1; future builds use the v1.0.0 code from t3.

Some of our integrity work may prevent this case.

Final Notes

The first scenario above requires us to make some "cultural" decisions about the Go module ecosystem. We'll first have to decide whether module versions "should" be mutable.

Personally, I don't think they should be. If someone decides to change or delete a module version (e.g. by deleting or deleting-and-recreating a Git tag), the proxy and registry (detailed in another document) should insulate dependent modules from the change.

We could solve the second and third scenarios by adding some coordination into the API server. Here's a very rough sketch of how that could look (a code sketch follows the list):

  1. The API server checks for mymod@v1.0.0 in the cache. If it finds it, it returns immediately
  2. If it doesn't find it, it checks for a lease on mymod@v1.0.0. If none exists, it starts the cache-fill job
  3. Wait for the lease to be released. If it was released successfully, check the cache for v1.0.0 and return it to the client
  4. If the lease expired, wait for a new lease to be created on v1.0.0 and go to step 3
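
A rough Go-flavored sketch of that flow, reusing the hypothetical zipGetter and LeaseStore shapes from the earlier sketches (to be clear, this is the approach I argue against below):

package proxy

import (
    "context"
    "time"
)

// getOrWait sketches the in-band coordination above: serve from the cache if
// possible; otherwise either take the lease and fill the cache ourselves, or
// wait for whoever holds it and poll until the entry (or a new lease) shows up.
func getOrWait(ctx context.Context, store zipGetter, leases LeaseStore, module, version string, fill func(context.Context) error) ([]byte, error) {
    key := module + "@" + version
    for {
        // step 1: check the cache
        if zip, err := store.GetZip(ctx, module, version); err == nil {
            return zip, nil
        }

        // step 2: no cached entry; try to take the lease and fill it ourselves
        release, acquired, err := leases.Acquire(ctx, key, time.Minute)
        if err != nil {
            return nil, err
        }
        if acquired {
            fillErr := fill(ctx)
            _ = release()
            if fillErr != nil {
                return nil, fillErr
            }
            continue // back to step 1 to read what we just wrote
        }

        // steps 3 and 4: someone else holds the lease; wait a bit, then
        // re-check the cache. If their lease expired, the next iteration
        // races to acquire a new one.
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(time.Second):
        }
    }
}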

I've mentioned a few times above that I don't think we should do this. It's much more complex to get right at scale, and if we can get away with saying "don't change or delete modules!" (at least at first), that makes more sense to me, culturally and technically.

cc/ @michalpristas @bketelsen

michalpristas commented Mar 22, 2018

Comment removed (edited) after I read Brian's document, which basically solved my problem.

michalpristas added the proxy label and removed a proxy label Mar 24, 2018
arschles added the proposal label Jul 20, 2018

michalpristas commented

@arschles can we close this?

arschles commented

@michalpristas yes!

Background for other readers: the proxy architecture is detailed in https://docs.gomods.io/design/proxy/ and as of #772 we've decided not to continue building a registry at the moment
