
docs: remove references to SPDY (#1269)

* docs: remove references to SPDY, change them to HTTP/2.0

Co-Authored-By: mpl <mathieu.lonjaret@gmail.com>
zenhack and mpl committed Jul 10, 2019
1 parent 8e63050 commit 6db9cb8a080cc169421b5b1476f5e63f2a9cabaa
Showing with 7 additions and 7 deletions.
  1. +1 −1 doc/protocol/blob-stat.md
  2. +6 −6 doc/protocol/blob-upload.md
doc/protocol/blob-stat.md
@@ -2,7 +2,7 @@

This document describes the "batch stat" API end-point, for checking
the size/existence of multiple blobs when the client and/or server do
-not support SPDY or HTTP/2.0. See [blob-upload](blob-upload.md) for more
+not support HTTP/2.0. See [blob-upload](blob-upload.md) for more
background.

Notably: the HTTP method may be GET or POST. GET is more correct but
doc/protocol/blob-upload.md
@@ -13,17 +13,17 @@ Uploading a single blob is done in two parts:


When uploading multiple blobs (the common case), the fastest option
-depends on whether or not you're using a modern HTTP transport
-(e.g. SPDY). If your client and server don't support SPDY, you want
-to use the batch stat and batch upload endpoints, which hopefully can
-die when the future finishes arriving.
+depends on whether or not you're using HTTP/2.0. If your client and
+server don't support HTTP/2.0, you want to use the batch stat and batch
+upload endpoints, which hopefully can die when the future finishes
+arriving.

-If you have SPDY, uploading 100 blobs is just like uploading 100
+If you have HTTP/2.0, uploading 100 blobs is just like uploading 100
single blobs, but all at once. Send all your 100 HEAD requests at
once, wait 1 RTT for all 100 replies, and then send the <= 100
PUT requests with the blobs that the server didn't have.

-If you DON'T have SPDY on both sides, you want to use the batch stat
+If you DON'T have HTTP/2.0 on both sides, you want to use the batch stat
and batch upload endpoints, described below.

## Preupload request:

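The updated blob-upload.md text above describes the HTTP/2.0 path: send all the HEAD (stat) requests at once, wait one RTT, then PUT only the blobs the server doesn't already have. Below is a minimal Go sketch of that flow. The `base + "/" + ref` URL layout and the `UploadMissing` helper are hypothetical illustrations, not Perkeep's actual client API or endpoint layout.

```go
// Sketch of the HTTP/2.0 flow described above: one HEAD per blob ref, all
// sent at once, then PUT only the blobs the server didn't already have.
// The URL layout (base + "/" + ref) is hypothetical, not Perkeep's real API.
package blobupload

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
)

// UploadMissing stats every blob with a HEAD request and uploads the ones
// the server reports as missing (404).
func UploadMissing(client *http.Client, base string, blobs map[string][]byte) error {
	missing := make(chan string, len(blobs))

	// Send all HEAD requests concurrently; over HTTPS, Go multiplexes them
	// onto a single HTTP/2 connection, so this costs roughly one RTT.
	var wg sync.WaitGroup
	for ref := range blobs {
		wg.Add(1)
		go func(ref string) {
			defer wg.Done()
			resp, err := client.Head(base + "/" + ref)
			if err != nil {
				missing <- ref // treat errors as "unknown" and try the upload
				return
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusNotFound {
				missing <- ref
			}
		}(ref)
	}
	wg.Wait()
	close(missing)

	// PUT only the blobs the server didn't have.
	for ref := range missing {
		req, err := http.NewRequest(http.MethodPut, base+"/"+ref, bytes.NewReader(blobs[ref]))
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err != nil {
			return err
		}
		resp.Body.Close()
		if resp.StatusCode/100 != 2 {
			return fmt.Errorf("upload of %s: %s", ref, resp.Status)
		}
	}
	return nil
}
```

Without HTTP/2.0 each of those HEAD requests would cost its own round trip (or its own connection), which is why the doc keeps the batch stat and batch upload endpoints described in the "Preupload request" section for that case.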