
HTTP/2: Server Push support #4249

Open · Saphirim opened this issue Nov 27, 2018 · 10 comments

Support the HTTP/2 Server Push feature on top of the HTTP/2 implementation that is currently in preview and will most likely be GA with .NET Core 2.2.

HTTP/2 Server Push allows an HTTP/2-compliant server to send resources to an HTTP/2-compliant client before the client requests them. It is, for the most part, a performance technique that can be helpful in loading resources pre-emptively.
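
For illustration, here is a minimal sketch of what an opt-in push API could look like from application code. The IHttpResponsePushFeature interface and its PushAsync method are hypothetical and do not exist in ASP.NET Core today; only the surrounding MVC types are real:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

// Hypothetical feature: nothing like this ships in ASP.NET Core today.
public interface IHttpResponsePushFeature
{
    // Ask the server to emit a PUSH_PROMISE for the given path and then
    // stream that resource on a new server-initiated stream.
    ValueTask PushAsync(string path, IHeaderDictionary headers = null);
}

public class HomeController : Controller
{
    public async Task<IActionResult> Index()
    {
        // The feature would only be present on HTTP/2 connections where the
        // client has not disabled push via SETTINGS_ENABLE_PUSH.
        var push = HttpContext.Features.Get<IHttpResponsePushFeature>();
        if (push != null)
        {
            await push.PushAsync("/css/site.css");
            await push.PushAsync("/js/site.js");
        }

        return View();
    }
}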

davidfowl transferred this issue from aspnet/KestrelHttpServer on Nov 27, 2018

Tratcher (Member) commented Nov 27, 2018

No, server push will not be available in any AspNetCore servers for 2.2. We're still trying to gauge interest for 3.0.

Saphirim (Author) commented Nov 27, 2018

Interest here 😊 thx for your fast reply.

tpeczek commented Nov 27, 2018

Same here.

CShepartd commented Nov 28, 2018

If not 2.2, then 3.0 would be nice.

asbjornu commented Feb 1, 2019

🙋🏼‍♂️ I would love to have the ability to push resources over an established HTTP/2 connection with ASP.NET.

davidfowl (Member) commented Feb 1, 2019

It would help if you could detail how you would take advantage of this feature in real applications. Code samples help as well.

asbjornu commented Feb 1, 2019

I would like to use HTTP/2 Server Push to push related resources to a client: typically, when a “collection” resource has been requested, each “item” resource in the collection could be pushed alongside the “collection” resource, so they are already available once the client starts iterating over the collection to fetch the individual items.

The alternative (and, IMHO, sub-par) approach is to compound (“embed”, “transclude”, “include”, “expand”, etc.) all items within the collection resource, which makes caching of the collection resource hard and caching of the item resources impossible.

The relationship between the resources can also be seen as “parent” and “child”, “root” and “subresource”, etc., but the requirements are similar regardless of the relationship between the resources. Server Push could prove helpful in all of these cases.
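
As a rough sketch, a collection-style endpoint could trigger those pushes like this, assuming a hypothetical push feature with a PushAsync(path) method like the one sketched earlier in this thread (no such API exists in ASP.NET Core today; the IHeroRepository and model types are also made up to keep the sample self-contained):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Made-up model and repository types, purely for illustration.
public class Friend { public string Id { get; set; } }
public class Hero { public string Name { get; set; } public Friend[] Friends { get; set; } }
public interface IHeroRepository { Task<Hero> GetHeroAsync(string id); }

public class HeroesController : Controller
{
    private readonly IHeroRepository _repository;

    public HeroesController(IHeroRepository repository)
    {
        _repository = repository;
    }

    [HttpGet("/heroes/{id}")]
    public async Task<IActionResult> GetHero(string id)
    {
        var hero = await _repository.GetHeroAsync(id);
        if (hero == null)
        {
            return NotFound();
        }

        // Hypothetical push feature: emit a PUSH_PROMISE per "item" resource so
        // the client finds each friend in its cache when it follows the links.
        var push = HttpContext.Features.Get<IHttpResponsePushFeature>();
        if (push != null && Request.Headers.ContainsKey("Prefer-Push"))
        {
            foreach (var friend in hero.Friends)
            {
                await push.PushAsync($"/heroes/{friend.Id}");
            }
        }

        // The response body itself contains only links to the friends, not their content.
        return Ok(hero);
    }
}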

davidfowl (Member) commented Feb 1, 2019

How does that concretely change what you’re doing today? What are you doing today? What big downsides are you seeing, and what would you expect the big upside to be?

I’m looking for something very concrete and not abstract, that’ll help us think about how to prioritize and even implement the feature.

asbjornu commented Feb 4, 2019

@davidfowl, sure. Examples of how we (as an industry) deal with the problem of transclusion over HTTP/1.1 today can be found in GraphQL and OData. In GraphQL, transclusion is expressed as sub-selection:

POST /query HTTP/1.1
Host: example.com
Content-Type: application/graphql

{
  hero {
    friends {
      name
      height
    }
  }
}

HTTP/1.1 200 OK
Content-Type: application/json

{
  "data": {
    "hero": {
      "name": "R2-D2",
      "friends": [
        {
          "name": "Luke Skywalker",
          "height": 1.72
        },
        {
          "name": "Han Solo",
          "height": 1.85
        },
        {
          "name": "Leia Organa",
          "height": 1.54
        }
      ]
    }
  }
}

In OData, transclusion is expressed with the $expand keyword:

GET /heroes/r2-d2?$expand=friends($select=name,height) HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: application/json

{
  "hero": {
    "name": "R2-D2",
    "friends": [
      {
        "name": "Luke Skywalker",
        "height": 1.72
      },
      {
        "name": "Han Solo",
        "height": 1.85
      },
      {
        "name": "Leia Organa",
        "height": 1.54
      }
    ]
  }
}

Each friend in the above responses is most likely an HTTP resource itself. There's no information provided that allows REST's layered system constraint to be used for intermediary caching, resource-level RBAC, or other resource-specific optimizations, since every "sub-resource" is compounded into one large response.

With HTTP/2, we can use Server Push to deliver each hero as its own resource instead. Here is a hypothetical example based on @evert's Prefer-Push draft:

GET /heroes/r2-d2 HTTP/2
Prefer-Push: friends
Host: example.com

HTTP/2 200 OK
Content-Type: application/json

{
  "hero": {
    "name": "R2-D2",
    "friends": [
      {
        "id": "https://example.com/heroes/luke-skywalker"
      },
      {
        "id": "https://example.com/heroes/han-solo"
      },
      {
        "id": "https://example.com/heroes/leia-organa"
      }
    ]
  }
}

Following this response, three PUSH_PROMISE frames will be sent by the server; one for each friend:

:path /heroes/luke-skywalker
:authority example.com

:path /heroes/han-solo
:authority example.com

:path /heroes/leia-organa
:authority example.com

Then three pairs of HEADERS and DATA frames are sent by the server, containing the headers and contents of the three hero resources:

Content-Type: application/json

{
  "hero": {
    "name": "Luke Skywalker",
    "height": 1.72,
    "friends": [
      {
        "id": "https://example.com/heroes/r2-d2"
      },
      {
        "id": "https://example.com/heroes/han-solo"
      },
      {
        "id": "https://example.com/heroes/leia-organa"
      }
    ]    
  }
}

Content-Type: application/json

{
  "hero": {
    "name": "Han Solo",
    "height": 1.85,
    "friends": [
      {
        "id": "https://example.com/heroes/r2-d2"
      },
      {
        "id": "https://example.com/heroes/luke-skywalker"
      },
      {
        "id": "https://example.com/heroes/leia-organa"
      }
    ]
  }
}

Content-Type: application/json

{
  "hero": {
    "name": "Leia Organa",
    "height": 1.54,
    "friends": [
      {
        "id": "https://example.com/heroes/r2-d2"
      },
      {
        "id": "https://example.com/heroes/luke-skywalker"
      },
      {
        "id": "https://example.com/heroes/han-solo"
      }
    ]
  }
}

Now that each hero has been pushed with its own URI, intermediaries can cache and authorize them individually, on a per-resource basis. This also has two added bonuses:

  1. The initial R2-D2 response becomes more cacheable by being less tailored to the initial request.
  2. The recursive, stack-overflow-inducing nature of the friends list is mitigated by not compounding every friend inside every friend, but instead referring to them by their URI.

How pushing of resources should be signalled from the client to the server is still up for discussion; preload and prefetch are both viable options that are being investigated. Regardless of the initiation mechanism, a way to do HTTP/2 Server Push from an ASP.NET Core application is needed.
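
In the meantime, one thing that works with existing ASP.NET Core APIs is to emit Link: rel=preload response headers and let a front-end server that supports push-on-preload (e.g. nginx's http2_push_preload directive) turn them into actual pushes. A minimal middleware sketch; the Prefer-Push trigger and the hard-coded paths are purely illustrative:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class PreloadLinkExtensions
{
    // Illustrative middleware: when the client sends a (hypothetical) Prefer-Push
    // header, advertise related resources via Link: rel=preload response headers.
    // Kestrel will not push these itself today, but reverse proxies and CDNs that
    // support push-on-preload can act on them.
    public static IApplicationBuilder UsePreloadLinks(this IApplicationBuilder app)
    {
        return app.Use(async (context, next) =>
        {
            if (context.Request.Headers.ContainsKey("Prefer-Push"))
            {
                context.Response.OnStarting(() =>
                {
                    // The paths are illustrative; a real app would derive them
                    // from the resource it is about to serve.
                    context.Response.Headers.Append(
                        "Link",
                        "</heroes/luke-skywalker>; rel=preload, " +
                        "</heroes/han-solo>; rel=preload, " +
                        "</heroes/leia-organa>; rel=preload");
                    return Task.CompletedTask;
                });
            }

            await next();
        });
    }
}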

serialseb commented Feb 25, 2019

Same scenario as above: we currently preload those as soon as we get the headers, before we parse the body, so we have headers -> links -> download body (and download the links in the background), and finally reassemble once the request entity body has been parsed. We lose quite a bit of time in that model, and we have to maintain client code and specific infrastructure for it, which is a pain. People don't like pain. It's a good model otherwise, but we can't experiment further with push as a replacement because of the lack of an implementation.
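
For reference, the client-side shape of that model (read the headers, start the link downloads in the background, then download and parse the body) looks roughly like this with HttpClient. This is only a sketch; the Link-header parsing is deliberately naive and real code would follow RFC 8288:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class PreloadingClient
{
    public static async Task<string> FetchWithPreloadAsync(HttpClient client, Uri uri)
    {
        using (var response = await client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead))
        {
            // Kick off downloads for linked resources as soon as the headers arrive.
            var linkDownloads = new List<Task<string>>();
            if (response.Headers.TryGetValues("Link", out var links))
            {
                linkDownloads = links
                    .SelectMany(value => value.Split(','))
                    .Select(link => link.Split(';')[0].Trim().Trim('<', '>')) // naive parse
                    .Select(path => client.GetStringAsync(new Uri(uri, path)))
                    .ToList();
            }

            // Download and parse the body while the linked resources stream in
            // the background, then wait for everything before reassembling.
            var body = await response.Content.ReadAsStringAsync();
            await Task.WhenAll(linkDownloads);
            return body;
        }
    }
}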
