Add guidance on the frequency that an origin should issue PrivateToken challenges #351

Closed
colinbendell opened this issue Apr 4, 2023 · 4 comments · Fixed by #356

colinbendell commented Apr 4, 2023

The current PrivateToken challenge/response flow deviates from the behaviours commonly expected on the web for WWW-Authenticate/Authorization exchanges. The spec should provide explicit guidance on how frequently an origin should challenge, and set expectations on how frequently an origin should expect a client to reply to a challenge.

Most UA implementations replay the Authorization header on ALL subsequent requests to the same URL after a WWW-Authenticate challenge. Some UAs even share this Authorization header with other URLs on the same origin. Even RFC 7235 nods to this caching behaviour in Section 6.2.

For the origin, the assumption is that it can respond to every request missing a valid token with a 401 and a WWW-Authenticate challenge. This is common behaviour with Basic, Digest, NTLM, etc. when doing credential-based auth. Clearly CAPTCHAs and PATs aren't in this same category as credentials, so this pattern deviates from the mental model of other auth schemes.
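To make that concrete, here's a minimal origin-side sketch of that pattern; the helper names (build_private_token_challenge, verify_and_redeem) and the opaque challenge blob are purely illustrative, not from any real library or from the drafts:

```python
import base64
import secrets

# Hypothetical origin-side sketch of the classic mental model: any request
# without a valid token gets a 401 and a fresh challenge, exactly as a
# Basic/Digest deployment would behave.

def build_private_token_challenge() -> str:
    # A real deployment would build a proper TokenChallenge structure;
    # an opaque blob stands in for it here.
    challenge = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode()
    return f'PrivateToken challenge="{challenge}"'

def verify_and_redeem(authorization_value: str) -> bool:
    # Token verification and double-spend checking would go here.
    return False  # placeholder

def handle_request(headers: dict) -> tuple[int, dict]:
    auth = headers.get("Authorization", "")
    if auth.startswith("PrivateToken token=") and verify_and_redeem(auth):
        return 200, {}
    # Challenge (and re-challenge) on every request lacking a valid token.
    return 401, {"WWW-Authenticate": build_private_token_challenge()}
```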

The issue with PAT is that the UA's token response is inconsistent. It's not clear to the origin whether a token wasn't provided because of a rate limit or because the client isn't a human. At a minimum, the spec should set out expectations about the minimum frequency at which an origin should challenge and re-challenge, along with similar guidance for the UA.

For example:

  • the UA must respond to the first challenge an origin issues per TLS socket, but the UA may optionally respond to re-challenges during the same socket connection (a client-side sketch of this option follows the list).
    Alternatively:
  • the UA must respond to challenges on the same origin and socket at least once per minute, regardless of the number of requests.
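
A rough client-side sketch of the first option, assuming the UA keeps a small piece of per-connection state (the class and method names are hypothetical):

```python
# Hypothetical per-connection policy for the first option above:
# answer the first PrivateToken challenge seen on a socket, and treat
# any re-challenge on the same socket as optional.

class ConnectionChallengePolicy:
    def __init__(self) -> None:
        self.answered_first_challenge = False

    def should_answer(self, tokens_available: bool) -> bool:
        if not tokens_available:
            return False  # e.g. issuer rate limit reached
        if not self.answered_first_challenge:
            self.answered_first_challenge = True
            return True   # first challenge on this socket: must answer
        return self.answer_rechallenge()

    def answer_rechallenge(self) -> bool:
        # Optional; a UA might answer re-challenges, ignore them, or
        # apply its own heuristics here.
        return False
```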

Editorial: when the token is not deterministically presented, I will need to set a cookie based on this token on the first presentation. This cookie now represents "isHuman==true". However, to ensure non-replay, I have to introduce my own cryptographic constraints on this cookie and cause the cookie to regenerate on each request. I can't help but feel I've created more problems now just because the token isn't a consistent signal.
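
For concreteness, a sketch of that workaround as I'd have to build it, assuming an HMAC-signed cookie rotated on every request (the key handling and replay cache are deliberately simplistic and purely illustrative):

```python
import hashlib
import hmac
import secrets
import time

SECRET_KEY = b"origin-side signing key"   # illustrative only
seen_nonces: set[str] = set()             # would need expiry in practice

def mint_is_human_cookie() -> str:
    # Set once, right after a PrivateToken is presented and redeemed.
    nonce = secrets.token_hex(16)
    issued_at = str(int(time.time()))
    mac = hmac.new(SECRET_KEY, f"{nonce}.{issued_at}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{nonce}.{issued_at}.{mac}"

def check_and_rotate(cookie: str) -> str | None:
    # Verify the signature, reject replays, and hand back a fresh cookie
    # so the value changes on every request; this is the extra machinery
    # described above.
    try:
        nonce, issued_at, mac = cookie.split(".")
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, f"{nonce}.{issued_at}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected) or nonce in seen_nonces:
        return None
    seen_nonces.add(nonce)
    return mint_is_human_cookie()
```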

tfpauly commented Apr 5, 2023

To note, we do have a small note that addresses part of this difference in https://www.ietf.org/archive/id/draft-ietf-privacypass-auth-scheme-09.html#name-http-authentication-scheme

Unlike many authentication schemes in which a client will present the same credentials across multiple requests, tokens used with the "PrivateToken" scheme are single-use credentials, and are not reused. Spending the same token value more than once allows the origin to link multiple transactions to the same client. In deployment scenarios where origins send token challenges to request tokens, origins ought to expect at most one request containing a token from the client in reaction to a particular challenge.

tfpauly commented Apr 5, 2023

It's also important to note that the acceptable rates may differ based on token type and local policy.

colinbendell commented Apr 5, 2023

tokens used with the "PrivateToken" scheme are single-use credentials ...

This note focuses on the single-use nature of the token. It doesn't provide any guidance or direction to the origin on the frequency of challenges. Likewise, it doesn't set expectations on how frequently an origin should or shouldn't expect the UA to reply to a challenge. There should be some minimum mutual understanding of when a PAT should be expected.

For example, imagine a TLS socket with the following flow:

  1. challenge
  2. no-response
  3. challenge
  4. response

Is the lack of response a signal to the origin that it shouldn't attempt to re-challenge? Is the second challenge on the socket, which does get a response, an indication that the UA is compromised and using brute-force techniques (or using a shadow bot fleet)?

While these are implementation details that are likely best left for the origin to decide, it would help if there were minimum expectations for the behaviour and interaction between the UA and origin. IMHO, having this would help avoid the first wave of cat-and-mouse attempts to circumvent PAT.

tfpauly commented Apr 10, 2023

To some degree, this is client-specific behavior that falls under:

Clients MAY have further restrictions and requirements around validating when a challenge is considered acceptable or valid.

We have some specific heuristics now for specific token types, but they can vary across types and implementations.

I could imagine some general advice for the origin not to ask for more tokens than it needs for a particular session with a client (generally a TLS session).
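
Roughly, a hypothetical origin-side sketch of that advice, with state keyed off the TLS session (all names illustrative):

```python
# Hypothetical sketch: track, per TLS session, how many challenges have
# been sent and whether a token was already redeemed, and stop
# challenging once the origin has what it needs.

class PerSessionChallengeBudget:
    def __init__(self, max_challenges: int = 1) -> None:
        self.max_challenges = max_challenges
        self.challenges_sent = 0
        self.token_redeemed = False

    def should_challenge(self) -> bool:
        if self.token_redeemed:
            return False  # already have a token for this session
        return self.challenges_sent < self.max_challenges

    def record_challenge_sent(self) -> None:
        self.challenges_sent += 1

    def record_token_redeemed(self) -> None:
        self.token_redeemed = True
```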
