
HTTP2 -- does disabling do more harm than good? #107

Closed
RoxKilly opened this issue May 3, 2017 · 53 comments
@RoxKilly

RoxKilly commented May 3, 2017

Prefs 2614 and 2615 disable SPDY and HTTP2.

user_pref("network.http.spdy.enabled", false);
user_pref("network.http.spdy.enabled.deps", false);
user_pref("network.http.spdy.enabled.http2", false);

The link reference leads to a Tor project page that doesn't give much info about what's wrong with these protocols:

SPDY and HTTP/2 connections MUST be isolated to the URL bar domain. Furthermore, all associated means that could be used for cross-domain user tracking (alt-svc headers come to mind) MUST adhere to this design principle as well.

Does anyone understand (1) how exactly SPDY/HTTP2 expose users' privacy more than HTTP1 and (2) whether there is any evidence of harm in the wild?

I bring this up because disabling SPDY/HTTP2 has significant downsides. We may not care that these protocols drastically reduce overhead (header data sent back and forth) and speed up browsing, but we should care that they protect against a real ongoing threat to privacy.

I'm specifically talking about what ISPs call "Header Enrichment": the growing trend of altering customer traffic on the fly, tagging it with unique identifiers to make users easier for advertisers to track regardless of browser settings. In the US, AT&T, Verizon and Comcast have all been caught doing this. Here's a great article on the practice. Below I quote the last paragraph:

Google has proposed a new Internet protocol called SPDY that would prevent these types of header injections – much to the dismay of many telecom companies who are lobbying against it. In May, a Verizon executive made a presentation describing how Google's proposal could "limit value-add services that are based on access to header" information.
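To make the mechanism concrete, here is a toy Python sketch (not any ISP's actual code) of what header enrichment does to a cleartext HTTP/1.x request; the header name mirrors Verizon's X-UIDH, everything else is invented:

```python
# Toy model of ISP "header enrichment" on plaintext HTTP/1.x traffic.
# A middlebox can parse and rewrite cleartext requests in flight.

def enrich(request: bytes, subscriber_id: str) -> bytes:
    """Insert a tracking header (modeled on X-UIDH) into a cleartext
    HTTP/1.x request. Purely illustrative."""
    head, _, body = request.partition(b"\r\n\r\n")
    return head + f"\r\nX-UIDH: {subscriber_id}".encode() + b"\r\n\r\n" + body

plain = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
tagged = enrich(plain, "abc123")
assert b"X-UIDH: abc123" in tagged
```

Once the same bytes are wrapped in TLS (which HTTP/2 requires in practice), the middlebox sees only ciphertext and this parse-and-rewrite step has nothing to latch onto.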

Here is evidence of the telecom industry's displeasure with SPDY (scroll down to see the News section). So we have a growing privacy abuse that is actually becoming standard practice in the telecom industry, but one that SPDY/HTTP2 protect against. Not only that, but these protocols protect against a class of attacks that use header traffic analysis to unmask user activity. Here is an example of another privacy attack that wouldn't work with SPDY/HTTP2. From the article:

Note also that the recipes in here apply to HTTP/1.0 and HTTP/1.1 only, no HTTP/2.0.

So my basic question is: Are we disabling HTTP2/SPDY with a full understanding of the privacy implications of both sides of the coin? I think actual, ongoing threats outweigh theoretical ones, especially when the ongoing threats are becoming industry standard practice.

EDIT - resources

@RoxKilly
Author

RoxKilly commented May 3, 2017

@Thorin-Oakenpants true. I bundled them because last time I tested, SPDY pref needed to be enabled for HTTP2 to work in FF 52.

Besides, this is as much about HTTP2 (which is almost entirely based on SPDY), and HTTP2 is certainly not deprecated.

@RoxKilly
Author

RoxKilly commented May 3, 2017

Tor browser has a different risk profile on this issue: because all Tor traffic is encrypted gibberish invisible to ISPs, Tor users are not vulnerable to either of the two problems that HTTP2 protects against from my OP -- the header traffic manipulation to track users, as well as the header traffic analysis to unmask browsing history over HTTPS.

On the other hand, vanilla FF users are vulnerable to these practices.

@RoxKilly
Author

RoxKilly commented May 3, 2017

yes HTTP2 is only ever used over TLS but that's not relevant to the point I tried to make in my last post. Let me rephrase.

For Firefox users:

  • HTTP/1 (and 1.1) is vulnerable to both exploits from my OP
  • HTTP/2 is not vulnerable to either exploit because of how it deals with headers, and request/response cycles

For Tor users:

  • Regardless of HTTP version, encryption makes all web traffic gibberish to the ISP. So Tor can disable HTTP/2 without opening its users to the flaws I discussed.

The encryption in Tor isn't just TLS. TLS leaves lots of data exposed (e.g. which site you're going to, the size of the headers, etc.), which leaves users vulnerable to the 2nd exploit (header traffic analysis to unmask browsing history over HTTPS). Tor hides all that info, as a VPN would.
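To illustrate the size-based analysis, here is a hypothetical sketch; the page names and byte counts are invented:

```python
# Sketch of the size-based traffic-analysis idea: TLS hides content but
# (absent padding) not lengths, so a passive observer can compare an
# observed sequence of response sizes against precomputed per-page
# profiles. Page names and byte counts below are invented.

PAGE_PROFILES = {
    "example.com/news":  [1420, 33090, 512, 8907],
    "example.com/login": [1380, 2210, 4096],
}

def identify(observed, tolerance=64):
    """Return the page whose size sequence matches within tolerance, else None."""
    for page, sizes in PAGE_PROFILES.items():
        if len(sizes) == len(observed) and all(
            abs(a - b) <= tolerance for a, b in zip(sizes, observed)
        ):
            return page
    return None

assert identify([1400, 33100, 520, 8900]) == "example.com/news"
```

HTTP/2's multiplexed, header-compressed streams blur exactly these per-resource size boundaries, which is the protection being discussed here.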

So when I said tor has a different risk profile on this issue, I meant that when Tor disables HTTP/2, it doesn't suffer the same drawbacks in privacy protection that regular Firefox does.

@Thorin-Oakenpants
Contributor

I understand what you are saying, but the fallback in FF (I assume) would still be TLS

@Thorin-Oakenpants
Contributor

I bundled them because last time I tested, SPDY pref needed to be enabled for HTTP2 to work in FF 52.

We should probably merge these two together (2614+2615). spdy.enabled looks like a master switch and needs to be on for all the other spdy.* prefs. They simply kept the spdy prefix from earlier (as you say HTTP2 originated from spdy). Definitely needs a clean up - 2614 description is now not relevant at all.

We should also probably add in network.http.spdy.enforce-tls-profile set to true (I assume this enforces HTTP2 only over TLS 1.2+ .. need to check)
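For reference, HTTP/2 is negotiated via the ALPN extension inside the TLS handshake, which is why "h2" in practice exists only over TLS. A minimal Python sketch of the client side (no connection is made; the TLS 1.2 floor mirrors what the enforce-tls-profile pref is assumed to do):

```python
import ssl

# A client offers HTTP/2 via ALPN during the TLS handshake.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])   # prefer h2, fall back to 1.1

# Mirroring the enforce-tls-profile idea: require at least TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2

# After a real handshake, ssl_sock.selected_alpn_protocol() returns
# "h2" only if the server also agreed during the handshake.
```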

@RoxKilly
Author

RoxKilly commented May 3, 2017

I think I get your point: that in the absence of HTTP/2, TLS over HTTP/1 would still protect users. This is true only for the 1st of the two exploits I alluded to. Though to be fair, the 2nd is still just academic at this point.

Do you have an understanding of what specific privacy exposure is caused by allowing HTTP/2? Because I couldn't make sense of exactly why it's disabled, I had a hard time weighing the options fairly. Because the ISPs who stand to profit by exploiting user traffic data lobbied to oppose SPDY, I was inclined to support it (or rather its successor HTTP/2).

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented May 3, 2017

Background: https://www.extremetech.com/computing/199536-prominent-developer-criticizes-http2-protocol-claims-politics-drove-adoption-process

HTTP/2 allows for multiplexing across different hosts simultaneously, whereas SPDY doesn’t

This is what I was referring to about multiple domains, which seems like a concern to me at first glance

According to Kamp, this was a significant mistake. He raises multiple issues with HTTP/2’s design, claiming that it doesn’t protect user privacy, does nothing to address the numerous security and privacy issues around cookies, incrementally improves performance (at best), and was driven by politics, not technical best practices

NOTE: multiple browser vendors have stated they won't implement HTTP/2 without it [encryption], making it a de facto requirement.

Points on those quotes and the article

  • Speed/performance benefits - don't care - has nothing to do with security/privacy
  • Encryption is by de facto mandatory - so not a consideration
  • Well, an incremental improvement is better than none
  • Don't care about politics, just what we have to work with, and that is HTTP2
  • Doesn't address issues around cookies (but neither does the old standard), so not a consideration
  • Doesn't protect user privacy ... WHY? HOW?

EDIT:
More stuff: https://www.sitepoint.com/http2-the-pros-the-cons-and-what-you-need-to-know/ - Server Push was something I want to address as well as multiplexed streams

HTTP/2 is more proactive in this regard, sending assets that the browser is likely to need without it having to ask

http://www.zdnet.com/article/severe-vulnerabilities-discovered-in-http2-protocol/ - is this still a thing? It's an attack on the servers, not people browsing. But it's interesting

http://blog.scottlogic.com/2014/11/07/http-2-a-quick-look.html - good summary of exactly what everything is (has Explicit Trusted Proxy been implemented - need to check)

a new concept called ‘Explicit Trusted Proxy’ was proposed, which is essentially a node along the connection which is allowed to decrypt and re-encrypt your traffic, for caching purposes

@RoxKilly
Author

RoxKilly commented May 3, 2017

Great finds! Some quick points

  • the multiplexing (fetching multiple assets, possibly from multiple hosts over the same connection) would be a huge privacy concern if tools such as Tracking Protection, uBO and later Containers and First-Party-Isolation had no way to control this. Essentially, if the browser still allows uBO and privacy settings to block connections to undesirable 3rd parties, multiplexing becomes a moot point because the same measures that protect privacy with HTTP/1 would work with HTTP/2

  • As for Server Push, I don't understand the concern. Today, when I open a page over HTTP/1, the links to other resources (images, scripts, css...) are automatically followed once the page is loaded; no additional user action is necessary for the browser to fetch those assets. The only difference with HTTP/2 is that the server gets to send all needed assets in one request/response cycle (hence Server Push). I don't see the increased privacy exposure -- with the same caveat as my point above.

PS: the clarification on multiplexing -- especially from different hosts -- makes sense of why Tor doesn't like it. Thanks for that info.

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented May 3, 2017

Just saying it needs to be looked at - not saying it's bad. But my initial thought is .. uggh: https://en.wikipedia.org/wiki/HTTP/2_Server_Push

HTTP/2 Push allows a web server to send resources to a web browser before the browser gets to request them

"BEFORE the browser requests them" .. ring any alarm bells (it kinda does for me). Now 99% of the web will use this for "good" - nothing wrong with speeding up web performance/interaction. I just want to explore if it can be used for "bad"

EDIT: look at the example on the wiki page. You request the html only (eg block css via uMatrix), but get sent the css anyway.
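That uMatrix example can be modeled as a toy sketch (hypothetical names; stream mechanics heavily simplified): the client asks only for the HTML, but the server's push policy sends the stylesheet anyway, so a request-level blocker can refuse to use the bytes but not to receive them:

```python
# Toy model of HTTP/2 Server Push. PUSH_MAP is an invented server-side
# push policy, not a real API.

PUSH_MAP = {"/index.html": ["/style.css"]}

def fetch(path, blocklist):
    """Client requests `path`; server pushes per PUSH_MAP. The client
    discards blocked pushes, but they were transferred regardless."""
    pushed = PUSH_MAP.get(path, [])
    received = [path] + pushed                 # bytes on the wire
    used = [r for r in received if r not in blocklist]
    return received, used

received, used = fetch("/index.html", blocklist={"/style.css"})
assert "/style.css" in received   # transferred anyway (bandwidth spent)
assert "/style.css" not in used   # but never applied by the client
```

This is the crux of the bandwidth question later in the thread: blocking at the request layer controls what the browser *uses*, not necessarily what the server *sends*.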

@RoxKilly
Author

RoxKilly commented May 3, 2017

I agree it does feel icky. But guess what else works that way? WebSocket messages; they get pushed from the server. All I'm saying is, let's walk through the privacy implications of the server pushing files as soon as the browser connects (HTTP/2) vs the browser automatically fetching those same files as soon as the initial HTML is parsed (HTTP/1). The exposure is the same, unless HTTP/2 neuters the user's ability to control exposure through privacy tools.

By the way, here is the HTTP/2 spec. I'll post it in the OP as well.

@RoxKilly
Author

RoxKilly commented May 3, 2017

Agreed about need for an expert, and agreed doubly about the need to know whether the connection would be effectively blocked (simply stopping execution after a download wouldn't be good enough). Enjoy your coffee.

@Radagast

Radagast commented May 3, 2017

One other consideration: how many sites currently support HTTP2? I've been using the HTTP/2 and SPDY indicator (https://addons.mozilla.org/en-US/firefox/addon/spdy-indicator/?src=search) and server support seems quite spotty.

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented May 3, 2017

@Radagast

https://en.wikipedia.org/wiki/HTTP2

as of April 2017, 13.4% of the top 10 million websites supported HTTP/2

And no doubt it will grow rapidly

EDIT: https://w3techs.com/technologies/details/ce-http2/all/all
^^ traffic would probably be a better indicator

@earthlng
Contributor

earthlng commented May 3, 2017

The 1st attack OP mentioned is mobile-only. If you can't trust your ISP you're screwed anyway. If you somehow think this also applies to non-mobile - use Tor, problem solved.
The 2nd "attack" - again if you can't trust your ISP you're fucked - and it seems optimistic at best. More realistically it's completely unreliable due to various factors that Pants already mentioned.
Maybe this somewhat works in a lab when you only look at one user, but if you want to do this on a large scale with countless different User-Agent strings alone - well good luck with that.
If you think this attack is a problem for you because your ISP sucks, use different length user-agents and change them every minute or so or modify the headers randomly in some other way. That should be enough to make the 2nd attack fail completely.
HTTP2 was designed primarily (or even exclusively) for performance. The fact that it happens to make the 2nd (very theoretical!) attack impossible (?) is just a lucky by-product.
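The randomize-the-headers countermeasure suggested above could look roughly like this (the header name and sizes are illustrative, not a recommendation):

```python
import random
import string

# Sketch of the countermeasure: randomly pad a header so the encrypted
# request size varies between otherwise identical requests, spoiling
# size-based matching. "X-Padding" is an invented header name.

def pad_headers(headers, max_pad=256):
    padded = dict(headers)
    pad = "".join(
        random.choices(string.ascii_letters, k=random.randint(1, max_pad))
    )
    padded["X-Padding"] = pad   # varies total request length unpredictably
    return padded

base = {"User-Agent": "Mozilla/5.0", "Host": "example.com"}
a = pad_headers(base)
b = pad_headers(base)
# The two requests now differ in total size with high probability.
```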

@RoxKilly wrote:

Are we disabling HTTP2/SPDY with a full understanding of the privacy implications of both sides of the coin? I think actual, ongoing threats outweigh theoretical ones

https://http2.github.io/http2-spec/#security
https://http2.github.io/http2-spec/#rfc.section.10.8

(Pants, maybe we should add those 2 links, at least the 1st, because the 2nd is part of the first)

@RoxKilly wrote:

On the other hand vanilla FF users are vulnerable to these practices..

How? given that h2 is enabled by default for vanilla FF users?

threats

It has already been demonstrated that servers can be exploited (since fixed), but the research was done by a "data-center security" company and they only looked at the server side, for obvious reasons. So far nobody seems to have looked at vulnerabilities in clients. Therefore I'm completely against changing our default values and opening a whole new can of worms with potentially exploitable flaws that are very likely to exist somewhere (where nobody has looked yet) - just because there's 1 (ONE) very "optimistic" potential but completely theoretical "threat" that an ISP could or could not carry out.

If you live in a shitty country where you can't trust your ISP, you're free to enable those prefs in your own user.js if you think the benefits outweigh the unknown risks. But you should really focus on making your country less shitty.
H2 sounds very shitty to me (multiplexing etc) and I don't give a single shit about a few milliseconds (or even seconds for that matter) of speed improvements.

ot: They also deliberately and knowingly weakened TLS1.3 for alleged performance reasons and I absolutely hate that they allowed someone to influence them and let that happen. Who gives a fuck about a few millisecs here or there in a freaking encryption protocol?!! today science is compromised and it's sad AF

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented May 3, 2017

The 1st attack OP mentioned is mobile-only

I haven't even looked at that. I assume this is the X-UIDH bullshittery and asshattery from Verizon. Yes they did that to their mobile (non-encrypted) traffic only, but that's feasible on any plain text traffic - the ISP can do what they want. They can block shit, redirect shit (like to warning pages for unpaid bills), script injection, modify pages etc.

Summary

  • [attack1] ISPs can suck but they are in full control as your endpoint for plain text traffic
    ^^ This IMO doesn't have anything to do with HTTP2 because it is only used with TLS1.2+ (de facto) and the site already uses encryption so there is no danger of falling back to plain text
  • [attack2] as I said earlier, I feel that is a bit far fetched in the real world. Besides the tech issues, it's so unlikely given all the other methods out there
  • speed/performance - who cares, as said earlier

The only things I wanted to question were 1. multiplexing and 2. server push (and maybe check on 3. Explicit Trusted Proxy where the ISP/CDN caches shit unencrypted!). Both these two items do zero for enhancing privacy/security IMO but leave me with possible actual privacy/security holes (eg sending stuff the browser never asked for).

Need a Zilla Engineer in here to calm everyone down :)

@earthlng
Contributor

earthlng commented May 3, 2017

https://queue.acm.org/detail.cfm?id=2716278 - original article by the "dissident" dev

@earthlng
Contributor

earthlng commented May 3, 2017

@Thorin-Oakenpants wrote:

[attack1] ISPs can suck but they are in full control as your endpoint for plain text traffic
^^ This IMO doesn't have anything to do with HTTP2 because it is only used with TLS1.2+ (de facto) and the site already uses encryption so there is no danger of falling back to plain text

AT&T's patent acknowledges that it would be impossible to insert the identifier into web traffic if it were encrypted using HTTPS, but offers an easy solution – to instruct web servers to force phones to use an unencrypted connection.

source: https://www.propublica.org/article/somebodys-already-using-verizons-id-to-track-users

@earthlng
Contributor

earthlng commented May 3, 2017

The good news is that HTTP/2.0 probably does not reduce your privacy either. It does add a number of "fingerprinting" opportunities for the server side ...

@RoxKilly
Author

RoxKilly commented May 3, 2017

@Thorin-Oakenpants wrote:

The only things I wanted to question were 1. multiplexing and 2. server push (and maybe check on 3. Explicit Trusted Proxy where the ISP/CDN caches shit unencrypted!).

This is where I am for my private settings. I would disable HTTP/2 in my own settings if I learned that it led to my browser connecting to servers and downloading files that it wouldn't have over HTTP/1 (because my uBO settings would have prevented it).

For the public template settings, @earthlng and @Thorin-Oakenpants have made a compelling point and I've changed my mind. I think leaving it disabled makes the most sense. Mostly because:

  1. Disabling HTTP/2 would lead to a fallback to HTTP/1.1 over TLS, which would leave the user still protected from attack 1

  2. Attack 2 is still just theoretical (I admitted as much a couple of times), and I am also dubious about whether it would scale well for lots of users with lots of websites to track.

Thanks

@earthlng
Contributor

earthlng commented May 3, 2017

So what are the good things about H2? That it may or may not make things faster? Or that it prevents a theoretical "attack"? Anything else?

On the other side we have things like

  • killing polar bears ✅

HTTP/2.0 will require a lot more computing power than HTTP/1.1 and thus cause increased CO2 pollution adding to climate change.

  • more potential fingerprinting ✅

does add a number of "fingerprinting" opportunities for the server side

  • add new attack surface ✅

"releasing a large amount of new code into the wild in a short time creates an excellent opportunity for attackers," Shulman added

While it is disturbing to see known HTTP 1.x threats introduced in HTTP/2, it's hardly surprising.

I think at this point I've seen enough to be able to answer OP's question:

HTTP2 -- does disabling do more harm than good? - Nope

@earthlng
Contributor

earthlng commented May 3, 2017

I think RoxKilly is already convinced, he said as much :)

I was still typing while he posted that, sorry everyone ;)

@earthlng
Contributor

earthlng commented May 3, 2017

How could I not be salty if this guy proposes we help kill my fellow earthlings?! xD

@RoxKilly
Author

RoxKilly commented May 3, 2017

@Thorin-Oakenpants wrote:

RoxKilly I'm quite keen to ask gorhill about this re uBo/uM. You're more elegant than me.. wanna try?

Well if you put it that way: gorhill/uBlock#2582

My own test (linked above) suggests that uBO and other content blockers in fact do work over HTTP/2. In Firefox at least.

@earthlng
Contributor

earthlng commented May 4, 2017

@RoxKilly, thanks for testing and letting us know. But IMHO that test site is not really a very representative example that h2 is a lot faster than h1. For example, without clearing the cache between refreshes, the h1 server took its time to reply with 304 Not Modified, up to 12 secs actually, on average maybe 7-8 secs for many of the tiles, while the h2 server only took an average of ~100ms per tile to respond with a 304. They want to advertise their h2 CDN servers and there are probably a few ways to make the results more in their favor (more powerful server, throttling, etc). Another thing I noticed is that some h1 tiles that were visible a millisecond before suddenly timed out and the server responded with 404 Not Found. Highly suspect test site IMO. Not to mention that nobody would split such a relatively small image into 200 tiles on a h1 site.

@RoxKilly
Author

RoxKilly commented May 4, 2017

@earthlng I agree with you that we should not trust that site as a demonstration of speed difference. For it to make sense, the same exact tiles should be fetched from the same server, and the response headers should instruct the browser not to cache anything.

Comparing speed wasn't the point of my testing. I wanted to find out whether the browser exposes those HTTP/2 connections to the WebRequest API so that extensions such as uBO can effectively block them (they can). That's all I was looking for, because that's what determines whether I disable HTTP/2 in my browser.

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented May 9, 2017

review (remove old 2614+2615, replace with below)

/* 2614: disable HTTP2 (which was based on SPDY which is now deprecated)
 * HTTP2 raises concerns with "multiplexing" and "server push", does nothing to enhance
 * privacy, and in fact opens up a number of server-side fingerprinting opportunities
 * [1] https://http2.github.io/faq/
 * [2] http://blog.scottlogic.com/2014/11/07/http-2-a-quick-look.html
 * [3] https://queue.acm.org/detail.cfm?id=2716278 ***/
user_pref("network.http.spdy.enabled", false);
user_pref("network.http.spdy.enabled.deps", false);
user_pref("network.http.spdy.enabled.http2", false);

@Atavic

Atavic commented Sep 27, 2017

Implementation status - point 10. SPDY and HTTP/2 - Set to FALSE:

network.http.spdy.enabled
network.http.spdy.enabled.v2
network.http.spdy.enabled.v3
network.http.spdy.enabled.v3-1
network.http.spdy.enabled.http2
network.http.spdy.enabled.http2draft
network.http.altsvc.enabled
network.http.altsvc.oe

@smithfred

OK, I'm going to ask a really dumb question here - what exactly are the added "fingerprinting" risks/vectors/concerns? The comments on this issue all seem (?) to reference https://queue.acm.org/detail.cfm?id=2716278, but all that it says is:

The good news is that HTTP/2.0 probably does not reduce your privacy either. It does add a number of "fingerprinting" opportunities for the server side

(Plus some complaining that it doesn't do away with the cookie model).

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented Jan 11, 2018

Well, we never really delved into the specifics - the word of the resident dev was enough for me .. but here we go .. first google result: https://blogs.akamai.com/2017/06/passive-http2-client-fingerprinting-white-paper.html

Edit: tl;dr:

This paper demonstrates how these new implementations create small nuances, which differentiate HTTP/2 clients from one another. In addition, we have shown how these unique implementation features can be leveraged to passively fingerprint web clients. Our research shows that passive HTTP/2 client fingerprinting can be used to deduce the true details about the client’s implementation — for example, browser type, version, and sometimes even the operating system.

Not a massive leak in my book - UA spoofing is pretty moot anyway

@smithfred

https://blogs.akamai.com/2017/06/passive-http2-client-fingerprinting-white-paper.html

That's nothing to do with HTTP/2 and everything to do with the generic problem of detectable protocol implementation differences.

browser type, version, and sometimes even the operating system.

So, uh, same as a useragent string then? ;)

There's nothing in that white paper about "what about HTTP/1.x".

In other words: HTTP/2: no evidence for inherently greater fingerprintability, just "it's new network code of any sorts -> differences between clients."

p.s.:

first google result

On a discussion about privacy... ah the irony 😁

@Thorin-Oakenpants
Contributor

Thorin-Oakenpants commented Jan 11, 2018

Just throwing things here - yet to read em

Don't forget there are other aspects to HTTP2 - it kills polar bears, has server PUSH, concurrency / multiplexing .. holy crap what is that on page 22 .. deducing system uptime?

@Thorin-Oakenpants
Contributor

Wow .. page 34 .. entropy in pseudo headers .. every single major browser decided to do it differently :bash-head-on-wall:

@earthlng
Contributor

oh great, I had this weird feeling that @smithfred was gonna bring this shit up next. Now I have to read all this stuff again FFS. I mean ... it literally freakin kills polar bears! POLAR BEARS DUDE! What else do you need to know? Don't you care about the environment, huh? HUH??! Don't you have a HEART?? IT'S BAD!!!! HTTP/2 IS BAD! bad bad bad - can't we just leave it that, pleeeease?

@Thorin-Oakenpants
Contributor

Now I have to read all this stuff again FFS

Well, TBH, the one paper I found wasn't published until Jun 2017. The slide show is from Dec 2017, and the blackhat one which is very similar was put together for BlackHat Euro 2017 (whenever that was) - all after we buried this issue back in May

Enjoy your reading :)

@smithfred

smithfred commented Jan 11, 2018

oh great, I had this weird feeling that @smithfred was gonna bring this shit up next. Now I have to read all this stuff again FFS. I mean ... it literally freakin kills polar bears! POLAR BEARS DUDE! What else do you need to know? Don't you care about the environment, huh? HUH??!

Yep, been slogging through the damn thing to customise it, so it's all hitting Issues in order :)

Fuck polar bears, murderous bastards. Have you even $image_search-ed "bloodied polar bear"? All their cute little cubs covered too in the blood of poor innocent sea-bunnies? (Or whatever they eat... no-one knows). Also, can we just shoot the planet into the sun and get it over with?

I digress...

@earthlng
Contributor

earthlng commented Jan 14, 2018

you asked

what exactly are the added "fingerprinting" risks/vectors/concerns?

and it seems the main FP issues that have been identified and proven (so far) are:

  • client-fingerprinting (but NOT user FP!)
  • Spoofed User-Agent Detection
  • Anonymous Proxy/VPN Detection

admittedly those are probably all not that worrying for most people: Clients can be identified by other means as well, spoofing UA is a bad idea (we already knew that) and there are probably a shit ton of similar OSes using any given VPN at all times, so that's not really a problem either AFAICS.

But there could be other issues as well, either FP stuff not identified yet or things like PUSH (does that mean someone could push something illegal into the cache that wasn't requested at all? IDK but it doesn't sound like anything I'd ever want to use).
Ultimately I don't really care about the alleged performance benefits and don't see a reason why I should enable H2 (if that's what you were getting at).

The user.js is a template - if you want to enable H2 go for it.
However if you want us to enable it, you'd have some convincing to do ;)
But of course proposals to improve descriptions or whatnot are always welcome.

@earthlng
Contributor

holy crap what is that on page 22 .. deducing system uptime?

uptime? nothing to do with H2 but here you go: http://lcamtuf.coredump.cx/p0f3/

^^ does it show your uptime?
If it does, at least you won't have to worry about any other kind of FP anymore 😄

@Thorin-Oakenpants
Contributor

uptime? nothing to do with H2

BlackHat pdf - page 38 - uptime
[screenshot of page 38]

Akamai dude slideshow - page 22 - system uptime
[screenshot of page 22]

I know it says deduce but this intrigues me. Wot the F are they talking about?

@earthlng
Contributor

I don't know

@smithfred

p0f shows "uptime" for me but is completely wrong 😆

@Atavic

Atavic commented Jan 14, 2018

OS detection is based on the differences between TCP/IP packets generated by various operating systems, such as the TTL value contained in the IP header or other flags.

P0f stores the operating system signatures in the file p0f.fp
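On the uptime question raised above: one known passive trick (assumed, not confirmed, to be what those slides refer to) is that many TCP stacks start their timestamp clock at boot, so the TSval divided by the clock rate approximates uptime. A sketch with invented numbers:

```python
# p0f-style uptime deduction (nothing to do with H2 itself): if the TCP
# timestamp clock starts at boot, TSval / tick-rate ~= uptime.
# The TSval and tick rate below are made up for illustration.

def uptime_hours(tsval, hz):
    """Estimate host uptime in hours from a TCP timestamp value and
    the stack's timestamp clock rate in Hz."""
    return tsval / hz / 3600

# A host ticking at 1000 Hz with TSval 86_400_000 has been up ~24 h:
assert round(uptime_hours(86_400_000, 1000)) == 24
```

The estimate is only as good as the tick-rate guess, which is presumably why p0f's result can be wildly wrong, as noted below.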

@ghost

ghost commented Mar 9, 2018

What an interesting thread.

I'll keep in mind that if there are HTTP2 issues, they're not the concern of uBO.
After having always hesitated to disable HTTP2, I changed my mind from what I've read here, hence:

user_pref("network.http.spdy.enabled", false);
user_pref("network.http.spdy.enabled.deps", false);
user_pref("network.http.spdy.enabled.http2", false);

@RoxKilly mentioned above,

Because the ISPs who stand to profit by exploiting user traffic data lobbied to oppose SPDY,

The atis.org page is no longer accessible; I found its last capture on the Wayback Machine

Old thread already, pity I didn't read it sooner.

@Atavic

Atavic commented Mar 9, 2018

A nice read, if interested:

http://www.atis.org/openweballiance/docs/SPDY Analysis.pdf

@vertigo220

I'd like to file a complaint regarding this pref: the user.js fails to mention polar bears!

Anyways, I read this whole thread and some info elsewhere and, while I didn't read any of the technical docs, as I'm sure they'd mostly go over my head, what I have read says the multiplexing is nothing more than turning a serial connection into a parallel one, which sounds great for performance. I've not seen anything indicating that it can result in connections being made to other servers; everything suggests it's just more connections to the same one.

So to me, the bigger, and likely only, real issue is server push, and that seems at worst a mixed bag. It's confirmed uBO still blocks everything on HTTP/2 that it would on HTTP/1, so you're not going to see any more ads or be exposed to any more JS. The real question is: is uBO capable of keeping the server from sending that stuff, even though it (uBO) would normally block requests for it, or does the stuff get sent regardless due to push?

If uBO is able to block it from being sent, then there's no problem. If it's not, then the main problem is more bandwidth usage. This is definitely not ideal for a slower or metered connection, and then it becomes a matter of whether the speed benefit gained by multiplexing is more or less than any slowdown imposed by the pushing of unused data (and, ironically, it's the slower connections that would benefit more from multiplexing that would also suffer more from pushing). I'm currently seeking answers on uBO's subreddit, so while I doubt many of you use Reddit, you may want to jump in on that conversation.

Also, I hate to be the bearer (no pun intended) of bad news, but we've probably killed more polar bears running our computers to research and discuss this than HTTP/2 does in a year. 😦

@Atavic

Atavic commented Jun 19, 2018

HTTP/2 Server Push is a performance technique that can be helpful in loading resources pre-emptively [1]

AKA prefetching over multiple streams [MAX_CONCURRENT_STREAMS = 128] from a CDN such as Akamai, Amazon, Cloudflare, Level 3 or Softlayer, to name a few.

With the adoption of http2, there are reasons to suspect that TCP connections will be much lengthier and be kept alive much longer than HTTP 1.x connections have been. [2]

Reusing connections for different origins allows tracking across those origins. [3]

The pros aren't that big (one article said around 10%); the cons are more content from big players and improved fingerprinting. Bob goes from site A to site B over HTTP/2 while the TCP connections are still open. Both sites are served by the same CDN, which knows exactly how much time Bob spent on site A. If Bob then opens a site C served over HTTP/2 by the same CDN, that service keeps monitoring poor Bob's surfing time.

[1] WikiTextBot on r/uBlockOrigin
[2] https://http2-explained.haxx.se/content/en/part7.html
[3] https://http2.github.io/http2-spec/#rfc.section.10.8

Also see: gorhill/uBlock#2582
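The cross-site scenario above (Bob visiting several sites behind one CDN) can be sketched as a toy log correlation: if one long-lived connection carries requests for multiple sites, the CDN can link those visits by connection ID alone, no cookies needed. Hostnames and timestamps below are invented for illustration.

```python
# Toy model of cross-origin tracking via a reused connection:
# the CDN only needs to group its request log by connection id.
from collections import defaultdict

log = [  # (connection_id, requested_host, timestamp)
    ("conn-42", "site-a.example", 100),
    ("conn-42", "site-a.example", 130),
    ("conn-42", "site-b.example", 200),  # same connection, different site
    ("conn-99", "site-c.example", 150),
]

profiles = defaultdict(list)
for conn, host, ts in log:
    profiles[conn].append((host, ts))

# conn-42 now links one user's visits to site-a AND site-b, with timing:
assert {h for h, _ in profiles["conn-42"]} == {"site-a.example", "site-b.example"}
```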

@vertigo220

Reusing connections for different origins allows tracking across those origins. [3]

The problem is, basically everything I've read so far does not suggest that at all, and there's no reference on that page to where that information comes from, so I'm not sure it's actually true. Just because a connection is held open longer between Bob and Amazon doesn't mean that connection will somehow magically be used between Bob and Google too. But that spec seems pretty official, so I guess it's true, in which case, yeah, multiplexing = bad. Which is unfortunate, because it seems like it would be a nice little bump in performance; with my connection, I'll take whatever I can get, though of course not at the expense of such a huge potential privacy leak. As for the CDN stuff, that makes a bit more sense, and is just another reason to use Decentraleyes.

Also see: gorhill/uBlock#2582

Already read that, and referenced it in my post on Reddit. The problem is it only partially answers the questions at hand. It would still be nice to know the answer, but I guess it's less important knowing that multiplexing is such a potential concern, whereas I thought before only push might be.

@Thorin-Oakenpants (Contributor)

FYI: https://bugzilla.mozilla.org/show_bug.cgi?id=1337868#c3

We're planning to enable HTTP/2 for the next version of Tor Browser and unit tests for FPI (or generally OriginAttribute isolation) would provide stronger assurance.

When that Bugzilla bug is closed/confirmed, it might be worthwhile enabling HTTP/2 (it does have speed advantages, though probably not earth-shattering: it might have been 20-30% or more, which I read in some Tor trac ticket, and it depends on the site/content of course), but only if you have FPI on.
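For anyone who does decide to flip these back, a hypothetical user.js fragment (NOT a recommendation, and only sensible together with First Party Isolation, per the comment above) would look like this, using the same prefs the issue opened with plus the standard FPI pref:

```javascript
// Hypothetical user.js sketch: re-enable HTTP/2 ONLY alongside FPI.
// These are real Firefox pref names; whether to set them is your call.
user_pref("privacy.firstparty.isolate", true);        // FPI on first
user_pref("network.http.spdy.enabled", true);
user_pref("network.http.spdy.enabled.deps", true);
user_pref("network.http.spdy.enabled.http2", true);
```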

@ghost commented Nov 21, 2018

Hi!
I did some research last night and found out that Tor Browser 8.0 (released on September 5th this year) enabled both HTTP/2 and Alternative Services after years-long audits.
Here is the HTTP/2 audit : https://trac.torproject.org/projects/tor/ticket/14952
The AltSvc audit : https://trac.torproject.org/projects/tor/ticket/24553
And the official blog post announcing the enabling of both of those protocols : https://blog.torproject.org/new-release-tor-browser-80
I'm bundling HTTP/2 and AltSvc together on this HTTP/2 ticket because they seem pretty closely related, even from Tor's perspective, but please do tell me if I was wrong to do so; I only created my account today for this particular matter, so I'm not sure how I should be proceeding.
I will add that AltSvc needs First Party Isolation enabled to prevent any privacy concerns, as stated in the Tor audit.
The issue of the lying URL bar can still be a concern, though, and I don't know whether that was patched by Mozilla; still, I think Tor's approval on the matter is already enough to reconsider the preference, or at least add a comment above it in the .js.
My apologies to the polar bears for bringing this topic back on the table.

P.S.: I've been following this awesome repo for months now and I just wanna say thank you for your hard work. You all are great!

EDIT: Just saw you are already discussing the matter in #491, sorry for the bother.

@Thorin-Oakenpants (Contributor)

I'm bundling both HTTP/2 and AltSvc

Not only that, but I think SSL session ticket IDs are closely related. The thing is, TBB is different: it uses Tor and changes circuits (I'd have to check now, but it used to be every 10 minutes).

@ghost commented Nov 22, 2018

You're right, I didn't think of that.
But I can't see any reference to it in either the HTTP/2 or the AltSvc audit; perhaps I should ask someone in the TBB dev community for a quick explanation? Maybe the patches Mozilla applied to the protocols protect not only TBB but also Firefox releases.
Any idea how to contact the devs about such a specific issue?
Also should I continue discussing this here or in #491?

@Thorin-Oakenpants (Contributor)

Maybe start a new topic, "Investigate: SSL + HTTP2 + AltSvc": this thread is rather long, and #491 is for something else (I'll use it to do a TBB vs ghacks diff when TBB ever gets a final 8; it's still a huge mess IMO).

It would be kinda cool to know if SSL session tickets are required for HTTP/2 and AltSvc, because that's the overriding factor in even allowing them in the first place. Currently FF only wipes SSL sessions on FF (or last PB mode) close; otherwise it allows up to 2 days. And I'm not sure if it respects that, e.g. if a site says the ID is valid for 5 days, what does FF do, and vice versa if the site says it's shorter?
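On that last question: per RFC 5077, the server's NewSessionTicket message carries a lifetime hint, the client should not reuse the ticket beyond that hint, and the client may also expire it earlier under its own cap. So the expected behavior is simply the shorter of the two lifetimes. A sketch under those assumptions (the 2-day figure is the one mentioned above; none of this is measured from Firefox itself):

```python
# Sketch of session-ticket lifetime clamping per RFC 5077: the effective
# lifetime is the shorter of the server's hint and the client's own cap.
# BROWSER_CAP_S models the 2-day Firefox figure discussed above (assumed).
BROWSER_CAP_S = 2 * 24 * 3600  # 2 days, in seconds

def effective_ticket_lifetime(server_hint_s: int,
                              cap_s: int = BROWSER_CAP_S) -> int:
    """Seconds a resumption ticket should remain usable on the client."""
    return min(server_hint_s, cap_s)

assert effective_ticket_lifetime(5 * 24 * 3600) == BROWSER_CAP_S  # site says 5 days
assert effective_ticket_lifetime(3600) == 3600                    # site says 1 hour
```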
