[SECURITY] Tampering with Bluetooth metadata #228
Comments
Thanks for the report. Our security experts will have a look at that. Best regards,
Good morning @pdehaye, thanks for reaching out. Furthermore, an attack precondition for creating false positives is being able to tamper with upload authorizations, in order to ultimately spread a false notification status. Nonetheless we highly appreciate your remarks and will further discuss them with the mobile operating system providers. Happy Friday
This is wrong. An attack precondition is to collect Rolling Proximity Identifiers whose Temporary Exposure Keys will later be (correctly) revealed. One scenario is thus harvesting Bluetooth beacons near a hospital at 7:30am, tampering with them, and then replaying them in the middle of the financial district. Since the attack is deployable on consumer phones, smartphones (even those of non-participants) or IoT devices are part of the attack surface through hacking.
Please re-open the issue, or at least point me to where this vulnerability could be reported while CoronaWarn undergoes its security review later.
Regarding 3.5 "Unauthenticated Metadata": rather than flipping bits in unknown values (which carries a decent chance of actually increasing a value you wanted to decrease), the attacker would be better off simply replaying the message with a much higher output power. I don't think this aspect is a real weakness in the protocol.
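For readers unfamiliar with why bit-flipping works here at all: CTR mode is a stream cipher, so the ciphertext is just plaintext XOR keystream, and flipping a ciphertext bit flips exactly the same plaintext bit after decryption. A minimal sketch of this malleability property (a toy SHA-256-based keystream stands in for AES-CTR, and the 4-byte layout with TX power in the second byte is an assumption about the AEM format, not taken from this thread):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream standing in for AES-CTR's key stream. The
    malleability demonstrated below is identical for real AES-CTR,
    since both encrypt by XORing plaintext with a keystream."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 8
# Hypothetical metadata: version byte, TX power = 20, two reserved bytes.
plaintext = bytes([0x40, 20, 0, 0])
ciphertext = xor(plaintext, keystream(key, nonce, len(plaintext)))

# The attacker flips bit 5 of the TX power byte WITHOUT knowing the key:
tampered = bytearray(ciphertext)
tampered[1] ^= 0b00100000

# On decryption, the same bit is flipped in the plaintext: 20 ^ 32 = 52.
decrypted = xor(bytes(tampered), keystream(key, nonce, len(plaintext)))
```

The attacker never learns the plaintext; the point is only that a known bit position in the plaintext can be toggled blindly.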
@mh- I am aware of this, but it really depends on which type of attacker you are considering. A hacker leveraging someone else's device might not have that opportunity, for instance. An attacker leveraging someone else's SDK might not have that opportunity either. This affects the scale, likelihood, cost, motive, etc. of the attacker, which in turn affect legal questions around the status of this data (as personal data or not), depending on the country. Additionally, you would detect the two attack modes differently: one through abnormally high signal strength, the other through multiple payloads that are very similar. Finally, depending on undocumented parts of the protocol, I suspect that in dynamic environments you would be able to detect which payload was emitted at higher strength, due to timing and physical characteristics of the signal.
Reviewing the whole thread, please make sure to also include this tampering attack in your documentation, as it has downstream security and legal effects, especially if you care about interoperability. The trade-offs will not be the same everywhere you are hoping to interoperate with.
@pdehaye Again, the observation that one can flip bits in the plaintext with CTR mode is correct, but it helps only if you know the original value, or you can make reasonable assumptions about it. This significantly reduces the relevance of this attack vector. Also, your statements are very broad, e.g. "downstream security and legal effects" - if you wish to engage in a discussion, it might help to be more specific.
As already stated by @haxxbard, we do not manage the Bluetooth transmission and/or reception. Please address these concerns directly to Apple and Google; they have dedicated processes especially with respect to security. In case you find a security vulnerability in one of the CWA components, please see the SECURITY.md file in the respective repository, e.g. https://github.com/corona-warn-app/cwa-app-ios/blob/development/SECURITY.md for the iOS client. Best regards,
@mh- The downstream security and legal effects refer to consequences for the legality analysis of the deployed system. The app developers can choose to ignore problems "lower down the stack", at the protocol or Bluetooth layer, but at some point someone in Germany will have to deploy the app and evaluate its legality (who?), which will have to include considerations of the integrity of the data, the usefulness of the signals, classification as a medical device, classification as personal data, etc. This is highly country-dependent, so I can't be more specific for Germany, but in Switzerland these considerations are biting back into the app developers as well, in terms of UI for instance: "Exchange Bluetooth beacons anonymously" will probably not fly anymore. I can offer more specifics if you are interested, but some will depend on the fact that, despite having similar data protection laws, we don't have the same jurisprudence built on top. Some are also still questions of political debate, for instance around the epidemiological interventions following a SwissCovid notification of risk: if the Swiss confederation wants to deploy an app that it knows is at risk of false positives, and it wants such a notification to confer a right to a free COVID test, then maybe the cantons shouldn't be paying for these tests.
@SebastianWolf-SAP The core problem as I see it is precisely thinking in terms of a layered stack: it is not so. Everyone wishes we could build on top of a nicely defined protocol, but of course the interactions are very open, since there can be all kinds of eavesdroppers etc. If you forget one such angle of attack (here, direct tampering with beacon signals), you might need to revisit more than you think.
If you know of such an angle of attack and don't document it, you might be assuming more liability than you anticipate (especially if you see it as a problem with the components you are using, but decide it is not your problem to get those components fixed). My main point here is that while the signal strength vs message tampering question is irrelevant to mitigation measures at the software layer (*), it will have consequences later when the app is analyzed. Basing this analysis on partial documentation of the threats might have consequences later in validating the app.
(*): unless we go down a surveillance-countermeasures route, or maybe the protocol takes advantage of the three bytes that remain available in the metadata to include some sanity checks
@mh- I have been privately told of techniques that increase an attacker's chances of flipping AEM values. They convinced me, but I will not disclose them at the moment because this is still the topic of ongoing research.
@SebastianWolf-SAP to illustrate the fact that this metadata tampering is not documented well enough, see the responses to this thread:
In joint work with Joel Reardon (University of Calgary) (see also #308), we found a new attack that we consider linked to this one, since an attacker would leverage the dematerialization that the AEM tampering brings in order to gain scale (an attacker would no longer need hardware, and could attack purely through software). This new attack is now #308 (combined with the process issue this very thread has highlighted).
Please re-open this issue, until this attack is documented. |
I tend to think that your "catastrophic failure" is academic only:
a) Flipping bits in (unknown) plaintext
You repeatedly mentioned that AEM is not authenticated, and therefore I could, within a replay attack, flip bits of the "TX Power" byte, which is encoded as a signed 8-bit integer. I would want to decrease the plaintext value, because I don't have a fleet of high-powered transmitters, and if I succeed, the receiver would later think that the attenuation was lower than in reality --> the estimated distance was shorter than in reality.
So which bits should I flip? I'm somewhat afraid that if I decrease the "TX power" value below the received RSSI, the receiver might actually detect my manipulation and simply discard my modified packet.
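The detection idea speculated about here can be made concrete: attenuation is the claimed TX power minus the received RSSI, and it cannot plausibly be negative (a signal cannot arrive stronger than it was sent, absent antenna gain). A sketch of such a plausibility filter, assuming a hypothetical `max_gain_db` tolerance parameter that the protocol does not actually specify:

```python
def plausible(tx_power_dbm: int, rssi_dbm: int, max_gain_db: int = 0) -> bool:
    """Reject packets whose implied path loss is physically impossible.
    attenuation = claimed TX power - received RSSI; if an attacker blindly
    lowers the encrypted TX power byte, attenuation can go negative."""
    attenuation = tx_power_dbm - rssi_dbm
    return attenuation >= -max_gain_db

# Genuine packet: sent at 4 dBm, received at -70 dBm -> 74 dB path loss, kept.
ok = plausible(4, -70)
# Tampered: TX power byte driven down to -90 dBm, yet received at -70 dBm.
bad = plausible(-90, -70)
```

Note this filter only catches flips that *decrease* the claimed TX power past the received RSSI; flips that increase it remain undetectable by this check alone.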
Of course I will insist! You presented 8 options, but that's not all the options an attacker would have: they actually have 256, since one can combine the 8 primary ones. It's not just that I don't know the value. I also don't know:
You also seem to forget that an attacker doesn't need the exact initial value; they just need the result of a calculation tied to distance and attenuation to end up in the right "box" for the GAEN risk calculation (this is dependent on how exactly the API is leveraged by the app). In the attack scenarios described, the attacker also doesn't need specific people to be "caught", just a larger number than when the system works nominally, or even just a different set. This value being tampered can be thought of
So, all in all:
So what could an attacker achieve in the worst case at the application level and how much would they have to spend? |
Also,
Please enlighten me. The attenuation value is of course never constant; I guess your question refers to the TX Power value. This might vary with pose detection, i.e. the device could transmit with more power while it estimates that it is in a pocket, etc. I'm just speculating here, though. Unless someone presents a list of TX Power values of devices with high market penetration, and determines that the values are fixed and will stay like that for the next months, this attack vector stays purely academic.
@sventuerpe Leveraging the AEM tampering, Joel Reardon (UCalgary) and I came up with an SDK-based attack. See #308. I don't know if this answers your question. The cost to an attacker in the right position would be very small. At the application level, the consequences might be:
Certainly for the project building this application, it means: better documentation. |
I don't know if Google / Apple are using this table to determine the encrypted TX Power value.
Then I think the data referred to in this tweet by a member of the Swiss team will be it.
@sventuerpe Apologies. In #308 there is a link to a paper explaining how the AEM tampering can be leveraged to conduct population-scale re-identification and/or false positive attacks. We called it the SDK attack, and I started a new issue given that this one had been flagged as out of scope and closed. |
Sorry, I must have missed that. How do you envision using AEM tampering to conduct population-scale re-identification attacks? |
@mh- You are correct, I was imprecise. I should have said to @sventuerpe:
@pdehaye I think
is also imprecise; it is not required.
You are right, we didn't formally show it was required. I highly suspect it would make things easier and is in fact required, but I can't demonstrate that without:
Also, an SDK might not have/offer the option of modifying the maximum TX power.
For the record, because this might be relevant to #322: while the tampering with AEM values has to be done blindly, one can generate all the candidates obtained by flipping any combination of (say) the top three bits. Example: if the real metadata value is 10, which I don't know because it is encrypted, I can nevertheless generate the encrypted forms of 10 and of 42, 74, 106, -22, -54, -86, -118.
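The enumeration above can be checked mechanically. A sketch (the function name `flip_candidates` is illustrative, not from any spec): flipping any subset of the top three bits of the ciphertext byte flips the same bits of the signed 8-bit plaintext, yielding exactly 8 candidate values.

```python
def flip_candidates(value: int) -> set:
    """All signed 8-bit values reachable from `value` by flipping any
    subset of its top three bits. The attacker applies these flips to
    the AEM ciphertext byte without knowing the key; one of the eight
    resulting plaintexts is the original, the rest are candidates."""
    results = set()
    for mask in range(8):                       # 8 subsets of 3 bit positions
        flipped = (value & 0xFF) ^ (mask << 5)  # flip bits 5, 6, 7 per mask
        # Reinterpret as a signed 8-bit integer, as the TX Power byte is.
        results.add(flipped - 256 if flipped >= 128 else flipped)
    return results

# For a true value of 10, this reproduces the set quoted in the comment:
# {10, 42, 74, 106, -22, -54, -86, -118}
```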
Google has released the relevant data now: https://developers.google.com/android/exposure-notifications/files/en-calibration-2020-06-13.csv |
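With that CSV released, the per-model TX power table discussed earlier in the thread could be assembled mechanically. A sketch using an inline sample (the column names `oem`, `model`, `rssi_correction`, `tx` are assumptions about the file's header, which should be checked against the real download before relying on this):

```python
import csv
import io

# Hypothetical excerpt standing in for the Google calibration CSV;
# real column names and values may differ.
SAMPLE = """oem,model,rssi_correction,tx
Google,Pixel 4,-3,-27
Samsung,SM-G960F,2,-24
"""

def tx_power_by_model(raw_csv: str) -> dict:
    """Map device model -> calibrated TX power (dBm) from calibration data."""
    return {row["model"]: int(row["tx"])
            for row in csv.DictReader(io.StringIO(raw_csv))}

table = tx_power_by_model(SAMPLE)
```

If the values in the real file turn out to be stable per model, that would bear directly on the earlier "purely academic unless TX values are fixed and known" objection.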
In a recent report to the Swiss CSIRT, EPFL professor @vaudenay and Martin Vuagnoux point out that the Bluetooth metadata containing the transmission power is merely encrypted with AES-CTR, and not authenticated (cf. Section 3.5 here). This makes tampering with parts of the metadata possible, and greatly increases the chances of success of some forms of replay attacks, particularly in conjunction with the 2-hour validity of RPIs (see Section 3.6).
This makes the entire system vulnerable to false positive attacks, with the attack surface growing much faster than the utility of the system.
I have been told by some in the periphery of the German community working on Corona Warn that this flaw is well known here, but I have not been able to obtain confirmation. Has it been documented anywhere?