
Pseudonyms Generated and Managed by Authority, Communicated to Users #2

Open
nadimkobeissi opened this issue Apr 18, 2020 · 7 comments
Labels
privacy risks

Comments

@nadimkobeissi nadimkobeissi commented Apr 18, 2020

The ROBERT summary document contains the following diagram, showing authorities generating pseudonyms and transmitting them directly to users:

[Figure: diagram from the ROBERT summary document showing the authority generating pseudonyms and transmitting them directly to users]

However, Section 1.3 of version 1.0 of the ROBERT specification states that, as a security and privacy requirement, ROBERT mandates the following:

Anonymity of users from a central authority. The central authority should not be able to learn information about the identities or locations of the participating users, whether diagnosed as COVID-positive or not.

And yet, this assumption is only meant to hold under an honest authority:

The authority running the system, in turn, is “honest-but-curious”. Specifically, it will not deploy spying devices or will not modify the protocols and the messages. However, it might use collected information for other purposes such as to re-identify users or to infer their contact graphs. We assume the back-end system is secure, and regularly audited and controlled by external trusted and neutral authorities (such as Data Protection Authorities and National Cybersecurity Agencies).

Furthermore, Section 2.2 states the following:

When a user wants to use the service, she installs the application, App, from an official App store (Apple or Google). App then registers to the server that generates a permanent identifier (ID) and several Ephemeral Bluetooth Identifiers (EBIDs). The back-end maintains a table, IDTable, that keeps an entry for each registered ID. The stored information is “anonymous” and by no means associated with a particular user (no personal information is stored in IDTable).
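To make the registration flow quoted above concrete, here is a minimal sketch of the server side. This is an illustration only, not the real ROBERT key schedule: the spec derives EBIDs from server-side secrets, which is simplified here to an HMAC over a hypothetical server key, the permanent ID, and an epoch number.

```python
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)   # hypothetical global server secret (not the ROBERT key schedule)
id_table = {}                 # IDTable: permanent ID -> per-user state, no personal information

def register_app(num_ebids: int = 4) -> dict:
    """Register an App: the server creates a permanent ID and several EBIDs."""
    permanent_id = os.urandom(8)  # permanent identifier (ID)
    ebids = [
        hmac.new(SERVER_KEY, permanent_id + epoch.to_bytes(4, "big"),
                 hashlib.sha256).digest()[:8]
        for epoch in range(num_ebids)
    ]
    # The entry stores only the pseudonymous material, nothing personal.
    id_table[permanent_id] = {"ebids": ebids}
    return {"id": permanent_id, "ebids": ebids}

reg = register_app()
```

Note that even in this simplified form, the server necessarily knows every user's permanent ID and all of its EBIDs, which is exactly the trust concentration at issue in this thread.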

In short, all of ROBERT is built on trust in central authorities and the assumption that they will behave honestly and be impervious to third-party compromise. I am unable to see how this is a strong, or even a serious and realistic, approach to real user privacy. Could you please justify how this protocol achieves any privacy from authorities, and how the current model of assuming that all authorities are:

  • Completely honest,
  • Impervious to server/back-end compromise,
  • Impervious to any transport-layer compromise or impersonation,

...is in any way realistic or something that can be taken seriously as a privacy-preserving protocol? Given the level of trust you are placing in authorities, and given that authorities are responsible for generating, storing, and communicating all pseudonyms directly to users' devices, what security property does ROBERT actually achieve in terms of pseudonymity between authorities and users?

Furthermore, it appears that the trust model for ROBERT is such that the server allocates pseudonyms and is thereafter trusted never to examine the social graph or any network relationship graph of its users. How could this possibly be a reasonable assumption for a privacy-preserving protocol?

@aboutet aboutet added the privacy risks label Apr 19, 2020
@dbeniamine dbeniamine commented Apr 19, 2020

Why is there a need for a central authority to manage pseudonyms anyway?

In a similar protocol, DP3T, each device generates its own randomized pseudonyms.

The central authority is only contacted to obtain the pseudonyms of infected people, and thus learns far less information about users.
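The decentralized alternative described above can be sketched as follows, under simplified assumptions (the key and ID sizes, and the HMAC-based derivation, are illustrative, not the actual DP3T construction): each device derives its ephemeral pseudonyms locally from a secret day key that never leaves the device.

```python
import hashlib
import hmac
import os

def daily_ephemeral_ids(day_key: bytes, n: int = 96) -> list:
    """Derive n ephemeral IDs for one day from a locally generated day key."""
    return [
        hmac.new(day_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
        for i in range(n)
    ]

day_key = os.urandom(32)  # generated on-device; never sent to any server
eph_ids = daily_ephemeral_ids(day_key)
```

The server only ever sees keys that infected users choose to upload, which is why it learns much less about non-infected users than a server that allocates every pseudonym itself.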

@bortzmeyer bortzmeyer commented Apr 19, 2020

Why is there a need for a central authority to manage pseudonyms anyway?

I don't want you to believe I support the design of the ROBERT protocol (I don't) but your specific point is addressed in appendix A. (Summary: it is to avoid repetition of the "one contact" attack.)

@ThomasFournaise ThomasFournaise commented Apr 19, 2020

@bortzmeyer the "one contact" attack is countered by sending random "false positives".
@dbeniamine with central verification, you don't know when you may have been in contact with a COVID-positive person (except in the one-contact case); you only receive a notification. With local verification, you receive a list of infected IDs; if you log when you received these IDs (by building your own app, for example), you will know that you were exposed on this day, at this time, and therefore at this place. If you keep a full diary, you may be able to work out which person infected you.
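The linkage attack sketched above amounts to a simple join between a local observation log and the published list of infected IDs. A minimal illustration with hypothetical data (the EBID values and timestamps are invented for the example):

```python
from datetime import datetime

# A modified app logs every ephemeral ID it hears, with a timestamp.
observation_log = [
    (b"ebid-1", datetime(2020, 4, 19, 9, 30)),
    (b"ebid-2", datetime(2020, 4, 19, 12, 5)),
    (b"ebid-3", datetime(2020, 4, 19, 18, 40)),
]

# With local verification, the set of infected IDs is delivered to the device.
published_infected = {b"ebid-2"}

# Joining the two reveals exactly when (and, given a diary, where) each
# contact with an infected user occurred.
risky_contacts = [(e, t) for e, t in observation_log if e in published_infected]
```

This is the trade-off the comment describes: moving verification to the device exposes infected users' pseudonyms to anyone who records when they were observed.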

@PRIVATICS-Inria PRIVATICS-Inria commented Apr 30, 2020

Thanks @KAepora for raising this issue.

In the ROBERT scheme v1.0, the server is indeed responsible for generating, storing, and communicating pseudonyms. This is done for two main reasons:

  • A user can only perform an Exposure Status Request for her own pseudonyms. The pseudonyms are linked to a secret key K_A on the server, and the server verifies that the request originates from the owner of K_A before answering. Therefore, an attacker cannot query the server for the exposure status of another user.

  • The pseudonyms of infected users are not exposed to other users. Since the exposure status is computed on the server, there is no need to publish the pseudonyms of infected users, as some other solutions do.
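The first point above can be illustrated with a toy authentication check. This is a sketch only: the message format and function names are hypothetical, and the real ROBERT Exposure Status Request is more involved; the point is just that the server answers only requests MACed under the requester's own key K_A.

```python
import hashlib
import hmac
import os

def make_esr(k_a: bytes, ebid: bytes) -> bytes:
    """Client side: authenticate an Exposure Status Request with K_A."""
    return hmac.new(k_a, b"ESR" + ebid, hashlib.sha256).digest()

def server_check_esr(stored_k_a: bytes, ebid: bytes, mac: bytes) -> bool:
    """Server side: answer only if the MAC verifies under the stored K_A."""
    expected = hmac.new(stored_k_a, b"ESR" + ebid, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)

k_a = os.urandom(32)          # shared at registration, stored server-side
ebid = b"\x01" * 8
mac = make_esr(k_a, ebid)

owner_ok = server_check_esr(k_a, ebid, mac)             # owner's request accepted
attacker_ok = server_check_esr(os.urandom(32), ebid, mac)  # wrong key rejected
```

Note that this property only prevents third parties from querying another user's status; it does nothing to hide the user's pseudonyms or contact graph from the server itself, which holds K_A.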

Regarding the pseudonymity of users with respect to the server: the server does not store any identifiers other than those in the IDTable database (see Section 3.2, Application Registration (Server Side)). During Application Registration and the Exposure Status Request, the app may expose network identifiers that could reveal the user's real identity. To mitigate this risk, solutions such as mixnets or proxies could be used (as suggested in Section 6, footnote 11).

More generally, concerning the “honest but curious” assumption: this is a key assumption for the ROBERT v1.0 design as you noticed. It is not our responsibility, as privacy researchers, to judge whether or not this assumption is valid.

This topic could be discussed for hours, clearly. However, when looking at the “avis CNIL sur le projet d’application mobile StopCovid”, we have the feeling this is a reasonable assumption.

@nadimkobeissi nadimkobeissi commented May 26, 2020

@PRIVATICS-Inria

More generally, concerning the “honest but curious” assumption: this is a key assumption for the ROBERT v1.0 design as you noticed. It is not our responsibility, as privacy researchers, to judge whether or not this assumption is valid.

Would you care to justify this statement, which appears laughably absurd if taken at face value?

This topic could be discussed for hours, clearly. However, when looking at the “avis CNIL sur le projet d’application mobile StopCovid”, we have the feeling this is a reasonable assumption.

Incredibly disappointing, but somehow not surprising, to see an appeal to authority used in a scientific discussion by the INRIA Privatics team.

@everdha everdha commented May 30, 2020

The statement of the CNIL (n° 2020-056 of May 25th, 2020) relies precisely on the assumption that the server cannot know the contact list of an infected user (see point 41). You are responding to our concern that this point has not been addressed by the protocol by saying that the CNIL states it's fine? Aren't we in a circular-reasoning loop here?
How, precisely and technically, are you addressing this issue, if you are?

And yes indeed, we do not have hours and days to discuss this: the app has been approved by vote in France and will be deployed within hours, which is why we would like a clearer, more scientific answer.

@nadimkobeissi nadimkobeissi commented Jul 20, 2020

This topic could be discussed for hours, clearly. However, when looking at the “avis CNIL sur le projet d’application mobile StopCovid”, we have the feeling this is a reasonable assumption.

Hey @PRIVATICS-Inria, still have the feeling this is a reasonable assumption?
