
At risk status directly correlated to identity. (and/or even weaker model) #45

Open
oblazy opened this issue Apr 22, 2020 · 4 comments


oblazy commented Apr 22, 2020

Looking at the "specification" document:

It is said (on p11) that the server explicitly computes the risk score

  1. computes a "risk score" value, derived in part from the list LEE_A.

before sending that (properly encrypted) to the user.

So at this step, this means that the server explicitly correlates the fact that the user is at risk with their IP address.

Do you consider it outside your model to give this information to the server for free?
(No obfuscated computation is done there: the server knows that it needs to send an encryption of 0/1 to a given IP.)
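
To make the concern concrete, here is a minimal sketch of that step as I read it (Flask-based; every name and the scoring logic are mine for illustration, not taken from the specification or from any actual backend code):

```python
# Illustrative only: hypothetical names, not the real backend.
from flask import Flask, request, jsonify

app = Flask(__name__)
RISK_THRESHOLD = 1.0  # arbitrary placeholder

def compute_risk_score(lee_a):
    # Placeholder scoring over the list LEE_A of exposure entries.
    return sum(entry.get("weight", 0.0) for entry in lee_a)

def lookup_exposure_list(user_id):
    # Placeholder for the server-side lookup of LEE_A.
    return []

def encrypt_answer(bit):
    # Placeholder for the encryption of the 0/1 answer sent back to the app.
    return {"ciphertext": f"Enc({bit})"}

@app.route("/status", methods=["POST"])
def status():
    user_id = request.json["id"]
    at_risk = compute_risk_score(lookup_exposure_list(user_id)) > RISK_THRESHOLD
    # At this point the server holds, in the clear, the pair
    # (request.remote_addr, at_risk). The encryption below protects the
    # answer on the wire, not from the server itself.
    return jsonify(encrypt_answer(int(at_risk)))
```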

Of course, you might assume various proxies / NATs / outsourced computation of the risk score, but this would mean that, in addition to the weak honest-but-curious (HBC) model, you assume that none of the parties can ever collude. You would also need to add authentication between those sub-parties in the back end.
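
For completeness, a sketch of the kind of relay this would require, just to show the extra assumptions it drags in (the endpoint and token below are hypothetical; in practice you would want mutual authentication, e.g. mTLS, between the proxy operator and the backend, and the two must never collude):

```python
# Illustrative relay operated by an independent third party: it forwards the
# application payload but hides the client's source IP from the backend.
import requests
from flask import Flask, request, Response

app = Flask(__name__)
BACKEND_URL = "https://backend.example/status"  # hypothetical
PROXY_TOKEN = "shared-secret"                   # stand-in for real mutual authentication

@app.route("/status", methods=["POST"])
def relay():
    # Only the payload is forwarded; no X-Forwarded-For or client IP is added.
    resp = requests.post(
        BACKEND_URL,
        json=request.json,
        headers={"Authorization": f"Bearer {PROXY_TOKEN}"},
        timeout=5,
    )
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type", "application/json"))
```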

===

As a side note... this is not "computer security" related, but more of a political consideration if the app ends up being adopted.

Given that, once considered "at risk", the user is banned from the app until proven not sick by a healthcare professional, does this mean that adopting this solution is tied to being ready to deploy a wide range of tests, even (and especially) for asymptomatic patients?

@vincent-grenoble vincent-grenoble added the privacy risks everything that concerns privacy, personal data, privacy risk analysis label Apr 22, 2020
@vincent-grenoble vincent-grenoble self-assigned this Apr 23, 2020
@vincent-grenoble
Contributor

Hello,

  1. Regarding the IP topic: yes, you're perfectly right. Whether the smartphone is connected through 4G (IP assigned to the phone, following a strategy specific to the mobile operator) or WiFi (IP assigned to the home router, I think in a relatively stable manner), the backend server captures a technical identifier. If you additionally assume collusion with the mobile operator or ISP, the situation is even worse. Using a proxy can relax this threat a little, provided the proxy is operated by a trusted third party. Once again, an assumption. I fully agree.

With DP3T, following the same logic, the server learns the IP address of the smartphone of a person who is infected and uploads their key. With DP3T plus the Google/Apple API (in its current form), those two giants know the infection status of the user. Not much better or worse (it depends).

I agree with you: there are limits, in all systems. And we (which includes our DP3T colleagues as well) do our best to reduce those risks, as best we can, in a very short time frame. This is not the way research usually happens! We have no other choice than to make certain assumptions...

  2. Regarding the user notified as "at risk" being blocked: this is perhaps something that may change. We included this measure to mitigate a specific attack, but it is not necessarily the best approach (it is not sufficient to fully prevent the attack). So it may change...

Regarding the availability of tests, it is clear that any proximity tracing system is part of a much larger health system. It can help identify persons who should be tested in priority. The "scoring" function itself is not finalised; it will probably change over time following epidemiologists' understanding; it will probably differ between countries (sovereignty), etc. It is a bit orthogonal to our work, but essential.

In any case, thanks for your feedback, which enables us to give some insight into the current design (which is not finalised) and to spot issues.

Cheers, Vincent

@vincent-grenoble
Contributor

By the way, do you see the paradox here: we are a group of researchers specialised in privacy considerations, and we are asked to build a "proximity tracing" system! Just the opposite of what we would usually do.

An example among others: Nataliia's work with her colleagues on bad practices with cookie banners by some actors:
https://www-sop.inria.fr/members/Nataliia.Bielova/cookiebanners/

Another example: our MOOC on "respect de la vie privée dans le monde numérique" (privacy in the digital world), currently active but in French only:
https://www.fun-mooc.fr/courses/course-v1:inria+41015+session04/info

What we are all living through is so unusual. We're just trying to do our best.

@Matioupi

> By the way, do you see the paradox here: we are a group of researchers specialised in privacy considerations, and we are asked to build a "proximity tracing" system! Just the opposite of what we would usually do.

I understand that there is a kind of play on words here. Still, from that wording, I have the feeling that you were not really given the opportunity to say "no", and perhaps did not even agree with what you were asked to develop.


oblazy commented Apr 23, 2020

Hi Vincent, thank you for your answer.
I agree that this app is the opposite of your "traditional" work.

On the other hand, I am on the committee in charge of the national GDR on cryptography (in France), and also in one of the leading groups in France in terms of cybersecurity; I spent two days next to Nataliia at the last ANR conference.
So I have to admit that learning about this project via the press, and hearing you say that you tried contacting experts and they did not answer, is... surprising, and limits my ability to feel for you...

Anyway, at this stage, I think our goal is to try to have something as non-intrusive and as privacy-preserving as possible, so let's focus on that; the rest is less urgent...
