automation-of-contagion-vigilance #10

Open
DavidStodolsky opened this issue Mar 14, 2020 · 1 comment

@DavidStodolsky

It is possible to both track infectious agents and measure the effect on behavior with a Privacy by Design app.
The false positive problem is reduced because the proximity data is generated by the phones communicating directly with each other.
Over twenty years ago, I published a paper showing how to deal with inapparent infection:

Stodolsky, D. S. (1997). Automation of Contagion Vigilance. Methods of Information in Medicine, 36(3), 220-232.

https://sites.google.com/a/secureid.net/dss/automation-of-contagion-vigilance

A small study showed acceptability of the approach:

Stodolsky, D. S., & Zaharia, C. N. (2009). Acceptance of Virus Radar. The European Journal of ePractice, 8, 77-93.

https://drive.google.com/open?id=0B_zxYlTkSnKQZXFsXzNwSDd3ZGs

After half a dozen attempts to get funding, I gave up on the idea of a contagion management test. After the #MeToo media explosion, I decided that I might pursue a test by focusing on the frontend of the design. A successful workshop led nowhere. I continue to seek an alternative strategy for funding this research.

@DavidStodolsky (Author)

Two things are needed for virus radar to function. First, the infectious agent must be tracked. In the simplest case, when two phones come within a few meters of each other, they exchange information in order to record the risky contact. This pairing info is then transferred to a database that can be used to track the potentially infective contacts. This kind of tracking was demonstrated in the DTU/KU project headed by Sune Lehmann Jørgensen:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130824

“In the scientific realm, the mobility patterns of entire social systems are important for modeling spreading of epidemics on multiple scales: metropolitan networks [7–9] and global air traffic networks [10, 11]; traffic forecasting [12]; understanding fundamental laws governing our lives, such as regularity [13], stability [14], and predictability [15]. ”

The code from this project could be repurposed for virus radar real-time tracking.
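
For concreteness, here is a rough sketch of that pairing step (not taken from the DTU/KU code), assuming some radio layer invokes a callback whenever another phone is detected nearby; all names and the distance threshold are illustrative:

```python
# Minimal sketch of proximity logging: record a risky contact when another
# phone comes within a few meters, then transfer the pairing info to a
# database. Names and thresholds are illustrative, not from the DTU/KU code.
import time
from dataclasses import dataclass, asdict

PROXIMITY_THRESHOLD_M = 3.0  # "within a few meters"

@dataclass
class Contact:
    own_id: str        # this phone's identifier
    peer_id: str       # identifier received from the other phone
    timestamp: float   # when the encounter was recorded
    distance_m: float  # estimated distance at the time of the encounter

class ContactLogger:
    def __init__(self, own_id: str):
        self.own_id = own_id
        self.pending: list[Contact] = []

    def on_peer_detected(self, peer_id: str, estimated_distance_m: float) -> None:
        """Record a risky contact when another phone is close enough."""
        if estimated_distance_m <= PROXIMITY_THRESHOLD_M:
            self.pending.append(
                Contact(self.own_id, peer_id, time.time(), estimated_distance_m))

    def flush_to_database(self, db) -> None:
        """Transfer the pairing information to the tracking database."""
        for contact in self.pending:
            db.insert(asdict(contact))  # `db` is a stand-in for any store
        self.pending.clear()
```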

The second issue is how to use the risk information. Once a person has tested positive for COVID-19, that user can be marked as a confirmed case to be avoided by at-risk persons. In the simplest case, an open database makes it possible to search for the ID of any person (actually their phone ID) to determine whether they are infected. A more advanced search would indicate whether the person had been in contact with a (now) known infected person, and so on. This is apparently the strategy used in South Korea, China, etc. for tracking. However, the population of Denmark is unlikely to cooperate fully, given the total lack of privacy that such a system requires. The result would be massive stigmatization: at-risk persons would flee at the approach of anyone marked in the database.
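
Such a lookup might look roughly like this; the database handle and its table and field names are hypothetical:

```python
# Sketch of an open-database query: is this phone ID a confirmed case,
# or has it been in contact with one? All names are illustrative.
def infection_status(db, phone_id: str) -> str:
    """Return 'infected', 'exposed', or 'no known risk' for a phone ID."""
    if db.confirmed_cases.contains(phone_id):
        return "infected"
    # The "more advanced" search: any contact with a now-confirmed case?
    for contact in db.contacts.by_phone(phone_id):
        if db.confirmed_cases.contains(contact.peer_id):
            return "exposed"
    return "no known risk"
```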

What I suggest in the paper is that all risk data be anonymized by assigning a random number to each risky contact. Once a person tested positive, these numbers from their phone would be broadcast to the entire population. If your phone recognized a broadcast number as one it had recorded for a risky contact, that would indicate you had been exposed to the infectious agent. You would then report for testing, assuming a system based upon voluntary cooperation.

To ensure cooperation, health certificates would be broadcast to all users on a daily basis. A person who did not report for testing would not get a fresh certificate. At-risk persons' phones would automatically check the digitally signed certificate of an approaching person, and an alert would be issued if any approaching person could not be confirmed as safe. I show how to do this without stigmatization, using privacy-preserving negotiation, in the appendix of the paper (both steps are sketched below). A multi-stage “failsafe flirting” model is outlined in my workshop abstract:

https://groups.io/g/MedicalEthics/message/5
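
As a rough illustration of the anonymized matching step, here is a minimal sketch assuming the two phones agree on a shared random number at the moment of each risky contact; the names are mine, not from the paper:

```python
# Sketch of anonymized exposure matching: each risky contact is labelled
# with a random number stored on both phones; after a positive test, the
# numbers from that phone are broadcast and every phone checks them
# against its own stored numbers. All names are illustrative.
import secrets

def new_contact_token() -> str:
    """Random number agreed during a risky contact; both phones store it."""
    return secrets.token_hex(16)

class VirusRadarPhone:
    def __init__(self) -> None:
        self.stored_tokens: set[str] = set()

    def record_risky_contact(self, token: str) -> None:
        self.stored_tokens.add(token)

    def tokens_for_broadcast(self) -> set[str]:
        """If this phone's owner tests positive, these numbers are broadcast."""
        return set(self.stored_tokens)

    def exposed_by(self, broadcast_tokens: set[str]) -> bool:
        """True if any broadcast number matches a locally stored contact."""
        return bool(self.stored_tokens & broadcast_tokens)

# Two phones label one encounter with the same random number; when one
# owner later tests positive, the other phone recognizes the broadcast.
a, b = VirusRadarPhone(), VirusRadarPhone()
token = new_contact_token()
a.record_risky_contact(token)
b.record_risky_contact(token)
assert b.exposed_by(a.tokens_for_broadcast())  # b's owner should report for testing
```

And a sketch of the daily certificate check, assuming the health authority signs a phone ID plus the current date with Ed25519 and that phones hold its public key; it uses the third-party `cryptography` package, and again all names are only illustrative:

```python
# Sketch of the health-certificate check: the authority issues a dated,
# signed certificate to users who comply; an at-risk phone verifies the
# signature and freshness of an approaching person's certificate and
# raises an alert otherwise.
import datetime
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def issue_certificate(authority_key: Ed25519PrivateKey, phone_id: str) -> tuple[bytes, bytes]:
    """Health authority issues a fresh certificate each day to compliant users."""
    message = f"{phone_id}|{datetime.date.today().isoformat()}".encode()
    return message, authority_key.sign(message)

def approaching_person_is_safe(authority_pub: Ed25519PublicKey,
                               message: bytes, signature: bytes) -> bool:
    """Verify signature and freshness; False means the phone issues an alert."""
    try:
        authority_pub.verify(signature, message)
    except InvalidSignature:
        return False
    _, date_str = message.decode().rsplit("|", 1)
    return date_str == datetime.date.today().isoformat()
```

The privacy-preserving negotiation from the paper's appendix would sit on top of this exchange; the sketch only covers the signature and freshness check.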

Anyone interested in helping develop this model should subscribe to the list. This gives access to the files area, which contains slides from the presentation, etc.
