Digital-Mask

This is the repository of "A Digital Mask to Safeguard Patient Privacy" (Nature Medicine 2022).

The Digital Mask (DM) is a medical privacy protection technology based on 3D face reconstruction and deep learning. It takes original videos of patients as input, and outputs DM-reconstructed videos after digitalization. The DM-reconstructed videos can erase the identity attributes while retaining the clinical attributes needed for diagnosis and management.

Matters Arising

To better address the concerns of Meeus et al. and supplement our reply, we present additional experiments for reference. We believe these experiments help clarify the boundary of the DM technique, which is valuable for academic communication.

Within-mask vs. Cross-mask Attack

The experiments of Meeus et al. are based on the assumption that attackers know both the algorithm and the face model (the parametric model that represents a 3D face as shape and motion vectors), so the query and database masks are the same in their Mask2Mask setup (labeled as within-mask). In real-world applications, although the algorithm is public, the face model can be private to each institution, which means the most likely attack is a cross-mask attack rather than a within-mask attack.

To better reveal the ability of the DM technique in real-world applications, we test the Mask2Mask attack on our clinical dataset, including within-mask attacks similar to those of Meeus et al. and more realistic cross-mask attacks. For Mask2Mask (within-mask), as in Meeus et al., both the query and database images are DM masks. For Mask2Mask (cross-mask), the query image is a FLAME mask and the database image is a DM mask.
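
For reference, below is a minimal sketch of the rank-1 re-identification metric behind these two settings. The face encoder that produces the embeddings, and all variable names, are assumptions for illustration only; this is not the evaluation code used in the paper.

```python
# Minimal sketch of rank-1 re-identification accuracy (assumed setup, not the
# paper's evaluation code). query_emb[i] and gallery_emb[i] are embeddings of
# masks belonging to subject i, produced by any off-the-shelf face encoder.
import numpy as np

def rank1_accuracy(query_emb: np.ndarray, gallery_emb: np.ndarray) -> float:
    # L2-normalise so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sim = q @ g.T                     # pairwise similarity matrix
    best = sim.argmax(axis=1)         # most similar gallery entry per query
    return float((best == np.arange(len(q))).mean())

# Within-mask: query and database embeddings both come from DM masks.
#   rank1_accuracy(dm_query_emb, dm_gallery_emb)
# Cross-mask: the query embeddings come from FLAME masks instead.
#   rank1_accuracy(flame_query_emb, dm_gallery_emb)
```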

As shown in Fig. A, within-mask testing shows a phenomenon similar to that reported by Meeus et al.: the rank-1 accuracy rises to 59.25%. However, once the query and database masks differ, the rank-1 accuracy drops back to 0.5%. This experiment demonstrates that although our reconstruction algorithm is public, the DM technique still has strong anti-identification ability as long as our face model remains private, or is regularly adjusted, iterated, or replaced.

Within-video vs. Cross-video Attack

In the experiments of Meeus et al., the masks used for the query and database images are generated from the same video, which is not feasible in reality because the original video of a clinical examination is private and attackers can only access the DM video generated from it. Note that this is the major premise of the attack simulation; if attackers could access the original video, there would be no need for a Mask2Mask attack. Therefore, the query and database masks in the Mask2Mask setup should be generated from different videos.

As the scale of our clinical data is limited and unsuitable for a cross-video attack, we replicate Meeus et al.'s experiments on the same public dataset (the YouTube Faces dataset) using the RingNet algorithm and the FLAME face model (within-mask). Some subjects in the YouTube Faces dataset have multiple videos. We test the re-identification performance on 555 randomly sampled multi-video subjects under both the within-video and cross-video setups described above.
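
A minimal sketch of how query/database pairs could be drawn under the two setups is shown below; the `subject_videos` data layout and the sampling scheme are illustrative assumptions, not the authors' experimental code.

```python
# Illustrative sketch of within-video vs. cross-video pairing (assumed data
# layout, not the authors' code). `subject_videos` maps each subject id to a
# list of videos, where each video is a list of frames; subjects are assumed
# to have at least two videos and videos at least two frames.
import random

def make_pairs(subject_videos, cross_video: bool):
    """Return one (query_frame, gallery_frame) pair per subject."""
    pairs = {}
    for subject, videos in subject_videos.items():
        if cross_video:
            # query and database masks come from two different videos
            vid_q, vid_g = random.sample(videos, 2)
            frame_q, frame_g = random.choice(vid_q), random.choice(vid_g)
        else:
            # both masks come from the same video, as in Meeus et al.
            vid = random.choice(videos)
            frame_q, frame_g = random.sample(vid, 2)
        pairs[subject] = (frame_q, frame_g)
    return pairs
```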

As shown in Fig. B, the rank-1 accuracy of our within-video setup is 27.6%. The difference between our result and that reported by Meeus et al. may be caused by different selections of subjects and video frames. Nevertheless, when switching to cross-video testing, the rank-1 accuracy drops back to 1.4%. This demonstrates that even when the same face model is used for the attack (within-mask), the re-identification risk remains low as long as the query and database masks are generated from different original videos.

Trend Analysis of Risk

In real-world applications, the accuracy of re-identification can be further decreased by the larger number of subjects. As shown in Fig. C, re-identification becomes more difficult as the number of subjects grows, and the rank-1 accuracy converges at around 1,000 subjects. Both the experiments of Meeus et al. and ours are based on around 500 subjects, which means the reported rank-1 accuracies are upper-bound estimates, and the real-world risk can be lower than that shown in the experiments.
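
Part of this scaling effect can be illustrated with a small synthetic simulation: when masks carry no identity signal, the chance-level rank-1 accuracy behaves like 1/N and shrinks as the gallery grows. The snippet below uses random embeddings only and is not derived from the clinical data or from Fig. C.

```python
# Synthetic illustration of how the chance-level rank-1 accuracy shrinks with
# gallery size (random embeddings only, not the clinical data).
import numpy as np

rng = np.random.default_rng(0)
for n_subjects in (100, 500, 1000, 5000):
    q = rng.normal(size=(n_subjects, 128))
    g = rng.normal(size=(n_subjects, 128))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    acc = ((q @ g.T).argmax(axis=1) == np.arange(n_subjects)).mean()
    print(f"{n_subjects:5d} subjects: chance-level rank-1 ~ {acc:.4f} (about 1/{n_subjects})")
```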

To sum up, attackers need to apply the same algorithm and face model to the same original video to achieve an "effective" attack. However, even under such strict conditions, the rank-1 accuracy is only about 50% among hundreds of subjects. Therefore, we believe the DM technique remains effective for anti-identification.

Code

To access the Digital Mask code, you need to sign the license and send it to:

A private link for downloading the Code will be provided after verification.

Please ask your administration to fill in the license. Requests from students may not be answered.
