
Low sample quality #295

Closed
woodbe opened this issue Feb 27, 2020 · 9 comments

Comments

@woodbe
Collaborator

woodbe commented Feb 27, 2020

The overview for each toolbox should have a section that discusses the expectations regarding low-quality samples. While the discussion may not be exhaustive, it should cover common ways to create low-quality samples and how they can be used. This is a corollary to #262, to be added into the BIOSD.

@woodbe
Collaborator Author

woodbe commented Oct 8, 2020

Will review between now and the next call (10/22/20) and determine next actions then.

@woodbe
Collaborator Author

woodbe commented Oct 19, 2020

I reviewed the toolbox overviews, and at this time I'm not exactly sure where to put it or what the content should be. Part of the problem is that what counts as low quality for one sensor may not for another. The BIOSD has generic text about checking with the vendor to ensure good quality (or low quality for testing), which may be sufficient.

I checked what the FIDO Alliance spec says too, and it does not get into that detail. It talks about the quality of the information, but not at the level of detail this issue was originally pointing at.

What I think we may want to consider is expanding the documentation requirements for the quality measures for FIA_MBV_EXT.2 so that they are more explicit about what is expected from the vendor. This could be a 1.1 or 2.0 update after some evaluations have been performed.

@n-kai
Collaborator

n-kai commented Oct 20, 2020

"get more explicit about what is expected from the vendor"

Each vendor implements a different algorithm for quality assessment of samples, so we can only ask vendors to explain in their own words, and in sufficient detail, what "low quality" means (including hints or instructions on how to create such "low quality" samples) so that the evaluator can test the TOE using them.

We can describe examples of how to measure quality, and of "low quality" samples, in each toolbox overview (like those described in *1)) so that vendors can understand more clearly what is expected of them.

*1) https://link.springer.com/article/10.1186/1687-5281-2014-34#Tab3
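As a concrete illustration of the kind of "hints or instructions about how to create such low quality samples" a vendor might supply, a degradation recipe can be as simple as blurring and under-exposing a known-good sample. This is a hypothetical sketch (NumPy, with illustrative parameters), not anything mandated by the cPP or BIOSD:

```python
import numpy as np

def box_blur(img: np.ndarray, passes: int = 3) -> np.ndarray:
    """Repeated 3x3 box blur: a crude way to degrade sharpness."""
    out = img.astype(float)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        # Average the 9 shifted views that make up each 3x3 neighbourhood.
        out = sum(padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return out

def degrade(img: np.ndarray, blur_passes: int = 3, gain: float = 0.3) -> np.ndarray:
    """Produce a deliberately low-quality variant: blurred and under-exposed.

    blur_passes and gain are illustrative knobs, not values from any spec.
    """
    return np.clip(box_blur(img, blur_passes) * gain, 0, 255)
```

An evaluator could then feed `degrade(sample)` into the TOE and confirm it is rejected at enrolment or verification, per whatever criteria the vendor documents.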

@gfiumara
Contributor

Agree with @n-kai that quality for these integrated systems is going to be vendor-specific. Even for known quantities like fingerprint, where NFIQ might be used in other contexts, it's likely not relevant for the small non-FTIR sensors typically found in mobile devices.

Also agree with @n-kai that the best approach might be to discuss attributes that, in general, might indicate or contribute to low quality, and to point to the relevant ISO/IEC 29794 series documents.

@woodbe
Collaborator Author

woodbe commented Oct 20, 2020

Based on this document, and possibly references to ISO documents as well, should we instead consider updating the toolbox overview document? This one paper covers 3 of our 4 modalities, and a combination of them would probably work for vein (you could likely take the fingerprint and eye material and derive some expectations on low quality there). Should we just use these as a general point that applies to all modalities, with these documents providing the details?

My one concern: having this in the toolboxes is good in that quality is critical for PAD, but PAD is optional. Should we instead provide a better general set of references in the BIOSD so it explicitly applies to enrolment and verification, not just PAD? We did add some text about this to the BIOSD, and I created this issue to say we should add further details in the modality toolboxes, but I'm not certain that is the right place (now, anyway).

@gfiumara
Contributor

We almost have to require vendors to implement a quality algorithm, since several documents refer to "sufficient quality." If that's something to do in the toolbox overview, then yes.
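For context on what "implement a quality algorithm" might mean in its simplest form, here is a hypothetical threshold-based gate. The metrics (Laplacian-variance focus, mean brightness) and the thresholds are illustrative assumptions, not anything the BIOSD or toolboxes prescribe; a real vendor would use modality-specific measures:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Focus measure: variance of a 4-neighbour Laplacian (higher = sharper)."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def is_low_quality(img: np.ndarray,
                   sharpness_floor: float = 50.0,
                   brightness_range: tuple = (40.0, 220.0)) -> bool:
    """Flag samples that are blurry or badly exposed (thresholds are illustrative)."""
    mean = float(img.mean())
    too_dark_or_bright = not (brightness_range[0] <= mean <= brightness_range[1])
    too_blurry = laplacian_variance(img) < sharpness_floor
    return too_dark_or_bright or too_blurry

# A uniform (flat) image has zero Laplacian variance, so it is flagged.
flat = np.full((64, 64), 128.0)
print(is_low_quality(flat))  # True
```

The point of the documentation requirement is that whatever gate the vendor actually uses, its criteria and thresholds are described well enough for the evaluator to construct samples that fail it.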

@woodbe
Collaborator Author

woodbe commented Oct 20, 2020

@gfiumara I originally created this as the corollary to the quality edits we added to the BIOSD, but at that point I was also thinking of PAD as the place that would be impacted. Reading this now, though, I think the PAD focus, and hence putting the statements in the toolboxes (either the main overview or the specific modalities), is the wrong approach.

This line is in the BIOSD:

"Supplementary information (Assessment criteria for samples) shall describe assessment criteria for creating samples"

but it clearly leaves out any guidance as to what the criteria should be. I think this is probably the most practical definition of a sample with sufficient quality (though we then use "sufficient quality" again to talk about the quality of the templates that are created, too).

We reference this documentation when stating how low-quality samples would be used to show they don't meet the requirements for use, and we specify that the lab can ask the vendor about it.

So I'm thinking that IF we want to make a change for this, it should be a TD to the BIOSD: maybe add a small section about quality in a generic sense, and add this paper as a specific reference.

That said, I'm not 100% certain we need to do this right now as a TD, or whether we can just add it to the v1.1 text and let it wait for the next update. I no longer think this belongs in any toolbox document; my original framing was incorrect.

To be clear, we did add text about ensuring sufficient quality to the BIOSD, and my thinking here was that we should add some additional text as well. So we may be discussing a topic that doesn't need any more content at this point, but this issue had been created and I would like to close it out or assign it to future work.

@n-kai
Collaborator

n-kai commented Oct 21, 2020

I agree with assigning this to future work after we have experience from CC evaluations. (If a vendor or lab asks us for more guidance, we can add text based on that detailed feedback.)

@woodbe
Collaborator Author

woodbe commented Oct 22, 2020

@woodbe will propose a TD to add the paper linked by @n-kai to the BIOSD as an additional reference, and will also consider internal references to it specifically.

@woodbe woodbe transferred this issue from biometricITC/cPP-toolboxes Oct 22, 2020
@woodbe woodbe assigned woodbe and unassigned gfiumara, n-kai, lizb, The-Fiona and ccolin318 Oct 22, 2020
@project-bot project-bot bot added this to To Do in Interpretation Team Oct 22, 2020
@project-bot project-bot bot moved this from To Do to Awaiting Review in Interpretation Team Oct 23, 2020
@project-bot project-bot bot moved this from Awaiting Review to To Do in Interpretation Team Oct 23, 2020
@project-bot project-bot bot moved this from To do to Awaiting Review in PP-Module v1.1 Maintenance Oct 23, 2020
@woodbe woodbe moved this from To Do to Awaiting Review in Interpretation Team Oct 23, 2020
@woodbe woodbe closed this as completed Mar 9, 2021
PP-Module v1.1 Maintenance automation moved this from Awaiting Review to Complete Mar 9, 2021
Interpretation Team automation moved this from To Do to Completed Mar 9, 2021
Interpretation Team automation moved this from Awaiting Review to Completed Mar 9, 2021
@woodbe woodbe removed this from Complete in PP-Module v1.1 Maintenance Mar 9, 2021
Projects
Interpretation Team
  
Completed
Development

No branches or pull requests

6 participants