Implement strategy for assessing the quality of the model during lifelong training #87

jcohenadad opened this issue Nov 9, 2023 · 2 comments
@jcohenadad (Member)
As we are adding more contrasts and re-training the model over time (see e.g. #83, #74, ivadomed/canproco#46), we need to put in place a quality-assessment procedure to track shifts in model performance across the various data domains (i.e., to monitor catastrophic forgetting).
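A minimal sketch of what such a check could look like, not the project's actual pipeline: after each lifelong-training round, re-evaluate the model on a fixed per-contrast test set and flag any contrast whose score drops beyond a tolerance, as a simple proxy for catastrophic forgetting. The metric, the dictionary layout, and the numbers in the usage example are assumptions for illustration only.

```python
from typing import Dict


def check_forgetting(
    scores_prev: Dict[str, float],   # e.g. {"T1w": 0.92, "T2w": 0.90, ...} (hypothetical Dice scores)
    scores_new: Dict[str, float],
    tol: float = 0.02,               # allowed drop before flagging (assumed tolerance)
) -> Dict[str, float]:
    """Return the per-contrast score drop for contrasts that regressed beyond `tol`."""
    regressions = {}
    for contrast, prev in scores_prev.items():
        new = scores_new.get(contrast)
        if new is None:
            continue  # contrast not evaluated in the new round
        drop = prev - new
        if drop > tol:
            regressions[contrast] = drop
    return regressions


# Usage with made-up numbers: T2w drops by ~0.05, beyond the tolerance, so it is flagged.
prev = {"T1w": 0.92, "T2w": 0.91, "T2star": 0.89}
new = {"T1w": 0.93, "T2w": 0.86, "T2star": 0.90}
print(check_forgetting(prev, new))
```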

@naga-karthik (Collaborator)
A relevant theory paper I found: Understanding Continual Learning Settings with Data Distribution Drift Analysis. It essentially describes the theory of data distribution shifts, proposes new concepts for analyzing model/data drift, and reviews some existing concepts in lifelong learning related to this phenomenon.

Relevant sections: Sections 3.1, 3.2, 4.1, and 6.2

@jcohenadad (Member, Author)

One validation idea is to compute the CSA (cross-sectional area) variation across contrasts on the test set of the spine-generic data.
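A minimal sketch of that CSA-variation idea, assuming per-subject CSA values (in mm²) have already been extracted for each contrast (e.g. with SCT's sct_process_segmentation). The CSV layout and column names are assumptions, not the project's actual output format.

```python
import pandas as pd


def csa_variation(csv_path: str) -> pd.DataFrame:
    """Per-subject coefficient of variation (%) of CSA across contrasts."""
    # Expected (assumed) columns: subject, contrast, csa_mm2
    df = pd.read_csv(csv_path)
    stats = df.groupby("subject")["csa_mm2"].agg(["mean", "std"])
    stats["cov_percent"] = 100 * stats["std"] / stats["mean"]
    return stats


# A model with less contrast-dependent bias should yield a lower mean CoV
# across the spine-generic test subjects, e.g.:
# print(csa_variation("csa_per_contrast.csv")["cov_percent"].mean())
```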
