Benchmarking-of-machine-learning-methods-that-aid-clinical-tools-based-on-data-from-human-immune-sys

A benchmark is a standard way of evaluating the effectiveness of an activity relative to comparable activities. In machine learning (ML), benchmarking refers to the assessment and comparison of ML methods with regard to their ability to learn patterns. Benchmarking reveals what new methods are capable of and helps in choosing the appropriate ML method for a given problem. Because numerous ML models are available, identifying the most suitable model for any given dataset is difficult; systematic benchmarking of the candidate models addresses this issue.

The recent pandemic has shown the world how important it is to be prepared with vaccines, drugs, and diagnostics for future diseases that can compromise the human immune system. ML methods have been suggested as promising ways to expedite vaccine design, drug discovery, and immune-based diagnostics, and numerous methods that can learn the patterns associated with various immune conditions have been proposed recently. However, a systematic benchmarking of these methods is needed before robust methods can be adopted in clinical settings and other settings vital to human health.

Many aspects of ML methods and problem formulations impact their performance and generalizability, such as:

(a) the assumptions of the data-generating process,
(b) non-linear patterns that exist in the datasets,
(c) sparsity of signals,
(d) distributional shifts and the need for domain adaptation,
(e) imbalanced datasets, and
(f) the need for careful choices of performance-evaluation and optimization metrics.

We argue that a systematic benchmarking of ML methods in any domain should address many of the points mentioned above, and we provide empirical evidence for this through simulations. We propose to benchmark ML methods that can aid clinical tools based on data from the human immune system.
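To make these aspects concrete, here is a minimal sketch of what such a benchmark could look like, assuming Python with scikit-learn. The simulated dataset (sparse signal, class imbalance) and the two candidate models are illustrative stand-ins, not the methods or immune-system data used in this work.

```python
# Minimal benchmarking sketch (assumes scikit-learn as the toolkit).
# Simulates a sparse-signal, class-imbalanced dataset -- two of the
# aspects listed above -- and compares two ML methods under metrics
# suited to imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Simulated data: 1000 samples, 500 features of which only 10 are
# informative (sparse signal), with a roughly 9:1 class imbalance.
X, y = make_classification(
    n_samples=1000,
    n_features=500,
    n_informative=10,
    weights=[0.9, 0.1],
    random_state=0,
)

# Illustrative candidate methods; any set of models could be plugged in.
models = {
    "logistic_regression_l1": LogisticRegression(
        penalty="l1", solver="liblinear"
    ),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Balanced accuracy and ROC AUC are less misleading than plain accuracy
# when classes are imbalanced.
for name, model in models.items():
    scores = cross_validate(
        model, X, y, cv=5, scoring=["balanced_accuracy", "roc_auc"]
    )
    print(
        f"{name}: balanced_acc={scores['test_balanced_accuracy'].mean():.3f}, "
        f"roc_auc={scores['test_roc_auc'].mean():.3f}"
    )
```

A full benchmark along the lines proposed here would additionally vary the data-generating assumptions, signal sparsity, and distributional shift across simulation runs, rather than using a single fixed dataset.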

This work is part of the curriculum of Østfold University College.
