
Rapid Deep Learning-Assisted Predictive Diagnostics for Point-of-Care Testing

Python 3.9 and Ubuntu 16.04 are required.

CUDA>=10.2 and cuDNN>=8.0.2 are required.

Anaconda environment is recommended.

✔ If you have any issues, please open an issue or submit a pull request.

I. Environment Setting

1. Install git

$ sudo apt-get install git

$ git config --global user.name <user_name>
$ git config --global user.email <user_email>

2. Clone this repository to your local path

$ cd <your_path>
$ git clone https://github.com/Artinto/TIMESAVER_Transforming_Point-of-care_Diagnostics

3. Create a virtual environment with conda (optional)

- Create your virtual environment

$ conda create -n <venv_name> python=3.9

- Activate your virtual environment

$ conda activate <venv_name>

→ The terminal prompt will change to: (venv_name) $

- Install the required packages

(venv_name) $ pip install -r requirements.txt
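
After installing the requirements, a quick check such as the one below confirms that the GPU stack satisfies the CUDA>=10.2 / cuDNN>=8.0.2 requirement. This is a minimal sketch assuming requirements.txt installs PyTorch (the repository ships .pt model weights); the file name check_env.py is only illustrative.

# check_env.py (illustrative): verify the CUDA/cuDNN setup seen by PyTorch
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available :", torch.cuda.is_available())       # must be True to use use_cuda=True
print("CUDA version   :", torch.version.cuda)               # expected >= 10.2
print("cuDNN version  :", torch.backends.cudnn.version())   # expected >= 8002, i.e. 8.0.2
if torch.cuda.is_available():
    print("GPU count  :", torch.cuda.device_count())        # two or more allows multi_gpu=True
    print("GPU name   :", torch.cuda.get_device_name(0))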

II. Download Dataset

Dataset

Download the dataset (linked above) and place the Standard_sample directory under ./dataset/ so that it matches the data_path ('./dataset/Standard_sample') used in the train and test settings below.


III. File Structure

TIMESAVER_Transforming_Point-of-care_Diagnostics
├── README.md
├── requirements.txt
├── dataset
│   ├── __init__.py
│   └── dataset.py
├── models
│   ├── __init__.py
│   └── models.py
├── utils
│   ├── __init__.py
│   ├── log_util.py
│   ├── preprocess.py
│   ├── split_data.py
│   └── util.py
├── config.py
├── main.py
├── train.py
├── test.py
└── log
    └── train
        └── init_model
            ├── log.txt
            └── model_save
                ├── best_avg_model
                │   ├── best_density_model.pt
                │   └── best_target_model.pt
                └── best_avg_model.txt
Standard_sample
├── train
│   ├── 0
│   │   ├── sample_001
│   │   │   ├──  10.png
│   │   │   ├──  20.png
│   │   │   ├──  ...
│   │   │   └── 900.png
│   │   ├── sample_002
│   │   ├── ...
│   │   └── sample_214
│   ├── 200
│   ├── ...
│   └── 4096000
└── eval
    ├── 0
    ├── ...
    └── 4096000
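
The actual data pipeline is implemented in dataset/dataset.py and utils/preprocess.py; the snippet below is only a rough sketch of how the Standard_sample layout above can be traversed (one folder per label, one subfolder per sample, one PNG per 10-second frame). The function name list_samples and the returned tuple are illustrative, not the repository's API.

# walk_layout.py (illustrative): traverse the Standard_sample tree shown above
from pathlib import Path

def list_samples(root="./dataset/Standard_sample", split="train"):
    """Yield (label, sample_dir, frame_paths) for every sample folder."""
    for label_dir in sorted(Path(root, split).iterdir(), key=lambda p: int(p.name)):
        # label folders: 0, 200, ..., 4096000
        for sample_dir in sorted(label_dir.iterdir()):
            # sample folders: sample_001 ... sample_214
            frames = sorted(sample_dir.glob("*.png"), key=lambda p: int(p.stem))
            # frame images: 10.png, 20.png, ..., 900.png
            yield int(label_dir.name), sample_dir, frames

label, sample_dir, frames = next(list_samples())
print(label, sample_dir.name, len(frames), frames[0].name, frames[-1].name)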


IV. Train Model

Parameter Settings

# train.py

args = setting_params(
    mode='train',
    description='latent:1024+pretrain-r50+lstm',      
    data_path='./dataset/Standard_sample',
    label_info_path='dataset/label_info.csv',    
    use_cuda=True,                                  # GPU usage
    multi_gpu=False,                                # If you have two or more GPUs, it is recommended to set it to True
    num_epochs=500,
    train_batch_size=16,                            # Adjust according to GPU memory size
    eval_batch_size=2,                              # Adjust according to GPU memory size
    save_model=True,                                # True to save the best performing model (only available in train mode)
    use_frame=(0, 12, 1)                            # (start, end, step)=(0, 12, 1)=[0s, 10s, ... , 100s, 110s]
)
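
The use_frame tuple is presumably expanded like range(start, end, step); with (0, 12, 1) that selects 12 frame indices, each 10 seconds apart, matching the comment above. A tiny check of that mapping (illustrative only; the repository's loader resolves indices to frame files internally):

# use_frame = (start, end, step) expands to frame indices (illustrative)
use_frame = (0, 12, 1)
indices = list(range(*use_frame))      # [0, 1, ..., 11]
times_s = [10 * i for i in indices]    # [0, 10, ..., 110] seconds, as in the comment
print(indices, times_s)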

Run

$ cd <your_path>/TIMESAVER_Transforming_Point-of-care_Diagnostics
$ python3 train.py

Check the training progress with TensorBoard

Run TensorBoard

$ tensorboard --logdir="./log"

Connection

Open http://localhost:6006 (TensorBoard's default address) in your browser.


V. Test Model

Parameter Settings

# test.py

args = setting_params(
    mode='test',
    description='latent:1024+pretrain-r50+lstm',      
    data_path='./dataset/Standard_sample',
    label_info_path='dataset/label_info.csv', 
    use_cuda=True, 
    multi_gpu=False,
    eval_batch_size=2,                              # Adjust according to GPU memory size
    load_saved_model=True,                          # Load a saved model
    path_saved_model='./model_save/best_avg_model', # Model weight path to load
    save_image=True,                                # Save the resulting image
    save_roc_curve=True,                            # Save the ROC curve
    use_frame=(0, 12, 1)                            # Use the same frame as training
)
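
With load_saved_model enabled, path_saved_model should point to a directory such as log/train/init_model/model_save/best_avg_model from the file structure above, which contains best_density_model.pt and best_target_model.pt. If you want to peek at one of those files before running the test, the sketch below works under the assumption that the checkpoint is loadable with torch.load; whether it stores a full pickled model or a state_dict depends on how save_model serialized it.

# inspect_ckpt.py (illustrative): peek at a saved .pt file before testing
import torch

ckpt_path = "./log/train/init_model/model_save/best_avg_model/best_density_model.pt"
ckpt = torch.load(ckpt_path, map_location="cpu")     # run from the repository root
if isinstance(ckpt, dict):
    print("state_dict-like keys:", list(ckpt.keys())[:10])
else:
    print("loaded object type:", type(ckpt))          # a pickled full model needs its class importable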

Run

$ python3 test.py

About

Transforming Point-of-Care Diagnostics: 1-Minute Assays Enabled by Time-Series Deep Learning
