- Harsh Rao Dhanyamraju @HarshRaoD
- Ang Boon Leng @jimmysqqr
- Kshitij Parashar @xitij27
- Please download the model checkpoints here
- Create a new virtual environment
```shell
pip install -r requirements.txt
```
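The "new virtual environment" step above can be done with Python's standard `venv` module (a minimal sketch for a POSIX shell; the name `.venv` is an arbitrary choice, not from this repo):

```shell
# Create and activate a virtual environment (".venv" is an arbitrary name)
python3 -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
```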
- Make sure you have `Trained_Colourization_Models.py` and `Model_Testing.ipynb` in the same directory
- You don't need to download the COCO dataset for inference; you can use the images in the `Sample_Images` directory
- Start a Jupyter server and open `Model_Testing.ipynb`
- Create a Custom Dataset:
```python
import Trained_Colourization_Models as tcm

test_dataset = tcm.CustomDataset(<Your-Path-here>, 'test')
```
- Load the testing image

```python
ti2 = tcm.Testing_Image(test_dataset, filename=<Your-file-name-here>)
# the file should live in the directory specified in test_dataset
```
- Load a model runner and generate the output by passing the `Testing_Image` object

```python
model_runner = tcm.Default_Model_Runner()
output_img = model_runner.get_image_output(ti2)
```
- Visualise the output

```python
import matplotlib.pyplot as plt

plt.imshow(output_img)                   # model output
plt.imshow(ti2.get_gray(), cmap='gray')  # input (grayscale) image
plt.imshow(ti2.get_rgb())                # ground-truth image
```
- Create a new virtual environment
```shell
pip install -r requirements.txt
```
- Navigate to the directory with the training script

```shell
cd Training_Scripts/<experiment_dir>
```
- Create a new directory `Models` to store the model checkpoints created during training
- Change the necessary configurations in the `Configuration` class and the data paths in the code:
  - Set `load_model_to_train = False`
  - Set the data path in `CustomDataset`
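The flags named above suggest a `Configuration` class along these lines (a hypothetical sketch; the real class lives in the training script and may hold more fields):

```python
class Configuration:
    # Hypothetical sketch of the train-from-scratch settings described above
    load_model_to_train = False  # start fresh rather than from a checkpoint
    load_model_file_name = None  # unused when training from scratch
    starting_epoch = 0           # begin at the first epoch
    model_save_dir = "Models"    # checkpoints are written here (assumed name)
```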
- Run the file to begin training

```shell
python training_script.py
```
- Navigate to the directory with the training script

```shell
cd Training_Scripts/<experiment_dir>
```
- Change the necessary configurations in the `Configuration` class and the data paths in the code:
  - Set `load_model_file_name` to the path of the checkpoint file
  - Set `load_model_to_train = True`
  - Set `starting_epoch = (current checkpoint epoch + 1)`
  - Set the data path in `CustomDataset`
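The resume settings above can be illustrated with a small hypothetical snippet (the checkpoint filename and epoch number are placeholders, not real files from this repo):

```python
# Hypothetical values illustrating the resume-training configuration
checkpoint_epoch = 10                               # epoch stored in the checkpoint
load_model_file_name = "Models/model_epoch_10.pth"  # placeholder checkpoint path
load_model_to_train = True                          # resume instead of training fresh
starting_epoch = checkpoint_epoch + 1               # continue from the next epoch
```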
- Run the file to resume training

```shell
python training_script.py
```
- You can run `DataAnalysis.ipynb` to download the COCO dataset and reduce the size of all its images (this produces the training data)
- You'll need a valid Kaggle API key and at least 55 GB of free space
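The "reduce the size" step is not specified further; one plausible interpretation is a helper that scales each image's dimensions so the longer side fits under a cap while preserving the aspect ratio (the 256-pixel cap is an assumption, not taken from the notebook):

```python
def reduced_size(width, height, max_side=256):
    """Return (w, h) scaled down so the longer side is at most max_side,
    preserving aspect ratio; images already small enough are unchanged.
    The 256-pixel default is an assumed value, not from DataAnalysis.ipynb."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```

With Pillow this would pair with something like `img.resize(reduced_size(*img.size))` while walking the downloaded image directory.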