This repository has been archived by the owner on Oct 19, 2024. It is now read-only.

How to run it? Instruction? #3

Closed
MrCheater opened this issue Nov 3, 2018 · 1 comment


Comments

@MrCheater

No description provided.

@jantic (Owner) commented Nov 3, 2018

Down towards the bottom of the readme there's this section:

For those wanting to start transforming their own images right away: to do that without training the model yourself (understandable), you'll need me to upload pre-trained weights first. I'm working on that now. Once those are available, you'll be able to refer to them in the visualization notebooks; I'd use ColorizationVisualization.ipynb. Basically, you'd replace

colorizer_path = IMAGENET.parent/('bwc_rc_gen_192.h5')

with the weight file I upload for the generator (colorizer).
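As a minimal sketch of that substitution (the dataset root and the weight filename below are placeholders, since the actual pre-trained weight file hasn't been uploaded yet):

```python
from pathlib import Path

# Placeholder paths -- point IMAGENET at your dataset root, and swap the
# filename for the generator weights once they are published.
IMAGENET = Path("data/imagenet")                               # hypothetical
colorizer_path = IMAGENET.parent / "pretrained_generator.h5"   # placeholder name
```

The notebook would then load the generator from `colorizer_path` as before; only the filename changes.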

Then you'd just drop whatever images you want to run this against into the /test_images/ folder, and you can visualize the results inside the notebook with lines like this:

vis.plot_transformed_image("test_images/derp.jpg", netG, md.val_ds, tfms=x_tfms, sz=500)

I'd keep the size around 500px, give or take, assuming you're running this on a GPU with plenty of memory (an 11 GB GeForce 1080 Ti, for example). If you have less than that, you'll have to go smaller or try running it on the CPU. I actually tried the latter, but for some reason it was really absurdly slow, and I didn't take the time to investigate why beyond finding that the PyTorch people were recommending building from source to get a big performance boost. Yeah... I didn't want to bother at that point.
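One way to turn that advice into a starting point for smaller cards: scale `sz` down with GPU memory, anchored at ~500px for 11 GB. The linear scaling here is my own rough assumption, not something from the repo, so treat it as a first guess and adjust by trial:

```python
# Rough rule of thumb (an assumption, not from the repo): scale the render
# size linearly with GPU memory, using ~500px at 11 GB as the reference
# point, with a small floor so tiny cards still get a usable size.
def choose_render_size(gpu_mem_gb, base_sz=500, base_mem_gb=11.0):
    if gpu_mem_gb >= base_mem_gb:
        return base_sz
    return max(64, int(base_sz * gpu_mem_gb / base_mem_gb))
```

For example, a 5.5 GB card would start around half the reference size and go down from there if you still hit out-of-memory errors.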
