
No inpainted results generated on JPG #90

Closed
varungupta31 opened this issue Feb 14, 2022 · 8 comments

varungupta31 commented Feb 14, 2022

I have followed all the instructions to set up lama on my system. I went with the conda installation and the big-lama.zip model.

I'm running on a custom .jpg image, and have modified configs/prediction/default.yaml accordingly, changing .png to .jpg.

I created a folder named ts_images containing img.jpg and img_mask.jpg (also tried with img_mask001.jpg), and ran the command:

python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/ts_images outdir=$(pwd)/output

I get a "Detectron v2 not installed" message; then, after some processing, I get:

[2022-02-14 16:18:45,353][saicinpainting.training.trainers.base][INFO] - BaseInpaintingTrainingModule init done
[2022-02-14 16:18:49,151][saicinpainting.training.data.datasets][INFO] - Make val dataloader default from /home2/varungupta/lama/ts_images/
0it [00:00, ?it/s]

An outputs folder (and NO output folder) gets created in lama/, containing the file predict.log, the first 5 lines of which are:

[2022-02-14 16:18:44,873][saicinpainting.utils][WARNING] - Setting signal 10 handler <function print_traceback_handler at 0x14fe587bcf28>
[2022-02-14 16:18:44,905][root][INFO] - Make training model default
[2022-02-14 16:18:44,905][saicinpainting.training.trainers.base][INFO] - BaseInpaintingTrainingModule init called
[2022-02-14 16:18:44,905][root][INFO] - Make generator ffc_resnet
[2022-02-14 16:18:45,352][saicinpainting.training.trainers.base][INFO] - Generator

and ending with:

[2022-02-14 16:18:45,353][saicinpainting.training.trainers.base][INFO] - BaseInpaintingTrainingModule init done
[2022-02-14 16:18:49,151][saicinpainting.training.data.datasets][INFO] - Make val dataloader default from /home2/varungupta/lama/ts_images/

I created my mask image using OpenCV, and it looks like this:

[attached image: img_mask]

No inpainting results are being generated. What am I missing?
Kindly help me out,
Thanks :)

@windj007

Edit: I converted my images from .jpg to .png, and now the code works! As mentioned, I updated the .yaml file:

indir: no  # to be overridden in CLI
outdir: no  # to be overridden in CLI

model:
  path: no  # to be overridden in CLI
  checkpoint: best.ckpt

dataset:
  kind: default
  img_suffix: .jpg
  pad_out_to_modulo: 8

device: cuda
out_key: inpainted

So, has anyone else tested the model with .jpg images and got it working?

@windj007
Collaborator

Hi! Sorry for the late reply!

It is by design that the input dataloader reads images with only a single extension (either .jpg or .png). There is no particularly strong reason for that; it is simple and has been enough for evaluation purposes.

You can add dataset.img_suffix=.jpg to the prediction command here so that the dataloader reads JPGs.

@windj007
Collaborator

windj007 commented Feb 16, 2022

Please note that most of our scripts use the Hydra configuration system, so there is no need to change YAML files; for ad-hoc modifications you can just override any parameter via the command line.

@varungupta31
Author

Thanks for the reply @windj007.

Changing dataset.img_suffix=.jpg at the given location did not resolve the issue. I still get the message:

- Make val dataloader default from /home2/varungupta/lama/ts_images/
0it [00:00, ?it/s]

Please note that most of our scripts use the Hydra configuration system, so there is no need to change YAML files; for ad-hoc modifications you can just override any parameter via the command line.

I was following the instructions in the README, under point 2 ("Prepare images and masks"):
Specify image_suffix, e.g. .png or .jpg or _input.jpg, in configs/prediction/default.yaml.

and thus updated the YAML to replace .png with .jpg. Am I still missing something?

@windj007
Collaborator

windj007 commented Apr 8, 2022

Hi! Is your question resolved? If not, could you please provide more details on which parameters you use and what the data looks like?

@varungupta31
Author

varungupta31 commented Apr 8, 2022

No, the model does work as intended with .png, but not with .jpg files. I tried your suggested solution but that didn't work.
Also, you stated that

Please note that most of our scripts use the Hydra configuration system, so there is no need to change YAML files; for ad-hoc modifications you can just override any parameter via the command line.

So, is the README incorrect? (Point 2 of the README, quoted above.) If so, kindly update it.

could you please provide more details on which parameters you use and what the data looks like?

I didn't change many parameters, only the YAML file change mentioned in the README. I also tried your suggested dataset.img_suffix=.jpg override, but the errors (mentioned in the issue) persisted.

Sample Image (in png format)
[attached image: 20210812155640_0060__53__pedestrian-crossing__919]

I worked around this issue by converting to .png format, so it doesn't concern me as of now. Feel free to close this issue if you wish.

@jcrbsa
jcrbsa commented Apr 8, 2022

Hi, everyone !

I've tested this too, and the same situation described by varungupta31 happens. For me it doesn't work either by editing the YAML file or by passing the argument:

python bin\predict.py model.path=.\big-lama indir=.\jpg_images outdir=.\output dataset.img_suffix=.jpg

@windj007
Collaborator

Sorry for the long discussion and confusion. I forgot to highlight this from the beginning: dataset.img_suffix only changes how the image filename is obtained from the mask filename.

The algorithm is as follows:

  1. Obtain a list of mask files by the pattern "*mask*.png"; this pattern cannot be changed from a config.
  2. For each mask file, obtain the corresponding image file by removing everything after _mask and appending the value of dataset.img_suffix.
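
The two steps above can be sketched as follows. This is a hypothetical helper, not the actual lama code; only the "*mask*.png" pattern and the _mask-stripping rule are taken from the description above, and the names image_name_for_mask / find_pairs are made up for illustration:

```python
import glob
import os


def image_name_for_mask(mask_name: str, img_suffix: str = ".jpg") -> str:
    """Derive the image filename: strip everything from '_mask' onward
    and append the value of dataset.img_suffix."""
    return mask_name[: mask_name.index("_mask")] + img_suffix


def find_pairs(indir: str, img_suffix: str = ".jpg"):
    """Return (image_path, mask_path) pairs for every *mask*.png in indir.
    The .png glob is fixed, mirroring the hard-coded pattern described above."""
    pairs = []
    for mask_path in sorted(glob.glob(os.path.join(indir, "*mask*.png"))):
        mask_name = os.path.basename(mask_path)
        img_name = image_name_for_mask(mask_name, img_suffix)
        pairs.append((os.path.join(indir, img_name), mask_path))
    return pairs
```

Under this convention, a mask saved as img_mask001.jpg is never discovered (the glob requires .png), so the dataloader stays empty: 0it [00:00, ?it/s].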

Thus, if your masks are in JPG, changing configs will not help: the script simply will not find any masks in the input folder.

Please note that storing masks in JPG would be suboptimal anyway: JPG is a lossy format, so the mask after loading is not binary (only 0s and 1s) but has a kind of gradient at the mask boundary. That, in turn, leads to severe artifacts.

Even though the masks must be stored in PNG, the images can be stored in any other format that PIL or cv2 can load; just set dataset.img_suffix accordingly. However, we did not test anything except PNG and JPG.

Takeaway: either store your masks in PNG or modify the code so it binarizes the masks after loading.
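
A minimal sketch of such a binarization step, assuming the mask is loaded as an 8-bit grayscale numpy array (e.g. via cv2.imread(path, cv2.IMREAD_GRAYSCALE)); the threshold of 127 is an arbitrary midpoint choice, not a value taken from lama:

```python
import numpy as np


def binarize_mask(mask: np.ndarray, threshold: int = 127) -> np.ndarray:
    """Collapse JPEG boundary gradients into a hard 0/255 mask.

    Pixels above `threshold` become 255 (inpaint region), the rest become 0
    (keep region), removing the soft edges JPEG compression introduces.
    """
    return np.where(mask > threshold, 255, 0).astype(np.uint8)
```

This would run right after loading the mask and before it is fed to the model, so the model only ever sees a hard boundary.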

@windj007
Collaborator

Note about mask binarization: predict.py actually does binarize; I had forgotten about this line. Still, smooth masks are meaningless, and it is better to control binarization manually.

namngh added a commit to namngh/lama that referenced this issue Feb 22, 2023

4 participants