
Training Code Not Found #9

Open
ZeeshanNadir opened this issue Jun 26, 2019 · 20 comments

Comments

@ZeeshanNadir

Hi,
I can't find the code that performs backpropagation. There are multiple losses in the loss.py file, and I'd like to understand how to use them. Could you please provide the training/backpropagation code, or explain how to use loss.py to run backpropagation?

Thanks

@zhLawliet

+1

@qq286838947

+1

@Chokurei

Would you kindly provide the training code?

@bai-shang


I used the train.py script from commit 1ebbdf6, but the model does not converge at all. There are roughly 10-pixel misalignments between the input and ground-truth images (size 512x512), and the CoBi loss fails.

So I think it is impossible to reproduce this paper without the original training code.

@ceciliavision
Owner


If you follow the rough alignment scripts and apply the computed matrices correctly during training, you should be able to get results similar to those I showed in the paper. It's not trivial, and I haven't finished cleaning up all the util functions.

People have emailed me about small artifacts and details of the training parameters, which means they were able to re-implement the paper and get results close to what I've shown. If you just use the old training code without changing anything, I'm not surprised it doesn't converge.
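The key point above is that the rough-alignment scripts compute a per-pair transform matrix that must be applied to the images during training. A minimal sketch, assuming a 2x3 affine matrix (the matrix values and the nearest-neighbor warp here are illustrative, not the repo's exact format or resampling):

```python
import numpy as np

def warp_affine_nn(img, M):
    """Warp a 2-D image by the 2x3 affine matrix M (nearest-neighbor)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # invert the transform so each output pixel is sampled from the source
    A = np.vstack([M, [0.0, 0.0, 1.0]])
    Ainv = np.linalg.inv(A)
    src = Ainv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    return img[sy, sx].reshape(img.shape)

# example: a 2-pixel horizontal shift applied to a toy 4x4 image
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0]])
img = np.arange(16.0).reshape(4, 4)
print(warp_affine_nn(img, M))  # first row becomes [0, 0, 0, 1]
```

In the real pipeline the warp would be applied to the ground-truth crop so it lines up with the network input before computing the loss.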

@llp1996

llp1996 commented Aug 26, 2019


I have reproduced the paper with my own training code. It works better to set the parameter "w_spatial" to 0.5 or bigger, and I pretrain the model with an L1 loss. Although the CoBi loss doesn't decrease much, the result is amazingly clear.

Still hoping for the official training code.
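The two-stage schedule described above (L1 pretraining, then switch to the final loss) can be sketched as follows. The model and losses here are toy stand-ins, a linear model and a smooth-L1 "Charbonnier" term in place of CoBi, so the schedule itself is runnable; the real setup uses the repo's network and CoBi losses:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ true_w          # toy regression target

w = np.zeros(4)

def l1_grad(w):
    r = x @ w - y
    return x.T @ np.sign(r) / len(y)

def charbonnier_grad(w, eps=1e-2):
    # smooth-L1 stand-in for the final (CoBi) loss
    r = x @ w - y
    return x.T @ (r / np.sqrt(r * r + eps)) / len(y)

for _ in range(400):    # stage 1: L1 pretraining
    w -= 0.05 * l1_grad(w)
for _ in range(400):    # stage 2: switch losses and fine-tune
    w -= 0.05 * charbonnier_grad(w)

print(np.round(w, 2))   # close to true_w
```

The point of the schedule is that the first stage gets the model into a sensible basin before the harder-to-optimize loss takes over, which matches the observation that the CoBi loss itself barely decreases.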

@Chokurei


So what is your "w_spatial"? In the author's training code, when adopting 'contextual', w_cont = 1, w_patch = 1.5, and w_spatial = 0.5. I tried the training code with the weights the author suggested, and the result is bad. Would you kindly explain your weight setting?

@llp1996

llp1996 commented Aug 26, 2019


loss = 1.0 * cobi_vgg + 1.0 * cobi_rgb, with the w_spatial parameter of both CoBi losses set to 0.5
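A hedged sketch of this weighting. The functions `cobi_vgg` and `cobi_rgb` are hypothetical placeholders (simple pixel/feature-style distances) standing in for the repo's CoBi losses on VGG features and RGB patches, so only the combination and the w_spatial plumbing are shown:

```python
import numpy as np

def cobi_rgb(pred, gt, w_spatial=0.5):
    # placeholder: a pixel-space distance scaled by the spatial weight
    return float(np.mean(np.abs(pred - gt)) * (1.0 + w_spatial))

def cobi_vgg(pred, gt, w_spatial=0.5):
    # placeholder: a feature-space distance scaled by the spatial weight
    return float(np.mean((pred - gt) ** 2) * (1.0 + w_spatial))

def total_loss(pred, gt):
    # the weighting reported in the comment: 1.0 each, w_spatial = 0.5
    return 1.0 * cobi_vgg(pred, gt, 0.5) + 1.0 * cobi_rgb(pred, gt, 0.5)

pred = np.zeros((8, 8, 3))
gt = np.ones((8, 8, 3))
print(total_loss(pred, gt))  # 3.0 for this toy input
```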

@Chokurei


Thanks a lot, I will try that. However, you also mentioned that "w_spatial" should be set bigger than 0.5; here you set w_spatial = 0.5, and that is fine, right?

@llp1996

llp1996 commented Aug 26, 2019


I've changed the description; I have used both 0.5 and 0.8. But I align the images using "main_align.sh", "main_crop.sh" and "main_wb.sh".

@bai-shang

bai-shang commented Oct 9, 2019


Thanks for your help. We trained the zoom-learn-zoom model following your parameters and got an extremely good result.

@yanmenglu

@bai-shang did you train the model on RAW data or RGB data?

@qianzhang2018

@bai-shang can you share your train.py? Thank you.

@IanYeung


May I ask how you use the tform.txt and wb.txt during training?

@WenjiaWang0312


Dear llp: you said you have reproduced the code, so which training dataset did you use, SR_RAW or your own data? If you used SR_RAW, how did you crop the images at the different scales, since the author has not released the 'aligned' images in the SR_RAW training dataset?

@llp1996

llp1996 commented Nov 20, 2019


I use SR_RAW, and run ECC alignment first.
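ECC alignment in practice is OpenCV's cv2.findTransformECC, which maximizes an enhanced correlation coefficient over a parametric warp. As a dependency-free stand-in illustrating the same idea, here is a brute-force integer-translation search that maximizes normalized correlation (the ECC objective restricted to pixel shifts):

```python
import numpy as np

def align_translation(ref, img, max_shift=4):
    """Find the (dy, dx) shift of `img` that best matches `ref`."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            a = ref - ref.mean()
            b = shifted - shifted.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)  # shifted copy of ref
print(align_translation(ref, img))  # recovers (3, -2)
```

The real ECC routine additionally handles sub-pixel and affine/homography warps, which matters here since the SR_RAW misalignments are not pure integer translations.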

@WenjiaWang0312


Thank you, I have found the released script as well.

@CV-JunchengLi

+1

@wioponsen


Would you kindly provide your training code? Thanks!

@Chokurei

Chokurei commented Aug 6, 2020


@bai-shang, is it possible to share your code? I did the same but found some obvious artifacts in some regions.
Thank you in advance.
