
Having some unexpected results with the test code #15

Closed
LuckyShrek opened this issue Oct 31, 2022 · 2 comments

@LuckyShrek
Greetings,
I am trying to load my checkpoint in the test code to get the results and submit them to the benchmark, but I have run into a few problems. I would be grateful for your help.

  1. I found that the test code saves to two folders called "submission" and "visualizations". Following the code, I noticed that a flow file is not saved in the "submission" folder for every sample of the input sequences. Does this mean that the benchmark evaluation does not use all of the GT flow of the test sequences?

  2. I was trying to run the test code on my GPU but ran into an error. I debugged it and found that the GPU path executes the following line from test.py in the _test function:
    batch = self.move_batch_to_cuda(batch)
    After this line, the variables in the batch change from int to float. As a result, for the first sample, the function "visualize_events" in visualization.py receives the following values when using the CPU:

The t_start_us is: 49740000566
The t_end_us is: 49740100566

whereas when I use the GPU, those values become:

The t_start_us is: 49740001280.0
The t_end_us is: 49740101280.0

This leads to an error at the last sample of the first sequence in the function "visualize_flow_colours" in visualization.py (see the sketch below).

I am using the latest version of the test code, but I added the following line in main.py because I was getting an error otherwise:

os.environ["GIT_PYTHON_REFRESH"] = "quiet"

Do you have any idea what the problem might be? Is it necessary to use the CPU?
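
As far as I can tell, the shifted t_start matches what a float32 cast would do to timestamps of this size. Below is a small sketch of that rounding, assuming the batch gets cast to float32 somewhere on the GPU path; move_batch_to_cuda_keep_dtype is just an illustrative name, not a function from this repository:

    import torch

    # float32 has ~7 significant decimal digits, so values around 5e10 are rounded
    # to the nearest multiple of 4096 when cast to float32.
    t_start_us = torch.tensor([49740000566], dtype=torch.int64)
    print(t_start_us.float().long())  # tensor([49740001280]) -- the shifted value I see on the GPU

    # Hypothetical dtype-preserving alternative: move every tensor in the batch to
    # the GPU without changing its dtype, so integer timestamps stay exact.
    def move_batch_to_cuda_keep_dtype(batch: dict) -> dict:
        return {k: v.cuda() if torch.is_tensor(v) else v for k, v in batch.items()}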

  1. Is there any benefit to using a batch size greater than 1 when running the test code?

  2. I am getting bad results when I load my checkpoint; the flow images in the submission folder look like the one below:
    [attached flow image: 000020]

I am not sure what the problem might be, given that the training and validation EPE curves look very good. In the training code I save the whole model in my checkpoint (with extension .pth) and then load it back in the test code (see the sketch below). Do you have any advice on this?
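
For reference, the alternative I am considering is saving only the state_dict instead of pickling the whole model. A minimal sketch with a stand-in model class (TinyFlowNet is just a placeholder, not the repository's model):

    import torch
    import torch.nn as nn

    # Stand-in model; the real model class from this repository would be used instead.
    class TinyFlowNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(2, 2, kernel_size=3, padding=1)

        def forward(self, x):
            return self.conv(x)

    # Training side: save only the parameters rather than the whole pickled module.
    model = TinyFlowNet()
    torch.save(model.state_dict(), 'checkpoint.pth')

    # Test side: rebuild the architecture, load the weights, and switch to eval mode
    # so that dropout/batch-norm layers behave correctly at inference time.
    model = TinyFlowNet()
    model.load_state_dict(torch.load('checkpoint.pth', map_location='cpu'))
    model.eval()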

Thanks in advance for your help

@magehrig
Contributor

Hi @LuckyShrek

Have you tried following the instructions to verify that you can replicate the results with the checkpoint that we provide?
That would help me understand whether the issue is checkpoint-related.

@magehrig
Contributor

magehrig commented Apr 4, 2023

closing due to inactivity

magehrig closed this as completed Apr 4, 2023