
Why do I get 'mmco: unref short failure' and this ValueError: 'Empty range for randrange()'? #4

Closed
kynehc opened this issue Apr 22, 2019 · 6 comments


@kynehc

kynehc commented Apr 22, 2019

I followed the procedure to download the videos. When I then tried to train the featurizer, this happened:
[screenshot of the error]
I tried to figure out what caused the error:
[screenshot of the preprocessing output]
This means Video1 and Video3 can't be preprocessed successfully, and only Video2 is used to train the featurizer.

I then found that the empty-range error comes from this code in the TDC model, as described in the paper:
[screenshot of the code]
After I changed possible_frames_end from 200 to 80, the featurizer trains successfully, but still only on Video2:
[screenshot of the training output]
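To illustrate the failure (a minimal sketch; the variable names are assumed from this discussion, not copied from the repo): when a video has fewer frames than possible_frames_end, the upper bound passed to random.randint goes negative, so the underlying randrange has an empty range.

```python
import random

num_frames = 150           # e.g. videos[video_index].shape[0] for a short video
possible_frames_end = 200  # lowering this to 80 avoided the error for my videos

# Upper bound is negative, so randrange has nothing to sample from:
random.randint(0, num_frames - possible_frames_end)
# ValueError: empty range for randrange() (0, -49, -49)
```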

So what should I do to solve this? Thanks!

@MaxSobolMark
Owner

The error seems to be caused by the length of the videos. Every video should have far more than 200 frames, so this shouldn't be a problem if they are downloaded correctly.

The error occurs on line 218, which calls random.randint with a negative parameter, and that can only happen if videos[video_index].shape[0] < 200.

Try to download the videos again, and also check videos[video_index].shape[0] to make sure it's more than 200.
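For example (a quick sketch; `videos` here stands for whatever list of decoded frame arrays the training script builds, so adapt the names to your setup):

```python
# Hypothetical check, assuming `videos` is the list of per-video frame
# arrays used by the featurizer (each of shape [num_frames, H, W, C]):
for video_index, video in enumerate(videos):
    num_frames = video.shape[0]
    status = "OK" if num_frames >= 200 else "TOO SHORT"
    print(f"video {video_index}: {num_frames} frames ({status})")
```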

@kynehc
Author

kynehc commented Apr 22, 2019

I downloaded the videos with the provided download_videos.py:
python download_videos.py --filename montezuma.txt
I have tried many times and the result is the same: Video1 and Video3 can't be preprocessed because of the "unref short failure" error.
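To check whether the downloaded files themselves are truncated, the frames in each file can be counted directly (a sketch; the glob pattern is an assumption, point it at wherever download_videos.py saves its output):

```python
import glob

import cv2

# Count frames in every downloaded file; each should be well above 200.
for path in sorted(glob.glob("videos/*.mp4")):
    cap = cv2.VideoCapture(path)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    print(path, frame_count)
```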

@kynehc
Author

kynehc commented Apr 23, 2019

Can you share your versions of Python, OpenCV, SciPy, and TensorFlow? I will try to recreate the same development environment.

@MaxSobolMark
Owner

You can use this environment for testing: https://colab.research.google.com/drive/1pKFO8hUpOZv_KZetEXmvxoNfyIoO9uQj

Everything up to the PPO part should be working. For the PPO part I had to run it locally.

@kynehc
Author

kynehc commented May 9, 2019

Thanks, I ran it successfully.
But the performance doesn't seem good. Would you share your saved model from ppo2, so I can run it locally and see good results?

@kynehc kynehc closed this as completed May 9, 2019
@MaxSobolMark
Owner

Unfortunately, I don't have access to my previously trained ppo2 model. However, you could try to make PPO work on the Colab. There seem to be some problems with updated packages, but that might not be too hard to fix. And performance on Google Colab is very good; it shouldn't take too long to train the model.
