
Issue with human_pose_util/dataset/eva/skeleton.py #1

Closed · pavanteja295 opened this issue Nov 12, 2018 · 9 comments

@pavanteja295

Hey Jack,
I have been trying to run human_pose_util/dataset/eva/raw_tree/raw_tree.py, and I'm not able to find the function native_to_s16 that it uses.

To quote the output, this is what I get:

File "raw_tree.py", line 334, in show
    p3_world_16, p3_world_14 = convert(view.p3_world[image_frame])
  File "raw_tree.py", line 330, in convert
    p16 = native_to_s16(native)
NameError: name 'native_to_s16' is not defined

I checked for the function, but it doesn't exist in the file.

Can you please help me out with using your repo?

Thanks!

@jackd (Owner) commented Nov 12, 2018

Hi Pavanteja, I've seen this and promise I'll get back to it; things are pretty hectic at work for the rest of the week though, so sorry for the delay. If you're desperate: it's probably something I accidentally deleted after I'd done the conversion, so it'll likely be in the git history somewhere. Otherwise I'll sort it out in a week or so.

@pavanteja295 (Author)

Hey Jack,
Thanks for the quick reply. Can you at least tell me what the function does overall? If I understand correctly, the HumanEva annotations use different joint names than the ones used in general, and you want to convert the given joints into the general joints? Let me know if this is the case.

@jackd (Owner) commented Nov 13, 2018

Just pushed a fix. I changed interfaces at some point to use SkeletonConverters, but didn't fix the example code scattered around the place. It should have used the parent directory's skeleton.s20_to_s16_converter().convert(native) - native is the native skeleton (i.e. the skeleton provided by the original dataset) with 20 joints, whereas s14/s16 have 14/16 joints.
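
For reference, a minimal sketch of what the fixed convert helper in raw_tree.py might look like under that interface; the import path and the s20_to_s14_converter name are assumptions by analogy, not taken from the repo:

```python
from human_pose_util.dataset.eva import skeleton

def convert(native):
    # native: points on the original dataset's 20-joint skeleton.
    p16 = skeleton.s20_to_s16_converter().convert(native)
    # s20_to_s14_converter is an assumed analogue for the 14-joint skeleton.
    p14 = skeleton.s20_to_s14_converter().convert(native)
    return p16, p14
```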

Disclaimer: you'll probably find a fair few issues like this. Feel free to file them and I'll get to them when I get a chance, but it won't be a high priority for the next week. Good luck.

@pavanteja295 (Author)

Thanks a lot for such a quick fix. One question I have: how do you convert Image_data (the video files) into images that I can keep for future use? Also, can I use hdf5_tree.py to convert the uncompressed files to an HDF5 file?

@jackd (Owner) commented Nov 13, 2018

I know I tried doing that once, but I ended up concluding it was a bad idea - video compression is best, and if you try to save raw data it will explode to an unmanageable size. It might work if you only wanted to do a subset of the data - every 10th frame or something - but the size still ends up being quite unmanageable if you're not smart about it. I haven't revisited it since doing some work with imagenet and learning some things (feel free to check out this script from my imagenet repo that saves externally compressed image data as vlen hdf5 data; don't try to save frames in individual datasets - you'll get this behaviour), but I can guarantee I haven't implemented anything like that in here.
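
As a rough sketch of that vlen approach, assuming h5py and JPEG frames already on disk (the paths and subsampling factor here are illustrative):

```python
import glob

import h5py
import numpy as np

# Store externally compressed (JPEG) frames as variable-length uint8 arrays
# in a single dataset, rather than one dataset per frame.
jpeg_paths = sorted(glob.glob('frames/*.jpg'))[::10]  # e.g. every 10th frame

with h5py.File('frames.h5', 'w') as f:
    ds = f.create_dataset(
        'jpeg_frames', shape=(len(jpeg_paths),), dtype=h5py.vlen_dtype(np.uint8))
    for i, path in enumerate(jpeg_paths):
        with open(path, 'rb') as fp:
            ds[i] = np.frombuffer(fp.read(), dtype=np.uint8)
```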

@pavanteja295 (Author)

Hey, thanks a lot for the information and such interactive issue resolving. The last question I have: I think you haven't downsampled any of the annotations stored in the hdf5, but when I extracted frames from the provided videos using ffmpeg at the 60 frame rate given in the paper, surprisingly the number of frames in the hdf5 file does not match the number of images extracted from the video. Any idea about this?
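
(For concreteness, an extraction along these lines; the input path and options are illustrative, not from the repo:)

```python
import subprocess

# Extract frames at 60 fps, the rate given in the paper.
subprocess.check_call([
    'ffmpeg', '-i', 'S1/Image_Data/Walking_1_(C1).avi',
    '-vf', 'fps=60',
    '-qscale:v', '2',  # high JPEG quality
    'frames/%06d.jpg',
])
```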

@jackd (Owner) commented Nov 13, 2018

I observed the same thing, but the difference was only a few frames if I recall correctly. I can't remember exactly how I reconciled it; I think I just trimmed the last few frames after visually verifying I couldn't really tell the difference between trimming start frames and trimming end frames.
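
In code, that reconciliation might look something like this; the names and tolerance are illustrative:

```python
def reconcile(frames, annotations, max_diff=5):
    """Trim trailing entries so both sequences have the same length."""
    diff = abs(len(frames) - len(annotations))
    if diff > max_diff:
        raise ValueError('frame/annotation count mismatch too large: %d' % diff)
    n = min(len(frames), len(annotations))
    return frames[:n], annotations[:n]
```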

@pavanteja295 (Author)

Yeah, thanks for your suggestion; I was able to create it. One question I have: in meta.py you have the partition of the frames. Is this the training/validation partition? If not, how can I find the train and validation split?

@jackd (Owner) commented Nov 15, 2018

... yep, should have documented that better. 36 hours to a (different) deadline, so I won't address it properly now, but I recall the numbers coming straight from the original EVA paper. From memory, and based on the limited comments I have there, S1/Walking/Trial 1 frames[:590] were validation while frames[590:] were training, trial 2 was entirely for testing, and trial 3 was entirely training (total frame counts: 1180, 980, 3238).
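
Spelled out as a sketch (these indices come from the recollection above, so verify against the original EVA paper before relying on them):

```python
# From-memory split for S1/Walking; slices index into each trial's frames.
S1_WALKING_SPLITS = {
    1: {'val': slice(None, 590), 'train': slice(590, None)},  # 1180 frames total
    2: {'test': slice(None)},                                 # 980 frames, all test
    3: {'train': slice(None)},                                # 3238 frames, all train
}

def split_frames(trial, frames):
    """Partition a trial's frames into its named splits."""
    return {name: frames[s] for name, s in S1_WALKING_SPLITS[trial].items()}
```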

jackd closed this as completed Oct 17, 2019