
Video extract #131

Closed
Hellodan-77 opened this issue Jan 27, 2021 · 9 comments

Comments

@Hellodan-77

If we use our own video dataset, or run OpenPose directly to extract the 2D joint positions into JSON files, can we replace the original JSON files and perform style transfer directly? How many skeleton joints do the JSON files extracted from your video dataset contain?

@HalfSummer11
Collaborator

Thanks for your interest! Our code deals with json files generated using OpenPose with --hand option. Here we convert the raw OpenPose output to a 2D skeleton corresponding to the CMU skeleton we used in training. Note that you may want to set reasonable mean and std poses and replace the default mean/std pose here.
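For reference, the per-frame JSON that OpenPose writes with `--hand` contains a `people` list whose entries hold flat `pose_keypoints_2d` / `hand_left_keypoints_2d` / `hand_right_keypoints_2d` arrays of (x, y, confidence) triples. A minimal sketch of reading one frame (the helper name is mine, not from this repo, and it assumes the default BODY_25 model):

```python
import json

import numpy as np

def load_openpose_frame(path):
    """Parse one OpenPose per-frame JSON (run with --hand) into 2D keypoints.

    Returns body (25, 3) and left/right hand (21, 3) arrays of
    (x, y, confidence), or None if no person was detected in the frame.
    """
    with open(path) as f:
        frame = json.load(f)
    if not frame["people"]:
        return None
    person = frame["people"][0]  # take the first detected person
    body = np.array(person["pose_keypoints_2d"]).reshape(-1, 3)        # BODY_25
    lhand = np.array(person["hand_left_keypoints_2d"]).reshape(-1, 3)  # 21 joints
    rhand = np.array(person["hand_right_keypoints_2d"]).reshape(-1, 3)
    return body, lhand, rhand
```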

@Hellodan-77
Author

Hellodan-77 commented Jan 28, 2021

Thank you very much for your reply! I would like to ask what you mean by mean and std. And how did you obtain the NPZ file used in the folder 'data/readmill_norm/test2d.npz'?

@HalfSummer11
Collaborator

During training, our motion input is normalized ((X-mean(X))/std(X)) before being fed into the network. Here mean(X) and std(X) are computed over the training dataset. Ideally mean(X) would be an "average pose" w/o bias from any style.
The normalization step is also applied to the test inputs. Our test2d.npz is computed over all 2D skeletons extracted from this YouTube video, using the code here.
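The normalization described above is simple to reproduce; a sketch of how such stats could be computed and applied (shapes and function names are illustrative, not the repo's actual code):

```python
import numpy as np

def compute_norm_stats(train_motions):
    """Per-coordinate mean/std over the training set.

    train_motions: (N, J, 2) array -- N frames, J joints, (x, y) each.
    The mean is the "average pose" mentioned above.
    """
    mean = train_motions.mean(axis=0)       # (J, 2)
    std = train_motions.std(axis=0) + 1e-8  # guard against zero variance
    return mean, std

def normalize(motion, mean, std):
    """Apply (X - mean(X)) / std(X) to a motion of shape (N, J, 2)."""
    return (motion - mean) / std
```

The same mean/std computed on training data would then be reused unchanged on test inputs, which is why a sensible mean/std matters for videos from a different distribution.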

@Hellodan-77
Author

Question 1: I want to experiment with my own video, but I am not very familiar with OpenPose. Since you already have a mature and reliable pipeline, could you please share the specific method (or relevant links) for extracting video joint points with OpenPose?
Question 2: The result of your style transfer is a BVH file. Is there any visualization code, like the generated results on your project page? Could you please share it with me?
Thank you very much!

@HalfSummer11
Collaborator

Sure. The way to get json files is to simply run

./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --write_json output_json_folder/

as specified in the OpenPose repo.
For BVH visualization, you can directly use Blender for a quick look. For rendering, please refer to the relevant section here in our repo.
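For context, BVH is a plain-text format: a HIERARCHY section declaring ROOT/JOINT nodes with offsets and channels, followed by a MOTION section of per-frame channel values; Blender imports it via File > Import > Motion Capture (.bvh). A tiny helper (my own, not from the repo) for sanity-checking the skeleton before importing:

```python
def bvh_joint_names(bvh_text):
    """Return joint names in declaration order from a BVH HIERARCHY section."""
    names = []
    for line in bvh_text.splitlines():
        tokens = line.strip().split()
        # ROOT/JOINT lines look like "JOINT Spine"; End Sites have no name.
        if len(tokens) >= 2 and tokens[0] in ("ROOT", "JOINT"):
            names.append(tokens[1])
    return names
```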

@Hellodan-77
Author

Thank you very much for your reply! Does the visualization of the BVH file generated in the demo_results folder require a GPU? Is it necessary to run the code under Linux operating system?

@HalfSummer11
Collaborator

You're welcome :)
A GPU is not required, but the visualization code has only been tested under Linux & macOS. I'm not sure whether it works under Windows.

@Hellodan-77
Author

Hellodan-77 commented Feb 1, 2021

Which version of OpenPose did you use to extract the 2D keypoints of the human body and generate the JSON files? Were you running under Windows? Do you need a GPU? Which steps of https://github.com/CMU-Perceptual-Computing-Lab/openpose did you follow? Could you tell me more about that? Thank you very much!

@HalfSummer11
Collaborator

We used OpenPose 1.5.1, but I don't think the version matters here, since the output format is the same; the latest version should also work. We ran it on Ubuntu with a GPU. The only step is the one from my previous comment:

./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --write_json output_json_folder/

If you go over OpenPose's README, you should find a similar script here. For more details, you can consult their documentation.
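Putting it together: OpenPose writes one `..._keypoints.json` file per video frame into the output folder. A sketch (function name is mine, first-detected-person only) of stacking those files into a single array for downstream processing:

```python
import glob
import json
import os

import numpy as np

def load_sequence(json_dir):
    """Stack OpenPose per-frame JSONs into a (T, 25, 3) body-pose array.

    Takes the first detected person in each frame; frames with no
    detection are skipped, so T may be smaller than the file count.
    """
    frames = []
    for path in sorted(glob.glob(os.path.join(json_dir, "*_keypoints.json"))):
        with open(path) as f:
            people = json.load(f)["people"]
        if people:
            frames.append(np.array(people[0]["pose_keypoints_2d"]).reshape(25, 3))
    return np.stack(frames)
```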
