Video extract #131
Thanks for your interest! Our code deals with JSON files generated using OpenPose.
Thank you very much for your reply! May I ask what you mean by mean and std? And how did you get the NPZ file used in 'data/treadmill_norm/test2d.npz'?
During training, our motion input is normalized ((X - mean(X)) / std(X)) before being fed into the network. Here mean(X) and std(X) are computed over the training dataset. Ideally mean(X) would be an "average pose" without bias from any style.
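The normalization described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the array shapes and variable names are assumptions, and the key point is that the statistics come from the training set and are reused unchanged at test time.

```python
import numpy as np

# Hypothetical pose data: (frames, joints, 2) for 2D keypoints.
train_X = np.random.rand(1000, 15, 2)  # training motion
test_X = np.random.rand(200, 15, 2)    # test motion

# Statistics are computed over the training set only.
mean = train_X.mean(axis=0)            # the "average pose"
std = train_X.std(axis=0) + 1e-8       # epsilon guards against division by zero

train_norm = (train_X - mean) / std
test_norm = (test_X - mean) / std      # reuse training stats; do not recompute
```

Reusing the training-set statistics at test time keeps the two distributions comparable, which is why the same mean/std pair would ship alongside a pretrained model.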
Question 1: I want to experiment with my own video, but I am not very familiar with OpenPose, and you already have a mature and reliable pipeline. Could you please share the specific method (or a link) for extracting video joint positions with OpenPose?
Sure. The way to get the JSON files is to run the following command, as specified in the OpenPose repo:
./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --write_json output_json_folder/
Thank you very much for your reply! Does visualizing the BVH file generated in the demo_results folder require a GPU? Is it necessary to run the code on Linux?
You're welcome :)
Which version of OpenPose did you use to extract the 2D joints of the human body and generate the JSON files? Were you running under Windows? Do you need a GPU? Which step of https://github.com/CMU-Perceptual-Computing-Lab/openpose did you follow? Could you tell me more about that? Thank you very much!
We used OpenPose 1.5.1, but I don't think the version matters here since the output formats are the same; the latest version should also work. We ran it on Ubuntu with a GPU. The step is basically the one from my previous comment, and there are no other steps:
./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --write_json output_json_folder/
If you go over OpenPose's README you should find a similar script there. For more details, you can consult their documentation.
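Once OpenPose has written the JSON files, each frame can be read back in Python. This sketch assumes OpenPose's default BODY_25 model (25 body keypoints per person), where each joint is stored as a flat (x, y, confidence) triplet under "pose_keypoints_2d"; the inline dictionary stands in for a real file, and the commented file name is hypothetical.

```python
import json
import numpy as np

# Stand-in for one frame of OpenPose output (BODY_25: 25 joints * 3 values).
# In practice you would load a real file, e.g.:
#   frame = json.load(open("output_json_folder/some_frame_keypoints.json"))
frame = {
    "people": [
        {"pose_keypoints_2d": [0.0] * 75}
    ]
}

person = frame["people"][0]                                    # first detected person
keypoints = np.array(person["pose_keypoints_2d"]).reshape(-1, 3)
xy = keypoints[:, :2]        # 2D joint positions, shape (25, 2)
confidence = keypoints[:, 2] # per-joint detection confidence
```

Frames where OpenPose detects no one have an empty "people" list, so real parsing code should check for that before indexing.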
If we use our own video dataset and run OpenPose directly to extract the 2D joint positions into JSON files, can we replace the original JSON files and perform style transfer directly? How many joints do the JSON files extracted from your video dataset contain?