
About DATA_PATH = 'sklt_data_all' #9

Closed
JiajiaStrive opened this issue Jul 2, 2018 · 19 comments
@JiajiaStrive

Where is the path for DATA_PATH = 'sklt_data_all'? I can't find it. Can you tell me? I would also like to know how to execute the NTU code. I have already converted the txt files to .csv. Looking forward to your reply.

Now I am facing the following problem:
[screenshot]

@InwoongLee (Owner) commented Jul 3, 2018

You need to make a folder named sklt_data_all and move the input csv files into it, or set DATA_PATH to the folder containing your input csv files.
Also, you need to make a folder named sklt_npy_view for saving the npy files.
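The setup described above can be sketched in a few lines. The two folder names come from this comment; everything else (running this from the repository root, where the .csv files already live) is an assumption.

```python
import os

# Folder names are taken from the maintainer's comment; the surrounding
# repository layout is an assumption.
DATA_PATH = 'sklt_data_all'   # put (or point to) the input .csv files here
NPY_PATH = 'sklt_npy_view'    # converted .npy files are written here

for path in (DATA_PATH, NPY_PATH):
    os.makedirs(path, exist_ok=True)  # no-op if the folder already exists
```

Alternatively, leave your .csv files where they are and point DATA_PATH at that folder instead of creating sklt_data_all.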

@JiajiaStrive (Author)

Thank you very much! I have executed it, but now I have another question.
[screenshot]
How can I use the code below?
[screenshot]
[screenshot]
I just ran CS_Ensemble TS-LSTM v1_new.py, but I think maybe I was wrong.

Looking forward to your reply!

@InwoongLee (Owner)

I cannot tell what the error is from this. I think you need to debug according to the error message.

@JiajiaStrive (Author)

I just ran CS_Ensemble TS-LSTM v1_new.py. Is this right?

@InwoongLee (Owner) commented Jul 3, 2018

I think there is a config.feature_size error. You need to change 2*config.feature_size in feature_only_diff_2 to config.feature_size. Please refer to 1257f42. Ensemble v1 and v2 were modified.
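To make the shape mismatch concrete, here is a hedged sketch: the config object, the sequence length, and the array allocation are all hypothetical stand-ins, but they illustrate why a buffer sized 2*config.feature_size cannot hold rows that already contain feature_size values.

```python
import numpy as np

class Config:            # hypothetical stand-in for the script's config object
    feature_size = 150   # per the maintainer: 150, not 75

config = Config()
frames = 20              # arbitrary sequence length for this demonstration

# Before the fix in 1257f42, the buffer was allocated twice too wide:
buggy = np.zeros((frames, 2 * config.feature_size))  # shape (20, 300): mismatch
# After the fix, it matches the 150-value rows of the input CSVs:
fixed = np.zeros((frames, config.feature_size))      # shape (20, 150)
print(fixed.shape)
```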

@JiajiaStrive (Author)

Can you execute this code on your computer? And can you tell me the versions of your TensorFlow and Python?

I changed it, but now I have another problem.
[screenshot]

@InwoongLee (Owner) commented Jul 3, 2018

I had no problem when executing the code just now. You need to check config.feature_size. Both config.feature_size and evalconfig.feature_size should be 150, not 75. Please check it.

@InwoongLee (Owner)

In feature_only_diff_2, instead of printing len(data[batch_step]), print len(data[batch_step][0]):

    for batch_step in range(len(data)):
        # print len(data[batch_step])
        print len(data[batch_step][0])

If the value is 150, it's okay. But if it is 75, the data format has a problem.
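The diagnostic above can be run standalone. Here `data` is a hypothetical nested list shaped [batch][frame][feature], standing in for the argument of feature_only_diff_2 (written in Python 3 syntax; the repository itself is Python 2 era).

```python
# Hypothetical stand-in for the loaded data: 1 sequence, 30 frames, 150 features.
data = [[[0.0] * 150 for _ in range(30)]]

for batch_step in range(len(data)):
    width = len(data[batch_step][0])  # the per-frame feature count to check
    # 150 means the CSVs are in the expected two-subject format;
    # 75 means the zero padding to 150 is missing.
    print(width)
```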

@JiajiaStrive (Author)

[screenshot]
I printed len(data[batch_step]) and len(data[batch_step][0]), but I get this:
[screenshot]
I get the data through the following files:
[screenshot]
I didn't change anything.

I called the function as follows:

[screenshot]

[screenshot]
Looking forward to your answer.

@JiajiaStrive (Author)

I am so sorry, I wrote print len(data[batch_step][0]) incorrectly. I executed the code again and got this:
[screenshot]

@JiajiaStrive (Author)

[screenshot]
I changed config.feature_size to 75 and got the above.

@InwoongLee (Owner) commented Jul 4, 2018

config.feature_size is 150; that is correct, not 75. I'm sorry for our erroneous file. make_csv_action_0149.m produces a 75-value input size, so we need to add zero padding to reach 150. We have modified the code; please reuse make_csv_action_0149.m from bf57d5d. make_csv_action_5060.m does not have any problem.
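A hedged sketch of the padding the fixed make_csv_action_0149.m performs: a single-subject frame has 75 values (25 joints x 3 coordinates, the standard NTU RGB+D skeleton), and zeros are appended to reach the 150-value two-subject format. The exact position of the padding inside the .m file is an assumption; only the 75-to-150 widening is stated in the thread.

```python
import numpy as np

frame = np.arange(75, dtype=float)             # hypothetical one-subject frame
padded = np.pad(frame, (0, 150 - frame.size))  # append 75 zeros on the right
print(padded.size)
```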

@JiajiaStrive (Author)

Thank you very much! I hope I can run the code successfully.

@JiajiaStrive (Author)

I need your help. Can you help me change 4 GPUs to 3 GPUs in NTU_Code -> CS_Ensemble_TS-LSTM_v1.py? I really need your help. I don't have 4 GPUs for it, but I have 3 GPUs.

@JiajiaStrive (Author)

Looking forward to your answer.

@InwoongLee (Owner) commented Jul 18, 2018

If you look at "with tf.device(sw_0):", you will see that sw_0, sw_1, sw_2, and sw_3 are assigned to gpu0, gpu1, gpu2, and gpu3, respectively.

So you can control the runner assignment like this:

    sw_0 = runner_assign[0]
    sw_1 = runner_assign[1]
    sw_2 = runner_assign[2]
    sw_3 = runner_assign[3]

->

    sw_0 = '/gpu:0'
    sw_1 = '/gpu:1'
    sw_2 = '/gpu:2'
    sw_3 = '/gpu:2'

Another modification is needed as well:

    gradient_device = ['/gpu:0','/gpu:1','/gpu:2','/gpu:3']

->

    gradient_device = ['/gpu:0','/gpu:1','/gpu:2','/gpu:2']

This is an example; you can handle it in the way you want.

@JiajiaStrive (Author)

Thank you very much! I have changed it successfully! But I have another question: what version of CUDA did you use with tensorflow-0.11.0 when you ran the NTU RGB+D code?

@InwoongLee (Owner)

Maybe it was 7.5.

If you have version problems, you can upgrade TensorFlow and CUDA, and then edit some of the code, such as the tf initializers, tf.concat, etc., according to the new TensorFlow version.

The UCLA and UWA code is already modified; please refer to the code for the UCLA and UWA datasets.

@JiajiaStrive (Author)

Thank you very much! I am very glad to have your help!
