
Issue #10 (Open)
BaldyLLC opened this issue May 12, 2020 · 6 comments

@BaldyLLC

(screenshot attached)
It is not allowing me to render.

@errno-mmd (Owner)

The screenshot image is too small to read the error message...

@BaldyLLC (Author)

C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From C:\Users\lewis\Downloads\mmdmatic-ver1.03-3\mmdmatic-ver1.03-3\tf-pose-estimation\tf_pose\mobilenet\mobilenet.py:369: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

usage: run_video.py [-h] [--video VIDEO] [--resolution RESOLUTION]
[--model MODEL] [--show-process] [--no_bg]
[--write_json WRITE_JSON] [--no_display]
[--resize_out_ratio RESIZE_OUT_RATIO]
[--number_people_max NUMBER_PEOPLE_MAX]
[--frame_first FRAME_FIRST] [--write_video WRITE_VIDEO]
[--tensorrt TENSORRT]
run_video.py: error: unrecognized arguments: 5/1/_215002\zimzalabim_json 5/1/_215002\zimzalabim_tf-pose-estimation.avi

Done!!
tf-pose-estimation analysis end
BULK OUTPUT_JSON_DIR: C:\Users\lewis\Downloads\zimzalabim_Mon 5/1/_215002\zimzalabim_json

mannequinchallenge-vmd

usage: predict_video.py [-h] --input {single_view,two_view,two_view_k}
[--simple_keypoints SIMPLE_KEYPOINTS] [--mode MODE]
[--human_data_term HUMAN_DATA_TERM]
[--batchSize BATCHSIZE] [--loadSize LOADSIZE]
[--fineSize FINESIZE] [--output_nc OUTPUT_NC]
[--ngf NGF] [--ndf NDF]
[--which_model_netG WHICH_MODEL_NETG]
[--gpu_ids GPU_IDS] [--name NAME] [--model MODEL]
[--nThreads NTHREADS]
[--checkpoints_dir CHECKPOINTS_DIR] [--norm NORM]
[--serial_batches] [--display_winsize DISPLAY_WINSIZE]
[--display_id DISPLAY_ID] [--identity IDENTITY]
[--use_dropout] [--max_dataset_size MAX_DATASET_SIZE]
[--display_freq DISPLAY_FREQ]
[--print_freq PRINT_FREQ]
[--save_latest_freq SAVE_LATEST_FREQ]
[--save_epoch_freq SAVE_EPOCH_FREQ] [--continue_train]
[--phase PHASE] [--which_epoch WHICH_EPOCH]
[--niter NITER] [--niter_decay NITER_DECAY]
[--lr_decay_epoch LR_DECAY_EPOCH]
[--lr_policy LR_POLICY] [--beta1 BETA1] [--lr LR]
[--no_lsgan] [--lambda_A LAMBDA_A]
[--lambda_B LAMBDA_B] [--pool_size POOL_SIZE]
[--no_html] [--no_flip] [--video_path VIDEO_PATH]
[--json_path JSON_PATH] [--now NOW]
[--past_depth_path PAST_DEPTH_PATH]
[--interval INTERVAL]
[--number_people_max NUMBER_PEOPLE_MAX]
[--reverse_specific REVERSE_SPECIFIC]
[--order_specific ORDER_SPECIFIC]
[--end_frame_no END_FRAME_NO]
[--order_start_frame ORDER_START_FRAME]
[--avi_output AVI_OUTPUT] [--verbose VERBOSE]
predict_video.py: error: unrecognized arguments: 5/1/_215002\zimzalabim_json 5/1/_215137
ERROR
Press any key to continue . . .

@errno-mmd (Owner)

It seems that the problem is caused by a difference in date format.
Would you try this pre-release version of motion_trace_bulk?
https://github.com/errno-mmd/motion_trace_bulk/archive/date_format.zip
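The "unrecognized arguments" errors above fit that diagnosis: a timestamp like `Mon 5/1/_215002` contains spaces and slashes, so when it is spliced unquoted into a command line, the output path splits into several extra arguments. A minimal sketch of the general fix, with illustrative names (this is not the motion_trace_bulk code itself):

```python
import datetime

def safe_timestamp() -> str:
    # Only characters legal in Windows file names and no spaces, so the
    # resulting path survives command-line splicing as a single token.
    return datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

# Hypothetical output directory built from the filesystem-safe timestamp.
# Passing arguments as a list, e.g.
#   subprocess.run(["python", "run_video.py", "--write_json", out_dir], check=True)
# also keeps a path containing spaces as one argument.
out_dir = rf"C:\work\zimzalabim_{safe_timestamp()}\zimzalabim_json"
```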

@BaldyLLC (Author)

I am still getting:

C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\lewis\.conda\envs\mmdmat\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
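For what it's worth, those FutureWarnings come from TensorFlow's own dtype definitions, not from this project, and they are harmless: numpy deprecated the bare `(type, 1)` field spec in favour of an explicit shape tuple that means the same thing. A small illustration:

```python
import numpy as np

# The warned-about form is np.dtype([("quint8", np.uint8, 1)]); numpy now
# wants the subarray shape spelled out as a tuple, which is equivalent:
quint8_record = np.dtype([("quint8", np.uint8, (1,))])
```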

@BaldyLLC (Author)

nvm, it is working

@BaldyLLC (Author)

Now my code is looping with this:
--- Logging error ---
Traceback (most recent call last):
  File "C:\Users\lewis\.conda\envs\mmdmat\lib\logging\__init__.py", line 1028, in emit
    stream.write(msg + self.terminator)
  File "C:\Users\lewis\.conda\envs\mmdmat\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-3: character maps to <undefined>
Call stack:
  File "predict_video.py", line 783, in <module>
    main()
  File "predict_video.py", line 777, in main
    predict_video(now_str, opt.video_path, depth_path, past_depth_path, interval, opt.json_path, opt.number_people_max, reverse_specific_dict, order_specific_dict, is_avi_output, opt.end_frame_no, opt.order_start_frame, opt.verbose, opt)
  File "predict_video.py", line 449, in predict_video
    logger.warning("深度推定 idx: %s(%s) 処理: %s[sec]", _idx, cnt, time.time() - start)
Message: '深度推定 idx: %s(%s) 処理: %s[sec]'
Arguments: (900, 901, 107.13522481918335)
WARNING:main:深度推定 idx: 900(901) 処理: 107.13522481918335[sec]
DEBUG:main:cnt: 902, _idx: 901, flag: True, len(img_list): 1
DEBUG:main:cnt: 903, _idx: 902, flag: True, len(img_list): 2
DEBUG:main:cnt: 904, _idx: 903, flag: True, len(img_list): 3
DEBUG:main:cnt: 905, _idx: 904, flag: True, len(img_list): 4
DEBUG:main:cnt: 906, _idx: 905, flag: True, len(img_list): 5
DEBUG:main:cnt: 907, _idx: 906, flag: True, len(img_list): 6
DEBUG:main:cnt: 908, _idx: 907, flag: True, len(img_list): 7
DEBUG:main:cnt: 909, _idx: 908, flag: True, len(img_list): 8
DEBUG:main:cnt: 910, _idx: 909, flag: True, len(img_list): 9
DEBUG:main:cnt: 911, _idx: 910, flag: True, len(img_list): 10
DEBUG:main:cnt: 912, _idx: 911, flag: True, len(img_list): 11
DEBUG:main:cnt: 913, _idx: 912, flag: True, len(img_list): 12
DEBUG:main:cnt: 914, _idx: 913, flag: True, len(img_list): 13
DEBUG:main:cnt: 915, _idx: 914, flag: True, len(img_list): 14
DEBUG:main:cnt: 916, _idx: 915, flag: True, len(img_list): 15
DEBUG:main:cnt: 917, _idx: 916, flag: True, len(img_list): 16
DEBUG:main:cnt: 918, _idx: 917, flag: True, len(img_list): 17
DEBUG:main:cnt: 919, _idx: 918, flag: True, len(img_list): 18
DEBUG:main:cnt: 920, _idx: 919, flag: True, len(img_list): 19
DEBUG:main:cnt: 921, _idx: 920, flag: True, len(img_list): 20
DEBUG:main:========================= Video dataset #images = 20 =========
DEBUG:main.models.pix2pixdata_model:====================================== DIW NETWORK TRAIN FROM Ours_Bilinear=======================
DEBUG:main.models.pix2pixdata_model:===================Loading Pretrained Model OURS ===================================
DEBUG:main.models.pix2pixdata_model:---------- Networks initialized -------------
DEBUG:main.models.networks:HourglassModel(
  (seq): Sequential(
    (0): Conv2d(3, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): Channels4(
      (list): ModuleList(
        (0): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): inception[[32], [3, 32, 32], [5, 32, 32], [7, 32, 32]]
          (2): inception[[32], [3, 32, 32], [5, 32, 32], [7, 32, 32]]
          (3): Channels3(
            (list): ModuleList(
              (0): Sequential(
                (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
                (1): inception[[32], [3, 32, 32], [5, 32, 32], [7, 32, 32]]
                (2): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                (3): Channels2(
                  (list): ModuleList(
                    (0): Sequential(
                      (0): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                      (1): inception[[64], [3, 64, 64], [7, 64, 64], [11, 64, 64]]
                    )
                    (1): Sequential(
                      (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
                      (1): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                      (2): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                      (3): Channels1(
                        (list): ModuleList(
                          (0): Sequential(
                            (0): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                            (1): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                          )
                          (1): Sequential(
                            (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
                            (1): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                            (2): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                            (3): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                            (4): UpsamplingBilinear2d(scale_factor=2.0, mode=bilinear)
                          )
                        )
                      )
                      (4): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                      (5): inception[[64], [3, 64, 64], [7, 64, 64], [11, 64, 64]]
                      (6): UpsamplingBilinear2d(scale_factor=2.0, mode=bilinear)
                    )
                  )
                )
                (4): inception[[64], [3, 32, 64], [5, 32, 64], [7, 32, 64]]
                (5): inception[[32], [3, 32, 32], [5, 32, 32], [7, 32, 32]]
                (6): UpsamplingBilinear2d(scale_factor=2.0, mode=bilinear)
              )
              (1): Sequential(
                (0): inception[[32], [3, 32, 32], [5, 32, 32], [7, 32, 32]]
                (1): inception[[32], [3, 64, 32], [7, 64, 32], [11, 64, 32]]
              )
            )
          )
          (4): inception[[32], [3, 64, 32], [5, 64, 32], [7, 64, 32]]
          (5): inception[[16], [3, 32, 16], [7, 32, 16], [11, 32, 16]]
          (6): UpsamplingBilinear2d(scale_factor=2.0, mode=bilinear)
        )
        (1): Sequential(
          (0): inception[[16], [3, 64, 16], [7, 64, 16], [11, 64, 16]]
        )
      )
    )
  )
  (uncertainty_layer): Sequential(
    (0): Conv2d(64, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): Sigmoid()
  )
  (pred_layer): Conv2d(64, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
DEBUG:main.models.networks:Total number of parameters: 5357730
DEBUG:main.models.pix2pixdata_model:-----------------------------------------------
DEBUG:main:================================= BEGIN VALIDATION =====================================
DEBUG:main:TESTING ON VIDEO
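Aside from the looping, the "--- Logging error ---" block above is a Windows console encoding problem: the default stream handler encodes with cp1252, which cannot represent the Japanese log text, so every such warning call fails inside the logging module. A minimal sketch of one workaround is to give the handler an explicitly UTF-8-capable stream; the in-memory buffer here just stands in for a UTF-8 console or log file:

```python
import io
import logging

log_buffer = io.StringIO()  # stand-in for a UTF-8 console or file stream
handler = logging.StreamHandler(log_buffer)

logger = logging.getLogger("main")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

# The same call that failed under cp1252 now succeeds:
logger.warning("深度推定 idx: %s(%s) 処理: %s[sec]", 900, 901, 107.135)
handler.flush()
```

In practice the same effect comes from logging to a file opened as UTF-8 (`logging.FileHandler(path, encoding="utf-8")`) or from setting the `PYTHONIOENCODING=utf-8` environment variable before launching the script.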
