
Training code is not working properly #4

Closed · abslon opened this issue May 28, 2020 · 6 comments

abslon commented May 28, 2020

I tried to run the training code: python scripts/train_network.py
TensorFlow runs on the GPU, but the code appears to get stuck at the model train function: current_loss = train(model, batch_tf)

I'm using TensorFlow 2.0 and the latest version of Open3D (ml-module branch).
How can I fix it?

abslon (Author) commented May 28, 2020

I found that it doesn't actually get stuck at the train function; it's just running incredibly slowly.

python scripts/train_network.py
['scripts/../datasets/ours_default_data/valid/sim_0201_00.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_01.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_02.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_03.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_04.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_05.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_06.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_07.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_08.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_09.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_10.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_11.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_12.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_13.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_14.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0201_15.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0202_00.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0202_01.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0202_02.msgpack.zst', 'scripts/../datasets/ours_default_data/valid/sim_0202_03.msgpack.zst'] ...
['scripts/../datasets/ours_default_data/train/sim_0001_00.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_01.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_02.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_03.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_04.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_05.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_06.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_07.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_08.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_09.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_10.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_11.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_12.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_13.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_14.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0001_15.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0002_00.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0002_01.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0002_02.msgpack.zst', 'scripts/../datasets/ours_default_data/train/sim_0002_03.msgpack.zst'] ...
[0528 14:46:42 @parallel.py:340] [MultiProcessRunnerZMQ] Will fork a dataflow more than one times. This assumes the datapoints are i.i.d.
2020-05-28 14:46:45.016724: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-05-28 14:46:45.026424: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.026892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.755
pciBusID: 0000:2d:00.0
2020-05-28 14:46:45.027119: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-05-28 14:46:45.028051: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-05-28 14:46:45.028930: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-05-28 14:46:45.029122: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-05-28 14:46:45.030196: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-05-28 14:46:45.031014: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-05-28 14:46:45.033543: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-28 14:46:45.033705: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.034187: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.034584: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-05-28 14:46:45.034916: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-28 14:46:45.039717: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3900050000 Hz
2020-05-28 14:46:45.040177: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557246ed0b70 executing computations on platform Host. Devices:
2020-05-28 14:46:45.040194: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2020-05-28 14:46:45.120697: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.121134: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557246dd70a0 executing computations on platform CUDA. Devices:
2020-05-28 14:46:45.121151: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-05-28 14:46:45.121303: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.121705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.755
pciBusID: 0000:2d:00.0
2020-05-28 14:46:45.121758: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-05-28 14:46:45.121773: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-05-28 14:46:45.121785: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-05-28 14:46:45.121797: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-05-28 14:46:45.121810: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-05-28 14:46:45.121821: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-05-28 14:46:45.121833: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-28 14:46:45.121898: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.122333: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.122736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-05-28 14:46:45.122765: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-05-28 14:46:45.123589: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-28 14:46:45.123601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2020-05-28 14:46:45.123608: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2020-05-28 14:46:45.123706: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.124156: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-28 14:46:45.124579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8493 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:2d:00.0, compute capability: 7.5)
# 2020-05-28 14:46:45        0 n/a ips                 n/a rem | 
2020-05-28 14:47:00.475828: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
evaluating.. sim_0201 sim_0202

The current time here is 2020-05-28 17:03.
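
(For anyone hitting the same symptom: one way to tell a one-time compilation delay from a genuinely slow loop is to time a small TF 2.0 workload outside the project. The sketch below is generic, not the repository's training loop; the first call of a tf.function includes tracing and kernel setup, so it is excluded from the timing.)

import time
import tensorflow as tf

# Generic timing sketch (not scripts/train_network.py): measures seconds per
# step for a simple GPU matmul so a hang can be told apart from a slow loop.
@tf.function
def step(a, b):
    return tf.reduce_sum(tf.matmul(a, b))

a = tf.random.normal([2048, 2048])
b = tf.random.normal([2048, 2048])

step(a, b)  # first call traces the function and loads kernels; not timed
t0 = time.time()
for _ in range(10):
    step(a, b).numpy()  # .numpy() blocks until the GPU result is ready
print("%.3f s per step" % ((time.time() - t0) / 10))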

@benjaminum (Collaborator)

Hi, does nvidia-smi show any activity on the GPU?

abslon (Author) commented May 28, 2020

Thanks for your reply.
TensorFlow does consume GPU memory, but there is no activity.
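
(A quick way to double-check this from Python, independent of nvidia-smi: the sketch below assumes a plain TF 2.0 install and only confirms that the GPU is visible and that ops are actually placed on it; running it while watching nvidia-smi should show a short burst of GPU utilization.)

import tensorflow as tf

# Sketch: confirm the GPU is visible to TF 2.0 and that ops run on it.
print(tf.config.experimental.list_physical_devices('GPU'))

tf.debugging.set_log_device_placement(True)  # print each op's assigned device
with tf.device('/GPU:0'):
    x = tf.random.normal([1024, 1024])
    y = tf.matmul(x, x)
print(float(tf.reduce_sum(y)))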

@benjaminum (Collaborator)

Can you check whether the ops have been compiled with CUDA?
During the CMake configure step you should see the following two lines:

-- Building Tensorflow ops
-- Building Tensorflow ops with CUDA

abslon (Author) commented May 28, 2020

We added the CMake option '-DBUILD_CUDA_MODULE=ON' and built the TensorFlow ops successfully.
According to your paper, training the model took about a day. Right now the training code seems to run at 0.4~0.5 ips. Is this similar to your training speed?

# 2020-05-28 23:28:01     1330      0.46 ips      1 day, 5:05:32 rem | loss 1.933568000793457

Thanks.
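
(As a rough sanity check on that ETA: assuming a total of about 50,000 training iterations as in the paper, which is an assumption here rather than something printed in the log, the remaining time at ~0.46 ips works out to roughly the 1 day 5 h shown above.)

# Back-of-the-envelope check of the ETA printed above.
total_iters = 50000   # assumed total iteration count (from the paper, not the log)
done = 1330           # iterations completed, from the log line
ips = 0.46            # reported iterations per second

remaining_s = (total_iters - done) / ips
print("%.1f hours remaining" % (remaining_s / 3600))  # ~29.4 h, i.e. ~1 day 5 h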

@benjaminum (Collaborator)

The speed looks reasonable now.

abslon closed this as completed May 29, 2020