c_api_distributed_test creates huge amount of threads and segfaults #47047

Status: Open
Opened by Flamefire (Contributor) on Feb 9, 2021 · 0 comments

Labels: comp:apis (High-level API related issues), stat:awaiting tensorflower (awaiting response from tensorflower), TF 2.4 (issues related to TF 2.4), type:bug (Bug)

System information

  • Have I written custom code: no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 2.4.1
  • Python version: 3.7.4
  • Bazel version (if compiling from source): 3.7.1
  • GCC/Compiler version (if compiling from source): 8.3.0
  • CUDA/cuDNN version: 10.1

Describe the current behavior

When running bazel test on a system with a large physical core count, the test //tensorflow/c/eager:c_api_distributed_test finishes (all subtests pass) and then segfaults on exit.

When I manually set OMP_NUM_THREADS=80, the test succeeds without a segfault, but at around 85 it crashes again.
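
For reference, the environment variable can also be passed through Bazel's test runner with the standard --test_env flag instead of exporting it before running the binary directly, e.g.:

  bazel test --test_env=OMP_NUM_THREADS=80 //tensorflow/c/eager:c_api_distributed_test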

I'm unable to get a stack trace, either through TensorFlow or through GDB, and even Valgrind gives up with:

valgrind: the 'impossible' happened:
Max number of threads is too low

It then prints the stacks of 500(!) threads. In GDB I was sometimes able to catch part of a stack pointing into libiomp from the bundled LLVM OpenMP runtime, but that was difficult and hard to reproduce; usually the process would simply be terminated, even when running under GDB.
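
For what it's worth, 500 is exactly Valgrind's default thread limit; raising it via the standard --max-threads core option might let Valgrind survive long enough to report something useful, e.g.:

  valgrind --max-threads=2000 ./c_api_distributed_test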

Something I noticed: the ThreadPool(Device) creates a large number of threads which do not terminate until program exit. I don't think this is intended, and I suspect it is what triggers some limit in the OpenMP runtime.
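
The growing thread count can be confirmed from outside while the test is running; on Linux, a generic check along these lines (not part of my original run) reports the number of threads of the test process:

  grep Threads /proc/$(pgrep -f c_api_distributed_test)/status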

Also, the crash does not happen when not all subtests are run (via the GTest filter); excluding any one of the 6 subtests makes the crash disappear.
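
For reference, the filtering mentioned above is GoogleTest's standard --gtest_filter mechanism; a negative pattern excludes a single subtest (the test name below is just one of the six from the log):

  ./c_api_distributed_test --gtest_filter=-CAPI.DistributedFunctionNoError
  bazel test //tensorflow/c/eager:c_api_distributed_test --test_arg=--gtest_filter=-CAPI.DistributedFunctionNoError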

Describe the expected behavior

Threads exit when the ThreadPool is destroyed, and no crash happens on test exit.

Standalone code to reproduce the issue

  • Build the test with Bazel (a possible end-to-end invocation is sketched below)
  • CUDA_VISIBLE_DEVICES=-1 gdb /dev/shm//tmpzWGWuq-bazel-tf/fdff6046a749a079864ed2bee7e018bf/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/c_api_distributed_test
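
A single command along these lines should build and run the test in one go (--test_output and --test_env are standard Bazel flags; the GPU is disabled the same way as in the gdb invocation above):

  bazel test //tensorflow/c/eager:c_api_distributed_test --test_output=all --test_env=CUDA_VISIBLE_DEVICES=-1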

Other info / logs

Executing tests from //tensorflow/c/eager:c_api_distributed_test
-----------------------------------------------------------------------------
2021-02-08 19:58:39.296267: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
Running main() from test_main.cc
[==========] Running 6 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 6 tests from CAPI
[ RUN      ] CAPI.TestLocalFunctionWithPackedInput
2021-02-08 19:58:39.510017: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2021-02-08 19:58:39.748990: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-02-08 19:58:39.749037: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: taurusi8028
2021-02-08 19:58:39.749045: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: taurusi8028
2021-02-08 19:58:39.749373: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 460.32.3
2021-02-08 19:58:39.749416: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 460.32.3
2021-02-08 19:58:39.749430: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 460.32.3
2021-02-08 19:58:39.749508: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:39.796167: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59685, 1 -> localhost:50055, 2 -> localhost:31398}
2021-02-08 19:58:39.841581: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:50055
2021-02-08 19:58:39.841721: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:40.095546: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59685, 1 -> localhost:50055, 2 -> localhost:31398}
2021-02-08 19:58:40.095796: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:31398
2021-02-08 19:58:40.095865: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:40.095922: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:40.258067: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59685, 1 -> localhost:50055, 2 -> localhost:31398}
2021-02-08 19:58:40.406436: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59685, 1 -> localhost:50055, 2 -> localhost:31398}
2021-02-08 19:58:40.406478: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59685, 1 -> localhost:50055, 2 -> localhost:31398}
2021-02-08 19:58:40.406627: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating async eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:1
2021-02-08 19:58:40.406634: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating async eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:2
2021-02-08 19:58:40.406664: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:40.406670: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:40.408781: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59685, 1 -> localhost:50055, 2 -> localhost:31398}
2021-02-08 19:58:40.409412: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:59685
2021-02-08 19:58:40.647385: I tensorflow/core/common_runtime/eager/kernel_and_device.cc:92] Ignoring error status when releasing multi-device function handle Unimplemented: Releasing a multi-device component handle on a remote device is not yet implemented.
[       OK ] CAPI.TestLocalFunctionWithPackedInput (1209 ms)
[ RUN      ] CAPI.TestRemoteFunctionWithPackedInput
2021-02-08 19:58:40.647938: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:40.673207: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:62634, 1 -> localhost:31692, 2 -> localhost:39353}
2021-02-08 19:58:40.673481: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:31692
2021-02-08 19:58:40.673544: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:40.775266: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:62634, 1 -> localhost:31692, 2 -> localhost:39353}
2021-02-08 19:58:40.778366: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:39353
2021-02-08 19:58:40.778517: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:40.778614: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:40.850707: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:62634, 1 -> localhost:31692, 2 -> localhost:39353}
2021-02-08 19:58:40.857370: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:62634, 1 -> localhost:31692, 2 -> localhost:39353}
2021-02-08 19:58:40.857557: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:62634, 1 -> localhost:31692, 2 -> localhost:39353}
2021-02-08 19:58:40.857947: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating async eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:2
2021-02-08 19:58:40.857974: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:40.858061: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating async eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:1
2021-02-08 19:58:40.858100: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:40.865056: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:62634, 1 -> localhost:31692, 2 -> localhost:39353}
2021-02-08 19:58:40.866478: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:62634
[       OK ] CAPI.TestRemoteFunctionWithPackedInput (367 ms)
[ RUN      ] CAPI.DistributedFunctionGraphPassOnlyOnce
2021-02-08 19:58:41.014661: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.024886: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:45659, 1 -> localhost:44179, 2 -> localhost:57750}
2021-02-08 19:58:41.025155: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:44179
2021-02-08 19:58:41.025233: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.101530: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:45659, 1 -> localhost:44179, 2 -> localhost:57750}
2021-02-08 19:58:41.103305: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:57750
2021-02-08 19:58:41.103445: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.103492: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.265757: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:45659, 1 -> localhost:44179, 2 -> localhost:57750}
2021-02-08 19:58:41.272939: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:45659, 1 -> localhost:44179, 2 -> localhost:57750}
2021-02-08 19:58:41.273025: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:45659, 1 -> localhost:44179, 2 -> localhost:57750}
2021-02-08 19:58:41.273080: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating sync eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:1
2021-02-08 19:58:41.273111: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.273240: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating sync eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:2
2021-02-08 19:58:41.273276: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.275488: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:45659, 1 -> localhost:44179, 2 -> localhost:57750}
2021-02-08 19:58:41.275920: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:45659
[       OK ] CAPI.DistributedFunctionGraphPassOnlyOnce (316 ms)
[ RUN      ] CAPI.DistributedFunctionNoError
2021-02-08 19:58:41.331075: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.405806: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59564, 1 -> localhost:34434, 2 -> localhost:37620}
2021-02-08 19:58:41.406048: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:34434
2021-02-08 19:58:41.406115: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.445328: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59564, 1 -> localhost:34434, 2 -> localhost:37620}
2021-02-08 19:58:41.446777: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:37620
2021-02-08 19:58:41.446932: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.447020: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.709247: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59564, 1 -> localhost:34434, 2 -> localhost:37620}
2021-02-08 19:58:41.713390: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59564, 1 -> localhost:34434, 2 -> localhost:37620}
2021-02-08 19:58:41.713392: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59564, 1 -> localhost:34434, 2 -> localhost:37620}
2021-02-08 19:58:41.713504: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating sync eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:2
2021-02-08 19:58:41.713531: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.713555: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating sync eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:1
2021-02-08 19:58:41.713588: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.714723: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:59564, 1 -> localhost:34434, 2 -> localhost:37620}
2021-02-08 19:58:41.715081: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:59564
[       OK ] CAPI.DistributedFunctionNoError (448 ms)
[ RUN      ] CAPI.RemoteExecuteDeleteContextWithOutstandingRPC
2021-02-08 19:58:41.778430: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.843268: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:60792, 1 -> localhost:32251}
2021-02-08 19:58:41.843574: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:32251
2021-02-08 19:58:41.843637: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.843678: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.896621: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:60792, 1 -> localhost:32251}
2021-02-08 19:58:41.952439: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:60792, 1 -> localhost:32251}
2021-02-08 19:58:41.952676: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating sync eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:1
2021-02-08 19:58:41.952707: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:41.953443: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:60792, 1 -> localhost:32251}
2021-02-08 19:58:41.953942: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:60792
[       OK ] CAPI.RemoteExecuteDeleteContextWithOutstandingRPC (178 ms)
[ RUN      ] CAPI.RemoteExecuteDeleteContextWithOutstandingRPCAsync
2021-02-08 19:58:41.956466: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.980731: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:63496, 1 -> localhost:32519}
2021-02-08 19:58:41.980969: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:32519
2021-02-08 19:58:41.981026: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-08 19:58:41.981060: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:42.347899: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:63496, 1 -> localhost:32519}
2021-02-08 19:58:42.350811: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:63496, 1 -> localhost:32519}
2021-02-08 19:58:42.350917: I tensorflow/core/distributed_runtime/eager/eager_service_impl.cc:270] Creating async eager service context with rendezvous_id on host taurusi8028 /job:localhost/replica:0/task:1
2021-02-08 19:58:42.350946: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-02-08 19:58:42.358578: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:63496, 1 -> localhost:32519}
2021-02-08 19:58:42.359091: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:63496
[       OK ] CAPI.RemoteExecuteDeleteContextWithOutstandingRPCAsync (403 ms)
[----------] 6 tests from CAPI (2921 ms total)

[----------] Global test environment tear-down
[==========] 6 tests from 1 test suite ran. (2921 ms total)
[  PASSED  ] 6 tests.

  YOU HAVE 1 DISABLED TEST

*** Received signal 11 ***
*** BEGIN MANGLED STACK TRACE ***

(yes the log ends here, no stack trace!)

Flamefire added the type:bug (Bug) label on Feb 9, 2021
Saduf2019 added the TF 2.4 (issues related to TF 2.4) and comp:apis (High-level API related issues) labels on Feb 10, 2021
Saduf2019 assigned ymodak and unassigned themselves on Feb 10, 2021
Flamefire added a commit to Flamefire/easybuild-easyconfigs that referenced this issue on Feb 10, 2021
ymodak added the stat:awaiting tensorflower (awaiting response from tensorflower) label on Feb 10, 2021
ymodak removed their assignment on Feb 10, 2021