Closed
Description
Hi there,
I recently started moving my training environment to WSL2 to keep pace with keras3.
After following the installation guide, I successfully installed TensorFlow into my conda environment with:
keras3::install_keras(envname = "~/pyEnv/keras", backend = "tensorflow", gpu = T)
However, when I checked tf$config in R, I found that the GPU was not detected:
> tf$config$list_physical_devices()
2024-06-12 02:16:24.128849: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-06-12 02:16:24.668747: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-06-12 02:16:25.456112: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:282] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
[[1]]
PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
I tested some code and keras worked just fine on the CPU.
Then I turned to Python to get more details and, dramatically, the GPU just showed up:
evan@DESKTOP-KGBNUBC:~$ conda activate keras
(/home/evan/pyEnv/keras) evan@DESKTOP-KGBNUBC:~$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-06-12 02:21:15.036500: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-06-12 02:21:15.538230: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-06-12 02:21:16.242746: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-06-12 02:21:16.271831: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-06-12 02:21:16.271904: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
I googled for a while and found nothing similar to this. Does this mean I shouldn't install TF into a conda environment?
Thanks in advance for any advice.
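Since the same conda environment finds the GPU when launched from the shell, one thing worth comparing is the environment each session actually runs in — a common cause of "GPU in shell, no GPU in R" is reticulate picking a different interpreter, or CUDA-related variables differing between the two sessions. As a rough diagnostic sketch (the function name `cuda_env_report` is my own, not part of any library), this can be run from the shell and again from R via `reticulate::py_run_string()` to compare the output side by side:

```python
# Hypothetical diagnostic: collect the interpreter path and the
# CUDA-related environment variables that most often explain a
# GPU being visible in one session but not another.
import os
import sys


def cuda_env_report():
    """Return the interpreter path and CUDA-related env vars as a dict."""
    keys = ("CUDA_VISIBLE_DEVICES", "LD_LIBRARY_PATH", "XLA_FLAGS")
    report = {"python": sys.executable}  # which interpreter is running
    for key in keys:
        report[key] = os.environ.get(key)  # None if the variable is unset
    return report


if __name__ == "__main__":
    for name, value in cuda_env_report().items():
        print(f"{name}: {value}")
```

If the `python` paths differ between the two runs, reticulate is not using the conda environment that has the GPU-enabled TensorFlow; if `CUDA_VISIBLE_DEVICES` is empty or `-1` only in the R session, that would also match the `CUDA_ERROR_NO_DEVICE` message above.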
Session info is here:
> sessionInfo()
R version 4.1.2 (2021-11-01)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 22.04 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.10.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
locale:
[1] LC_CTYPE=C.UTF-8 LC_NUMERIC=C LC_TIME=C.UTF-8 LC_COLLATE=C.UTF-8 LC_MONETARY=C.UTF-8
[6] LC_MESSAGES=C.UTF-8 LC_PAPER=C.UTF-8 LC_NAME=C LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] tensorflow_2.16.0
loaded via a namespace (and not attached):
[1] Rcpp_1.0.12 lattice_0.20-45 png_0.1-8 withr_3.0.0 zeallot_0.1.0 rappdirs_0.3.3
[7] R6_2.5.1 grid_4.1.2 lifecycle_1.0.4 jsonlite_1.8.8 magrittr_2.0.3 tfruns_1.5.3
[13] rlang_1.1.4 cli_3.6.2 fs_1.6.4 rstudioapi_0.16.0 whisker_0.4.1 keras3_1.0.0
[19] Matrix_1.4-0 reticulate_1.37.0 generics_0.1.3 keras_2.15.0 tools_4.1.2 glue_1.7.0
[25] compiler_4.1.2 base64enc_0.1-3