
Issues while running #1

Closed
chrissunny94 opened this issue Oct 19, 2023 · 3 comments

Comments


chrissunny94 commented Oct 19, 2023

DNN library is not found.
	 [[{{node model/Conv1/Conv2D}}]] [Op:__inference_predict_function_10189]


Num GPUs Available:  1
loading file ...mobilenetv2_weights.h5...!
Traceback (most recent call last):
  File "/home/<>/Desktop/POINTCLOUD/YOLOv8-3D/demo.py", line 116, in <module>
    prediction = bbox3d_model.predict(patch, verbose = 0)
  File "/home/<>/anaconda3/envs/test2/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/<>/anaconda3/envs/test2/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnimplementedError: Graph execution error:

Detected at node model/Conv1/Conv2D defined at (most recent call last):

@bharath5673 (Owner)

This is because both TensorFlow and PyTorch try to allocate the same GPU, which can lead to conflicts. Try a fresh environment running on CPU:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip3 install tensorflow-cpu
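A quick way to confirm the new environment picked up the CPU-only builds (a sketch; it assumes only that the two pip commands above ran, and it degrades gracefully if a package is missing):

```python
# Sanity check for the CPU-only environment. Uses importlib so the script
# still runs (and reports) even if torch or tensorflow is not installed.
import importlib
import importlib.util

for pkg in ("torch", "tensorflow"):
    if importlib.util.find_spec(pkg) is None:
        print(f"{pkg}: not installed")
        continue
    mod = importlib.import_module(pkg)
    print(f"{pkg} {mod.__version__} imported OK")
    if pkg == "torch":
        # False on a CPU-only build
        print("  torch.cuda.is_available():", mod.cuda.is_available())
    else:
        # Empty list on tensorflow-cpu
        print("  visible GPUs:", mod.config.list_physical_devices("GPU"))
```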

@bharath5673 (Owner)

You can run a TensorFlow model and a PyTorch model on one GPU at the same time if you manage GPU memory carefully.

Dynamic Memory Allocation:
Both TensorFlow and PyTorch can allocate GPU memory dynamically, based on what the models and operations actually need, which makes more efficient use of the available memory.


import torch
import tensorflow as tf

# Tell TensorFlow to grow its GPU memory on demand instead of
# reserving the whole card up front (must run before any TF op).
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# Set the PyTorch device to the GPU (assuming one is available)
device = torch.device('cuda')

# PyTorch model (placeholder class name)
pytorch_model = PyTorchModel().to(device)

# TensorFlow model built on the same GPU (placeholder class name)
with tf.device('/GPU:0'):
    tensorflow_model = TensorFlowModel()

# Now both models can run on the same GPU
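On the PyTorch side there is a complementary knob (an assumption on my part, not something used in this thread: `torch.cuda.set_per_process_memory_fraction`, available in PyTorch 1.8+). Capping PyTorch's share leaves headroom for TensorFlow on the same card:

```python
# Cap PyTorch's slice of GPU 0 so TensorFlow (with memory growth enabled)
# can claim the remainder. Guarded so the snippet is a no-op without
# torch installed or without a visible GPU.
import importlib.util

if importlib.util.find_spec("torch") is not None:
    import torch
    if torch.cuda.is_available():
        # PyTorch may use at most half of GPU 0's memory in this process.
        torch.cuda.set_per_process_memory_fraction(0.5, device=0)
```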

@chrissunny94 (Author)

Thank you, Bharath.
