Hi folks,
I am currently experimenting with the BEVFusion model from projects/BEVFusion. The model supports a couple of different input modalities: there are commands for LiDAR-only and LiDAR-camera training. I am wondering if and how camera-only training is possible.
I saw that the original paper/repo from MIT Han Lab (https://github.com/mit-han-lab/bevfusion) also reports camera-only metrics. However, I have trouble getting their repo to run on my NVIDIA H100 GPU, which is why I would like to stick with this repo.
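To make the question concrete, this is roughly the kind of config override I had in mind: a minimal, untested sketch that starts from the LiDAR-camera config and strips the LiDAR branch. The base config filename and the model keys below are my guesses from reading projects/BEVFusion, not something I have verified:

```python
# Hypothetical camera-only override for projects/BEVFusion (untested sketch).
# Filename and keys are assumptions based on the LiDAR-camera config layout.
_base_ = [
    './bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py'
]

model = dict(
    # Drop the LiDAR branch entirely.
    pts_voxel_encoder=None,
    pts_middle_encoder=None,
    # With only a single BEV stream left, there is nothing to fuse.
    fusion_layer=None,
    # DepthLSSTransform consumes point-cloud depth, so a camera-only setup
    # would presumably need the plain LSSTransform instead (arguments
    # omitted here, since I am not sure of the right values):
    # view_transform=dict(type='LSSTransform', ...),
)

# The train/test pipelines would presumably also need LoadPointsFromFile
# and other point-related transforms removed; I have omitted that here.
```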
Can anyone help me out?
Thanks a lot!