This project develops a real-time driver assistance system deployed on an NVIDIA Jetson board. It leverages the YOLOv5 deep learning model for object detection, enhancing driver awareness and safety on the road.
- Accurately detects speed limit signs and alerts the driver when the vehicle exceeds the posted limit.
- Continuously monitors the environment, identifying potential collisions and issuing timely warnings to help prevent accidents.
- Automatically adjusts the headlights based on ambient light conditions and detected vehicles, improving visibility for both the driver and oncoming traffic.
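The decision logic behind the first two features can be sketched as simple rules. This is a minimal illustration; the function names, units, and the 2-second time-to-collision threshold are assumptions, not taken from the project code:

```python
def speed_alert(current_speed_kmh, detected_limit_kmh):
    """Alert when the vehicle exceeds the most recently detected speed limit."""
    return current_speed_kmh > detected_limit_kmh


def collision_warning(distance_m, closing_speed_ms, ttc_threshold_s=2.0):
    """Warn when the estimated time-to-collision drops below a threshold.

    Time-to-collision (TTC) = distance / closing speed. A closing speed
    <= 0 means the gap is not shrinking, so no warning is issued.
    """
    if closing_speed_ms <= 0:
        return False
    return distance_m / closing_speed_ms < ttc_threshold_s
```

In practice the speed and distance estimates would come from the vehicle bus and the detector output, and the warnings would be debounced over several frames to avoid flicker.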
- Real-time object detection using the YOLOv5 model.
- Detection of various objects relevant to ADAS, such as vehicles, pedestrians, cyclists, and traffic signs.
- Object tracking that maintains the identity and trajectory of each detected object across frames.
- Bird's Eye View (BEV) visualization of the detected objects in a simulated environment.
- Customizable confidence threshold and class filtering.
- Simulated environment provides an intuitive top-down view of object positions and movements.
- Supports both image and video input for object detection and tracking.
- Easy integration with pre-trained YOLOv5 models.
- Provides bounding box coordinates, class labels, and tracking IDs for detected objects.
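The confidence thresholding and class filtering mentioned above can be sketched as a small post-processing step. This assumes the usual YOLOv5 output layout of one `[x1, y1, x2, y2, conf, cls]` row per detection; the helper itself is illustrative, not the project's actual code:

```python
import numpy as np


def filter_detections(dets, conf_thres=0.25, keep_classes=None):
    """Filter raw detections of shape (N, 6): [x1, y1, x2, y2, conf, cls].

    Keeps rows whose confidence is at least conf_thres and whose class id
    is in keep_classes (None keeps every class).
    """
    dets = np.asarray(dets, dtype=float)
    mask = dets[:, 4] >= conf_thres          # confidence threshold
    if keep_classes is not None:
        mask &= np.isin(dets[:, 5].astype(int), list(keep_classes))
    return dets[mask]
```

For example, `filter_detections(dets, 0.5, {0, 2})` would keep only confident detections of class 0 and class 2 (e.g. vehicles and traffic signs, depending on the class map of the weights in use).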
- Python 3.x
- OpenCV
- PyTorch
- NumPy
- Clone this repository.
- Install the required dependencies:

  ```shell
  pip3 install torch opencv-python numpy
  ```

- Download pre-trained YOLOv5 weights or train your own model.
- Provide the path to the YOLOv5 weights in the code.
- Run the script with the video file.
- View the object detection results and Bird's Eye View visualization.
For more detailed usage instructions and options, refer to the project documentation.
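The Bird's Eye View step can be approximated by projecting the ground contact point of each bounding box (its bottom centre) into a top-down plane with a calibrated 3x3 homography. The helper below is a hedged sketch under that assumption; obtaining the homography `H` (e.g. from known ground points) is camera-specific and not shown:

```python
import numpy as np


def to_birds_eye(boxes, H):
    """Project each box's bottom-centre point into the top-down plane.

    boxes: array of shape (N, 4) with rows [x1, y1, x2, y2] in pixels.
    H:     3x3 homography mapping image points to BEV coordinates.
    Returns an (N, 2) array of BEV positions.
    """
    boxes = np.asarray(boxes, dtype=float)
    # Ground contact point of each box: ((x1 + x2) / 2, y2), homogeneous.
    pts = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                    boxes[:, 3],
                    np.ones(len(boxes))], axis=1)
    bev = (H @ pts.T).T
    return bev[:, :2] / bev[:, 2:3]  # normalise out the scale factor
```

The bottom centre is used rather than the box centre because, for an object standing on the road, that pixel is the one actually touching the ground plane the homography models.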
```shell
python3 yoloV5_sim.py
```

Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.
- YOLOv5: https://github.com/ultralytics/yolov5
- OpenCV: https://opencv.org/



