
Code and artifacts for the "Open-Source Assessments of AI Capabilities" BRSL report.

🚢🎯 Open-Source Assessments of AI Capabilities: The Proliferation of AI Analysis Tools, Replicating Competitor Models, and the Zhousidun Dataset

Ritwik Gupta, Leah Walker, Eli Glickman, Raine Koizumi, Sarthak Bhatnagar, Andrew W. Reddie

Paper: https://arxiv.org/abs/2405.12167

Model: Ultralytics YOLOv8

The model folder in this repository contains a Jupyter notebook for training and evaluating a YOLOv8l model on the Zhousidun dataset. The dataset itself is provided as a ZIP file in the repository.

  1. Set up your Python environment with Python >=3.10, the ultralytics package, and all of its dependencies.
  2. Clone this repository.
  3. Run the Jupyter notebook. Note that evaluation requires the Blender steps below to be completed first.

Blender: YOLO Synthetic Data Generation in Blender with an Arleigh Burke-class destroyer

The blender folder in this repository contains two Blender 3.6 files and two Python scripts. Together they create a synthetic dataset of the USS Arleigh Burke modeled by WTigerTw (original model). The dataset is created by running HemisphereCameras.py as a custom Blender add-on to generate cameras in a hemisphere around the target, then rendering each view as an image. The resulting images are converted into label data ready for YOLOv8l training using _GenerateBoundingBoxTxtFiles.py.

Installation & Running scripts

  1. Install Blender version 3.6.
  2. Clone this repository.
  3. Open boat_hemisphere_cameras.blend in Blender.
  4. Run HemisphereCameras.py using the Run Script button in Blender's text editor panel. This panel should already be open.
  5. Select the collection called "Collection" in the object hierarchy. This ensures that the cameras are generated in the right place.
  6. Select the hemisphere in the 3D editor. It should look like a large white dome surrounding the ocean, with the object name "Sphere". This sets the hemisphere as the source for generating cameras.
  7. Open the CamGen panel that the script adds to the 3D editor. If you can't find it, press N in the 3D editor to open the sidebar, then click the tab labeled "CamGen".
  8. With the hemisphere still selected, click "Generate Cameras" in the CamGen panel. You should now see a camera on every vertex of the hemisphere.
  9. Click "Render All Cameras" in the CamGen panel. Blender will appear frozen while it renders each camera in the scene; for reference, this took about 2 minutes on a laptop.
  10. Open your terminal and navigate to the ./render_output/ directory.
  11. Run python3 _GenerateBoundingBoxTxtFiles.py. This assumes you have Python 3 (tested with 3.9.7) with the cv2 and numpy libraries installed. The script generates a txt file with label and location coordinates for each target on the ship.
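The camera placement in steps 5-8 amounts to putting one camera on each vertex of the hemisphere mesh, aimed at the ship at the origin. Independent of Blender's bpy API, the geometry can be sketched in plain Python; the ring-based sampling below is illustrative, since the add-on uses the actual vertices of the "Sphere" mesh.

```python
import math

def hemisphere_points(radius, n_rings=4, n_per_ring=8):
    """Sample candidate camera positions on a hemisphere of the given radius."""
    points = []
    for i in range(1, n_rings + 1):
        elevation = (math.pi / 2) * i / (n_rings + 1)  # angle above the horizon
        z = radius * math.sin(elevation)
        r = radius * math.cos(elevation)               # ring radius at this height
        for j in range(n_per_ring):
            azimuth = 2 * math.pi * j / n_per_ring
            points.append((r * math.cos(azimuth), r * math.sin(azimuth), z))
    points.append((0.0, 0.0, radius))                  # apex (top-down) camera
    return points
```

Each camera is then oriented to look at the origin before its view is rendered, which is what produces the camera_X_Y_Z.png naming described below.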

Bounding Box Generation with Blender bpy and OpenCV

Our approach to generating bounding boxes for the dataset in Blender was to render each camera as two layers:

  1. Image layer: the original rendered image, with a file name encoding the camera coordinates, camera_X_Y_Z.png.
  2. Color label layer: the same view with all lights in the scene turned off and emissive materials turned on, so the image is black except for the target items, which are highlighted in bright, unique colors. The file name looks like TARGETS_camera_X_Y_Z.png.

OpenCV then filters the label layer for each color, generates a bounding box around it, and compiles the boxes into a text file named camera_X_Y_Z.txt. The label data is formatted for the YOLO model, meaning the coordinates are normalized between 0 and 1.

Credits
