- Ensure you have Python version 3.11.6 or higher installed.
- Install the ultralytics package:

```bash
pip install ultralytics -U
```
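(Optional) To confirm the installation, you can run Ultralytics' built-in environment check, which prints the installed ultralytics, Python, and PyTorch versions:

```bash
yolo checks
```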
- Download the repository (click the green "Code" button and then "Download ZIP"), then extract the archive to a convenient location so that everything is in one folder.
- There are many programs and services for annotating images. If you're doing this for the first time, it's recommended to use LabelImg.
- Download LabelImg or LabelImg_Next.
- Extract the program and move the `predefined_classes.txt` file (located in the repository) to the `labelimg/data` folder.
The repository already contains examples of annotated images in the `ai_aimbot_train/datasets/game` folder:
- `images` – contains images for model training.
- `labels` – contains annotations for the corresponding images. These are text files with object class IDs and bounding-box coordinates (an example follows below).
- `val` – used to store images and annotations for the validation dataset. The validation set is needed to check the model's performance during training and prevent overfitting.
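For reference, each `.txt` file in `labels` follows the YOLO format: one line per object, with the class ID followed by the box center x, center y, width, and height, all normalized to the 0-1 range. The values below are purely hypothetical:

```text
0 0.512 0.430 0.120 0.310
0 0.250 0.610 0.085 0.240
```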
- Open LabelImg and click Open Dir, selecting the path to the images for model training (`ai_aimbot_train/datasets/game/images`).
- Click Change Save Dir to choose where the annotations will be saved (`ai_aimbot_train/datasets/game/labels`).
- (Optional) Familiarize yourself with the LabelImg usage guide.
- (Optional) Explore tips for achieving the best training results.
- Annotate between 500 to 2500 images.
- Include empty images that lack players, weapons, fire, and other objects. For example, add lots of images with trees, chairs, grass, human-like objects, and empty game locations.
- The more visually complex a game looks to the AI (for example, Battlefield 2042 is less formalized than CS2), the more data you'll need for model training (at least 5000-10000 images).
- Image resolution can vary from 100x100 to 4K.
- Ready-made datasets can be found here.
- Don't forget to add images and annotation files to the `val` folder. If you have 1000 training images, add about 10 validation images.
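If you'd rather not move the validation files by hand, a minimal sketch like the following (assuming the folder layout above and `.jpg` images) moves roughly 1% of the annotated training images, together with their label files, into `val`:

```python
import random
import shutil
from pathlib import Path

root = Path("ai_aimbot_train/datasets/game")
images = sorted((root / "images").glob("*.jpg"))  # assumes .jpg images

# Roughly 1% of the training set (about 10 images per 1000)
val_count = max(1, len(images) // 100)
(root / "val").mkdir(exist_ok=True)

for img in random.sample(images, val_count):
    label = root / "labels" / (img.stem + ".txt")
    shutil.move(str(img), str(root / "val" / img.name))
    if label.exists():
        shutil.move(str(label), str(root / "val" / label.name))
```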
- The `game.yaml` file specifies the following parameters:

```yaml
path: game  # Your dataset name
train: images  # Folder with training images
val: val  # Folder with validation images
test:  # Folder with test images (can be empty)
```
- To find the Ultralytics settings file, enter the command:

```bash
yolo settings
```
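As a sketch of an alternative (assuming a recent ultralytics version), the same settings can also be read from Python; the `datasets_dir` entry is the folder against which the `path: game` value from `game.yaml` is typically resolved:

```python
from ultralytics import settings

# Directory where Ultralytics looks for datasets; 'path: game' in game.yaml
# is resolved relative to this folder
print(settings["datasets_dir"])

# If the 'game' dataset lives elsewhere, the setting can be updated, e.g.:
# settings.update({"datasets_dir": "path/to/ai_aimbot_train/datasets"})  # example path
```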
- (Optional) Detailed information about training can be found here.
- After annotating the dataset, navigate to the `ai_aimbot_train` folder using the command:

```bash
cd ai_aimbot_train
```
- Choose a pre-trained model. The options are:
- yolo11n or yolo12n – the fastest and least resource-intensive.
- yolo11s or yolo12s – fast, slightly smarter but more resource-demanding.
- yolo11m or yolo12m – optimized for real-time, requires a powerful GPU (e.g., RTX 2060 or better).
- yolo11l, yolo12l, and yolo11x – the most intelligent and resource-intensive; not suitable for most tasks.
- For example, choose yolo11n: `model=yolo11n.pt`
- Select the image size for the model. The lower the resolution, the faster the training, but the fewer objects the model can detect:
- Possible options: 160, 320, 480, 640. Choose 320.
- Determine the number of training epochs. A larger dataset requires more epochs, but don't overdo it, as too many epochs can lead to overfitting. Assume we set 40 epochs.
- (Optional) All training parameters are described in detail here.
- Start training with the command (passing the image size chosen above):

```bash
yolo detect train data=game.yaml model=yolo11n.pt imgsz=320 epochs=40 batch=-1
```
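If you prefer the Python API over the CLI, a roughly equivalent sketch is:

```python
from ultralytics import YOLO

# Load the pre-trained yolo11n weights and train on the custom dataset
model = YOLO("yolo11n.pt")
model.train(data="game.yaml", imgsz=320, epochs=40, batch=-1)
```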
- After successful training, navigate to the model weights folder `ai_aimbot_train/runs/detect/train/weights`. You'll see two files:
  - `best.pt` – the file with the best model weights.
  - `last.pt` – checkpoint of the last epoch. If training was interrupted, you can resume from this file:

```bash
yolo train resume model=ai_aimbot_train/runs/detect/train/weights/last.pt
```
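The Python-API counterpart of resuming, as a sketch, is:

```python
from ultralytics import YOLO

# Resume an interrupted run from the last checkpoint
model = YOLO("ai_aimbot_train/runs/detect/train/weights/last.pt")
model.train(resume=True)
```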
- Create a `test.py` file with the following content:

```python
import cv2
from ultralytics import YOLO

# Load the YOLO model
model = YOLO("best.pt")

# Open the video file
video_path = "path/to/your/video/file.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLO inference on the frame
        results = model(frame)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLO Inference", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
- Run:

```bash
python test.py
```
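If you don't have a suitable video, a quick single-image check works the same way (the image path below is a placeholder):

```python
from ultralytics import YOLO

# Run the trained model on one screenshot and show the annotated result
model = YOLO("best.pt")
results = model("path/to/a/screenshot.png")  # placeholder path
results[0].show()
```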
- Selecting a Pre-trained Sunxds Model
- Choose a model that is most similar to your task in terms of version and image size (640, 480, 320 or 160).
- Preparing the Dataset
- Prepare the dataset with new annotations as specified in this repository. Ensure that the new annotations follow the YOLO format.
- If you want to eliminate false detections, add images with false detections to the dataset and do not annotate anything on them (see the sketch after this list).
- If, for some reason, there are poor or no detections of players in a particular game, add annotated images with these players to the dataset.
- Don't be lazy; add more images to the dataset, and try to review the entire dataset again for errors after annotating it.
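For those false-detection images, leaving them without any boxes is what makes them count as background; Ultralytics also accepts an explicit empty label file. A small sketch (assuming `.jpg` images and the folder layout used above) that creates empty label files for any image left unannotated:

```python
from pathlib import Path

root = Path("ai_aimbot_train/datasets/game")
images_dir = root / "images"
labels_dir = root / "labels"

# Create an empty .txt label for every image without annotations so the
# image is treated as a pure background (no objects) example
for img in images_dir.glob("*.jpg"):  # assumes .jpg images
    label = labels_dir / (img.stem + ".txt")
    if not label.exists():
        label.touch()
```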
- Running the Fine-Tuning
- Execute the following command:

```bash
yolo detect train data=game.yaml model=sunxds_0.7.5.pt epochs=40
```
- Export the model to ONNX format with dynamic shape:

```bash
yolo export model=best.pt format=onnx dynamic=true simplify=true
```
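The same export can be done from Python, roughly:

```python
from ultralytics import YOLO

# Export the fine-tuned weights to ONNX with dynamic input shapes
model = YOLO("best.pt")
model.export(format="onnx", dynamic=True, simplify=True)
```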
- Select the exported `.onnx` model in the AI tab.