Small projects and experiments with the BeagleBone AI-64 platform, written in Go and Python. Includes custom Image Classification and Object Detection training scripts for TensorFlow, as well as scripts for further model compilation for the TI TFLite Delegate.
Based on the BBAI64 11.8 2023-10-07 10GB eMMC TI EDGEAI Xfce Flasher image.
`setup_script.sh` from `/opt/edge_ai_apps/` must be run to install the edgeai-gst-plugins:

```
cd /opt/edge_ai_apps/ && sudo ./setup_script.sh
```
To add support for various peripherals, as well as IMX219 CSI cameras, the `fdtoverlays` property of `/boot/firmware/extlinux/extlinux.conf` should be extended with `/overlays/THE_OVERLAY_NAME.dtbo` entries. For example:

```
fdtoverlays /overlays/BONE-PWM0.dtbo /overlays/BONE-PWM1.dtbo /overlays/BONE-I2C1.dtbo /overlays/BBAI64-CSI0-imx219.dtbo /overlays/BBAI64-CSI1-imx219.dtbo
```
Check out the arm64 overlays list: https://elinux.org/Beagleboard:BeagleBone_cape_interface_spec
See also:
- edgeai_dataflows
- gstreamer plugins
```
wget https://go.dev/dl/go1.21.6.linux-arm64.tar.gz
sudo rm -rf /usr/local/go
sudo rm -rf /usr/bin/go
sudo tar -C /usr/local -xzf go1.21.6.linux-arm64.tar.gz
```
Update `~/.bashrc` with:

```
export PATH=$PATH:/usr/local/go/bin
```

Apply the changes and check the Go version:

```
source ~/.bashrc
go version
```
Taken from TI's PROCESSOR-SDK-J721E. Required by the `tiovxisp` GStreamer plugin to work with an IMX219 CSI camera.

```
wget https://github.com/Hypnotriod/bbai64/raw/master/imaging.zip
sudo unzip imaging.zip -d /opt/
```
```
wget https://github.com/Hypnotriod/bbai64/raw/master/libtensorflowlite_c-2.9.0-linux-arm64.tar.gz
sudo tar -C /usr/local -xvf libtensorflowlite_c-2.9.0-linux-arm64.tar.gz
sudo ldconfig
```
```
wget https://github.com/kesuskim/libtensorflow-2.4.1-linux-arm64/raw/master/libtensorflow.tar.gz
sudo tar -C /usr/local -xvf libtensorflow.tar.gz
sudo ldconfig
```
```
sudo apt-get install libyaml-cpp-dev
sudo apt-get install cmake
conda create --name tensorflow_tidl
conda activate tensorflow_tidl
conda install python=3.7
git clone --depth 1 --branch 08_02_00_05 https://github.com/TexasInstruments/edgeai-tidl-tools
cd edgeai-tidl-tools
export SOC=am68pa
./setup.sh
```
- TI Deep Learning Library User Guide
- User options for TIDL Acceleration
```
make compile-image-classification TIDL_TOOLS_PATH=/path_to_tidl_tools/edgeai-tidl-tools/tidl_tools/
make compile-object-detection TIDL_TOOLS_PATH=/path_to_tidl_tools/edgeai-tidl-tools/tidl_tools/
```
```
make build-edgeai-tidl-tools-docker-container
make compile-object-detection-docker
make compile-image-classification-docker
```
TensorFlow / cuDNN / CUDA configuration list:
- cuda-11.2.0 for tensorflow 2.10.1
- cuda-10.0 for tensorflow 1.15
- cudnn-archive
- Prepare the conda environment:

```
cd python/image_classification
conda create --name tensorflow_ic
conda activate tensorflow_ic
conda install python=3.7
pip install -r requirements.txt
```

- Add your `train` (to train and validate) and `test` (to test the final result) images to the `train_data` and `test_data` folders respectively. Each image related to a specific class should be in its own subfolder named by the class name.
- `config.json` - the training configuration file.
  - Fill the `classes` field with your class names in the desired order.
  - You may tweak `epochs`, `validation_split`, `batch_size`, etc.
- `train.py` - the script to train the model.
  - At the end of a successful training it should generate the `labels/labels.txt` and `saved_model_tflite/saved_model.tflite` files.
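The expected dataset layout can be sketched as follows. The helper below is illustrative, not part of the repo scripts, and assumes alphabetical ordering of class subfolder names (the convention used by common TensorFlow dataset loaders):

```python
# Illustrative sketch (not from the repo): train_data/ is expected to contain
# one subfolder per class, named by the class name. Sorting subfolder names
# alphabetically is an assumption about the resulting label order.
import tempfile
from pathlib import Path

def list_classes(data_dir):
    """Return class names derived from the immediate subfolders of data_dir."""
    return sorted(p.name for p in Path(data_dir).iterdir() if p.is_dir())

# Build a tiny example layout: train_data/cat/, train_data/dog/
root = Path(tempfile.mkdtemp())
for name in ("dog", "cat"):
    (root / "train_data" / name).mkdir(parents=True)

print(list_classes(root / "train_data"))  # ['cat', 'dog']
```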
- Prepare the conda environment:

```
cd python/object_detection
conda create --name tensorflow_od
conda activate tensorflow_od
conda install python=3.7
pip install -r requirements_tf1.txt
# pip install -r requirements_tf2.txt
git clone --depth 1 https://github.com/tensorflow/models.git
# git clone https://github.com/tensorflow/models.git && git reset --hard a0d092533701cbbf4cde97337b1e4aac51943c4d
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf1/setup.py .
# cp object_detection/packages/tf2/setup.py .
python3 -m pip install .
pip install protobuf==3.20.3
```

- Prepare your image annotations with the labelImg graphical image annotation tool. It should generate an annotation `xml` file for each image file.
- Add your `train` (for training) and `test` (for evaluation) images and xml files to the `train_data` and `test_data` folders respectively.
- Download and extract the content of the model of your choice from the link below and put it into the `python/object_detection/base_model` folder, for example: ssd_mobilenet_v2_coco
- In `python/object_detection/base_model/pipeline.config`, update the `input_path: "PATH_TO_BE_CONFIGURED"` fields of `train_input_reader` and `eval_input_reader` with `"PATH_TO_BE_CONFIGURED/train"` and `"PATH_TO_BE_CONFIGURED/eval"` respectively.
- Also delete the line `batch_norm_trainable: true`, if it exists, from `python/object_detection/base_model/pipeline.config`.
- `config.json` - the training configuration file.
  - Check the `base_config_path` and `fine_tune_checkpoint` paths to your model.
  - Update `input_shapes` with the input shape of your model. Check the `fixed_shape_resizer` field in `pipeline.config`.
  - Fill the `classes` field with your class names in the desired order.
  - You may tweak `num_steps`, `batch_size`, etc.
- `train.py` - the script to train the model.
  - `--skip` - to skip the `prepare`, `train`, or `export` phases.
  - At the end of a successful training it should generate the `labels/labels.txt` and `saved_model_tflite/saved_model.tflite` files.
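For reference, the annotation files that labelImg writes follow the Pascal VOC XML format and can be read with nothing but the standard library. The sketch below uses a made-up sample annotation; the tag names are the standard Pascal VOC ones, and the helper itself is not part of the repo scripts:

```python
# Minimal sketch (not from the repo) of reading one labelImg Pascal VOC
# annotation file using only the standard library.
import xml.etree.ElementTree as ET

# Made-up sample annotation in the format labelImg produces.
SAMPLE_XML = """<annotation>
  <filename>img_0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>car</name>
    <bndbox><xmin>48</xmin><ymin>60</ymin><xmax>300</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>"""

def parse_annotation(xml_text):
    """Return (filename, [{'class': ..., 'box': (xmin, ymin, xmax, ymax)}, ...])."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append({
            "class": obj.findtext("name"),
            "box": tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax")),
        })
    return root.findtext("filename"), boxes

filename, boxes = parse_annotation(SAMPLE_XML)
print(filename, boxes)  # img_0001.jpg [{'class': 'car', 'box': (48, 60, 300, 220)}]
```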
- 2-channel RC car platform with a steering servo and ESC (Electronic Speed Controller)
- 3.3V to 5-6V PWM signal conversion circuit
- Arducam IMX219 sensor based camera module with a 15-pin to 22-pin FPC (Flexible Printed Circuit) cable
- Waveshare UPS Module 3S for BBAI64 powering and power monitoring
- Gamepad to use as the car controller on the web page
```
make build-wifi-vehicle
make run-wifi-vehicle
```
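For context on the PWM signals driving the steering servo and ESC: a standard RC servo/ESC expects a 1000-2000 µs pulse repeated at 50 Hz (20 ms period). A hypothetical helper, not taken from the repo, mapping a normalized control value to a pulse width and duty cycle could look like:

```python
# Hypothetical helpers (not from the repo): map a normalized control value in
# [-1.0, 1.0] to a standard 1000-2000 us RC servo/ESC pulse, and convert that
# pulse to a duty cycle for a 50 Hz (20000 us period) PWM signal.
def control_to_pulse_us(value, min_us=1000, max_us=2000):
    value = max(-1.0, min(1.0, value))  # clamp to the valid control range
    return min_us + (value + 1.0) / 2.0 * (max_us - min_us)

def pulse_to_duty(pulse_us, period_us=20000):
    return pulse_us / period_us

print(control_to_pulse_us(0.0))  # 1500.0 -> neutral position
print(pulse_to_duty(1500.0))     # 0.075 -> 7.5% duty at 50 Hz
```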
BeagleBone AI-64 MJPEG stream of Waveshare IMX219-83 Stereo Camera with GStreamer example
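An MJPEG stream of this kind is typically served as `multipart/x-mixed-replace`: each JPEG frame is a multipart body separated by a boundary line, with per-part headers. The stdlib sketch below splits such a byte stream back into JPEG frames; the boundary name and headers are illustrative, not taken from the repo:

```python
# Illustrative parser for a multipart/x-mixed-replace MJPEG byte stream.
# The "--frame" boundary is an assumption, not the repo's actual boundary.
BOUNDARY = b"--frame"

def split_mjpeg(stream):
    """Split a multipart MJPEG byte stream into individual JPEG frames."""
    frames = []
    for part in stream.split(BOUNDARY):
        # Each part looks like: \r\nContent-Type: image/jpeg\r\n\r\n<jpeg>\r\n
        header_end = part.find(b"\r\n\r\n")
        if header_end == -1:
            continue
        body = part[header_end + 4:].rstrip(b"\r\n")
        if body.startswith(b"\xff\xd8"):  # JPEG start-of-image marker
            frames.append(body)
    return frames

# Synthetic two-frame stream with fake JPEG payloads.
fake = (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n\xff\xd8AAA\xff\xd9\r\n"
        b"--frame\r\nContent-Type: image/jpeg\r\n\r\n\xff\xd8BBB\xff\xd9\r\n")
print(len(split_mjpeg(fake)))  # 2
```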