Sight-Guided Search and Depth Detection
Based on the original SensEye design by Russ Bielawaski: repo
See the diagram below for an overview of the hardware and software stack for SensEye-2:
## OpenCV Image Processing
There are currently two Python scripts that run OpenCV functions on images captured from the system. They are located in software/client/opencv_detection and can also measure their own run-time. Examples are shown below:
Used to find the location of the pupil in the image from the inward-facing camera:

```
python blob_detect.py
Runtime of Blob Detection: 0.161993980408 seconds
```
Used to calculate the depth of the scene from the disparity between the two outward-facing cameras:

```
python disparity.py
Runtime of Depth Detection: 0.360285043716 seconds
```
To complete this processing in real time on the host computer, the run-time would have to be on the order of 0.04 seconds, as the imager captures images at a rate of approximately 25 frames per second.
## A note on operating systems
This installation uses Windows for Libero and Linux for uCLinux compilation: specifically, 64-bit Windows 8 and 64-bit Ubuntu 14.04 LTS. Other configurations may also work.
Download Libero from the following link.
Register for the free Gold 1-year disk-ID-locked license and follow the instructions in the email
Install the service pack update (if on Windows 8, you may need to disable driver signature enforcement in order to install the FlashPro driver)
Note: Libero can be installed on Linux, but has been found to be problematic
- Download and install OpenCV, this link worked well.
- Ensure OpenCV installed in
- Download and install a TFTP server, this link worked well.
senseye.prjx (you will get "Unable to find..." errors)
Double click TOPLEVEL in the Design Hierarchy area to open it in the main window
Double click MSS_CORE3_MSS_0 to open it in a new tab
- Double click the ENVM block
- Right click the first client listed and select Modify Client.
- Change the location of the memory file to
- Click the Generate Component button in the main window (yellow cylinder with gear).
Go to TOPLEVEL tab
Click the Generate Programming Data button in the Design Flow area (green arrow)
The project should now build successfully.
Note: Ensure all of the MSS components are updated by clicking the Catalog tab and then the
Download them now! button (Libero should show the message
"New cores are available"). Also ensure the reset line into the imager is inverted (in TOPLEVEL).
## uCLinux Build Environment
Open the Linux Cortex M User Manual available from Emcraft's SmartFusion webpage
Follow the directions in Section 4.1
The linux-cortexm-1.12.0/ folder should be extracted to the same location as the git repo.
(To get a newer version of the cross-compiler, click here, but you should probably stick with the one provided by Emcraft.)
.bashrc will need to be updated to run the following:

```
cd <linux install directory>
. ACTIVATE.sh
cd -
```
Note: the cross-compiler tools work on 64-bit Fedora, but not from a shared folder. You may need to install ia32-libs or an equivalent package if you are on a 64-bit system.
Test that the server is working by running:

```
touch <tftpboot directory>/foo
tftp 127.0.0.1
get foo
quit
```
Note: sometimes it is necessary to run this with sudo.
The get request should complete immediately without issue.
Open a serial connection at 115200 baud. An example is below (assuming the device is /dev/ttyUSB0):

```
screen /dev/ttyUSB0 115200
```
Note: Sometimes it is necessary to run this with sudo. If there is only a blank screen, press Enter a few times; if that doesn't work, press Ctrl-C to kill the current program and reboot.
Find the device IP address, from U-Boot:

```
run flashboot
udhcpc
ifconfig
reboot
```
Modify the environment variables on the SmartFusion board, from U-Boot:

```
setenv netmask 255.255.255.0
setenv gatewayip <device ip address top 24 bits>.1
setenv ipaddr <device ip address>
setenv serverip <server ip address>
setenv image senseye_proj.uImage
saveenv
run netboot
```
Follow instructions online to install an NFS server and enable it
After the device has booted, run:

```
mount -t nfs -o proto=tcp,nolock,port=2049 <server ip address>:/<server nfs folder> /mnt
```
A script to do this has been included as
Navigate to the SensEye-2/software/uclinux/senseye_proj/ directory and edit

```
cp senseye_proj.uImage <your tftpboot directory>
```
This should redirect all compile messages to the compile.log file. Run the following commands on the device:
The Stonyman controller software should now be loaded and ready to begin reading in images.
## Setting up the client
Ensure OpenCV is installed in
Change the address of the SmartFusion board in SensEye-2/software/client/senseye_client/senseye_client.c to the current IP address (found by running printenv on the SmartFusion board):

```
#define INSIGHT_SERV_ADDR ("220.127.116.11")
```
Navigate to the SensEye-2/software/client/senseye_client directory and run. This should redirect all compile messages to the file
While senseye_serv is running on the SmartFusion, it waits for a connection from the client, which is created by running senseye_client. Images should then appear on the screen if a Stonyman is connected correctly.
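The client's connection step can be sketched as follows. This is a hypothetical illustration of the handshake described above, not the actual senseye_client implementation: the port number and frame size are assumptions, so check senseye_client.c for the real values and framing.

```python
# Hypothetical sketch of the client startup: connect to senseye_serv over
# TCP and read raw frame bytes. Port and frame size are assumed values.
import socket

INSIGHT_SERV_ADDR = "220.127.116.11"   # from senseye_client.c
INSIGHT_SERV_PORT = 5000               # assumed, not from the source
FRAME_BYTES = 112 * 112                # assumed Stonyman frame size


def read_frame(sock):
    """Read exactly one frame's worth of bytes from the socket.

    TCP recv() may return fewer bytes than requested, so loop until a
    full frame has been accumulated.
    """
    buf = b""
    while len(buf) < FRAME_BYTES:
        chunk = sock.recv(FRAME_BYTES - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf


# Typical use (requires a running senseye_serv on the board):
# sock = socket.create_connection((INSIGHT_SERV_ADDR, INSIGHT_SERV_PORT))
# frame = read_frame(sock)
```

The key detail is the read loop: a single `recv()` call is not guaranteed to return a whole frame, so the client must accumulate bytes until the expected frame size is reached before handing the buffer to the OpenCV processing scripts.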
The hardware can be somewhat tricky to configure correctly. First, build the circuit specified in
hardware/schematics/stonyman_breakout.pdf. Be sure to tie a 30 k&#8486; resistor from the analog output (AN) of the Stonyman to ground, and place a capacitor across the power supply rails.
Calibration: The parameter values in
software/uclinux/stonyman/stonyman_2.h depend on whether the system is run at 3.3 V or 5 V. The system is currently configured for 3.3 V; this can be changed by modifying the
define statements based on empirical measurements.