Cyber Robot Brain

This Android application was developed during a university class; the aim was to provide a "brain" to the Cyber Robot, a cheap sensorless toy robot sold by Clementoni. More information about the robot can be found on the official website.

More specifically, the aim was to reverse-engineer the communication protocol of the robot via the Bluetooth HCI snoop log and then build an app that guides it to reach a target object framed by the phone’s camera. For more information about the communication protocol, visit the Wiki Page.
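The HCI snoop log produced by Android uses the documented btsnoop file format: a 16-byte header followed by one record per captured packet. As a minimal sketch of the first reverse-engineering step, the parser below walks those records; the packet bytes in the demo are placeholders, not the robot's actual commands.

```python
import struct

def parse_btsnoop(data):
    """Parse a Bluetooth HCI snoop log (btsnoop format).

    Layout: a 16-byte header (8-byte magic "btsnoop\\0", version,
    datalink type), then per-packet records of a 24-byte header
    (original length, included length, flags, drops, timestamp)
    followed by the packet bytes. Returns (flags, payload) tuples.
    """
    magic, version, datalink = struct.unpack(">8sII", data[:16])
    assert magic == b"btsnoop\x00", "not a btsnoop file"
    records = []
    offset = 16
    while offset + 24 <= len(data):
        orig_len, incl_len, flags, drops, ts = struct.unpack(
            ">IIIIq", data[offset:offset + 24])
        records.append((flags, data[offset + 24:offset + 24 + incl_len]))
        offset += 24 + incl_len
    return records

# Build a tiny synthetic log with one packet to demonstrate.
header = struct.pack(">8sII", b"btsnoop\x00", 1, 1002)
pkt = bytes([0x02, 0x01, 0x02])  # placeholder bytes, not a real robot command
record = struct.pack(">IIIIq", len(pkt), len(pkt), 0, 0, 0) + pkt
records = parse_btsnoop(header + record)
print(records)  # [(0, b'\x02\x01\x02')]
```

Filtering the extracted payloads for ATT write commands is what reveals which byte sequences the official app sends for each movement.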

How it works:

To guide the robot toward the target we used three different markers: one for the target and the other two to recognize the left and the right side of the robot. The recognition is performed with the OpenCV library.
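Once the three markers are located in the camera frame, a steering decision can be derived from their centroids. This is a sketch under assumed conventions (image coordinates, heading perpendicular to the left-right segment, a made-up angle threshold), not the app's actual control logic.

```python
import math

def steering_command(left, right, target, threshold_deg=10.0):
    """Decide a movement command from the three marker centroids.

    The robot's position is taken as the midpoint of the left/right
    markers, and its heading as the left->right segment rotated by
    -90 degrees. Returns "forward", "left", or "right" depending on
    the signed angle between the heading and the target direction.
    """
    cx = (left[0] + right[0]) / 2.0
    cy = (left[1] + right[1]) / 2.0
    hx = right[1] - left[1]          # heading vector: left->right
    hy = -(right[0] - left[0])       # rotated by -90 degrees
    tx, ty = target[0] - cx, target[1] - cy
    # Signed angle via cross and dot products of heading and target vectors.
    angle = math.degrees(math.atan2(hx * ty - hy * tx, hx * tx + hy * ty))
    if abs(angle) < threshold_deg:
        return "forward"
    return "left" if angle < 0 else "right"

# Robot at the origin facing a target straight ahead (hypothetical pixels).
print(steering_command((0, 0), (2, 0), (1, -5)))  # forward
```

Repeating this decision on every frame, and stopping when the robot marker midpoint gets close enough to the target marker, is enough for a simple closed-loop controller.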

You can check out the behavior of the application by watching a video on YouTube.

Test Environment:

The recognition works in the following test environment.

We used the following arrangement of markers: Target = Red, Right = Green, Left = Blue.
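With that color assignment, each detected blob maps to a role by its dominant color channel. The toy classifier below illustrates the mapping on single RGB pixels; the real app thresholds whole images with OpenCV, and the `margin` value here is an arbitrary assumption.

```python
def classify_marker(pixel, margin=40):
    """Map an (R, G, B) pixel to a marker role.

    Returns "target" (red), "right" (green), or "left" (blue) when one
    channel dominates both others by `margin`; None otherwise.
    """
    r, g, b = pixel
    if r - max(g, b) > margin:
        return "target"   # red marker
    if g - max(r, b) > margin:
        return "right"    # green marker
    if b - max(r, g) > margin:
        return "left"     # blue marker
    return None

print(classify_marker((210, 40, 50)))  # target
```

Working in RGB like this is fragile under changing light, which is exactly why the app needs the uniform test field and the calibration phase described below.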

Test conditions: the light on the markers must not change too much across the test field. Moreover, since the back of the robot is green and can interfere under some lighting conditions, we covered it with a piece of white paper.

The field where we tested the app also has a uniform, neutral color (we used a large piece of paper); the paper removes some noise due to light reflection.

Calibration: to recognize the markers reliably, a calibration phase is necessary. You must take a photo at 15 cm from the target marker, as explained in the calibration message inside the app. We placed all three markers close together and took the picture at 15 cm. The more precisely this picture is taken, the better the application should work.
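One common way to turn such a calibration photo into detection thresholds is to sample pixels from each marker and derive per-channel acceptance ranges. This sketch assumes that approach; the function name, the fixed tolerance, and the sample values are all illustrative, not taken from the app.

```python
def calibrate(samples, tolerance=30):
    """Derive per-channel (low, high) bounds from calibration pixels.

    `samples` is a list of (R, G, B) pixels sampled from one marker in
    the calibration photo. A later detection pass would keep only the
    pixels falling inside all three channel ranges.
    """
    n = len(samples)
    ranges = []
    for c in range(3):
        mean = sum(p[c] for p in samples) / n
        ranges.append((max(0.0, mean - tolerance), min(255.0, mean + tolerance)))
    return ranges

# Hypothetical pixels sampled from the red target marker.
ranges = calibrate([(200, 30, 40), (210, 40, 50), (190, 20, 30)])
print(ranges)
```

Taking the photo at the prescribed 15 cm keeps the markers at a known apparent size, so both the color ranges and the expected blob size stay consistent between calibration and tracking.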

Last advice: if the app doesn’t recognize some of the markers, or the robot seems to move in a completely wrong direction, we suggest changing the illumination and calibrating the colors again.

License

   Copyright 2017 Biasin Mattia, Dominutti Giulio, Gomiero Marco

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.