SBMLT_Robotics

SBMLT working on robotics

Purpose

This is the git repository for the Stony Brook Machine Learning Team's special group project. I will make this repository private if things really get going and confidentiality is needed. (As far as I know, they are participating in IGVC, which is an actual competition with other schools.)

IGVC

The objective of the competition is to navigate the course while minimizing the distance traveled: hitting every waypoint, staying inside the lines, avoiding obstacles, and returning to the start point.

About the robot

The robot has many sensors: gyro, GPS, accelerometer, and IMU. Most importantly, it has two forward-facing cameras mounted 12 cm apart. The team uses this stereo pair with computer-vision algorithms to create a bird's-eye view of the ground plane.
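
As a concrete starting point, here is a minimal sketch of how a bird's-eye (top-down) view could be produced from a single camera frame with OpenCV's perspective warp. The source/destination points and file names are hypothetical placeholders; the real values would come from calibrating the stereo rig.

```python
# Minimal sketch of an inverse perspective mapping (bird's-eye view) for one
# camera frame. The ground-plane points below are hypothetical placeholders;
# the robot would derive them from camera calibration.
import cv2
import numpy as np

def birds_eye_view(frame):
    h, w = frame.shape[:2]
    # Four points on the ground plane as seen in the camera image (a trapezoid).
    src = np.float32([[w * 0.1, h], [w * 0.9, h], [w * 0.6, h * 0.6], [w * 0.4, h * 0.6]])
    # Where those points should land in the top-down view (a rectangle).
    dst = np.float32([[w * 0.25, h], [w * 0.75, h], [w * 0.75, 0], [w * 0.25, 0]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (w, h))

if __name__ == "__main__":
    img = cv2.imread("frame.png")  # hypothetical sample frame
    cv2.imwrite("birds_eye.png", birds_eye_view(img))
```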

Rules

The competition rules limit the amount of power the robot can use. However, he said we can assume unlimited power for now and later factor the limit back in and come up with a solution.

Reading

We will share readings related to machine learning techniques that can help the robotics team.

Goals

The things we should start doing:

  • Edge detection in color images to find the white line "lanes" (a minimal sketch follows this list)
  • Sliding windows to label which pixels belong to obstacles and which belong to the background
  • Maybe find some way to make the images robust against rotation
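
For the first goal, here is a minimal sketch of white-lane edge detection with OpenCV. The HSV thresholds and Canny parameters are hypothetical starting values, not tuned numbers, and the input file name is a placeholder.

```python
# Minimal sketch: mask roughly "white" pixels, then run Canny edge detection
# to pull out the lane boundaries. Threshold values are untuned assumptions.
import cv2
import numpy as np

def detect_lane_edges(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Keep only bright, low-saturation pixels (roughly "white").
    white_mask = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
    blurred = cv2.GaussianBlur(white_mask, (5, 5), 0)
    # Canny edge detection on the masked image.
    return cv2.Canny(blurred, 50, 150)

if __name__ == "__main__":
    img = cv2.imread("field_frame.png")  # hypothetical sample frame
    cv2.imwrite("lane_edges.png", detect_lane_edges(img))
```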

Convolutional Neural Network

AlexNet

Lectures

Convolutional Neural Network

Papers

Okay, some of these papers are behind a paywall; as long as you are on a WolfieNet internet connection, you should be able to access them without a problem. They will not all be relevant. In particular, SLAM has more or less already been implemented, and image patch matching is part of SLAM, so those are not a priority to study. Start with contour and texture segmentation if you want to try line detection, or efficient sliding windows if you want to try object recognition, or find better methods than the papers posted. Post more papers as you deem relevant.
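
For the sliding-windows idea, the sketch below shows the basic scan-and-score loop, assuming a grayscale image as a NumPy array. The `is_obstacle` scorer is a hypothetical placeholder; a trained classifier (e.g. a CNN) would go in its place.

```python
# Minimal sketch of the sliding-window approach: scan a fixed-size window
# across the image and score each patch with a classifier.
import numpy as np

def sliding_windows(image, win=64, stride=32):
    """Yield (row, col, patch) for every window position."""
    h, w = image.shape[:2]
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            yield r, c, image[r:r + win, c:c + win]

def is_obstacle(patch):
    # Placeholder score: dark patches count as "obstacles". Replace with a real model.
    return patch.mean() < 80

if __name__ == "__main__":
    gray = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake frame
    hits = [(r, c) for r, c, p in sliding_windows(gray) if is_obstacle(p)]
    print(f"{len(hits)} candidate obstacle windows")
```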

Others

Data to expect

As far as the data collection goes, I have yet to plan exactly when it will happen, but I have an outline of what we plan to do. Since the software team is currently waiting for the mechanical build to complete, we will assemble a mock-up of the robot (a scooter with devices attached) and take it up to the soccer field with a few obstacles. We will then perform a few runs through pre-planned routes and collect the sensor data (GPS, gyro, camera) necessary to continue with software development and testing. We will also be sure to record ground-truth readings along with our sensor readings to give us an idea of the variance in our measurement noise. I will keep you posted as soon as I figure out when we can make it happen.

This is the reply from Anthony Musco (team leader of the robotics software team).

He says he is mostly tied up with administrative work, so we have a great chance to specialize in analyzing data and giving feedback.
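
Once we have paired sensor and ground-truth logs, estimating the measurement-noise bias and variance he mentions could look like the sketch below. The array shapes and the synthetic readings are assumptions; the real logs will come from the mock-up runs.

```python
# Minimal sketch of estimating measurement-noise statistics from paired
# sensor and ground-truth readings. The data here is synthetic.
import numpy as np

def noise_stats(sensor, truth):
    """Return per-axis bias and variance of the sensor error."""
    error = np.asarray(sensor) - np.asarray(truth)
    return error.mean(axis=0), error.var(axis=0, ddof=1)

if __name__ == "__main__":
    # Fake GPS-like readings: 100 samples of (x, y) with known ground truth.
    truth = np.zeros((100, 2))
    sensor = truth + np.random.normal(0.0, 1.5, size=truth.shape)
    bias, var = noise_stats(sensor, truth)
    print("bias:", bias, "variance:", var)
```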

Sample Dataset

There are some datasets from similar settings we can begin to play with, especially to see how line detection and sliding windows work, since those should be fairly robust across datasets.

GPS Data

Gyro Data

Camera Data (images & videos)

Some possible keywords to start with

  • Noise reduction
  • Image processing
  • Edge Detection
  • Object Localization
