Tartarus

Simulated Environment with Unity for Robust Object Detection

Mission Statement 👨‍🎓

This research project was my Master's Thesis @ Budapest University of Technology and Economics. 👨🏻‍🎓 The project was awarded a scholarship by the Artificial Intelligence National Laboratory (MILAB) of Hungary.

A major focus of deep learning research is the development of self-supervised learning methods, where the agent completes different detection tasks by understanding its environment rather than by relying on labeled training examples. Typical uses of self-supervised learning are reconstructing obscured details in images or estimating the impact of actions taken by robots. The main goal of this thesis is to create an algorithm that can recognize different objects and their relevant visual properties in a simulated environment and estimate the effect of different actions on them. In this thesis, I not only present such a robust object detection algorithm, but also create a simulated industrial environment capable of generating the data needed for training quickly and easily.

Setting 👨🏻‍💻

The environment consists of two main parts:

RODSIE algorithm 🤖📷

The RODSIE repository contains the object detection algorithm, which combines a YOLOv5 object detector with a U-Net-based segmentation layer.

The object detection algorithm can be found here: Robust Object Detection in Simulated Industrial Environment
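
As a quick orientation, the minimal sketch below shows how a pretrained YOLOv5 detector can be loaded through the public ultralytics/yolov5 torch.hub entry point and run on a screenshot exported from the simulation. It is only an illustration under assumed file names; the actual RODSIE pipeline, including the U-Net-based segmentation layer, lives in the repository linked above.

```python
# Minimal sketch (not the RODSIE code): load a pretrained YOLOv5 model via
# torch.hub and run it on one simulator screenshot. The file name is a
# hypothetical example; the real pipeline adds a U-Net-based segmentation layer.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Inference on a single PNG exported by the Tartarus simulation
results = model("tartarus_screenshot_0001.png")  # hypothetical file name

# Detections as a DataFrame: xmin, ymin, xmax, ymax, confidence, class, name
print(results.pandas().xyxy[0])
```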

Tartarus 🌍

This repository contains a simulated industrial environment that can be used to generate images and metadata for the deep learning algorithm. The simulation is designed to be modifiable and capable of generating randomized scenarios, so that a diverse training dataset can be created. The simulated environment is built with the Unity engine.

The Design 🛠

The simulation contains different scenarios; therefore, the simulated industrial building has three different parts:

  1. A conveyor belt with different small objects on it like barrels or tools.
  2. A workshop for larger items, such as cars and industrial platforms.
  3. And a floor that functions as an office.

Different robotic arms stand at the two sides of the conveyor belt and in the workshop. In the initial setting, there are 67 different objects. First, the 3D models were downloaded from the Asset Store or other third-party providers. Then, they were imported into the project and saved as Prefabs with the required features.

[Image: Design]

Images of the setting 🖼

[Images: Conveyor, Otherside, Otherside2, Office]

The output 💡

Screenshots of the simulation are saved periodically (every few frames) in PNG format, with 2D bounding boxes marked around the objects. In addition, metadata is saved containing the ground truth for the deep learning algorithm: bounding box position, class label, and 3D information about each object's position in the simulation space.

[Image: Output]
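
The exact metadata layout is documented in the thesis; as an assumption for illustration, the sketch below treats each screenshot as having a same-named YOLO-style .txt label file (the format described in the Metadata section below) and draws the ground-truth boxes back onto the image. All paths and file names are hypothetical.

```python
# Illustrative sketch, assuming a same-named YOLO-style .txt label file per
# screenshot (see the Metadata section). Paths and names are hypothetical.
from PIL import Image, ImageDraw

image_path = "captures/frame_0001.png"
label_path = "captures/frame_0001.txt"

img = Image.open(image_path)
draw = ImageDraw.Draw(img)
W, H = img.size

with open(label_path) as f:
    for line in f:
        cls, xc, yc, w, h = line.split()
        # Normalised centre/size -> pixel values
        xc, yc, w, h = float(xc) * W, float(yc) * H, float(w) * W, float(h) * H
        # Draw the box from its pixel corner coordinates
        draw.rectangle([xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2],
                       outline="red", width=2)

img.save("captures/frame_0001_annotated.png")
```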

Metadata 📑 🧾 📊

YOLO traditionally expects the annotations to be in a .txt file named the same as the image it belongs to, with one line per object in the following format (a small conversion sketch follows the list below):

<object-class> <x_center> <y_center> <width> <height>

  • <object-class> is the index of the object class (from 0 to #classes-1)
  • <x_center> and <y_center> are the center coordinates of the bounding box, relative to the width and height of the image (0.0 to 1.0]
  • <width> and <height> are the width and height of the bounding box, relative to the width and height of the image
  • For more details, please check my Thesis ❤️ 🧡 💛 💚 💙 💜 🖤 🤍 🤎

    And drop a star if you liked it ⭐️⭐️⭐️⭐️⭐️⭐️⭐️
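
To make the normalisation concrete, here is a small sketch that converts a pixel-space bounding box into one annotation line in the format above. The class index, box coordinates, and image size are made-up example values.

```python
# Sketch: convert a pixel-space box into one YOLO annotation line.
# All numbers below are made-up example values.
def to_yolo_line(class_idx, x_min, y_min, x_max, y_max, img_w, img_h):
    """Return '<object-class> <x_center> <y_center> <width> <height>'."""
    x_center = (x_min + x_max) / 2 / img_w   # centre x, relative to image width
    y_center = (y_min + y_max) / 2 / img_h   # centre y, relative to image height
    width = (x_max - x_min) / img_w          # box width, relative to image width
    height = (y_max - y_min) / img_h         # box height, relative to image height
    return f"{class_idx} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: class 3 occupying pixels (128, 180)-(256, 360) in a 1280x720 frame
print(to_yolo_line(3, 128, 180, 256, 360, 1280, 720))
# -> 3 0.150000 0.375000 0.100000 0.250000
```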
