
UERoboCup

Overview:

Corresponding Publication:

This repository provides code for stochastic generation of RoboCup SPL scenes in Unreal Engine 4, as used in our paper published at the RoboCup Symposium 2017:

https://www.robocup2017.org/file/symposium/RoboCup_Symposium_2017_paper_15.pdf

If you use this repository, please cite the paper:

T. Hess, M. Mundt, T. Weis and V. Ramesh: "Large-scale Stochastic Scene Generation and Semantic Annotation for Deep Convolutional Neural Network Training in the RoboCup SPL", In: RoboCup 2017: Robot World Cup XXI, LNAI. Springer (2017)

Table of Contents:

Install: Preparing Unreal Engine 4
Getting Started: A Quick Guide
Variables: Overview
DataSet: Extraction
TestSet

Engine Basis Installation:

Go to the https://www.unrealengine.com/ website and create your EpicGames account, download the "EpicGamesLauncher" (UELauncher) and install it.
After the installation is complete, log into the UELauncher and use it to download Unreal Engine 4 Version 4.14.3. Newer versions might also work, but have not been tested.

Substance Plugin:

To make use of the PBR (physically based rendering) capabilities of UE4, we incorporated materials provided by Allegorithmic (https://share.allegorithmic.com/). To use these materials you will need to add the Substance Plugin to Unreal Engine, which is available for free in the UELauncher's Marketplace.
After downloading the plugin from the Marketplace, do not forget to add it to the Engine installation you are using:

C++ Support:

If you want to add or edit C++ modules of the Training Set Generator, you will additionally need to install Visual Studio on your machine. The UELauncher provides a built-in function for coupling the Visual Studio compiler directly to your Engine installations:
To do so, use the UELauncher to start the engine and go to the New Project tab. Select C++ Project.
At the bottom of the UELauncher a message should appear pointing out the missing compiler, together with an installation link to the required Visual Studio version.

After completing all the above steps, the installation of Unreal Engine 4 is complete and you can go ahead and load TrainingSetGenerator.uproject. The first time, Unreal Engine 4 may need some time to set up and recompile components for your system.


GETTING STARTED: A Quick Guide

This section gives an overview of how to use and manipulate the TrainingSetGenerator project, but should not be seen as a tutorial on how to operate Unreal Engine 4 (/UEEditor). For in-depth information please see the Unreal Engine documentation or one of many third-party tutorials (e.g. on YouTube).

Git Clone:

git clone https://github.com/TimmHess/UERoboCup.git

Generating The First Synthetic Set:

Open the TrainingSetGenerator.uproject file.
You will see the UEEditor similar to this:

Move to the Play button and select the New Editor Window option.
In the current version this play option is needed to ensure correct aspect ratios on the rendered images.

Now you can hit the Play button to generate a (small) "test" set of synthetic data.

Locations of Synthetic Data:

Location of rendered Images:
\TrainingSetGenerator\Saved\Screenshots\Windows

Location of semantically annotated ground truth:
\TrainingSetGenerator\Saved\ScreenshotMasks
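Since later processing needs each rendered image matched to its ground-truth mask, the two output folders above can be paired programmatically. The sketch below assumes image and mask share the same filename, which should be verified against your own output:

```python
import os

def pair_renders_with_masks(img_dir, mask_dir):
    """Pair each rendered image with its ground-truth mask by shared filename.

    img_dir / mask_dir correspond to the Saved\\Screenshots\\Windows and
    Saved\\ScreenshotMasks folders above. The same-filename convention is an
    assumption, not taken from the generator's code.
    """
    masks = set(os.listdir(mask_dir))
    pairs = []
    for name in sorted(os.listdir(img_dir)):
        if name in masks:
            pairs.append((os.path.join(img_dir, name),
                          os.path.join(mask_dir, name)))
    return pairs
```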

Adjust Training Set Generation:

Currently all variables needed to adjust the generation pipeline behavior are found in the LevelBlueprint.
You can access the LevelBlueprint via the Blueprints button in the editor toolbar (Open Level Blueprint):

The BlueprintEditor effectively splits into three parts: The Blueprint graph (middle), the function and global variable overview (left), and a details panel (right).

To change the value of a variable simply select it and adjust its Default Value in the details panel.

VARIABLES: Overview

The public variables (marked by the "open eye" symbol) control the main features of the generation pipeline, such as the number of synthetic images created per execution or the standard deviation of the light sources' intensity.
The private variables (marked by the "closed eye" symbol) contain bounds, counters, and object-array placeholders and should therefore not be altered.

Public:

sceneParameterUpdates:
Number of LightTemparature, LightIntensity, and FieldColor samples.

geometricParameterUpdates:
Number of playing situations generated per scene parameter set.

startDelay:
Time delay before the actual generation process starts. This is needed on some systems to load the scene.

renderTime:
Controls the time delay before saving the current image render. The delay represents the actual render time the engine needs per image. Because the scene is fully rearranged in each step while the engine renders in real time, low renderTimes (e.g. 0.0) may result in blurry or edgy images.

maxDistanceToBall:
The maximum distance in cm the ball is placed away from the camera robot.

lightIntensityStddev:
Standard deviation of the normal distribution controlling the offsets between individual light sources, to account for wear levels etc.

carpetColorArray:
Gives minimum and maximum values for carpet color sampling.

lightTemparatureArray:
Gives minimum and maximum values for light source temperature sampling.

lightIntensityArray:
Gives minimum and maximum values for light source intensity sampling.

cameraExposureArray:
NOT IN USE!
Gives minimum and maximum values for camera exposure sampling.
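To illustrate how these public variables interact, here is a minimal sketch of one scene parameter update: global values are drawn uniformly from their [min, max] arrays, and each light source then receives a normally distributed intensity offset (lightIntensityStddev). The function and dictionary key names are hypothetical stand-ins for the Blueprint variables, not the project's actual code:

```python
import random

def sample_scene_parameters(light_temp_range, light_intensity_range,
                            carpet_color_range, light_intensity_stddev,
                            num_lights, rng=random):
    """Sketch of one sceneParameterUpdate step (hypothetical names).

    Draws light temperature, base light intensity, and carpet color
    uniformly from their [min, max] ranges, then offsets each individual
    light's intensity with a normal distribution to mimic uneven wear
    between lamps, as described for lightIntensityStddev above.
    """
    temperature = rng.uniform(*light_temp_range)
    base_intensity = rng.uniform(*light_intensity_range)
    carpet_color = rng.uniform(*carpet_color_range)
    per_light = [base_intensity + rng.gauss(0.0, light_intensity_stddev)
                 for _ in range(num_lights)]
    return {"lightTemperature": temperature,
            "carpetColor": carpet_color,
            "lightIntensities": per_light}
```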

Private:

Coming soon...

DataSet: Extraction

The DataSetExtractor.py Python (2.7+) script makes use of the previously generated image and ground-truth data, merging them into an annotated training (or test) set composed of image patches containing examples for the given classes, together with the respective labels. For now, only LMDB and "plain data" (images structured in sub-directories) saves are supported.

The script is found in the UERoboCup/python/DataSetExtractor/ directory.

python DataSetExtractor.py --imgData=<PathToImageData> --groundTruth=<PathToGroundTruthData> --patchSize=<PatchSize> --saveTo=<PathToDatabase> --saveAs=<DatabaseType>

Dependencies:

To run the script, Numpy and OpenCV (cv2) are required.

If you want to use the --saveAs=LMDB database structure, you will additionally need the caffe (pycaffe) and lmdb Python packages.

Examples:

python DataSetExtractor.py --imgData=./Screenshots/ --groundTruth=./ScreenshotMasks/ --patchSize=32 --saveTo=./TrainingSets/TestDB1/ --saveAs=LMDB

python DataSetExtractor.py --imgData=./Screenshots/ --groundTruth=./ScreenshotMasks/ --patchSize=24 --saveTo=./TrainingSets/TestDB2/ --saveAs=DIR
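The core of the merging step can be sketched as follows: for each class found in the color-coded ground-truth mask, a patchSize x patchSize crop is cut from the rendered image around that region. The class-to-color mapping and the centering heuristic below are assumptions for illustration; the script's actual logic should be read from DataSetExtractor.py:

```python
import numpy as np

# Hypothetical mask colors per class; the actual color coding should be
# taken from DataSetExtractor.py.
CLASS_COLORS = {"Ball": (255, 0, 0), "Robot": (0, 255, 0),
                "Goalpost": (0, 0, 255)}

def extract_patches(image, mask, patch_size):
    """Cut patch_size x patch_size crops around each labeled mask region.

    For every class color present in the mask, the crop is centered on the
    mean position of the matching pixels; crops that would fall outside the
    image are skipped. A simplified sketch, not the script's exact logic.
    """
    half = patch_size // 2
    patches = []
    for label, color in CLASS_COLORS.items():
        ys, xs = np.where(np.all(mask == color, axis=-1))
        if len(ys) == 0:
            continue
        cy, cx = int(ys.mean()), int(xs.mean())
        if (half <= cy <= image.shape[0] - half and
                half <= cx <= image.shape[1] - half):
            patches.append((label, image[cy - half:cy + half,
                                         cx - half:cx + half]))
    return patches
```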

Check:

To view the LMDB database content you can use the testLMDB.py Python script provided in the UERoboCup/python/ directory.
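For the plain-data output (--saveAs=DIR), a quick sanity check is to count the extracted patches per class sub-directory. The layout assumption (one sub-directory per class) should be verified against the script's output:

```python
import os

def count_examples_per_class(dataset_dir):
    """Count patch files in each class sub-directory of a --saveAs=DIR set.

    Assumes one sub-directory per class, each holding the extracted patch
    images; this layout is an assumption, not taken from the script.
    """
    counts = {}
    for entry in sorted(os.listdir(dataset_dir)):
        class_dir = os.path.join(dataset_dir, entry)
        if os.path.isdir(class_dir):
            counts[entry] = len([f for f in os.listdir(class_dir)
                                 if not f.startswith(".")])
    return counts
```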

TestSet

To be able to compare your neural network architectures trained on the generated synthetic examples, we provide the test set used for all of our experiments and benchmarks.
The set consists of 4 x 780 (32x32 pixel) images showing the object classes: Ball, Robot, Goalpost, and Field (background).

The data has been accumulated with help from Nao-Team HTWK, HULKs, and the SPQR Team, who generously supported us with their data. Thank you very much!

License

MIT (LICENCE.txt) and GPL-3.0 (LICENSE.txt).