[Project Page] [Paper]
This repository contains the runtime Unity code for SAMP. For the training code, please see Training Code. The runtime Unity code is largely based on the Neural State Machine code by Sebastian Starke.
We provide several demo scenes for testing SAMP.
- Open the Demo Scene (Unity -> Assets -> Demo -> Main_Demo.unity).
- Hit the Play button.
- Move around with W,A,S,D (Move), Q,E (Turn), Left-Shift (Sprint).
- Move the mouse over an object and hold key C (Sit) or L (Lie Down).
- To sit, keep the C key pressed. To stand up, simply press W to move forward.
- Open the GoalNet Scene (Unity -> Assets -> Demo -> GoalNet.unity).
- Hit the Play button.
- You will see different goals generated by GoalNet for the different objects in the scene.
This demo shows how to use GoalNet and MotionNet together.
- Open the GoalNet_MotionNet Scene (Unity -> Assets -> Demo -> GoalNet_MotioNet.unity).
- Hit the Play button.
- Press and hold key C.
- Two different goals will be generated by GoalNet.
- Each of the two characters will follow one of the goals until the sitting action is executed.
- You can stop and replay the demo multiple times to get different results.
This demo shows how SAMP navigates around obstacles.
- Open the PathPlanning Scene (Unity -> Assets -> Demo -> PathPlanning.unity).
- Hit the Play button.
- Press and hold key C.
- The PathPlanning module will compute a collision-free path to the goal.
- The character will follow the path until the action is executed.
- You can try to start the character from different locations in the scene.
You can turn GoalNet and the path-planning module on or off from the character's inspector. Please note that the path-planning module assumes a NavMesh has already been baked for the scene.
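As a rough illustration of what the path-planning step relies on (this is a hypothetical sketch, not the SAMP runtime code), Unity's built-in API can query a collision-free path on a pre-baked NavMesh like so:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Hypothetical example: query a path on a pre-baked NavMesh.
// The class and field names here are illustrative only.
public class PathQueryExample : MonoBehaviour
{
    public Transform goal; // e.g. a target position such as one produced by GoalNet

    void Update()
    {
        var path = new NavMeshPath();
        // CalculatePath returns false if no complete path exists
        // (for instance, when no NavMesh has been baked).
        if (NavMesh.CalculatePath(transform.position, goal.position,
                                  NavMesh.AllAreas, path))
        {
            // path.corners holds the waypoints the character can follow.
            for (int i = 0; i < path.corners.Length - 1; i++)
                Debug.DrawLine(path.corners[i], path.corners[i + 1], Color.green);
        }
    }
}
```

This is why the module requires a baked NavMesh: without one, the path query fails and no collision-free route to the goal can be computed.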
You can download MotionNet Data and the raw .fbx files from the SAMP website. If you want to export the data from Unity yourself:
- Open the Mocap Scene (Unity -> Assets -> MotionCapture -> MotionNet_scene.unity).
- Click on the Editor game object in the scene hierarchy window.
- Open the MotionNet Exporter (Header -> AI4Animation -> MotionNet Exporter).
- Choose whether you want to export train or test data.
- Click Reload.
- Click Export.
You can download GoalNet Data from the SAMP website. If you want to export the data from Unity yourself:
- Open the Mocap Scene (Unity -> Assets -> MotionCapture -> GoalNet_scene.unity).
- Open the GoalNet Exporter (Header -> AI4Animation -> GoalNet Exporter).
- Click Reload.
- Click Export.
In the demo, you may encounter corner cases where the system fails, due to the combinatorial explosion of possible actions and interactions between the character and the environment.
By using this code, you agree to adhere to the license of AI4Animation. In addition:
- You may use, reproduce, modify, and display the research materials provided under this license (the “Research Materials”) solely for noncommercial purposes. Noncommercial purposes include academic research, teaching, and testing, but do not include commercial licensing or distribution, development of commercial products, or any other activity which results in commercial gain. You may not redistribute the Research Materials.
- You agree to (a) comply with all laws and regulations applicable to your use of the Research Materials under this license, including but not limited to any import or export laws; (b) preserve any copyright or other notices from the Research Materials; and (c) for any Research Materials in object code, not attempt to modify, reverse engineer, or decompile such Research Materials except as permitted by applicable law.
- THE RESEARCH MATERIALS ARE PROVIDED “AS IS,” WITHOUT WARRANTY OF ANY KIND, AND YOU ASSUME ALL RISKS ASSOCIATED WITH THEIR USE. IN NO EVENT WILL ANYONE BE LIABLE TO YOU FOR ANY ACTUAL, INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR IN CONNECTION WITH USE OF THE RESEARCH MATERIALS.
If you find this Model & Software useful in your research, we kindly ask you to cite:
@inproceedings{hassan_samp_2021,
title = {Stochastic Scene-Aware Motion Prediction},
author = {Hassan, Mohamed and Ceylan, Duygu and Villegas, Ruben and Saito, Jun and Yang, Jimei and Zhou, Yi and Black, Michael},
booktitle = {Proceedings of the International Conference on Computer Vision 2021},
month = oct,
year = {2021},
event_name = {International Conference on Computer Vision 2021},
event_place = {virtual (originally Montreal, Canada)},
month_numeric = {10}
}