This is a Unity project containing the source code and prefabs for the interaction techniques we present in our paper referenced below.
- If you use one of our techniques for industrial purposes, please star the project and drop us a line by e-mail to tell us in which application you use it.
- If you use one of our techniques for academic purposes, please cite: Marc Baloup, Thomas Pietrzak, Martin Hachet, and Géry Casiez. 2021. Non-isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in VR. In Proceedings of VRST '21. ACM, New York, NY, USA, Article 5, 1–10.
```bibtex
@inproceedings{10.1145/3489849.3489867,
  author = {Baloup, Marc and Pietrzak, Thomas and Hachet, Martin and Casiez, G\'{e}ry},
  title = {Non-Isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in {VR}},
  year = {2021},
  isbn = {9781450390927},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3489849.3489867},
  doi = {10.1145/3489849.3489867},
  booktitle = {Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology},
  articleno = {5},
  numpages = {10},
  keywords = {VR, Emoji, Emoticons, Emotion, Facial expression, Avatar},
  location = {Osaka, Japan},
  series = {VRST '21}
}
```
- In the assets folder of the repository, copy the folders `FacialExpressions`, `RayCursor`, `VRInputsAPI` and `Resources`, and paste them into your assets directory.
  - `FacialExpressions` is the core of the project, containing the assets for the techniques and the implementation of the face we used for our experiments.
  - `RayCursor` contains an altered version of our RayCursor technique. You can find the original project here.
  - `VRInputsAPI` contains a homemade API to easily interface with the Unity XR API. If you already use another API to get the VR inputs, just ignore this folder, but you'll have to change the API calls in the code accordingly (a minimal sketch of the raw Unity XR calls follows the installation steps below), and remove the `VRInputsManager` script from the prefab `VR Rig Facial Expression`.
  - `Resources` contains all the emoji used for the RayMoji technique.
- Copy the files `voice_recognition.py` (for EmoVoice) and `expression_gestures.1d` (for EmoGest) from the root directory of the repository into the root directory of your project.
- Depending on what you will use in this project, you will need to install some dependencies:
  - For the EmoVoice technique:
    - JSON .NET For Unity
    - Python 3.9 installed on your system (you will have to change the value of `PYTHON_PATH` in `Assets/FacialExpressions/Scripts/Techniques/EmoVoice.cs`; a sketch of the launching pattern follows below)
    - the SpeechRecognition Python project
  - If you still want to use `VRInputsAPI`, you need to install `XR Legacy Input Helpers` from the Unity package manager.
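If you replace `VRInputsAPI` with your own input layer, the calls to swap in are plain `UnityEngine.XR` queries. The following is a minimal sketch (not code from this repository) of reading the touchpad press that activates the techniques, assuming a device mapping where the Vive touchpad is exposed as `primary2DAxisClick`:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative only: a minimal stand-in for a VRInputsAPI-style query,
// reading the touchpad press with raw UnityEngine.XR calls (Unity 2019.4).
public class RawXRInputExample : MonoBehaviour
{
    void Update()
    {
        // Grab the right-hand controller tracked by the XR runtime.
        InputDevice rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (!rightHand.isValid)
            return;

        // On the HTC Vive, pressing the touchpad is exposed as primary2DAxisClick.
        if (rightHand.TryGetFeatureValue(CommonUsages.primary2DAxisClick, out bool touchpadPressed)
            && touchpadPressed)
        {
            // Touchpad position, e.g. to pick an item on a radial menu.
            rightHand.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 touchpadPosition);
            Debug.Log($"Touchpad pressed at {touchpadPosition}");
        }
    }
}
```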
- Version of Unity: `2019.4.18f1`
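For EmoVoice, `voice_recognition.py` runs in an external Python 3.9 interpreter. The repository's actual integration lives in `EmoVoice.cs`; the sketch below only illustrates the general pattern of launching a script from a `PYTHON_PATH` constant and capturing its output for parsing with JSON .NET For Unity. Apart from `PYTHON_PATH`, every name here is hypothetical:

```csharp
using System.Diagnostics;

// Illustrative only: the general pattern of running an external Python script
// from Unity, as EmoVoice does with voice_recognition.py. Only PYTHON_PATH
// corresponds to a real constant in the project; the other names are hypothetical.
public static class PythonRunnerExample
{
    // Adapt to your Python 3.9 installation, as required for EmoVoice.cs.
    const string PYTHON_PATH = @"C:\Python39\python.exe";

    public static string RunScript(string scriptPath)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = PYTHON_PATH,
            Arguments = scriptPath,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };
        using (var process = Process.Start(startInfo))
        {
            // The script's stdout (e.g. a JSON payload) can then be parsed
            // with JSON .NET For Unity.
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return output;
        }
    }
}
```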
You can try controlling the avatar's facial expression with the provided scene at `Assets/Scenes/SampleScene`. All the implemented techniques are located in the game object hierarchy (see Figure 1). To try one of them, drag the corresponding game object onto the `Technique To Activate` property of the `Technique Activator` component, in the Inspector of `VR Rig Facial Expression`.
In VR, the technique is activated by pressing the touchpad of the HTC Vive (the input may vary depending on the controller).
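For reference, enabling a technique from code amounts to activating the corresponding game object. The component below is a conceptual sketch, not the repository's actual `Technique Activator` implementation:

```csharp
using UnityEngine;

// Conceptual sketch only; NOT the repository's actual Technique Activator.
// It simply enables the technique game object assigned in the Inspector,
// mirroring what dragging a prefab onto Technique To Activate achieves.
public class TechniqueActivatorSketch : MonoBehaviour
{
    // Assign one of the disabled technique game objects from the hierarchy.
    public GameObject techniqueToActivate;

    void Start()
    {
        if (techniqueToActivate != null)
            techniqueToActivate.SetActive(true);
    }
}
```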
Figure 1: The Unity interface in the scene `SampleScene`. The techniques are the prefabs that are disabled in the hierarchy. To try one of them, drag and drop it onto the `Technique To Activate` property at the bottom, then play the demo in VR.
This project is published under the MIT License (see `LICENSE.md`).
The project also uses external resources: