Commit

Merge pull request #7 from NumesSanguis/v0.3.2-alpha
V0.3.2 alpha
NumesSanguis committed Oct 27, 2018
2 parents f616f65 + 079da24 commit dc8bd6f
Showing 489 changed files with 35,290 additions and 28,078 deletions.
6 changes: 5 additions & 1 deletion .gitignore
@@ -145,7 +145,11 @@ sysinfo.txt
modules/input_facsfromcsv/openface

# json files
modules/output_facstojson/facsjson/
modules/output_facstofile/facsjson
modules/output_facstofile/facscsv

# deep neural network models
modules/process_facsdnnfacs/models/

# debug data
*logging
64 changes: 50 additions & 14 deletions README.md
@@ -1,6 +1,17 @@
# FACSvatar v0.3.1-Alpha
# FACSvatar v0.3.2-Alpha

## New
# Roadmap

## New v0.3.2

* Timestamp of message receive and send per module (`time.time_ns()` if Python >= 3.7, else `time.time()`; see the sketch below this list)
* Simplified sending and receiving of messages (`facsvatarzeromq.py` now takes care of encoding / decoding and adding timestamps)
* Performance improvement: smoothing time per message (asynchronous) reduced from 11.90 +/- 6.91 ms to 6.83 +/- 2.79 ms (pandas --> direct NumPy; see the sketch below)
* In progress: print() --> logger
* `process_facstoblend` module accepts folder argument for different AU --> Blend Shape conversions
* OpenFace modification updated to v2.0.6
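
A minimal sketch of the version-dependent timestamp call mentioned above (the function name is illustrative; in FACSvatar this logic is handled by `facsvatarzeromq.py`):

```python
import sys
import time

def timestamp_now():
    """Current time: integer nanoseconds on Python >= 3.7, float seconds otherwise."""
    if sys.version_info >= (3, 7):
        return time.time_ns()  # nanosecond resolution, no float rounding
    return time.time()  # fallback for older Python versions
```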

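The pandas-to-NumPy smoothing change swaps per-message DataFrame operations for plain array math. The project's actual smoothing code is not shown in this diff; a rough sketch of the idea, with assumed shapes and weights:

```python
import numpy as np

def smooth_frame(history: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted average over the last N frames of AU values.

    history: (N, num_aus) buffer of recent AU frames, newest last
    weights: (N,) non-negative weights favouring recent frames, e.g. np.arange(1, N + 1)
    """
    w = weights / weights.sum()  # normalize so AU values stay within [0, 1]
    return w @ history  # (num_aus,) smoothed AU values
```
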
## New v0.3.1

* OpenFace v2.0.3
* Eye movement based on eye gaze data
@@ -12,28 +23,54 @@
* Voice Activity Detection (VAD) to switch DNN user
* Mix participant AU / head pose data with DNN generated

TODO put video here
---


## TODO v0.4.0-beta
From beta onwards, changes will be documented

## Description
* Documentation
* Python modules:
* Standardization pass over all modules / code clean-up
* Consistency fix: ROUTER / DEALER sockets use JSON formatted data
* Docstring per class and function
* Logger instead of print() statements
* Debug as option to enable logger
* File structure for proper import of modules / pip?
* Use config file (in addition to command line arguments) + config filepath argument
* Easy run: Docker container per module + Docker Compose
* Demo video
* Extra: Test FACSvatar on Android with Unity3D

## TODO vx.x.x

* Module management (between modules: heartbeat, controller, synchronized start, etc.)
* Blender add-on (after Blender 2.8 release)
* New FACS face-rig when the MBLab characters' facial expression system has been updated
* Facial rig for easy modification (animation purposes)
* Unreal Engine support


# Description

Affective computing and avatar animation share a common premise: a person's facial expressions contain useful information. Until now, these fields have used different processes to obtain and use these data. FACSvatar combines both purposes in a single framework. Empower your Embodied Conversational Agents (ECAs)!

* **Affective computing**: Facial expressions can not only be analyzed, but also be used to generate animation, purely from data.
* **Animators**: Capture facial expressions with just a camera and use it to animate any compatible avatar.
* **Animators**: Capture facial expressions with a standard webcam and use it to animate any compatible avatar.

This interoperability is possible because FACSvatar uses the [Facial Action Coding System (FACS)](https://en.wikipedia.org/wiki/Facial_Action_Coding_System "https://en.wikipedia.org/wiki/Facial_Action_Coding_System") by Paul Ekman as an intermediate data representation. FACS describes facial expressions in terms of muscle groups, called Action Units (AUs). By giving each AU a value between 0 and 1, we can describe the contraction / relaxation of facial muscles.
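
For example, one frame of FACS data could be represented as a simple AU-to-intensity mapping (values and structure are illustrative, not FACSvatar's exact message format):

```python
# A hypothetical smile: AU06 = cheek raiser, AU12 = lip corner puller, AU45 = blink
facs_frame = {
    "AU06": 0.72,
    "AU12": 0.85,
    "AU45": 0.0,
}
```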

[![FACSvatar demo 2018-02](https://img.youtube.com/vi/fI05lzXBj3s/0.jpg)](https://www.youtube.com/watch?v=fI05lzXBj3s)
[![FACSvatar demo 2018-09](https://img.youtube.com/vi/J2FvrIl-ypU/0.jpg)](https://www.youtube.com/watch?v=J2FvrIl-ypU)

# Documentation & simple how to run

Open 3 terminals and open the project `unity_FACSvatar` in Unity 3D (2017.3)
Open 3 terminals and open the project `unity_FACSvatar` in Unity 3D (2018.2.13f1)

0. Press 'play' in the Unity editor
0. Install the PyZMQ library (ZeroMQ for Python)
0. Terminal: `python N_proxy_M_bus.py` (/modules/)
0. Terminal: `python pub_blend.py` (/modules/02_facs-to-blendshapes/)
0. Terminal: `python pub_facs.py` (/modules/01_facs-from-csv/)
0. Terminal: `python main.py` (/modules/02_facs-to-blendshapes/)
0. Terminal: `python main.py` (/modules/01_facs-from-csv/)
0. See an avatar move its head and make facial expressions!

For more detailed instructions, see the [FACSvatar documentation](https://facsvatar.readthedocs.io/en/latest/).
@@ -58,17 +95,16 @@ The modularity is made possible by using [ZeroMQ - brokerless messaging library]


## Detailed workings (English & 日本語)
[![FACSvatar details in English and 日本語](https://surafusoft.eu/facsvatar/files/2018/02/FACSvatar_poster_25_A4_3-liner-724x1024.png)](https://surafusoft.eu/facsvatar/files/2018/02/FACSvatar_poster_25_A4_3-liner.png)
[![FACSvatar details in English and 日本語](https://surafusoft.eu/facsvatar/files/2018/10/FACSvatar_poster_25_A4-724x1024.png)](https://surafusoft.eu/facsvatar/files/2018/10/FACSvatar_poster_25_A4.png)

More can be found on the project's website: [FACSvatar homepage](https://surafusoft.eu/facsvatar/ "https://surafusoft.eu/facsvatar/").

Note: The poster still shows Crossbar.io, but this has been replaced with ZeroMQ.


## Software
* [Blender](https://www.blender.org/) + [Manuel Bastioni Lab addon](http://www.manuelbastioni.com/) (create human models)
  * [MBlab wikia](http://manuelbastionilab.wikia.com/wiki/Manuel_Bastioni_Lab_Wiki)
* [FACSHuman](https://www.michaelgilbert.fr/facshuman/) add-on for MakeHuman
* [OpenFace](https://github.com/TadasBaltrusaitis/OpenFace) (extract FACS data)
* [Unity 3D](https://unity3d.com/) 2017.3 (animate in game engine)
* [Unity 3D](https://unity3d.com/) 2018.2.13f1 (animate in game engine)
* [ZeroMQ (PyZMQ)](http://zeromq.org/) (distributed messaging library)
* [Docker](https://www.docker.com/) (*future* containerization for easy distribution)
77 changes: 51 additions & 26 deletions blender/facsvatar_zeromq.py
@@ -21,9 +21,11 @@ class FACSvatarZeroMQ(bpy.types.Operator):

_timer = None

def __init__(self, address='127.0.0.1', port='5572'):
def __init__(self, address='127.0.0.1', port='5572', head_movement=True):
print("FACSvatar ZeroMQ initialising...")

self.head_movement = head_movement

# init ZeroMQ subscriber
url = "tcp://{}:{}".format(address, port)
ctx = zmq.Context.instance()
@@ -34,10 +36,17 @@ def __init__(self, address='127.0.0.1', port='5572'):
self.frame = bpy.context.scene.frame_current
self.pause_loop_count = 0

self.find_MBLabModel()

print("FACSvatar ZeroMQ initialised")

def find_MBLabModel(self):
# get manuel bastioni character in scene
self.mb_obj = None
for obj in scene.objects:
if obj.name.endswith("_armature"):
print(obj)
if obj.name.endswith("_armature") or obj.name.startswith("MBlab_sk"):
print("MBLab object found!")
self.mb_obj = obj
self.head_bones = [self.mb_obj.pose.bones['head'], self.mb_obj.pose.bones['neck']]
for bone in self.head_bones:
@@ -47,15 +56,16 @@

# find child *_body of MB character
for child in self.mb_obj.children:
if child.name.endswith("_body"):
print(child)

if child.name.endswith("_body") or child.name.startswith("MBlab_bd"):
self.mb_body = child
print("Body found!")
print(self.mb_body)

# stop search, found MB object
break

print("FACSvatar ZeroMQ initialised")

# set Shape Keys for chestExpansion
def breathing(self, full_cycle=1):
# set Shape Key values
@@ -93,34 +103,49 @@ def modal(self, context, event):
# self.head_json = json.loads(msg[4].decode('utf8'))
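# multipart message: [topic, timestamp, data]; part 2 carries the JSON payload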
msg[2] = json.loads(msg[2].decode('utf8'))

print(dir(self.mb_obj))

# set pose only if pose data is available and not empty
if 'pose' in msg[2] and msg[2]['pose']:
# set head rotation
if len(self.head_bones) == 2:
bpy.context.scene.objects.active = self.mb_obj
bpy.ops.object.mode_set(mode='POSE') # mode for bone rotation

# for pose_name in enumerate(msg_json['data']['head_pose']):
pose_head = msg[2]['pose']
self.rotate_head_bones(0, pose_head['pose_Rx']) # pitch
self.rotate_head_bones(1, pose_head['pose_Ry'], -1) # yaw
self.rotate_head_bones(2, pose_head['pose_Rz'], -1) # roll

# set key frames
bpy.ops.object.mode_set(mode='OBJECT') # mode for key frame
self.head_bones[0].keyframe_insert(data_path="rotation_euler", frame=self.frame)
self.head_bones[1].keyframe_insert(data_path="rotation_euler", frame=self.frame)
# print(dir(self.mb_obj))
# if object was not found in initialisation
try:
self.mb_obj
except AttributeError:
self.find_MBLabModel()

# set pose only if pose data is available and not empty
if self.head_movement:
if 'pose' in msg[2] and msg[2]['pose']:
# set head rotation
if len(self.head_bones) == 2:
bpy.context.scene.objects.active = self.mb_obj
bpy.ops.object.mode_set(mode='POSE') # mode for bone rotation

# for pose_name in enumerate(msg_json['data']['head_pose']):
pose_head = msg[2]['pose']
self.rotate_head_bones(0, pose_head['pose_Rx']) # pitch
self.rotate_head_bones(1, pose_head['pose_Ry'], -1) # yaw
self.rotate_head_bones(2, pose_head['pose_Rz'], -1) # roll

# set key frames
bpy.ops.object.mode_set(mode='OBJECT') # mode for key frame
self.head_bones[0].keyframe_insert(data_path="rotation_euler", frame=self.frame)
self.head_bones[1].keyframe_insert(data_path="rotation_euler", frame=self.frame)

else:
print("Head bone and neck bone not found")

else:
print("Head bone and neck bone not found")
print("No pose data found")

else:
print("No pose data found")
print("Head movement data ignored")

# set blendshapes only if blendshape data is available and not empty
if 'blendshapes' in msg[2] and msg[2]['blendshapes']:
# if object was not found in initialisation
try:
self.mb_body
except AttributeError:
self.find_MBLabModel()

# set all shape keys values
bpy.context.scene.objects.active = self.mb_body
for bs in msg[2]['blendshapes']:
9 changes: 5 additions & 4 deletions modules/controller.py
@@ -42,10 +42,11 @@ def face_configuration(self, dict_config):
msg['pose'] = self.slicedict(dict_config, "pose")

# send message
self.pub_socket.send_multipart(["gui.face_config".encode('ascii'), # topic
str(int(time.time() * 1000)).encode('ascii'), # timestamp
json.dumps(msg).encode('utf-8') # data in JSON format or empty byte
])
# self.pub_socket.send_multipart(["gui.face_config".encode('ascii'), # topic
# str(int(time.time() * 1000)).encode('ascii'), # timestamp
# json.dumps(msg).encode('utf-8') # data in JSON format or empty byte
# ])
self.pub_socket.pub(msg, "gui.face_config")

# change AU multiplier values
def multiplier(self, dict_au):
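
The `pub()` call above relies on the messaging helper that v0.3.2 moved into `facsvatarzeromq.py` (per the README, it now takes care of encoding / decoding and adding timestamps). Its real implementation is not part of this diff; a minimal sketch of such a wrapper, mirroring the 3-part message the replaced code sent, could look like:

```python
import json
import sys
import time

import zmq


class PubSocket:
    """Hypothetical PUB-socket wrapper; illustrative, not the actual FACSvatar class."""

    def __init__(self, socket: zmq.Socket):
        self.socket = socket

    def pub(self, msg: dict, topic: str):
        """Publish msg as 3 parts: topic, send timestamp, JSON-encoded data."""
        if sys.version_info >= (3, 7):
            timestamp = time.time_ns()  # nanoseconds
        else:
            timestamp = int(time.time() * 1000)  # milliseconds fallback
        self.socket.send_multipart([
            topic.encode('ascii'),            # topic, for SUB-side filtering
            str(timestamp).encode('ascii'),   # timestamp
            json.dumps(msg).encode('utf-8'),  # payload
        ])
```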
