
Releases: regzo2/VRCFaceTracking

Unified Expressions Release Candidate.

12 Feb 20:44

PLEASE USE VRCFT v5.0.0: https://github.com/benaclejames/VRCFaceTracking/releases


This release contains several fixes and expansions upon the last release build of VRCFaceTracking Unified Expressions. These changes bring this PR very close to finalization, and very few reworks or fixes are expected at this stage.

NOTE: Modules built for VRCFT 4.0.0 (UI) are not compatible with this build and will need to be adapted to work within the new Unified architecture. Test modules for common interfaces such as the Quest Pro and SRanipal are included; existing modules will be adapted to Unified Expressions soon.

Additions

  • Exposed v2 (Unified) base and combined parameters to output.
    • Accessed via v2/... for base Unified parameters, Simple parameters, and merged parameters.
  • Drastically expanded Unified expressions base standard.
  • Added Unified Simple expressions to represent common conversions of more general expressions from the base Unified set of parameters.
    • Includes a mapper to retain consistent convergence of parameters.
  • Simplified calibration controls in the window
    • This is just intended to test out the capabilities of calibration within VRCFaceTracking. A fuller implementation of the calibrator will arrive with a potential UI update (including individual parameter calibration, smoothing, and other mutations).
    • Calibrator can be disabled with a toggle.
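Since the new v2 parameters are received over OSC, the shape of a single-float message to one of them can be sketched as below. This is a stdlib-only illustration, not part of VRCFT itself; the `/avatar/parameters/` prefix is VRChat's avatar-parameter address space, and the exact `v2/JawOpen` path is an assumption based on the `v2/...` convention described above.

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: address + ",f" typetag + big-endian float32."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# The v2/ prefix selects the new Unified parameters (path is an assumption).
packet = osc_message("/avatar/parameters/v2/JawOpen", 0.5)
```

Sending `packet` over UDP to the OSC port would set the parameter; any OSC library produces the same bytes.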

Improvements

  • Drastically improved Unified to SRanipal conversion
    • Fixed common conversions from more robust tracking interfaces, fixed EyeLidExpandedSqueeze behavior.
  • Improved Quest Pro tracking
    • Eyelids should be tracked better; fixed Sneer overdriving the MouthUpperUp-type parameters.
  • Adjusted EyeTrackingActive and LipTrackingActive parameters
    • OSC receiver now parses these parameters and properly toggles VRCFT module states.
    • Fixed an oversight that sent these parameters even when modules were not initialized (allowing outside apps to freely work around VRCFT).
  • More robust parameter parsing
    • Parameters can now be loaded at any directory depth, e.g. .../JawOpen or .../v2/JawOpen
    • Also applies to Binary parameters.
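The looser parsing above can be sketched as a path-suffix match on the OSC address. This is a hypothetical illustration of the idea, not the actual VRCFT implementation:

```python
def matches_parameter(address: str, name: str) -> bool:
    """A parameter name may itself contain directory levels (e.g. "v2/JawOpen");
    it matches when the address ends with that path, whatever levels precede it."""
    return address == name or address.endswith("/" + name)
```

A suffix match rather than an exact compare is what lets the same parameter resolve under any directory prefix while still rejecting partial-name collisions like `MouthJawOpen`.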

Install / Usage

Included below are the VRCFaceTracking executable and in-dev modules for testing: the ALVR/headless module, Quest OpenXR module, MeowFace module, and SRanipal module.

Run the VRCFaceTracking executable from anywhere. Place desired modules in %appdata%\VRCFaceTracking\CustomLibs.

NOTE: Make sure to unblock all DLLs in CustomLibs! Right-click each .dll, go to Properties, check the Unblock checkbox, and apply.

NOTE: SRanipal and QuestOpenXR modules have dependencies within the ModuleLibs subfolder in CustomLibs. Please make sure they are included and are unblocked as well!

NOTE: The SRanipal module requires the latest runtime (Runtime 1.3.6.X+, included with Vive Console on Steam) in order to work, and supports the Focus 3's face and eye tracking.

Full Changelog: v5.0.0-pre...v5.0.0-pre-2

VRCFaceTracking Unified Expressions

26 Jan 05:49
Pre-release

New Features

This release encompasses a huge swathe of changes: a shift in our approach to VRCFaceTracking's development, a major refactor of core parts of the VRCFaceTracking application, and a move towards our own in-house, open-source face tracking standard: Unified Expressions!

Unified Expressions

Unified Expressions is a new face tracking standard designed to be completely standalone (you can build or make hardware that targets it directly, and create avatars or other outputs that take advantage of it) while remaining backwards compatible with existing face tracking standards, including AR52, Meta FACS, and SRanipal. This means Unified Expressions not only accepts these tracking standards and retains their unique tracking quirks, but can also emulate one face tracking solution for avatars that do not deliberately support others (for example, using a Quest Pro to drive an avatar designed with AR52 blendshapes).

We believe Unified Expressions will be a compelling standard both to develop for and to develop with, making face tracking an open-ended standard in the same way that many other components of avatars are already standardized (such as rigging).

Unified Expressions currently also emulates our old VRCFaceTracking parameter system, so avatars you previously built for VRCFaceTracking will still work as expected! Even newer interfaces will be able to use the old parameter system seamlessly, with some reformulation of the old parameters. Once Unified Expressions' parameter system is finalized we will actively encourage its use, as we will be deprecating all support and documentation for the old parameter system.

In its current state there are only a few basic combined parameters. This is intentional: we are being very careful not to introduce combined parameters that unintentionally degrade tracking quality. Because VRCFaceTracking was built on SRanipal, the old combined parameters were optimized around SRanipal's tracking quirks; going forward we will develop combined parameters with facial muscle anatomy more in mind to avoid tracking issues. All currently available combined and base Unified Expressions parameters can be found here

We hope that Unified Expressions is our 'biggest seller' with this update, and we hope that it will be an indicator of our forward moving plans!

VRCFaceTracking additions

  • Data Mutations: Calibration and Smoothing

    • This release also includes per-parameter calibration and smoothing.
    • Users will be able to calibrate any tracking interface to work best for their needs. The current calibration implementation simply normalizes parameters (between 0 and 1) based on how expressive you are with the tracking, which in theory means the calibration adapts to your face. Smoothing is also available for controlling each parameter's smoothing value.
    • The Mutator system is capable of setting a unique calibration and smoothing value for each parameter, and modules can set initial values in it respectively (may not be possible in the final release).
    • Currently you can opt in to these by clicking Calibrate, toggling on Fine Tune Calibrator, and Enable Smoothing respectively. These are not the final implementations of these systems; they are for testing capabilities.
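The calibration and smoothing mutations described above can be sketched as follows. This is a hypothetical illustration of the two ideas (range normalization and exponential smoothing), not the actual VRCFT Mutator code; the class and method names are invented for the example.

```python
class ParameterMutator:
    """Sketch of per-parameter calibration and smoothing (names are hypothetical)."""

    def __init__(self, smoothing: float = 0.5):
        self.observed_max = 0.0      # largest raw value seen for this parameter
        self.smoothing = smoothing   # 0 = no smoothing, closer to 1 = heavier
        self.smoothed = 0.0

    def calibrate(self, raw: float) -> float:
        # Track the largest value seen so far and normalize into 0-1,
        # so a less expressive face still reaches the full parameter range.
        self.observed_max = max(self.observed_max, raw)
        return raw / self.observed_max if self.observed_max > 0 else 0.0

    def smooth(self, value: float) -> float:
        # Simple exponential smoothing toward the newest calibrated value.
        self.smoothed += (value - self.smoothed) * (1.0 - self.smoothing)
        return self.smoothed
```

Keeping one mutator instance per parameter is what makes the calibration and smoothing values independent for each parameter, as the release notes describe.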
  • Explicitly set module load order and swapping

    • Modules can now be set into any load order that you want! You will also be able to swap between modules (currently there are some domain conflicts if a module uses Logger constantly, which may cause a crash).
  • Saved Data Configuration

    • Tracking and Mutation data, and module load order will be saved on application close. Once the app loads the config file it will apply it to the current session. The config file is saved in %appdata%\VRCFaceTracking\
  • Portable Setup

    • VRCFaceTracking now supports completely portable module installs. If you include a CustomLibs folder next to the executable it will load all modules in that folder.
  • Moved SRanipal to Separate Module

    • SRanipal is now its own dedicated module.
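The portable setup above amounts to checking a CustomLibs folder next to the executable in addition to the %appdata% location. A stdlib-only sketch of that discovery step (hypothetical, not the actual VRCFT loader):

```python
import os
from pathlib import Path

def discover_modules(exe_dir: str) -> list:
    """Collect module DLLs from a portable CustomLibs folder next to the
    executable and from the %appdata% CustomLibs folder, if they exist."""
    candidates = [
        Path(exe_dir) / "CustomLibs",
        Path(os.environ.get("APPDATA", "")) / "VRCFaceTracking" / "CustomLibs",
    ]
    dlls = []
    for folder in candidates:
        if folder.is_dir():
            dlls.extend(sorted(folder.glob("*.dll")))
    return dlls
```

On non-Windows machines the %APPDATA% entry simply resolves to a non-existent path and is skipped.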

Using the New Update

Download the VRCFaceTracking binaries below; the SRanipal, ALVR, and Quest Pro OpenXR modules are also available below for testing. You can also download the MeowFace module built for this version of VRCFaceTracking!