OpenFace License Information. No Commercial Use Without Purchase, $18k Per Year #27
I see, I guess the documentation could point this out more clearly.
Thank you for the link. That page describes the licensing terms better than their GitHub page. This page did not exist (at least that I'm aware of) when I made that license page. I'll update the documentation with that one. The idea of FACSvatar is that it is not dependent on a specific module. As long as someone develops a tracker that outputs AU values (based on FACS), it can be used as a drop-in replacement for OpenFace. Everything else will continue to work as normal.
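As an illustration of that drop-in idea, here is a minimal sketch of what a tracker module could publish: a ZeroMQ PUB socket sending a JSON payload keyed by AU names. The port, topic, and field names are assumptions for illustration, not FACSvatar's actual wire format.

```python
# Minimal sketch of a tracker module that publishes FACS AU values.
# Assumption: a ZeroMQ PUB socket with a JSON payload; the port, topic,
# and message fields are illustrative, not FACSvatar's actual format.
import json
import time

import zmq


def publish_au_frames(get_au_values, port=5570, topic=b"tracker.au"):
    """Publish one JSON message per frame with AU intensities."""
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(f"tcp://*:{port}")

    while True:
        aus = get_au_values()          # e.g. {"AU01": 0.4, "AU12": 2.1, ...}
        msg = {"timestamp": time.time(), "au": aus}
        pub.send_multipart([topic, json.dumps(msg).encode("utf-8")])


def fake_tracker():
    """Stand-in for any FACS-capable tracker (OpenFace, or a replacement)."""
    return {"AU01": 0.0, "AU06": 1.2, "AU12": 2.5}


if __name__ == "__main__":
    publish_au_frames(fake_tracker)
```

Any tracker that can fill such an AU dictionary could, in principle, replace OpenFace without touching the rest of the pipeline.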
Also note that the FACSvatar code itself is licensed under LGPL. This allows commercial usage, but any changes to the code need to be published. The L in LGPL means that only changes to FACSvatar's own code need to be published, not any code that merely interfaces with FACSvatar. So you can combine it with commercial closed-source code. Publishing your code would mean more functionality, making it useful for more people, leading to more people contributing, which improves the project further. So please do publish what you make :)
Thank you for the clarifications. I will have to research a bit further to make sure that StrongTrack outputs actual AU values, and not something particular to it. But as far as I know, you can set it up to output the tracking information in a variety of formats. It is still in the early stages, but it has proven functionality. I am also awaiting clear clarification of its license, but from what he has said in his demonstration videos, it is freely usable for commercial purposes. Thank you, and be well!
Update about StrongTrack. I must have been flustered by the OpenFace license when I was checking the StrongTrack license. Rob in Motion has a clearly stated GNU General Public License v3.0 on his GitHub. I have zero programming experience. Hopefully someone will be kind enough to get these two awesome projects working together.
Thank you for linking that project, it's interesting. It seems that for now it's only lip tracking, which OpenFace also does (not sure which one does it better), but full face tracking is coming. License-wise it's somewhat better for commercial projects, but just as a warning, the GPL is quite hard to manage in a commercial project (as opposed to FACSvatar's LGPL), unless you're planning to distribute all your project files. Personally, I'm not using this project for commercial purposes, so honestly OpenFace is enough for me. More supported trackers are of course better for the health of this project, so it would be great if someone created support for it.
It is very confusing territory. So the code of a given program may be licensed under GPL instead of LGPL (or any of a myriad of other licenses), but what about the output of that software? This is a subject that even highly paid lawyers have a problem deciphering. (A Google search will show many videos of lawyers saying that the language of the GPL is at times undecipherable, and also contradictory.) For example, Blender is under GPL, but anything I create with it is fully mine. I would think that is the general intent of the people who release under GPL.
But, again, I think in many cases people are releasing under GPL, attempting to say "hey, use this for whatever", without realising the effect of that license on other software that may either derive from it or use its output. So I guess the pertinent question is: if I am using GPL 3.0 software, do I fully own the content I create with it? It may be worthwhile for people to watch what Linus Torvalds has to say about GPL 3.0 in general. It is not good. But my point is, even though StrongTrack is GPL, the output of that program should be readily and legally usable to send data to FACSvatar without issue. It seems people are following the pack and releasing under GPL3 without really understanding the limitations this puts on, for example, indie game developers. I really do not know, because I surely cannot comprehend all these licenses.
Regarding StrongTrack: Rob in Motion has stated that he is actively working on extending the type and amount of what it can track. I get the impression that when he is done, it will be comparable to OpenFace's abilities.
Further information. I went ahead and contacted the lawyers for OpenFace and posed the following question: "I just want to use the output from this tracker to make my game characters move their faces. Does this really require a commercial license?" The response I received stated that yes, this would fall under commercial license requirements.
@Hunanbean Sorry to hear that :/ Just out of interest, and in case another FACS tracker comes out, around what time period are you planning to release your indie game?
Although I am devoting all my time to the game, as a single developer it is a long, arduous process. It is still in very early development. Although I am shooting for an alpha release in under one year, that may not be feasible. The long and the short of it is, I do not have any realistic estimate yet, as there are still several issues that bring development to a halt, such as this one. I have decided for the time being to go a different route for lipsync and facial expressions. For lipsync, I have rewritten the phoneme/viseme references in papagayo-ng to include more. I am also expanding MHX2 (MakeHuman exchange) to support all the added visemes, and to extend its functionality to uses beyond the MakeHuman base. For facial expressions, I am using point-based facial tracking with Blender, which is not very efficient. Perhaps I will be able to make use of StrongTrack as it develops.
I will keep an eye on your project here for word from you if you find another, and I will mention it if I happen to spot one first. I do hope another FACS-based tracker comes to be.
Thank you again, and be well!
P.S. I do not know if you want to keep this as an open issue or close it; I leave it up to you.
Also, for anyone interested, I will post the code changes for papagayo-ng for merge, or as an alternate, as soon as I finish the project.
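For the lipsync route described above, here is a minimal sketch of the kind of phoneme-to-viseme lookup being expanded. The phoneme and viseme names are illustrative assumptions (CMU-style phonemes mapped to generic mouth-shape names), not the actual papagayo-ng or MHX2 tables.

```python
# Illustrative phoneme-to-viseme lookup for lipsync, in the spirit of the
# papagayo-ng rework described above. The phoneme and viseme names are
# assumptions, not the actual papagayo-ng or MHX2 data.
PHONEME_TO_VISEME = {
    "AA": "open",          # f-a-ther
    "IY": "wide",          # b-ee
    "UW": "rounded",       # b-oo-t
    "M":  "closed",        # m, b, p share a closed-lips shape
    "B":  "closed",
    "P":  "closed",
    "F":  "lip_teeth",     # f, v
    "V":  "lip_teeth",
    "TH": "tongue_teeth",
}


def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to viseme names, defaulting to 'rest'."""
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]


print(phonemes_to_visemes(["M", "AA", "TH"]))  # ['closed', 'open', 'tongue_teeth']
```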
I'm in the process of rewriting the documentation, although some other stuff grabbed my attention. I thought my mention of the license was clear enough, but it seems it can be better. So until the new documentation is clearer, we can leave this issue open. Good luck with your game! If I come across another FACS tracker, I'll let you know.
Thanks!
Proposing using Google MediaPipe Face Mesh integration.
@fire I think you're talking about this solution, right?: Thanks for the suggestion! I've wanted to use Google's face mesh since it was part of Android ARCore, but it was too hard to separate the code. Now that it's a Python package, it seems much easier to use. There is still work to be done before it can be used, though. MediaPipe Face Mesh only gives the coordinates of the tracking points; the output is not in the FACS format yet. So someone would need to make a mapping from this tracker to FACS. This is not trivial, and I'm not actively working on this project anymore. Why FACS? This project wants to stay tracker- and model-agnostic, and FACS is a great description of human facial muscles. An image explaining this can be found in the v0.4.0 branch:
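For reference, a minimal sketch of pulling the raw tracking points out of MediaPipe Face Mesh in Python. This only yields normalized x/y/z coordinates per landmark, which is exactly the part that still needs a mapping to FACS.

```python
# Minimal sketch: extract raw face-mesh landmarks with MediaPipe's Python
# package. This yields normalized x/y/z coordinates per landmark only;
# the mapping to FACS AU values is the missing piece discussed here.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh


def get_landmarks(image_bgr):
    """Return a list of (x, y, z) tuples for the first detected face, or None."""
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
        results = fm.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    face = results.multi_face_landmarks[0]
    return [(lm.x, lm.y, lm.z) for lm in face.landmark]


if __name__ == "__main__":
    frame = cv2.imread("face.jpg")          # any image containing a face
    points = get_landmarks(frame)
    print(len(points) if points else "no face found")  # 468 landmarks expected
```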
Can you sketch a way to map from this Google MediaPipe output to FACS? I have no idea where to start.
Mapping would mean that you translate the values of the mesh to muscle contraction/relaxation (Action Units - AUs). Here you can find some basic information on the Facial Action Coding System (FACS): Mapping is a research project in itself. The most common way is to find a database with images/videos that have FACS annotations (trained humans looked at the faces and provided a value between 1 and 5 for every AU in the face). Then you would train a model that learns to map the mesh points to these values. A paper by OpenFace on how they did Facial Action Unit detection:
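To make that suggested approach concrete, here is a minimal sketch of training such a mapping, assuming a FACS-annotated dataset of face-mesh landmarks already exists. The dataset, feature layout, and regressor choice are assumptions for illustration; producing a good mapping is the research effort described above.

```python
# Sketch of the mapping step: learn mesh-landmarks -> AU intensities from a
# FACS-annotated dataset. X holds flattened landmark coordinates, y holds
# human-coded AU intensities; both are assumed to exist.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

AU_NAMES = ["AU01", "AU02", "AU04", "AU06", "AU12", "AU15", "AU25"]  # subset


def train_au_mapper(X, y):
    """X: (n_samples, n_landmarks * 3) flattened coords; y: (n_samples, n_AUs)."""
    model = MultiOutputRegressor(Ridge(alpha=1.0))
    model.fit(X, y)
    return model


def landmarks_to_aus(model, landmarks):
    """Predict AU intensities for one frame of (x, y, z) landmark tuples."""
    x = np.asarray(landmarks, dtype=float).reshape(1, -1)
    pred = model.predict(x)[0]
    return dict(zip(AU_NAMES, pred.round(2)))


# Example with random stand-in data (real data would come from an annotated set)
X_fake = np.random.rand(200, 468 * 3)
y_fake = np.random.rand(200, len(AU_NAMES)) * 5.0
mapper = train_au_mapper(X_fake, y_fake)
print(landmarks_to_aus(mapper, np.random.rand(468, 3)))
```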
Based on my experience creating a full set of CMU-compatible visemes for use with the Papagayo-NG lipsync software, I would be willing to handle the mapping, contingent on there being a clear path to do so. What I mean is, I am not a programmer, so that portion would have to be laid out ahead of time, but as far as accurately transposing the visual information, I believe it is something I could do a good job of.
@Hunanbean Thanks for your offer! That's actually a different step in the process, but also a very important one. The process is as follows:
1. Track the face (the tracker outputs the coordinates of its tracking points, e.g. a face mesh).
2. Map that tracking data to FACS Action Unit (AU) values.
3. Map the AU values onto a 3D model to animate it.
What @fire is asking for is step 2, but what you are willing to help with is step 3. The advantage of this 3-step process (as opposed to directly mapping the tracking dots to a 3D model) is that once you've completed step 3, any tracker with mapped AU values can be used to animate that model. As a bonus, the FACS format lets you do additional things like interpolation, exaggerating certain parts of the facial expression, and even AI applications.
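As a small illustration of that bonus, here is a sketch of what working in the AU domain allows, with made-up AU values: interpolating between two frames and exaggerating an expression by scaling intensities.

```python
# Sketch of why the AU format is convenient: once expressions are AU value
# dictionaries, interpolation and exaggeration are simple per-AU operations.
# AU names and values here are made up for illustration.
def interpolate_aus(frame_a, frame_b, t):
    """Linear blend between two AU frames, t in [0, 1]."""
    return {au: (1 - t) * frame_a[au] + t * frame_b[au] for au in frame_a}


def exaggerate_aus(frame, factor, max_intensity=5.0):
    """Scale AU intensities, clamped to the usual 0-5 intensity range."""
    return {au: min(v * factor, max_intensity) for au, v in frame.items()}


neutral = {"AU06": 0.0, "AU12": 0.2}
smile = {"AU06": 1.5, "AU12": 3.0}

print(interpolate_aus(neutral, smile, 0.5))   # halfway between neutral and smile
print(exaggerate_aus(smile, 1.4))             # a bigger smile, capped at 5.0
```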
If tracking information is generated, I think I can take that information and convert/match it to FACS values, depending on the type of information the tracker outputs. But I would need to see the output data from the tracker to verify it is something I can do. Basically, I would need it to be at a point where it is kind of a template to fill in. For example, if the tracker outputs w = 0.2, q = 0.5, g = 0.1, then I could manually interpret those values as FACS 12 = x, FACS 17 = whatever, etc., until a viable conversion database is made. I am willing to do the work, but again, I need to make sure I can.
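Here is a sketch of the kind of hand-built conversion template described in that comment, assuming hypothetical tracker output keys ("w", "q", "g") and a manually tuned table mapping each to one or more AUs with a weight; both the keys and the weights are illustrative assumptions.

```python
# Sketch of a hand-built conversion table: hypothetical tracker outputs
# (keys "w", "q", "g") are manually mapped to AUs with tuned weights.
# Both the tracker keys and the weights are illustrative assumptions.
CONVERSION_TABLE = {
    # tracker key: list of (AU name, weight) pairs, filled in by hand
    "w": [("AU12", 1.0)],                 # e.g. mouth-corner value drives AU12
    "q": [("AU01", 0.8), ("AU02", 0.5)],
    "g": [("AU17", 1.2)],
}


def tracker_to_aus(tracker_values):
    """Convert raw tracker outputs to AU values using the manual table."""
    aus = {}
    for key, value in tracker_values.items():
        for au, weight in CONVERSION_TABLE.get(key, []):
            aus[au] = aus.get(au, 0.0) + value * weight
    return aus


print(tracker_to_aus({"w": 0.2, "q": 0.5, "g": 0.1}))
# approximately {'AU12': 0.2, 'AU01': 0.4, 'AU02': 0.25, 'AU17': 0.12}
```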
@Hunanbean @fire I did a quick exploration of MediaPipe Face Mesh. As this issue is about OpenFace licensing, I created a separate issue for this feature request. Use Google's MediaPipe Face Mesh as FACS tracker: #33
I just learned that OpenFace, which this project depends on, cannot be used for commercial purposes without purchasing a license.
Important points from the license:
USD $10,000 for OpenFace Light
USD $15,000 for OpenFace Landmark
USD $18,000 for OpenFace Full Suite (see each offering for details)
The license is non-negotiable.
Information required to complete the license:
https://cmu.flintbox.com/#technologies/5c5e7fee-6a24-467b-bb5f-eb2f72119e59
Thank you again for making this. I just felt this is an important point to note, as I was rather far into using this when I discovered that I had to pay someone else to be able to use it commercially.