What can Animancer do to make networking easier? #210
Comments
I'm not an expert on the subject, but I'm working on trying to network Animancer with Photon Fusion. I've also pointed this issue out to some experienced people to bring as much expertise to the subject as possible.
In current implementations, the most difficult things have been:
As far as I know, the people who have gotten their implementations working have usually had to resort to controlling the state almost completely themselves, by calculating it from a stored
I got a somewhat working implementation for Clip Transitions, but linear mixer speed is still acting funky. What it requires is that before calling evaluate(delta), I set all the stored parameters from the end of the last frame, from either the client or from the server if server data exists. I'm also calling a blank evaluate() before the actual evaluate(delta), just to avoid the weirdness of the animations not advancing in time if their .Time has been set for this update.
After evaluate(), I store the newest state. This is re-applied at the start of the next tick in the function above. If the server has sent data for that frame, it will have overwritten these variables.
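For clarity, a rough sketch of the per-tick order being described (the NetworkedStates collection, the wrapper methods, and _animancer are project-side code of the kind shown further down in this thread, not Animancer API):

```csharp
void SimulationTick(float deltaTime)
{
    // Re-apply the state stored at the end of the previous tick,
    // or the authoritative server data if it has arrived for this tick.
    foreach (var state in NetworkedStates)
        state.LoadSavedStateBeforeEval();

    // A blank evaluate so the newly assigned .Time values take effect
    // before the graph is advanced.
    _animancer.Evaluate();

    // Advance the graph by the fixed tick length.
    _animancer.Evaluate(deltaTime);

    // Store the result so it can be re-applied (or rolled back) next tick.
    foreach (var state in NetworkedStates)
        state.SaveStateAfterEval();
}
```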
Linear mixers also save / load their mixer parameter in their overrides of the above functions. The mixer speed seems to be the only thing syncing wrong now. Perhaps it's not based on the parameter at the call to evaluate()? Experimenting more.
How do you handle transitions? Are you keeping one of those storage objects for every state or only syncing the current state? You could try setting
How bad are the calls to evaluate() performance-wise? Also, I think I should use MoveTime() in that case, right? Does the mixer work as it should if I just set the blend parameter first and then run evaluate()?

I got my current implementation to work with mixers too now, so I'm slightly afraid to touch it after a week of working on it. What it required me to do was a struct like this to store the state after Animancer has been evaluated in a tick:

public struct AnimancerStateData : INetworkStruct
{
public bool AnimancerIsPlaying;
public float AnimancerTime;
public float AnimancerWeight;
public float AnimancerTargetWeight;
public float AnimancerFadeSpeed;
public AnimancerStateData(AnimancerState TransitionState)
{
AnimancerIsPlaying = TransitionState.IsPlaying;
AnimancerTime = TransitionState.Time;
AnimancerWeight = TransitionState.Weight;
AnimancerTargetWeight = TransitionState.TargetWeight;
AnimancerFadeSpeed = TransitionState.FadeSpeed;
}
public void ApplyToState(AnimancerState TransitionState)
{
TransitionState.IsPlaying = AnimancerIsPlaying;
TransitionState.Time = AnimancerTime;
TransitionState.SetWeight(AnimancerWeight);
TransitionState.TargetWeight = AnimancerTargetWeight;
TransitionState.FadeSpeed = AnimancerFadeSpeed;
}
}

public virtual void SaveStateAfterEval()
{
savedStateData = new(TransitionState);
}

public virtual void LoadSavedStateBeforeEval()
{
if (TransitionState == null) transitionAsset.createStateAndApply(_animancer);
savedStateData.ApplyToState(TransitionState);
}

Linear mixers do it for all their children also:

public override void LoadSavedState()
{
base.LoadSavedState();
var asMixerState = (TransitionState as Animancer.LinearMixerState);
asMixerState.Parameter = AnimancerMixerParameter;
int i = 0;
foreach (var stateData in savedChildStateData)
{
if (i < asMixerState.ChildCount)
{
stateData.ApplyToState(asMixerState.GetChild(i));
}
i++;
}
}
public override void SaveState()
{
base.SaveState();
var asMixerState = (TransitionState as Animancer.LinearMixerState);
AnimancerMixerParameter = asMixerState.Parameter;
int i = 0;
foreach (var state in (TransitionState as Animancer.LinearMixerState).ChildStates)
{
savedChildStateData.Set(i, new(state));
i++;
}
}

I have a keyed state machine where the state object itself contains the AnimancerTransitionAsset and AnimancerState references, and I'm working with the acceptable limitation of only having one AnimancerState per TransitionAsset (I'm using the fade mode in Play() which does not create additional states), so every state knows what TransitionAsset to use to create its state.

Perhaps if Animancer natively supported networking multiple states, it would have to create keys for the states that uniquely identify each AnimancerState yet are the same on every client. Perhaps a combination of TransitionAsset.Key + the index of the state created from that asset is enough? Then there might be no need for my own state machine wrapper for each state.

About your question regarding events:
There would be no need to handle rolling back events on an Animancer level, as when properly networked the effects of the events themselves could be rolled back. If an event modifies the networked simulation state, the networked variables themselves can handle rolling back whatever effects the event caused. Ultimately, the simplest way to describe how to make a roll-backable and easily networkable state for anything is to make it behave such that, given a current state, the previous state always simulates into a valid next state, no matter how many times it's run from that same state.

In Animancer terms, now that I have my prototype working, I believe the current state of an Animancer layer should be possible to boil down to a dictionary of state keys and their stored state data. Then that state could theoretically be applied to a layer in approximately the following manner, with some additional handling for removing/adding states as necessary so they conform to the dictionary: removing extra states that currently exist in Animancer and adding states that are in the dictionary but not in Animancer.
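Roughly, applying such a dictionary to a layer might look like the sketch below. GetOrCreateState, the int key scheme, and AnimancerStateData (the struct from earlier in this thread) are placeholders and assumptions, not Animancer API:

```csharp
void ApplyLayerState(AnimancerLayer layer, Dictionary<int, AnimancerStateData> layerState)
{
    foreach (var pair in layerState)
    {
        // Create the state from its asset/clip if it doesn't exist yet,
        // then overwrite its time, weight, fade, etc.
        var state = GetOrCreateState(layer, pair.Key);
        pair.Value.ApplyToState(state);
    }

    // Any state the layer currently has that isn't in the dictionary
    // would also need to be stopped or destroyed here.
}
```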
I'm not exactly sure how much complexity it would add to somehow include the clip / transition asset (or whatever it is that created the state) in the StateData, so that Animancer would know what asset or clip to use when creating the state if it doesn't yet exist.
Pretty bad because it updates the entire graph and applies the output to the model. So if you also have the regular animation update every frame you're essentially doubling the cost of your animations.
If you want events and root motion for that time period to be applied then yes.
Yes, it should.
There are 2 main challenges with that:
Maybe it would be possible to allow both. The system works using a centralised animation dictionary, but you can add to it at runtime if you want to define your transitions elsewhere. Actually, a centralised animation dictionary would also solve the ID issue because it can just have pairs of ID and Transition. This would also face similar issues to implementing transition sets: #80
Is it not possible to somehow get a hash for an AnimancerTransitionAsset.Unshared? After all, it references an asset, so can't it be hashed somehow?
From some simple testing, the hash of an AnimancerTransitionAsset.Unshared doesn't work for this. I could use the hash of the clip's name, but that would prevent you from using the same clip in multiple different transitions, and I wouldn't want to force people to create duplicate clips with different names just to get around a limitation like this.
A combination of the clip name and some of the more important transition settings would perhaps do? So it's stable on every run, but can change in another build if the settings are edited, which should be an acceptable limitation.
Generating a hash from the transition's fields might work, but would mean you can't modify transitions at runtime (because that would change the hash), which is too close to the limitations of Animator Controllers for my liking. Generating IDs at build time might solve that for builds, but not in the Unity Editor, so connecting a build and the editor to the same game wouldn't work reliably.
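For reference, the "stable hash" part of this discussion could be based on something like a deterministic FNV-1a string hash (unlike string.GetHashCode, which isn't guaranteed to be stable across builds or platforms). What to feed into it, the clip name alone or combined with transition settings, is exactly the open question above:

```csharp
// Deterministic FNV-1a hash over the characters of a string.
static int StableHash(string text)
{
    unchecked
    {
        uint hash = 2166136261;
        foreach (char c in text)
        {
            hash ^= c;
            hash *= 16777619;
        }
        return (int)hash;
    }
}
```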
Is this under active development? It would make my life a lot easier to have out-of-the-box networking support that we can apply to any architecture.
The "What Now" section at the bottom of the OP still applies. I simply don't know enough about networking or have enough spare time to be able to mess around blindly. So if you can provide any insight into those questions I raised it would help me move forward, but otherwise the idea will remain stalled. And even once I'm able to implement something, it's unlikely to be possible for it to support just any architecture. Most of the limitations I hate so much about Animator Controllers are likely partially due to a desire for the system to be inherently networkable. But all of the dynamic things Animancer lets you do like play animations from any source without registering them in a central location beforehand and configuring everything at runtime probably won't be possible in a networkable context. |
My experiences and opinions: I'd suggest taking a look at popular networking libraries and finding out the standard ways to do networked animations in each of them (especially Photon Fusion). Also look at different games and what Animancer would have to do to support the kinds of requirements those games have, e.g. rollback, tick-perfect simulation, prediction. Here are a few different cases to keep in mind:
The first thing I did when using Animancer for my networked kart game was throw out events. They are simply not reliable for gameplay logic with all of the time changes, evaluations, pauses, Area of Interest culling, etc. going on. A worthwhile thing to do would be to solve specific networking use-cases and update the docs with them. It would also be helpful to write down exactly what needs to be networked per Animancer state type to achieve visual replication (fade, length, speed, start time, etc.). This lets users clear up any doubts about whether Animancer will help with their networking needs. Here's a brief explanation of my current project's setup in terms of networking/animations:
[Networked] int animationStartTick;
[Networked] byte previousAnimation;
[Networked] byte currentAnimation;
[Networked] float fadeInDuration;

This gives me all of the information I need to do my manual fade/blend between the character "layer" and the override "layer". As you can see, by making some small sacrifices and writing the animation logic according to my project requirements, a hard problem becomes a lot more straightforward. My personal motto for this is to bake as much data beforehand as possible, and not to use animation-driven gameplay logic or rely on animation events for gameplay.

You may want to check out https://zephyrl.itch.io/network-animancer . It looks very promising for the Fusion scene. Photon Fusion also has a Battle Royale sample where they use Playables directly. I hope this was a little helpful :)
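A hedged sketch of how a blend weight can be derived purely from networked values like the four fields above, so it comes out identical on every client and after a rollback (the method name is hypothetical, the current tick and tick rate are assumed to come from the netcode, e.g. Fusion's simulation, and Mathf is UnityEngine's):

```csharp
float GetOverrideLayerWeight(int currentTick, float ticksPerSecond)
{
    // Elapsed time since the animation started, derived from ticks so it is
    // deterministic and survives rollback/resimulation.
    float elapsed = (currentTick - animationStartTick) / ticksPerSecond;

    if (fadeInDuration <= 0f)
        return 1f; // no fade requested, the current animation is fully applied

    // Linear fade in from previousAnimation to currentAnimation.
    return Mathf.Clamp01(elapsed / fadeInDuration);
}
```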
Thanks, there's a lot of useful insight in there.
That sounds like it's basically just events (name + time), but instead of giving them to Animancer to automatically execute callbacks, you're querying them yourself. Do you think it's something Animancer could/should provide a generalised solution for?
That sounds roughly similar to the idea of Transition Sets which is something I'd like to tackle as the big feature in a major version update at some point.
The author of that plugin was kind enough to give me a free key for it, so I had a quick look and the idea seems promising, but I haven't really tried it out in depth. The documentation seems minimal and the code quality doesn't fill me with confidence, but I can't comment on the effectiveness of the end result. There's also a studio partnered with Photon who are currently making a Co-Op 3rd Person Shooter sample based on Photon Fusion and Animancer. It's not ready yet, but the gameplay video they showed me was quite impressive, and based on how complete it looked I wouldn't be surprised if it released within the next few months.
I've recently found Animancer in my search for a 100% tick-accurate animation system. This asset has been talked about on the Photon and Fish-Net Discords, and I think it is the only asset that provides perfectly accurate animations on a tick. Mecanim unfortunately produces different animations even if the same time is set on both the client and server. There is a reason why Photon Fusion's 200BR sample project does not use Mecanim and instead uses the Playables API directly.

100% tick-accurate animation is pretty much mandatory to make a game like CSGO, Overwatch, and others possible. It ensures that if a player's arm is in a certain position in an animation on the server, the exact same thing is seen on the client side. You don't want to shoot a body part but end up missing because it is in a different location on the server due to animation inconsistencies during rollback.

From what I'm gathering here, the majority of the issue is due to supporting dynamic actions like playing animations without registering them in a central location. Unfortunately, I don't think there is a way around it unless you want to spend ridiculous amounts of bandwidth. All netcode libraries operate a central database to ID prefabs. This includes Photon Fusion, Fish-Net, Netcode for GameObjects, Mirror, and probably more. In my opinion, when doing networking with Animancer, all the networkable animations should be ID'd in a central location.

You don't even need to do integration with the networking libraries. All we need is a mechanism to get the current state (e.g. the ID and whatever data is needed to replicate the animation on other clients) and set it (e.g. on clients). Something like
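As a sketch of the kind of central ID registry being described (the class and method names are assumptions, not an Animancer or netcode API, and every client would need to register the same assets in the same order, e.g. from a serialized list):

```csharp
using System.Collections.Generic;
using Animancer;

public static class NetworkedAnimationRegistry
{
    private static readonly List<AnimancerTransitionAsset> _assets = new();
    private static readonly Dictionary<AnimancerTransitionAsset, ushort> _ids = new();

    // Registration order must be identical on all peers so the IDs match.
    public static ushort Register(AnimancerTransitionAsset asset)
    {
        if (_ids.TryGetValue(asset, out var id))
            return id;

        id = (ushort)_assets.Count;
        _assets.Add(asset);
        _ids.Add(asset, id);
        return id;
    }

    // The ID is what gets sent over the network instead of the asset itself.
    public static AnimancerTransitionAsset Get(ushort id) => _assets[id];
}
```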
I have some ideas for setting up an ID library and making serializable forms of the data in states, which I want to have a go at for the next major version, but I haven't started on it yet.
Hello guys, I accidentally stumbled upon this page when looking for some answers for my personal project. I must admit I haven't read all the posts in detail, but I wanted to let you know that there is now an Animation documentation page and a Fusion Animations tech sample that should help with animations using Photon Fusion. The tech sample also features a solution using Animancer (though not a tick-accurate one). @KybernetikGames, I hope you don't mind me posting it here. We should have probably touched base a long time ago 🙃 I've written both the doc and the tech sample above. I am also behind the player animations in that unannounced co-op game you mentioned above, and with my colleague Jiri we put together the BR200 project. Let me know if you would like to correct some info in the doc regarding Animancer or discuss other (animation) matters 😉
@jiristary I just read those documentation pages and they are great! They pretty much explain all the current animation options for netcode development in Unity. After spending a month playing around with Animancer, I came to the same conclusion as what was written in the documentation. My method involves writing my own FSM and a tick-accurate wrapper around Animancer states. After doing validation checks, I can confirm that the animations aren't tick accurate; they are off by a very small amount. I wasn't able to track down what was causing the small discrepancies, but your documentation mentions that it was due to the creation of weightless states during certain fades. I was scratching my head for many weeks, so it's good that someone finally got to the root cause of it.
I'm more than happy to have you post here. I had a quick read through the documentation, which mostly looks good, and I'm keen to check out the actual samples when I get time. The "creation of weightless states during certain fades" thing you mentioned is explained on the Fade Modes page if you want to link to it.

In the "Animancer + Tick Accurate Wrapper Around Animancer States" section, the "Synchronize a whole array of states" approach would likely be much more efficient if you gather only the states with a non-zero Weight or TargetWeight.

Also, having everything in such a long page makes navigation a bit annoying if you aren't linearly reading through the whole thing. I generally prefer to split my documentation pages.
@nscimerical, I am very glad you find it helpful. Regarding your animations being slightly off, I suspect it might be something other than weightless states, as those would really only cause issues during the fading time (and only in a specific fade scenario); after the fade it should be in sync again.
Thank you, I will add the link. I can imagine that setting the WeightlessThreshold to 1, effectively disabling weightless states, could be a viable option for certain solutions.
That is not a real issue with Fusion. It uses delta compression, which means that only the bits of networked data that changed are transferred over the network. But the solution definitely should not try to save other parameters to networked data when the weight and target weight are 0.
Noted :) I will bring it up to the team.
I had never considered it, but yes that would work. I'll add it to my docs.
Maybe not for the actual networked data, but serializing/diffing several dozen or more states is going to be far slower than doing a couple of float comparisons per state so that you only serialize/diff the 1-3 active states.

Now that I think about it, I might even be able to have Animancer keep track of the active states without too much overhead and expose them publicly. That would also help with internal stuff like playing a new animation, which currently iterates through and stops everything else. Hopefully one day I'll figure out how to implement Inertialization, which could remove the need for cross fading, meaning no clones and only ever 1 active state at a time (per layer).
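A minimal sketch of that filtering idea, assuming a layer exposes its states the same way the mixer code earlier in the thread does (ChildCount/GetChild); anything with zero Weight and zero TargetWeight contributes nothing and can be skipped:

```csharp
var activeStates = new List<AnimancerState>();
for (int i = 0; i < layer.ChildCount; i++)
{
    var state = layer.GetChild(i);

    // Only states that are currently contributing or fading matter for syncing.
    if (state.Weight > 0 || state.TargetWeight > 0)
        activeStates.Add(state);
}
```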
Animancer v8.0 is now available and includes a new Animation Serialization sample which may help with this sort of thing.
Background
Copied from Animancer's Documentation:
Problem
Synchronising the logical state machine but not the exact animation states should work for individual states, but wouldn't capture things like fade details and therefore couldn't support more complex networking techniques like Roll-back Netcode.
Developing a Solution
It might be useful to have a serializable type that can capture a snapshot of the current animation details to be sent over a network. But what would that entail?
Naming
What would the type be called?
AnimancerPlayableSnapshot?
Data
What values would it need to include?
Keys
Is it the user's responsibility to grab a specific state's data and apply it back to that same state or would you want to snapshot an entire AnimancerComponent into one object then apply it back in one go?
- A public static readonly SerializableKey Attack = nameof(Attack); (implicit cast from string to store its hash code as an int for networking) for every state and create all states with their corresponding key on startup?
- Or a SerializableClipTransition : ClipTransition class which has a SerializableKey field so you can just use that transition type everywhere and play it normally for it to use that key? You'd probably still need to create all the states on startup.
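For illustration, a minimal version of the hypothetical SerializableKey described above might look like this (not an existing Animancer type; a real implementation would need a hash that is stable across builds, which string.GetHashCode does not guarantee):

```csharp
public readonly struct SerializableKey
{
    public readonly int Hash;

    public SerializableKey(string name)
    {
        // Only this int would need to be sent over the network.
        Hash = name.GetHashCode();
    }

    // Allows: public static readonly SerializableKey Attack = nameof(Attack);
    public static implicit operator SerializableKey(string name) => new(name);
}
```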
Events
What about Events?
Transition Triggers
Transitions triggered by events are currently non-deterministic based on the frame rate. For example:
- Animation A has an End Event which triggers Animation B.
- AnimancerComponent.Evaluate(delta time = 2) is used to simulate forwards in one step.
- What if Animation B also has an End Event at t=0.5s which would have also been triggered to play Animation C in that time?

Do all End Events need to be replaced with something like a ClipTransitionSequence so that all the animations and timings are specified upfront in one place, so it can figure out what should be happening at any time without needing to step through each animation's events?
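To illustrate why specifying everything upfront helps, here is a small sketch (plain C#, no Animancer API; names are illustrative) of computing which animation in a predefined sequence should be playing at a given elapsed time, instead of stepping through each End Event:

```csharp
static (int index, float time) GetStateAtTime(float[] clipLengths, float elapsedTime)
{
    for (int i = 0; i < clipLengths.Length - 1; i++)
    {
        if (elapsedTime < clipLengths[i])
            return (i, elapsedTime); // still inside this clip

        elapsedTime -= clipLengths[i]; // this clip's End Event has already fired
    }

    // The last clip in the sequence holds whatever time is left.
    return (clipLengths.Length - 1, elapsedTime);
}
```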
Other State Types
Supporting more than the basic ClipStates would require additional data for them, meaning the serialization system would need to support polymorphism. Mixers in particular are very common for movement:
- A float parameter.
- A Vector2 parameter.
- The float Weight of every child state.
- The Speed, Time, etc. of a mixer's child states can also be set, so do we need to support that too?
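As a rough illustration of the extra data involved (none of these types exist in Animancer; the names and fields are assumptions):

```csharp
using System;

// Hypothetical data for a basic ClipState.
[Serializable]
public class ClipStateData
{
    public float Time;
    public float Weight;
    public float TargetWeight;
    public float FadeSpeed;
}

// A mixer would need additional fields on top of that, which is why the
// serialization system would need to support polymorphism.
[Serializable]
public class MixerStateData : ClipStateData
{
    public float Parameter;        // linear mixers; 2D mixers would need a Vector2
    public float[] ChildWeights;   // the Weight of every child state
    public float[] ChildSpeeds;    // only if per-child Speed/Time also needs syncing
}
```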
What Now?
That's far too many open questions for me to just implement a solution on my own, but I'd be more than happy to work with anyone making a networked game to help find a solution that meets their needs and move towards coming up with a generalised solution that can be included with Animancer itself.