
Alternate implementation for Solver Handler Scene Object attachment #2795

Closed
wants to merge 5 commits into from

Conversation

@SimonDarksideJ (Contributor) commented Sep 16, 2018

Overview

This implementation enables the Mixed Reality Manager to keep track of any Scene Objects created in the scene by devices and makes them available to scene components.

This is a preliminary check-in for review.

It's critical that devices are not polled or made "part" of a scene. The current MixedRealityControllerVisualizer is the representation of a controller in a scene: the GameObject of the controller. To this, the device layer attaches the SourcePoseController to enable input events from the controller to manipulate the controller GameObject.

This implementation has enabled the Visualizer to register its existence with the MixedRealityManager, so that other scripts can easily query for the GameObjects of the controllers in the scene. Scene objects interacting with scene objects.

Note: No parts of the device layer have been modified for this implementation.

Please review and comment where further changes are needed.

Additionally, this solves issue #2790

Changes

  • Adds a new Scene Object Registration system to the Mixed Reality Manager (currently "limited to" MotionControllerVisualizers)
  • Adds functionality to the SolverHandler to grab the correctly handed controller from the Scene Object Registration
  • Adds the ISourceStateHandler implementation to the SolverHandler, so it can refresh the solver when a controller is attached / detached (see the sketch below)
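
A minimal, condensed sketch of what the third bullet could look like. The interface and event data names (IMixedRealitySourceStateHandler, SourceStateEventData) are assumptions based on the vNext input system naming used elsewhere in this thread; AttachToNewTrackedObject is the method visible in the diff further down, and the base class is debated later in this conversation.

public class SolverHandler : MonoBehaviour, IMixedRealitySourceStateHandler
{
    // A new input source (e.g. a controller) appeared; re-run the attachment
    // logic so the solver can pick up its visualizer.
    public void OnSourceDetected(SourceStateEventData eventData)
    {
        AttachToNewTrackedObject();
    }

    // The source went away; re-run attachment so the solver can detach or fall back.
    public void OnSourceLost(SourceStateEventData eventData)
    {
        AttachToNewTrackedObject();
    }
}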

TODO

This is a preliminary implementation. Ideally the Scene Object Registry should be able to handle any GameObject, but this requires a fair amount of additional design.

Registry notes

This brings the registry count up to three. The distinct registries and their purposes are as follows:

  • Core Manager Registry - Unique registry for managers where there can only ever be a single instance in operation (e.g. InputSystem, BoundarySystem). Each manager receives all Unity events coordinated through the MR Manager.
  • Component Registry - Emulates the manager registry, except it allows multiple components to be added. Each component receives all Unity events coordinated through the MR Manager.
  • Scene Object Registry - A list of referenceable scene objects, to avoid the use of "FindObjectOfType" lookups. Used for regularly queried scene objects critical to a project (e.g. VR controllers represented in a scene). A registration sketch follows below.
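
A rough sketch of the registration side, assuming the visualizer adds itself to the Scene Object Registry when it is enabled and removes itself when destroyed. The exact hook points are an assumption; the list and interface names match the diff below.

public class MixedRealityControllerVisualizer : MonoBehaviour, IMixedRealitySceneObject
{
    private void OnEnable()
    {
        // Make this scene object discoverable without FindObjectOfType.
        MixedRealityManager.Instance.MixedRealitySceneObjects.Add(this);
    }

    private void OnDestroy()
    {
        // Keep the registry free of dead references.
        MixedRealityManager.Instance.MixedRealitySceneObjects.Remove(this);
    }
}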

@@ -114,6 +114,10 @@ public void ResetConfiguration(MixedRealityConfigurationProfile profile)

#endregion Mixed Reality runtime component registry

#region Mixed Reality scene object registry
public List<IMixedRealitySceneObject> MixedRealitySceneObjects { get; set; } = new List<IMixedRealitySceneObject>();
Contributor:

read only list?

Contributor Author:

Err, no. This is constantly updated as new Scene objects are added to / destroyed in the scene.
Unless I fundamentally misunderstand what a Readonly list does :D

Contributor:

Just that there's only ever one list, created when the MRM is created.
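
For reference, a get-only auto-property (the usual C# reading of this suggestion) only fixes the list reference; items can still be added and removed freely:

// The reference can never be reassigned after the manager is created...
public List<IMixedRealitySceneObject> MixedRealitySceneObjects { get; } = new List<IMixedRealitySceneObject>();

// ...which is different from exposing a genuinely read-only view, e.g.:
// public IReadOnlyList<IMixedRealitySceneObject> MixedRealitySceneObjects => sceneObjects;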

Contributor:

Not so keen on the name either. MixedRealitySceneGameObjects?

object != Object != GameObject

}
}

private MixedRealityControllerVisualizer GetVisualizerForHand(Handedness hand)
{
foreach (MixedRealityControllerVisualizer controller in MixedRealityManager.Instance.MixedRealitySceneObjects)
Contributor:

for?

Contributor Author:

I thought about it. However, this call is used so infrequently I didn't think we needed to optimise it.
Open to review of course :D (as per standards)

@StephenHodgson (Contributor), Sep 16, 2018:

We should probably be checking if the controller is null as well, just in case. Changing to a for loop might make that a bit simpler.
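
A sketch of the suggested shape, with the null check folded in. The names are taken from this PR's diff, and this is close to what the later indexed version shown further down appears to do.

private MixedRealityControllerVisualizer GetVisualizerForHand(Handedness hand)
{
    var sceneObjects = MixedRealityManager.Instance.MixedRealitySceneObjects;

    for (int i = 0; i < sceneObjects.Count; i++)
    {
        // Skip entries that are null or not controller visualizers.
        var controller = sceneObjects[i] as MixedRealityControllerVisualizer;

        if (controller != null && controller.Handedness == hand)
        {
            return controller;
        }
    }

    return null;
}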

@@ -129,14 +134,17 @@ public bool UpdateSolvers

private void Awake()
{
//Register this GameObject to receive Input System Events.
MixedRealityManager.Instance.GetManager<IMixedRealityInputSystem>()?.Register(this.gameObject);
Contributor:

Why not inherit from BaseInputHandler?

@StephenHodgson (Contributor) left a review comment:

My only request is that the SolverHandler inherit from the BaseInputHandler for proper registration.

Overall I think this is a great approach (and better than my attempt).
I hope this is a good compromise.
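
A sketch of the requested change, assuming BaseInputHandler performs the input-system registration/unregistration that the manual call in Awake() (shown in the diff above) does; the exact base class behaviour and signatures are assumptions.

// Sketch only: deriving from BaseInputHandler is assumed to replace the manual
// MixedRealityManager.Instance.GetManager<IMixedRealityInputSystem>()?.Register(gameObject) call.
public class SolverHandler : BaseInputHandler, IMixedRealitySourceStateHandler
{
    // Source-state handling as in the earlier sketch; no explicit registration needed here.
}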

@StephenHodgson StephenHodgson added this to To do in Mixed Reality Toolkit via automation Sep 16, 2018
@StephenHodgson StephenHodgson added this to To do in Version Next Beta via automation Sep 16, 2018
@StephenHodgson StephenHodgson added this to the MRTK 2018 milestone Sep 16, 2018
}
}

private MixedRealityControllerVisualizer GetVisualizerForHand(Handedness hand)
{
foreach (MixedRealityControllerVisualizer controller in MixedRealityManager.Instance.MixedRealitySceneObjects)
Contributor:

This doubles down on the current requirement that model overrides must have a pose synchronizer on them.

This has already been given as feedback that it adds an unnecessary requirement (as in, there are ways to do this without adding a script; see: HTK) to adding arbitrary prefabs as overrides, as well as likely being incompatible once we load platform controller models and create GameObjects for them at runtime, since the core has no knowledge of the concrete pose synchronizers.

Contributor Author:

How does this "double down" on the Solvers needing a Pose Synchoniser on them? This solution does nothing of the sort.

The Current Visualizer uses the pose synchronizer to move and manipulate the Controller model.

This PR allows you to access the GameObject the visualizer is using.

The Only thing added, is the ability to capture the Visualizer model in the MR Manager for ANY component to use. For which the visualiser (only on initialization and new controller detection) interrogates.

So, where is your concern?

@keveleigh (Contributor):

It's critical that devices are not polled or "part" of a scene.

Again, why is this critical? Not because of your definition of what a device is, but what actual effect does this have on an application or the Toolkit?

@StephenHodgson (Contributor) commented Sep 16, 2018:

It's critical that devices are not polled or "part" of a scene.

Again, why is this critical? Not because of your definition of what a device is, but what actual effect does this have on an application or the Toolkit?

Good question. I'm not so sure about the polling being crucial, as we already give devs the ability to get the current info from the controller at any time, so we're not preventing something like this from happening. But it is device specific, so you'll need a concrete type to poll from, which breaks cross-platform support, and that's why we encourage getting data from the input system via IMixedRealityControllerPoseSynchronizer.

As for not being part of the scene? That's because the device layer is encapsulated. All it knows about is the device and the data being read from the device. The "scene" part of the controller is the IMixedRealityControllerPoseSynchronizer, which is being driven by the IMixedRealityController.

We're separating our concerns here and making sure we're sticking to good OOP design.

This has already been given as feedback that it adds an unnecessary requirement (as in, there are ways to do this without adding a script; see: HTK)

For one, this isn't the HTK, so we need to stop trying to compare it. Second, where is this feedback? Third, without the IMixedRealityControllerPoseSynchronizer you won't know anything about the controller unless you implement it. So yes, it's a requirement for scene based controllers, as it's the representation of that controller.

That being said, this doesn't mean it's a requirement for all tracked objects by the SolverHandler.

@keveleigh (Contributor) commented Sep 16, 2018:

but it is device specific, so you'll need a concrete type to poll from, which breaks cross platform support

I do not poll from any concrete type. I only call methods from the base interfaces, and this has been tested and works completely cross platform.

For one, this isn't the HTK, so we need to stop trying to compare it.

No, but why would we make things harder to use? Adding a requirement to add a script to an object to obtain the same result as the HTK is objectively not as good. Not everything needs to be rewritten. The HTK did plenty right.

Second, where is this feedback?

I've heard it internally, and you also told me there was confusion at the Summit when arbitrary prefabs didn't "just work".

@keveleigh (Contributor) commented Sep 16, 2018:

Second, where is this feedback?

And you're right, it probably should have been filed as an issue. I've been working on a proposal for a real controller visualization solution and was going to bring it up at that time.

@StephenHodgson (Contributor) commented Sep 16, 2018:

I do not poll from any concrete type. I only call methods from the base interfaces, and this has been tested and works completely cross platform.

I know you don't; I was agreeing with you 😅 That was more or less for @SimonDarksideJ.

No, but why would we make things worse?

Seems rather subjective, but I agree we should try to reach similar results to the HTK in regards to feature parity.

Adding a requirement to add a script to an object to obtain the same result

I don't understand why this is objectively a bad idea, as we've broken out the requirements and responsibilities of the scripts to be more encapsulated and do a single task. That's just good OOP design.

Also, that's what the [RequireComponent(type)] attribute is for.
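
For context, [RequireComponent] makes Unity add the dependency automatically when the script is attached, so the "extra script" is enforced by the editor rather than remembered by the user. The concrete component name below is purely illustrative.

using UnityEngine;

// Dropping this script onto a prefab auto-adds the pose synchronizer component too.
// (MixedRealityControllerPoseSynchronizer is a hypothetical concrete type name here.)
[RequireComponent(typeof(MixedRealityControllerPoseSynchronizer))]
public class MyControllerVisuals : MonoBehaviour
{
}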

@keveleigh (Contributor):

I don't understand why this is objectively a bad idea

Think about it from the point of view of a consumer of this product: a developer working on their app.

In the HTK, they dragged a prefab somewhere and it became the controller model, tracked with the controller, and worked.

In the MRTK, if they do the exact same thing, nothing happens. They now have to add a script onto their prefab to obtain the same results. That prefab might be used elsewhere and shouldn't be modified. They're also now doing an extra step on top of the HTK steps to obtain the same result.

@StephenHodgson (Contributor):

That's because you need to use the configuration profile. You can't just add these things to the scene, as the Mixed Reality Manager handles all of that. (It's how we were able to get away from using Singletons).

That's just part of the documentation we're currently writing up.

@keveleigh (Contributor):

Also, that's what the [RequireComponent(type)] attribute is for.

What problem does this solve in this scenario?

@keveleigh (Contributor):

That's because you need to use the configuration profile. You can't just add these things to the scene

Yes, they did not just drag them into the scene in the HTK either. In both cases, the prefabs are being set in the inspector.

@StephenHodgson (Contributor) commented Sep 16, 2018:

In the HTK, they dragged a prefab somewhere and it became the controller model, tracked with the controller, and worked.

I think most of the frustration here is that this entire system isn't like the HTK at all, sans the input system events. I will reiterate: this is not the HTK.

They're also now doing an extra step on top of the HTK steps to obtain the same result.

I don't see it that way at all.

To set up a controller visualization, you just need to create your prefab, attach the visualization script (which isn't even fully fleshed out yet, so maybe they also assign all their transforms, who knows), and then assign the controller prefab in the controller mapping profile. No adding anything to your scene.

@SimonDarksideJ (Contributor, Author):

Can you simply list out your issues with the proposal, @keveleigh?
It has achieved all the requirements set out in your task and even addresses @davidkline-ms's issue, as well as starting an extensible registration system for managing any scene object.

All without compromising the base architecture and dirtying the Device layer with runtime / scene data.

@@ -190,10 +198,37 @@ protected virtual void AttachToNewTrackedObject()
case TrackedObjectType.Head:
TrackTransform(CameraCache.Main.transform);
break;
// Other cases will come online as ControllerFinder is ported appropriately.
case TrackedObjectType.LeftController:
var leftController = GetVisualizerForHand(Handedness.Left);
Contributor:

As context: I've only been getting involved with vNext over the last week or two, so I'm very unfamiliar with how stuff below the UX layer has changed. The discussion around Solvers has been a crash course for me. :)

I'm only now starting to better understand the full scope of what controller visualizers are responsible for, and in many ways I think the name "Visualizer" has led me astray on more than one occasion. As someone who hasn't been part of building the lower layers of MRTK, I assume a "Visualizer" is something that handles the visuals of the thing: meshes, textures, maybe positions. But from how @SimonDarksideJ, @StephenHodgson, and this code use the term, it means something more like a "Scene Visualizer": a scene object which is the manifestation of a device in the scene. Something like an avatar. It's far more than just the visuals.

Based on that understanding, I am going to put forward a hypothesis that @SimonDarksideJ and @StephenHodgson expect ControllerVisualizers to eventually have a whole suite of abstracted concepts. Handles, buttons, speaker and joystick locations might all be GameObject pieces of a ControllerVisualizer. There might be a world where an MRTK user could get a ControllerVisualizer, find a Speaker controller on it, and pass in an audio clip to be played on the controller's speaker. All in a cross-platform way that requires no knowledge of what actual hardware is attached. (Maybe not that in particular, but it's an example to give a sense of the scope.)

Again, extrapolating from what I think y'all are saying: for every Device class, there will be a Scene Visualizer class. Nothing in the User Abstraction Layer should ever talk to a Device. Scene Visualizers are an MRTK Construct (or is it the MRTK Interface Layer? Side note: what's really the difference?), so they're cool to talk with the fun Unity toys in the User Abstraction Layer.

Oh... I just realized this is MVVM. Scene Visualizers are the View Model. The User Abstraction Layer is like the View. The Model is the Device.

This is in contrast to the architecture which I first assumed, which is that the Device layer offers interfaces to abstract things like "Controllers", which are free to be used by the User Abstraction Layer. Definitely a simpler architecture, but it has the downside of mingling the work of keeping track of a complex piece of hardware with the work of translating the output of that data into friendly formats.

A "Controller" might be an abstract interface in the MVVM paradigm, but unlike in the simpler architecture, it's an abstract interface to be used only by the Scene Visualizer (AKA the View Model). There might be another abstract interface called "ControllerVisualizer" for use by things in the User Abstraction Layer (or maybe we would use more descriptive names for both like IControllerDevice and IControllerVisualizer).

So anyway, if I have captured the intent, I want to argue that "Visualizer" is a very confusing name for us to extend to people using MRTK. It seems too specific to rendering, and not the host of things that a View Model is responsible for doing.
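
A minimal sketch of the naming split floated above; both interface names are the hypothetical ones from this comment, not existing MRTK types.

// The "Model": raw device state, with no knowledge of the Unity scene.
public interface IControllerDevice
{
    Handedness Handedness { get; }
    bool IsTracked { get; }
}

// The "View Model": the scene-facing manifestation of that device, which
// User Abstraction Layer code (Solvers, UX, etc.) talks to instead of the device.
public interface IControllerVisualizer
{
    IControllerDevice Controller { get; }
    GameObject GameObjectProxy { get; }
}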

Contributor:

Eric, I think you nailed it right on the head. I'm down to update the terminology to better reflect MVVM.

@SimonDarksideJ (Contributor, Author), Sep 18, 2018:

Agree, an interesting way to re"visualize" the concept to another pattern.

  • A Device is the "Model" - data
  • The DeviceManager/MRManager are the View - connecting a model to a vm
  • The scene "things" are the view models - providing functionality

@Ecnassianer (Contributor), Sep 18, 2018:

Hmmm, maybe I don't understand the role of the DeviceManager/MRManager, @SimonDarksideJ ?

the View - connecting a model to a vm

In MVVM, a View doesn't connect a model to a VM; in fact, it may not necessarily even correspond to a specific model of the data (though it does correspond to the kind of data that's stored in that model [a birthdate View can't connect to a Model of an audio clip]). The ViewModel is the class responsible for translating things like Unix time (how it's stored in the model) to a string (what the View wants to display), and the View handles how strings look on the screen. A robust ViewModel might support several views (VM.TimeAsAudio(), VM.TimeAsSentence(), etc.). A View just wants a string or a number, some data to bind to. You could rewrite the model from a binary tree into an SQL database into a decryption tool and the View wouldn't need to change at all; it only talks to the ViewModel.

My understanding was that the Managers are sort of utility classes that help the Scene Representations stay coordinated/initialized with the device models, and help the Views (User Abstraction Layer) get references to the right Scene Representations. I could see that described as part of the View Model, or alternatively simply as a utility outside the MVVM paradigm (like a DNS service isn't specifically a part of the client-server paradigm, but is a necessary utility for implementing a client-server paradigm).

}
break;
case TrackedObjectType.RightController:
var rightController = GetVisualizerForHand(Handedness.Right);
Contributor:

"GetVisualizerForHand()" returning a controller is confusing on AR devices that support both hand tracking and motion controllers. This should probably be GetVisualizerForMotionController(), since a device might have both a right hand and a right motion controller at the same time.

{
var controller = MixedRealityManager.Instance.MixedRealitySceneObjects[i] as MixedRealityControllerVisualizer;

if (controller != null && controller.Handedness == hand)
Contributor:

I am a little concerned about the assumption that there is only one right-handed motion controller at a particular time. There's nothing that prevents a person from connecting multiple Bluetooth controllers. It might be a fine choice to just give them the first right-handed controller, the same way GetComponent<>() just gives the first component it finds, but I wish it was a little more explicit (or we should also offer a GetVisualizersForHand() that returns an array of all of them, the same way there is a GetComponents<>()?)
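
If the multi-controller case matters, a plural variant along those lines could sit alongside the single-result method. A sketch only, mirroring the GetComponent/GetComponents pattern with the names used in this PR:

private List<MixedRealityControllerVisualizer> GetVisualizersForHand(Handedness hand)
{
    var matches = new List<MixedRealityControllerVisualizer>();
    var sceneObjects = MixedRealityManager.Instance.MixedRealitySceneObjects;

    for (int i = 0; i < sceneObjects.Count; i++)
    {
        var controller = sceneObjects[i] as MixedRealityControllerVisualizer;

        // Collect every visualizer matching the requested handedness, in registration order.
        if (controller != null && controller.Handedness == hand)
        {
            matches.Add(controller);
        }
    }

    return matches;
}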

{
[SerializeField]
[Tooltip("Tracked object to calculate position and orientation from. If you want to manually override and use a scene object, use the TransformTarget field.")]
private TrackedObjectType trackedObjectToReference = TrackedObjectType.Head;
private TrackedObjectType trackedObjectToReference = TrackedObjectType.None;
Contributor:

None doesn't seem like a good default here, since it essentially means when it gets added it is in a non-functional state. Head is a better safe default.

private float lastUpdateTime;
private GameObject transformWithOffset;

#region MonoBehaviour Implementation

private void Awake()
private async void Awake()
Contributor:

Doesn't making Awake asynchronous violate Unity's order of operations contract?
https://docs.unity3d.com/Manual/ExecutionOrder.html

Though the Awake() docs don't specifically say Awake cannot be async, they do say it can't be a coroutine:
https://docs.unity3d.com/ScriptReference/MonoBehaviour.Awake.html

It seems safer to stick with the spirit of Awake() and finish up before any Start()s get called.

Contributor:

It doesn't change the timing of the order of operations.

@StephenHodgson (Contributor), Sep 18, 2018:

But if it is awaited, then yes, it'll delay any method calls after the await.
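
To illustrate the point: an async Awake returns control to Unity at its first await, so Start() (and other Awakes) can run before the code after the await does. A standalone sketch, not the PR's code:

using System.Threading.Tasks;
using UnityEngine;

public class AsyncAwakeExample : MonoBehaviour
{
    private async void Awake()
    {
        Debug.Log("Awake: before await");   // still runs before any Start()

        await Task.Delay(100);              // control returns to Unity here

        // Runs only after the await completes; Start() has typically run by now,
        // which is the "spirit of Awake" concern raised above.
        Debug.Log("Awake: after await");
    }

    private void Start()
    {
        Debug.Log("Start");                 // usually logs before "Awake: after await"
    }
}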

/// <summary>
/// Do not automatically track an object, use the manual reference instead.
/// </summary>
None = 0,
Contributor:

I like this addition, for completeness.

@Ecnassianer (Contributor) commented Sep 18, 2018:

Aside from the comments I left in the review, I don't have any concerns with this code itself, but I still feel pretty vague about how someone actually uses this stuff in Unity.

A particular mystery to me comes up in this line:
foreach (MixedRealityControllerVisualizer controller in MixedRealityManager.Instance.MixedRealitySceneObjects)

Where do MixedRealityControllerVisualizers come from? How is MixedRealityManager.Instance.MixedRealitySceneObjects populated? If this is the implementation we go with, I want that list to be automatically populated by the MixedRealityManager or something.

@StephenHodgson (Contributor):

Where do MixedRealityControllerVisualizers come from? How is MixedRealityManager.Instance.MixedRealitySceneObjects populated? If this is the implementation we go with, I want that list to be automatically populated by the MixedRealityManager or something.

That's what I want to address with #2799

@Ecnassianer (Contributor):

We should consider this user story as a requirement for this work:

As a content creator, I can get a solver working with no more steps than these:

  1. Import MRTK into a fresh project.
  2. Run MRTK Configure in my scene.
  3. Add a cube to the scene.
  4. Apply a Radial Solver to the cube.
  5. In the Solver Handler (which has been added for me), I set the tracked object to Motion Controller Right
  6. In the Radial Solver I set the min distance to 0.2, and the max distance to 0.7
  7. I hit play and the cube is following my motion controller in the scene

@keveleigh (Contributor) commented Sep 18, 2018:

I still feel pretty vague about how someone actually uses this stuff in Unity

That's probably my main concern as well (though for a slightly different reason, I think). This seems like a generic answer searching for additional problems to solve (and, yes, this would likely unblock solvers as far as I can tell). What other scenario would use this new scene object registry? It seems like, if we do visualization right, we could obtain the same information by asking the visualization system explicitly for a right/left controller, instead of needing to iterate through a list filled with some number of objects (that aren't all controller visualizations) and casting them all to find something that's both a controller visualization and the correct handedness. That seems more straightforward, explicit, and performant.

@Ecnassianer (Contributor):

That's what I want to address with #2799

Cool, I'm looking forward to seeing that PR, I think it will help clarify a lot. :)

@Ecnassianer (Contributor):

It seems like, if we do visualization right, we could obtain the same information by asking the visualization system explicitly for a right/left controller instead of needing to iterate through a list filled with some number of objects (that aren't all controller visualizations), casting them all to find something that's both a controller visualization and the correct handedness.

Now that you mention that, it definitely feels like a "Gimme The Left Controller" utility function that some manager should provide, rather than something written in every User Abstraction class that needs access to controllers. (Whether that function loops over a list of everything or keeps controllers separate is a different question that I have fewer opinions on.)
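
A sketch of what that manager-owned utility might look like: a hypothetical convenience method, not an existing MRTK API.

// Lives on the MixedRealityManager (or a future visualization manager), so
// consumers ask for "the left controller" instead of scanning a mixed list themselves.
public MixedRealityControllerVisualizer GetControllerVisualizer(Handedness hand)
{
    for (int i = 0; i < MixedRealitySceneObjects.Count; i++)
    {
        var visualizer = MixedRealitySceneObjects[i] as MixedRealityControllerVisualizer;

        if (visualizer != null && visualizer.Handedness == hand)
        {
            return visualizer;
        }
    }

    return null;
}

// Usage from a Solver or any other scene component:
// var left = MixedRealityManager.Instance.GetControllerVisualizer(Handedness.Left);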

}
}

private MixedRealityControllerVisualizer GetVisualizerForHand(Handedness hand)
Contributor:

This isn't specific to SolverHandlers, and should be a utility function offered by another class/library. Otherwise we'll write it over and over again. Is there already a good place to move it?

Contributor Author:

Agreed, I tried to make this as generic as possible, but I take your point that "hand" probably isn't the correct term here.

@SimonDarksideJ (Contributor, Author):

Given the discussions, I agree that #2799 should be resolved before this is merged.
We seem to have consensus on the pattern but need a better visualization manager. It may even be possible to drop the SceneObject registry if the visualizer also offers a way to query for connected controllers/hands.

Then the Solver update would be redirected / updated to use the new Visualizer logic.

@Ecnassianer (Contributor):

Then the Solver update would be redirected / updated to use the new Visualizer logic

Since #2799 is still a bit off, how do you guys feel about merging in #2784 (or whichever of the proposed implementations) with the understanding that it's just temporary until the Visualizer logic is done? I and other members of my team would like to start using Solvers in MRTK. :)

@SimonDarksideJ (Contributor, Author):

Well, this PR is more in line with the direction that seems to have the most agreement, although I'd like some more feedback on the general approach.

I understand you need solvers ASAP (although I have raised another PR, #2798, to update Solvers to a more MRTK-configurable approach, which would also be good to get feedback on), so we do need to reach a general consensus.

@StephenHodgson (Contributor):

I'm not in favor of merging #2784

@StephenHodgson (Contributor):

I'll have #2799 finished tonight

@Ecnassianer (Contributor):

I'll have #2799 finished tonight

Sweet, thank you for prioritizing it. :)

@SimonDarksideJ (Contributor, Author):

I've been pitching in and giving @StephenHodgson a hard time with a PR on his PR:
StephenHodgson#32
Feel free to jump in and comment, @Ecnassianer. It'll help to carve the way forward.

@Ecnassianer (Contributor):

Sorry, I'm blocking a big initiative and need to crunch on some project code today. I won't be able to look at this until tomorrow. :(

@StephenHodgson (Contributor):

@SimonDarksideJ what say you to closing this one in favor of #2807?

@SimonDarksideJ (Contributor, Author):

Closing, as this implementation is out of date with current plans.

Mixed Reality Toolkit automation moved this from To do to Done Sep 22, 2018
Version Next Beta automation moved this from To do to Done Sep 22, 2018
@SimonDarksideJ SimonDarksideJ deleted the mrtk_development_SolverHandlerTracked branch November 13, 2018 10:47