Interaction System
The Event System is the current implementation of interaction. Notice that whenever you add a Canvas to a Unity scene, an "Event System" object is automatically added to the scene as well. This object (or rather the script attached to it) monitors user interaction with Canvas elements. We can extend this system so that we can interact with 3D elements too. Here are the brief steps to achieve this (you don't need to do them yourself, they are already completed; an illustrative script follows the list):
- Add a collider to the 3D object you want to interact with.
- Add an Event Trigger to the 3D object.
- Set up the events in the Event Trigger.
- Add a "Physics Raycaster" script to your main camera.
- Now the element should be interactable with the mouse or the touch screen.
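As an illustration of the mechanism (this class is not part of the repo), once the collider and the Physics Raycaster are in place, a script on the 3D object can receive the same pointer events a Canvas element would. This version uses the handler interfaces directly instead of an inspector-configured Event Trigger:

```csharp
// Hypothetical example, not a class from this repo. Requires a collider on this
// object and a Physics Raycaster on the camera; the Event System then routes
// pointer events to 3D objects the same way it routes them to Canvas elements.
using UnityEngine;
using UnityEngine.EventSystems;

public class ClickableCube : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log($"Pressed {name} at screen position {eventData.position}");
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        Debug.Log($"Released {name}");
    }
}
```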
The Interactable classes wrap all of this so that you don't need to set up those generic Event Triggers yourself. Specifically, the Interactable abstract class uses three Event Trigger events: PointerDown (corresponds to a mouse button press or a screen touch), PointerUp (corresponds to a mouse button release or the finger leaving the screen), and Drag (corresponds to a mouse/finger drag). It exposes three virtual methods:
```csharp
protected virtual void OnTouchDown() { }  // PointerDown
protected virtual void OnTouchHold() { }  // Drag
protected virtual void OnTouchUp()   { }  // PointerUp
```

These are overridden by its subclasses as needed. Notice that the methods are virtual, not abstract; this means a subclass doesn't necessarily need to provide an implementation for each of them.
In Interactable's Start(), it checks whether the game object already has an Event Trigger and, if not, creates one. It then hooks the three virtual methods up to the Event Trigger. Notice that the Start() method is made public so that all of its subclasses inherit it.
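Here is a minimal sketch of what that wiring can look like (the helper method and access modifiers are assumptions for illustration, not the repo's exact code):

```csharp
// Sketch of the Interactable base class described above. Only the
// PointerDown/Drag/PointerUp hooks are shown; names besides OnTouchDown/
// OnTouchHold/OnTouchUp are illustrative assumptions.
using UnityEngine;
using UnityEngine.EventSystems;

public abstract class Interactable : MonoBehaviour
{
    protected virtual void OnTouchDown() { }   // PointerDown
    protected virtual void OnTouchHold() { }   // Drag
    protected virtual void OnTouchUp()   { }   // PointerUp

    // Public so that subclasses inherit it, as noted above.
    public void Start()
    {
        // Reuse an existing Event Trigger, or create one if the object has none.
        var trigger = GetComponent<EventTrigger>();
        if (trigger == null) trigger = gameObject.AddComponent<EventTrigger>();

        Hook(trigger, EventTriggerType.PointerDown, OnTouchDown);
        Hook(trigger, EventTriggerType.Drag,        OnTouchHold);
        Hook(trigger, EventTriggerType.PointerUp,   OnTouchUp);
    }

    // Register a callback for one Event Trigger event type.
    private static void Hook(EventTrigger trigger, EventTriggerType type, System.Action callback)
    {
        var entry = new EventTrigger.Entry { eventID = type };
        entry.callback.AddListener(_ => callback());
        trigger.triggers.Add(entry);
    }
}
```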
Take a look at the different Interactables and see how they implement the three virtual methods.
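For example, a subclass only has to override the events it cares about, along the lines of this hypothetical (not in the repo) press-highlight Interactable:

```csharp
// Hypothetical subclass, not a class from this repo: it overrides OnTouchDown
// and OnTouchUp and leaves OnTouchHold at its empty default.
using UnityEngine;

public class HighlightOnPress : Interactable
{
    [SerializeField] private Color pressedColor = Color.yellow;
    private Color normalColor;
    private Renderer rend;

    private void Awake()
    {
        rend = GetComponent<Renderer>();
        normalColor = rend.material.color;
    }

    protected override void OnTouchDown() { rend.material.color = pressedColor; }
    protected override void OnTouchUp()   { rend.material.color = normalColor; }
}
```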
The Slider is a little tricky, because the user's finger might not stay on the slider slot, yet the handle still needs to travel along the slot. There are two Transforms, a left point and a right point, and the handle Lerp()s between these two points. In its OnTouchHold() method, some linear algebra is done to calculate the rotation of the finger with respect to the camera; that rotation is then translated into an actual distance to move the handle.
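The sketch below conveys the idea in a simplified form; it replaces the camera-relative rotation math with a plain projection onto the slot axis, and the field names are assumptions rather than the repo's exact code:

```csharp
// Simplified sketch of the Slider idea (not the repo's exact math): find where
// the pointer ray meets the slot's plane, project that point onto the
// left-to-right axis, and Lerp() the handle between the two end points.
using UnityEngine;

public class SliderSketch : Interactable
{
    [SerializeField] private Transform leftPoint;   // illustrative names
    [SerializeField] private Transform rightPoint;
    [SerializeField] private Transform handle;

    protected override void OnTouchHold()
    {
        // Cast a ray through the pointer position and intersect it with a
        // camera-facing plane through the slot.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        Plane plane = new Plane(-Camera.main.transform.forward, leftPoint.position);
        if (!plane.Raycast(ray, out float enter)) return;
        Vector3 hit = ray.GetPoint(enter);

        // Project the hit point onto the left-to-right axis to get a 0..1
        // fraction, then place the handle between the two end points.
        Vector3 axis = rightPoint.position - leftPoint.position;
        float t = Mathf.Clamp01(Vector3.Dot(hit - leftPoint.position, axis) / axis.sqrMagnitude);
        handle.position = Vector3.Lerp(leftPoint.position, rightPoint.position, t);
    }
}
```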
The Input Manager's main purpose is to accommodate both the mouse (debugging) and the finger (production). For example, a left click means tapping once, and a right click means tapping twice. It also offers some shortcuts for getting the coordinates of the touch point and for converting screen coordinates to world coordinates.
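A sketch of the kind of helper this describes (the class and method names are illustrative, not the repo's actual Input Manager API):

```csharp
// Illustrative input helper that treats the mouse (debugging in the editor) and
// touch (production on a device) uniformly. Names are assumptions, not the
// repo's actual Input Manager API.
using UnityEngine;

public static class InputHelper
{
    // True on the frame a left click or a single-finger touch begins.
    public static bool TapDown()
    {
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began) return true;
        return Input.GetMouseButtonDown(0);
    }

    // Screen-space position of the current pointer (finger if present, else mouse).
    public static Vector2 PointerPosition()
    {
        return Input.touchCount > 0 ? Input.GetTouch(0).position : (Vector2)Input.mousePosition;
    }

    // Convert the pointer's screen position to a world position at the given
    // distance in front of the camera.
    public static Vector3 PointerWorldPosition(Camera cam, float distanceFromCamera)
    {
        Vector3 screen = PointerPosition();
        screen.z = distanceFromCamera;
        return cam.ScreenToWorldPoint(screen);
    }
}
```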
The cursor is the old way of interaction. A separate system was written around it, but that system was so tightly tied to the cursor that it would have been very hard to refactor the cursor out, so the new Event System replaced it completely. I won't explain it in full detail, but if you are interested, you can check commits before this to see its implementation.