What is PIE?
PIE was made to give designers and developers an easier way to create, share, and modify user interactions. At its core, PIE is a set of cross-device, platform-independent Unity components that make creating intuitive interactions easier. PIE currently supports mouse and keyboard, Leap Motion, Oculus Touch, and HoloLens gestures, with more interaction support coming soon. Use PIE if you want a quick way to give your users intuitive interactions in VR and AR, on the desktop, and on mobile. Contribute to PIE if you want to make interactions better for all developers, designers, and end users. We hope you find this useful!
Interactions are a core component of every user-facing software application, yet there are very few pattern guidelines, best practices, and tools that can be used to design and develop interactions in virtual reality (VR) and mixed reality (MR). We've explored interactions using cheap headsets, expensive headsets, hand controllers, full body motion capture suits, and just about everything in between. In the process, we created PIE, a set of tools we've used for both rapid prototyping and production-quality products. We're happy to share these tools and welcome all contributions as we look forward to the next generation of human-computer interactions.
PIE consists of modular components that can be swapped in and out. These components revolve around three core concepts:

- `Controller` represents a user input device, such as the user's eyes
- `ControllerBehavior` represents something a controller can do, like gazing at an object
- `Interaction` represents how input affects objects, such as highlighting an object when it's looked at
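To make the three concepts concrete, here's a minimal Unity sketch of how they might fit together for the gaze example above. The class names and the `OnGazedAt` message are illustrative assumptions, not PIE's actual API:

```csharp
using UnityEngine;

// Controller: wraps an input device. Here, the user's gaze (head direction).
public class GazeController : MonoBehaviour
{
    public Ray GazeRay => new Ray(transform.position, transform.forward);
}

// ControllerBehavior: something the controller can do -- gaze at objects.
public class GazeBehavior : MonoBehaviour
{
    public GazeController controller;

    void Update()
    {
        // Tell whatever the gaze ray hits that it's being looked at.
        if (Physics.Raycast(controller.GazeRay, out RaycastHit hit))
            hit.collider.SendMessage("OnGazedAt", SendMessageOptions.DontRequireReceiver);
    }
}

// Interaction: how input affects an object -- highlight it when gazed at.
public class HighlightOnGaze : MonoBehaviour
{
    void OnGazedAt() => GetComponent<Renderer>().material.color = Color.yellow;
}
```

Because each piece is its own component, you can swap the `GazeController` for a hand controller, or swap `HighlightOnGaze` for a different `Interaction`, without touching the other pieces.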
Example - Fade On Hover (3 steps)
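Under the `Controller` / `ControllerBehavior` / `Interaction` model, the three steps would plausibly be: (1) add a `Controller` to your input device, (2) add a hover `ControllerBehavior` so the controller can detect what it's pointing at, and (3) add a fade `Interaction` to the object that should react. A rough sketch of step 3, where the component and callback names are illustrative assumptions rather than PIE's actual API:

```csharp
using UnityEngine;

// Hypothetical Interaction: fade the object's alpha while it is hovered.
public class FadeOnHover : MonoBehaviour
{
    [Range(0f, 1f)] public float fadedAlpha = 0.3f;

    // Assumed hover callbacks; the real PIE behavior may use events instead.
    void OnHoverStart() => SetAlpha(fadedAlpha);
    void OnHoverEnd() => SetAlpha(1f);

    void SetAlpha(float a)
    {
        var material = GetComponent<Renderer>().material;
        var c = material.color;
        material.color = new Color(c.r, c.g, c.b, a);
    }
}
```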
Why was it designed this way?
Interactions represent how we interact with objects in the real world. For example, if you were to pick up a mug, you would use your hand (`Controller`) to grab (`ControllerBehavior`) the mug, which would react to your grab (`Interaction`). As babies we have hands (`Controller`s), but we don't know how to use them (i.e., we don't have any `ControllerBehavior`s). Over time we learn to use our hands, body parts, and tools to interact with objects, like a mug. Right now PIE is in its infancy, but as we grow our set of `Interaction`s, we'll be able to quickly add complex interactions to our applications using our collective knowledge.
Oh, one more really cool thing.
`Interaction`s can be mixed and matched. Using our baby analogy, once a child learns how to grab a mug it will quickly apply its grab logic to other objects, like things it shouldn't put in its mouth :/. PIE was designed similarly, but is much safer. Once we develop a `ControllerBehavior`, we can use it to create countless interactions. For example, we've created a `ControllerBehavior` for grabbing with the Leap Motion hand, so that `ControllerBehavior` can be used to pick up any object, whether it's a cube, a mug, or anything else. It can also be used to open a door, or flip on a light switch...the possibilities are endless since we can link the grab logic to any `Interaction`.
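As a rough sketch of that reuse (names are illustrative assumptions, not PIE's actual API), a single grab behavior can notify the grabbed object, and any number of interactions can respond:

```csharp
using UnityEngine;

// One ControllerBehavior: detects a grab however the device reports it
// (e.g. a Leap Motion pinch) and tells the grabbed object about it.
public class GrabBehavior : MonoBehaviour
{
    public void NotifyGrab(GameObject target) =>
        target.SendMessage("OnGrabbed", gameObject, SendMessageOptions.DontRequireReceiver);
}

// Many different Interactions can respond to the same behavior:
public class PickUp : MonoBehaviour
{
    void OnGrabbed(GameObject hand) => transform.SetParent(hand.transform); // follow the hand
}

public class FlipSwitch : MonoBehaviour
{
    public Light lamp;
    void OnGrabbed(GameObject hand) => lamp.enabled = !lamp.enabled; // toggle the light
}
```

The grab logic is written once, and each new object only needs its own small `Interaction` component.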
If you couldn't tell, we're pretty excited about this :).
Getting Started with PIE
Import PIE Into Your Mouth! (I mean...Into Your Own Project)
Download our latest package from the releases section and import it into your Unity project
- The "Pear.InteractionEngine" package has no external requirements
- The "Pear.InteractionEngine HoloLens" package requires
- The "Pear.InteractionEngine Leap Motion" package requires
- The "Pear.InteractionEngine OculusTouch" package requires
- The "Pear.InteractionEngine TouchMotion" package requires
If you like PIE, want to make it better, or just want to work on something cool, help us out! We think user interactions are extremely important, so we'd love to work with others to improve how we all design, develop and use interactions in VR, MR and everywhere else. Fork this repo to get started!
Who is using PIE?
Hopefully this list will continue to grow ;)