
Complete Gesture Support [Roadmap Proposal] #530

Open
filip-sakel opened this issue Feb 2, 2023 · 8 comments

@filip-sakel (Contributor)

This issue expands on the tap-gesture feature request, setting a timeline for complete gesture-support parity with SwiftUI.

Level 1: Tap Gesture & Related Protocols

Button practically uses a tap gesture behind the scenes, so there should be no surprises in explicitly introducing onTapGesture. Starting from this simple gesture, we could build the infrastructure for the rest of the gesture API. Namely:

  1. Gesture is a protocol that enables rudimentary composition and is the basis for reacting to gesture interactions.
  2. AnyGesture is Gesture's simple type eraser.
  3. GestureState is key to reactivity. It is updated through the updating(_:body:) method, which returns a GestureStateGesture. I'm not sure whether this could be implemented through the other mapping methods (onChanged and onEnded), because IIRC those fire at different points in the view lifecycle or gesture interaction. Finally, I imagine map would be easy to implement, as it only transforms the gesture's value.
  4. Gestures attach to views through gesture(_:). To reduce the initial implementation's complexity, we could omit the including mask: GestureMask parameter (a sketch of the resulting surface follows this list).
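
To make the target concrete, here's a rough sketch of what the Level 1 surface could look like from the user's side once TapGesture, AnyGesture, gesture(_:), and onTapGesture land. The names mirror SwiftUI's public API; TapCounter and the TokamakDOM import are illustrative assumptions, not code from this repository.

import TokamakDOM // or SwiftUI on Apple platforms

struct TapCounter: View {
  @State private var count = 0

  // AnyGesture erases the concrete gesture type while preserving its Value.
  private var counting: AnyGesture<Void> {
    AnyGesture(
      TapGesture(count: 1)
        .onEnded { _ in count += 1 }
    )
  }

  var body: some View {
    Text("Taps: \(count)")
      // Level 1 attaches gestures without the `including:` mask parameter.
      .gesture(counting)
      // Sugar over TapGesture; a double tap resets the counter.
      .onTapGesture(count: 2) { count = 0 }
  }
}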

Level 2: Many Gesture Types

High-level gesture recognition (like pan, rotation and pinch gestures) is free for native apps, but would likely need a custom implementation on the web. The pointer-events API seems like a good place to start. Besides this guide for implementing the pinch gesture, I didn't find a thorough guide for gesture detection in my brief research. At this point we may want to specify which gestures a given element can accept through CSS, though not every gesture type is available on all major browsers. The following gesture types would need to be recognized:

  1. SpatialTapGesture would be a refined implementation of TapGesture. The gesture would provide the taps' locations as its value by employing the pointer-events API. Namely, it would expect a pointer down and a subsequent pointer up event to fire.
  2. DragGesture requires a pointer down, pointer move (potentially with a minimum distance required), and a pointer up to end the gesture.
  3. MagnificationGesture and RotationGesture are multitouch gestures. They require two or more fingers on touch devices; research is needed on how they'd be detected with trackpad input, and I also don't know how SwiftUI handles this for devices with just a mouse (maybe through scrolling and a modifier key?). I think both of the aforementioned gestures could be implemented by constructing a vector between two fingers: magnification would measure whether the vector's magnitude grew (at least by the minimumScaleDelta), and rotation would measure whether the vector's principal argument changed (at least by the minimumAngleDelta), as sketched after this list. I don't know how more than two fingers would affect the results.
  4. LongPressGesture starts with a pointer down event. It waits for minimumDuration before firing, and eagerly terminates if a pointer move exceeds the maximumDistance. This gesture can also be attached through one onLongPressGesture method; the other methods with the same name can be safely ignored because they're only available on Apple TV.
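
To make the vector idea from item 3 concrete, here is a small arithmetic sketch in plain Swift. TouchPoint and pinchDelta are hypothetical names, not tied to any renderer; a real recognizer would feed this the two active pointer positions captured at the start of the gesture and on every pointer move.

import Foundation

// One active pointer position, e.g. as reported by pointerdown/pointermove.
struct TouchPoint {
  var x: Double
  var y: Double
}

// Scale and rotation deltas derived from the vector between two touches,
// comparing the gesture's starting pair with the current pair.
func pinchDelta(
  start: (TouchPoint, TouchPoint),
  current: (TouchPoint, TouchPoint)
) -> (scale: Double, angle: Double) {
  func vector(_ pair: (TouchPoint, TouchPoint)) -> (dx: Double, dy: Double) {
    (pair.1.x - pair.0.x, pair.1.y - pair.0.y)
  }
  let v0 = vector(start)
  let v1 = vector(current)

  // Magnification: ratio of the vectors' magnitudes.
  let scale = hypot(v1.dx, v1.dy) / hypot(v0.dx, v0.dy)

  // Rotation: change in the vector's principal argument, in radians.
  let angle = atan2(v1.dy, v1.dx) - atan2(v0.dy, v0.dx)

  return (scale, angle)
}

A recognizer would only report a value once abs(scale - 1) exceeds minimumScaleDelta or abs(angle) exceeds minimumAngleDelta, and the angle would need normalizing into (-π, π] to survive wrap-around.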

Level 3: High-Level Control

The following modifiers are used for advanced gesture interactions. After Level 2, which is far into the future, we could start tinkering with how different gestures combine on our custom gesture-detection engine.

  1. Attaching gestures through gesture(_:including:), where a mask controls precedence between the view's and its subviews' gestures. Perhaps the mask could be passed down through the environment or directly through the Fiber reconciler; masking would then change the priority of the gestures.
  2. highPriorityGesture(_:including:) could probably also be implemented by changing internal gesture priorities.
  3. defersSystemGestures(on:) is probably difficult to implement on the web; more research is required.
  4. ExclusiveGesture is a gesture where each of the provided sub-gestures fires independently. The gesture's value is either the first or the second sub-gesture's value. This would likely be implemented by polling both sub-gestures. The first sub-gesture to fire would propagate its value to the exclusive gesture, and polling for the second sub-gesture would stop. After the first sub-gesture ends, the state would be reset.
  5. A SequenceGesture waits for the first sub-gesture to fire before polling for the second one. Using the same principle as ExclusiveGesture, the first sub-gesture would be allowed to complete, and then the second one, for the sequence gesture to fire. The gesture's value is either just the first sub-gesture value, or both sub-gestures' values.
  6. A SimultaneousGesture/simultaneousGesture(_:including:) allows its sub-gestures to fire concurrently. Both gestures are continuously polled, making the simultaneous gesture's value equivalent to (First.Value?, Second.Value?).
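
For reference, the value shapes items 4–6 describe line up with SwiftUI's declarations for these gestures. Here is a compilable stand-in; the ExclusiveValue/SequenceValue/SimultaneousValue names are made up for this sketch, whereas in SwiftUI they are the nested Value types of the corresponding gestures.

// ExclusiveGesture: only one sub-gesture's value is ever reported.
enum ExclusiveValue<First, Second> {
  case first(First)
  case second(Second)
}

// SequenceGesture: just the first value, or the first paired with the
// second once the sequence has moved on (the second may still be pending).
enum SequenceValue<First, Second> {
  case first(First)
  case second(First, Second?)
}

// SimultaneousGesture: both sub-gestures are polled; either value may be absent.
struct SimultaneousValue<First, Second> {
  var first: First?
  var second: Second?
}
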
@filip-sakel (Contributor, Author)

Also attaching the Gesture definition from the Swift interface:

@available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *)
public protocol Gesture {
  associatedtype Value
  static func _makeGesture(gesture: SwiftUI._GraphValue<Self>, inputs: SwiftUI._GestureInputs) -> SwiftUI._GestureOutputs<Self.Value>
  associatedtype Body : SwiftUI.Gesture
  var body: Self.Body { get }
}

@carson-katri (Member)

Can a custom Gesture conformance use any DynamicProperty, such as @Environment?

@filip-sakel (Contributor, Author)

Yes, it appears so. Though state is reset when the gesture ends (you can only change it while the gesture is updating). I think that’s why _makeGesture takes a graph value.
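
For illustration, the kind of conformance being discussed might look like the hypothetical gesture below. QuickOrSlowPress is made up for this example; whether @Environment resolves inside it is exactly the open question, and per the reply above it appears to, with state resetting once the gesture ends.

// A hypothetical custom gesture composed via `body`, reading a DynamicProperty.
struct QuickOrSlowPress: Gesture {
  @Environment(\.isEnabled) private var isEnabled

  var body: some Gesture {
    // The environment value tweaks the underlying gesture's configuration.
    LongPressGesture(minimumDuration: isEnabled ? 0.3 : 1.0)
  }
}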

@shial4 commented Feb 20, 2023

Can’t wait for this feature to be part of tokamak :P

@shial4 commented Jun 5, 2023

Quick question: would each renderer deliver the necessary information for gestures at a given target, such as GTK, Web, and so on? For example:

  • touchBegan
  • touchEnded
  • cancelled and changed?

And gestures would be built on top of that, in the TokamakCore layer?

@filip-sakel (Contributor, Author)

It will probably depend on specific gestures. In some early experimentation, I found that Safari exposed rotation/scale gestures but not the multi-touch events needed to implement these gestures by hand. Thus, at least some renderers will have to implement gestures on their own. Even when not required, though, renderers should expose system gestures when possible, following Tokamak's philosophy of relying mostly on native functionality. However, if two targets, and by extension their renderers, do not support some gestures, we could implement the business logic in TokamakCore to avoid code duplication between renderers.

@shial4 commented Aug 2, 2023

The initial PR adding support can be found here: #538

@shial4 commented Aug 9, 2023

Based on the work I did here, I would like to start brainstorming and ask for some input/help on a few topics.

  1. First, let's start with AnyGesture<Value>; we need this type eraser for gestures. I've attempted it many times, but each time I've been blocked by Gesture.Body. Is there a smart way of doing it? Can anyone help?

  2. We need some way of blocking gestures when a subview has already captured one. HTML doesn't provide such functionality; listeners are delivered to all receivers. Following up, we need to handle GestureMask to enable and disable gestures accordingly. Also, if a view is disabled, its gestures should be too, meaning communication needs to flow both ways, up and down the view tree.

  3. Finally, the last thing I need assistance with is Transactions. They are not working for GestureState, and the animation isn't happening as it does in SwiftUI. The TODO can be found here; follow up with the code from there.

// Assuming the surrounding view declares:
//   @GestureState private var isDetectingLongPress = false
//   @State private var completedLongPress = false
.gesture(
  LongPressGesture(minimumDuration: 2)
    .updating($isDetectingLongPress) { currentState, gestureState, transaction in
      gestureState = currentState
      transaction.animation = Animation.easeIn(duration: 2.0)
    }
    .onEnded { finished in
      self.completedLongPress = finished
    }
)

This code animates in SwiftUI, but it doesn't in Tokamak.
