Is your feature request related to a problem? Please describe.
I am developing an NVDA Remote companion, which includes an Android application.
I want to be able to control the PC with a touch screen. Because a touch screen offers limited input possibilities, several modes should be added, similar to how touchscreen support works in NVDA itself.
These modes would include text mode, object navigation mode, browse mode, and so on.
Currently, the Remote client in NVDA only supports forwarding key presses and a subset of braille input gestures. Taking object navigation mode as an example, we cannot simply forward keys: on the remote side the gesture bindings might differ, and the leader may not know which keys to send. With a touchscreen, input is limited to swipes in four directions, which should be mapped onto the object hierarchy.
Describe the solution you'd like
I would like to propose an additional message type, input_gesture, which would be handled on the follower's side: the follower would execute the gesture sent by the leader.
I am ready to contribute the final solution and send a pull request.
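To illustrate the idea, here is a minimal sketch of how such a message and its follower-side handling might look. Everything here is hypothetical (the message shape, the function names, and the gesture table are illustrative assumptions, not the actual NVDA Remote API): the key point is that the leader sends a named gesture, and the follower resolves it against its own gesture map, so the two sides' key bindings never need to match.

```python
import json

# Hypothetical sketch, not the actual NVDA Remote protocol: the leader
# serializes a named gesture instead of raw key presses.
def build_input_gesture_message(gesture_id: str) -> str:
    """Serialize an input_gesture protocol message."""
    return json.dumps({"type": "input_gesture", "gesture": gesture_id})

# On the follower, the gesture name is looked up locally, so a
# four-direction swipe on the Android leader can map onto object
# navigation commands here. Table and return values are illustrative.
HYPOTHETICAL_GESTURE_SCRIPTS = {
    "nextObject": lambda: "moved to next object",
    "previousObject": lambda: "moved to previous object",
}

def handle_input_gesture(raw_message: str):
    """Execute the script bound to the received gesture, if any."""
    message = json.loads(raw_message)
    if message.get("type") != "input_gesture":
        return None
    script = HYPOTHETICAL_GESTURE_SCRIPTS.get(message["gesture"])
    return script() if script else None
```

With this shape, adding a new mode on the Android side only requires mapping swipes to different gesture names; the follower's handler stays unchanged.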
Describe alternatives you've considered
n/a
Additional context
No response