#368 - New mouse step for low-level mouse down and mouse up #369

Merged
merged 6 commits on Mar 28, 2019

Conversation

kensoh
Member

@kensoh commented Mar 28, 2019

Building on the series of enhancements around Sikuli integration #361 and #365 for greater control of the mouse, publishing a PR that adds a new `mouse` step.

This step lets users send a low-level mouse down or mouse up event to the user interface. This can be used for complex UI mouse actions, for example when dragging and dropping UI elements:

```
// below drags the UI element left by 200 pixels
hover some_element.png
mouse down
target_x = mouse_x() - 200
hover (`target_x`,`mouse_y()`)
mouse up
```
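
The same pattern generalises to dragging one element onto another. A minimal sketch, assuming source_item.png and drop_zone.png are placeholder screenshots captured by the user:

```
// drag source_item.png onto drop_zone.png using low-level mouse events
hover source_item.png
mouse down
hover drop_zone.png
mouse up
```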

create a new clean_master branch by syncing from upstream
include live mode dynamic variable for timeout step - aisingapore#347
…ouse cursor

- mouse_xy() - returns x,y coordinates as a text string, e.g. (200,400)
- mouse_x() - returns the x coordinate as an integer, e.g. 200
- mouse_y() - returns the y coordinate as an integer, e.g. 400

A use case would be using a vision step to click on specified (x,y) coordinates on the screen. This also opens up the possibility of enhancing TagUI steps so that (x,y) coordinates can be given as UI element identifiers, in addition to the current web locators and image locators. (This requires Sikuli visual automation.)
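
As an illustration, a minimal sketch that combines these helper functions with a coordinate-based step (the 100-pixel offset is an arbitrary placeholder):

```
// click 100 pixels below the current mouse position
target_y = mouse_y() + 100
click (`mouse_x()`,`target_y`)
```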

More details here - aisingapore#361
Building on aisingapore#361 and the Sikuli integration, this PR adds the ability for TagUI to interact with UI elements using (x,y) coordinates. It is backward compatible with web locators and image locators.

For example, `click (400,200)` will click on the 400th pixel from the left of the screen and the 200th pixel from the top. This also works for other steps - `rclick`, `dclick`, `hover`, `type`, `select`.
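
A minimal sketch of this coordinate syntax across a few steps (the coordinates and the typed text are placeholders, and Sikuli visual automation must be enabled):

```
// interact with arbitrary screen positions using (x,y) coordinates
hover (400,200)
dclick (400,200)
type (640,480) as hello world
```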
…p actions

@kensoh merged commit 40f46c4 into aisingapore:master Mar 28, 2019
kensoh added a commit to tebelorg/TagUI that referenced this pull request Apr 8, 2019
A new release of TagUI. The goal will be to create packaged installations of the changes, bug fixes and enhancements since August 2018, as well as the corresponding documentation updates. Notable changes include PRs from users on the Japanese (by @ariarijp), German (@derhackler) and French (@AmirJaballah) TagUI translation engines (prior to that, only English and Chinese were vetted manually by me).

And the recent PRs I sent: aisingapore#350 enabling dynamic variables in live mode, aisingapore#352 enabling datatables in test mode, aisingapore#371 keyboard step for sending complex keystrokes to the screen, aisingapore#369 mouse step for low-level mouse down and up actions on the screen, aisingapore#366 enabling interaction with UI elements by specifying their (x,y) coordinates, aisingapore#362 mouse_xy() / mouse_x() / mouse_y() helper functions, aisingapore#383 graceful error handling for unexpected and unhandled errors, and aisingapore#385 enhancement to the report option that adds auditing and tracking capability.

Last, and most importantly, migration from SikuliX v1.1.3 to SikuliX v1.1.4. Between the two versions there is a move from much older versions of OpenCV and Tesseract to major new versions. Performance and accuracy (image finding and OCR from image to text) should improve. However, I would imagine the greater accuracy may cause the automation results of some use cases to differ, as more accurate text gets retrieved and more accurate images get found. I.e., scripting based on somewhat imperfect anchors and placeholders can sometimes break when the anchors and placeholders improve and are no longer the same, if you know what I mean.

For this reason, this will be a major release, v5.0, as users, particularly users of visual automation, are encouraged to validate that the new release with SikuliX v1.1.4 still gives the kind of automation results they want, or to make modifications if needed, before they migrate to the new TagUI release.
kensoh added a commit that referenced this pull request Apr 8, 2019
…nCV & Tesseract) (#388)

Sending some PR(s) on a new release of TagUI. The goal will be to create packaged installations of the changes, bug fixes and enhancements since August 2018, as well as the corresponding documentation updates. Notable changes include PRs from users on the Japanese, German and French TagUI translation engines (names removed, otherwise this will spam them with notifications; prior to that, only English and Chinese were vetted manually by me).

And the recent PRs I sent: #350 enabling dynamic variables in live mode, #352 enabling datatables in test mode, #371 keyboard step for sending complex keystrokes to the screen, #369 mouse step for low-level mouse down and up actions on the screen, #366 enabling interaction with UI elements by specifying their (x,y) coordinates, #362 mouse_xy() / mouse_x() / mouse_y() helper functions, #383 graceful error handling for unexpected and unhandled errors, #385 enhancement to the report option that adds auditing and tracking capability, and #387 the ability to disable generation of .log, .raw and .js files by default.

Last, and most importantly, migration from SikuliX v1.1.3 to SikuliX v1.1.4. Between the two versions there is a move from much older versions of OpenCV and Tesseract to major new versions. Performance and accuracy (image finding and OCR from image to text) should improve. However, I would imagine the greater accuracy may cause the automation results of some use cases to differ, as more accurate text gets retrieved and more accurate images get found. I.e., scripting based on somewhat imperfect anchors and placeholders can sometimes break when the anchors and placeholders improve and are no longer the same, if you know what I mean.

For this reason, this will be a major release, v5.0, as users, particularly users of visual automation, are encouraged to validate that the new release with SikuliX v1.1.4 still gives the kind of automation results they want, or to make modifications if needed, before they migrate to the new TagUI release.

---

In addition, #387 disables logging and the generation of .raw, .js and .log files after each run.

Sending a PR that lets users switch logging off by default.

The default TagUI behaviour is to generate .raw (expansion of modules and subscripts), .js (generated JavaScript code) and .log (a log of what happened during the automation) files. This is probably useful by default for developers. Business users, however, are probably not close enough to the nitty-gritty of these files to bother with them.

However, there may be scenarios where such logs are explicitly not wanted, whether out of minimalism, to avoid unnecessary use of storage, or for privacy and security reasons.

With this PR, users only need to put a file named tagui_no_logging (it can be empty or have contents) in the tagui/src folder. When this file exists, TagUI will delete the .js, .raw and .log files after executing the script. Thus when `tagui flow` is run, there won't be flow.js, flow.log or flow.raw files after execution.
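
A minimal sketch of how this could look from a macOS/Linux shell (flow is the generic flow name used above):

```
# create the marker file that switches logging off (its contents do not matter)
touch tagui/src/tagui_no_logging

# run a flow - flow.js, flow.log and flow.raw are deleted after execution
tagui flow
```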