[plock-10]: Attempted fix for dropped chars #22

Merged (2 commits) on Feb 3, 2024

Conversation

@jasonjmcghee (Owner) commented Feb 2, 2024

(quick fix at lunch...)

@kengoodridge @pentamassiv

I managed to get it working with this approach. Note the 19-character chunks... it did not work with 20. 🤯

The newline fix also seemed necessary, @kengoodridge - adding a space before every newline fixed it for me, whereas only checking whether a text chunk started with one was not enough.

We'll figure out the right way to get you credit, @kengoodridge - maybe I'll open my PR as a modification of yours...

@pentamassiv - does this fix make any kind of sense to you?

I tried sleeps as you suggested, to no avail. I also noticed the 2ms sleep was outside the chunked loop, which seemed wrong, since a HID signal was being sent with potentially no delay.


FWIW, the easiest way to reproduce this issue is to cat a large text file and use the diff command.
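
For reference, here is a minimal sketch of the approach described above - not the actual plock code - assuming an enigo-0.2-style `Keyboard::text` API (the "text function" mentioned in this thread); the exact constructor and trait names may differ in the version plock pins.

```rust
use enigo::{Enigo, Keyboard, Settings};
use std::{thread, time::Duration};

// Sketch only: send the text in 19-character chunks, pad every newline in a
// chunk with a leading space, and sleep *inside* the loop so each chunk gets
// its own delay.
fn type_text(text: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut enigo = Enigo::new(&Settings::default())?;
    let chars: Vec<char> = text.chars().collect();
    for chunk in chars.chunks(19) {
        // " \n" works around chunks that would otherwise start with a newline.
        let piece = chunk.iter().collect::<String>().replace('\n', " \n");
        enigo.text(&piece)?;
        thread::sleep(Duration::from_millis(2));
    }
    Ok(())
}
```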

@pentamassiv

Sorry, I was going to comment on this yesterday, but it was already late at night so I decided to reply today after some sleep. Now it is late at night again :P

If I understand your changes correctly, you needed to reduce the chunk length to 19 because you replace each \n with space+\n, so the chunks can grow after they are made. This can potentially push a newline character to the start of a new chunk inside the text function. The workaround should work in most cases, but it would fail for strings containing multiple consecutive newlines (e.g. "123456789012\n\n\n\nerr").

The beginning and end of chunks are marked with a | character:
input:    123456789012\n\n\n\nerr
plock A:  |123456789012\n\n\n\nerr|
plock B:  |123456789012 \n \n \n \nerr|
enigo:    |123456789012 \n \n \n |\nerr|  <-- new chunk starts with a newline

I think you could improve the workaround by using replacen("\n", " \n", 1) to replace only the first occurrence of a newline. Then the chunks(19) should always work reliably.

The proper fix has to be in enigo, but I don't yet have a hunch where to look. I am also traveling and sick, so it might take some time before I can look into it :/

@jasonjmcghee (Owner, Author)

@pentamassiv totally true. Great catch.

@pentamassiv

I noticed that you removed the hack to type strings starting with a newline. However, you are still splitting the strings into chunks of 19 characters. This should not be necessary; Enigo does that internally for you.

@jasonjmcghee (Owner, Author)

@pentamassiv I was going to remove it, but it provides a way to cancel the operation. I haven't spent the time to come up with a better approach yet 😅 - is there a concept of an interrupt signal in enigo?

@pentamassiv

Interesting - why do you need that? I guess it is because you sometimes want to type a lot of text, and it might take too long due to enigo using sleep on macOS?

I'd like to get rid of the sleep, and then typing should be instantaneous just like on Windows, so in the future there should be no need for an interrupt.

@jasonjmcghee (Owner, Author)

How often do you ctrl+c in the terminal? I do it all the time. I also very often stop LLMs halfway through a response. That's what the "escape" is for: to stop whatever is running. If there's text streaming out, that should stop too.

@pentamassiv

Enigo will release all held keys when you terminate the program, but that's it. Since you only enter text, it does not seem relevant to your use case, though. Chunking the text is the only way to stop it midway.
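
For illustration, here is a hypothetical sketch of how chunking enables that mid-way stop in application code; the names (`type_until_cancelled`, `send_chunk`, `cancelled`) are made up and not from plock. The idea is simply to check a shared flag between chunks.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical sketch: type text chunk by chunk, checking a cancellation flag
// between chunks so an "escape" handler can stop output midway.
fn type_until_cancelled(mut send_chunk: impl FnMut(&str), text: &str, cancelled: Arc<AtomicBool>) {
    let chars: Vec<char> = text.chars().collect();
    for chunk in chars.chunks(19) {
        if cancelled.load(Ordering::Relaxed) {
            return; // escape pressed: stop streaming the remaining text
        }
        send_chunk(&chunk.iter().collect::<String>());
    }
}
```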
