Fix a bug causing NVDA to use the wrong language when announcing selected/unselected symbols. #7687
Conversation
When the languages of the currently selected voice and of NVDA itself differ, NVDA uses the voice language when announcing typed and erased characters. This was not the case when selecting by character: the NVDA language was always used.
Since there is no bug report for this, could you please update this with steps to reproduce? Thanks.
Sorry if this is a duplicate, but it seems my email reply didn't go through, at least I can't see it on the page.
Thanks for adding that.
feerrenrut
left a comment
Overall this looks OK. However, it causes the unit tests to fail, so it will need to be updated before we can accept the PR. To run the tests, run scons tests.
Also, I would like this to be tested with the new version of espeak that is currently incubating (on the next branch).
I've just tested it with the next branch and, as expected, it works exactly the same, because this is an NVDA issue: it can be observed when using espeak, but it isn't unique to one synthesizer. It can be observed with other synths too, for example Ivona 2 on SAPI 5. That synth also reports to NVDA which language it speaks, so the issue affects it as well. I provided my instructions for espeak because every NVDA instance has it available, but that was only meant as an easily reproducible example. I'll look into the unit tests too. At first glance, it seems that a synthesizer is now required to determine the language, and the tests fail because one isn't available in the testing environment. Providing a mock, and rewriting the relevant method so that it handles this case properly and reverts to the old behavior when needed, both seem like good approaches to solve this problem.
The previous commit broke the unit tests because getCurrentLanguage needed a synthesizer to be set, but none was set in the test environment. Fixed the getCurrentLanguage method not to require a synthesizer and to fall back to NVDA's language when a synth is not available, just as if the synth didn't declare which language it was using.
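The fallback described in this commit can be sketched roughly as follows. This is a minimal, self-contained sketch, not NVDA's actual speech.py code: NVDA_DEFAULT_LANGUAGE, no_synth, and the injected get_synth parameter are hypothetical stand-ins for NVDA's real getSynth()/getCurrentLanguage machinery.

```python
from unittest import mock

# Assumed stand-in for NVDA's own UI language (not a real NVDA constant).
NVDA_DEFAULT_LANGUAGE = "en"

def no_synth():
    """Stand-in for the synth getter; returns None in a test environment."""
    return None

def get_current_language(get_synth=no_synth):
    """Prefer the language declared by the current synthesizer; fall back
    to NVDA's own language when no synth is set or it declares none."""
    synth = get_synth()
    language = getattr(synth, "language", None) if synth is not None else None
    return language or NVDA_DEFAULT_LANGUAGE

# A mock synth makes the method testable without a real speech engine:
polish_synth = mock.Mock(language="pl")
print(get_current_language(lambda: polish_synth))  # voice language wins: "pl"
print(get_current_language())                      # no synth, falls back: "en"
```

Injecting the getter (rather than importing it) is one way to keep the method testable with a mock, as discussed earlier in the conversation; NVDA's real code resolves the synthesizer differently.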
NVDA now uses the correct language when announcing symbols when text is selected.
Summary of the issue:
When languages of the currently selected voice and of NVDA itself
differ, NVDA uses the voice language when announcing typed and erased
characters. This wasn't the case when selecting by character, the NVDA
language was always used.
Description of how this pull request fixes the issue:
A function called getCurrentLanguage was already present in speech.py; it just needed to be used instead of languageHandler to determine the locale used when reading the selected text.
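In effect, the change swaps the source of the locale at the call site. A hedged before/after sketch (the function and parameter names here are illustrative, not NVDA's actual code):

```python
def symbol_locale_old(nvda_ui_language, voice_language):
    # Before: the locale for reading selected text always came from
    # languageHandler, i.e. NVDA's UI language; the voice was ignored.
    return nvda_ui_language

def symbol_locale_new(nvda_ui_language, voice_language):
    # After: getCurrentLanguage() is consulted instead, which prefers the
    # voice language and falls back to NVDA's language if none is declared.
    return voice_language or nvda_ui_language

# With an English NVDA and a Polish voice:
print(symbol_locale_old("en", "pl"))  # "en" -- the reported bug
print(symbol_locale_new("en", "pl"))  # "pl" -- matches typed/erased characters
```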
Testing performed:
I have tested this with the English version of NVDA and Polish espeak, and I can confirm that it now works correctly.
Change log entry:
Bug Fixes
NVDA now uses the correct language when announcing symbols in selected text.