Automatic language detection based on unicode ranges #2990

Open
nvaccessAuto opened this issue Feb 13, 2013 · 73 comments

@nvaccessAuto nvaccessAuto commented Feb 13, 2013

Reported by ragb on 2013-02-13 12:26

This is kind of a spin-off of #279.

As settled some time ago, this proposal aims to implement automatic text “language” detection for NVDA. The main goal of this feature is to let users read text in different languages (or, better said, language families) using the proper synthesizer voices. By using Unicode character ranges, one can determine at least the language family of a piece of text: Latin-based (English, German, Portuguese, Spanish, French, …), Cyrillic (Russian, Ukrainian, …), kanji (Japanese, maybe Korean; I've seen this written up already, but it's too much for my memory), Greek, Arabic (Arabic, Farsi), and others.

In broad terms, implementing this feature in NVDA requires adding a detection module to the speech subsystem which intercepts speech commands and inserts “fake” language commands for the synth to change language, based on changes in the text's characters. An interface is also needed for the user to tell NVDA which particular language to choose for each language family, that is, what to assume for Latin-based characters, what to assume for Arabic-based characters, and so on.
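
To make the idea concrete, here is a minimal sketch (not the prototype code) of splitting text into language-tagged runs from hand-picked Unicode ranges; the ranges, family names, and the tagLanguages helper are simplified assumptions:

    # Rough sketch: split text into runs by script family and tag each run with the
    # language chosen for that family. Ranges and names are simplified assumptions,
    # not the actual tables used by the prototype.
    SCRIPT_RANGES = [
        (0x0041, 0x024F, "Latin"),
        (0x0370, 0x03FF, "Greek"),
        (0x0400, 0x04FF, "Cyrillic"),
        (0x0600, 0x06FF, "Arabic"),
    ]
    FAMILY_TO_LANGUAGE = {"Latin": "en", "Greek": "el", "Cyrillic": "ru", "Arabic": "ar"}

    def scriptFamily(ch):
        cp = ord(ch)
        for start, end, family in SCRIPT_RANGES:
            if start <= cp <= end:
                return family
        return None  # punctuation, digits, etc.: stay with the current voice

    def tagLanguages(text, defaultLang="en"):
        """Return (language, chunk) pairs; a real implementation would insert
        language-change commands into the speech sequence instead."""
        runs = []
        for ch in text:
            family = scriptFamily(ch)
            lang = FAMILY_TO_LANGUAGE.get(family) if family else None
            lang = lang or (runs[-1][0] if runs else defaultLang)
            if runs and runs[-1][0] == lang:
                runs[-1][1] += ch
            else:
                runs.append([lang, ch])
        return [(lang, chunk) for lang, chunk in runs]

    print(tagLanguages(u"hello, привет"))
    # [('en', 'hello, '), ('ru', 'привет')]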

I’ve implemented a prototype of this feature in a custom Vocalizer driver, with no interface to choose the “proper” language. Preliminary testing with Arabic users, using Arabic and English Vocalizer voices, has shown good results; that is, people like the idea. The language detection code was adapted from the Guess_language module, removing some of the detection code which was not applicable (trigram detection for differentiating Latin languages, for instance).

Let me explain the decision to use, for now, only Unicode-based language detection. Language detection could also be done using trigrams (see here, for instance), dictionaries, or other heuristics of that kind. However, the text passed to the synthesizer each time is very small (a line of text, a menu name, etc.), which makes these processes, which are probabilistic by nature, very error-prone. In my testing, applying trigram detection to Latin languages in NVDA proved completely unusable, besides adding a noticeable delay when speaking. For bigger text content (books, articles, etc.) it seems to work well, but I don't know whether this can be applied somehow in the future, say by analyzing virtual buffers.

Regarding punctuation, digits, and other general characters, I’m defaulting to the current language (and voice) of the synth.

I’ll create a branch with my detection module integrated within NVDA, with no interface.

Regarding the interface for selecting which language to assume for each language group (when applicable; Greek, for instance, is only itself), I picture a dialog with one combo box per language family for choosing the language to be used. I think restricting the available choices to the languages of the current synth may improve usability. I don't know where to put that dialog, or what to call it (“language detection options”?).

Any questions please ask.

Regards,

Rui Batista
Blocked by #5427, #5438


@nvaccessAuto nvaccessAuto commented Feb 13, 2013

Comment 1 by jteh on 2013-02-13 12:29
Is this technically a duplicate of #1606? (If so, we'd probably close #1606, since this one contains more technical detail.)


@nvaccessAuto nvaccessAuto commented Feb 13, 2013

Comment 2 by ragb (in reply to comment 1) on 2013-02-13 12:37
Replying to jteh:

Is this technically a duplicate of #1606? (If so, we'd probably close #1606, since this one contains more technical detail.)

I think #1606 is only related to punctuation, although, to be honest, I don't understand that ticket's description that well.


@nvaccessAuto nvaccessAuto commented May 21, 2013

Comment 3 by Ahiiron on 2013-05-21 14:35
I think that, for usability and reliability, as you said, the user would probably configure the languages to auto-switch to, like the Vocalizer implementation.


@nvaccessAuto nvaccessAuto commented Jul 13, 2015

Comment 4 by dineshkaushal on 2015-07-13 05:30
Please check auto language detection.

There is a Writing Scripts dialog in the Preferences menu. This dialog has options to add and remove languages and to move them up and down. I tested with two Devanagari languages, Hindi and Marathi, and I could see the proper language codes for those languages in the log.

Code is in the in_t2990 branch.


@nvaccessAuto nvaccessAuto commented Aug 17, 2015

Comment 6 by dineshkaushal on 2015-08-17 19:16
In this round, adjacent ranges are merged, the code is reorganized, an option to ignore language detection for the language specified by the document has been added, a detailed review of the sequence has been done, and comments have been improved. There are two branches: in_t2990, which uses ISO 15924 script codes with somewhat more complicated and presumably faster code, and in_t2990_simple, which drops the ISO codes in favour of simpler and hopefully not slower code.


@nvaccessAuto nvaccessAuto commented Sep 21, 2015

Comment 7 by jteh on 2015-09-21 05:09
Note: there was a round of code review which was unfortunately lost due to the recent server failure. However, that review was addressed. The following relates to the most recent changes.

Thanks for the changes, Dinesh. This looks pretty good. A few things:

gui

  • I didn't spot this the first time, but you pass the name= keyword argument when creating WritingScriptsDialog.languageList. As far as I can tell, this name argument isn't used for anything, so it should be removed. The label (which you add above) is what gets displayed.
  • You set a tool tip for the language list, but this was copied from elsewhere and isn't relevant here. It should be removed.
  • The label "Priority Language for auto language detection" is a bit awkward. Perhaps "Preferred languages for auto language detection"?
  • Looking at this further (now that I can add and remove entries, etc.), I think a wx.ListBox with single selection would be more appropriate, as a sighted user can then see all of the preferred languages.
  • If you hit any of the buttons in that dialog, the user's position in the list becomes invalid. Obviously, if the user removes an item, you can't restore to that item, but the position should at least move to the item above or below (see the sketch below). At present, the user just loses their position, and pressing down arrow throws them to the top item.
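
A minimal sketch of that selection-restore behaviour, assuming the list becomes a plain wx.ListBox as suggested above; the onRemove helper name is made up, not the dialog's actual handler:

    import wx

    def onRemove(listBox):
        # Hypothetical handler: delete the selected entry but keep the user's
        # position close to where it was, instead of dropping the selection.
        index = listBox.GetSelection()
        if index == wx.NOT_FOUND:
            return
        listBox.Delete(index)
        if listBox.GetCount():
            # Select the item that took the removed item's place, or the new last item.
            listBox.SetSelection(min(index, listBox.GetCount() - 1))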

unicodeScriptHandler

  • This file needs a copyright header. Copy from another module and tweak; e.g. browseMode.py.
  • langIDToScriptID is a dict (which is unordered), which means scriptIDToLangID is also unordered. I now understand why you originally had the setdefault code, but even this doesn't solve the problem of us not really understanding what language will get chosen as a fallback. I realise it's hard to choose defaults for some scripts, but there probably should at least be some defaults; e.g. English probably makes sense for Latin given the prevalence of English text. You should be able to achieve this with an OrderedDict for langIDToScriptID. scriptIDToLangID doesn't need to be ordered; you can just use your original setdefault code for that, since you always want to take the first language. There should be comments about this, though.
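
A small sketch of that OrderedDict/setdefault arrangement (the language and script IDs below are illustrative examples, not the module's actual tables):

    from collections import OrderedDict

    # Order langIDToScriptID so the first language listed for a script becomes the
    # documented fallback for that script (e.g. English for Latin).
    langIDToScriptID = OrderedDict([
        ("en", "Latn"),
        ("fr", "Latn"),
        ("ru", "Cyrl"),
        ("el", "Grek"),
    ])

    # scriptIDToLangID doesn't need to be ordered; setdefault keeps the first
    # language seen, which is now well defined because langIDToScriptID is ordered.
    scriptIDToLangID = {}
    for langID, scriptID in langIDToScriptID.items():
        scriptIDToLangID.setdefault(scriptID, langID)

    assert scriptIDToLangID["Latn"] == "en"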

unicodeScriptPrep

  • Needs copyright header; see above.
  • Please add a brief docstring explaining what this module does.

Documentation

  • Please add documentation to the User Guide concerning the new dialog.

Thanks!


@nvaccessAuto nvaccessAuto commented Sep 28, 2015

Comment 8 by dineshkaushal on 2015-09-28 08:11
Fixed all the code-related issues. I have not yet added the documentation; I will do that once the code is OK. Should I modify userGuide.html?


@nvaccessAuto nvaccessAuto commented Oct 7, 2015

Comment 9 by dineshkaushal on 2015-10-07 13:34
Added documentation for the Writing Scripts dialog under the main Configuring NVDA section.


@nvaccessAuto nvaccessAuto commented Oct 18, 2015

Comment 10 by James Teh <jamie@... on 2015-10-18 23:55
In commit eb09127:
Merge branch 't2990' into next

Incubates #2990.
Changes:
Added labels: incubating


@nvaccessAuto nvaccessAuto commented Oct 19, 2015

Comment 11 by jteh on 2015-10-19 01:22
Thanks. I made quite a few changes before incubating. Here are the significant ones:

  • Fixed bug where auto language detection was occurring for characters even with auto language switching disabled.
  • Fixed bug where pressing Remove button after opening the dialog removed the last item instead of the first.
  • Fixed bug where pressing the Remove or Move up buttons in the dialog when there were no items caused an exception.
  • Fixed bug in the dialog where, if you removed all languages and then pressed Add, the languages you removed wouldn't appear in the Add dialog. This is a common Python mistake you should be aware of: when boolean testing a list (e.g. if not ignoreLanguages), an empty list is treated as False. Most of the time, this is what you want, but in some cases (like this one), you actually need to know the difference between an empty list and None (no list provided). To differentiate these, you must use: if ignoreLanguages is None (see the sketch after this list).
  • Translator comments: corrections, removed extraneous comments, added missing comments.
  • Changed "writing scripts" to "language detection" across the board. Looking at this as an actual user, Writing Scripts just isn't intuitive to most users. We only use this for language detection anyway. I also made some other terminology and documentation more user friendly.
  • Renamed unicodeScriptHandler module to languageDetection, as it's only used for this and this is clearer about its purpose.
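
A small illustration of the None-versus-empty-list pitfall mentioned above (the function and defaults are made up for the example, not NVDA's actual code):

    def pickLanguages(available, ignoreLanguages=None):
        # Wrong: "not ignoreLanguages" is True both for None and for an empty list,
        # so a list the user explicitly emptied would be replaced by the default:
        #     if not ignoreLanguages:
        #         ignoreLanguages = ["en"]
        # Right: only fall back to the default when no list was provided at all.
        if ignoreLanguages is None:
            ignoreLanguages = ["en"]
        return [lang for lang in available if lang not in ignoreLanguages]

    print(pickLanguages(["en", "de", "ja"]))      # ['de', 'ja']: default applied
    print(pickLanguages(["en", "de", "ja"], []))  # ['en', 'de', 'ja']: empty list respected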

@nvaccessAuto nvaccessAuto commented Oct 19, 2015

Comment 12 by MarcoZehe on 2015-10-19 10:46
This has some unwanted side effects: the Latin Unicode range seems to be hard-coded to English, but that range also covers French, German, and other European languages. In my case, I work bilingually in English and German contexts all day. So even though my Windows is set to English, my synthesizer is usually set to the German voice, because I can stand the German voice speaking English, but I cannot stand the English voice of any synthesizer trying to speak German.

As a consequence: if I set my synth to German Anna in Vocalizer 2.0 for NVDA, it will still use the English Samantha voice for most things, even German web pages. I have to turn off language detection completely to get my old functionality back. That, of course, also takes away language switching where the author did use correct lang attributes on web sites or in Word documents.


@nvaccessAuto nvaccessAuto commented Oct 19, 2015

Comment 14 by James Teh <jamie@... on 2015-10-19 11:59
In commit 6fd9ad3:
Merge branch 't2990' into next: Hopefully fixed problems which caused the voice language not to be preferred for language detection.

Incubates #2990.


@nvaccessAuto nvaccessAuto commented Oct 19, 2015

Comment 16 by nishimotz on 2015-10-19 12:32
I have tested nvda_snapshot_next-12613,8dbd961 with an add-on version of a Japanese TTS,
which I develop and which supports LangChangeCommand.

For example, the word 'Yomu' ('read' in Japanese) usually consists of two characters,
0x8aad and 0x3080.

読む

The first is an ideographic character (a Chinese character),
and the second is a phonetic character (Hiragana).

To give the correct pronunciation, a Japanese TTS should take the two characters at the same time,
because the reading of a Chinese character is context-dependent in Japanese.

With this version of NVDA, the two characters are pronounced separately, so the reading of the first one is wrong.
If automatic language detection is turned off, the issue does not occur.

In unicodeScriptData.py, it seems that 0x8aad is in the "Han" range
and 0x3080 is "Hiragana".
For Japanese, they should be treated as a single item in the detectedLanguageSequence.
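
A tiny sketch of the problem (the range checks are simplified assumptions, not the tables in unicodeScriptData.py):

    # 読 (0x8aad) falls in the Han range, む (0x3080) in the Hiragana range, so a
    # per-script split sends them to the synthesizer separately.
    def scriptOf(ch):
        cp = ord(ch)
        if 0x3040 <= cp <= 0x309F:
            return "Hiragana"
        if 0x4E00 <= cp <= 0x9FFF:
            return "Han"
        return "Common"

    print([scriptOf(ch) for ch in u"読む"])  # ['Han', 'Hiragana'] -> the word gets split

    # Treating Han, Hiragana and Katakana as one group keeps the word in one chunk,
    # so a Japanese TTS can pick the context-dependent reading.
    JAPANESE_SCRIPTS = {"Han", "Hiragana", "Katakana"}
    groups = {("Japanese" if scriptOf(ch) in JAPANESE_SCRIPTS else scriptOf(ch)) for ch in u"読む"}
    print(groups)  # {'Japanese'} -> no split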


@nvaccessAuto nvaccessAuto commented Oct 26, 2015

Comment 18 by jteh (in reply to comment 16) on 2015-10-26 11:04
Dinesh, thoughts on comment:16?


@nvaccessAuto nvaccessAuto commented Oct 27, 2015

Comment 19 by nvdakor on 2015-10-27 07:51
Hi,
To whoever coded lang detection dialog: may I suggest some GUI changes:

  • Move up/down buttons: it might be better to lay them out horizontally. Also, shouldn't these buttons become disabled once the index of the selected language (GetStringSelection) reaches the lower or upper limit (0 or -1)?
  • Add/remove: it might be better to position them horizontally, following what's available in the config profiles dialog.
    I'd be happy to push these changes as part of the t2990 branch. Thanks.

@nvaccessAuto nvaccessAuto commented Oct 27, 2015

Comment 20 by nvdakor on 2015-10-27 07:53
Hi,
On second thoughts, I'd wait until the fundamentals are done (including fixing comment 16) before pushing GUI changes.


@nvaccessAuto nvaccessAuto commented Oct 27, 2015

Comment 21 by mohammed on 2015-10-27 13:47
hi.

Another GUI change would be to have only a Close button. I don't think OK and Cancel are functional in this dialog box. Thoughts?

on another note, since #5427 is closed as fixed, I think it should be removed from the blocking tickets?

thanks.


@nvaccessAuto nvaccessAuto commented Oct 28, 2015

Comment 22 by jteh on 2015-10-28 00:41
Holding this back for 2015.4, as there are outstanding issues, and even if they are fixed, there won't be sufficient time for them to be tested.
Changes:
Milestone changed from near-term to None


@nvaccessAuto nvaccessAuto commented Oct 28, 2015

Comment 23 by jteh (in reply to comment 21) on 2015-10-28 00:51
Replying to mohammed:

another GUI change would be to only have a close button. I don't think OK and cancel are functional in this dialogue box.

They should be. Cancel should discard any changes you make (e.g. removing a language you didn't intend to remove), whereas OK saves them.

on another note, since #5427 is closed as fixed, I think it should be removed from the blocking tickets?

No, it shouldn't. Blocking indicates whether another ticket was required for this one, whether it's fixed yet or not. If it is fixed, it's still useful to know that it was required.


@nvaccessAuto nvaccessAuto commented Oct 28, 2015

Comment 24 by dineshkaushal on 2015-10-28 07:32
Regarding comment 16:

The problem with Han and Hiragana is occurring because our algorithm assumes that each language has only one script. One possible solution is that, during unicodeData building, we name all Han and Hiragana characters as something like HiraganaHan and then add the language-to-script mapping for Japanese as HiraganaHan; we could do the same for Chinese and Korean.

Another solution is that we could create script groups, add a check for script groups for each character, and not split strings within a script group.

Could anyone explain which scripts are relevant for the Japanese, Chinese, and Korean languages, and how the various scripts combine for these languages?

Alternatively, a pointer to a reliable reference resource would help.


@nvaccessAuto nvaccessAuto commented Oct 28, 2015

Comment 26 by nishimotz on 2015-10-28 08:49
To start from the conclusion: the approach of the DualVoice add-on is very useful for Japanese users:

  • Latin characters are treated as the second voice's language, which is configurable.
  • In some cases, numbers or punctuation should be treated as Japanese (i.e. use the Japanese symbol dictionary or the Japanese TTS), so it would be nice to let the user turn this on or off. For example, non-native Japanese users prefer to have numbers read in their native language, such as English, but that is difficult for native Japanese users to listen to.
  • Sometimes, text mixing Latin and non-Latin characters is most naturally treated as a Japanese (primary language) sentence. Heuristics can be used for such detection, and the user could be given a choice of priority regarding this.

I think such requirements exist because the Japanese TTS and symbol dictionary already cover wider ranges of Unicode characters, for historical reasons.

If these requirements apply only to Japanese users, I will work around them for Japanese only.
However, I would like to hear from users of other languages who have similar requirements.


@nvaccessAuto nvaccessAuto commented Oct 29, 2015

Comment 27 by jteh on 2015-10-29 00:35
Note that switching to specific voices and synthesisers for specific languages is not meant to be covered here. We'll handle that separately, as among other things, it depends on speech refactor (#4877).


@nvaccessAuto nvaccessAuto commented Oct 29, 2015

Comment 28 by nishimotz on 2015-10-29 03:01
In Japan, there are some users of Vocalizer for NVDA.

https://vocalizer-nvda.com/docs/en/userguide.html#automatic-language-switching-settings

I am asking them about their usage of this functionality.

As far as I have heard, Japanese users need to be able to disable automatic language switching based on the content's language attribute and switching based on character codes separately.


@nvaccessAuto nvaccessAuto commented Oct 29, 2015

Comment 29 by jteh (in reply to comment 28) on 2015-10-29 03:09
Replying to nishimotz:

In Japan, there are some users of Vocalizer for NVDA.

As far as I heard, automatic language switching based on content attribute and character code should be separately disabled for Japanese language users.

To clarify, do you mean that these users disable language detection (using characters) but leave language switching for author-specified language enabled? Or are you saying the reverse? Or are you saying that different users have different settings, but all agree both need to be toggled separately? How well does the Vocalizer language detection implementation work for Japanese users?

For what it's worth, I'm starting to think we should allow users to disable language detection (i.e. using characters) separately. At the very least, it provides for a workaround if our language detection code gets it wrong. I'm not convinced it is necessary to separately disable author-specified language switching, though. If you disagree, can you explain why?


@nvaccessAuto nvaccessAuto commented Oct 29, 2015

Comment 30 by nishimotz on 2015-10-29 03:51
Author-specified language switching is useful for users of multilingual synthesizers; however, it should be disabled in some cases.

For example, if a synthesizer supports English and Japanese, the actual content of a web site is written in Japanese characters, and the element is incorrectly attributed as lang='en', the content cannot be accessed at all without turning off author-specified language switching.
Such websites have been reported by NVDA users in Japan.

I am now investigating the Vocalizer language detection implementation myself; however, I have heard that it is only useful when working with multilingual materials.


@nvaccessAuto nvaccessAuto commented Oct 29, 2015

Comment 31 by nishimotz on 2015-10-29 12:41
As far as I have investigated, Vocalizer driver 3.0.12 covers various needs of Japanese NVDA users.

The important feature is:
"Ignore numbers and common punctuation when detecting text language."
Without this, automatic language detection based on characters is difficult to use with Japanese TTS.

By the way, it would be nice to allow disabling "language switching for author-specified language" while enabling "detect text language based on unicode characters" in some cases.
Vocalizer for NVDA does not allow this so far.

For example, Microsoft Word already has the ability to detect the content language based on character codes.
For choosing visual appearance, such as the display font, this works very well.
However, it would be very difficult to understand if NVDA switched voice languages based on such language attributes, because a Japanese sentence usually contains half-width numbers or symbols alongside full-width Japanese characters. To be pronounced correctly, these should be sent to the Japanese TTS together.

I am asking some friends about this, but it seems Japanese users of Microsoft Word cannot use NVDA's language switching because of this.


@nvaccessAuto nvaccessAuto commented Nov 2, 2015

Comment 32 by James Teh <jamie@... on 2015-11-02 05:30
In commit 2bba21c:
Revert "NVDA now attempts to automatically detect the language of text to enable automatic language switching even if the author has not specified the language of the text. See the Language Detection section of the User Guide for details."

This is causing problems for quite a few languages and needs some additional work before it is ready.
This reverts commits 60c25e8 and 72f8514.
Re #2990.


@nvaccessAuto nvaccessAuto commented Nov 2, 2015

Comment 33 by jteh on 2015-11-02 05:31
Changes:
Removed labels: incubating


@nvaccessAuto nvaccessAuto commented Nov 4, 2015

Comment 34 by mohammed on 2015-11-04 16:00
hi.

it'd be good if people here could try the automatic language implementation in the new add-on from Codefactory. For me it works if I choose an English voice from NVDA's voice settings dialog box. The only annoyance for me is that I hear punctuation marks with the Arabic voice regardless of the "Trust voice's language when processing characters and symbols" state.

Jamie, can we perhaps make this reverted functionality available as an add-on? Because for me it is the most successful implementation where my primary language is English and Arabic is secondary. It worked perfectly for me.


@nvaccessAuto nvaccessAuto commented Nov 4, 2015

Comment 35 by jteh (in reply to comment 34) on 2015-11-04 22:24
Replying to mohammed:

it'd be good if people here could try the automatic language implementation in the new add-on from Codefactory.

Do you mean that the Code Factory add-on includes its own language detection, or do you mean you were trying an NVDA next build which included this functionality (before it was reverted)? I assume the second, but just checking.

Jamie, can we perhaps make this reverted functionality available as an add-on?

Unfortunately, no; it needs to integrate quite deeply into NVDA's speech code. However, work on this isn't being abandoned. It just needs more work before it's ready for widespread testing again.


@nishimotz nishimotz commented Aug 15, 2017

@dineshkaushal please review my pull request on your repository regarding encoding issues.

I am still investigating regarding language detection.


@nishimotz nishimotz commented Aug 15, 2017

@dineshkaushal updated my pull request regarding number characters.

Without the fix, cases such as the following cause the problem:

1個
(one item in English)

In this case, the number character should be treated as Japanese text.
Otherwise it is spoken as "one" in English, then "ko" (the reading of the ideographic character) in Japanese, which sounds stupid.
A Japanese TTS handles this whole text and gives the correct reading, "ikko."


@mohdshara mohdshara commented Aug 15, 2017

I need help with this: if I run git clone --recursive https://github.com/nvda-india/nvda/tree/in-t2990-review I get: fatal: repository 'https://github.com/nvda-india/nvda/tree/in-t2990-review/' not found
Cloning the whole nvda-india repository works, but it doesn't include this tree. I'm sure the git experts can tell me what I'm doing wrong.


@nishimotz nishimotz commented Aug 15, 2017

git clone --recursive -b in-t2990-review https://github.com/nvda-india/nvda

@mohdshara mohdshara commented Aug 15, 2017

@nishimotz thanks a lot, that worked. It works beautifully with the Windows OneCore voices. Is there a way to choose which voice speaks a language if there's more than one such voice in that synth?


@jcsteh jcsteh commented Aug 15, 2017


@dineshkaushal dineshkaushal commented Aug 16, 2017


@nishimotz nishimotz commented Aug 16, 2017

The original code treats numbers as the Common category.
Because detectScript() ignores the Common category, the language code of digits will be the same as that of the preceding characters.
For example, even if Japanese has higher priority, "Excel 2016" is spoken in English to the end.
This is difficult for Japanese users to understand.

My modification treats digits, for all languages, as part of their native script, so the preferred language priority is respected.
For example, if Japanese has higher priority, "Excel" is spoken in English and "2016" in Japanese.
This is much easier to understand.
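
A toy model of the two behaviours described above (the splitRuns function, its flag, and the per-character rules are made up for illustration, not the actual code):

    def splitRuns(text, digitsAreCommon=True, preferredLang="ja"):
        # Toy model: Latin letters go to English; digits either inherit the
        # preceding run's language (Common) or go to the preferred language.
        runs = []
        currentLang = None
        for ch in text:
            if ch.isdigit():
                lang = (currentLang or preferredLang) if digitsAreCommon else preferredLang
            elif ch.isalpha() and ord(ch) < 0x250:
                lang = "en"
            elif ch.isspace():
                lang = currentLang or preferredLang
            else:
                lang = preferredLang
            if runs and runs[-1][0] == lang:
                runs[-1][1] += ch
            else:
                runs.append([lang, ch])
            currentLang = lang
        return [(lang, chunk) for lang, chunk in runs]

    print(splitRuns("Excel 2016", digitsAreCommon=True))
    # [('en', 'Excel 2016')]: the whole string stays with the English voice
    print(splitRuns("Excel 2016", digitsAreCommon=False))
    # [('en', 'Excel '), ('ja', '2016')]: the digits go to the preferred (Japanese) voice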


@dineshkaushal dineshkaushal commented Aug 19, 2017


@nishimotz nishimotz commented Aug 19, 2017

Using a default language sounds good; however, I found an issue with your new revision.

setup:

  • Windows 10 Japanese (English available as additional language)
  • NVDA General settings > language: en (English)
  • NVDA preferred language : empty
  • NVDA Synthesizer : OneCore voice

procedure:

  • open NVDA menu > Preferences
  • move to "Windows 10 OCR"
  • expected : English voice "Windows ten o c r"
  • actual : English voice "Windows", Japanese "juu (ten in Japanese)", English "o c r"

@nishimotz nishimotz commented Aug 20, 2017

Tests are working as expected.

The second parameter of detectLanguage() is provided in speech.py.
The locale value is used as the default language of the language detector.

However, if automatic language detection is enabled in NVDA's voice settings, the locale value is set to the synthesizer's default language.
If Microsoft David is selected, the locale is set to 'en_us'.
If Microsoft Ichiro is selected in the voice settings, the locale is set to 'ja_jp', even if NVDA's general setting is set to English.
As a result, even when NVDA's language is set to English, numbers are spoken in Japanese.

Am I correct?
Is that the expected behavior?


@nishimotz nishimotz commented Aug 20, 2017

I have learned more about your code.
I am still not sure how the voice language (aka the default language) and the preferences should be used.
For example, this test, written by me, fails.
This is because the second parameter of detectLanguage has higher priority than the preferred languages, so numbers always respect the voice language.
Is it relevant or not?

	def test_case1(self):
		combinedText = u"Windows 10 OCR"
		config.conf["languageDetection"]["preferredLanguages"] = ("ja",)
		languageDetection.updateLanguagePriorityFromConfig()
		detectedLanguageSequence = languageDetection.detectLanguage(combinedText, "en_US")
		self.compareSpeechSequence(detectedLanguageSequence, [
			LangChangeCommand("en"),
			u"Windows ",
			LangChangeCommand("ja"),
			u"10 ",
			LangChangeCommand("en"),
			u"OCR"
		])
		config.conf["languageDetection"]["preferredLanguages"] = ()
		languageDetection.updateLanguagePriorityFromConfig()


@dineshkaushal dineshkaushal commented Aug 21, 2017

nishimotz added a commit to nishimotz/nvda that referenced this issue Aug 21, 2017

@nishimotz nishimotz commented Aug 21, 2017

Thank you for the clarifications regarding preferences.

I made a new pull request which only adds tests for Japanese.

dineshkaushal added a commit to nvda-india/nvda that referenced this issue Aug 23, 2017

@dineshkaushal dineshkaushal commented Aug 23, 2017


@nishimotz nishimotz commented Aug 23, 2017

So far, I think Japanese users can accept the behaviour of the current implementation.


@mohdshara mohdshara commented Aug 26, 2017

Could you summarize what work needs to be done before you consider sending this as a PR to be reviewed? For Arabic this works as expected, and it seems this is true for Japanese too.


@dineshkaushal dineshkaushal commented Nov 23, 2017


@zstanecic zstanecic commented Nov 23, 2017


@dineshkaushal dineshkaushal commented Nov 23, 2017


@zstanecic zstanecic commented Nov 23, 2017


@feerrenrut feerrenrut commented Nov 27, 2017

Yes, it's now too late for this change to go into 2017.4. This is perhaps for the best anyway; the associated PR (#7629) is a large change which will take some time to review, and given the nature of the change, it will be good for many people to use it via master and next builds before it goes into a release.


@dineshkaushal dineshkaushal commented Nov 27, 2017


@Adriani90 Adriani90 commented Jul 29, 2019

@dineshkaushal are you still considering continuing your work on this? It would be highly appreciated. Since a lot of work has been put into that PR, it would be a real shame if it were discontinued. Now that NVDA has been migrated to Python 3, I guess the PR is no longer compatible.
