chr() aliases codepoint numbers mod 2**32 #6123
Comments
From zefram@fysh.org:

chr() is reducing the supplied codepoint number mod 2**32. The output […]

-zefram
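The aliasing reported above is what you get when the code point is narrowed to a 32-bit integer before use. A minimal Java sketch (illustrative only, not Rakudo's actual code path) shows how the narrowing cast reduces the value mod 2**32:

```java
public class ChrAliasing {
    public static void main(String[] args) {
        // A code point larger than 2**32 handed to a chr-like routine...
        long requested = 0x100000063L;   // 2**32 + 0x63

        // ...gets silently truncated when narrowed to a 32-bit int,
        // i.e. reduced mod 2**32:
        int truncated = (int) requested;

        System.out.println(Integer.toHexString(truncated)); // prints "63"
        // chr(0x100000063) therefore behaves like chr(0x63), i.e. "c".
    }
}
```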
From @lizmat:

Fixed with rakudo/rakudo@20fa14be7a, tests needed.

The RT System itself - Status changed from 'new' to 'open'
From @usev6:

I started to add a test or two for this issue, but then I found the following test in S29-conversions/ord_and_chr.t:

```
#?rakudo.moar todo 'chr max RT #124837'
```

Looking at https://en.wikipedia.org/wiki/Code_point and http://www.unicode.org/glossary/#code_point I understand that U+10FFFF is indeed the maximum Unicode code point.

On the JVM backend we already throw for invalid code points (this is handled by class Character, method toChars under the hood: https://docs.oracle.com/javase/8/docs/api/java/lang/Character.html#toChars-int-):

```
$ ./perl6-j -e 'say chr(0x10FFFF+1)'
```

So, IMHO, we could do better on MoarVM as well. It feels to me that the check for valid code points shouldn't be implemented in NQP, but in MoarVM. Actually, MVM_unicode_get_name already has such a check implemented.
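For reference, the JVM-side behaviour described above comes straight from the java.lang.Character API: isValidCodePoint accepts only the range 0..0x10FFFF, and toChars throws IllegalArgumentException outside it. A small stand-alone sketch:

```java
public class CodePointLimit {
    public static void main(String[] args) {
        // The Unicode ceiling is baked into the Character class:
        System.out.println(Character.MAX_CODE_POINT == 0x10FFFF);      // true
        System.out.println(Character.isValidCodePoint(0x10FFFF));      // true
        System.out.println(Character.isValidCodePoint(0x10FFFF + 1));  // false

        try {
            // This is the call nqp's chr ends up making on the JVM backend:
            Character.toChars(0x110000);
            System.out.println("no exception");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException"); // thrown for > 0x10FFFF
        }
    }
}
```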
From @toolforger:

On 19.03.2017 at 23:00, Christian Bartolomaeus via RT wrote: […]

Yes, that's the maximum value you can encode in four bytes with UTF-8. I was wondering how the Unicode consortium might extend this limit, so I […]

TL;DR: I can confirm that 10ffff is going to remain the maximum for the […]

DETAILS

Technical limits: UTF-8 could be extended up to 0x108430ffff [1].

Political limits: Since Java chose to use surrogate pairs, and UTF-16 is not extensible, […]

Code space exhaustion: Unicode assigns code points like this: […] Assuming 10,000 new characters per year (which is conservative given the […])

Regards, […]

[1] Bit counts for each prefix are: […]
[2] […]
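The UTF-16 ceiling mentioned above can be derived directly: a surrogate pair combines 10 high-surrogate bits with 10 low-surrogate bits on top of the Basic Multilingual Plane, which pins the maximum code point at 0x10FFFF. A sketch of the arithmetic (my own illustration, not from the thread):

```java
public class Utf16Ceiling {
    public static void main(String[] args) {
        // High surrogates: U+D800..U+DBFF (1024 values, 10 bits)
        // Low surrogates:  U+DC00..U+DFFF (1024 values, 10 bits)
        int pairCombinations = 0x400 * 0x400;        // 2^20 = 0x100000

        // Surrogate pairs encode the planes above the BMP, starting at U+10000:
        int maxCodePoint = 0x10000 + pairCombinations - 1;

        System.out.println(Integer.toHexString(maxCodePoint)); // prints "10ffff"
        // So as long as UTF-16 is in use, 0x10FFFF stays the hard limit.
    }
}
```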
From @usev6:

On Mon, 20 Mar 2017 01:19:43 -0700, jo@durchholz.org wrote: […]

Thanks for sharing your findings!

I looked some more at our code and the tests we have in roast. Things are complicated ... Probably it wasn't wise of me to mix the original point of this issue ("chr() is reducing the supplied codepoint number mod 2**32") with the question of the maximum allowed code point. But here we go.

At one point in 2014 we had additional validity checks for nqp::chr. Those checks also looked for the upper limit of 0x10ffff. According to the IRC logs [1] the checks were removed [2], because the Unicode Consortium made it clear in "Corrigendum #9: Clarification About Noncharacters" [3] that noncharacters are not illegal, but reserved for private use. (See also the answer to the question "Are noncharacters invalid in Unicode strings and UTFs?" in the FAQ [4].) AFAIU the check for the upper limit was useful, since 0x110000 and above are illegal (as opposed to the noncharacters).

Trying to add those checks back, I found failures in S32-str/encode.t on MoarVM. There are tests that expect the following code to live. The tests were added for RT 123673: https://rt-archive.perl.org/perl6/Ticket/Display.html?id=123673

```
$ ./perl6-m -e '"\x[FFFFFF]".sink; say "alive"'   # .sink to avoid warning
```

Another thing to note in this context: since we have \x, the patch from lizmat didn't fix the whole mod 2**32 thing:

```
$ ./perl6-m -e 'chr(0x100000063).sink; say "alive"'   # dies as expected
```

So, adding the check for the upper limit for MoarVM [5] led to failing tests in S32-str/encode.t and did not help with the mod 2**32 problem. (AFAIU the conversion to 32 bit is done before the code from [5] in src/strings/ops.c runs.)

On the JVM backend things look a bit better. Adding similar code to method chr in src/vm/jvm/runtime/org/perl6/nqp/runtime/Ops.java helps with the upper limit for code points and helps with the mod 2**32 problem (since we cast to int after said check). The tests from S32-str/encode.t were failing before (they have been fudged for a while).

I'd be glad if someone with deeper knowledge would double check whether these tests are correct wrt "\x[FFFFFF]". In case they are dubious, I'd propose to add a validity check for the upper limit to MVM_string_chr (MoarVM) and chr (JVM). That would only leave the mod 2**32 problem on MoarVM.

[1] https://irclog.perlgeek.de/perl6/2014-03-28#i_8509990 (and below)
[2] usev6/nqp@a4eda0bcd2 (JVM) and MoarVM/MoarVM@d93a73303f (MoarVM)
[3] http://www.unicode.org/versions/corrigendum9.html
[4] http://www.unicode.org/faq/private_use.html#nonchar8
[5] Inline patch ($ git diff):

```diff
diff --git a/src/strings/ops.c b/src/strings/ops.c
index 9bfa536..7e77d21 100644
--- a/src/strings/ops.c
+++ b/src/strings/ops.c
@@ -1919,6 +1919,8 @@ MVMString * MVM_string_chr(MVMThreadContext *tc, MVMCodepoint cp) {
     if (cp < 0)
         MVM_exception_throw_adhoc(tc, "chr codepoint cannot be negative");
+    else if (cp > 0x10ffff)
+        MVM_exception_throw_adhoc(tc, "chr codepoint cannot be greater than 0x10FFFF");
     MVM_unicode_normalizer_init(tc, &norm, MVM_NORMALIZE_NFG);
     if (!MVM_unicode_normalizer_process_codepoint_to_grapheme(tc, &norm, cp, &g)) {
```
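The "check before cast" idea for the JVM backend can be sketched as a stand-alone helper. This is my own illustration, not the actual Ops.java code; the method name checkedChr is hypothetical:

```java
public class CheckedChr {
    // Hypothetical helper mirroring the proposed fix: validate the full
    // 64-bit value *before* narrowing to int, so out-of-range values
    // can no longer alias mod 2**32.
    static String checkedChr(long cp) {
        if (cp < 0)
            throw new IllegalArgumentException("chr codepoint cannot be negative");
        if (cp > 0x10FFFFL)
            throw new IllegalArgumentException("chr codepoint cannot be greater than 0x10FFFF");
        return new String(Character.toChars((int) cp)); // the cast is now safe
    }

    public static void main(String[] args) {
        System.out.println(checkedChr(0x63));    // prints "c"
        try {
            checkedChr(0x100000063L);            // would alias to 0x63 without the check
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The key point is the order of operations: the range check runs on the long, and only then is the value narrowed, which addresses both the upper-limit and the mod 2**32 problem in one place.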
From @samcv:

This now throws if the size is too large. Closing as resolved.

@samcv - Status changed from 'open' to 'resolved'
Migrated from rt.perl.org#130914 (status was 'resolved')
Searchable as RT130914$