This issue is user submitted and needs to be validated:
When the letters "abcd" are put into the Unicode length text box, both UTF-8 and UTF-16 are shown to use 32 bits. However, if UTF-16 uses 16 bits to encode a character, this should be 16*4=64 bits. Is there a problem with this or is this intentional?
Also reported by another user:
In the data representation section, the Unicode Encoding Size interactive has a bug where the same number of bits is displayed for UTF-16 as for UTF-8, allowing values such as 8 or 24 bits as the length of a piece of text in UTF-16. This is very misleading and I hope it is fixed soon so that students do not put this error into their work. Thanks :)
Also reported by another user:
I think there is a problem with the interactive that lets you compare text samples to see how many bits they will use depending on which UTF encoding scheme you use. It appears to give the same results for both UTF-8 and UTF-16. It would be great if you could check this out as students will be keen to use it for their upcoming NCEA external assessments in Computer Science. http://csfieldguide.org.nz/en/chapters/data-representation.html#comparison-of-text-representations
Also reported by another user:
I have come across what I think is a bug in your Unicode Encoding Size interactive at the bottom of the Unicode section.
It is always showing the same size for UTF-8 and UTF-16. For example, for the letter 'a' it says UTF-8 size = 8 (correct) and UTF-16 size = 8 (which is not possible; it should be 16).
Another example is the character '猫'. It says that both UTF-8 and UTF-16 are equal to 24, whereas it should be UTF-8 = 24 and UTF-16 = 16.
Can you please let me know if this is indeed a bug or if I have missed something.
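The sizes the reporters expect can be verified with Python's built-in codecs. This is a minimal sketch (not part of the interactive's code) that computes the encoded size in bits for the examples mentioned above; `utf-16-le` is used so the byte-order mark is not counted:

```python
# Encoded size in bits for each report's example strings.
for text in ["a", "abcd", "猫"]:
    utf8_bits = len(text.encode("utf-8")) * 8
    # "utf-16-le" omits the 2-byte BOM that plain "utf-16" prepends.
    utf16_bits = len(text.encode("utf-16-le")) * 8
    print(f"{text!r}: UTF-8 = {utf8_bits} bits, UTF-16 = {utf16_bits} bits")
```

This prints UTF-8 = 8 / UTF-16 = 16 for 'a', 32 / 64 for "abcd", and 24 / 16 for '猫', confirming that the interactive's matching UTF-8 and UTF-16 values are incorrect for all three examples.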