I realize this is not a JerryScript-specific question, but I was wondering what people's approaches are for dealing with strings from JS when they contain one or more \u0000 code points. Such a code point ends up being encoded as a 0x00 byte, which happens to be the "terminator" of a C string as well.
I can imagine this being a source of bugs, especially if assumptions are made in the C code w.r.t. the length given by jerry_get_(utf8_)string_length() vs strlen()/strlen_s().
Some approaches I can think of to deal with this:
- Don't use anything from `<string.h>`; instead use a proper UTF-8 library that only takes in string data in the form of a pointer + length.
- Truncate: just treat the first 0x00 as the end and wipe all data after it, to be safe.
...
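Both approaches above can be sketched in a few lines. This is a hypothetical pointer + length "slice" type (the names `str_slice`, `slice_eq`, and `slice_truncate_at_nul` are illustrative, not from any library); the first function treats 0x00 as ordinary data, the second implements the truncate-at-first-NUL option:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Pointer + length view of string data: never relies on a NUL
 * terminator, so embedded \u0000 bytes are just ordinary bytes. */
typedef struct {
    const uint8_t *data;
    size_t len;
} str_slice;

/* Compare full contents, including any bytes after an embedded NUL. */
static bool slice_eq(str_slice a, str_slice b) {
    return a.len == b.len && memcmp(a.data, b.data, a.len) == 0;
}

/* Truncate option: clamp the length at the first 0x00 byte, if any. */
static str_slice slice_truncate_at_nul(str_slice s) {
    const uint8_t *nul = memchr(s.data, 0x00, s.len);
    if (nul != NULL)
        s.len = (size_t)(nul - s.data);
    return s;
}
```

With the slice type, `memcmp`/`memcpy` with an explicit length replace `strcmp`/`strcpy`, and the truncating variant makes the data safe to hand to legacy NUL-terminated APIs at the cost of losing everything after the first 0x00.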