E.g., standalone \x80 and \xEF\xC0\xBB correctly throw (the parser catches them as invalid multibyte sequences), but the valid UTF-8 encoded byte sequence \xE2\x82\xAC is incorrectly passed through as /\xE2\x82\xAC/. That sequence is in fact equivalent to the single code point \u20AC (i.e. €) in JS.
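
A minimal sketch of the expected equivalence, using plain Node.js built-ins rather than the parser itself (assumes a strict UTF-8 decoder, which is what `TextDecoder` with `fatal: true` gives you):

```js
// The three bytes 0xE2 0x82 0xAC decode to the single code point U+20AC ("€").
const bytes = Buffer.from([0xe2, 0x82, 0xac]);
const decoded = new TextDecoder('utf-8', { fatal: true }).decode(bytes);
console.log(decoded === '\u20AC'); // true

// A standalone 0x80 (a lone continuation byte) is not valid UTF-8,
// so a strict decoder throws — matching the "correctly throw" cases above.
try {
  new TextDecoder('utf-8', { fatal: true }).decode(Buffer.from([0x80]));
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// In JS regex semantics, /\xE2\x82\xAC/ matches the three separate
// UCS-2 code units, not the single character "€":
console.log(/\xE2\x82\xAC/.test('\u20AC')); // false
console.log(/\u20AC/.test('\u20AC'));       // true
```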