$ gojq -nr '["\u0000"] | @tsv'
# The output of the command above is a single literal NUL (not visible here).
In 2015, there was discussion about what jq should do with NUL, and even though it may not be documented in the manual, the decision to have @tsv present NUL as \0 was part of the discussion at jqlang/jq#759.
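For reference, this is what the behavior settled on in jqlang/jq#759 looks like when the same filter is run against jq (the printed stand-in below is sketched from that decision rather than verified against a particular jq release):

$ jq -nr '["\u0000"] | @tsv'
\0
# Two characters, a backslash and a '0', rather than a raw NUL byte.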
So there is no way to deal with NUL characters properly, and we should not use it in the input for these filters. Rather than making it indistinguishable from "\\0", I prefer to keep it as it is, so that the user is alerted to the problematic place in the input data.
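For what it's worth, the two inputs in question really are distinct strings at the codepoint level, which is why collapsing them to the same output would be a concern; a quick check (the invocation is illustrative):

$ gojq -nc '["\u0000", "\\0"] | map(explode)'
[[0],[92,48]]
# "\u0000" is the single codepoint 0; "\\0" is a backslash (92) followed by '0' (48).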
> there is no way to deal with NUL characters properly
But when generating a TSV file, special characters (notably tabs, linefeeds, and carriage returns) must be handled sensibly, and I believe the same logic applies to NULs as well. Agreed, there is no "TSV standard" that mandates allowing raw NULs in a TSV file, but (a) there are plenty of applications which recognize the two-character sequences '\t', '\n', '\r' and '\0' as stand-ins; and (b) in the present context, jq itself establishes a time-tested and reasonable standard.
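As a concrete illustration of the stand-in logic already in place (the escaped rendering below follows the jq manual's description of @tsv; gojq's handling of tab, linefeed, and backslash is assumed to match), the proposal is simply to add NUL to the same table:

$ gojq -nr '["a\tb\nc\\d"] | @tsv'
a\tb\nc\\d
# Tab, linefeed, and backslash already become two-character stand-ins;
# the request is for "\u0000" to become \0 in the same way.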
Okay, I noticed that the escape character (backslash) is itself escaped in @tsv, so that's fine. I'm worried about the behavior of @csv and @sh, which can yield the same string for different input strings (["\u0000", "\\0"] | @tsv,@csv,@sh,map(explode)). But after a few minutes of research, it seems that dealing with NUL characters in CSV and in shells is very difficult. After all, most people are happy as long as the tool just works on their real data, and few people care about its behavior on NUL characters.
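For anyone who wants to check a particular release, a per-element variant of the snippet above makes any collision easy to spot (the invocation is illustrative, and the exact renderings depend on the version): if the tsv, csv, or sh field comes out identical for the two rows, that formatter has mapped two different strings to the same output.

$ gojq -nc '["\u0000", "\\0"] | .[]
    | {tsv: ([.] | @tsv), csv: ([.] | @csv), sh: (@sh), codepoints: explode}'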