utf8 just seems to make more sense as a string decoding default than ascii and it doesn't seem to break any tests. Of course the base64 string is in ascii but utf8 seems to be a more reasonable default case for the decoded result string.
I do not understand your concern. The ascii encoding tag carries more information than utf8; utf8 is one of the fallbacks.
so if ASCII is not good enough, it will fall through
I am sure you know what you are talking about, but you lost me pretty much completely.
I don't code much Parrot, but I have done some, and I have many years coding Perl, C, etc., including assembler. More specifically, AFAICT in ".sub decode_base64" there is a parameter called "enc". The routine decodes the base64 string into a ByteBuffer and, when it finishes doing that, converts the ByteBuffer to the string it will return with "bb.'get_string'(enc)". If an enc parameter was supplied it uses the supplied value; otherwise it currently defaults to "ascii" rather than the "utf8" I am suggesting. I don't understand what you mean, either when you say that utf8 is one of the fallbacks or when you say that the ascii encoding tag carries more information than utf8.
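To make sure we are talking about the same thing, here is a rough Python analogue of what I understand ".sub decode_base64" to be doing. The function name and the "enc" parameter mirror the Parrot routine, but the code itself is only an illustrative sketch, not the actual PIR:

```python
import base64

def decode_base64(b64_string, enc="ascii"):
    # Decode the base64 text into raw bytes (the ByteBuffer step),
    # then interpret those bytes as a string in the given encoding
    # (the bb.'get_string'(enc) step).
    raw = base64.b64decode(b64_string)
    return raw.decode(enc)

# A pure-ASCII payload decodes fine under either default:
print(decode_base64("aGVsbG8="))                   # "hello"
# A payload containing non-ASCII bytes fails with the ascii
# default but decodes cleanly when enc is utf8:
print(decode_base64("wqFob2xhIQ==", enc="utf8"))   # "¡hola!"
```

The question is only about that default: with "ascii" as the default, the second call above would raise an error unless the caller explicitly passes an encoding, whereas "utf8" would handle both cases.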
Anyway, if I am still not making any sense to you let me know and I will close the issue.
Decoded strings are always ASCII. By definition, utf8 makes no sense here: it is slower, bigger, and carries less information.