I’m not sure it’s worth the required code duplication to use a smaller buffer (i.e. use less memory). It might be worth setting cap==len in the returned slice (return src[:n:n]), although that would be a performance penalty for anyone who appends to the return value.
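For reference, the in-place approach under discussion looks roughly like this (a sketch; the real implementation may differ in details):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// decodeString mirrors the in-place strategy: convert the string once,
// decode into that same buffer (hex.Decode never writes ahead of its
// reads), and return a prefix of it. The result therefore keeps the
// full input-sized backing array alive, which is why cap is roughly
// twice len.
func decodeString(s string) ([]byte, error) {
	src := []byte(s)
	n, err := hex.Decode(src, src)
	return src[:n], err
}

func main() {
	b, err := decodeString("48656c6c6f")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s len=%d cap=%d\n", b, len(b), cap(b))
}
```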
There's no code duplication in this variant, yet the output slice is sized as I'd expect.
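A sketch of what I mean (a hypothetical `decodeString` that allocates only `DecodedLen` bytes and reuses `hex.Decode`, so no decoding logic is duplicated):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// decodeString allocates a destination of exactly DecodedLen bytes and
// delegates the actual work to hex.Decode. The only extra cost is the
// []byte(s) conversion passed as the source.
func decodeString(s string) ([]byte, error) {
	dst := make([]byte, hex.DecodedLen(len(s)))
	n, err := hex.Decode(dst, []byte(s))
	return dst[:n], err
}

func main() {
	b, err := decodeString("deadbeef")
	if err != nil {
		panic(err)
	}
	fmt.Printf("len=%d cap=%d % x\n", len(b), cap(b), b) // len=4 cap=4 de ad be ef
}
```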
The Go compiler should be smart enough to figure out that the []byte(s) conversion can be a no-op in this case.
Users of this function who want to append to the result later probably use Decode(dst, src) directly instead. Appending to the result of DecodeString feels like an edge case to me; at least in the Go source tree I didn't find a single example of it. I was therefore surprised to see that DecodeString wastes memory in the common case, where the output is expected to be complete as it is.
Trimming the return value using src[:n:n] would not help the GC, since the three-index slice still references the same underlying array; none of the excess capacity can be reclaimed while the result is alive.
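A full slice expression only lowers the capacity recorded in the slice header; the whole backing array stays reachable. A small illustration of what it does and doesn't change (hypothetical `buf`, not from the hex package):

```go
package main

import "fmt"

func main() {
	buf := make([]byte, 10) // stand-in for the oversized decode buffer
	trimmed := buf[:5:5]    // len == cap == 5, but the same array backs it

	// The three-index slice does protect buf[5:] from being clobbered:
	// append must allocate a fresh array because trimmed has no spare cap.
	grown := append(trimmed, 0xff)
	fmt.Println(buf[5], grown[5]) // 0 255

	// But the GC cannot reclaim any of buf's 10 bytes while trimmed
	// (or any slice sharing the array) is still reachable.
	fmt.Println(len(trimmed), cap(trimmed)) // 5 5
}
```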
Indeed. The only value to doing so is that it helps avoid surprise of the kind expressed in this issue. @rillig also expressed concern about "leaking data", but given that the caller already had the hex input, it's not so clear to me that this is a major concern. On balance, I don't think that we should make this change.
I used runtime.ReadMemStats to confirm this.
FWIW, this kind of thing is easier to measure using benchmarks.
Yet another side remark: it would be great if the standard encoding/* packages had Append*(dst []byte, ...) []byte functions with semantics similar to strconv.Append*. Such functions could easily be used in zero-alloc mode. There is an outdated proposal for encoding/base64 at #19366.
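For encoding/hex, such a helper might look like this (a hypothetical AppendDecode, sketched on top of the existing Decode; slices.Grow, available since Go 1.21, handles the buffer management):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"slices"
)

// AppendDecode is a hypothetical strconv.Append*-style helper: it
// decodes src and appends the result to dst. A caller that reuses one
// buffer gets zero allocations once the buffer is large enough.
func AppendDecode(dst, src []byte) ([]byte, error) {
	n := hex.DecodedLen(len(src))
	dst = slices.Grow(dst, n) // ensure cap(dst)-len(dst) >= n
	m, err := hex.Decode(dst[len(dst):len(dst)+n], src)
	return dst[:len(dst)+m], err
}

func main() {
	buf := make([]byte, 0, 64) // reusable buffer, never reallocated below
	buf, err := AppendDecode(buf, []byte("48656c6c6f2c20"))
	if err != nil {
		panic(err)
	}
	buf, err = AppendDecode(buf, []byte("776f726c64"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s cap=%d\n", buf, cap(buf)) // Hello, world cap=64
}
```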