I think this is because it literally needs `m[string(b)]` in the source code to trigger. After inlining, it looks more like:

```go
tmp1 := name
tmp2 := string(tmp1)
_, ok := m[tmp2]
```
Since this optimization is done in `walk`, it's hard to determine (1) that the index used is a byte slice converted to a string, and (2) that the byte slice was not modified between the string conversion and the map indexing. (2) is particularly hard at this phase of the compiler.
So while I think this may be fixable, it will certainly take a lot of work. Either plumbing new information into walk, or moving this optimization to SSA where that info is easier to glean. Not sure it's worth all that work.
FWIW, this was quite surprising when I came across the issue. The actual code in question was returning the string value of a field in a by-value struct, defined that way to prevent unintentional conversions between string and []byte. As it happens, the field was exported, but I had intended to unexport it, which would have made an explicit type conversion inline in the map access impossible.
My assumption that the inlined function would be exactly equivalent to the expression inline led to a significant performance regression.
ISTM that doing the optimisation at the SSA level would be nicer (and potentially open the door to other optimisations, such as passing []byte to string-argument functions without copying), but maybe that would be too much work?
Not entirely related, but a careful implementation of #29095 could help here too.
In that issue, we need to inject constant literals into inlined functions so that const-string optimizations still work after inlining.
Posting here just to create a cross-issue link.