If a map entry is created during iteration, that entry may be produced during the iteration or may be skipped. The choice may vary for each entry created and from one iteration to the next.
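This non-determinism is easy to trip over. One common way to keep a mutating loop deterministic is to snapshot the keys before iterating, so inserts made during the loop cannot affect which entries are visited. A minimal sketch (the `doubleEntries` helper is made up for illustration):

```go
package main

import "fmt"

// doubleEntries adds a doubled copy of every entry that existed when
// the call began. Ranging over m directly while inserting would make
// it unpredictable whether the new entries are visited; snapshotting
// the keys first sidesteps the question entirely.
func doubleEntries(m map[int]int) {
	keys := make([]int, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	for _, k := range keys { // iterating the slice, not the map
		m[2*k] = 2 * m[k]
	}
}

func main() {
	m := map[int]int{1: 1, 3: 3}
	doubleEntries(m)
	fmt.Println(len(m)) // 4: {1:1, 2:2, 3:3, 6:6}
}
```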
I suspect that package authors may not anticipate the non-deterministic behavior that comes with modifying a map during iteration, which can lead to subtle bugs. I believe Java goes so far as to throw a runtime exception (ConcurrentModificationException) in this situation.
How about doing it at run-time? Go already reports a fatal error when it detects concurrent reads and writes on a map. This wouldn't be a data race per se, but it might be possible to catch writes in the middle of a range at run-time in a similar way.
It's not necessarily wrong. It's entirely reasonable to add elements to a map during an iteration if the new elements have some characteristic that allows the iteration to skip them.
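For example, new entries can be tagged so the loop recognizes and skips them whether or not the iterator happens to produce them. A sketch using a made-up generation scheme (`process` and the key naming are my own choices, not an established idiom):

```go
package main

import "fmt"

// process visits every entry belonging to the current generation and
// derives a new entry tagged with the next generation. Entries inserted
// during the range may or may not be produced by the iterator, but if
// they are, the generation check skips them, so the final map contents
// are the same either way.
func process(m map[string]int, gen int) {
	for k, g := range m {
		if g != gen {
			continue // inserted during this pass; skip it
		}
		m[k+"'"] = gen + 1
	}
}

func main() {
	m := map[string]int{"a": 0, "b": 0}
	process(m, 0)
	fmt.Println(len(m)) // 4: a, b, a', b'
}
```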
I don't think vet should warn about this. Perhaps other static analyzers could warn about this case, but I don't see it as appropriate for vet.
That said, look at real code. Try writing the vet warning and running it over a bunch of packages written by different people (e.g., all Kubernetes packages, including third-party imported packages). Then:

- If the new check never fires, it's probably not helpful and we shouldn't add it.
- If it only issues warnings on correct code, we shouldn't add it.
- If it only issues warnings on code that turns out to be incorrect, we should add it.
- Other cases (warnings on both correct and incorrect code) require a judgment call.
This pattern isn't always wrong. Consider, say, a program that constructs a map containing the transitive closure of nodes in a graph by iterating until no new nodes are added. Such a program is guaranteed to converge on a map with deterministic contents, even though the set of nodes scanned (and added) in any given iteration may vary.
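A sketch of that shape (`reachable` and the adjacency-list representation are my own choices for illustration): the outer loop repeats until a pass adds nothing, so it does not matter whether entries inserted mid-range are produced in the same pass or only in the next one.

```go
package main

import "fmt"

// reachable returns the set of nodes reachable from start. Nodes
// discovered while ranging over seen may be visited in the same pass
// (if the iterator produces them) or deferred to a later pass; the
// loop converges on the same set either way.
func reachable(graph map[int][]int, start int) map[int]bool {
	seen := map[int]bool{start: true}
	for {
		added := false
		for n := range seen {
			for _, next := range graph[n] {
				if !seen[next] {
					seen[next] = true
					added = true
				}
			}
		}
		if !added {
			return seen
		}
	}
}

func main() {
	graph := map[int][]int{1: {2}, 2: {3}, 3: {1}, 4: {1}}
	fmt.Println(len(reachable(graph, 1))) // 3: nodes 1, 2, 3
}
```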
To detect real bugs, perhaps it would suffice to make the order of iteration more aggressive under some configuration. For example, in -race mode we could randomly choose between producing every new entry and no new entries (similar to what is proposed for #35128, or the existing scheduler randomization). Then the real bugs could be detected during testing, perhaps by fuzz tests (#44551).
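A rough test-side approximation of the idea, without touching the runtime: run the mutating loop repeatedly and fail if the observable result ever differs between runs (`runsAgree` is a made-up helper, not a proposed API).

```go
package main

import "fmt"

// runsAgree reports whether run produces the same result on all of
// trials executions. A loop whose outcome depends on whether the
// iterator produced mid-range insertions will often (though not
// reliably) be caught this way; having the runtime deliberately choose
// both extremes, as suggested above, would make the check much stronger.
func runsAgree(run func() int, trials int) bool {
	first := run()
	for i := 1; i < trials; i++ {
		if run() != first {
			return false
		}
	}
	return true
}

func main() {
	// A well-behaved loop: sums the entries of a fresh map each run,
	// so every run returns the same total regardless of map order.
	sum := func() int {
		m := map[int]int{1: 1, 2: 2, 3: 3}
		total := 0
		for _, v := range m {
			total += v
		}
		return total
	}
	fmt.Println(runsAgree(sum, 100)) // true
}
```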
One criterion for the check is that there needs to be a path back to the range statement; e.g., do not report an insert that is immediately followed by a break or return. I suspect there will be too many false positives otherwise.
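For instance, the following contrived function writes to the map under iteration but then immediately leaves the loop, so the write cannot influence which entries the range produces; reporting it would be a false positive (`markFirstZero` and the marker key are invented for illustration):

```go
package main

import "fmt"

// markFirstZero records a marker entry for the first zero-valued key
// it finds, then stops. There is no path from the insertion back to
// the range statement, so the iteration never observes its own write.
func markFirstZero(m map[string]int) {
	for k, v := range m {
		if v == 0 {
			m[k+"-seen"] = 1 // write into the ranged map...
			break            // ...but the loop ends immediately
		}
	}
}

func main() {
	m := map[string]int{"a": 0}
	markFirstZero(m)
	fmt.Println(m["a-seen"]) // 1
}
```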
That said, look at real code.
+1. The background rate of people misunderstanding "skippable" insertions vs. getting it right will determine how reasonable adding this check is.
Consider, say, a program that constructs a map containing the transitive closure of nodes in a graph by iterating until no new nodes are added.
@bcmills I don't understand the example you have in mind. Can you elaborate?
To detect real bugs, perhaps it would suffice to make the order of iteration more aggressive under some configuration.