The idea is that whenever we find anything passing validation where it shouldn't, we add an entry. To keep this manageable, a custom regression language was created that lets you target the API in a simple, human-readable way. The drawback is that it can be hard to see which real calls are actually being made. This is a trade-off between keeping rules short enough that people are motivated to write regressions and verbose enough that a failing regression is easy to diagnose.
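As a hypothetical sketch of the trade-off, the rule syntax, field names, and helper functions below are invented for illustration and are not the actual regression language; the idea is that a readable rule compiles to a concrete call, and that expansion can be printed when the regression fails:

```python
# Hypothetical sketch only: this rule grammar and these helpers are
# illustrative, not the real regression language described in the text.

def compile_rule(rule: str) -> dict:
    """Translate a human-readable rule of the (assumed) shape
    '<METHOD> <path> without <field> -> <verdict>' into the concrete
    request description that will actually be exercised."""
    head, verdict = rule.split("->")
    method, path, _, field = head.split()
    return {
        "method": method,
        "path": path,
        "omit_field": field,      # field deliberately left out of the payload
        "expect": verdict.strip() # what the API should do with the bad input
    }

def explain(rule: str) -> str:
    """Verbose expansion shown on failure: the readable rule plus the
    real call it compiles to, bridging the readability gap."""
    call = compile_rule(rule)
    return (f"{rule!r} => {call['method']} {call['path']} "
            f"with {call['omit_field']!r} omitted, expecting {call['expect']}")

print(explain("POST /users without email -> reject"))
```

Keeping the readable form as the source of truth and only expanding it on failure is one way to get both halves of the balance described above.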