Given that malicious actors (not to mention our external auditors) will be using AI to find bugs in bberg, there's no excuse for us not to do the same. We already do this, of course, but I want to make it (a) easier and (b) more automated. This could take many forms, but a basic approach would pair a corpus of "auditing concerns" with a platform that automates kicking off agents to investigate them.
Example outcomes that seem realistic/useful to me:
- No module contains a bug that is identical or very similar to one previously found in another
- No comment is inaccurate (accuracy is subjective in general, but I mean the objectively checkable ones)
- No unused or otherwise un-useful code remains in the codebase
- No method is untested (to within reason)
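To make the corpus-plus-platform idea concrete, here is a minimal sketch: a corpus entry per concern (mirroring the outcomes above) and a function that crosses concerns with modules to produce one agent job each. All names (Concern, CORPUS, build_audit_jobs, the module names) are hypothetical, not existing bberg tooling; actual dispatch to agents is left as a stub.

```python
from dataclasses import dataclass

@dataclass
class Concern:
    name: str
    prompt: str  # instruction handed to the auditing agent

# Hypothetical corpus entries, one per outcome listed above.
CORPUS = [
    Concern("recurring-bugs", "Check this module for bugs similar to ones previously found elsewhere."),
    Concern("comment-accuracy", "Flag comments that contradict the code they describe."),
    Concern("dead-code", "List unused or unreachable code in this module."),
    Concern("test-coverage", "List methods with no corresponding test."),
]

def build_audit_jobs(modules, corpus=CORPUS):
    """Cross each module with each concern, yielding one agent job per pair."""
    return [
        {"module": m, "concern": c.name, "prompt": f"[{m}] {c.prompt}"}
        for m in modules
        for c in corpus
    ]

jobs = build_audit_jobs(["module_a", "module_b"])
print(len(jobs))  # one job per (module, concern) pair
```

The point of the structure is that adding a new auditing concern is a one-line corpus edit; the platform's job is just scheduling the resulting (module, concern) matrix and collecting findings.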