Crypto Audit Guidelines

I've been doing security audits for quite a few years, both independently and with Kudelski Security, reviewing various implementations of cryptographic functionalities, from smart card applications and dedicated silicon to web and mobile applications, from standard algorithms and elliptic curve arithmetic to consensus protocols and zero-knowledge proofs. Having worked with many different customers, written many reports, seen reports from other auditors, and been on the other side of the fence, I've learnt about things that work and things that don't, and wish I had learnt some of these earlier. I also wish all auditors abided by some quality and ethical standard, and that customers were better informed of what they can expect and demand from auditors. The following guidelines are thus an attempt to help auditors do a better job and customers get better value for their money, based on my experience (the usual YMMV disclaimer applies). Lots of advice could be shared and lots of stories could be told, and perhaps at some point I'll write a longer piece, but I've deliberately limited the points below to what I believe will be the most beneficial.

If you're also doing cryptography audits and would like to comment on these guidelines, please use GitHub Issues. If you'd like to contribute an entry, please feel free to file a PR.

Thanks to the additional contributors: Antony Vennard (@diagprov), Thomas Pornin (NCC), Trail of Bits.

For auditors

  • Don't inflate severities: Sometimes auditors rate an issue high-severity not because its severity is demonstrably high (which usually entails its being exploitable), but because it looks bad and embarrassing to the auditors (for example, the use of MD5), or because they speculate it would be exploitable under a coincidence of events about as likely as correctly guessing a 256-bit key's value. But customers don't care about your personal feelings about the bug; they only need a meaningful risk rating in order to prioritize fixes and (when applicable) communicate with their users. Never shy away from rating a bug hi-sev if it's actually hi-sev (likely exploitable with baaad consequences, as per the threat model defined), but clearly articulate the exploitation scenario and the business impact (data loss, DoS, etc.). Crying wolf will not help you build respectability in this business.

  • Be constructive: find solutions, not just problems: Every issue identified should come with mitigation recommendations. A tautological recommendation such as "fix it" is insufficient. Instead, be specific and, if possible, offer a patch. As auditors, it's not always easy to figure out what the best mitigation is, especially for design errors, so don't hesitate to write down several mitigation strategies and discuss them with your customer. You may also distinguish between short-term, simple fixes (such as adding a sanity check or updating a dependency) and long-term fixes, which require more effort or systemic/design changes (such as an API change, the use of a different framework, or a CI/CD pipeline redesign); the sketch below illustrates the short-term kind.
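
For illustration, here's what a short-term fix offered as a concrete patch might look like, based on a hypothetical finding (all names are made up): a nonce counter that can wrap around and thus eventually repeat nonces. The quick fix is to fail closed; the long-term recommendation would be an API redesign that makes nonce reuse unrepresentable.

```rust
// Hypothetical patch for a hypothetical finding: the original code
// incremented the nonce with a plain `self.0 += 1`, which wraps around
// in release builds and silently reuses nonces. Short-term fix: fail
// closed when the counter is exhausted.
struct NonceCounter(u64);

impl NonceCounter {
    fn next_nonce(&mut self) -> Option<u64> {
        let nonce = self.0;
        // checked_add returns None on overflow rather than wrapping,
        // so the caller gets an error instead of a repeated nonce.
        self.0 = self.0.checked_add(1)?;
        Some(nonce)
    }
}
```

A one-line recommendation ("don't reuse nonces") carries much less weight than a patch like this plus a pointer to the long-term fix.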

  • Scope flexibly: Estimating the person-day budget for "good enough" work is nigh impossible. Auditors use various heuristics, such as N lines of code covered per hour or N times the time it took to write the code, but these quantitative estimates often end up being of little value compared to qualitative factors such as design complexity, code clarity, the language used, the auditors' familiarity with the system, and so on. You'll often end up spending 80% of your time on 20% of the lines of code in scope, and which 20% is hard to predict before the audit starts. What I found to work well is to give the customer a range with a conservative cap (in order to avoid going over budget) and stop the audit when I feel the work is complete.

  • Do a day if you charge a day: This should be obvious, but alas it's not always the rule, especially in bigger and less specialized firms. A "day" of work is commonly understood as the equivalent of 8 hours of work, so charge the day rate for every 8 hours of work, not for every weekday when an employee assigned to the project showed up at the office and worked a couple of hours on it between meetings and coffee breaks.

  • Log your work: For every hour or block of 2-4 hours of work, keep track of what you've been doing and which files/functions/mechanisms you've analyzed, keep notes of your thoughts and failed attack attempts, and share this journal with your team. The customer may ask you to justify how you've spent the time charged, and you should be able to do so.

  • 4 eyes are better than 2: It's sometimes natural to distribute the work among team members by splitting the code auditing tasks along the components of the code base (packages, subcrates, etc.), but the problem with this approach is that at most one person looks at any given line of code, and nobody gets a full understanding of the interactions between the components. What I found to work well is to assign two people to the same component, and to make sure everyone gets at least a basic understanding of all the components being reviewed and of how they work together. Working in pairs rather than alone also leads to discussions that help identify bugs and rule out false positives. It also makes the work feel less boring.

  • Communicate what you do and ask questions: You'll often have a Slack or other group chat established with the customer (if not, try to create one). It's usually good to agree, while preparing the statement of work or during the kick-off meeting, on how this communication channel will be used. I recommend that auditors regularly share what part of the code they're working on, flag what they find unclear or awkward, and report issues as soon as they find them. This helps catch false positives early and, on the developers' end, plan mitigations. Don't hesitate to ask for clarification about design decisions or the code's expected behavior; a better understanding of the designers' and developers' perspective will help you catch issues and craft more adequate mitigations.

  • Better a more verbose report: A verbose report is better than an overly laconic one. As a security auditor familiar with attack and exploitation techniques, it's often tempting to skip the details and expect the reader to fill the gaps in a bug's description and mitigation recommendations. But the risk is that readers misunderstand the actual issue and fail to address it correctly. Writing down the details also helps you spot potential errors or misunderstandings in your own analysis. Don't hesitate to refer to external resources such as blog posts, research articles, or even code bases of similar projects.

  • Describe what you haven't found: A report void of security issues can feel awkward for both the auditor and the customer, the latter worried that the auditor may not have done the job they were paid to do. To alleviate such concerns, whether your report includes zero or 100 findings, list the kinds of bugs you've been looking for, describe any tools you've used and their configuration, and enumerate the properties you've verified (for example, elliptic curve point validation, nonce uniqueness, the zero-knowledge property, and so on; the sketch below shows what one such property looks like concretely).
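
As a toy illustration of one such property (not production code, with made-up parameters small enough to fit in a u64): "on-curve" validation for a short Weierstrass curve y^2 = x^3 + ax + b over GF(p), the kind of check an auditor would verify is performed on every externally supplied point.

```rust
// Toy "on-curve" check for y^2 = x^3 + a*x + b (mod p), with tiny
// made-up parameters; a real audit would verify the equivalent check
// in the project's actual curve library.
const P: u64 = 10007; // toy prime modulus
const A: u64 = 2;
const B: u64 = 3;

fn is_on_curve(x: u64, y: u64) -> bool {
    let lhs = (y * y) % P;
    let rhs = ((x * x % P) * x % P + A * x % P + B) % P;
    lhs == rhs
}

fn main() {
    assert!(is_on_curve(3, 6)); // 36 = 27 + 6 + 3 (mod 10007)
    assert!(!is_on_curve(3, 7)); // invalid points must be rejected
}
```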

  • Never take things for granted: When reviewing an algorithm or protocol implementation, always understand what it does and creatively think about what could go wrong, even if the scheme implemented is provably secure, even if it has comprehensive unit tests and full coverage, even if it's formally verified, and even if it's written in Rust. Oftentimes security issues come from quirks of the language, bad error handling, unexpected behavior of callers or callees, an inaccurate threat model, or other real-world causes; the sketch below shows a classic example that tests alone won't catch.
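
For instance, here's a classic instance of such a bug, sketched in Rust under the assumption that the fix uses the widely used subtle crate: a MAC tag check that passes every functional test, compiles in safe Rust, and still leaks timing information.

```rust
use subtle::ConstantTimeEq; // constant-time comparison primitives

// Functionally correct and fully "tested", yet not constant-time:
// `==` on byte slices short-circuits on the first differing byte, so
// the comparison time leaks how many leading bytes of a forged tag
// are correct.
fn verify_tag_naive(expected: &[u8], received: &[u8]) -> bool {
    expected == received
}

// Constant-time comparison of the same tags.
fn verify_tag(expected: &[u8], received: &[u8]) -> bool {
    expected.ct_eq(received).into()
}
```

No unit test comparing the two functions' outputs will tell them apart; only reading the code (or measuring timings) will.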

  • Remain objective and professional: In the report, even if the code's quality looks shocking to you, never use a derogatory or mocking tone, but do be direct and honest in your assessment. Likewise, don't be overly complimentary when the code quality is above average. If you believe a customer is making an unreasonable request, such as concealing problems in order to avoid scaring users or investors, politely decline.

  • Adapt the report to its audience: A report should be written with its target audience in mind, so you'll deliver different documents for different audiences. For example, if only developers will read your report, an informal markdown document might be enough, which will save editing time. If the report needs to be shared with investors, compliance auditors, or top management, you'll have to deliver a polished document with colors and logos and an executive summary. You must therefore know in advance who will read the report, and in particular whether it will be made public. If it will, you'll want to take extra care to avoid misunderstandings and misleading quotes taken out of context. You'll also work with the customer to make sure that systems running in production are patched against the identified security flaws before the report is published.

  • Do your homework: You must stay up to date with the literature and with the latest vulnerabilities and attack strategies. It saves considerable time to maintain a checklist of common issues in cryptographic components, and of common bugs and gotchas specific to a given language. It can help to create your own tools to automate stuff and save time during audits, but oftentimes such tools already exist, so you want to be familiar with them prior to starting an audit.

For customers

  • Identify what you need and communicate it: Most of the time the first request will sound like "we need a security audit of XYZ", but be ready to elaborate on what that means from your perspective: are you most worried about coding errors, mismatches between the code and the specs, or design errors? Let the auditors know what your team feels should be the priority of the audit and what you feel is the greatest risk, and describe it in the context of your threat model and operating model.

  • Share as much information as you can: "Code is documentation enough" is rarely true, especially with complex cryptographic protocols. Even if your code clearly describes what is done, it can't describe what it ought to do, let alone the adversarial model and target security properties. So make sure to have such documentation, even as informal markdown documents, and also share with the auditors any design documents, related research papers, previous audit reports, and anything else that could save them time and help them grasp the system audited. When the audit's goal is to match an implementation against a specification, make sure to notify the auditors of known discrepancies between the two.

  • View the process as constructive: It is tempting to view an audit report as an endorsement of your project, and it can be tough if the report you receive finds issues with your work. Realise that an audit is unlikely to be an unconditional endorsement of your product, as this may raise questions about the auditor's potential conflicts of interest. Bear in mind that an audit is just a point-in-time review whose goal is to uncover potential issues so you can fix them, and generally to improve the security posture of the audit target. The outcome of the audit should be that your project or product is improved, and your auditors should work with you professionally and objectively towards that goal.

  • Agree in advance on the report content: You don't want to be charged for 3 days of work that is only about (re)writing your specs when you only need an informal description of security issues in the report. So make sure that the statement of work or kick-off meeting accurately reflects your expectation of what the report should and should not include.

  • Don't hesitate to challenge the auditors: If you find an estimate of 5 person-weeks to audit your 500-LoC project bonkers, tell your auditors, or, better, work with someone else. Also, auditors can't be expected to have as deep an understanding of your code base as the developers who've been working on it for the last six months, but they should nonetheless be comfortable with the scheme being audited and its implementation. If they don't understand what your code is doing, they're unlikely to find bugs therein.

  • Careful with upsold work packages: Some companies will try to upsell things such as "threat modeling", "security hardening", "performance optimization", or other more or less relevant sub-projects that translate into increased consulting fees. These can bring great value to the customer if done right and if the content and goal of the work are clearly understood by both parties beforehand. But they can also turn out to be a scam when pitched by the consulting firm's salesperson, signed off by a middle manager on the customer's side, and carried out with no engineer involved. Such a situation ultimately hurts both sides.

  • Don't say you "passed the audit", let alone "with flying colors", when you communicate about it on your blog or social media. A crypto audit, and more generally a code security audit, is an assessment against common vulnerabilities, limited by the auditors' ingenuity and experience; even though such audits can include the use of checklists, they are in no way pass-or-fail audits, as, for example, SOC or ISO compliance audits are.

  • You don't necessarily need a report, and can instead ask for findings to be informally reported to your developers via IRC, Slack, Signal, or another platform. This will likely save you a few days' worth of consulting fees. If you need a consolidated report of all the findings for your archives but don't need a super formal and polished document (for example, when you don't plan to share it with investors or customers), the auditors will also spend less time on report preparation.