
Document how this library is/is-not vulnerable to `Just a Little Bit More` side channel secret recovery. #238

Open
nathan-at-least opened this Issue Apr 14, 2015 · 9 comments

nathan-at-least commented Apr 14, 2015

I've just read the abstract of Just a Little Bit More, by Joop van de Pol, Nigel P. Smart, and Yuval Yarom.

I'm not well versed in side-channel analysis, and I cannot tell from the abstract if that method could feasibly compromise secret bits when this library is used to generate signatures. I'm hoping for an explanation and/or discussion here about that.

I propose these end results to close this ticket:

If this library is vulnerable to this kind of side-channel analysis, document why in the README, then put the best mitigation on the roadmap.

If this library is not vulnerable to that kind of analysis, then a concise comparison between OpenSSL and this library could motivate a transition in Bitcoin (or perhaps even improvements to OpenSSL or the wider ecosystem, if you're an optimist).

sipa (Contributor) commented Apr 14, 2015

nathan-at-least commented Apr 14, 2015

Thanks for the explanation. I did not know that libsecp256k1 does not use wNAF multiplication, nor that "Constant time, constant memory access" precludes the leak in this reference. (I hope others find this ticket useful; otherwise I apologize for wasting your time. ;-)
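As an illustration of what "constant memory access" buys here, this is a sketch (not libsecp256k1's actual code; `ct_table_lookup` is an illustrative name) of a table lookup that reads every entry regardless of the secret index, so the cache access pattern is independent of the secret:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch, not libsecp256k1's actual code.  A naive
 * table[secret_idx] leaks secret_idx through which cacheline is
 * touched; this version touches every entry and selects the wanted
 * one with a branch-free mask. */
static uint32_t ct_table_lookup(const uint32_t *table, size_t n,
                                size_t secret_idx) {
    uint32_t result = 0;
    for (size_t i = 0; i < n; i++) {
        /* mask is all-ones when i == secret_idx, all-zeros otherwise. */
        uint32_t mask = (uint32_t)(-(uint64_t)(i == secret_idx));
        result |= table[i] & mask;
    }
    return result;
}
```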

For posterity, here's the "Improved signing security" section of the 0.10.0 release notes which explains that Bitcoin uses libsecp256k1 for key generation and signing.

sipa (Contributor) commented Apr 14, 2015

DavidEGrayson commented Apr 14, 2015

Does the C standard give any guarantees about running time? I haven't read the whole standard, but I also haven't heard of any guarantees like that. I am somewhat worried that a sufficiently advanced optimizer might make changes to the secp256k1 code that defeat its security.

An unlikely example is that the compiler might detect you are doing elliptic curve multiplication and replace your algorithm with wNAF, because that algorithm has the same outputs and side effects (where time is not considered a side effect). It probably won't make such a big change, but it might make some changes on a smaller scale that have similar effects.

I'm wondering if we can really be 100% safe against timing attacks when writing cross-platform code in C, or if we just settle for being almost always safe.
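To make the worry concrete, here is a hedged sketch (all names are illustrative, not from any real library) of the branchless selection idiom that constant-time code relies on, the branchy form an optimizer could legally produce instead, and a common (non-standardized) volatile-barrier workaround:

```c
#include <stdint.h>

/* Two ways to select between a and b on a secret bit.  The C standard
 * only constrains observable results, not timing, so nothing forbids
 * an optimizer from rewriting the first form into the second. */

static uint32_t select_masked(uint32_t bit, uint32_t a, uint32_t b) {
    /* bit must be 0 or 1; mask becomes all-zeros or all-ones. */
    uint32_t mask = (uint32_t)(-(int64_t)bit);
    return (a & mask) | (b & ~mask);
}

static uint32_t select_branch(uint32_t bit, uint32_t a, uint32_t b) {
    return bit ? a : b; /* may compile to a secret-dependent branch */
}

/* Laundering a value through a volatile access so the optimizer
 * cannot reason about it -- a common mitigation, with no guarantee
 * from the standard. */
static uint32_t value_barrier(uint32_t x) {
    volatile uint32_t v = x;
    return v;
}
```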

nathan-at-least commented Apr 15, 2015

I don't know about the C standard, but I've heard that the Rust language has had a difficult time with constant-time features because of its reliance on LLVM (see @tarcieri's Thoughts on Rust Cryptography), which apparently optimizes aggressively in ways that are hard to constrain. So regardless of the standard, the docs may need to specify which compilers and options are known to preserve these guarantees.

Additionally, the "runtime" introduces further uncertainty: consider that any C code may be running under Emscripten, virtual machines, or emulators. Imagine that LLVM introduces better support for constant time/memory, but this library is used on an Emscripten stack. What new side-channel vulnerabilities exist then?

(Yes, we can all groan and cringe at JavaScript emulators, but in K months when the entire plasmacloud runs completely in a cyber-blockchain-browser, our grandkids will laugh at our backwardsness.)

Edit: Added links, and minor grammatical changes.

sipa (Contributor) commented Apr 15, 2015

tarcieri commented Apr 15, 2015

@sipa you might be interested in my thoughts on that approach in this slide and the next few following it:

https://speakerdeck.com/tarcieri/thoughts-on-rust-cryptography?slide=106

gmaxwell (Member) commented Apr 15, 2015

@DavidEGrayson the standard pretty much only says the results agree with the abstract machine; no promises are made about timing. In practice the world is not (currently) that awful.

But writing the whole program in a macro assembler is not a realistic option for reviewability (demonstrated nicely by bugs in hand-written ECC assembly elsewhere...). A middle ground would be writing the software however we like, then compiling to assembler for release, with static analysis to check the timing.

Even checking the assembler (obviously we can't check everyone's, and haven't built portable automation for checking it, but we've checked on our own systems) is not sure to be enough, as the processor pipeline is not promised by the architecture to be constant time, unfortunately (on desktops; there are some embedded devices where the timing is precisely specified). Measuring with the TSC is potentially not adequate either, as it's subject to the pipeline too... meaning it's at least theoretically possible (translation: future CPUs will probably doom you) for the processor to maintain "apparent constant time" as seen by the TSC while still nailing you. DJB has been trying to get Intel to promise some things, but it seems so far they've resisted.
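As a rough sketch of that kind of measurement, here is a portable harness using the POSIX monotonic clock (the names `ns_now`, `busy_op`, and `time_op` are illustrative, not from any real library; a raw TSC read would be subject to the same pipeline caveats just described):

```c
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

/* Nanosecond timestamp from the monotonic clock. */
static uint64_t ns_now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Stand-in for the operation under test. */
static uint64_t busy_op(uint64_t x) {
    for (int i = 0; i < 1000; i++)
        x = x * 6364136223846793005ull + 1;
    return x;
}

/* Time the operation on one input; take the minimum over several
 * repetitions to cut scheduler and cache noise.  One would compare
 * the result across different (secret-dependent) inputs. */
static uint64_t time_op(uint64_t input, int reps) {
    uint64_t best = UINT64_MAX;
    for (int r = 0; r < reps; r++) {
        uint64_t t0 = ns_now();
        volatile uint64_t sink = busy_op(input);
        (void)sink;
        uint64_t t1 = ns_now();
        if (t1 - t0 < best) best = t1 - t0;
    }
    return best;
}
```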

I have interval counters with 10 ps single-shot resolution, so I could try instrumenting a modern host sometime, though tapping onto a high-speed bus is likely to be pretty fussy... More interesting still would be capturing high-bandwidth (several GHz wide) power traces, but that's clearly beyond my ability.

Checking the ASM is not as crazy as you might think, at least for a codebase this small, as you can cover a lot with pattern matching. So it's only mostly crazy. ... I'd worry in the Rust examples that, without any formal analysis for Rust yet (or a mature, clearly sound compiler), stunts like that might be more likely to introduce weird bugs in practice. "Congrats; your code sends your funds into a black hole in perfectly constant time." :(

We also have additional blinding in the works, beyond the unknown-discrete-log stuff that's there already, which should reduce the potential exposure somewhat. (E.g. so long as your word-sized addition has no data dependence, the result is still secret, even if multiplies or other things are not.) ... though the goal there is more about hardening against power analysis, which is even harder to protect against. That it might make things robust in the face of a timing disaster is just a bonus.
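To illustrate the blinding idea in miniature (a toy sketch over integers mod a small prime, not the library's actual scheme; `toy_mul` and `blinded_mul` are made-up names): split the secret scalar as k = (k + b) - b with a random blind b, so the values the hardware directly manipulates are decorrelated from k, while the final result is unchanged.

```c
#include <stdint.h>

/* Toy "group": integers mod a prime, where "scalar multiplication"
 * k*P is just modular multiplication.  Real EC blinding works the
 * same way at the level of the group law. */
#define TOY_P 1000000007ull

static uint64_t toy_mul(uint64_t k, uint64_t point) {
    return (k % TOY_P) * (point % TOY_P) % TOY_P;
}

/* Compute k*P as (k+b)*P - b*P for a random blind b, so neither
 * multiplication sees the raw secret k. */
static uint64_t blinded_mul(uint64_t k, uint64_t point, uint64_t blind) {
    uint64_t t1 = toy_mul((k + blind) % TOY_P, point); /* (k+b)*P */
    uint64_t t2 = toy_mul(blind, point);               /* b*P */
    return (t1 + TOY_P - t2) % TOY_P;                  /* remove blind */
}
```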

FWIW, it might be worth mentioning that our current degree of protection on this front is partially a product of Yuval Yarom, who was kind enough to spare a moment to respond to our request back in November, after we thought we were reasonably constant time. He took a look at our implementation and gave us some good additional citations and very helpful advice (in particular, we had previously assumed cacheline uniformity was enough). (I don't say this to suggest it's all fine or has been deeply audited by Yuval Yarom -- just mostly credit where it's due, and also to point out that we were taking the concern seriously.)

tarcieri commented Apr 15, 2015

I'd worry in the rust examples that without any formal analysis for rust yet (or a mature, clearly sound compiler) that stunts like that might be more likely to introduce weird bugs in practice. "Congrats; your code sends your funds into a black hole in perfectly constant time." :(

The flip side is that Rust is actually a type-safe language (and more than that, has a good, modern type system), whereas something like OpenSSL implements this sort of thing by generating assembly language with Perl. I think that, Rust's present instability aside, the use of static typing is incredibly helpful in finding bugs.

@gmaxwell gmaxwell added this to the initial release milestone Aug 27, 2015
