Tox Handshake Vulnerable to KCI #426

Open
zx2c4 opened this Issue Jan 13, 2017 · 64 comments

Comments

@zx2c4

zx2c4 commented Jan 13, 2017

Hello,

I found this source code confusingly written (and downright scary at times) and the specification woefully underspecified and inexplicit, so it's entirely possible my understanding of the handshake is inaccurate. But on the off-chance that 5 minutes of source code review at 4am yielded something accurate, here is my understanding of the handshake:

Peer A (Alice) has the longterm static keypair (S_A^{pub}, S_A^{priv}). Peer A has the session-generated ephemeral keypair (E_A^{pub}, E_A^{priv}). Peer B (Bob) has the longterm static keypair (S_B^{pub}, S_B^{priv}). Peer B has the session-generated ephemeral keypair (E_B^{pub}, E_B^{priv}).

Message 1: A -> B

XAEAD(key=ECDH(S_A^{priv}, S_B^{pub}), payload=E_A^{pub})

Message 2: B -> A

XAEAD(key=ECDH(S_B^{priv}, S_A^{pub}), payload=E_B^{pub})

Session Key Derivation

ECDH(E_A^{priv}, E_B^{pub}) = ECDH(E_B^{priv}, E_A^{pub})
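Under this reading, the whole exchange fits in a few lines. The following is a hedged toy model, not Tox code: classic modular Diffie-Hellman over a Mersenne prime stands in for X25519, the XSalsa20-Poly1305 AEAD wrapping is elided, and every name is an illustrative assumption.

```python
import secrets

# Toy DH group: a Mersenne prime with generator 3. Far too weak for real
# use; it only mirrors the *structure* of the exchange described above.
P = (1 << 127) - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return pow(G, priv, P), priv              # (public, private)

def dh(priv, pub):
    return pow(pub, priv, P)                  # shared secret

# Static and ephemeral keypairs for Alice and Bob
SA_pub, SA_priv = keypair(); EA_pub, EA_priv = keypair()
SB_pub, SB_priv = keypair(); EB_pub, EB_priv = keypair()

# Message 1 (A -> B) and Message 2 (B -> A) are both keyed by the same
# static-static DH; each payload is the sender's ephemeral public key.
msg1_key = dh(SA_priv, SB_pub)
msg2_key = dh(SB_priv, SA_pub)
assert msg1_key == msg2_key

# Session key: raw DH of the two ephemerals, identical on both sides.
session_A = dh(EA_priv, EB_pub)
session_B = dh(EB_priv, EA_pub)
assert session_A == session_B
```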

Is this an accurate representation of the handshake? If so, keep reading. If not, you may safely stop reading here, close the issue, and accept my apologies for the misunderstanding.

The issue is that this naive handshake is vulnerable to key-compromise impersonation, something that basically all modern authenticated key exchanges (AKEs) are designed to protect against. Concretely, the issue is that if A's longterm static private key is stolen, an attacker can impersonate anybody to A without A realizing. Let's say that Mallory, M, has stolen A's private key and wants to pretend to be B:

Message 1: M -> A

XAEAD(key=ECDH(S_A^{priv}, S_B^{pub}), payload=E_M^{pub})

Message 2: A -> M

XAEAD(key=ECDH(S_A^{priv}, S_B^{pub}), payload=E_A^{pub})

Session Key Derivation

ECDH(E_A^{priv}, E_M^{pub}) = ECDH(E_M^{priv}, E_A^{pub})

A now thinks he is talking to B, but is actually talking to M.
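The attack can be checked mechanically with the same kind of toy model (modular DH standing in for X25519; group parameters and names are illustrative assumptions, not Tox code): every secret Mallory needs is either stolen (S_A^{priv}) or public (S_B^{pub}).

```python
import secrets

P = (1 << 127) - 1   # toy Mersenne-prime group, standing in for X25519
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return pow(G, priv, P), priv

def dh(priv, pub):
    return pow(pub, priv, P)

SA_pub, SA_priv = keypair()   # Alice's static key; SA_priv stolen by Mallory
SB_pub, SB_priv = keypair()   # Bob's static key; Mallory knows only SB_pub
EA_pub, EA_priv = keypair()   # Alice's ephemeral
EM_pub, EM_priv = keypair()   # Mallory's ephemeral, sent while posing as Bob

# Both handshake messages are keyed by ECDH(S_A, S_B). Mallory derives
# that key from the stolen SA_priv plus Bob's *public* key, so she can
# forge Message 1 "from B" and decrypt/authenticate Alice's Message 2.
alice_handshake_key = dh(SA_priv, SB_pub)
mallory_handshake_key = dh(SA_priv, SB_pub)    # no secret of Bob's required
assert mallory_handshake_key == alice_handshake_key

# The session key mixes only the ephemerals, and Mallory owns one of them.
assert dh(EA_priv, EM_pub) == dh(EM_priv, EA_pub)
```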

Perhaps Tox doesn't care about this, or about many of the threat models that modern AKEs are designed to protect against, in which case it's probably fine to continue using your homebrewed crypto. But if you actually desire some kind of high-assurance security, I strongly recommend not building your own protocols and instead using something designed by an educated expert, such as Noise.

This is just what immediately leaped out at me after a few short minutes of review. I haven't even begun to look at key derivation and other interesting aspects (are you guys really just using raw ECDH results as keys?).

Again, apologies if this doesn't actually represent the handshake you're using; I'm not 100% certain. But in case it does, then let this be a wake-up call to developers not to roll your own crypto, as well as a wake-up call to users not to rely on crypto software written by non-experts.

@iphydf

Member

iphydf commented Jan 13, 2017

Hi Jason, thanks for the report. We are aware of all three issues you've mentioned, but it's great to have them written down. I'll explain a bit of background about what we're doing here, and the reasons for why issues like this have not been addressed.

We started the TokTok project about a year ago with a (now slightly outdated) plan. We inherited toxcore and the protocol it implements from the Tox project. We're now in some mix of phase 1 and 2, where we slowly improve the code while keeping the protocol exactly the same, with all its flaws and shortcomings. You've described one of them, but there are others. We should be more explicit about this on the website (I have filed an issue for this just now).

Initially, the plan was for us to not touch toxcore at all, and instead rewrite the specification, which does contain all the information we need, just not in an obvious way. That plan relied on others working on toxcore. Since nobody would take on the toxcore part, we had to take it on ourselves, which is the main reason we're not as far along in the plan as we had initially hoped.

The new plan is roughly:

  1. Improve toxcore code base, not making any protocol changes, with focus on testability.
  2. Implement a formal model of the protocol and run equivalence tests between it and c-toxcore. This part goes together with improving the spec, since the model is the formal version of the textual spec. Up to this point, we actively ignore any design flaws and focus purely on ensuring that the implementation matches the specification.
  3. Publish a threat model. Implement attacks on network, random users, and specific users. Still not changing the protocol.
  4. Redesign the protocol and make a single cutover from old protocol to the new one.

We do have crypto experts on board, but they are very much closing their eyes to the issues most of the time. I might have more to say about this, but not in public. I'm happy to discuss in private (IRC/email/ricochet) if you're interested.

I think the main action we can take related to this particular issue right now is to implement the attack. This was supposed to happen in step 3, but I don't see good reasons to keep it that far in the future. Perhaps it's a good time to publish all known attacks and their implications somewhere.

@zx2c4

zx2c4 commented Jan 13, 2017

Hi @iphydf,

Thanks for your response. So, it sounds like you're aware that this is an issue and confirm that indeed the handshake follows this construction and is therefore vulnerable to KCI.

In that case, I strongly recommend that you put a large red disclaimer on the Tox website and in all applications indicating to users that Tox is not secure. As is, the security assurances made on the website, marketing, and in-app GUI are dangerous.

@azet

azet commented Jan 13, 2017

Hi,

It seems either someone micromanaged too much, or you guys have the workflow figured out entirely wrong.

Initially, the plan was for us to not touch toxcore at all, and instead rewrite the specification, which does contain all the information we need, just not in an obvious way. That plan relied on others working on toxcore. Since nobody would take on the toxcore part, we had to take it on ourselves, which is the main reason we're not as far along in the plan as we had initially hoped.

Upkeep of the core and porting is more important than fixing fundamental security flaws in the protocol itself, which, apparently, is live and used by people? This does not make sense to me.

The new plan is roughly:

  1. Improve toxcore code base, not making any protocol changes, with focus on testability.
  2. Implement a formal model of the protocol and run equivalence tests between it and c-toxcore. This part goes together with improving the spec, since the model is the formal version of the textual spec. Up to this point, we actively ignore any design flaws and focus purely on ensuring that the implementation matches the specification.
  3. Publish a threat model. Implement attacks on network, random users, and specific users. Still not changing the protocol.
  4. Redesign the protocol and make a single cutover from old protocol to the new one.

Usually I'd start with a threat model, so you can think about what/whom you want to defend against/protect, which attack vectors are relevant, etc. A formal model sounds nice, but having a rough idea of what the protocol should look like is maybe a better entry point. Modeling, testing, etc. should be done once you have a rough impression of what you're actually working on.

Sorry, I'm just very confused by this response. Marking the project "experimental" after the fact is also problematic, as you already have a user base you need to care about (which of course means upkeep of your core, but first of all you want to supply them with strong security, as that is the point of the whole project, I take it? Your website says so.).

@iphydf

Member

iphydf commented Jan 13, 2017

Now there are two discussions in this thread.

Roadmap/workflow

@azet here is a thought process:

  • We want Tox to be secure for a well-specified and published definition of secure (i.e. threat model).
  • We have a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C).
  • Thus, any change we make has the potential to make Tox less secure, running counter to our goal.
  • We could throw away all the code and rewrite using a different protocol with different security properties, but it would take a while.
  • We are working with very tight resources: a few volunteers with limited time.
  • It would be very hard to motivate those few people to drop a working product and write a whole new one. I've actually tried, but there is no audience for such plans.
  • It seems to me that the route we're taking is one that allows us to reach the goal with the starting point we inherited.

I would be quite interested in your thoughts around this, and perhaps we can steer in a different direction that's better for the project.

Security properties

First I should note the obvious, which is that exclaiming "X is not secure" is as useless a thing as saying "X is secure". As @zx2c4 correctly said, it depends on the threat model. There are very few ways to make information transmission secure to every possible known and unknown attack (and then a crowbar to the wrist can break that security as well).

Regarding the particular issue:

  • KCI depends on getting a user's secret key. If your secret key is compromised, you have several things to worry about; KCI is only one of them.
  • Preventing KCI in the current protocol is possible but would break deniability in the simple case.

Regarding the general issue of "oh my god tox is not secure don't use it": this is slightly overreacting to the actual issues. As said, there are a number of possible attacks on individuals or on the network, but if secret keys remain secret, none of those attacks can compromise message content.

Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification. This is the point we are currently working on: improving the code and at the same time improving our understanding of it, so that we can make large scale changes in a safe way.

@zx2c4 can you point at the part of the Noise spec that explains how deniability is achieved?

@zx2c4

zx2c4 commented Jan 13, 2017

Regarding the general issue of "oh my god tox is not secure don't use it": this is slightly overreacting to the actual issues.

I think when your homebrewed crypto protocol falls to basic crypto 101 vulnerabilities that modern AKEs are explicitly designed to prevent, it's time to pin up the red banners telling people not to use your stuff.

And to put this in context - this is what I found after a few minutes of scrolling. Judging by your replies, I'm a bit frightened to look in more depth...

@iphydf

Member

iphydf commented Jan 13, 2017

I agree that we should tell users about the particular security guarantees Tox does and does not provide. We will add this to the website.

I would be interested in discussing further action if you're willing to talk. I would also be interested in discussing the implications of your findings if you're interested in looking more in depth and sharing what you think of it.

By the way, do you consider OTR secure or should they put up a red banner as well? What about the SIGMA protocol? Both these protocols provide a different set of security properties. The left and right set differences are non-empty.

For discussion of the current protocol I would like to ask you to direct questions at @irungentoo, who created the design and implementation of this protocol.

Can you point at the part of the Noise spec that explains how deniability is achieved? Also, can you point me to the parts of the code that you reviewed and whose logic you found to be of concern?

@zx2c4

zx2c4 commented Jan 13, 2017

You might benefit from a bit of humility before comparing your protocol to OTR and SIGMA, both of which were groundbreaking works created by experts, as opposed to a slapdash protocol that has neither a specification for any coherent evaluation of security properties nor a sturdy codebase.

@iphydf

Member

iphydf commented Jan 13, 2017

I'm sorry I made it sound like I'm comparing us to them. I was asking about your opinion regarding these protocols, which both provide and lack certain security properties. I am still interested in your evaluation of the importance of each of their security properties, especially wrt. a similar lack or presence of these properties in Tox.

I'm also sorry to learn that a discussion I was hoping to be respectful and constructive has so quickly degenerated. I am sorry for the slightly snarky comment about those other protocols and red banners. I hope we can go back to where we started: a constructive discussion.

As said, we are quite aware of the situation we have inherited, and we are actively working on improving it. Your help in this endeavour would be greatly appreciated.

@GrayHatter

GrayHatter commented Jan 13, 2017

For anyone reading this without a crypto background: the assertions being made are the same as saying the lock on your house is broken because, if someone steals your keys, they can unlock your door.

I agree with iphy on this: the reaction and outrage don't match the reality of the issue. All of it sounds like concern trolling to me.

@zx2c4

zx2c4 commented Jan 13, 2017

the lock on your house is broken because if someone steals your keys they can unlock your door.

That's not a great analogy. KCI is a bit more subtle than that.

All of it sounds like concern trolling to me.

No, not really. As I wrote in the original post: if you don't actually care about having a secure protocol that meets modern expectations of an AKE, by all means defend and justify your homemade situation. However, if you're interested in gaining the trust of users and confirmation from cryptographers, you'd benefit immensely from not trying to tout the current situation as secure, but rather put up a large scary warning indicating to your users that you're working on it but that you're not there yet.

I was asking about your opinion regarding these protocols, which both provide and lack certain security properties. I am still interested in your evaluation of the importance of each of their security properties, especially wrt. a similar lack or presence of these properties in Tox. I hope we can go back to where we started: a constructive discussion. As said, we are quite aware of the situation we have inherited, and we are actively working on improving it. Your help in this endeavour would be greatly appreciated.

I think the best place to design a new crypto protocol is probably not a Github issue report. Take some time, write it out, work out the details, talk to your professors, etc. Alternatively, spend time reading existing papers and evaluating if they fit what you want and whether they have an implementation ready built for you to use. Message boards are a pretty bad place for ad-hoc design of something so critical.


Anyway, I'll duck out now for a little while to see how this evolves. I've done my part. There's a vuln found in 5 minutes of review. There's homebrewed crypto. There's "a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C)" (@iphydf). Now it's up to you how you want to handle this. Treat it as serious and worthy of a red "do not use" banner, if you'd like to give the impression that you care about the same standards of security that the world of cryptographers does. Carry on as usual if you simply want your thing to continue to be casually used by people who don't care that much and are okay with using naive constructions.

@GrayHatter

GrayHatter commented Jan 13, 2017

Also, if you personally are worried about someone stealing your key without you knowing: if your friends aren't rapidly disconnecting and reconnecting, no one else has your key.

@GrayHatter

GrayHatter commented Jan 13, 2017

put up a large scary warning indicating to your users that you're working on it but that you're not there yet.

You're right, a totally rational and nuanced response to an attack that would quickly become discovered.

Because you don't seem interested in discussing anything other than a low-risk attack in hyperbolic style, I'm going to make sure this thread doesn't devolve into what it has already started to. Anyone who would like a deeper discussion can join #toktok on freenode.

@TokTok TokTok locked and limited conversation to collaborators Jan 13, 2017

@TokTok TokTok unlocked this conversation Jan 13, 2017

@lvh

lvh commented Jan 13, 2017

I'm a cryptographer. I disagree with the lock analogy's assertion that this is trivial or obvious. Being able to impersonate someone whose secrets have been compromised is, indeed, obvious; KCI works in the other direction, and I don't think the other direction is obvious at all. (I agree that the tone is not one I'd use, but that's neither here nor there.)

@nbraud

Member

nbraud commented Jan 13, 2017

@lvh I agree that KCI is a non-intuitive (to users, at least) issue.
I also agree with @GrayHatter that it isn't a “let's set our pants on fire and run around screaming” kind of issue, as it requires first a key compromise.

However, I would be even more interested in moving this conversation away from the name-calling and back to a rational, constructive discussion. That seems to be a much harder problem, unfortunately :(

@lvh

lvh commented Jan 13, 2017

@nbraud OK, does that mean you agree with the suggested resolution in the form of documenting the known attacks, including the handshake not being secure against KCI?

@lvh

lvh commented Jan 13, 2017

@GrayHatter When you say "an attack that would quickly become discovered", is that because you're asserting that adversaries can't compromise keys without you finding out, or is there some other subtlety I'm missing?

@iphydf

Member

iphydf commented Jan 13, 2017

I'm still interested in having this discussion. KCI is an interesting and important topic, and I'd like to know more about @zx2c4's and @lvh's thoughts here.

I would also like to give @irungentoo a chance to weigh in on the concrete issue. In my experience, Tox solves some security issues in a non-obvious way. I have looked around the code and specification several times and found a number of issues, most of which I have later found out were somehow mitigated by non-obvious means. It is quite possible that the same is true in this case. I think it's reasonable to wait for the person who knows the protocol best to provide insight.

@lvh: we should and will definitely be documenting known attacks.

@nbraud

Member

nbraud commented Jan 13, 2017

@zx2c4 As mentioned by others, the plan is to provide users a single-cutoff switch to a better protocol, with a documented threat model & security claims.

The current “slapdash protocol”, along with its lack of an actual spec and of a robust implementation, is what we inherited from @irungentoo. As @iphydf mentioned, the goal is to first gain an understanding of where we stand and develop a robust codebase, so as to be able to provide a sane upgrade path.

Of course, part of that is documenting the current protocol's failings, in particular which security properties it fails to provide, under which threat models, and why they are relevant to users.
I don't believe, however, that putting up a big fat warning that “everything is broken” is accurate or helpful to users.

@nbraud

Member

nbraud commented Jan 13, 2017

@lvh Sure, see above answer. You were just a bit too fast ;-)

PS: I should have specified that I'm not the most active contributor here, in part due to issues outside of my control, so don't take my opinion as representative of what other TokTok contributors think.

@paragonie-scott

paragonie-scott commented Jan 13, 2017

Just an aside:

The best thing to do in situations like this is to make a clean break. Start over with a secure protocol (in this case AKE) rather than try to smoothly transition users towards a secure protocol and introduce downgrade attacks.

@GrayHatter

GrayHatter commented Jan 13, 2017

@GrayHatter When you say "an attack that would quickly become discovered", is that because you're asserting that adversaries can't compromise keys without you finding out, or is there some other subtlety I'm missing?

Because of how the protocol works, if someone else tried to impersonate you, your friends would rapidly connect and disconnect from you. You can see what this would look like in the client by running the same tox "profile" on two systems at the same time.

@eternaleye

eternaleye commented Jan 13, 2017

@GrayHatter: The issue of KCI is not "I stole your key, now I can pretend to be you" - it's "I stole your key; now whenever you try to talk to someone, I can gaslight you instead, pretending to be them"

This is best combined with any of the MANY techniques for network-level interception, such that you never even have a chance to talk to anyone but the attacker

(This then trivially bootstraps to a fully-general MITM).

@kebolio

kebolio commented Jan 13, 2017

I would like to mention that "Noise" could also be called "homebrewed crypto", in that someone has actually sat down and written it. It is also Yet Another Encrypted Messaging Protocol, as if there weren't already enough of them (OTR, the insecure Axolotl gimmick).

@lvh

lvh commented Jan 13, 2017

@kebolio I don't think that's a statement you could get cryptographers to support (certainly not me). Noise is peer-reviewed, and explicitly highlights many issues and how it addresses them, including specifically AKE KCI.

@nbraud

Member

nbraud commented Jan 13, 2017

@eternaleye As far as I understand @GrayHatter's point, the user being impersonated (Alice) would see the user whose key was compromised (Bob) rapidly connect and disconnect while the attack is ongoing.
Of course, running the attack while Alice is offline likely sidesteps the issue.

@eternaleye

eternaleye commented Jan 13, 2017

@nbraud That would be true if it wasn't trivial to deny the connection to Alice using network-level techniques.

@iphydf iphydf added the CAT:security label Jan 14, 2017

@lvh

lvh commented Jan 14, 2017

Protocols aren't primitives. Secure primitives certainly do not imply secure protocols (example: AES is a secure block cipher, but AES-ECB is clearly not a secure way to encrypt messages). Secure protocols mostly imply secure primitives. (Counterexample: a protocol using HMAC-MD5 doesn't have forgery issues even though MD5 is not a secure hash function.)
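The AES-ECB point fits in a tiny stdlib-only sketch, with a keyed hash truncation as a stand-in "block cipher" (an assumption for illustration, not real AES): the primitive can be perfectly strong, yet deterministic per-block use still leaks which plaintext blocks are equal.

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Deterministic keyed transform with the same call shape as one AES
    # block operation (not invertible, so purely illustrative).
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, blocks):
    # "ECB mode": each block encrypted independently, no nonce, no chaining.
    return [toy_block_encrypt(key, b) for b in blocks]

pt = [b"attack at dawn!!", b"attack at dawn!!", b"retreat at dusk!"]
ct = ecb_encrypt(b"k" * 16, pt)

# Equal plaintext blocks map to equal ciphertext blocks: a pattern leak
# regardless of how strong the underlying primitive is.
assert ct[0] == ct[1] and ct[0] != ct[2]
```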

There are several levels of "homebrew" or "roll-your-own" cryptography:

  • Designing your own block ciphers or hash functions.
  • Designing your own compositions of primitives, like AE or MAC.
  • Designing your own protocols, like TLS or Noise.

This vulnerability exists on that third level. As a consequence, this isn't a repudiation of NaCl or libsodium. They're excellent libraries. Curve25519 is a DH primitive, and there's no DH vulnerability here. The problem is that it's not an AKE, and that's what you're using it as. The docs clearly enumerate what it does and does not do:

Security model

crypto_scalarmult is designed to be strong as a component of various well-known "hashed Diffie–Hellman" applications. In particular, it is designed to make the "computational Diffie–Hellman" problem (CDH) difficult with respect to the standard base.

crypto_scalarmult is also designed to make CDH difficult with respect to other nontrivial bases. In particular, if a represented group element has small order, then it is annihilated by all represented scalars. This feature allows protocols to avoid validating membership in the subgroup generated by the standard base.

NaCl does not make any promises regarding the "decisional Diffie–Hellman" problem (DDH), the "static Diffie–Hellman" problem (SDH), etc. Users are responsible for hashing group elements.

For example, this clearly states that you're responsible for hashing group elements, which ostensibly the Tox AKE does not do. If you build an AKE, there are other documented aspects of Curve25519 to consider; for example, some AKE protocols require contributory behavior, which means that in Curve25519 you're (exceptionally) required to consider representations of points of low order (see https://cr.yp.to/ecdh.html).
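"Hashing group elements" can be as small as the following sketch: an HKDF-style extract/expand built from stdlib hmac/hashlib, applied to the raw DH output plus a transcript hash. The labels, output length, and transcript-as-salt choice are assumptions for illustration, not the NaCl or Tox API.

```python
import hashlib, hmac

def kdf(raw_dh: bytes, transcript: bytes, info: bytes = b"session-key") -> bytes:
    # Extract: concentrate the raw DH output's entropy, salted by the
    # handshake transcript so the key is bound to this particular exchange.
    prk = hmac.new(transcript, raw_dh, hashlib.sha256).digest()
    # Expand: derive a labeled output key from the pseudorandom key.
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

raw = b"\x07" * 32                       # placeholder raw DH shared secret
key = kdf(raw, hashlib.sha256(b"msg1|msg2").digest())
assert len(key) == 32                    # a uniform-looking key, not a group element
```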

The claim that libsodium doesn't give you the tools to produce a secure AKE is incorrect. Firstly, you can do a traditional signed key exchange. Secondly, Noise is a proof by construction: there are implementations of the Noise protocol available on the site, and you'll see that it defines a KCI-secure AKE that you can implement using nothing but NaCl/libsodium.
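To make the contrast concrete, here is a toy sketch of the Noise-style fix (the same modular-DH stand-in as above; every name is an illustrative assumption, not the actual Noise wire format): the session key also mixes a DH between the initiator's ephemeral and the responder's static key, which Mallory cannot compute even with Alice's stolen static key.

```python
import hashlib, secrets

P = (1 << 127) - 1   # toy Mersenne-prime group standing in for X25519
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return pow(G, priv, P), priv

def dh(priv, pub):
    return pow(pub, priv, P)

def session_key(ee: int, es: int) -> bytes:
    # Hash the combined DH results (cf. Noise's chained key derivation).
    return hashlib.sha256(f"{ee}|{es}".encode()).digest()

SA_pub, SA_priv = keypair()   # Alice static; SA_priv stolen by Mallory
SB_pub, SB_priv = keypair()   # Bob static; Mallory has only SB_pub
EA_pub, EA_priv = keypair()   # Alice ephemeral
EM_pub, EM_priv = keypair()   # Mallory ephemeral, sent while posing as Bob

# Alice mixes ephemeral-ephemeral AND ephemeral-static(Bob) DHs, so the
# responder must prove knowledge of SB_priv to land on the same key.
alice_key = session_key(dh(EA_priv, EM_pub), dh(EA_priv, SB_pub))

# Mallory knows SA_priv, EM_priv, and all public keys, but neither
# SB_priv nor EA_priv, so dh(E_A, S_B) is out of reach; she cannot match.
mallory_key = session_key(dh(EM_priv, EA_pub), dh(SA_priv, SB_pub))
assert alice_key != mallory_key
```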

Finally, as much as I try to draw this conversation away from individuals and towards technical discussion, I hope you'll find that I've tried pretty hard, both here and in general, to provide constructive contributions and to educate those who'll listen. And I do tell people to consult a cryptographer, although you could do a lot worse than NaCl as a set of solid primitives :)

If a chainsaw does a bad job of cutting an apple, it's not a bad chainsaw.

@zx2c4

zx2c4 commented Jan 14, 2017

I'll respond to a few small obvious things I've seen since I left the thread alone yesterday.

@irungentoo's hubris / "KCI ain't that bad"

You are fucked if you get your key stolen. There are so many more fun things you can do if you steal someone's key that I simply didn't bother trying to handle that case because it would not provide any actual security.

This isn't as true for modern AKEs, which give pretty nice security properties that your handrolled naiveté just doesn't account for. How so, you ask?

  • Compromise one key, A, with Tox AKE --> mount an active man in the middle attack on an infinite quantity of keypairs (A, {everybody else}).
  • Compromise one key, A, with modern mutual AKE --> man in the middle attack not feasible.
  • Compromise two keys, A & B, with modern mutual AKE --> man in the middle attack feasible between one keypair (A, B).

So, if you quantify it this way, in terms of "number of full man in the middle attacks feasible after compromising N keys", a modern mutual AKE is infinity times better than the Tox AKE.

If you're serious about doing things right, you wouldn't hubristically categorize things as "any actual security" so hastily, when in fact there's a massive body of research and human accomplishment that's preceded your novice crypto. Open your eyes. Read some papers. Humble yourself while looking at your species, in awe of the wonderful cryptographic techniques created before you. So you made a mistake. We all do. Time to educate yourself and improve now.

"Since we use NaCl, we must be safe, unless DJB is an idiot!"

Making a protocol is different from using safe primitives. There are dragons at every step of the process. NaCl came with an implementation of CurveCP -- a protocol -- which exists accessibly as libchloride. Are you using libchloride? No you're not. So not only do you fail to use NaCl in a safe way, you fail to use the protocol that NaCl says is safe!

"It's still more safe than cleartext or Skype"

Maybe true (if you ignore the C implementation vulnerabilities in toxcore), but there are easily accessible things more secure than Tox that actually use modern cryptography, rather than Tox's handrolled non-peer-reviewed no-security-model drivel. So it seems obvious to recommend that you use those actually secure protocols instead. When folks expect for the word "secure" to indicate what cryptographers consider acceptable, using "secure" for something like Tox is disingenuous and even potentially dangerous.

"I'm confused and don't understand KCI; my analogies are incorrect!"

Probably this discussion is not for you, then. Also, you probably shouldn't be developing cryptographic software in that case.

"Cryptographers are a secret Illuminati club"

No they're not. They're just people who took the time to study and actively engage in peer review and open constructive criticism.

"We'll never quit! Help us, or get lost! You can't break our spirits!"

It's not really about that. By all means, continue to develop software and expand your experience and education, in private. But publicly releasing and promoting software with known fundamental insecurities is irresponsible and reckless.

"But we can't just take down the network because it's a DHT"

You can remove code repositories, pre-built binaries, websites (contacting the owners of tox.{im,chat}), etc. You can also add big red disclaimers "DO NOT USE - EXPERIMENTAL & INSECURE" to every medium you have.

@iphydf's admissions

We have a largely undocumented, untested, and not well-understood code base of about 19 ksloc (C).
Tox provides some strong security guarantees. We haven't got to the point where we can enumerate them properly, given the general lack of understanding of the code and specification.

I appreciate the honesty here. It's also a pretty good indication that you should start from scratch. Determine what security goals you want; then develop software around them. The fact that nobody understands the code, the crypto, or the protocol all but admits complete failure. You can't guarantee something if you don't know what you're guaranteeing, or the mechanism by which this guarantee is brought about. You admitted this yourself -- that because you don't understand Tox, "any change we make has the potential to make Tox less secure, running counter to our goal."

Heeding advice

You've got cryptographers and security experts telling you to shut down and take a more conservative approach. Your reaction has been one of pride and stubbornness. Yes, you've worked very hard on this and it's your baby, so you want to keep it. But responsibility is important. Providing software that does not provide adequate security under the label of "secure software" is dishonest and irresponsible. The webpage touts Tox as "VERY secure", which it clearly is not.

This doesn't have to do with sabotage or demoralization. It's about responsibility.

Hang up the red "we're only an experiment" banners, or abandon ship.

@paragonie-scott

paragonie-scott commented Jan 14, 2017

Hang up the red "we're only an experiment" banners, or abandon ship.

Here's an image that @Bascule created for this exact use case:

Danger: Experimental

Here's the Markdown code to embed in READMEs, etc.

![Danger: Experimental](https://camo.githubusercontent.com/275bc882f21b154b5537b9c123a171a30de9e6aa/68747470733a2f2f7261772e6769746875622e636f6d2f63727970746f7370686572652f63727970746f7370686572652f6d61737465722f696d616765732f6578706572696d656e74616c2e706e67)

To be clear: There is no shame in your project being experimental. One of mine proudly emblazons itself as an experiment until such a time that it can be audited by a team of penetration testers and cryptographers.

I would suggest you take roughly this course of action:

  1. Slap up the image above.
  2. Figure out how to implement a protocol like Noise into Tox, and ask a cryptographer to review it.
  3. Develop your newer protocol based on info from step 2.
  4. When you think you're ready, ask a (ideally, different) cryptographer to review your implementation.
  5. If they give it a clean bill of health, publish their findings and ask the original cryptographer to peer-review it.
  6. If all is well, then you can call yourself secure again, until someone else finds a protocol flaw that can compromise your security goals. Hopefully it won't be an obvious or trivial one.

Every once in a while a few tox devs get together to play https://github.com/OpenRA/OpenRA/. I don't mean to derail this thread too much, but @lvh @azet @kebolio @eternaleye @paragonie-scott would you like to play a few games with us?

Sorry, I didn't see your message last night. I'm afraid I must decline due to other responsibilities. (I barely have the free time to play video games with my closest friends these days.)

@bvrulez

bvrulez commented Jan 14, 2017

The fact that tox is providing a fast text and voice messaging service without a server (of a company) in the middle is important to users. I am mostly concerned about my data being stored with somebody else (and synchronised between clients), and not so much about the random chance that a single conversation might be hacked. By insisting on labeling this with a big "danger" banner, people actively destroy the potential of this application. I am a user with no business here, I just wanted to make clear that the fine points of cryptographic security might make up the last 10% of this. 90% are already there. If a professional cryptographer wants to code the rest, why not? :)

@JFreegman

Member

JFreegman commented Jan 14, 2017

@zx2c4

So, if you quantify it this way, in terms of "number of full man in the middle attacks feasible after compromising N keys", a modern mutual AKE is infinity times better than the Tox AKE.

Perhaps your reasoning is flawed if it leads you to such a hyperbolic conclusion.

It's not really about that. By all means, continue to develop software and expand your experience and education, in private.

You should read my previous response more carefully.

The webpage touts Tox as "VERY secure", which it clearly is not.

Tox's security claims assume that your private key remains private. I think this is a reasonable assumption, as there is no software in the world that can be considered "VERY secure" if your private key has been compromised. There are only varying degrees of fucked, which most of us agree should be limited as best as reasonably possible.

This doesn't have to do with sabotage or demoralization. It's about responsibility.

According to your idea of responsibility, the internet in its entirety should be shut down, as known security vulnerabilities range from ARP all the way up to HTTPS. Security is not a black and white issue, and I would expect a self-proclaimed expert who is so sure of himself that he thumbs up his own posts to know this.

@lvh

lvh commented Jan 14, 2017

I have spent countless hours of my life providing free cryptographic consultancy and design, up to and including literally writing a book and then giving it away for free.

I find your suggestion that I am explaining cryptography on an ostensibly underfunded crypto project to actually be about making shit up so I can scare people into giving me money ridiculous and offensive.

@cebe

Member

cebe commented Jan 14, 2017

@lvh thank you for that! As far as I see, @JFreegman's comment is addressed to @zx2c4, not to you.

@GrayHatter

GrayHatter commented Jan 14, 2017

First, @lvh, you're always welcome here. I appreciate your reasonable and rational responses, so I'm not going to respond line by line. I assume you care about crypto, and about teaching how to use it correctly. I'll just hit the broad points.

There are several levels of "homebrew" of "roll-your-own" cryptography:

Right, but would you disagree that, in security/crypto circles, it's used derogatorily to refer to shit code written by someone with no idea what they're doing? So unless you're trying to imply that the original author fucked up, don't you think it becomes a bit problematic, if not outright insulting?

Also, it doesn't even apply in this case. We're not even rolling our own crypto. An argument can be made that we've created a "crypto system", but that's even a hard sell, given we're using the NaCl API as the documentation instructs.

The claim that libsodium doesn't give you the tools to produce a secure AKE is
incorrect.

I must have missed that part of the NaCl documentation, as NaCl compatibility was one of the original design goals for Toxcore. (While I'm here, I'm also going to mention again that tox.chat already warns users not to get their key stolen. If you'd like to have a separate discussion on the merits of THAT warning, we should open a new issue.) Also, let's remind everyone that @irungentoo, the original author of the codebase, was aware of the attack vector and decided not to include it as a part of the threat model.

Noise is a proof from construction; there are implementations of the Noise
protocol available on the site, and you'll see that it defines a KCI-secure AKE
that you can implement using nothing but NaCl/libsodium.

Link? The Noise stuff I saw didn't really offer ANY documentation, but then I never looked THAT hard.

Finally, as much as I try to draw this conversation away from individuals and
towards technical discussion, I hope you'll find that I've tried pretty hard
both here and in general to provide constructive contributions, and trying to
educate those who'll listen. And, I tell people to consult a cryptographer,
although you could do a lot worse than NaCl as a set of solid primitives :)

You've been awesome, as I said at the start, you're always welcome around here. (Hopefully you'll hit up HN again and answer my pending question to you)

If a chainsaw does a bad job of cutting an apple, it's not a bad chainsaw.

Right, but if you then call up the chainsaw maker and shit all over the work they've done, they're allowed to be offended.

@iphydf

Member

iphydf commented Jan 14, 2017

I believe this discussion has come to an end. We acknowledge that the issue exists and will work towards fixing it. We do welcome contributions in this direction.

@zx2c4 thank you for starting the discussion and giving the explanation in your report. I would appreciate if you could help review the PR that adds a notice to the website about the lack of security review.
@lvh thank you for further helping create a better understanding of the issue and ways to solve it. We (toktok team) appreciate all your help and would in no way consider it an act of malice. We continue to welcome reports of security flaws.

I will say this very clearly once again: there is an avoidable security flaw in the Tox handshake. This is not something someone made up. The effect is that if your secret key is stolen, an attacker can impersonate anyone to you. We will fix this issue, most likely by adopting Noise for handshakes.

I will post one more message on this issue and then lock it. Please contact me (my github email is public and I'm usually on IRC: iphy @ freenode) if you feel this decision is inappropriate. I am keeping this issue open until it is solved.
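
The asymmetry described above can be illustrated with a toy sketch. This is NOT real cryptography: textbook Diffie-Hellman over a Mersenne prime stands in for X25519, the key-derivation labels and variable names are invented for illustration, and both the actual Tox handshake and Noise IK differ in wire format and mix more material than shown. It only demonstrates which DH terms each design authenticates with, and why that matters for KCI:

```python
# Toy illustration only -- NOT real cryptography. Textbook DH over a Mersenne
# prime stands in for X25519. Shows why authenticating the handshake with ONLY
# the static-static DH is KCI-vulnerable, while also mixing in an
# ephemeral-static DH term (as Noise IK does) defeats the impersonation.
import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime: toy group modulus
G = 3           # toy generator

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def dh(priv, pub):
    return pow(pub, priv, P)

def kdf(label, *shared):
    data = label.encode() + b"|".join(str(s).encode() for s in shared)
    return hashlib.sha256(data).hexdigest()

# Long-term static keys for Alice and Bob; per-session ephemerals.
sA_priv, sA_pub = keypair()
sB_priv, sB_pub = keypair()
eA_priv, eA_pub = keypair()  # Alice's ephemeral
eM_priv, eM_pub = keypair()  # Mallory's ephemeral

# Mallory has stolen Alice's static private key and impersonates Bob TO Alice.
stolen_sA_priv = sA_priv

# --- Tox-style handshake: packets keyed only by the static-static DH ---
key_alice_expects = kdf("hs", dh(sA_priv, sB_pub))          # Alice's view of "Bob"
key_mallory_forges = kdf("hs", dh(stolen_sA_priv, sB_pub))  # the SAME value
assert key_alice_expects == key_mallory_forges  # forged "Bob" packet accepted

# The session key is then just the DH of the two ephemerals, both known here.
session_alice = kdf("sess", dh(eA_priv, eM_pub))
session_mallory = kdf("sess", dh(eM_priv, eA_pub))
assert session_alice == session_mallory  # full impersonation succeeds

# --- Noise-IK-style mix: Alice ALSO hashes in es = DH(eA, sB) ---
# Computing es requires eA_priv (never leaves Alice) or sB_priv (never
# stolen), so Mallory cannot reconstruct Alice's session key.
alice_es = dh(eA_priv, sB_pub)
mallory_best_guess = dh(stolen_sA_priv, eA_pub)  # best she can do
assert alice_es != mallory_best_guess
```

The point of the last block is that a Noise-IK-style responder must prove knowledge of its own static private key during the handshake, so a stolen initiator key alone is not enough to impersonate anyone to that initiator.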

@iphydf

Member

iphydf commented Jan 14, 2017

I would appreciate if all the collaborators could stop posting on this issue as well. I'm locking the conversation now.

@TokTok TokTok locked and limited conversation to collaborators Jan 14, 2017

@TokTok TokTok unlocked this conversation Jan 14, 2017

@lvh

lvh commented Jan 14, 2017

Two points: first, I was replying to @bvrulez; secondly, in my example the libsodium primitives are the chainsaw. Anyway, in fairness, it's worth stating that the actual maintainers have been courteous.

I'm going to check out from these threads because I'm not particularly interested in emotional abuse from the peanut gallery, but dear maintainers: you know where to find me if you'd like some free crypto advice. I'll try to remember to answer the KCI example question with some papers if you'd like some light reading :)

@TokTok TokTok locked and limited conversation to collaborators Jan 14, 2017

@TokTok TokTok unlocked this conversation Apr 18, 2017

@Halfwake

Halfwake commented Sep 1, 2017

I'm trying to get at the meat of this discussion. Is the following true?

With Tox, if you have your private key stolen, someone can impersonate your friends. There are protocols that make this impossible, but they require non-repudiation. The Tox specification was designed with the assumption that non-repudiation is more dangerous than impersonation.

@nazar-pc

nazar-pc commented Sep 1, 2017

@Halfwake

With Tox, if you have your private key stolen, someone can impersonate your friends.

This is true, someone can impersonate your friends to you.

There are protocols that make this impossible, but they require non-repudiation.

I think you're talking about deniability here. And in general you don't have to sacrifice deniability in order to be protected from this kind of potential vulnerability.

When talking about the current Tox implementation, then yes, you choose either deniability or protection against KCI; but as was discussed previously, the whole protocol should better be re-designed to address this issue fundamentally.

@fcore117

fcore117 commented Sep 1, 2017

At least in its current state, people get a very easy-to-use IM app that is still more secure than Skype. If someone can upgrade the protocol to be more secure quickly, that's good, but if it takes another 4 or more years, then no one will ever see an alternative to Skype.

It is a pain that a lot of people (including me) are forced to use the proprietary Skype or other bloated IM apps to communicate, which are usually server-based and can be easily blocked.

nazar-pc: if you know someone who can help speed up the C development within months rather than years, then call them here to develop; but otherwise, Tox in its current state can still save a lot of people from endless Skype slavery.

With more overall optimizations/fixes and the new group chats, Tox can be a serious alternative to Skype.

All security is broken anyway when someone bugs your PC or house, and in the worst-case scenario passwords will be revealed at gunpoint.

@nazar-pc

nazar-pc commented Sep 1, 2017

@fcore117, this is an issue about the technical implications of, and possible solutions for, the mentioned vulnerability, not about whether Tox is an alternative to anything. So let's keep the discussion on topic if you have anything to add to the point.
Also, I'm not representing the Tox team in any way and I'm not a part of it.

@bvrulez

bvrulez commented Sep 1, 2017

If the technical problem and its solution (a reorganisation of the whole code) would mean that no further development takes place on the most recent code base, while forking to the other code base would take something like 4 years to reach a similar state of working applications, then I would vote against that decision in spite of the technical issues. A parallel development would of course be welcome.

@lvh

lvh commented Sep 1, 2017

@Halfwake the part about KCI is true; the part about the necessary trade-off is not (in general) true. The property you're describing is slightly different from "non-repudiation" (a property of signatures, which are one way to get KCI resistance); instead you want "deniability", which other protocols offer. Deniability in this case means that the receiver can authenticate the sender, but the receiver cannot convince anyone else that the actual sender must be the sender of a given message. (This is different still from "indistinguishability", which means different things in different contexts, but in this context specifically it would usually mean that a passive network observer can't tell that you're speaking $PROTOCOL.)

@gitbugged

gitbugged commented Oct 2, 2017

This is true, someone can impersonate your friends to you.

From my understanding, isn't this something the clients can solve (rather than tox-core)? e.g. a button that implements socialist millionaire authentication...
