
NEVER Reuse Passwords

This cannot be over-emphasized. The traditional [Dolev-Yao] model assumes that the network is compromised but the nodes are secure; that is not a reasonable assumption today and probably never will be. Except on a wireless network, an attacker cannot control a link without first controlling a node, so the nodes, not the links, are the weak point. Therefore, we should assume that the server administrator is honest and competent but overworked, possibly naive, and certainly fallible. We must assume that the server will be compromised sooner or later. Reusing passwords under this threat model is utterly foolish.

Why is this really such a problem? Why can't we assume that the server will stay safe?

http://arstechnica.com/security/2013/05/its-official-password-strength-meters-arent-security-theater/?comments=1&post=24483593

http://blog.crackpassword.com/2013/02/yahoo-dropbox-and-battle-net-hacked-stopping-the-chain-reaction/

leaked_password_lists_and_dictionaries. (2012, Jul 13). In The Password Project. Retrieved 21:26, July 10, 2013, from http://thepasswordproject.com/doku.php?id=leaked_password_lists_and_dictionaries&rev=1342207197.

[Steve Gibson's password haystack strategy](https://www.grc.com/haystack.htm) is certainly workable for a password that you know will be protected by a good slow key derivation function. It is not, however, appropriate for a password that might be stored or seen in plain text -- i.e., any server-side password. If you use the same padding pattern for all of your passwords, as Steve suggests, and your pattern gets revealed by a careless application developer, a smart cracker can use that pattern against you. He does warn against this, but not strongly enough, in my opinion.

Use Random Passwords

Each security domain should have a long, unique, random password. Uniqueness prevents a compromise of one domain from leading to compromises in others. Randomness prevents success through an online dictionary attack. We assume that administrators will rate-limit and block any high-speed online attack, so such attacks will succeed only against very poor passwords. Length and complexity force the attacker to mount a brute-force attack if he obtains a hashed password file. Once we have committed to storing each password, generating unique random passwords is easy (see the sketch after the list below). Complexity is limited only by the rules imposed by the security domain's policy. For your entertainment, I present the following less-than-optimal password complexity policies:

Hunt, Troy (2011), "Who's Who of Bad Password Practices", http://www.troyhunt.com/2011/01/whos-who-of-bad-password-practices.html

The [Visa/Wells Fargo password policy](http://blog.zorinaq.com/?a=2011-m05#e54) makes the password far too short.

[GoDaddy](http://support.godaddy.com/help/article/2653/generating-a-strong-password) allows a good long password, but restricts the alphabet to four out of 33 possible special characters.
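
As a minimal sketch of how simple this is, here is one way to generate such a password with Python's standard library; the length and character set below are arbitrary illustrative choices, constrained only by the domain's policy:

```python
import secrets
import string

def random_password(length=24,
                    alphabet=string.ascii_letters + string.digits + string.punctuation):
    """Generate a long, random password drawn uniformly from the given alphabet."""
    return ''.join(secrets.choice(alphabet) for _ in range(length))

# Each security domain gets its own freshly generated password,
# which is then stored in the (encrypted) password database.
print(random_password())
```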

Use An Expensive KDF

A key derivation function (KDF) is a special cryptographic tool that turns your weak, memorable, low-entropy password into something that at least appears to be random -- something that is usable as a secret key for encrypting data. And it does this in a way that is not generally reversible. We want this function to be slow and expensive because you know your password and only have to run the function once each time you unlock your database, so waiting 1 or 2 seconds is not a big deal. The attacker, however, has to guess your password, and if you chose it well, he will have to guess it many times. The slower the function is, the longer it will take him to find your password.
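
As a minimal sketch (the salt handling and iteration count here are illustrative assumptions, not a recommendation), Python's standard library can derive an encryption key from a password like this:

```python
import hashlib
import os

password = b"correct horse battery staple"  # the weak, memorable secret
salt = os.urandom(16)                        # random salt, stored with the ciphertext

# Deliberately expensive: a high iteration count makes every guess costly.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200000, 32)

print(key.hex())  # a 256-bit key suitable for encrypting the password database
```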

There are three main candidates for your KDF. The standard one that most people use nowadays is called PBKDF2. Most people use it with SHA-1, a very fast hash function. The problem is that the attacker's super cracking hardware is designed especially for doing SHA-1 very quickly -- not what you wanted to hear. The second choice, bcrypt, is much slower to compute. The third choice, a relatively new invention, is called scrypt. What makes scrypt special is that not only can you make it very slow, you can also make it take up a lot of memory -- not what the attacker wanted to hear: his super cracking hardware doesn't have much memory, because memory would make it too expensive. In effect, scrypt forces the attacker to use ordinary computers to check each password guess. So unless he really wants your password, he will probably say, "Scrypt? Never mind. I'll go hack someone who uses SHA-1."
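
For comparison, here is a sketch of the scrypt variant; the cost parameters below are illustrative assumptions (roughly "interactive login" strength), and Python only exposes `hashlib.scrypt` when built against a sufficiently recent OpenSSL:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# n (CPU/memory cost), r (block size), and p (parallelism) control how much
# time *and* RAM each guess costs the attacker; these values are illustrative.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

print(key.hex())
```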

We assume that if the attacker has obtained the password file, he also has access to any other unencrypted data stored on the same server, so a good password no longer benefits us. (The attacker has already won this domain.) However, it may benefit others who do not use unique passwords, because it will absorb some of the attacker's time, especially if the passwords are hashed with an expensive KDF. The attacker will attempt to crack all of the passwords only in the hope of obtaining credentials that are reused. If the hashes are created with scrypt or bcrypt, the attacker may simply give up and move on to lower-hanging fruit, as his hardware is designed for simpler hash functions.

Encourage the Use of Secure Authentication Mechanisms

SRP with scrypt is ideal: 1) the plaintext password never needs to traverse the network; 2) the verifier stored by the server is expensive to crack. Client certificates (X.509, PGP, or SSH) are also a good option, but may require a more complex infrastructure.

Use TLS Carefully

If the security domain administrator insists on using a plaintext authentication system, it should at the very least be protected by TLS. If not, then it must be a very low-value site, so there is really nothing to be done beyond using a unique password.

Configure TLS to prefer ephemeral session keys over long-term keys (perfect forward secrecy).
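
As a rough sketch, a client using Python's ssl module could restrict itself to ephemeral (ECDHE/DHE) key exchange; the exact cipher string is an assumption and depends on the underlying OpenSSL build:

```python
import socket
import ssl

# Build a context that only offers ephemeral key exchange (forward secrecy).
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")  # cipher string is illustrative

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.cipher())  # the negotiated (ephemeral) cipher suite
```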

If the attacker also controls which root certificates the browser trusts (e.g. a corporate IT department), he may establish a rogue TLS proxy that can transparently decrypt all TLS traffic. We can partially address this by rejecting any certificate that is not known-good by our own criteria. While this constitutes a denial of service, it can prevent disclosure of credentials. In general, it is unwise to access high-value domains with a client that you do not absolutely control.
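
One way to implement "known-good by our own criteria" is to pin the expected certificate's fingerprint and refuse to authenticate on a mismatch. This is only a sketch: the pinned value below is a hypothetical placeholder that would be recorded out of band from the legitimate server.

```python
import hashlib
import socket
import ssl

# SHA-256 fingerprint of the server certificate, recorded out of band.
# (Hypothetical placeholder value.)
PINNED_FINGERPRINT = "replace-with-known-good-sha256-hex"

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        der_cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        if fingerprint != PINNED_FINGERPRINT:
            # Fail closed: a denial of service, but no credentials are disclosed.
            raise ssl.SSLError("unexpected certificate; refusing to authenticate")
        # ...only now is it safe to send credentials over this connection...
```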

Use Hardened Client Configuration for High-Value Sites

To protect the credentials of high-value sites, we can use a hardened browser configuration. This includes disabling non-essential plugins and scripts, and strictly limiting which domains the browser instance can access. As mentioned above, we can manually configure which certificates are trusted and reject all connections via non-trusted certificates. All non-hardened configurations should be prevented from accessing the high-value domains.

Even for other sites, where disabling plugins and scripts may not be desirable, limiting session length can reduce the risk of BEAST-like attacks from a man-in-the-browser. Limiting the number of open tabs can encourage shorter sessions. If the user can only open, for example, five or nine tabs, he is much less likely to leave any open and unattended for long periods of time. Some users have reported that this also helps them focus better and be more productive. Research should allow us to come up with a more productive way to use tabs than just allowing them to pile up until they collapse.