
EntropyString for Erlang

Efficiently generate cryptographically strong random strings of specified entropy from various character sets.

Build Status   Hex Version   License: MIT



Add to rebar.config

{deps, [
  {entropy_string, {git, "", {tag, "1.0.0"}}}
]}.

To build and run tests

> rebar3 compile
> rebar3 eunit



To run code snippets in the Erlang shell

> rebar3 compile
> erl -pa _build/default/lib/entropy_string/ebin
Erlang/OTP ...
1> l(entropy_string).

Generate a random string with enough entropy for a potential of 1 million strings with a 1 in a billion chance of repeat:

2> Bits = entropy_string:bits(1.0e6, 1.0e9).
3> entropy_string:random_string(Bits).

There are six predefined character sets. By default, random_string/1 uses charset32, a character set with 32 characters. To get a random hexadecimal string with the same entropy Bits as above (see Real Need for description of what entropy Bits represents):

2> Bits = entropy_string:bits(1.0e6, 1.0e9).
3> entropy_string:random_string(Bits, charset16).

Custom characters are also supported. Using uppercase hexadecimal characters:

2> Bits = entropy_string:bits(1.0e6, 1.0e9).
3> entropy_string:random_string(Bits, <<"0123456789ABCDEF">>).

Convenience functions are provided for common scenarios. For example, an OWASP session ID using charset32:

2> entropy_string:session_id().

Session ID using RFC 4648 file system and URL safe characters:

2> entropy_string:session_id(charset64).



entropy_string provides easy creation of randomly generated strings of specified entropy using various character sets. Such strings are useful as unique identifiers, for example random IDs, when the overkill of a GUID isn't wanted.

A key concern when generating such strings is that they be unique. Guaranteed uniqueness, however, requires either deterministic generation (e.g., a counter), which is not random, or that each newly created random string be compared against all existing strings. When randomness is required, the overhead of storing and comparing strings is often too onerous and a different tack is taken.

A common strategy is to replace the guarantee of uniqueness with a weaker but often sufficient probabilistic uniqueness. Specifically, rather than being absolutely sure of uniqueness, we settle for a statement such as "there is less than a 1 in a billion chance that two of my strings are the same". This strategy requires much less overhead, but it does require some manner of quantifying what we mean by a statement like "there is less than a 1 in a billion chance that 1 million strings of this form will have a repeat".

Understanding probabilistic uniqueness of random strings requires an understanding of entropy and of estimating the probability of a collision (i.e., the probability that two strings in a set of randomly generated strings might be the same). The blog posting Hash Collision Probabilities provides an excellent overview of deriving an expression for calculating the probability of a collision in some number of hashes using a perfect hash with an N-bit output. The Entropy Bits section below describes how entropy_string takes this idea a step further to address a common need in generating unique identifiers.
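To ground that idea, here is a small Python sketch (illustrative only; these function names are not part of entropy_string) comparing the exact birthday-problem collision probability with the approximation derived in that posting:

```python
from math import prod

def collision_prob_exact(k, m):
    # Probability that at least two of k uniformly random values
    # drawn from m possibilities coincide (the birthday problem).
    return 1.0 - prod((m - i) / m for i in range(k))

def collision_prob_approx(k, m):
    # First-order approximation from the Hash Collision
    # Probabilities posting: p ~= k*(k-1) / (2*m).
    return k * (k - 1) / (2 * m)

k, m = 1000, 2 ** 30
exact = collision_prob_exact(k, m)
approx = collision_prob_approx(k, m)
# For k much smaller than sqrt(m), the two agree closely.
```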

We'll begin investigating entropy_string by considering our Real Need when generating random strings.


Real Need

Let's start by reflecting on a common statement of need for developers, who might say:

I need random strings 16 characters long.

Okay. There are libraries available that address that exact need. But first, there are some questions that arise from the need as stated, such as:

  1. What characters do you want to use?
  2. How many of these strings do you need?
  3. Why do you need these strings?

The available libraries often let you specify the characters to use. So we can assume for now that question 1 is answered with:

Hexadecimal will do fine.

As for question 2, the developer might respond:

I need 10,000 of these things.

Ah, now we're getting somewhere. The answer to question 3 might lead to the further qualification:

I need to generate 10,000 random, unique IDs.

And the cat's out of the bag. We're getting at the real need, and it's not the same as the original statement. The developer needs uniqueness across some potential number of strings. The length of the string is a by-product of the uniqueness, not the goal, and should not be the primary specification for the random string.

As noted in the Overview, guaranteeing uniqueness is difficult, so we'll replace that declaration with one of probabilistic uniqueness by asking:

  • What risk of a repeat are you willing to accept?

Probabilistic uniqueness contains risk. That's the price we pay for giving up on the stronger declaration of strict uniqueness. But the developer can quantify an appropriate risk for a particular scenario with a statement like:

I guess I can live with a 1 in a million chance of a repeat.

So now we've gotten to the developer's real need:

I need 10,000 random hexadecimal IDs with less than 1 in a million chance of any repeats.

Not only is this statement more specific, there is no mention of string length. The developer needs probabilistic uniqueness, and strings are to be used to capture randomness for this purpose. As such, the length of the string is simply a by-product of the encoding used to represent the required uniqueness as a string.

How do you address this need using a library designed to generate strings of specified length? Well, you don't directly, because that library was designed to answer the originally stated need, not the real need we've uncovered. We need a library that deals with the probabilistic uniqueness of a total number of strings. And that's exactly what entropy_string does.

Let's use entropy_string to help this developer by generating 5 IDs:

2> Bits = entropy_string:bits(10000, 1000000).
3> lists:map(fun(_) -> entropy_string:random_string(Bits, charset16) end, lists:seq(1,5)).

To generate the IDs, we first use

Bits = entropy_string:bits(10000, 1000000).

to determine how much entropy is needed to generate a potential of 10000 strings while satisfying the probabilistic uniqueness of a 1 in a million risk of repeat. We can see from the output of the Erlang shell it's about 45.51 bits. Inside the call to lists:map we used

entropy_string:random_string(Bits, charset16)

to actually generate a random string of the specified entropy using hexadecimal (charset16) characters. Looking at the IDs, we can see each is 12 characters long. Again, the string length is a by-product of the characters used to represent the entropy we needed. And it seems the developer didn't really need 16 characters after all.

Finally, given that the strings are 12 hexadecimals long, each string actually has an information carrying capacity of 12 * 4 = 48 bits of entropy (a hexadecimal character carries 4 bits). That's fine. Assuming all characters are equally probable, a string can only carry entropy equal to a multiple of the amount of entropy represented per character. entropy_string produces the smallest strings that exceed the specified entropy.
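That length calculation can be sketched in Python (illustrative; `string_length` is a hypothetical helper, not part of entropy_string):

```python
from math import ceil, log2

def string_length(bits, charset_size):
    # Each character carries log2(charset_size) bits of entropy, so
    # the string length is the entropy rounded up to whole characters.
    return ceil(bits / log2(charset_size))

bits = 45.51                        # entropy_string:bits(10000, 1000000)
length = string_length(bits, 16)    # hexadecimal characters
capacity = length * 4               # information carrying capacity in bits
```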


Character Sets

As we've seen in the previous sections, entropy_string provides predefined characters for each of the supported character set lengths. Let's see what's under the hood. The predefined character sets are charset64, charset32, charset16, charset8, charset4 and charset2. The characters for each were chosen as follows:

  • CharSet 64: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_
    • The file system and URL safe char set from RFC 4648.  
  • CharSet 32: 2346789bdfghjmnpqrtBDFGHJLMNPQRT
    • Remove all upper and lower case vowels (including y)
    • Remove all numbers that look like letters
    • Remove all letters that look like numbers
    • Remove all letters that have poor distinction between upper and lower case values. The resulting strings don't look like English words and are easy to parse visually.  
  • CharSet 16: 0123456789abcdef
    • Hexadecimal  
  • CharSet 8: 01234567
    • Octal  
  • CharSet 4: ATCG
    • DNA alphabet. No good reason; just wanted to get away from the obvious.  
  • CharSet 2: 01
    • Binary

You may, of course, want to choose the characters used, which is covered next in Custom Characters.


Custom Characters

Being able to easily generate random strings is great, but what if you want to specify your own characters? For example, suppose you want to visualize flipping a coin to produce 10 bits of entropy.

2> entropy_string:random_string(10, charset2).

The resulting string of 0's and 1's doesn't look quite right. Perhaps you want to use the characters H and T instead.

2> entropy_string:random_string(10, <<"HT">>).

As another example, we saw in Character Sets that the predefined hex characters for charset16 are lowercase. Suppose you like uppercase hexadecimal letters instead.

2> entropy_string:random_string(48, <<"0123456789ABCDEF">>).

To facilitate efficient generation of strings, entropy_string limits character set lengths to powers of 2. Attempting to use a character set of an invalid length returns an error.

2> entropy_string:random_string(48, <<"123456789ABCDEF">>).
{error,<<"Invalid char count: must be one of 2,4,8,16,32,64">>}

Likewise, since calculating entropy requires specification of the probability of each symbol, entropy_string requires all characters in a set be unique. (This maximizes entropy per string as well.)

2> entropy_string:random_string(48, <<"123456789ABCDEF1">>).
{error,<<"Chars not unique">>}



To efficiently create random strings, entropy_string generates the necessary number of random bytes needed for each string and uses those bytes in a binary pattern matching scheme to index into a character set. For example, to generate strings from the 32 characters in the charset32 character set, each index needs to be an integer in the range [0,31]. Generating a random string of charset32 characters is thus reduced to generating random indices in the range [0,31].

To generate the indices, entropy_string slices just enough bits from the random bytes to create each index. In the example at hand, 5 bits are needed to create an index in the range [0,31]. entropy_string processes the random bytes 5 bits at a time to create the indices. The first index comes from the first 5 bits of the first byte, the second index comes from the last 3 bits of the first byte combined with the first 2 bits of the second byte, and so on as the bytes are systematically sliced to form indices into the character set. And since binary pattern matching is really efficient, this scheme is quite fast.
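The slicing scheme can be mimicked in Python (a sketch of the idea, not the actual Erlang binary pattern matching):

```python
def indices(data: bytes, bits_per_index: int):
    # Slice the byte stream bits_per_index bits at a time, mirroring
    # the way entropy_string forms indices into a character set.
    out = []
    acc, acc_bits = 0, 0
    for byte in data:
        acc = (acc << 8) | byte
        acc_bits += 8
        while acc_bits >= bits_per_index:
            acc_bits -= bits_per_index
            out.append(acc >> acc_bits)   # take the top bits as an index
            acc &= (1 << acc_bits) - 1    # keep the leftover bits
    return out

# Four random bytes yield six 5-bit indices (two bits left over).
ndxs = indices(b"\xfa\xc8\x96\x64", 5)
```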

The entropy_string scheme is also efficient with regard to the amount of randomness used. Consider the following possible Erlang solution to generating random strings. To generate a character, an index into the available characters is created using rand:uniform/1. The code looks something like:

-module(random_string).

-export([len/1]).

-define(CHARS, <<"abcdefghijklmnopqrstuvwxyz012345">>).
-define(LEN, 32).

%% Extract the character at the given index from the character set.
char(Ndx) ->
  Offset = Ndx * 8,
  <<_Skip:Offset, Char:8, _Rest/binary>> = ?CHARS,
  Char.

%% Generate a random index in [0, ?LEN).
ndx(_) ->
  rand:uniform(?LEN) - 1.

len(Len) ->
  list_to_binary([char(Ndx) || Ndx <- lists:map(fun ndx/1, lists:seq(1, Len))]).

2> c(random_string).
3> random_string:len(16).

In the code above, rand:uniform/1 generates a value used to index into the 32-character set. The Erlang docs indicate that each returned random value has 58 bits of precision. Suppose we're creating strings with Len = 16. Generating each string character consumes 58 bits of randomness while injecting only 5 bits (log2(32)) of entropy into the resulting random string. The resulting string has an information carrying capacity of 16 * 5 = 80 bits, so creating each string requires a total of 928 bits of randomness while actually carrying only 80 of those bits forward in the string itself. That means 848 bits (91%) of the generated randomness is simply wasted.
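The arithmetic behind that waste estimate, as a quick Python check:

```python
# Randomness consumed vs. entropy carried for the rand:uniform/1
# scheme above: each character consumes 58 bits of randomness but
# contributes only 5 bits (log2(32)) to the string.
length = 16
consumed = length * 58           # 928 bits generated
carried = length * 5             # 80 bits carried in the string
wasted = consumed - carried      # 848 bits discarded
waste_pct = round(100 * wasted / consumed)
```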

Compare that to the entropy_string scheme. For the example above, plucking 5 bits at a time requires that a total of 80 bits (10 bytes) be available. Creating the same strings as above, entropy_string uses 80 bits of randomness per string with no wasted bits. In general, the entropy_string scheme can waste up to 7 bits per string, but that's the worst case, and it's per string, not per character!

There is, however, a potentially bigger issue at play in the above code. Erlang's rand module does not use a cryptographically strong pseudo-random number generator. So the above code should not be used for session IDs or any other purpose that requires secure properties.

There are certainly other popular ways to create random strings, including secure ones. For example, given an appropriate bin_to_hex/1 function, generating a secure random hex string can be done with

2> bin_to_hex(crypto:strong_rand_bytes(8)).

Or you could use base64 like this:

2> base64:encode(crypto:strong_rand_bytes(8)).

Since Base64 encoding is concerned with decoding as well, you would have to strip any padding characters. And the characters used are not URL or file system safe. You could do subsequent character substitution, but at that point you're going down a rabbit hole by co-opting a function for purposes it wasn't designed for.

These two solutions each have their limitations. You can't alter the characters, but more importantly, each lacks a clear specification of how random the resulting strings actually are. Each specifies byte length as opposed to specifying the entropy bits sufficient to represent some total number of strings with an explicitly declared risk of repeat, using whatever encoding characters you want.

Fortunately, you don't really need to understand how secure random bytes are efficiently sliced and diced to use entropy_string. But you may want to provide your own Custom Bytes, which is the next topic.


Custom Bytes

As previously described, entropy_string automatically generates the cryptographically strong random bytes used to produce strings. You may, however, need to provide your own bytes, for deterministic testing or perhaps to use a specialized random byte generator.

Suppose we want a string capable of 30 bits of entropy using 32 characters. We can specify the bytes to use during string generation by

2> Bytes = <<16#fac89664:32>>.
3> entropy_string:random_string(30, entropy_string:charset(charset32), Bytes).

The Bytes provided can come from any source. However, an error is returned if the number of bytes is insufficient to generate the string as described in the Efficiency section:

2> Bytes = <<16#fac89664:32>>.
3> entropy_string:random_string(32, charset32, Bytes).
{error,<<"Insufficient bytes: need 5 and got 4">>}

Note that the number of bytes needed depends on the number of characters in the character set. For a string representation of entropy, we can only have multiples of the entropy bits per character. In the example above, each character represents 5 bits of entropy, so we can't get exactly 32 bits; we round up by the bits per character to a total of 35 bits. That requires 5 bytes (40 bits), not 4 (32 bits).
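The rounding described above can be sketched in Python (`bytes_needed` here is a hypothetical re-implementation for illustration, not the library function):

```python
from math import ceil, log2

def bytes_needed(bits, charset_size):
    # Round the entropy up to a whole number of characters,
    # then round the resulting bits up to whole bytes.
    bits_per_char = int(log2(charset_size))
    total_bits = ceil(bits / bits_per_char) * bits_per_char
    return ceil(total_bits / 8)

needed = bytes_needed(32, 32)   # 7 chars * 5 bits = 35 bits -> 5 bytes
```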

entropy_string:bytes_needed/2 can be used to determine the number of bytes needed to cover a specified amount of entropy for a given character set.

2> entropy_string:bytes_needed(32, charset32).


Entropy Bits

Thus far we've avoided the mathematics behind the calculation of the entropy bits required to specify a risk that some number of random strings will not have a repeat. As noted in the Overview, the posting Hash Collision Probabilities derives an expression, based on the well-known Birthday Problem, for calculating the probability of a collision in some number of hashes (denoted by k) using a perfect hash with an output of M bits:

1/n ≈ k(k-1) / (2M)

There are two slight tweaks to this equation as compared to the one in the referenced posting. M is used for the total number of possible hashes and an equation is formed by explicitly specifying that the expression in the posting is approximately equal to 1/n.

More importantly, the above equation isn't in a form conducive to our entropy string needs. The equation was derived for a set number of possible hashes and yields a probability, which is fine for hash collisions but isn't quite right for calculating the bits of entropy needed for our random strings.

The first thing we'll change is to use M = 2^N, where N is the number of entropy bits. This simply states that the number of possible strings is equal to the number of possible values using N bits:

1/n ≈ k(k-1) / 2^(N+1)

Now we massage the equation to represent N as a function of k and n:

2^(N+1) ≈ n k(k-1)

N ≈ log2(n) + log2(k) + log2(k-1) - 1

The final line represents the number of entropy bits N as a function of the number of potential strings k and the risk of repeat of 1 in n, exactly what we want. Furthermore, the equation is in a form that avoids really large numbers in calculating N since we immediately take a logarithm of each large value k and n.
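The final expression translates directly to code; here is a Python sketch (a hypothetical `entropy_bits`, mirroring what entropy_string:bits/2 computes):

```python
from math import log2

def entropy_bits(total, risk):
    # N ~= log2(n) + log2(k) + log2(k - 1) - 1, taking logarithms of
    # the large values k and n immediately to keep numbers small.
    return log2(risk) + log2(total) + log2(total - 1) - 1

bits = entropy_bits(10 ** 4, 10 ** 6)   # the Real Need example
```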


Take Away

  • You don't need random strings of length L.
    • String length is a by-product, not a goal.
  • You don't need truly unique strings.
    • Uniqueness is too onerous. You'll do fine with probabilistically unique strings.
  • Probabilistic uniqueness involves measured risk.
    • Risk is measured as "1 in n chance of generating a repeat"
    • Bits of entropy gives you that measure.
  • You need a total of k strings with a risk of 1 in n of a repeat.
    • The characters are arbitrary.
  • You need entropy_string.

A million potential IDs with a 1 in a billion chance of a repeat:

2> entropy_string:random_string(entropy_string:bits(1.0e6, 1.0e9)).

