15 Years of Mistakes: why everybody created insecure passwords


(image: TechRadar)

The previous article of this series gave an overview of why human-generated passwords are weak (they're already out there, they're predictable, and they're short) and where all those bizarre password creation rules came from (a mid-level manager at a standardization institute who was tasked with writing them, but actually had no clue about computers, passwords, or security in general).

This article, the second part of the series, focuses on why passwords that were fully compliant with the NIST recommendations from 2003 turned out to be insecure.

Changing the point of view

One of the major mistakes Bill Burr made while compiling NIST Special Publication 800-63 - Appendix A was to reason from a computer's point of view while forgetting that his guidelines would be followed by humans to challenge a machine. In other words, by using Shannon's information theory to try to improve the entropy of human-chosen passwords, Burr produced rules that were formally correct and yet inconsistent with their purpose. The goal of higher complexity was only apparently achieved; worse, several critical details of computing were overlooked in favor of human-biased rules, which ended up hiding the fundamental weaknesses of the resulting passwords when challenged by a properly structured computer attack.

Throughout this series, the attacker is always a computer program controlled by a hacker whose goal is to guess the password. Burr's mistakes can be summarized as follows.

Starting from dictionary words is intuitive for humans, but gives attackers a jumpstart.

Using a common word as a starting point is natural for humans, as it gives them an immediate, intuitive reference to build upon. However, it undermines robustness from the get-go, because it breaks one of the key principles of security: part of the secret can be inferred, since it is already public knowledge. For example, the scrambled keyword Antid0t& can immediately be traced back to the original word antidote. The issue is subtle: a human eye recognizes the underlying word at first sight, while a computer has no such intuition - hence Burr assumed that the higher entropy of the scrambled form translated into higher robustness. Yet the correlation between the letters of the original word remains, and an attacker can exploit it using dictionaries: word lists that are very easy to retrieve and that include all known existing words (and some that don't exist - more on that later).
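
To make this concrete, here is a minimal Python sketch of such a dictionary attack. The word list, the substitution table, and the SHA-256 target are illustrative assumptions, not the internals of any real cracking tool:

```python
import hashlib
from itertools import product

# Illustrative stand-ins: real attacks use word lists with millions of
# entries and far richer mangling rules.
WORDLIST = ["password", "sunshine", "antidote"]
SUBS = {"a": "a@4", "e": "e3&", "i": "i1!", "o": "o0", "t": "t7"}

def variants(word):
    """Yield every substitution variant of a word, plain and capitalized."""
    pools = [SUBS.get(c, c) for c in word.lower()]
    for combo in product(*pools):
        candidate = "".join(combo)
        yield candidate
        yield candidate.capitalize()

def crack(target_hash):
    """Try each mangled dictionary word against an unsalted SHA-256 hash."""
    for word in WORDLIST:
        for candidate in variants(word):
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# "Antid0t&" falls after a few hundred guesses, not trillions.
print(crack(hashlib.sha256(b"Antid0t&").hexdigest()))
```

Even though Antid0t& draws on all four character sets, the sketch recovers it after testing only a few hundred candidates, because the attacker never has to search the full 95-character alphabet - only the handful of variants of each dictionary word.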

Character substitution, alphabets, and the depths of space

The main body of Burr's rules required a keyword to contain at least one uppercase letter, one lowercase letter, one number, and one special character. The purpose of these rules was to increase the so-called search space depth, also known as the alphabet of a keyword: the sum of the sizes of all the character sets from which the keyword is drawn. This number measures how hard the keyword is to break with a brute-force attack, in which a computer tries all possible combinations of all characters from all those sets until a match is found.

For instance, a password composed only of numbers has a search space depth of 10 (the digits 0 through 9); using the English alphabet gives a depth of 26; using both uppercase and lowercase letters gives a depth of 52 (26 lowercase + 26 uppercase - a single letter in either case is enough to pull in the whole set, as explained below); and the set of special characters contributes a further 33.

Search space depth is an incremental function: one character from any set - no matter which character - is enough to force the entire corresponding set into the alphabet of a brute-force attack. So, for example, the starting word antidote has a search space depth of 26 (being composed only of lowercase letters), but Antidote has a depth of 26+26=52 (because a brute-force attack must now try all combinations of lowercase and uppercase letters), Antid0te has a depth of 26+26+10=62 (adding the ten digits to the alphabet), and Antid0t& has a depth of 26+26+10+33=95, because the introduction of the special character & forces the attacker to extend the search to all special characters.
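
As a quick illustration - a sketch assuming the four character sets above, using Python's string module - the depth of each variant can be computed like this:

```python
import string

# The four character sets used above: 26 lowercase, 26 uppercase,
# 10 digits, and 33 specials (32 punctuation marks plus the space).
SETS = [string.ascii_lowercase, string.ascii_uppercase,
        string.digits, string.punctuation + " "]

def search_space_depth(password):
    """Sum the sizes of every character set the password draws from."""
    return sum(len(s) for s in SETS if any(c in s for c in password))

for pw in ["antidote", "Antidote", "Antid0te", "Antid0t&"]:
    print(pw, search_space_depth(pw))   # 26, 52, 62, 95
```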

At this point, according to Burr's rules, the resulting keyword complexity would be high enough to make it "secure". Burr based his conclusion on the evaluation of the search space size, which is defined as:

Search Space Size: the count of all possible passwords over a given alphabet, up to a given password length.

The search space size of antidote (depth=26, length=8) is 217,180,147,158 (or 2.17 x 10^11); while the search space of Antid0t& (depth=95, length=8) is 6,704,780,954,517,120 (or 6.70 x 10^15). The difference between them is four orders of magnitude, which Burr's calculations estimated to be "good enough" to thwart a brute-force attack.
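
These figures are just a geometric sum - for depth d and maximum length L, the size is d + d² + ... + d^L - which a few lines of Python can verify:

```python
def search_space_size(depth, max_length):
    """Count every password of length 1..max_length over an alphabet
    of `depth` characters: the geometric sum of depth**i."""
    return sum(depth ** i for i in range(1, max_length + 1))

for depth, word in [(26, "antidote"), (95, "Antid0t&")]:
    print(f"{word}: {search_space_size(depth, 8):,}")
# antidote: 217,180,147,158
# Antid0t&: 6,704,780,954,517,120
```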

Unfortunately, as we know, he was wrong: according to real-world data on attack efficiency versus password strength, antidote would be guessed immediately, and Antid0t& would - in the best case - be guessed after 4 hours (half the time it takes to exhaust its search space), but would most likely fall in as little as one minute and ten seconds.
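
As a back-of-the-envelope check - a sketch derived from the article's own figures, not from any specific cracking rig - those timings imply a guessing rate in the hundreds of billions per second:

```python
size = 6_704_780_954_517_120      # search space size of Antid0t&
seconds = 8 * 3600                # full exhaustion: twice the 4-hour average
print(f"{size / seconds:.2e} guesses per second")  # ~2.33e+11
```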

That is possible because two mistakes were made. First, in formulating his recommendations, Burr assumed that an attacker would simply brute-force their way through the password, so the substitution rules were introduced to maximize the search space depth and make brute force unfeasible within a reasonable time. In doing so, however, he neglected that CPUs keep getting faster and more efficient as technology evolves, following Moore's law. Second, and most crucially, Burr did not consider that the attacker could use a method smarter than brute force.

The next article will explain how the original NIST guidelines have been defeated by better and more sophisticated hacker attacks.

(to be continued in part 3)

