I've been working on an upcoming modeling practice blog post and I needed (once again) to implement a card-shuffling algorithm. As always, I turned to the trusty Fisher-Yates shuffle to shuffle my hypothetical cards, but in the process of doing so I was struck by the realization that I didn't actually understand what this commonplace algorithm was doing.

So, not content to let sleeping knowledge lie, I dove into learning about this algorithm and why, exactly, it is so efficient at shuffling. Come along with me as we learn more (quite possibly too much) about the Fisher-Yates card shuffling algorithm!

(Image: a deck of cards, slightly spread out, with the Ace of Hearts on top. Prepare to be SHUFFLED! Photo by Daniel Rykhev / Unsplash)

The Original Method

The algorithm itself was described in 1938 by statisticians Ronald Fisher and Frank Yates as a method by which a person could put a collection of items into a random order. The algorithm relies on a source of random numbers; their pen-and-paper version simply used a table of random numbers.

The algorithm can be defined like this:

  1. GIVEN: A collection of items which we want to put into a random order.
  2. FIRST: Randomly select one of the "unshuffled" items.
  3. THEN: Add that item to the end of a new, separate collection.
  4. FINALLY: Remove the selected item from the source collection.
  5. REPEAT UNTIL: All items have been moved to the new collection.

Visualizing the Original Method

It might be easier to visualize this. Let's say we have the letters A-H which we would like to shuffle. Our initial collection might look like this:

{ A, B, C, D, E, F, G, H }

We now need to randomly pick one of these letters. Let's say our random pick is G. We need to move G to a new, separate collection and remove it from the source collection:

{ A, B, C, D, E, F, G, H } => { A, B, C, D, E, F, H } and { G }

We can keep doing this with the remaining letters until the source collection is empty.

{ A, B, C, D, E, F, H } and { G } => { A, B, C, E, F, H } and { G, D }

{ A, B, C, E, F, H } and { G, D } => { A, C, E, F, H } and { G, D, B }

{ A, C, E, F, H } and { G, D, B } => { A, C, E, F } and { G, D, B, H }

{ A, C, E, F } and { G, D, B, H } => { A, C, E } and { G, D, B, H, F }

{ A, C, E } and { G, D, B, H, F } => { C, E } and { G, D, B, H, F, A }

{ C, E } and { G, D, B, H, F, A } => { C } and { G, D, B, H, F, A, E }

{ C } and { G, D, B, H, F, A, E } => { } and { G, D, B, H, F, A, E, C }

Now the second collection is the shuffled collection, and we can use that for our calculations.

Implementing the Original Method

Here's the original method in C#:

Random r = new Random();

List<char> unshuffledLetters = new List<char>() { 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H' };
List<char> shuffledLetters = new List<char>();

//Step 1: For each remaining unshuffled letter
for (int n = unshuffledLetters.Count; n > 0; n--)
{
    //Step 2: Randomly select one of the remaining unshuffled letters
    int k = r.Next(n);

    //Step 3: Place the selected letter in the shuffled collection
    char temp = unshuffledLetters[k];
    shuffledLetters.Add(temp);

    //Step 4: Remove the selected letter from the unshuffled collection
    unshuffledLetters.RemoveAt(k);
}
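
Just to sanity-check the result, we might print the shuffled collection (a quick, hypothetical usage line; since the picks are random, the output will differ on every run):

//Print the shuffled letters; the order will vary on every run
Console.WriteLine(string.Join(", ", shuffledLetters));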

Issues with the Original Method

The method originally devised by Fisher and Yates worked wonderfully when this kind of shuffling was done with pen-and-paper calculations. Once the age of computers fully arrived, however, computer scientists such as Richard Durstenfeld noticed that the time complexity of this naive algorithm was higher than it needed to be: removing the selected item from the source collection means scanning past or shifting all of the remaining items, so the n removals add up to roughly n + (n-1) + ... + 1 operations, which is O(n²).

Durstenfeld then proposed (and Donald Knuth popularized) a variant on the original algorithm, which is now termed the Knuth shuffle or the "modern" method.

The Modern Method

The modern method of the Fisher-Yates algorithm is a slightly modified version of the original. The steps look something like this:

  1. GIVEN: A collection of items which we want to put into a random order.
  2. FIRST: Randomly select one of the "unshuffled" items.
  3. THEN: Swap the selected item with the last item in the collection that has not yet been swapped into place.
  4. CONTINUE UNTIL: There are no remaining "unshuffled" items.

Visualizing the Modern Method

Now let's see if we can visualize how the modern method works. Let's pretend we have the same collection of letters as earlier:

{ A, B, C, D, E, F, G, H }

The first step of the algorithm is to randomly select an item (let's say we pick D) and swap it with the last "unswapped" item in the array (in this case, H). That move looks something like this:

{ A, B, C, D, E, F, G, H } => { A, B, C, H, E, F, G, D }

Now we need to pick the next random unshuffled item (let's say it's E) and swap it with the last unshuffled item (G):

{ A, B, C, H, E, F, G, D } => { A, B, C, H, G, F, E, D }

We can continue on like this until the collection is fully shuffled:

{ A, B, C, H, G, F, E, D } => { A, F, C, H, G, B, E, D }

{ A, F, C, H, G, B, E, D } => { A, F, C, G, H, B, E, D }

{ A, F, C, G, H, B, E, D } => { G, F, C, A, H, B, E, D }

{ G, F, C, A, H, B, E, D } => { G, C, F, A, H, B, E, D }

{ G, C, F, A, H, B, E, D } => { C, G, F, A, H, B, E, D }

Benefits of the Modern Method

Besides an improvement in time complexity (going from O(n²) to O(n), which mostly matters for larger data sets), the primary benefit of the modern method is that it is an in-place algorithm; the shuffle happens within the existing collection rather than needing two collections (source and destination) as the original method does. This is beneficial when dealing with large data sets, as you will only ever need one copy of the data rather than two.

Implementing the Modern Method

Here's the algorithm implemented in C#:

Random r = new Random();
List<char> unshuffledLetters = new List<char>() { 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H' };
//Step 1: For each unshuffled item in the collection
for (int n = unshuffledLetters.Count - 1; n > 0; --n)
{
    //Step 2: Randomly pick an item which has not been shuffled
    int k = r.Next(n + 1);

    //Step 3: Swap the selected item with the last "unshuffled" letter in the collection
    char temp = unshuffledLetters[n];
    unshuffledLetters[n] = unshuffledLetters[k];
    unshuffledLetters[k] = temp;
}
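
If you find yourself shuffling collections in several places, one way to package this (just a sketch, with hypothetical names like ListExtensions and Shuffle) is to wrap the loop in a generic extension method so that any list can be shuffled in place:

public static class ListExtensions
{
    private static readonly Random r = new Random();

    //Hypothetical helper: shuffle any list in place using the modern Fisher-Yates method
    public static void Shuffle<T>(this IList<T> items)
    {
        for (int n = items.Count - 1; n > 0; --n)
        {
            int k = r.Next(n + 1);
            T temp = items[n];
            items[n] = items[k];
            items[k] = temp;
        }
    }
}

With that in place, shuffling our letters becomes a one-liner: unshuffledLetters.Shuffle();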

Summary

Shuffling a collection of items is one of the most-studied problems in the world of computer science, and the modern version of the Fisher-Yates algorithm has consistently proven to be one of the most efficient ways to do it. Hopefully now you understand more thoroughly what exactly these algorithms do, and why they do it.

Happy Coding!