Odd-Even Sort is a headstrong little girl.

The middle daughter of Heap Sort and Cocktail Shaker Sort, she has recently discovered she has a particular skill, a skill that makes her a nightmare to people like her parents. This skill has allowed her to get extra dessert, more screen time, even fewer chores; in short, everything a ten-year-old middle child could ever dream of.

That skill? Getting one parent to say yes when the other already said no.

She's sneaky about it. She will wait minutes, hours even, just to strike the unsuspecting parent at exactly the right moment. She knows just when her opportunity arises: when mom gets home from work, when dad is dealing with her baby brother Bogosort, when her elder sister Counting Sort needs to get to soccer practice. All her life she's been perfecting this skill, and now she's a master.

Here's hoping she uses her powers for good. Or at least extra ice cream.

- **Created By:** N. Haberman
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- FOR each item in an odd-numbered position
- COMPARE that item against the item in the next-highest position.
- IF the elements are out of order, SWAP them.
- FOR each item in an even-numbered position
- COMPARE that item against the item in the next-highest position.
- IF the elements are out of order, SWAP them.
- GOTO 1, CONTINUE until list is sorted.

The base visualization from Wikipedia is cool to look at but doesn't help me understand this algorithm at all. So let's demonstrate the algorithm with a simple set of numbers. Here's our unsorted collection:

{ 15, 62, 42, 29, 17 }

On the first pass of the algorithm, in the first step, we compare the element in position 1 against the element in position 2. Position 1 is 15, position 2 is 62, so these are in order.

The next comparison is between position 3 (42) and position 4 (29). Those elements are out of order, so the algorithm swaps them, resulting in:

{ 15, 62, 29, 42, 17 }

The next move would compare the element in position 5 against the one in position 6, but since there *isn't* one in position six, the algorithm instead switches to doing the even-numbered comparison.

The first even comparison (position 2 to position 3) results in:

{ 15, 29, 62, 42, 17 }

The second even comparison results in:

{ 15, 29, 62, 17, 42 }

Now we go back to the odd-numbered comparisons:

{ 15, 29, 17, 62, 42 }

Now back to even-numbered:

{ 15, 17, 29, 62, 42 }

{ 15, 17, 29, 42, 62 }

And now the array is sorted! It took us four passes to do this, and if you're thinking that seems pretty inefficient, you are not wrong.
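If it helps to see those passes in code, here's a quick Python sketch (not from the sample project, just a compact illustration) that records the list after every odd and even pass:

```python
def odd_even_sort_passes(items):
    """Odd-even sort; records a snapshot after every odd and even pass."""
    a = list(items)
    snapshots = []
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        # Odd pass: compare positions 1&2, 3&4, ... (1-indexed)
        for i in range(0, len(a) - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                is_sorted = False
        snapshots.append(list(a))
        # Even pass: compare positions 2&3, 4&5, ... (1-indexed)
        for i in range(1, len(a) - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                is_sorted = False
        snapshots.append(list(a))
    return a, snapshots

result, passes = odd_even_sort_passes([15, 62, 42, 29, 17])
# passes[0] is the list after the first odd pass: [15, 62, 29, 42, 17]
```

Running this reproduces exactly the intermediate states above; the four working passes are followed by one final swap-free sweep that confirms the list is sorted.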

Here's the "audibilization" of this algorithm from YouTube:

```
class OddEvenSort
{
    public static void Sort(int[] array, int length)
    {
        bool isSorted = false;
        while (!isSorted)
        {
            isSorted = true;

            //Swap i and i+1 if they are out of order, for i == odd numbers
            for (int i = 1; i <= length - 2; i = i + 2)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSorted = false;
                }
            }

            //Swap i and i+1 if they are out of order, for i == even numbers
            for (int i = 0; i <= length - 2; i = i + 2)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSorted = false;
                }
            }
        }
    }

    public static void Main()
    {
        int[] array = { 71, 42, 19, 3, 33, 28, 0, 89, 44, 2, 81 };
        int length = array.Length;
        CommonFunctions.PrintInitial(array);
        Sort(array, length);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

Given that Odd-Even Sort is a derivative of Bubble Sort, we would expect the worst-case time complexity to be similar, and indeed it is; for both algorithms it is *O(n²)*. The best-case scenario for Odd-Even Sort is remarkably better at *O(n)*, though, and the space complexity is constant at *O(1)*.
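That *O(n)* best case is easy to demonstrate: on an already-sorted list, the algorithm makes one full sweep, finds nothing to swap, and stops. A little counting sketch in Python (`odd_even_sweep_count` is a helper invented just for this illustration):

```python
def odd_even_sweep_count(items):
    """Counts full odd+even sweeps until a sweep makes no swaps."""
    a = list(items)
    sweeps = 0
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        sweeps += 1
        for start in (0, 1):  # odd pass, then even pass
            for i in range(start, len(a) - 1, 2):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    is_sorted = False
    return sweeps

# A sorted list needs just one (swap-free) sweep; a reversed one needs many.
```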

Odd-Even Sort, being a derivative of Bubble Sort, is not known to be an efficient algorithm. That said, it has some uses in specific cases (as seen on its Wikipedia page), so it's not totally without merit. Plus, she's really good at getting extra dessert. Maybe, if you're nice to her, she'll share with you.

Don't forget to check out the sample project over on GitHub!

Happy Coding!


The indefatigable Bubble Sort is beloved by her whole family, provided she isn't in the room.

The eight-year-old daughter of Heap Sort and Cocktail Shaker Sort, she absolutely *loves* to tell you all about her day: what things she ate, this cool car she saw on the way over, how her little brother is annoying her at that particular moment, what she wants for her birthday in eight months. Everything.

Her parents adore her, but sometimes the family wishes she would just shut up for five minutes. All except for grandma Merge Sort, who could listen to little Bubbly's stories and exclamations for days. No wonder she loves going to Grandma's house.

- **Created By:** Unknown (Popularized by Donald Knuth)
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n²)*
- **Average-Case:** *O(n²)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- FOR each item in the collection
- IF two adjoining elements are in the wrong order, SWAP them.
- GO TO 1, CONTINUE until the list is sorted.

This animation is a good example of Bubble Sort, because like Bubble Sort it takes FOREVER to get anything done. So let's try another example.

Say we have the following collection:

{ 5, 8, 2, 4, 1 }

Bubble sort will swap elements on each pass that are not in the correct order. Here's the position of the collection elements after each swap:

{ 5, 2, 8, 4, 1 }

{ 5, 2, 4, 8, 1 }

{ 5, 2, 4, 1, 8 }

{ 2, 5, 4, 1, 8 }

{ 2, 4, 5, 1, 8 }

{ 2, 4, 1, 5, 8 }

{ 2, 1, 4, 5, 8 }

{ 1, 2, 4, 5, 8 }

To accomplish this sort, the algorithm had to complete four passes over the array just to sort five elements. That's wildly inefficient.
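The eight intermediate states above can be reproduced mechanically; here's a short Python sketch (an illustration, not the sample project code) that records the collection after every swap:

```python
def bubble_sort_swaps(items):
    """Bubble sort; records a snapshot of the list after every swap."""
    a = list(items)
    snapshots = []
    for _ in range(len(a) - 1):
        for j in range(len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                snapshots.append(list(a))
    return a, snapshots

result, swaps = bubble_sort_swaps([5, 8, 2, 4, 1])
# swaps holds the eight states listed above, ending in [1, 2, 4, 5, 8]
```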

...But cool to watch. Check out this "audibilization" from YouTube:

```
class BubbleSort
{
    public static void Main(string[] args)
    {
        int[] array = { 92, 28, 3, 71, 50, 14, 24, 20, 66, 70, 45, 17, 9, 99, 38 };
        int temp;

        Console.WriteLine("Bubble Sort");
        CommonFunctions.PrintInitial(array);

        //1. For each item in the array...
        for (int i = 0; i <= array.Length - 2; i++)
        {
            for (int j = 0; j <= array.Length - 2; j++)
            {
                //2. ...if two adjoining elements are in the wrong order, swap them.
                if (array[j] > array[j + 1])
                {
                    temp = array[j + 1];
                    array[j + 1] = array[j];
                    array[j] = temp;
                }
                //3. Repeat this for all pairs of elements.
            }
        }

        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

Bubble Sort is, to put it frankly, the worst sorting algorithm that isn't trying to be bad. The *best*-case scenario is still *O(n²)*. It is inefficient, slow, and generally a poor-performing sort, despite how easy the algorithm is to implement. That simplicity is really this algorithm's only redeeming value: it's easy to teach, in a "this is what you should not do" sort of way.
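Worth noting: a common variant (not used in the sample above) adds a `swapped` flag and stops as soon as a full pass makes no swaps, which brings the best case, an already-sorted list, down to *O(n)*. A Python sketch of that variant:

```python
def bubble_sort_early_exit(items):
    """Bubble sort with an early-exit flag; also counts comparisons."""
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break  # no swaps on this pass: the list is already sorted
    return a, comparisons

# On a sorted list of 10 items, only one pass (9 comparisons) is needed.
```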

In short, Bubble Sort should not be used for real-world sorting scenarios.

Bubble Sort is a highly inefficient sorting algorithm that traverses through a collection, swapping neighboring values if they are out of order. It has an implementation only a mother could love. Or, perhaps, a grandmother?

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Bogosort is adorable, when he isn't running away.

The two-year-old son of Heap Sort and Cocktail Shaker Sort, he is possibly the most mischievous scamp this side of the Mississippi, or at least that's what his grandfather would say.

A favorite trick of his happens when the family plays poker, or Uno, or any other game involving cards. He sneaks up to the table (he can be very quiet when he wants to be) and runs off with as many cards as his little hands can grab. As you might imagine, this dramatically improves the game, at least in his tiny opinion. The adults who have to spend time chasing him down, vaulting over dogs and rounding corners at dangerously high speeds, might disagree.

- **Created By:** Unknown
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n)*
- **Average-Case:** *O((n + 1)!)*
- **Worst-Case:** Unbounded
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- IF the list is not sorted...
- SHUFFLE the list randomly
- GO TO 1, CONTINUE until list is sorted.

A visualization for this particular sorting algorithm is kind of ridiculous. The procedure is just "shuffle randomly until sorted", very much like taking a deck of cards, throwing them into the air, gathering them up, and repeating this until the deck is in the correct order.

If you think this algorithm is amazingly stupid, you are not wrong.

But it's not the slowest algorithm ever. That honor goes to Bogobogosort, an algorithm that was specifically designed to be as slow as possible while still doing a ton of work. The author's own guess at that algorithm's time complexity is *O(n!^{n!})*. That's just crazy.

That said, the "audibilization" of Bogosort is pretty neat: a ton of noise for not very much gain:

```
class BogoSort
{
    //Single Random instance; constructing a new one inside Shuffle on every call
    //can repeat sequences, since Random is seeded from the system clock.
    static readonly Random r = new Random();

    static void Main(string[] args)
    {
        List<int> list = new List<int>() { 2, 1, 3, 0 };
        Console.WriteLine("BogoSort");
        Console.WriteLine("Sorting...");
        Sort(list);
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }

    static void Sort(List<int> list)
    {
        int iteration = 0;
        while (!IsSorted(list))
        {
            PrintIteration(list, iteration);
            list = Shuffle(list); //Shuffle the numbers randomly
            iteration++;
        }
        PrintIteration(list, iteration); //Print the final, sorted iteration
        Console.WriteLine();
        Console.WriteLine("BogoSort completed after {0} iterations.", iteration);
    }

    static void PrintIteration(List<int> list, int iteration)
    {
        Console.Write("BogoSort iteration #{0}: ", iteration);
        foreach (var value in list)
        {
            Console.Write($"{value} ");
        }
        Console.WriteLine();
    }

    static bool IsSorted(List<int> list)
    {
        for (int i = 0; i < list.Count - 1; i++)
        {
            if (list[i] > list[i + 1])
            {
                return false;
            }
        }
        return true;
    }

    //Shuffle algorithm based on the Fisher-Yates method.
    static List<int> Shuffle(List<int> numbers)
    {
        //Step 1: For each unshuffled item in the collection
        for (int n = numbers.Count - 1; n > 0; --n)
        {
            //Step 2: Randomly pick an item which has not been shuffled
            int k = r.Next(n + 1);

            //Step 3: Swap the selected item with the last "unstruck" item in the collection
            int temp = numbers[n];
            numbers[n] = numbers[k];
            numbers[k] = temp;
        }
        return numbers;
    }
}
```

Any time you see a factorial (!) in a math equation, and you want that number to be small, you're gonna have a bad time. The average case for Bogosort is *O((n + 1)!)*. This means that, as the size of the collection increases, the time the algorithm requires to shuffle the collection increases *dramatically*. There's a reason why the implementation above has only four numbers in the starting collection; any more and I was getting hundreds of permutations before finding a solution (if one was found at all).
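To make that growth concrete, here's a quick Python sketch (an illustration only; the shuffle is seeded so the run is repeatable, and `math.factorial` shows how fast the expected work explodes):

```python
import math
import random

def bogosort(items, seed=42):
    """Shuffle until sorted, counting shuffles. Seeded for repeatability."""
    rng = random.Random(seed)
    a = list(items)
    shuffles = 0
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        rng.shuffle(a)
        shuffles += 1
    return a, shuffles

# Average-case growth, O((n + 1)!), for a few collection sizes:
for n in (4, 6, 8, 10):
    print(n, math.factorial(n + 1))
```

Even at n = 10, the expected number of shuffles is already in the tens of millions.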

Obviously, Bogosort isn't meant to be taken seriously as a sorting algorithm. It's not useful in the real world; the algorithm can be reduced to "shuffle until sorted" which might take forever. That said, it can be useful, in a "do not do this" kind of way. Just make sure he can't reach the cards.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Cocktail Shaker Sort would like a drink. A strong one, if you please.

The father of four rambunctious children and husband to the always-working Heap Sort, "Shaky" (a loving nickname for him which is never said to his face) is a stay-at-home dad who absolutely adores his children but just wants some alone time, is that so much to ask?

The problem is that, when he gets his alone time (rare as it is), he would greatly prefer to curl up with his beloved Heapy, but failing that, he'll make a stiff drink that the rest of the family can't be in the same room as. Not that he minds. Heapy is his best friend, his companion, his true love, but the rest of her family he could take or leave. Nevertheless, his kids love him, and the family has accepted him, even if he hasn't quite gotten around to feeling the same way.

- **Created By:** Donald Knuth (1973)
- **AKA:** Bidirectional Bubble Sort
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n)*
- **Average-Case:** *O(n²)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- PASS THROUGH the collection from left-to-right.
- DURING that pass, SELECT the highest encountered value.
- DEPOSIT that value at the end of the considered range.
- DECREASE the end of the considered range by 1.
- PASS THROUGH the collection from right-to-left.
- DURING that pass, SELECT the lowest encountered value.
- DEPOSIT that value at the beginning of the considered range.
- INCREASE the start of the considered range by 1.
- GO TO 1, CONTINUE until the range is 0.

Lucky for us, Cocktail Shaker sort is an easy sort to understand. On each pass through the collection, it either finds the highest or lowest encountered value (depending on pass direction) and deposits that value at the beginning or end of the considered range. In fact, Cocktail Shaker sort is really just Bubble Sort but in both directions.
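The same bidirectional idea, condensed into a Python sketch (the full C# version follows below):

```python
def cocktail_shaker_sort(items):
    """Bubble sort in both directions, shrinking the range from each end."""
    a = list(items)
    start, end = 0, len(a) - 1
    swapped = True
    while swapped:
        swapped = False
        for i in range(start, end):  # left-to-right: float the max to the end
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
        end -= 1  # the highest value is now in place
        swapped = False
        for i in range(end - 1, start - 1, -1):  # right-to-left: sink the min
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        start += 1  # the lowest value is now in place
    return a
```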

The "audibilization" of this sort is particularly interesting, because it's simple to understand without knowing the underlying math and logic:

```
class CocktailShakerSort
{
    static void Sort(int[] array)
    {
        bool isSwapped = true;
        int start = 0;
        int end = array.Length;
        while (isSwapped == true)
        {
            //Reset this flag. It is possible for this to be true from a prior iteration.
            isSwapped = false;

            //Do a bubble sort on this array, from low to high. If something changed, make isSwapped true.
            for (int i = start; i < end - 1; ++i)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSwapped = true;
                }
            }

            //If no swaps are made, the array is sorted.
            if (isSwapped == false)
                break;

            //We need to reset the isSwapped flag for the high-to-low pass
            isSwapped = false;

            //The item we just moved is in its rightful place, so we no longer need to consider it unsorted.
            end = end - 1;

            //Now we bubble sort from high to low
            for (int i = end - 1; i >= start; i--)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSwapped = true;
                }
            }

            //Finally, we need to increase the starting point for the next low-to-high pass.
            start = start + 1;
        }
    }

    public static void Main()
    {
        int[] array = { 15, 10, 83, 5, 7, 42, 65, 50, 58, 71, 61 };
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

You may have heard before that Bubble Sort's complexities for all cases are *O(n²)*. Cocktail Shaker Sort, being a direct derivative of Bubble Sort, is not much better; the average and worst-case complexities do not improve, and only the best-case scenario (where the list is already sorted) improves to *O(n)*.

About the only good thing Cocktail Shaker Sort has going for it is a space complexity of constant value, *O(1)*. But that's not very useful in the real world. In short, Cocktail Shaker Sort is not a valuable implementation of sorting for anything other than educational or theoretical problems.

Cocktail Shaker Sort is an easy-to-understand, simple-to-implement derivative of Bubble Sort. However, it's not a terribly useful sort in the real world, as it takes too long to do much of anything. Plus, he makes a drink that'll burn your eyeballs out. Don't say I didn't warn you.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Heap Sort, the second daughter of the family, has a busy life.

As the owner of a local, well-liked restaurant, Heap Sort's Heaping Helpings, she is constantly running around trying to make her restaurant better. As the mom of four kids, she and husband Cocktail Shaker Sort are constantly trying to stay one step ahead of their children's antics, not always with success.

Heap Sort is nearly the exact opposite of her mother Merge Sort. She's a fantastic chef and her pies are legendary around town. Moreover, she absolutely adores cooking for other people, and has offered to teach her dear Mom a thing or two (she politely declined). Because of her success, the family regards her as the star child, much to older sister Insertion Sort's annoyance.

Heap Sort has a good life, and she knows it. Busy, yes; but then again, who isn't?

- **Created By:** J. W. J. Williams (1964)
- **Type:** Selection
- **Stable?** No
- **Best-Case:** *O(n log n)*
- **Average-Case:** *O(n log n)*
- **Worst-Case:** *O(n log n)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- CONSTRUCT a "max" heap from the unsorted data that satisfies the heap property.
- SWAP the first and final elements. The final element is now sorted.
- DECREMENT the range of "considered" elements (those still needing to be sorted) by 1.
- RESTRUCTURE the heap to satisfy the heap property (again).
- SWAP the new root with the last item in the range of considered items.
- CONTINUE until the range of considered items is 1.

This is one of the more complicated visualizations in this series. I'm going to break it down into two steps.

The unsorted chart looks like this:

In order to even use Heap Sort, we must first satisfy something known as the heap property. The heap property (for a "max heap", which is what we will use in this post) is:

"For any given node C, if P is a parent node of C, then thekey(thevalue) of P is greater than or equal to the key of C"

The problem with the collection above is that the nodes do not satisfy this property; the value at the beginning (or "top") of the list is not the highest value. To make the property true, the algorithm sorts the collection into the following:

Now this collection satisfies the heap property: for any node C which has a parent P (a "parent" here meaning "appears earlier in this list"), said parent P has a value greater than C. This is what the next image in the animation is trying to convey:

This diagram is an outline of the heap, and the tree structure it represents. Because we now have a fully-balanced heap, we can proceed to the next step.
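For an array-backed heap, node *i*'s children live at indices *2i + 1* and *2i + 2*, so the heap property can be checked in a couple of lines. Here's a small helper sketch in Python (`is_max_heap` is invented for illustration, not part of the sample project):

```python
def is_max_heap(a):
    """True if every element is <= its parent (the max-heap property)."""
    return all(a[(i - 1) // 2] >= a[i] for i in range(1, len(a)))

# is_max_heap([9, 5, 8, 1, 2, 7]) is True; is_max_heap([1, 2, 3]) is False
```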

At this point, we know that the "root" of the heap (i.e. the node in the first position) must be the highest value in the considered range. Therefore we can place it at the "back" of the list.

We must then decrease the considered range by 1 (so as not to include the just-sorted item) and rebuild the heap. We continue to do this until the considered range is 1, at which point, the collection will be sorted.

You should also check out this "audibilization" of this algorithm from YouTube:

```
public class HeapSort
{
    static void Sort(int[] array)
    {
        var length = array.Length;
        for (int i = length / 2 - 1; i >= 0; i--)
        {
            Heapify(array, length, i);
        }
        for (int i = length - 1; i >= 0; i--)
        {
            int temp = array[0];
            array[0] = array[i];
            array[i] = temp;
            Heapify(array, i, 0);
        }
    }

    //Rebuilds the heap
    static void Heapify(int[] array, int length, int i)
    {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;
        if (left < length && array[left] > array[largest])
        {
            largest = left;
        }
        if (right < length && array[right] > array[largest])
        {
            largest = right;
        }
        if (largest != i)
        {
            int swap = array[i];
            array[i] = array[largest];
            array[largest] = swap;
            Heapify(array, length, largest);
        }
    }

    public static void Main()
    {
        int[] arr = { 74, 19, 24, 5, 8, 79, 42, 15, 20, 53, 11 };
        Console.WriteLine("Heap Sort");
        CommonFunctions.PrintInitial(arr);
        Sort(arr);
        CommonFunctions.PrintFinal(arr);
        Console.ReadKey();
    }
}
```

Heap Sort is a consistent algorithm: the best, average, and worst-case time complexities are all *O(n log n)*. In fact, Heap Sort is one of the most commonly used sorting algorithms in the real world. Its efficient time complexity and low space requirements make it a good candidate for implementation in our apps.

The goodness that is Heap Sort knows no bounds; it is an efficient algorithm that is in use today in production systems. If you're looking to sort large lists, Heap Sort just might be your best option. Plus, she makes a mean rhubarb pie.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Quick Sort is the consummate overachiever.

Graduated first in his class, runs a successful business, owns a boat, married to a brilliant accountant, with two beautiful children. He does not slow down; he does everything fast and thoroughly. Keeping up with his family is quite the endeavor. When he picks something to get done, it gets DONE.

Sometimes, though, he wonders. He wonders if maybe there's more to life than climbing the org chart, buying soulless things, or making money. He wonders if maybe, just maybe, he could slow down and enjoy the scenery for once. And then he decides that's for wimps and goes back to working all hours of the night.

- **Created By:** C. A. R. "Tony" Hoare (1959)
- **Type:** Exchange
- **Stable?** No
- **Best-Case:** *O(n log n)*
- **Average-Case:** *O(n log n)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(n)*
- **Sample Project:** Over on GitHub

- SELECT a pivot point. (Our implementation selects the last number in the collection).
- REORDER the collection such that all values less than the pivot are before the pivot, and all values greater than the pivot are after it.
- RECURSIVE CONTINUE: Return to Step 1, selecting a new pivot in each of the divided higher and lower sections, until the array is sorted.

The critical thing Quick Sort does is select a pivot point, but different varieties do this differently. In the above animation (and the below implementation), the first pivot point is merely the last item in the collection, and it continues to pick the last item in each "partition" caused by the sort as a pivot point.
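This last-element pivot scheme (commonly called Lomuto partitioning) is compact enough to sketch in a few lines of Python; the C# version below does the same thing:

```python
def quicksort(a, low=0, high=None):
    """In-place quicksort, pivoting on the last element of each partition."""
    if high is None:
        high = len(a) - 1
    if low < high:
        pivot = a[high]          # 1. select the last item as the pivot
        i = low - 1
        for j in range(low, high):
            if a[j] <= pivot:    # 2. move smaller values before the pivot
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[high] = a[high], a[i + 1]
        quicksort(a, low, i)     # 3. recurse on each side of the pivot
        quicksort(a, i + 2, high)
    return a
```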

Reader James Curran also contributed another visualization of this sort:

Finally, check out this audibilization on YouTube:

```
class QuickSort
{
    static int Partition(int[] array, int low, int high)
    {
        //1. Select a pivot point.
        int pivot = array[high];
        int lowIndex = (low - 1);

        //2. Reorder the collection.
        for (int j = low; j < high; j++)
        {
            if (array[j] <= pivot)
            {
                lowIndex++;
                int temp = array[lowIndex];
                array[lowIndex] = array[j];
                array[j] = temp;
            }
        }
        int temp1 = array[lowIndex + 1];
        array[lowIndex + 1] = array[high];
        array[high] = temp1;
        return lowIndex + 1;
    }

    static void Sort(int[] array, int low, int high)
    {
        if (low < high)
        {
            int partitionIndex = Partition(array, low, high);

            //3. Recursively continue sorting the array
            Sort(array, low, partitionIndex - 1);
            Sort(array, partitionIndex + 1, high);
        }
    }

    public static void Main()
    {
        int[] array = { 72, 12, 6, 33, 81, 97, 37, 59, 52, 1, 20 };
        int length = array.Length;
        Console.WriteLine("QuickSort");
        CommonFunctions.PrintInitial(array);
        Sort(array, 0, length - 1);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

The entire reason Quick Sort has that name is because, for the vast majority of circumstances, it is demonstrably quicker than other relatively-simple implementations. That said, there is some debate about how much quicker it is than, say, Merge Sort, which clearly means that Quick Sort must get along fabulously with his father-in-law Selection Sort, but maybe not his mother-in-law.

It would be valid, I believe, to say that Quick Sort is the simplest sorting algorithm that is actually in use today for real production code. Some of the upcoming algorithms are much more complex, but faster.

Also, apropos of nothing, the inventor of Quick Sort wrote possibly my favorite adage about coding:

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C. A. R. Hoare

Quick Sort is exactly what it sounds like. It gets things sorted, as quickly as possible, by subdividing the contents of a collection around an arbitrarily-selected pivot point. It then recursively selects smaller and smaller "partitions" and orders them.

Sometimes he wishes he could slow down. Maybe he will get his chance. Someday.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Comb Sort is the poster boy for the all-American child.

Unlike his elder brother Shell Sort, Comb Sort is a star athlete. He started as the quarterback for his high school football team when he was a sophomore. But more than that, he has everything he thought he could ever want, at least until recently. He's the junior prom king, the one voted "best athlete" in the yearbook, everything. Life has been easy for him so far.

Yet, something is bothering him. Something that, until recently, he couldn't describe.

He's not the smartest guy, and he knows this. His grades are mostly Cs and Ds, and he understands that that doesn't bode well for his future. His parents are high-achievers, his brother is a straight-A student, and he's... what? A star athlete? That won't last forever. What happens when he can no longer get by on physical prowess?

So, he reads. He studies. He's putting more time into making good test scores and less time into sports. It's not working yet (his last test was a C+), but he's hopeful that it will, and soon. He can't get through life on athletic skills alone. He's sorting out his life, one piece at a time.

- **Created By:** Włodzimierz Dobosiewicz (1980)
- **Type:** Exchange
- **Stable?** No
- **Best-Case:** *O(n)*
- **Average-Case:** *O(n²)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- DETERMINE the appropriate gap size *h* by using a set shrink factor *k*.
- COMPARE each pair of elements that are *h* distance from each other.
- SWAP those elements if they are in the wrong order.
- COMPLETE a pass through the array with the selected *h*.
- SHRINK the gap size *h* using shrink factor *k*.
- IF *h* is not 1, GOTO step 2.
- CONTINUE until *h* is 1 AND a pass through the array results in no swaps.

You can see from this animation why this algorithm got the name "Comb Sort": the comparison distances look like the prongs of a comb. You can also see that this is a not-very-effective sort: it has to make many passes through all the elements in order to fully sort the list. In fact, we would most likely consider Comb Sort to be a "naive" sort, on the same level as Selection Sort.
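The gap sequence is the heart of the algorithm. Using the standard shrink factor of 1.3 (implemented as integer math, `gap * 10 / 13`, as in the code below), an 11-element collection produces gaps of 8, 6, 4, 3, 2, and 1. A quick Python sketch of that sequence:

```python
def comb_gaps(n):
    """Gap sequence comb sort uses for n elements, shrink factor ~1.3."""
    gaps = []
    gap = n
    while gap > 1:
        gap = max(1, (gap * 10) // 13)  # same integer math as the C# sample
        gaps.append(gap)
    return gaps

# comb_gaps(11) → [8, 6, 4, 3, 2, 1]
```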

Also check out the "audibilization" of this sort from YouTube:

Finally, reader James Curran kindly created a visualization for this sort; it looks like this:

```
class CombSort
{
    static int GetNextGap(int gap)
    {
        //The "shrink factor", empirically shown to be 1.3
        gap = (gap * 10) / 13;
        if (gap < 1)
        {
            return 1;
        }
        return gap;
    }

    static void Sort(int[] array)
    {
        int length = array.Length;
        int gap = length;

        //We initialize this as true to enter the while loop.
        bool swapped = true;

        while (gap != 1 || swapped == true)
        {
            gap = GetNextGap(gap);

            //Set swapped as false. Will go to true when two values are swapped.
            swapped = false;

            //Compare all elements with current gap
            for (int i = 0; i < length - gap; i++)
            {
                if (array[i] > array[i + gap])
                {
                    //Swap
                    int temp = array[i];
                    array[i] = array[i + gap];
                    array[i + gap] = temp;
                    swapped = true;
                }
            }
        }
    }

    public static void Main()
    {
        int[] array = { 10, 28, 1, 55, 6, 21, 36, 3, 45, 15, 0 };
        Console.WriteLine("Comb Sort");
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

The worst case of *O(n²)* makes intuitive sense, at least to me. Given that Comb Sort is based on Bubble Sort, we would expect poor time complexities for it. But the average case is interesting, because it introduces a form of notation I hadn't seen before: big-omega.

Remember that the average time complexity for this algorithm is *Ω(n² / 2^p)*. This is read aloud as "big-omega of n-squared over 2 to the power of p".

This is not a great time, and because it's the best possible case on average, we can conclude that Comb Sort is not an efficient sorting algorithm.

Comb Sort is a relatively-inefficient algorithm that aims to sort a collection by comparing two elements across *h* distance, swapping them if they are out of order, and then slowly shrinking that distance. He's hoping he can get his academic life in order before his athletic skills run out. Slowly, surely, he's making that dream come true, one test at a time.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Shell Sort is the unassuming, quiet type. The late-teens son of high-achieving Insertion Sort and Quick Sort, Shell Sort would much rather just curl up alone with a good sci-fi/fantasy book and sip hot cocoa.

He's a straight-A high school senior who really doesn't understand why he has to be at this stupid family reunion thing. In fact, he'd much rather be home right now, reading the latest N.K. Jemisin book.

Although, Auntie Heapy's cooking is pretty good. And the family football game isn't the worst thing in the universe. And as much as he's annoyed by younger brother Comb Sort, it's really nice to have him on the same team. So it isn't all bad. Just *mostly* bad.

- **Created By:** Donald Shell (1959)
- **Type:** Exchange/Insertion
- **Stable?** No
- **Best-Case:** *O(n log n)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- DETERMINE the size of the gap we want to use, called *h*.
- DIVIDE the main array into conceptual subarrays using elements that are *h* distance from each other (no concrete subarrays will actually be created).
- SORT those subarrays in place.
- REDUCE the gap size *h* so as to group items closer together for sorting.
- CONTINUE until a pass has been done with *h* = 1, when the list will be sorted.

This is one of the visualizations I found for Shell Sort, but it's not very clear (at least to me) what's actually happening here, so let's see if we can break it down.

The first thing you need to know is that the goal of Shell Sort is to reduce the number of moves that must be made when gap size *h* = 1. In that situation, Shell Sort becomes Insertion Sort, and Insertion Sort does very poorly if the numbers to be sorted are far away from where they should be.

Imagine we have the following data:

| a_{0} | a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10} | a_{11} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 43 | 18 | 91 | 6 | 29 | 62 | 30 | 81 | 79 | 52 | 7 | 51 |

Let's also say that our gap size *h* will be 5 for the first round of sorting. That means that Shell Sort will create the following theoretical subarrays:

{ a_{0}, a_{5}, a_{10} }

{ a_{1}, a_{6}, a_{11} }

{ a_{2}, a_{7} }

{ a_{3}, a_{8} }

{ a_{4}, a_{9} }

Each of those subarrays will be sorted independently of one another, resulting in a collection that looks like this:

| a_{0} | a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10} | a_{11} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 7 | 18 | 81 | 6 | 29 | 43 | 30 | 91 | 79 | 52 | 62 | 51 |

See how the 7 moved to the front of the array, much closer to where it should be when fully sorted?

Now let's shrink the gap size *h* to 3. Shell Sort will now use the following theoretical subarrays:

{ a_{0}, a_{3}, a_{6}, a_{9} }

{ a_{1}, a_{4}, a_{7}, a_{10} }

{ a_{2}, a_{5}, a_{8}, a_{11} }

Sorting those subarrays results in a collection that looks like this:

| a_{0} | a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10} | a_{11} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 6 | 18 | 43 | 7 | 29 | 51 | 30 | 62 | 79 | 52 | 91 | 81 |

Now the 6 is in front of the 7, and is in fact where it should be when the array is fully sorted. Remember that the goal here is not to fully sort the array, but to reduce the number of moves Insertion Sort must make in order to sort the array when *h* = 1. Speaking of which, a final Insertion Sort pass over this data results in, of course:

| a_{0} | a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10} | a_{11} |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 6 | 7 | 18 | 29 | 30 | 43 | 51 | 52 | 62 | 79 | 81 | 91 |

**UPDATE (1 Aug 2019)**: Reader James Curran created another visualization of Shell Sort, and here it is:

Finally, take a look at this "audibilization" of Shell Sort from YouTube:

```
class ShellSort
{
    //Sorts the array in place using Shell's original gap sequence
    //(length/2, length/4, ..., 1)
    static void Sort(int[] array)
    {
        int length = array.Length;
        for (int h = length / 2; h > 0; h /= 2)
        {
            //Perform a gapped insertion sort for this gap size
            for (int i = h; i < length; i++)
            {
                int temp = array[i];
                int j;
                //Shift earlier gap-sorted elements up until the correct
                //location for array[i] is found
                for (j = i; j >= h && array[j - h] > temp; j -= h)
                {
                    array[j] = array[j - h];
                }
                //Put temp (the original array[i]) in its correct location
                array[j] = temp;
            }
        }
    }

    public static void Main()
    {
        int[] array = { 53, 19, 71, 3, 66, 62, 20, 84 };
        Console.WriteLine("Shell Sort");
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

At first glance, it appears that Shell Sort actually performs *worse* than Insertion Sort, given that Insertion Sort's best case is *O(n)* and Shell Sort's best case is *O(n log n)*. But in the real world the only time Insertion Sort performs in *O(n)* time is if the list is already sorted.

As stated earlier, Shell Sort's purpose is to minimize the distance items have to move when they are being sorted by the final stage where *h* = 1. The problem when trying to determine Shell Sort's time complexity is that such complexity depends entirely on what the implementation chooses for the values of *h*.
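To make that concrete, here's a small sketch (mine, not the post's) of two well-known gap sequences; `ShellGaps` and `KnuthGaps` are hypothetical helper names:

```csharp
using System;
using System.Collections.Generic;

// Sketch: two common gap sequences. Which one an implementation picks
// changes Shell Sort's time complexity.
// Shell's original sequence: n/2, n/4, ..., 1 (worst case O(n^2)).
static List<int> ShellGaps(int n)
{
    var gaps = new List<int>();
    for (int h = n / 2; h > 0; h /= 2) gaps.Add(h);
    return gaps;
}

// Knuth's sequence: 1, 4, 13, 40, ... (h = 3h + 1), used largest-first.
// Worst case improves to O(n^(3/2)).
static List<int> KnuthGaps(int n)
{
    var gaps = new List<int>();
    for (int h = 1; h < n / 3; h = 3 * h + 1) gaps.Add(h);
    gaps.Reverse();
    return gaps;
}

Console.WriteLine(string.Join(", ", ShellGaps(12)));  // 6, 3, 1
Console.WriteLine(string.Join(", ", KnuthGaps(100))); // 13, 4, 1
```

The sample implementation in this post uses Shell's original sequence, which is the simplest but also the weakest of the commonly used ones.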

A final note: the implementation above uses Shell's original gap sequence, and so actually has a worst-case time complexity of *O(n²)*; it's written for simplicity and is not fit for production.

Shell Sort is mainly an improvement on Insertion Sort that makes long-distance moves less common when sorting the array. He'd much rather be home with a good book, but the family reunion isn't awful. Sometimes.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

The eldest child of the family, Insertion Sort has a good life.

Between working as an accountant at a high-powered law firm and her indelible engagement and eventual marriage to loving husband Quick Sort, Insertion Sort (Innie, as she is known to her friends) has a life dreamt of by many. The family relies on her smarts to keep their financial lives balanced, and by all accounts she does an amazing job of this. After all, who'd notice that, occasionally, a very small percentage of it gets, ahem, "redirected" to other causes. The parents don't even mind all that much.

Innie can't help but feel some jealousy toward her sister Heap Sort. She never got Heapy's culinary skills, and so feels that her parents are slightly disappointed in her, though she knows they would never show it. She is consistently the most well-off member of her family, and she makes sure they know all about it with flashy jewelry and designer clothes and expensive cars, hoping against vain hope that this makes her more "worthy" in her parents' eyes.

**Created By:** Unknown **Type:** Insertion **Stable?** Yes **Best-Case:** *O(n)* **Average-Case:** *O(n²)* **Worst-Case:** *O(n²)* **Space Complexity:** *O(1)* **Sample Project:** Over on GitHub

- FOR each value in the array
- STORE the current value in a variable.
- WHILE we are pointing to a valid value...
- ...IF the current value is less than the value we are pointing at
- ...THEN move the pointed-at value up one space, and store the current value at the pointed-at position.

You may notice a "stair step" effect in the above visualization; that is, values progressively "stepping" down to the left of the frame. This stair stepping represents the comparison happening during steps 4 and 5 of the algorithm above; Insertion Sort moves each value up one step if it is greater than the current value.

Also, as with the previous entries in this series, check out the very cool "visualization and audibilization" of Insertion Sort from YouTube:

```
class InsertionSort
{
static void Main(string[] args)
{
int[] array = new int[15] { 55, 97, 76, 60, 4, 18, 37, 34, 88, 51, 43, 49, 19, 12, 63 };
Console.WriteLine("Insertion Sort");
CommonFunctions.PrintInitial(array);
//1. For each value in the array...
for (int i = 1; i < array.Length; ++i)
{
//2. Store the current value in a variable.
int currentValue = array[i];
int pointer = i - 1;
//3. While we are pointing to a valid value...
//4. If the current value is less than the value we are pointing at...
while (pointer >= 0 && array[pointer] > currentValue)
{
//5. Then move the pointed-at value up one space, and store the
// current value at the pointed-at position.
array[pointer + 1] = array[pointer];
pointer = pointer - 1;
}
array[pointer + 1] = currentValue;
}
CommonFunctions.PrintFinal(array);
Console.ReadLine();
}
}
```

As with most sorting algorithms, the best-case scenario for Insertion Sort is that the array is already sorted; in this case, the algorithm has *O(n)* time, because it still needs to iterate over the entire array even though it won't do any swaps.

But in the average and worst cases, Insertion Sort doesn't fare too well with its *O(n²)* complexity. There are a good many sorting algorithms (several of which we will see later in this series) which are substantially more efficient in these situations.

That said, Insertion Sort has a couple of things going for it, apart from the cushy job and respect from the family. For starters, it's relatively easy to implement and very efficient on data sets that are already mostly sorted, so in some scenarios it actually performs better than algorithms such as Quick Sort. Further, it requires very little additional space to perform the algorithm. In short, it's a very useful learning algorithm, and has a few practical applications as well.
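One way to see that mostly-sorted advantage is to instrument the loop from the implementation above with a move counter; `CountMoves` is a hypothetical variant I'm sketching here, not part of the sample project:

```csharp
using System;

// Sketch: count how many element moves Insertion Sort makes.
// On already-sorted input the while loop never runs (0 moves);
// on reversed input every pair is out of order (n * (n - 1) / 2 moves).
static int CountMoves(int[] array)
{
    int moves = 0;
    for (int i = 1; i < array.Length; i++)
    {
        int currentValue = array[i];
        int pointer = i - 1;
        while (pointer >= 0 && array[pointer] > currentValue)
        {
            //Each shift of a value up one space counts as a move
            array[pointer + 1] = array[pointer];
            pointer--;
            moves++;
        }
        array[pointer + 1] = currentValue;
    }
    return moves;
}

int[] sorted = { 1, 2, 3, 4, 5, 6, 7, 8 };
int[] reversed = { 8, 7, 6, 5, 4, 3, 2, 1 };
Console.WriteLine(CountMoves(sorted));   // 0
Console.WriteLine(CountMoves(reversed)); // 28 = 8 * 7 / 2
```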

Insertion Sort is an algorithm that works best on mostly-sorted data, and is a bit inefficient for large unsorted collections. That said, it's very useful for small data, being easy to implement and analyze. Just watch it around your money.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

^{[1]: Image from Flickr, used under license.}

Merge Sort is the matriarch of the sorting algorithm family. A source of wisdom for all her children (and a not-insignificant amount of "I told you so"), she is constantly looking out for her beloved sons and daughters.

The land on which the familial home sits has been in her family for generations, and will be for generations yet to come if she has anything to say about it.

She loves cooking and baking, and to be fair, makes a delicious turnover. But, and I can't stress this enough, *don't* try her lasagna. You will regret it. That said, if she insists, make sure you have some Tums on hand and a bathroom nearby.

**Created By:** John von Neumann (1945) **Type:** Divide-and-Conquer **Stable?** Yes **Best-Case:** *O(n log n)* **Average-Case:** *O(n log n)* **Worst-Case:** *O(n log n)* **Space Complexity:** *O(n)* **Sample Project:** Over on GitHub

- BREAK the array into smaller subarrays.
- RECURSIVELY BREAK the smaller subarrays into yet smaller arrays.
- WHEN the arrays are at their smallest possible size (1), stop breaking them.
- RECURSIVELY MERGE the broken subarrays back together in the correct order.

A simple visualization would look like this:

During each step of the process in red, the arrays are broken in half. Once each element of the original array is alone in its own array, the smaller arrays are merged back together in the correct order in the remaining steps.

This animated visualization from Wikipedia was also helpful:

Finally, don't forget to check out the "visualization and audibilization" of Merge Sort from YouTube:

```
class MergeSort
{
static void Main(string[] args)
{
List<int> unsorted = new List<int>() { 42, 13, 86, 9, 10, 55, 71 };
List<int> sorted;
Console.WriteLine("Merge Sort");
CommonFunctions.PrintInitial(unsorted);
sorted = Sort(unsorted);
CommonFunctions.PrintFinal(sorted);
Console.ReadLine();
}
//Uses recursion to break the collection into progressively smaller collections.
//Eventually, each collection will have just one element.
private static List<int> Sort(List<int> unsorted)
{
if (unsorted.Count <= 1)
{
return unsorted;
}
List<int> left = new List<int>();
List<int> right = new List<int>();
int median = unsorted.Count / 2;
for (int i = 0; i < median; i++) //Dividing the unsorted list
{
left.Add(unsorted[i]);
}
for (int i = median; i < unsorted.Count; i++)
{
right.Add(unsorted[i]);
}
left = Sort(left);
right = Sort(right);
return Merge(left, right);
}
//Method takes two sorted "sublists" (left and right) of the original list and merges them into a new collection
private static List<int> Merge(List<int> left, List<int> right)
{
List<int> result = new List<int>(); //The new collection
while (left.Any() || right.Any())
{
if (left.Any() && right.Any())
{
if (left.First() <= right.First()) //Comparing the first element of each sublist to see which is smaller
{
result.Add(left.First());
left.Remove(left.First());
}
else
{
result.Add(right.First());
right.Remove(right.First());
}
}
else if (left.Any())
{
result.Add(left.First());
left.Remove(left.First());
}
else if (right.Any())
{
result.Add(right.First());
right.Remove(right.First());
}
}
return result;
}
}
```

In the best, average, and worst cases, Merge Sort has a time complexity of *O(n log n)*. This is quite a bit better than Selection Sort, but our dear Merge Sort is savvy enough to not tell him that too often. After all, he knows it already.

What's not so great about Merge Sort is that it has a space complexity of *O(n)*, meaning that it needs to use more memory than that taken up by the list to perform its algorithm. On small lists this is immaterial, but on large lists (e.g. those encountered in the real world) it could potentially cause things like Out of Memory errors. Her book club, having sat through many long and winding tales always ending with "if only they'd listened to me," feels the same way.
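As a side note, the `Merge` method above removes items from the front of each `List<int>`, and `Remove` is itself an *O(n)* operation. Here's a sketch (mine, not the sample project's) of an index-based merge: same result, same *O(n)* buffer, but linear-time merging:

```csharp
using System;
using System.Collections.Generic;

// Sketch: an index-based merge. Behaves like the Merge method above, but
// walks the two sorted lists with cursors instead of removing from the
// front. Still needs the O(n) result buffer that gives Merge Sort its
// space complexity.
static List<int> Merge(List<int> left, List<int> right)
{
    var result = new List<int>(left.Count + right.Count);
    int l = 0, r = 0;
    while (l < left.Count && r < right.Count)
    {
        //Take the smaller head; <= keeps equal items in their original
        //order, which is what makes Merge Sort stable
        if (left[l] <= right[r]) result.Add(left[l++]);
        else result.Add(right[r++]);
    }
    //One list is exhausted; copy the remainder of the other
    while (l < left.Count) result.Add(left[l++]);
    while (r < right.Count) result.Add(right[r++]);
    return result;
}

var merged = Merge(new List<int> { 9, 13, 42 }, new List<int> { 10, 55, 71, 86 });
Console.WriteLine(string.Join(", ", merged)); // 9, 10, 13, 42, 55, 71, 86
```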

Merge Sort, the devoted mother and grandmother of the family, is quite an algorithm. Wise, caring, and a terrible cook, she is a formidable woman with a lot to give and will gladly make you something to eat even if you just ate. You will still want those Tums, though.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Meet the patriarch of the sorting algorithm family! Selection Sort is the beloved father and grandfather to the rest of the gang.

He's old, and likes things done a certain way, but at the heart of it all he's a family man with a penchant for handing out Dum Dums and lecturing young'uns about days gone by. He and his darling wife Merge Sort will be doing the hosting duties for the family reunion at their large familial home, complete with an acre of land and a dog who quite enjoys chasing the kids around.

Boy, does he love that dog. Even better, the dog loves him. It won't leave his side for a moment, except to chase the occasional squirrel, rabbit, or toddler. He's getting up there in years, though, and Selection Sort dreads the day he has to explain to his young grandchildren where their beloved pet Rhythm has gone.

Selection Sort is one of the easiest sorting algorithms to understand, but also one of the worst-performing. That makes it a perfect candidate to kick off the reunion!

**Created By:** Unknown **Type:** Selection **Stable?** Depends on implementation **Best-Case:** *O(n²)* **Average-Case:** *O(n²)* **Worst-Case:** *O(n²)* **Space Complexity:** *O(1)* **Sample Project:** Over on GitHub

- FOR EACH item in the collection...
- ASSUME the first item in the considered range is the smallest value
- ITERATE through the rest of the collection
- SWAP the smallest found value with the current value. This value is now sorted.
- ADJUST the considered range to exclude the now-sorted value.
- GO TO 2, CONTINUE until all elements have been sorted in this manner.

The red box is the found lowest value. Yellow values are a sublist of the total list and have already been sorted; therefore the "considered range" mentioned in the algorithm above is all values that are not yellow. The blue value is the currently scanned value.

You may notice from watching this animation that this algorithm has to iterate through the collection many times. As discussed below, this is a major reason why this algorithm is not very efficient.

Also check out this really cool "visualization and audibilization" of Selection sort from YouTube:

```
class SelectionSort
{
static void Main(string[] args)
{
int[] array = new int[15] { 45, 72, 58, 92, 26, 4, 13, 90, 81, 15, 33, 36, 47, 8, 54 };
int count = array.Length;
Console.WriteLine("Selection Sort");
//First, output starting state of the array
CommonFunctions.PrintInitial(array);
int temp, smallest;
//The algorithm builds the sorted list from the left.
//1. For each item in the array...
for (int i = 0; i < count - 1; i++)
{
//2. ...assume the first item is the smallest value
smallest = i;
//3. Cycle through the rest of the array
for (int j = i + 1; j < count; j++)
{
//4. If any of the remaining values are smaller, find the smallest of these
if (array[j] < array[smallest])
{
smallest = j;
}
}
//5. Swap the found-smallest value with the current value
temp = array[smallest];
array[smallest] = array[i];
array[i] = temp;
}
//Output final state of the array
CommonFunctions.PrintFinal(array);
//We call ReadLine here to avoid noisy output in the command line.
Console.ReadLine();
}
}
```

Selection Sort is a poor-performing sort. It needs to iterate over the entire unsorted portion of the collection before it can move values, regardless of whether it has already located the lowest known value. Because of this, as shown above, it always has a time complexity of *O(n²)*.

This is one of the worst time complexities that is regularly seen in sorting algorithms and essentially means that the time to complete the sort is a function of the square of the number of items to be sorted.
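We can verify that directly by instrumenting the implementation above with a counter; `CountComparisons` is a hypothetical name for this sketch, not part of the sample project:

```csharp
using System;

// Sketch: count the comparisons Selection Sort makes. Regardless of input
// order, the inner loop always scans the whole unsorted range, so the
// total is (n - 1) + (n - 2) + ... + 1 = n * (n - 1) / 2.
static int CountComparisons(int[] array)
{
    int comparisons = 0;
    for (int i = 0; i < array.Length - 1; i++)
    {
        int smallest = i;
        for (int j = i + 1; j < array.Length; j++)
        {
            comparisons++;
            if (array[j] < array[smallest]) smallest = j;
        }
        //Swap the found-smallest value into position i
        (array[i], array[smallest]) = (array[smallest], array[i]);
    }
    return comparisons;
}

int[] alreadySorted = { 1, 2, 3, 4, 5 };
int[] shuffled = { 4, 1, 5, 2, 3 };
Console.WriteLine(CountComparisons(alreadySorted)); // 10 = 5 * 4 / 2
Console.WriteLine(CountComparisons(shuffled));      // 10, the same count
```

The count is identical for sorted and unsorted input, which is exactly why Selection Sort has no better best case.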

What's nice about Selection Sort (kind of) is that the space complexity is *O(1)*, meaning the algorithm requires no additional space beyond what is used to store the collection. In other words, it can be implemented without consuming additional memory, which is a point in its favor, though not a large one.

Selection Sort is one of the simplest sorts to understand and to write, but performs terribly. It's mostly of interest as a thought exercise, but don't tell him that. He might set the dog on you.

Don't forget to check out the sample project on GitHub to see more members of the sorting family reunion!

Happy Coding!

Welcome, dear readers, to a brand new mega-series! This one is quite possibly the nerdiest thing I have ever blogged about. It's also the reason there haven't been as many posts as normal lately; I've been working on it for nearly a month.

Without further ado, welcome to **The Sorting Algorithm Family Reunion**!

(What? I did say "nerdiest", didn't I?)

In this series of posts we will be investigating a set of sorting algorithms, from simple Selection Sort to complex Bitonic Merge Sort to stupid Bogosort. For each algorithm, we will include the following sections:

- **The Rundown**: Statistics about the algorithm, including its time and space complexities, creator name and date, sort type, etc.
- **Algorithm**: A numbered set of steps that outlines how the algorithm accomplishes sorting.
- **Visualization**: A manner in which we can "see" how the algorithm performs. May be an animation, a set of diagrams, a walkthrough, etc.
- **C# Implementation**: The C# implementation of the algorithm, including comments. All implementations are hosted on GitHub.
- **Time and Space Complexity**: A discussion about what the time and space complexities (using Big-O Notation) of the algorithm mean for its implementations.
- **Summary**

Yes, dear readers, in order to write about a family reunion I had to create a family tree. Here's the one we'll be using:

The family tree above DOES NOT relate sorting methods by their algorithmic characteristics; it relates them by my ability to create stories and backgrounds. For example, you should not assume that because Pigeonhole Sort is the child of Merge and Selection Sorts that those algorithms are conceptually related in any way.

The reasoning for this is that it was impossible, within the framework of the story, to create a cohesive family tree that kept all of the algorithmic relationships alive. I chose to use some, shall we say, "artistic license" to keep the story intact. That said, there are many cases where the algorithms are indeed related, and where that happens I will make an effort to point it out in the posts themselves.

I'll be releasing a new post in this series every Monday and Thursday until I run out of posts (excepting Labor Day in the U.S.). Here's the current release schedule (all dates in 2019):

Jul 18: Selection Sort

Jul 22: Merge Sort

Jul 25: Insertion Sort

Jul 29: Shell Sort

Aug 1: Comb Sort

Aug 5: Quick Sort

Aug 8: Heap Sort

Aug 12: Cocktail Shaker Sort

Aug 15: Bogo Sort

Aug 19: Bubble Sort

Aug 22: Odd-Even Sort

Aug 26: Counting Sort

Aug 29: Bitonic Merge Sort

Sep 5: Pigeonhole Sort

Sep 9: Gnome Sort

Sep 12: Radix Sort

Sep 16: Bead Sort

Sep 19: Bucket Sort

This series has been quite a lot of work to write and plan, but now that it's ready for publication, I'm pretty proud of it. Don't forget to check out the sample project over on GitHub, and as always, thanks for reading!

Happy Coding!

Ten seconds and an ethernet cable. That's what it took K.C. and me to lose our memories, and for my assessment of my own skills to fall from "technical God" to "what the hell am I doing anyway?"

Perhaps I should back up a bit. You'll soon see why this statement is funny.

My wife K.C. and I bought a brand new iMac as our primary home machine. I know, I know, Microsoft developer buys Apple machine, bring on the jokes. But this iMac is really, *really* nice, and it works with all our other stuff.

In our family, I'm the "technical" one. I work on computers all day, so I am expected (not without reason) to know all about them. K.C. is not a Luddite by any means, don't get me wrong, but she's got an artist's eye and I've got an engineer's. We complement each other. Plus, I looked forward to a problem I could solve.

It fell to me, as the "technical" one, to transfer 330 gigabytes of photos, videos, audio, and documents from our ancient Dell machine to the new iMac. Ten years of media, from before we got married, through the birth of our kids, to vacations, to the last audio we have of K.C.'s grandmother, to buying our house. All of it needed to get off the Dell and onto the iMac.

I figured this would be easy, which should have been the first clue. Apple publishes a tool called Migration Assistant which purports to be able to do this kind of transfer for you. I installed this on the Dell, but I could not for the life of me get it to run. We really didn't want to spend the $100 paying for the migration from the Apple Store (ironic considering the $3k we spent to buy the computer), or to buy a portable hard drive that we were going to use once, so I was attempting to get this done as cheaply as possible.

In a flash of determination, I ran to the local electronics store, bought an ethernet cable, hooked the two machines together, remoted into the Dell from the iMac, and started doing a transfer. K.C. keeps all of our media files in a well-defined folder structure, so all I had to do was move a few root-level folders and we would be done very quickly.

As I watched the progress bar fill slowly to 100%, my confidence increased. I had done it! Now we could junk the old Dell and get on with our lives.

The next day K.C. decided to clean out the old Dell. We wanted to give it to her father to use as a backup machine. She asked if we were OK to delete the media off of it, and I told her it was fine. Ten seconds later, I realized that we should probably keep them, just in case, but by then it was too late. No big deal, I thought. We've already got them all on the iMac.

Don't we?

Six days passed. K.C. noticed something late one night: several pictures that should have been in a specific folder were just missing. The folder structure was correct and none of the folders were missing, but there weren't as many media items as there should have been. I cannot rightly describe to you, dear readers, the sinking feeling, the pit, that opened up in my gut.

Not everything made it.

As near as we can tell, about 70%-80% of our memories survived the transfer. There is no rhyme or reason as to what made it and what didn't; the only commonality we can find is that more recent stuff was much more likely to have been lost.

And yes, we should have had another backup. And yes, we shouldn't have deleted the source. And yes, we did a lot of things wrong. Hindsight is depressingly amazing, and thoroughly unhelpful.

We spent two nights scouring the photos and videos, searching for what was lost. All of our media from our kids' births were there, but the recent trip to California we'd taken was missing more than half of the photos K.C. had taken with our DSLR camera.

20% may not seem like much. But how do you value 20% of your memories? Especially when you're not even sure what was missing?

We took the Dell to Geek Squad in the vain hope that they could restore it. They could not. A trip to a local data-recovery store yielded the same result. Now we were stuck, stuck with the knowledge that we were missing parts of our lives. But which parts? And how the bloody hell did this happen in the first place?!

I took this really hard. I'm the "technical" one, it's my job to be good at stuff like this. If I am not that guy, what am I?

As with any true disaster, there's never just one decision that leads to the terrible result. The thoughts you have in the moment when you discover the error are not a fair representation of your actual skill, nor do they have any true impact on who you are. But in the moment and for days afterward, I was convinced it was my fault, my doing. I was convinced that I caused our collective loss. To be fair, I kinda did. But I didn't need to be so hard on myself about it.

That was the trick. I had to let go of the nagging responsibility that, as the "technical" one, it was my job to fix the problem I had caused. I tried to fix it, tried my damnedest, but there was no fix available. It simply couldn't be undone.

In the absence of a solution, I needed something else: I needed to forgive myself.

I know how corny this all sounds, believe me, but forgiveness is difficult for me. I get caught up in finding the answer to a given problem. I am the guy with the answers, or at least that's the mask I put on. I want to be the problem-solver. I want to be needed, appreciated. Sometimes the problem just cannot be solved. When this happens, I am forced to admit something which is difficult to utter: I am not always the guy who can fix everything.

We did, finally, get a bit lucky.

One of the brilliant things about living in such an interconnected world is that sometimes we have backups without even realizing it. My wife is a big user of Facebook, and it turned out that much of what we lost was stored there, in albums and dated posts. We are in the process of retrieving this data now, and let me tell you, it's an amazing feeling. It's not going to replace everything, but it helps.

I'm still the technical one. Problems like this are still mine to solve. My skills aren't diminished, despite the screwup and despite my own impostor syndrome. It sucks losing 20% of our memories, but we got much of it back, and the truly important things are still here.

I wish I knew how this happened. I never will. And I'm gonna have to be okay with that, with what we lost. We both are.

Moral of the story: back up your systems! Now! 20% of your memories is priceless. Don't lose it.

I'm an adult, most of the time. That means I have to do adult things, like getting groceries or gas or maintaining our yard. I am also married with three kids. And *that* means a LOT of laundry.

Laundry by itself isn't too bad. It gives my easily-distracted brain a simple, repetitive task that often allows me to think more clearly than is otherwise possible. But the worst part of laundry, the absolute worst, is the socks.

Matching socks from five individuals, each with different styles, colors, lengths, etc. is a huge, time-consuming task. Frequently my beloved wife and I dedicate a whole day to just doing laundry (and other house-related chores) and we have to save the socks until the very end, because otherwise we end up with massive piles of unmatched foot wrappers. But even that takes enormous amounts of time and effort; we pick a sock, and scan the pile for its mate. This is terribly inefficient.

Whatever else I am, I'm also a programmer. And I am not content to let a good problem, that of how to efficiently sort socks, go to waste.

In this post, we're going to see ~~five~~ SIX different potential solutions to sorting a huge pile of socks. These six solutions exist in the corresponding sample C# project on GitHub. We'll walk through why some are better than others, and hopefully come up with a way to efficiently sort socks.

Here's some of the code I wrote to set up this problem. We need a class Sock, a Sock Generator, and a method which checks to see if the socks are well and truly sorted.

```
public class Sock
{
public SockColor Color { get; set; }
public SockOwner Owner { get; set; }
public SockLength Length { get; set; }
}
public enum SockColor
{
Red,
Blue,
Green,
Black,
White
}
public enum SockLength
{
NoShow, //Disappears beneath the shoe line
Ankle, //Shows over shoe line, covers ankle
Crew //Comes partway up the calf, but no higher than halfway
}
public enum SockOwner
{
AdultMan,
AdultWoman,
Child
}
public static class SockGenerator
{
public static List<Sock> GenerateSocks(int max)
{
int count = 0;
List<Sock> socks = new List<Sock>(max);
//Generate a certain number of pairs of socks
//If the "max" is an odd number, we still generate pairs,
//(i.e. all socks will have a match).
while(count < max)
{
var pair = GeneratePair();
socks.Add(pair[0]);
socks.Add(pair[1]);
count = count + 2;
}
return socks.OrderBy(x => Guid.NewGuid()).ToList();
}
//A single shared Random instance; constructing a new Random on every call
//can reuse the same time-based seed and produce identical "random" socks.
private static readonly Random random = new Random();
private static List<Sock> GeneratePair()
{
List<Sock> pair = new List<Sock>();
var sockOwner = random.Next(0, 3);
var sockColor = random.Next(0, 5);
var sockLength = random.Next(0, 3);
var sock = new Sock()
{
Owner = (SockOwner)sockOwner,
Color = (SockColor)sockColor,
Length = (SockLength)sockLength
};
pair.Add(sock);
pair.Add(sock);
return pair;
}
//This method checks to see if all socks are in order with their matching pair.
public static bool AreMatched(List<Sock> socks)
{
bool areMatched = true;
for(int i = 0; i < socks.Count; i = i + 2)
{
var firstSock = socks[i];
var secondSock = socks[i + 1];
areMatched = areMatched
&& firstSock.Color == secondSock.Color
&& firstSock.Owner == secondSock.Owner
&& firstSock.Length == secondSock.Length;
if (areMatched == false) break;
}
return areMatched;
}
}
```

We can now generate a collection of socks, and confirm if that collection is sorted.
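A side note on the shuffle in `GenerateSocks`: `OrderBy(x => Guid.NewGuid())` works, but it sorts the whole list in *O(n log n)*. The standard alternative is a Fisher-Yates shuffle, which is linear and shuffles in place; here's a sketch (mine, not the sample project's):

```csharp
using System;
using System.Collections.Generic;

// Sketch: Fisher-Yates shuffle. Walks the list from the end, swapping each
// position with a randomly chosen earlier (or same) position. O(n) time,
// in place, and every permutation is equally likely.
static void Shuffle<T>(IList<T> items, Random random)
{
    for (int i = items.Count - 1; i > 0; i--)
    {
        //Pick a random index in [0, i] and swap it into position i
        int j = random.Next(i + 1);
        (items[i], items[j]) = (items[j], items[i]);
    }
}

var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };
Shuffle(numbers, new Random());
Console.WriteLine(string.Join(", ", numbers));
```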

The primary goal of this exercise is to find a sorting and pairing solution that performs reasonably well for an arbitrarily large number of socks. Ideally we will also find something that works in the real world, but given that we programmers so rarely inhabit such a place, this would be a nice bonus rather than a must-have.

To check this goal, we will measure the time each solution takes to sort 500k (five hundred thousand) socks using the very nice Stopwatch class in .NET.

Let's start our solutions with the simplest one: a naive sort.

The idea goes like this: pick any sock, then iterate over the entire collection of socks until you find a match. Place the paired socks in a different pile.

- GIVEN a collection of unmatched socks.
- TAKE the first sock.
- LOOK at each sock in order until you find one that matches on all properties.
- MOVE the matching pair to a new collection.

```
public class Sorter
{
//1. GIVEN a collection of unmatched socks
public List<Sock> NaiveSort(List<Sock> unmatchedSocks)
{
Stopwatch watch = new Stopwatch();
watch.Start();
List<Sock> matchedSocks = new List<Sock>();
while(unmatchedSocks.Any())
{
//2. TAKE the first sock
Sock currentSock = unmatchedSocks[0];
unmatchedSocks.Remove(currentSock);
//3. LOOK at each sock in order until you find a match on all properties
foreach(var tempSock in unmatchedSocks)
{
if(tempSock.Color == currentSock.Color
&& tempSock.Length == currentSock.Length
&& tempSock.Owner == currentSock.Owner)
{
//4. MOVE the matching socks to a new collection
unmatchedSocks.Remove(tempSock);
matchedSocks.Add(currentSock);
matchedSocks.Add(tempSock);
break;
}
}
}
watch.Stop();
Console.WriteLine("Completed Naive Sort in " + watch.ElapsedMilliseconds.ToString() + " milliseconds.");
return matchedSocks;
}
}
```

Let's run our app, with 500k socks, and see if we get a reasonable time to sort.

It took approximately 68 seconds to sort 500k socks. This is now our baseline. Our subsequent solutions should all perform better than this.

So the naive sort isn't great, but it gets the job done. Now we just want to improve on it.

One simple way of improving on it would be to say that we only care about matching by one property; e.g. we only want to match on owner and not care about the length or color. In theory, this allows us to stop searching the pile for a "match" earlier than in the basic Naive Sort. Hence, our Naive Partial Sort solution.

- GIVEN a collection of unmatched socks.
- TAKE the first sock.
- LOOK at each sock in order until you find one that matches on ONLY ONE property.
- MOVE the "matching" pair to a new collection.

```
public List<Sock> NaivePartialSort(List<Sock> unmatchedSocks)
{
Stopwatch watch = new Stopwatch();
watch.Start();
List<Sock> matchedSocks = new List<Sock>();
while (unmatchedSocks.Any())
{
//Get the sock at the top of the pile
Sock currentSock = unmatchedSocks[0];
unmatchedSocks.Remove(currentSock);
//Iterate through the unmatched socks to find the next one that matches, ignoring color and length.
foreach (var tempSock in unmatchedSocks)
{
if (tempSock.Owner == currentSock.Owner)
{
unmatchedSocks.Remove(tempSock);
matchedSocks.Add(currentSock);
matchedSocks.Add(tempSock);
break;
}
}
}
watch.Stop();
Console.WriteLine("Completed Naive Partial Sort in " + watch.ElapsedMilliseconds.ToString() + " milliseconds.");
return matchedSocks;
}
```

With a new set of 500k socks, here's how our Naive Partial Sort performs:

This test took 71.8 seconds to run, which is *longer* than the basic Naive Sort.

~~I've run this test several times, and each time the Naive Partial Sort takes a few seconds longer than the basic Naive Sort. It's very possible this is simply due to small sample size errors, but if any of my dear readers sees a reason why the Naive Partial Sort would consistently perform worse than the Naive Sort, I'd be happy to hear it.~~ UPDATE: Comments below explain why the Naive Partial Sort performs worse than the base Naive Sort.

Anyway, clearly we aren't going to get any better performance out of a naive-type sorting algorithm. We need to think about the problem differently.

With the naive, pick-a-sock-then-find-its-mate solutions not working terribly well, it's time we find a new solution. Maybe we could introduce another layer of abstraction and "pre-sort" the socks before matching them?

Here's the idea: if we sort the socks from the big combined pile into smaller, attribute-based piles (e.g. by color, by length, or by owner) then we could use the Naive Sort to sort the smaller piles, and it should take a lot less time.

- GIVEN a pile of unmatched socks
- SORT each sock into another pile based on a single attribute (color OR length, etc.)
- FOR EACH of these piles...
- TAKE the first sock in the pile.
- LOOK at each sock in the sub pile in order until you find a match.
- MOVE the matching pair to a new collection.

You will notice that steps 4, 5, and 6 are just the steps from the Naive Sort again.

```
//1. GIVEN a pile of unmatched socks.
public List<Sock> OneLevelPileSort(List<Sock> socks)
{
Stopwatch watch = new Stopwatch();
watch.Start();
//2. SORT each sock into another pile based on a single attribute.
var colorSortedSocks = SplitByColor(socks); //Implementation below
//3. FOR EACH of these piles...
//4. TAKE the first sock in the pile.
//5. LOOK at each sock in the sub pile in order until you find a match.
//6. MOVE the matching pair to a new collection.
List<Sock> matchedSocks = new List<Sock>();
foreach (var pile in colorSortedSocks)
{
matchedSocks.AddRange(NaiveSort(pile));
}
watch.Stop();
Console.WriteLine("Completed One-Level Pile Sort in " + watch.ElapsedMilliseconds.ToString() + " milliseconds.");
return matchedSocks;
}
public List<List<Sock>> SplitByColor(List<Sock> socks)
{
List<List<Sock>> colorSortedSocks = new List<List<Sock>>();
//Initialize the lists
colorSortedSocks.Add(new List<Sock>());//0 Red
colorSortedSocks.Add(new List<Sock>());//1 Blue
colorSortedSocks.Add(new List<Sock>());//2 Green
colorSortedSocks.Add(new List<Sock>());//3 Black
colorSortedSocks.Add(new List<Sock>());//4 White
foreach (var sock in socks)
{
switch (sock.Color)
{
case SockColor.Red:
colorSortedSocks[0].Add(sock);
break;
case SockColor.Blue:
colorSortedSocks[1].Add(sock);
break;
case SockColor.Green:
colorSortedSocks[2].Add(sock);
break;
case SockColor.Black:
colorSortedSocks[3].Add(sock);
break;
case SockColor.White:
colorSortedSocks[4].Add(sock);
break;
}
}
return colorSortedSocks;
}
```

Here's the output we get for running this sort against 500k socks:

The one-level pile sort sorted 500k socks in 12.5 seconds, more than five times as quickly as the naive sort implementation! We're on to something now!

We need to try something... bigger.

This is the logical conclusion to the one-level pile sort: if sorting into piles based on one attribute was quicker than the naive solution, then most likely sorting into progressively smaller piles will be quicker still.

The idea here is to get the socks sorted into small-enough piles so that these piles do not themselves need to be sorted further.

- GIVEN a pile of unmatched socks.
- SORT each sock into a pile based on one attribute.
- FOR EACH attribute, sort the new piles into smaller piles, where all of the socks in that pile match on all sorted attributes.
- MOVE each pair from the smallest-possible piles into a new collection (there will be no sorting here).

Note that our implementation is not generic for N properties; it is specific to our known properties of Owner, Length, and Color.

```
public List<Sock> ThreeLevelPileSort(List<Sock> socks)
{
Stopwatch watch = new Stopwatch();
watch.Start();
var colorSortedSocks = SplitByColor(socks);
List<Sock> matchedSocks = new List<Sock>();
foreach (var colorSortedPile in colorSortedSocks)
{
var lengthSortedPiles = SplitByLength(colorSortedPile);
foreach(var lengthSortedPile in lengthSortedPiles)
{
var ownerSortedPiles = SplitByOwner(lengthSortedPile);
foreach(var ownerSortedPile in ownerSortedPiles)
{
foreach(var sock in ownerSortedPile)
{
matchedSocks.Add(sock);
}
}
}
}
watch.Stop();
Console.WriteLine("Completed Three-Level Pile Sort in " + watch.ElapsedMilliseconds.ToString() + " milliseconds.");
return matchedSocks;
}
//SplitByColor is identical to the implementation shown in the One-Level Pile Sort above.
public List<List<Sock>> SplitByLength(List<Sock> socks)
{
List<List<Sock>> lengthSortedSocks = new List<List<Sock>>();
//Initialize the lists
lengthSortedSocks.Add(new List<Sock>());//0 NoShow
lengthSortedSocks.Add(new List<Sock>());//1 Ankle
lengthSortedSocks.Add(new List<Sock>());//2 Crew
foreach (var sock in socks)
{
switch (sock.Length)
{
case SockLength.NoShow:
lengthSortedSocks[0].Add(sock);
break;
case SockLength.Ankle:
lengthSortedSocks[1].Add(sock);
break;
case SockLength.Crew:
lengthSortedSocks[2].Add(sock);
break;
}
}
return lengthSortedSocks;
}
public List<List<Sock>> SplitByOwner(List<Sock> socks)
{
List<List<Sock>> ownerSortedSocks = new List<List<Sock>>();
//Initialize the lists
ownerSortedSocks.Add(new List<Sock>());//0 AdultMan
ownerSortedSocks.Add(new List<Sock>());//1 AdultWoman
ownerSortedSocks.Add(new List<Sock>());//2 Child
foreach (var sock in socks)
{
switch (sock.Owner)
{
case SockOwner.AdultMan:
ownerSortedSocks[0].Add(sock);
break;
case SockOwner.AdultWoman:
ownerSortedSocks[1].Add(sock);
break;
case SockOwner.Child:
ownerSortedSocks[2].Add(sock);
break;
}
}
return ownerSortedSocks;
}
```
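As an aside: the three `SplitBy` methods above are near-duplicates, and the method is hard-coded to exactly three attributes. As a sketch of what a generic N-level version might look like (this is my own generalization, not part of the original sample project), we can recursively split on a list of key selectors:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PileSorter
{
    //Recursively split the pile on each key selector in turn. Once no selectors
    //remain, every item left in the pile matches on all of the split attributes,
    //so the pile can be emitted as-is with no further searching.
    public static List<T> NLevelPileSort<T>(List<T> items, IReadOnlyList<Func<T, object>> selectors, int depth = 0)
    {
        if (depth == selectors.Count)
        {
            return items; //smallest-possible pile: no matching scan needed
        }
        var matched = new List<T>();
        foreach (var pile in items.GroupBy(selectors[depth]))
        {
            matched.AddRange(NLevelPileSort(pile.ToList(), selectors, depth + 1));
        }
        return matched;
    }
}
```

For the socks above, the call might look like `PileSorter.NLevelPileSort(socks, new Func<Sock, object>[] { s => s.Color, s => s.Length, s => s.Owner })`.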

Here's the time it takes to sort 500k unmatched socks with the three-level pile sort:

The three-level pile sort took *less than a second* (245 milliseconds) to sort all the socks. That's a hell of an improvement over the Naive Sorts.

It means I've been doing laundry wrong my entire life.

More practically speaking, it means that we need to think carefully about how we iterate over a collection when attempting to sort it. The problem with the naive sorts isn't correctness; they work fine. The problem is efficiency. By subdividing the main pile of socks into progressively smaller piles, we dramatically reduce the amount of searching needed to match any individual sock.
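We can put rough numbers on that intuition. A naive scan of an n-sock pile does on the order of n²/2 comparisons; pre-splitting into k equal piles costs about n bucketing operations plus k naive scans of n/k socks each; and the three-level sort is just three bucketing passes with no matching scan at all. A back-of-envelope sketch (very rough, ignoring constant factors and the cost of `List.Remove`):

```csharp
using System;

double n = 500_000;                      //socks in the pile
double naive = n * n / 2;                //each sock scans the remaining pile
double oneLevel = n + (n * n / 2) / 5;   //bucket by color (5 piles), then naive-sort each pile
double threeLevel = 3 * n;               //three bucketing passes, no scanning afterward

Console.WriteLine($"one-level speedup:   ~{naive / oneLevel:f1}x");
Console.WriteLine($"three-level speedup: ~{naive / threeLevel:f0}x");
```

The predicted ~5x speedup for the one-level sort lines up reasonably well with the measured 68s → 12.5s; the three-level prediction is wildly optimistic in absolute terms, but it does explain why that version finishes in well under a second.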

Sorting socks in the real world is a time-consuming process; but we can use computer science ideas to make it quite a bit more efficient. The n-level pile sort (throwing the socks into progressively smaller piles where they match on all attributes) is clearly the most efficient way to sort socks that we've discussed.

Or, at least, the *second* most efficient.

```
public List<Sock> SpecialSort(List<Sock> socks)
{
Stopwatch watch = new Stopwatch();
watch.Start();
watch.Stop();
Console.WriteLine("Completed Special Sort in " + watch.ElapsedMilliseconds.ToString() + " milliseconds.");
Console.WriteLine("Nobody cares about matching socks anyway.");
return socks;
}
```

Don't forget to check out the sample project over on GitHub!

Happy Coding!

But wait, there's more! Courtesy of Thomas Levesque, here's another sorting solution, which he termed Dictionary Sort.

```
public List<Sock> DictionarySort(List<Sock> unmatchedSocks)
{
Stopwatch watch = new Stopwatch();
watch.Start();
List<Sock> matchedSocks = new List<Sock>();
var waitingForMatch = new Dictionary<(SockOwner, SockColor, SockLength), Sock>();
while (unmatchedSocks.Any())
{
int index = unmatchedSocks.Count - 1;
var sock = unmatchedSocks[index];
unmatchedSocks.RemoveAt(index);
var key = (sock.Owner, sock.Color, sock.Length);
if (waitingForMatch.TryGetValue(key, out var matchingSock))
{
matchedSocks.Add(sock);
matchedSocks.Add(matchingSock);
waitingForMatch.Remove(key);
}
else
{
waitingForMatch.Add(key, sock);
}
}
watch.Stop();
Console.WriteLine("Completed Dictionary Sort in " + watch.ElapsedMilliseconds.ToString() + " milliseconds.");
return matchedSocks;
}
```

The sample run of 500k socks produced this result:

As you can see, this is comparable to the Three-Level Pile Sort from earlier: each sock is handled exactly once, with constant-time dictionary lookups, so the whole thing runs in roughly linear time. Make sure to thank Thomas for contributing this solution!
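Under the hood this works for the same reason the pile sorts do: any two socks that share the composite key (Owner, Color, Length) are mutual matches. The same idea can even be written as a single LINQ pass over a composite key; a sketch of my own (not Thomas's code, and it buffers whole groups in memory rather than pairing socks as it goes):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CompositeKeySorter
{
    //Group items on a composite key; every item within a group matches
    //every other, so flattening the groups yields the matched collection.
    public static List<T> GroupSort<T, TKey>(List<T> items, Func<T, TKey> key)
    {
        return items.GroupBy(key).SelectMany(group => group).ToList();
    }
}
```

For the socks in this post, the call would be `CompositeKeySorter.GroupSort(socks, s => (s.Owner, s.Color, s.Length))`.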

^{[1] Image from Flickr, used under license.}

I'm on a bit of a Blazor kick lately. Blazor sounds amazing, and to be honest when I hear the words "you can write C# and run it in the browser", all I can think is, "where do I sign up?"

In my previous post I walked through creating a new Blazor application in .NET Core and using it to manage a list of To-Do items.

It occurred to me that something very basic was missing from that previous post: the ability to sort the list of To-Do items by the columns in the display table. Given that Blazor is compiled to WebAssembly, and that I suck at Javascript, my go-to solutions like jQuery Datatables probably wouldn't work in a Blazor app (though to be fair, I haven't tried them).

So, why not do it myself, and learn a bit more about Blazor in the process? Come along with me as I take our sample Blazor project and refactor it to make the To-Do items table sortable.

We'll start with that To-Do list page markup from the earlier post. For posterity, here's that page:

```
@page "/todo"
@inject HttpClient Http
<h1>To-Do List</h1>
@if (items == null)
{
<p><em>Loading...</em></p>
}
else if (!items.Any())
{
<p><em>No ToDo items exist. Please add some.</em></p>
}
else
{
<table class="table">
<thead>
<tr>
<th>Remove</th>
<th>Date</th>
<th>Description</th>
<th>Is Complete</th>
</tr>
</thead>
<tbody>
@foreach (var item in items)
{
<tr>
<td><button onclick="@(() => RemoveTodo(item.ID))"><i class="oi oi-trash"></i></button></td>
<td>@item.DateCreated</td>
<td>@item.Description</td>
<td><input type=checkbox onchange="@(() => ToggleToDo(item.ID))" /> @item.IsComplete</td>
</tr>
}
</tbody>
</table>
}
@if (items != null)
{
<input placeholder="A new ToDo item" bind="@newItem" />
<button onclick="@AddTodo">Create</button>
}
@functions{
IList<ToDoItem> items = new List<ToDoItem>();
private string newItem;
protected override async Task OnInitAsync()
{
items = await Http.GetJsonAsync<List<ToDoItem>>("sample-data/todoitems.json");
}
private void AddTodo()
{
if (!string.IsNullOrWhiteSpace(newItem))
{
items.Add(new ToDoItem()
{
DateCreated = DateTime.Now,
Description = newItem,
ID = Guid.NewGuid()
});
newItem = string.Empty; //We need to reset this string, otherwise the text box will still contain the value typed in.
}
}
private void ToggleToDo(Guid id)
{
//First find the item
var todo = items.First(x => x.ID == id);
todo.IsComplete = !todo.IsComplete;
}
private void RemoveTodo(Guid id)
{
items.Remove(items.First(x => x.ID == id));
}
class ToDoItem
{
public Guid ID { get; set; }
public string Description { get; set; }
public bool IsComplete { get; set; }
public DateTime DateCreated { get; set; }
}
}
```

You can also see it in the sample project on GitHub. (Please note that the sample project was coded against Preview 3 of .NET Core 3, and may not work with other previews. When Blazor goes full RTM, I will update the sample project to work against that version of .NET Core.)

From this baseline, we can start creating the HTML and C# that will allow us to sort this table.

When we're done with this improvement, we want the following things:

- The list of tasks can be sorted by the Date and Description columns, in both ascending and descending order.
- We want to show an icon indicating the current sort direction.

Let's get going!

Here's what our ToDo list item table looks like at the moment:

```
<table class="table">
<thead>
<tr>
<th>Remove</th>
<th>Date</th>
<th>Description</th>
<th>Is Complete</th>
</tr>
</thead>
<tbody>
@foreach (var item in items)
{
<tr>
<td><button onclick="@(() => RemoveTodo(item.ID))"><i class="oi oi-trash"></i></button></td>
<td>@item.DateCreated</td>
<td>@item.Description</td>
<td><input type=checkbox onchange="@(() => ToggleToDo(item.ID))" /> @item.IsComplete</td>
</tr>
}
</tbody>
</table>
```

The first thing we need to do is make the Date and Description header text look clickable, and the simplest way to do that is to make them look like links. Here's a CSS class which does just that:

```
.sort-link{
cursor:pointer;
color:blue;
text-decoration:underline;
}
```

We can then apply that class to the Date and Description headers, like so:

```
<table class="table">
<thead>
<tr>
<th>Remove</th>
<th>
<span class="sort-link">Date</span>
</th>
<th>
<span class="sort-link">Description</span>
</th>
<th>Is Complete</th>
</tr>
</thead>
<tbody>
...
</tbody>
</table>
```

Now, I know there is probably a better way to do this, but the fact is that I suck at CSS and this was the best I could come up with. If you've got a better way, let me know in the comments!

Now our issue is this: when the header sort links are clicked, what should happen? To solve that, we need a new method added to the `@functions` block:

```
@functions{
IList<ToDoItem> items = new List<ToDoItem>();
//We need a field to tell us which direction the table is currently sorted by
private bool IsSortedAscending;
//We also need a field to tell us which column the table is sorted by.
private string CurrentSortColumn;
...
private void SortTable(string columnName)
{
//Sorting against a column that is not currently sorted against.
if(columnName != CurrentSortColumn)
{
//We need to force order by ascending on the new column
//This line uses reflection and will probably perform inefficiently in a production environment.
items = items.OrderBy(x => x.GetType().GetProperty(columnName).GetValue(x, null)).ToList();
CurrentSortColumn = columnName;
IsSortedAscending = true;
}
else //Sorting against same column but in different direction
{
if(IsSortedAscending)
{
items = items.OrderByDescending(x => x.GetType().GetProperty(columnName).GetValue(x, null)).ToList();
}
else
{
items = items.OrderBy(x => x.GetType().GetProperty(columnName).GetValue(x, null)).ToList();
}
//Toggle this boolean
IsSortedAscending = !IsSortedAscending;
}
}
}
```

The method `SortTable()` should be run every time one of the header sort links is clicked. Therefore we need a delegate which runs that method, to which we must pass the name of the property we are sorting by (which is not necessarily the display name of the table column). That looks like this:

```
<table class="table">
<thead>
<tr>
<th>Remove</th>
<th>
<span class="sort-link" onclick="@(() => SortTable("DateCreated"))">Date</span>
</th>
<th>
<span class="sort-link" onclick="@(() => SortTable("Description"))">Description</span>
</th>
<th>Is Complete</th>
</tr>
</thead>
<tbody>
...
</tbody>
</table>
```

When you run the app, the table now looks something like this...

...and the columns get sorted! We've done it!

Except, which column is currently being sorted by? There's no obvious indicator of such a thing. Let's build one.

What I want now is a little arrow icon to be displayed next to the column header which the table is currently being sorted by. This example will use FontAwesome's icons for the arrows, so if your project doesn't already have FontAwesome you will need to include it.

The first issue we have is: which column needs the icon? We've already solved that by using a `CurrentSortColumn` field on our page. And because we have an `IsSortedAscending` boolean field, we know which direction the sort is currently going. All we need do now is combine those fields with some `onclick` delegates and a little CSS, which gets us this:

```
<table class="table">
<thead>
<tr>
<th>Remove</th>
<th>
<span class="sort-link" onclick="@(() => SortTable("DateCreated"))">Date</span>
<span class="fa @(GetSortStyle("DateCreated"))"></span>
</th>
<th>
<span class="sort-link" onclick="@(() => SortTable("Description"))">Description</span>
<span class="fa @(GetSortStyle("Description"))"></span>
</th>
<th>Is Complete</th>
</tr>
</thead>
<tbody>
...
</tbody>
</table>
```

```
@functions{
IList<ToDoItem> items = new List<ToDoItem>();
private bool IsSortedAscending;
private string CurrentSortColumn;
...
private string GetSortStyle(string columnName)
{
if(CurrentSortColumn != columnName)
{
return string.Empty;
}
if(IsSortedAscending)
{
return "fa-sort-up";
}
else
{
return "fa-sort-down";
}
}
private void SortTable(string columnName)
{
if(columnName != CurrentSortColumn)
{
//We need to force order by ascending on the new column
items = items.OrderBy(x => x.GetType().GetProperty(columnName).GetValue(x, null)).ToList();
CurrentSortColumn = columnName;
IsSortedAscending = true;
}
else //Sorting against same column but in different direction
{
if(IsSortedAscending)
{
items = items.OrderByDescending(x => x.GetType().GetProperty(columnName).GetValue(x, null)).ToList();
}
else
{
items = items.OrderBy(x => x.GetType().GetProperty(columnName).GetValue(x, null)).ToList();
}
IsSortedAscending = !IsSortedAscending;
}
}
}
```

Tada! We now have icons that appear in our columns, like so:

Now it's obvious which column is being sorted by!

At the time of writing, Blazor does not support partial views. The moment it does, I will be able to make this solution much easier to maintain, but for now this will have to do (and it's pretty good for something that was experimental a few weeks ago).

Also, I will freely admit that I am a terrible front-end programmer and there's probably a much better way to do this with Blazor. That said, I fully believe solving little problems like this one is a FANTASTIC way to learn about a framework or system in a more comprehensive fashion. We can always come back later to make it better.
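On that note, one concrete improvement I'm fairly confident about: the reflection call in `SortTable()` can be replaced with a dictionary of plain delegates, so each column name maps straight to a selector with no `GetProperty()` lookup per row. A sketch of my own (the toggle logic mirrors `SortTable()` above; the class and its contents are my invention, not part of Blazor):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class SortState<T>
{
    private readonly Dictionary<string, Func<T, object>> selectors;
    public string CurrentSortColumn { get; private set; }
    public bool IsSortedAscending { get; private set; }

    public SortState(Dictionary<string, Func<T, object>> selectors)
    {
        this.selectors = selectors;
    }

    //Returns the list sorted by the named column, forcing ascending order on a
    //newly-clicked column and toggling direction on repeat clicks.
    public List<T> Sort(List<T> items, string columnName)
    {
        var selector = selectors[columnName]; //no reflection: plain delegate lookup
        if (columnName != CurrentSortColumn)
        {
            CurrentSortColumn = columnName;
            IsSortedAscending = true;
            return items.OrderBy(selector).ToList();
        }
        IsSortedAscending = !IsSortedAscending;
        return IsSortedAscending
            ? items.OrderBy(selector).ToList()
            : items.OrderByDescending(selector).ToList();
    }
}
```

In the page, you would construct this once, e.g. `new SortState<ToDoItem>(new Dictionary<string, Func<ToDoItem, object>> { ["DateCreated"] = x => x.DateCreated, ["Description"] = x => x.Description })`, and have the `onclick` delegates call its `Sort()` method.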

Our system is now able to sort the To-Do items by Date and Description. Each column can be sorted independently, in either ascending or descending order. Finally, we have little arrow icons to indicate the current sort direction.

The most important thing, though, is we got a little more experience using Blazor. I, for one, cannot friggin wait until Blazor is a full-fledged member of .NET Core. C# in the browser? Sign me up!

Don't forget to check out the sample project over on GitHub!

Happy Coding!
