Counting Sort is the star athlete of the family, even more so than Comb Sort.

A soccer striker with a knack for faking out the keeper, she scores goals like football linebackers down calories. She's been playing since she could walk; soccer is all she wants to do. She's even got a potential scholarship lined up to go play for the big state college next year, something she's eagerly looking forward to once she graduates.

That is, *if* she graduates. Her grades have not been, well, *good*. The high school has so far looked the other way because she helps the team win so much, but that won't fly in college. All she has to do is get a B or better average for her senior year, so she has enlisted a team of people to help her study: her dad Cocktail Shaker Sort, doting but gruff grand-uncle Bitonic Merge Sort, history-teacher uncle Pigeonhole Sort, and first cousin Comb Sort (who has a similar problem). They are hoping, trying, to ensure that Counting Sort can go to college and play.

But time is running out, and it's starting to look like they're gonna need a miracle for Counting Sort to live her dream.

### The Rundown

- **Created By:** Harold H. Seward (1954)
- **Type:** Distribution
- **Stable?** Yes
- **Worst-Case:** *O(n + k)*
- **Space Complexity:** *O(n + k)*
- **Sample Project:** Over on GitHub

### Algorithm

- COUNT how many times a particular value appears, and store those values in a "count" array.
- CREATE an "order" array which shows in what order the values should appear.
- SORT the main array using the order and count arrays.
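The three steps above can be sketched compactly. Here's a quick Python illustration (the sample project itself is C#; the function name and the bound `k` here are just for this sketch):

```python
def counting_sort(array, k):
    # COUNT: tally how many times each value 0..k-1 appears.
    count = [0] * k
    for value in array:
        count[value] += 1

    # CREATE: turn the tallies into running totals; count[v] is now
    # one past the last slot where value v belongs.
    for v in range(1, k):
        count[v] += count[v - 1]

    # SORT: walk the input in reverse (for stability) and drop each
    # value into its slot in the output array.
    output = [0] * len(array)
    for value in reversed(array):
        count[value] -= 1
        output[count[value]] = value
    return output

print(counting_sort([5, 2, 5, 8, 1, 2, 4, 5], 10))
# [1, 2, 2, 4, 5, 5, 5, 8]
```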

### Visualization

This sort works by counting how many instances of a particular number show up. The algorithm above is not very descriptive, so let's see if we can make a more meaningful example.

Say we have the following starter array:

{ 5, 2, 5, 8, 1, 2, 4, 5 }

Counting sort would create a new "count" array of size *k* (which we will define to be 10 for simplicity) and store the number of times a given value appears.

That array would look like this:

| a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10} |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 | 0 | 1 | 3 | 0 | 0 | 1 | 0 | 0 |

We read this as "the value 1 appears 1 time, the value 2 appears 2 times, the value 3 appears 0 times" and so on.
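The counting step on its own is just a tally. A quick Python sketch (purely illustrative; the sample project is C#), using index `v - 1` for value `v` to match the table above:

```python
array = [5, 2, 5, 8, 1, 2, 4, 5]
k = 10

# count[v - 1] holds how many times the value v appears (values 1..k).
count = [0] * k
for value in array:
    count[value - 1] += 1

print(count)  # [1, 2, 0, 1, 3, 0, 0, 1, 0, 0]
```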

We then need to create an "order" array, where each element is the running total of the counts so far. This tells us the position of the last occurrence of each value in the sorted output.

| a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10} |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 3 | 4 | 7 | 7 | 7 | 8 | 8 | 8 |

Using the count and order arrays, we can place the values in their correct order:

{ 1, 2, 2, 4, 5, 5, 5, 8 }
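Here's a Python sketch of those last two steps (again just for illustration; the real implementation below is C#): build the order array as a running total of the counts, then scan the original array in reverse and place each value at its order position. Since order positions are 1-based "last occurrence" slots, we subtract 1 to index the output array.

```python
count = [1, 2, 0, 1, 3, 0, 0, 1, 0, 0]   # counts for values 1..10

# Running totals: order[v - 1] is the last 1-based position of value v.
order = list(count)
for i in range(1, len(order)):
    order[i] += order[i - 1]
print(order)  # [1, 3, 3, 4, 7, 7, 7, 8, 8, 8]

# Place each value, scanning the input in reverse for stability.
array = [5, 2, 5, 8, 1, 2, 4, 5]
output = [0] * len(array)
for value in reversed(array):
    output[order[value - 1] - 1] = value
    order[value - 1] -= 1     # next equal value goes one slot earlier
print(output)  # [1, 2, 2, 4, 5, 5, 5, 8]
```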

### Implementation

In researching this algorithm I found it particularly helpful to step through each for loop of this implementation, to see what each array had as its values.

```csharp
using System;

class CountingSort
{
    static void Sort(int[] array)
    {
        int length = array.Length;

        // Create a new "output" array.
        int[] output = new int[length];

        // Create a new "count" array which stores the count of each unique
        // number. Its size (100) limits this implementation to values 0-99.
        int[] count = new int[100];
        for (int i = 0; i < 100; ++i)
        {
            count[i] = 0;
        }

        // Count how many times each value appears in the input.
        for (int i = 0; i < length; ++i)
        {
            ++count[array[i]];
        }

        // Change count[i] so that count[i] now contains the actual position
        // of this value in the output array.
        for (int i = 1; i <= 99; ++i)
        {
            count[i] += count[i - 1];
        }

        // Build the output array.
        // To make this sorting algorithm stable, we operate in reverse order.
        for (int i = length - 1; i >= 0; i--)
        {
            output[count[array[i]] - 1] = array[i];
            --count[array[i]];
        }

        // Copy the output array back to the original array.
        for (int i = 0; i < length; ++i)
        {
            array[i] = output[i];
        }
    }

    public static void Main()
    {
        int[] array = { 64, 11, 83, 8, 13, 45, 92, 98, 55, 17, 41, 81, 11, 64, 14, 41, 11, 92 };
        Console.WriteLine("Counting Sort");
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

### Time and Space Complexity

Counting Sort can be fairly efficient: its worst-case time complexity is *O(n + k)*, where *k* is the range of potential values. For small values of *k*, this is very efficient. The problem happens when *k* is large; in those scenarios, this algorithm becomes markedly less efficient than other distribution algorithms (e.g. Radix Sort).

Because the algorithm needs to create a "count" array that is the same size as the range of possible values *k*, the space complexity is also *O(n + k)*. Again, this is not a problem with small *k*, but with large *k* you could start running into memory issues.

### Summary

Counting Sort is an efficient sorting algorithm for small ranges of potential values *k*. However, once the range becomes large, Counting Sort quickly becomes overshadowed by other, more efficient sorts. But, with a little help from her grand-uncle and her cousin, she might, just might, still make it to college. She hopes.

The military-veteran grand-uncle, Bitonic Merge Sort, is next, though if you peel back his layers you might find he's a bit of a softy.

Don't forget to check out the sample project over on GitHub!

Happy Coding!