I do a lot of application design and architecture, and a rewrite project I'm heading up needed an architecture designed to handle a set of complex requirements. What I came up with is not new; it has been demoed and used many times before. But after a coworker told me he'd never heard of it, it occurred to me that I hadn't written a post about it yet, so here we are.

In this post, we'll be discussing the Repository-Service Pattern, a name I applied to a software architecture pattern that has been in use for quite some time. We'll take a look at how this pattern might be implemented in a real app, and discuss some advantages and one big disadvantage the pattern has.

Let's get started!

The Repository-Service pattern breaks up the business layer of the app into two distinct layers.

- The lower layer is the **Repositories**. These classes handle getting data into and out of our data store, with the important caveat that each Repository only works against a single Model class. So, if your models are Dogs, Cats, and Rats, you would have a Repository for each; the DogRepository would not call anything in the CatRepository, and so on.
- The upper layer is the **Services**. These classes can query multiple Repository classes and combine their data to form new, more complex business objects. Further, they introduce a layer of abstraction between the web application and the Repositories so that they can change more independently.

Let's pretend we will model a day's sales and profits at a local movie theatre.

Movie theatres make money from two major sources: ticket sales and food sales. We want to build an app that can display the tickets and food items sold, and also generate some simple statistics about how much profit we made that day.

By the end of this post, we will have a sample application which can do the following:

- Display all food items sold.
- Display all tickets sold.
- Display the average profit per ticket and average profit per food item on every page of the app.

*NOTE: This project is built in ASP.NET Core 3.0 using MVC architecture. All code samples in this post have been simplified. For the full code, check out the sample project on GitHub*.

First, let's understand what kind of models we want to work with. Here's the sample model objects for a food item and a ticket:

```
public class FoodItem
{
    public int ID { get; set; }
    public string Name { get; set; }
    public decimal SalePrice { get; set; }
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
    public decimal Profit
    {
        get
        {
            return (SalePrice * Quantity) - (UnitPrice * Quantity);
        }
    }
}
```

```
public class Ticket
{
    public int ID { get; set; }
    public string MovieName { get; set; }
    public decimal SalePrice { get; set; }
    public decimal StudioCutPercentage { get; set; }
    public int Quantity { get; set; }
    public decimal Profit
    {
        get
        {
            return (Quantity * SalePrice) - (StudioCutPercentage * (Quantity * SalePrice));
        }
    }
    public decimal ProfitPerItem
    {
        get
        {
            return SalePrice - (StudioCutPercentage * SalePrice);
        }
    }
}
```
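To sanity-check the Profit calculation, we can plug in the Milk Duds numbers that appear later in this post (43 boxes at a $4.99 sale price and $1.69 unit cost). This is just a quick sketch using the FoodItem class above:

```
var milkDuds = new FoodItem
{
    ID = 14,
    Name = "Milk Duds",
    SalePrice = 4.99M,
    UnitPrice = 1.69M,
    Quantity = 43
};

//(4.99 * 43) - (1.69 * 43) = 214.57 - 72.67
Console.WriteLine(milkDuds.Profit); //prints 141.90
```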

We will also need a simple model to represent the financial statistics:

```
public class FinancialStats
{
    public decimal AverageTicketProfit { get; set; }
    public decimal AverageFoodItemProfit { get; set; }
}
```

With these models in place, we can start building the lowest layer of this pattern: the Repository layer.

The Repositories are intended to deal with operations for a single business model. This commonly includes CRUD functionality, and might also include more complex methods (e.g. querying for a collection of objects, or running stats against said collection).

It follows that because we have two business models, we need two repositories. Here's the repository for FoodItem:

```
public interface IFoodRepository
{
    List<FoodItem> GetAllSold();
}

public class FoodRepository : IFoodRepository
{
    public List<FoodItem> GetAllSold()
    {
        //In a real project, this is where you would call your database/datastore for this info
        List<FoodItem> items = new List<FoodItem>()
        {
            new FoodItem()
            {
                ID = 14,
                Name = "Milk Duds",
                SalePrice = 4.99M,
                UnitPrice = 1.69M,
                Quantity = 43
            },
            new FoodItem()
            {
                ID = 3,
                Name = "Sour Gummy Worms",
                SalePrice = 4.89M,
                UnitPrice = 1.13M,
                Quantity = 319
            },
            new FoodItem()
            {
                ID = 18,
                Name = "Large Soda",
                SalePrice = 5.69M,
                UnitPrice = 0.47M,
                Quantity = 319
            },
            new FoodItem()
            {
                ID = 19,
                Name = "X-Large Soda",
                SalePrice = 6.19M,
                UnitPrice = 0.59M,
                Quantity = 252
            },
            new FoodItem()
            {
                ID = 1,
                Name = "Large Popcorn",
                SalePrice = 5.59M,
                UnitPrice = 1.12M,
                Quantity = 217
            }
        };
        return items;
    }
}
```

The TicketRepository looks similar:

```
public interface ITicketRepository
{
    List<Ticket> GetAllSold();
}

public class TicketRepository : ITicketRepository
{
    public List<Ticket> GetAllSold()
    {
        List<Ticket> tickets = new List<Ticket>()
        {
            new Ticket()
            {
                ID = 1953772,
                MovieName = "Joker",
                SalePrice = 8.99M,
                StudioCutPercentage = 0.75M,
                Quantity = 419
            },
            new Ticket()
            {
                ID = 2817721,
                MovieName = "Toy Story 4",
                SalePrice = 7.99M,
                StudioCutPercentage = 0.9M,
                Quantity = 112
            },
            new Ticket()
            {
                ID = 2177492,
                MovieName = "Hustlers",
                SalePrice = 8.49M,
                StudioCutPercentage = 0.67M,
                Quantity = 51
            },
            new Ticket()
            {
                ID = 2747119,
                MovieName = "Downton Abbey",
                SalePrice = 8.99M,
                StudioCutPercentage = 0.72M,
                Quantity = 214
            }
        };
        return tickets;
    }
}
```

There's nothing complex about these repositories; all they do is query the data store (in our case, the data store doesn't exist and we are mocking the results) and return objects. The real complexity starts in the next layer, where we will build the Service classes.

Recall that the Service classes are designed to do two things:

- Inherit from the Repository classes AND
- Implement their own functionality, which is only necessary when said functionality deals with more than one business object.

So far, the only functionality we have is getting the sold Tickets and Food Items for the day, which isn't very complicated. Consequently, the TicketService and FoodService classes are very simple:

```
public interface IFoodService : IFoodRepository { }
public class FoodService : FoodRepository, IFoodService { }
public interface ITicketService : ITicketRepository { }
public class TicketService : TicketRepository, ITicketService { }
```

But we need to keep in mind our Goal #3 from earlier, which is that we want to display the average item profit for both tickets and food items on every page of the app. In order to do this, we will need a method which queries both FoodItems and Tickets, and because the Repository-Service Pattern says Repositories cannot do this, we need to create a new Service for this functionality.

To accomplish this, we need a new Service class, one that queries both the Food and Ticket repositories and constructs a complex object from their data. Here's the new FinancialsService class and corresponding interface:

```
public interface IFinancialsService
{
    FinancialStats GetStats();
}

public class FinancialsService : IFinancialsService
{
    private readonly ITicketRepository _ticketRepo;
    private readonly IFoodRepository _foodRepo;

    public FinancialsService(ITicketRepository ticketRepo, IFoodRepository foodRepo)
    {
        _ticketRepo = ticketRepo;
        _foodRepo = foodRepo;
    }

    public FinancialStats GetStats()
    {
        FinancialStats stats = new FinancialStats();
        var foodSold = _foodRepo.GetAllSold();
        var ticketsSold = _ticketRepo.GetAllSold();

        //Calculate average stats
        stats.AverageTicketProfit = ticketsSold.Sum(x => x.Profit) / ticketsSold.Sum(x => x.Quantity);
        stats.AverageFoodItemProfit = foodSold.Sum(x => x.Profit) / foodSold.Sum(x => x.Quantity);
        return stats;
    }
}
```

That completes our Services layer! Next we will create the Controllers layer, which is to say, we will create a new ASP.NET Core Web App.
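One piece of plumbing worth calling out: for the constructor injection used throughout these classes to work, each interface needs to be registered with ASP.NET Core's dependency injection container. Here's a sketch of what `Startup.ConfigureServices` might look like; the scoped lifetimes are an assumption on my part, and the sample project may register things differently:

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    //Repositories
    services.AddScoped<IFoodRepository, FoodRepository>();
    services.AddScoped<ITicketRepository, TicketRepository>();

    //Services
    services.AddScoped<IFoodService, FoodService>();
    services.AddScoped<ITicketService, TicketService>();
    services.AddScoped<IFinancialsService, FinancialsService>();
}
```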

Here's a simple FoodController class (the corresponding view is on GitHub):

```
public class FoodController : Controller
{
    private readonly IFoodService _foodService;

    public FoodController(IFoodService foodService)
    {
        _foodService = foodService;
    }

    public IActionResult Index()
    {
        var itemsSold = _foodService.GetAllSold();
        return View(itemsSold);
    }
}
```

The TicketController looks very similar:

```
public class TicketController : Controller
{
    private readonly ITicketService _ticketService;

    public TicketController(ITicketService ticketService)
    {
        _ticketService = ticketService;
    }

    public IActionResult Index()
    {
        var tickets = _ticketService.GetAllSold();
        return View(tickets);
    }
}
```

These two controllers and their actions give us a way to see the Tickets and Food Items sold, which accomplishes two of our goals. Here's a screenshot of the Food Items page:

We still have our Goal #3 to do, though. In order to see these stats on every page, we're going to create a new View Component.

A View Component in ASP.NET Core MVC consists of multiple parts. The first and most important part is a class which inherits from the ViewComponent base class. Our class looks like this:

```
public class FinancialStatsViewComponent : ViewComponent
{
    private readonly IFinancialsService _financialService;

    public FinancialStatsViewComponent(IFinancialsService financialService)
    {
        _financialService = financialService;
    }

    public Task<IViewComponentResult> InvokeAsync()
    {
        var stats = _financialService.GetStats();
        return Task.FromResult<IViewComponentResult>(View(stats));
    }
}
```

We also need a corresponding view, which will need to be located at ~/Views/Shared/Components/FinancialStats/Default.cshtml:

```
@model RepositoryServicePatternDemo.Core.Models.FinancialStats

<ul>
    <li class="d-inline-block">
        <strong>@Html.DisplayNameFor(x => x.AverageFoodItemProfit)</strong>
        @Html.DisplayFor(x => x.AverageFoodItemProfit)
    </li>
    <li class="d-inline-block">
        <strong>@Html.DisplayNameFor(x => x.AverageTicketProfit)</strong>
        @Html.DisplayFor(x => x.AverageTicketProfit)
    </li>
</ul>
```

Finally, we need to invoke this component on the _Layout view:

```
...
<div class="container">
    <main role="main" class="pb-3">
        @await Component.InvokeAsync("FinancialStats")
        @RenderBody()
    </main>
</div>
...
```

All of this results in the stats being visible on every page in the app, such as the Ticket Sales page:

Ta-da! We have accomplished our goals! Time to celebrate with some movie candy!

Here's the architecture diagram for the project we ended up building:

The diagram points out the major benefit to using this pattern: clear and consistent separation between the layers of the architecture. This gives us the ability to change one layer with minimal impact to the others, and with clear direction as to what each layer contains, we can do so quickly and with a minimum of code.

The Repository-Service Pattern is a great pattern for situations in which you need to query for data from a complex data store or need some layers of separation between what happens for single models vs combinations of models. That said, it has one primary drawback that needs to be taken into account.

That drawback is simply this: it's a LOT of code, some of which might be totally unnecessary. The TicketService and FoodService classes from earlier do nothing except inherit from their corresponding Repositories. You could just as easily remove these classes and have the Repositories injected into the Controllers.
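For illustration, here's a sketch of what that slimmed-down alternative might look like (this is not code from the sample project, just the hypothetical "no Services" version of the FoodController):

```
//Without the pass-through Service, the controller depends on the Repository directly
public class FoodController : Controller
{
    private readonly IFoodRepository _foodRepo;

    public FoodController(IFoodRepository foodRepo)
    {
        _foodRepo = foodRepo;
    }

    public IActionResult Index()
    {
        var itemsSold = _foodRepo.GetAllSold();
        return View(itemsSold);
    }
}
```

The tradeoff is that if cross-model functionality shows up later, you end up reintroducing the Service layer anyway.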

I personally will argue that any real-world app will be sufficiently complicated so as to warrant the additional Service layer, but it's not a hill I'll die on.

The Repository-Service Pattern is a great way to architect a real-world, complex application. Each of the layers (Repository and Service) has a well-defined set of concerns and abilities, and by keeping the layers intact we can create an easily-modified, maintainable program architecture. There is one major drawback, but in my opinion it doesn't impact the pattern enough to stop using it.

That's all for this post! Don't forget to check out the sample project over on GitHub! Also, feel free to ask questions or submit improvements, either in the comments on this post or on the project repository. I would love to hear my dear readers' opinions on this pattern and how, or if, they are using it in their real-world apps.

Happy Coding!

After a great many years of resisting, it has finally happened. I have joined the dark side.

I had heard for years (mostly from my teammates) that dark mode helped reduce "eye strain" and that sounded pretty good to me. But partially due to stubbornness and partially to laziness I refused to budge, refused to try out this cool new thing that all my teammates were raving about. That lasted until earlier this year.

For four months now, I've done everything in dark mode if it is available. Visual Studio, Chrome and Firefox, various web sites, my email client, the Ghost editor pages; if it has a dark or night mode, I'm turning that puppy on and relishing the relief it gives my poor eyes. I'm looking forward to the new Dark Mode in iOS 13; my phone is the other device I'm staring at most often and I'm sure Dark Mode will help reduce my headaches even further.

Oh, did I not mention that? I didn't switch because it was the cool, trendy thing to do; no, I switched because I was getting headaches so frequently I thought I must be bursting out of my skull. At one point I would develop a splitting headache almost daily, whenever I had been staring at my screen for most of the day. I tried several things (both external and internal, ergonomic and medicinal) to get my headaches under control, and while many of those helped, one thing seemed to do the most good: switching all of my applications to dark mode.

I can literally *feel* the difference. Reading dark text on a bright background now feels like burning my retinas out. Sites which don't offer a dark mode are sites that I leave as quickly as possible. It shouldn't make this much difference, but it does, and I'm surprised at how much I can feel it.

In short, dark mode on my devices has helped me get my headaches under control for several months now. I'm a total believer in how dark mode helps reduce eye strain, as it certainly has for me. I'm living proof that dark mode provides tangible benefits. Problem is: tangible benefits for whom?

What I found surprising while doing some Googling about this topic is that there's no scientific consensus that dark mode is better for your eyes. Optics is a finicky science, and what's good for one person isn't necessarily good for another. The only real consensus I could find is that it is contrast, not color, that makes the biggest difference in readability and eye strain. Black-on-white and white-on-black provide very high contrast, which is why they are the defaults.

I am no longer surprised as to why my teammates were raving about dark mode years ago. It works for me, and it's definitely helped ease my headaches. I appear to be one of the people for whom dark mode is a benefit, and I am grateful for it. But as with any new fad, it isn't some universal panacea for all people who suffer from eye strain, as much as I (and the companies issuing their own dark modes) want you to believe it is.

But, and hear me out, it is worth trying. I didn't because I figured my headaches had some other root cause, like my posture, and so I missed out on years of less pain and distraction. Don't be like me. Try out dark mode, and if it works for you, keep using it. If it doesn't, go right back to light mode. There's no downside.

I am eagerly awaiting the time where "dark mode vs light mode" becomes the next silly flame war topic, right after "tabs vs spaces", "command line vs GUI", and my perennial favorite "real programmers use X".

(And as soon as I can, I'm going to have to figure out a dark mode for this blog, one that doesn't completely undo the design.)

Happy Coding!

Bucket Sort has a dream: a dream to one day captain a boat.

The only child of Bead Sort and Radix Sort, he longs to become a fisherman and own his own business. His mom celebrates this idea, and encourages him to pursue it; his dad, a normally-chill guy, wants him to succeed but thinks that the whole idea is pretty stupid. See, there's a problem with this dream...

Bucket Sort gets violently seasick any time a boat is in his immediate vicinity. He's never been out on the ocean, much less for days on end, and he barely knows which way a fishing pole goes. He couldn't tell you the difference between bait and tackle. In short, he has no idea what he's dreaming of.

But it is still his dream. One day, with luck, he can make it come true. For a few minutes, anyway.

- **Created By:** Unknown
- **Type:** Distribution
- **Stable?** Yes
- **Average-Case:** *O(n + (n²/k) + k)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(n × k)*
- **Sample Project:** Over on GitHub

- SET UP an array of initially empty "buckets".
- SCATTER each object from the unsorted array into their corresponding buckets.
- SORT each bucket individually.
- GATHER items from each bucket in their correct order.

Lucky for us, this is an easy algorithm to visualize. Imagine we have the following array:

{ 41, 7, 18, 3, 11, 23, 45, 15 }

We need to divide these items into buckets based on some sort of identifier. For simplicity, we will create the buckets based on the range of values, ten numbers in each bucket. That results in these buckets:

(Fear my l33t paint skillz!)

We can then "scatter" the numbers into each bucket based on their range. That looks like this:

We now sort the items in each bucket using a different sorting algorithm (our implementation uses Insertion Sort) to result in "sorted buckets":

We then "gather" the items from each bucket in order, which results in the sorted array:

{ 3, 7, 11, 15, 18, 23, 41, 45 }

```
class BucketSort
{
    public static List<int> Sort(params int[] x)
    {
        List<int> sortedArray = new List<int>();
        int numOfBuckets = 10;

        //Create buckets
        List<int>[] buckets = new List<int>[numOfBuckets];
        for (int i = 0; i < numOfBuckets; i++)
        {
            buckets[i] = new List<int>();
        }

        //Iterate through the passed array and add each integer to the appropriate bucket
        for (int i = 0; i < x.Length; i++)
        {
            int bucket = (x[i] / numOfBuckets);
            buckets[bucket].Add(x[i]);
        }

        //Sort each bucket and add it to the result List
        for (int i = 0; i < numOfBuckets; i++)
        {
            List<int> temp = InsertionSort(buckets[i]);
            sortedArray.AddRange(temp);
        }
        return sortedArray;
    }

    //Insertion Sort
    public static List<int> InsertionSort(List<int> input)
    {
        for (int i = 1; i < input.Count; i++)
        {
            //Store the current value in a variable
            int currentValue = input[i];
            int pointer = i - 1;

            //As long as we are pointing to a valid value in the array...
            while (pointer >= 0)
            {
                //If the current value is less than the value we are pointing at...
                if (currentValue < input[pointer])
                {
                    //...move the pointed-at value up one space, insert the current
                    //value at the pointed-at position, and keep walking left.
                    input[pointer + 1] = input[pointer];
                    input[pointer] = currentValue;
                    pointer--;
                }
                else break;
            }
        }
        return input;
    }

    static void Main(string[] args)
    {
        int[] array = new int[] { 43, 17, 87, 92, 31, 6, 96, 13, 66, 62, 4 };
        Console.WriteLine("Bucket Sort");
        CommonFunctions.PrintInitial(array);
        List<int> sorted = Sort(array);
        CommonFunctions.PrintFinal(sorted);
        Console.ReadLine();
    }
}
```

Because Bucket Sort uses another sorting algorithm as its "inner" sort algorithm, the time and space complexities for it are directly influenced by the complexities of that "inner" algorithm. This is why our implementation above uses Insertion Sort, which has been shown to work efficiently on small collections, even more efficiently than Quicksort.

What's more interesting (to me, at least) is the average case of this algorithm: *O(n + (n²/k) + k)*, where *n* is the number of items and *k* is the number of buckets. The more buckets we use, the fewer items land in each one, and the less work the "inner" sort has to do.

What's to stop us from implementing a bucket sort with exactly one item per bucket? Well, the space complexity, for one. This algorithm has a space complexity of *O(n * k)*, and therefore requires *much* more space as the number of buckets increases. That said, space is cheap, and the tradeoff might be worth it.

Bucket Sort is an interesting algorithm, in that it tries to make another algorithm's job easier by first sorting the elements into related collections called "buckets". It's not a terribly useful algorithm for general cases, but when the input is evenly distributed it can perform in efficient time. The boat thing, however, is another issue entirely. His dad hopes poor Bucket Sort can see the truth, someday.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Radix Sort is possibly the chillest person you will ever meet.

The third child of the family patriarchs Selection and Merge Sorts, he greets every problem, every family squabble, every thing that ever happens to him with "Cool, man". He once witnessed a car crash and later described the event as "two cars having a meeting of the minds" (the family hasn't let him live this down). A peculiar scent seems to pervade areas he inhabits. He's the most easy-going person at the reunion.

At least, that's what he wants you to think.

In reality, at this moment, he's terrified. Terrified that someone will find out that he's a fraud. That job as an auto mechanic the family still thinks he has? He was fired last week over something stupid. Only his beloved wife Bead Sort knows, and he is petrified that someone else will discover his secret. With no way to provide for his family, he feels like a failure. Beady tried to help, tried to make him feel like he contributes, but he seems to only shrink further away from her.

He's holding his breath, praying no one finds out, barely clinging to his friendly, chill facade.

- **Created By:** Unknown, possibly Herman Hollerith (1887)
- **Type:** Distribution
- **Stable?** Depends on implementation
- **Worst-Case:** *O(w × n)*
- **Space Complexity:** *O(w + n)*
- **Sample Project:** Over on GitHub

- SORT the numbers into buckets based on the value of the least-significant digit.
- SORT the numbers in each bucket again, based on the value of the next-least-significant digit.
- CONTINUE UNTIL there are no significant digits left.

Suppose we have the following array:

{ 5419, 13, 384, 201, 7, 2232, 3, 81, 19, 4029 }

Radix Sort's algorithm tells us we need to sort on the least-significant digit first. We should therefore sort based on these values:

{9, 3, 4, 1, 7, 2, 3, 1, 9, 9}

Doing that sort results in this intermediate array:

{ 201, 81, 2232, 13, 3, 384, 7, 5419, 19, 4029 }

We now need to sort on the second-least-significant digit. The problem is that some of these numbers (e.g. 3 and 7) don't have such a digit, so we treat the missing digit as a 0.

{ 201, 81, 2232, 13, 03, 384, 07, 5419, 19, 4029 }

We can now sort based on the second-least-significant digit, resulting in:

{ 201, 03, 07, 13, 19, 5419, 4029, 2232, 81, 384 }

We need to do this twice more; once for the third-least-significant digit:

{ 3, 7, 13, 19, 4029, 81, 201, 2232, 384, 5419 }

And, finally, we sort based on the most-significant digit, resulting in the final sorted array:

{ 3, 7, 13, 19, 81, 201, 384, 2232, 4029, 5419 }

What we have just done is technically called a least-significant digit radix sort.

The audibilization of this sort is pretty neat, check it out:

```
class RadixSort
{
    public static void Main()
    {
        int[] array = { 330, 8, 27, 4419, 55, 816, 419, 77, 622, 1234, 6, 9, 241, 1, 35, 7733, 4, 69 };
        int length = array.Length;
        Console.WriteLine("Radix Sort");
        CommonFunctions.PrintInitial(array);

        int max = array.Max();
        for (int exp = 1; max / exp > 0; exp *= 10)
        {
            CountingSort(array, length, exp);
        }

        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }

    //This is a modified version of Counting Sort from an earlier post.
    //We need to do Counting Sort against each group of integers,
    //where the groups are made based on the position of significant digits.
    //So, we use Counting Sort on the least-significant digit, then the next-least, etc.
    //After that, we concatenate the groups together to form the final array.
    public static void CountingSort(int[] array, int length, int exponent)
    {
        //Create a new "output" array
        int[] output = new int[length];
        int i;

        //Create a new "counting" array which stores the count of each unique digit
        int[] count = new int[10];
        for (i = 0; i < 10; i++)
        {
            count[i] = 0;
        }
        for (i = 0; i < length; i++)
        {
            count[(array[i] / exponent) % 10]++;
        }

        //Change count[i] so that count[i] now contains the actual position of
        //this digit in the output array.
        for (i = 1; i < 10; i++)
        {
            count[i] += count[i - 1];
        }

        //Build the output array. This is the tricky part:
        //iterating backwards keeps the sort stable.
        for (i = length - 1; i >= 0; i--)
        {
            output[count[(array[i] / exponent) % 10] - 1] = array[i];
            count[(array[i] / exponent) % 10]--;
        }

        //Copy the output array to the final array.
        for (i = 0; i < length; i++)
        {
            array[i] = output[i];
        }
    }
}
```

This algorithm has a unique worst-case time complexity: *O(w × n)*, where *w* is the number of digits in the largest value. This makes intuitive sense; it means that as the number of significant digits grows, the time needed for the algorithm to complete also grows.

The space complexity for this algorithm is even more obvious: *O(w + n)*, which means that the space required is a direct function of the number of digits needing to be sorted.

Both of these values are relatively low compared to other sorting algorithms. Because of this, Radix Sort is a pretty useful real-world algorithm. In fact, because Radix Sort doesn't actually need the values being sorted to be integers, we can use this algorithm to sort various things, including letters and words.
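To illustrate that last point, here's a hypothetical sketch (not from the sample project) of an LSD radix sort over equal-length lowercase words. It makes one stable "scatter and gather" pass per character position, starting from the rightmost character:

```
static void RadixSortWords(string[] words, int wordLength)
{
    //One stable pass per character position, least-significant (rightmost) first
    for (int pos = wordLength - 1; pos >= 0; pos--)
    {
        //One bucket per lowercase letter
        var buckets = new List<string>[26];
        for (int b = 0; b < 26; b++)
        {
            buckets[b] = new List<string>();
        }

        //Scatter: bucket each word by the character at this position
        foreach (string word in words)
        {
            buckets[word[pos] - 'a'].Add(word);
        }

        //Gather: read the buckets back in order
        int i = 0;
        foreach (var bucket in buckets)
        {
            foreach (string word in bucket)
            {
                words[i++] = word;
            }
        }
    }
}
```

Calling `RadixSortWords(new[] { "bca", "abc", "cab", "bab" }, 3)` leaves the array in alphabetical order: abc, bab, bca, cab.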

Radix Sort is one of the more useful real-world sorts; because it operates on least-significant digit, it can be used to sort quite a lot more than just numbers. But be aware; beneath the chill, "whatever, man" surface, he's quietly petrified that someone he cares about is going to discover his secret.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Gnome Sort is brand new to the family.

The girlfriend of Pigeonhole Sort and a hopeless romantic, she is eager to meet his parents and siblings and make good impressions on them. She can't wait for a chance to show off the bounty of her vegetable garden, and has brought squash, carrots, and fresh spinach to the reunion in the hopes of impressing her (maybe, potentially, hopefully) soon-to-be in-laws.

She might be out of luck, though. Pigeon has done this before, and the girlfriends he brings to the reunion aren't seen at the next one. Nevertheless, Gnome Sort hopes her gifts of food and joy help tell Pigeon's family that maybe she really is the real deal.

Even if she isn't, the family figures, they still get free fresh veggies. That's a good deal.

- **Created By:** Hamid Sarbazi-Azad (2000)
- **Type:** Exchange
- **Stable?** Yes
- **Average-Case:** *O(n²)*
- **Worst-Case:** *O(n²)*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- START at the beginning of the array.
- IF the pot at the current position and next-highest pot are in the correct order, go to next position.
- ELSE if they are not in order, SWAP them.
- GO TO 2, CONTINUE UNTIL all pots are sorted.

The nice thing about this visualization is that it makes it obvious what the "gnome" is doing. It also makes it very clear that this is a terribly inefficient algorithm.

The part that the visualization skips is that when a number is placed into its final sorted position, the "gnome" needs to step back *up* the sorted items to find the next unsorted item. Even if it does this quickly, it's doing many more accesses than other algorithms.

This one also has an "audibilization"; it sounded kind of like a police siren on speed to me. What do you think?

```
class GnomeSort
{
    static void Sort(int[] arr, int length)
    {
        int index = 0;
        while (index < length) //If there is no pot next to the gnome, he is done.
        {
            if (index == 0) //If the gnome is at the start of the line...
            {
                index++; //...he steps forward.
            }
            else if (arr[index] >= arr[index - 1]) //If the pots next to the gnome are in the correct order...
            {
                index++; //...he goes to the next pot.
            }
            else //If the pots are in the wrong order, he switches them and steps back.
            {
                int temp = arr[index];
                arr[index] = arr[index - 1];
                arr[index - 1] = temp;
                index--;
            }
        }
    }

    public static void Main()
    {
        int[] array = { 84, 61, 15, 2, 7, 55, 19, 40, 78, 33 };
        CommonFunctions.PrintInitial(array);
        Sort(array, array.Length);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

The worst case for this algorithm (being the case we care about for real-world applications) is abysmal: *O(n²)*. This is the same worst case as Bubble Sort, another poorly-performing sorting algorithm. Even with a constant *O(1)* space complexity, that quadratic running time makes Gnome Sort a poor fit for anything but tiny collections.

Where this algorithm shines is in teaching. I'd be willing to bet that many of you, dear readers, have done this kind of sorting in real life, such as when organizing books or DVDs. It's not efficient, true, but it's simple to understand and implement.

Gnome Sort is a poor-performing algorithm, but one whose value lies in teaching, not in performance: it is a good example of what not to do. Her veggies are delicious, though, and the family hopes she sticks around, if for no other reason than to get those scrumptious greens.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

The youngest child of the family patriarchs, Pigeonhole Sort could get away with murder, at least as far as his siblings are concerned.

Their mother dotes on "her little boy" despite the fact that he's a full-grown adult. She does his laundry, helps with his chores, and generally cleans up after him. His siblings think this is ridiculous, but he doesn't mind. He likes the attention.

This might explain why he has shown up to each of the last four reunions with a different girlfriend.

But despite his preference for new people, he also likes *giving* attention. A history teacher at the local middle school, Pigeonhole Sort loves kids but will probably never have them, and so he spends a lot of his time doting on his nieces and nephews. One of them, Counting Sort, has recently asked him to help her study to try to earn a scholarship to the big state school. He's flattered, because he wasn't the best student either. But for his beloved niece, he'll do anything, even if that means studying trigonometry for the first time in ten years.

- **Created By:** Unknown
- **Type:** Distribution
- **Stable?** Yes
- **Worst-Case:** *O(n + k)*
- **Space Complexity:** *O(n + k)*
- **Sample Project:** Over on GitHub

- DETERMINE the range of possible values from the max and min values in the collection.
- CREATE a set of "pigeonholes", one for each value in the range.
- FOR EACH value in the unsorted array, mark how many times that value appears by incrementing the value in the pigeonhole.
- COPY each non-zero value from the pigeonholes into the new sorted array.

Pigeonhole Sort is very similar to Counting Sort, so let's use the sample unsorted array from that post:

{ 5, 2, 5, 8, 1, 2, 4, 5 }

We can see that our range of possible values is 1 to 8, so we create a "pigeonhole" array that looks like this:

a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8}
---|---|---|---|---|---|---|---
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

Now we loop through the unsorted array, and for each value found we increment the value in the corresponding pigeonhole. That results in this:

a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8}
---|---|---|---|---|---|---|---
1 | 2 | 0 | 1 | 3 | 0 | 0 | 1

From this, we can see that value 1 appears once, value 2 appears twice, value 5 appears 3 times, and so on. We can therefore use this pigeonhole array to create the final, sorted array:

{ 1, 2, 2, 4, 5, 5, 5, 8 }

```
using System;
using System.Linq;

class PigeonholeSort
{
    public static void Sort(int[] array)
    {
        int length = array.Length;

        //Find the range of values in the array
        int min = array.Min();
        int max = array.Max();
        int range = max - min + 1;

        //Create a set of pigeonholes, one for each value in the range
        int[] pigeonholes = new int[range];
        for (int i = 0; i < range; i++)
        {
            pigeonholes[i] = 0;
        }

        //For each value in the array, increment the corresponding pigeonhole
        for (int i = 0; i < length; i++)
        {
            pigeonholes[array[i] - min]++;
        }

        int index = 0;

        //Use the pigeonhole array to sort the main array.
        for (int j = 0; j < range; j++)
        {
            //We are using a post-decrement here to keep track of the number of values we've already added to the array.
            while (pigeonholes[j]-- > 0)
            {
                array[index] = j + min;
                index++;
            }
        }
    }

    static void Main()
    {
        int[] array = { 51, 18, 99, 23, 40, 1, 82, 85, 18, 12, 76 };
        CommonFunctions.PrintInitial(array); //Helper from the sample project
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

As mentioned earlier, Pigeonhole Sort is remarkably similar to Counting Sort, which is probably why they get along so well. Their worst-case time complexities match: both are *O(n + k)*. As with Counting Sort, Pigeonhole Sort works best when *k* is small, and becomes rather inefficient as *k* gets larger. We can therefore say that Pigeonhole Sort may be useful in certain scenarios, but we are probably better off going with a different algorithm for real production code.

Pigeonhole Sort works well on small collections, but its efficiency begins to wane as the range of possible values increases. For real production apps, its utility is limited. That said, he will do anything for his niece, even if that means sitting up, late at night, trying to memorize the difference between isosceles and scalene triangles.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Bitonic Merge Sort is the military veteran uncle in the family.

He's very rigid, set in his ways; he'd much prefer that the world stop changing and adhere to his ideals for once, because then we'd all be better off. The way he sees it, the world is black-and-white, right-and-wrong, and there's a clear answer to every question. This is probably the reason why he doesn't get along with Bead Sort.

That said, he's a bit of a softy. He'll sneak Werther's Originals for his young grand-nieces and grand-nephews even after their parents have told them no, and has more than once spoken about how proud he is of grand-nephew Comb Sort's athletic ability. He's even offered to help grand-niece Counting Sort study so that she can keep her athletic scholarship. But at the end of the day, he's a man who holds strongly to his beliefs and is willing to defend them, no matter the cost.

- **Created By:** Ken Batcher
- **Type:** Concurrent
- **Stable?** No
- **Best-Case:** *O(log^{2}(n))*
- **Average-Case:** *O(log^{2}(n))*
- **Worst-Case:** *O(log^{2}(n))*
- **Space Complexity:** *O(n log^{2}(n))*
- **Sample Project:** Over on GitHub

- SPLIT the array into two equal halves.
- SORT the left half in ascending order, and the right half in descending order.
- FOR EACH half of the array...
- ...GO TO 1 (using the half of the array as the new "base" array), RECURSIVE CONTINUE until the split array size is 1.
- MERGE all sorted subarrays back together.

This is a difficult algorithm to visualize. The most important thing we need to know is that this algorithm makes a big assumption: that the unsorted array size is a power of 2 (e.g. 4, 8, 16, 32, etc). The algorithm only works in such cases.
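If your collection's size is *not* a power of 2, one common workaround is to pad it with sentinel values up to the next power of 2, sort, and then discard the padding. Here's a rough sketch of that idea; this helper is my own illustration and is not part of the algorithm or sample project below:

```csharp
using System;

class BitonicPadding
{
    //Returns the smallest power of 2 that is >= n.
    static int NextPowerOfTwo(int n)
    {
        int power = 1;
        while (power < n)
        {
            power *= 2;
        }
        return power;
    }

    //Pads the input with int.MaxValue sentinels so its length is a power of 2.
    //After an ascending sort, the sentinels land at the end and can be dropped.
    public static int[] PadForBitonicSort(int[] array)
    {
        int paddedLength = NextPowerOfTwo(array.Length);
        int[] padded = new int[paddedLength];
        array.CopyTo(padded, 0);
        for (int i = array.Length; i < paddedLength; i++)
        {
            padded[i] = int.MaxValue;
        }
        return padded;
    }

    static void Main()
    {
        int[] array = { 81, 12, 19, 53, 39, 7, 2, 45, 66 }; //9 elements, padded to 16
        int[] padded = PadForBitonicSort(array);
        Console.WriteLine(padded.Length); //16
    }
}
```

The trade-off is extra memory and wasted comparisons on the sentinels, which is part of why this remains a specialized algorithm.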

Wikipedia uses this visualization of an eight-input bitonic sorter:

The idea is that inputs flow from left to right, "entering" the sort on the left and "exiting" on the right. At each "intersection" the two connected numbers are compared. If they are out of order, they are swapped.

Let's imagine we are using the following array

{ 81, 12, 19, 53, 39, 7, 2, 45 }

I found it much easier to visualize this algorithm if I added the numbers to the sorting array after each sorting step. That resulted in this image:

Each "phase" in this diagram represents a step in the algorithm. The numbers represent the current position of the array elements after each phase.

Please note that this visualization does not match the algorithm above or the implementation below, though all are permutations of the same ideas.

Finally, here's the "audibilization" of this sort, termed Batcher's Bitonic Sort in the video:

```
using System;

class BitonicMergeSort
{
    static void Swap<T>(ref T leftHandSide, ref T rightHandSide)
    {
        T temp = leftHandSide;
        leftHandSide = rightHandSide;
        rightHandSide = temp;
    }

    static void CompareAndSwap(int[] array, int i, int j, int direction)
    {
        int k = array[i] > array[j] ? 1 : 0;

        //If the order the elements are currently in DOES NOT match the sort direction...
        if (direction == k)
        {
            //...swap the elements so they DO match the sort direction
            Swap(ref array[i], ref array[j]);
        }
    }

    //This method recursively sorts a bitonic sequence in ascending order
    //if direction = 1, and in descending order if direction = 0.
    //The sequence to be sorted starts at index position low;
    //the parameter count is the number of elements to be sorted.
    static void BitonicMerge(int[] array, int low, int count, int direction)
    {
        if (count > 1)
        {
            int k = count / 2;
            for (int i = low; i < low + k; i++)
            {
                CompareAndSwap(array, i, i + k, direction);
            }
            BitonicMerge(array, low, k, direction);
            BitonicMerge(array, low + k, k, direction);
        }
    }

    //This method first produces a bitonic sequence by recursively
    //sorting its two halves in opposite sorting directions, and then
    //calls BitonicMerge to put them in the same direction.
    static void BitonicSort(int[] array, int low, int count, int direction)
    {
        if (count > 1)
        {
            int k = count / 2;

            //Sort the left side in ascending order
            BitonicSort(array, low, k, 1);

            //Sort the right side in descending order
            BitonicSort(array, low + k, k, 0);

            //Merge the entire sequence in the requested direction
            BitonicMerge(array, low, count, direction);
        }
    }

    public static void Main()
    {
        int[] array = { 66, 98, 11, 43, 7, 28, 14, 49, 77, 61, 31, 12, 71, 93, 15, 2 };
        int length = array.Length;
        Console.WriteLine("Bitonic Merge Sort");
        CommonFunctions.PrintInitial(array);
        BitonicSort(array, 0 /*low value*/, length, 1); //1 sorts in ascending order
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

The time complexity for this algorithm is the same across all cases: *O(log^{2}(n))*. This signifies that the initial order of the elements does not change the time needed, a very good thing in a sorting algorithm.

However, this is only true because the algorithm makes a major assumption: it assumes the input array has a length that is a power of 2. We see in this algorithm, as in others that make assumptions about their input, that those assumptions are what cause the improved performance many of them have. This is not a general-case sorting algorithm; this is a specialized one.

Bitonic Merge Sort is a very performant sort, provided the input size is a power of 2. Its consistent time complexities make it ideal for small or large sets of data. Plus, he's always got some Werther's. Maybe he'll even give you one.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Counting Sort is the star athlete of the family, even more so than Comb Sort.

A soccer striker with a knack for faking out the keeper, she scores goals like football linebackers down calories. She's been playing since she could walk; soccer is all she wants to do. She's even got a potential scholarship lined up to go play for the big state college next year, something she's eagerly looking forward to once she graduates.

That is, *if* she graduates. Her grades have not been, well, *good*. The high school has so far looked the other way because she helps the team win so much, but that won't fly in college. All she has to do is get a B or better average for her senior year, so she has enlisted a team of people to help her study: her dad Cocktail Shaker Sort, doting but gruff grand-uncle Bitonic Merge Sort, history-teacher uncle Pigeonhole Sort, and first cousin Comb Sort (who has a similar problem). They are hoping, trying, to ensure that Counting Sort can go to college and play.

But time is running out, and it's starting to look like they're gonna need a miracle for Counting Sort to live her dream.

- **Created By:** Harold H. Seward (1954)
- **Type:** Distribution
- **Stable?** Yes
- **Worst-Case:** *O(n + k)*
- **Space Complexity:** *O(n + k)*
- **Sample Project:** Over on GitHub

- COUNT how many times a particular value appears, and store those values in a "count" array.
- CREATE an "order" array which shows in what order the values should appear.
- SORT the main array using the order and count arrays.

This sort works by counting how many instances of a particular number show up. The algorithm above is not very descriptive, so let's see if we can make a more meaningful example.

Say we have the following starter array:

{ 5, 2, 5, 8, 1, 2, 4, 5 }

Counting sort would create a new "count" array of size *k* (which we will define to be 10 for simplicity) and store the number of times a given value appears.

That array would look like this:

a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10}
---|---|---|---|---|---|---|---|---|---
1 | 2 | 0 | 1 | 3 | 0 | 0 | 1 | 0 | 0

We read this as "the value 1 appears 1 time, the value 2 appears 2 times, the value 3 appears 0 times" and so on.

We then need to create an "order" array, in which the values increase, to show the order in which the elements should appear.

a_{1} | a_{2} | a_{3} | a_{4} | a_{5} | a_{6} | a_{7} | a_{8} | a_{9} | a_{10}
---|---|---|---|---|---|---|---|---|---
1 | 2 | 2 | 4 | 5 | 5 | 5 | 8 | 8 | 8

Using the counting array and the order array, we can place the values in their correct order:

{ 1, 2, 2, 4, 5, 5, 5, 8 }
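The implementation below uses slightly different bookkeeping than the tables above: instead of a separate "order" array, it converts the count array into a running (prefix) sum, so that each entry marks the position just past where its value belongs in the output. Here's a minimal sketch of just that step, using our example array and assuming a range of 0 to 9:

```csharp
using System;

class CountingSortPositions
{
    //Builds the running-sum "position" array: result[v] is the number of
    //elements in the input that are less than or equal to v.
    public static int[] PrefixCounts(int[] array, int k)
    {
        int[] count = new int[k];

        //Count how many times each value appears.
        foreach (int value in array)
        {
            count[value]++;
        }

        //Convert the counts into a running sum.
        for (int i = 1; i < k; i++)
        {
            count[i] += count[i - 1];
        }
        return count;
    }

    static void Main()
    {
        int[] array = { 5, 2, 5, 8, 1, 2, 4, 5 };
        int[] positions = PrefixCounts(array, 10);

        //Seven values are <= 5, so the last 5 belongs at index 6 (7 - 1).
        Console.WriteLine(positions[5]); //7
    }
}
```

Walking elements from the back and decrementing the relevant position after each placement is what makes the algorithm stable: equal values keep their original relative order.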

In researching this algorithm I found it particularly helpful to step through each for loop of this implementation, to see what each array had as its values.

```
using System;

class CountingSort
{
    static void Sort(int[] array)
    {
        int length = array.Length;

        //Create a new "output" array
        int[] output = new int[length];

        //Create a new "count" array which stores the count of each unique number.
        //We assume the values fall in the range 0 to 99, so k = 100.
        int[] count = new int[100];
        for (int i = 0; i < 100; ++i)
        {
            count[i] = 0;
        }
        for (int i = 0; i < length; ++i)
        {
            ++count[array[i]];
        }

        //Change count[i] so that count[i] now contains the actual position of
        //this value in the output array.
        for (int i = 1; i <= 99; ++i)
        {
            count[i] += count[i - 1];
        }

        //Build the output array.
        //To make this sorting algorithm stable, we operate in reverse order.
        for (int i = length - 1; i >= 0; i--)
        {
            output[count[array[i]] - 1] = array[i];
            --count[array[i]];
        }

        //Copy the output array to the final array.
        for (int i = 0; i < length; ++i)
        {
            array[i] = output[i];
        }
    }

    public static void Main()
    {
        int[] array = { 64, 11, 83, 8, 13, 45, 92, 98, 55, 17, 41, 81, 11, 64, 14, 41, 11, 92 };
        Console.WriteLine("Counting Sort");
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

Counting Sort can be fairly efficient: its worst-case time complexity is *O(n + k)*, where *k* is the range of potential values. For small values of *k*, this is an efficient time. The problem happens when *k* is large; in those scenarios, this algorithm becomes markedly less efficient than other distribution algorithms (e.g. Radix Sort).

Because the algorithm needs to create a "count" array that is the same size as the range of possible values *k*, the space complexity is also *O(n + k)*. Again, this is not a problem with small *k*, but with large *k* you could start running into memory or space issues.

Counting Sort is an efficient sorting algorithm for small ranges of potential values *k*. However, once the range becomes large, Counting Sort quickly becomes overshadowed by other, more efficient sorts. But, with a little help from her grand-uncle and her cousin, she might, just might, still make it to college. She hopes.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Odd-Even Sort is a headstrong little girl.

The middle daughter of Heap Sort and Cocktail Shaker Sort, she has recently discovered a particular skill, a skill that makes her a nightmare to people like her parents. This skill has allowed her to get extra dessert, more screen time, even fewer chores; in short, everything a ten-year-old middle child could ever dream of.

That skill? Getting one parent to say yes when the other already said no.

She's sneaky about it. She will wait minutes, hours even, just to strike the unsuspecting parent at exactly the right moment. She knows just when her opportunity arises: when mom gets home from work, when dad is dealing with her baby brother Bogosort, when her elder sister Counting Sort needs to get to soccer practice. All her life she's been perfecting this skill, and now she's a master.

Here's hoping she uses her powers for good. Or at least extra ice cream.

- **Created By:** N. Haberman
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n)*
- **Worst-Case:** *O(n^{2})*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- FOR each item in an odd numbered position
- COMPARE that item against the item in the next-highest position.
- IF the elements are out of order, SWAP them.
- FOR each item in an even-numbered position
- COMPARE that item against the item in the next-highest position.
- IF the elements are out of order, SWAP them.
- GOTO 1, CONTINUE until list is sorted.

The base visualization from Wikipedia is cool to look at but doesn't help me understand this algorithm at all. So let's demonstrate the algorithm with a simple set of numbers. Here's our unsorted collection:

{ 15, 62, 42, 29, 17 }

On the first pass of the algorithm, in the first step, we compare the element in position 1 against the element in position 2. Position 1 is 15, position 2 is 62, so these are in order.

The next comparison is between position 3 (42) and position 4 (29). Those elements are out of order, so the algorithm swaps them, resulting in:

{ 15, 62, 29, 42, 17 }

The next move would compare the element in position 5 against the one in position 6, but since there *isn't* one in position six, the algorithm instead switches to doing the even-numbered comparison.

The first even comparison (position 2 to position 3) results in:

{ 15, 29, 62, 42, 17 }

The second even comparison results in:

{ 15, 29, 62, 17, 42 }

Now we go back to the odd-numbered comparisons:

{ 15, 29, 17, 62, 42 }

Now back to even-numbered:

{ 15, 17, 29, 62, 42 }

{ 15, 17, 29, 42, 62 }

And now the array is sorted! It took us four passes to do this, and if you're thinking that seems pretty inefficient, you are not wrong.

Here's the audibilization of this algorithm:

```
using System;

class OddEvenSort
{
    public static void Sort(int[] array, int length)
    {
        bool isSorted = false;
        while (!isSorted)
        {
            isSorted = true;

            //Swap i and i+1 if they are out of order, for odd values of i
            for (int i = 1; i <= length - 2; i = i + 2)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSorted = false;
                }
            }

            //Swap i and i+1 if they are out of order, for even values of i
            for (int i = 0; i <= length - 2; i = i + 2)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSorted = false;
                }
            }
        }
    }

    public static void Main()
    {
        int[] array = { 71, 42, 19, 3, 33, 28, 0, 89, 44, 2, 81 };
        int length = array.Length;
        CommonFunctions.PrintInitial(array);
        Sort(array, length);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

Given that Odd-Even Sort is a derivative of Bubble Sort, we would expect the worst-case time complexity to be similar, and indeed it is; for both algorithms it is *O(n^{2})*. The best-case scenario for Odd-Even Sort is remarkably better at *O(n)*, though, and the space complexity is constant at *O(1)*.

Odd-Even Sort, being a derivative of Bubble Sort, is not known to be an efficient algorithm. That said, it has some uses in specific cases (as seen on its Wikipedia page), so it's not totally without merit. Plus, she's really good at getting extra dessert. Maybe, if you're nice to her, she'll share with you.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

The indefatigable Bubble Sort is beloved by her whole family, provided she isn't in the room.

The eight-year-old daughter of Heap Sort and Cocktail Shaker Sort, she absolutely *loves* to tell you all about her day: what things she ate, this cool car she saw on the way over, how her little brother is annoying her at that particular moment, what she wants for her birthday in eight months. Everything.

Her parents adore her, but sometimes the family wishes she would just shut up for five minutes. All except for grandma Merge Sort, who could listen to little Bubbly's stories and exclamations for days. No wonder she loves going to Grandma's house.

- **Created By:** Unknown (Popularized by Donald Knuth)
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n^{2})*
- **Average-Case:** *O(n^{2})*
- **Worst-Case:** *O(n^{2})*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- FOR each item in the collection
- IF two adjoining elements are in the wrong order, SWAP them.
- GO TO 1, CONTINUE until the list is sorted.

This animation is a good example of Bubble Sort, because like Bubble Sort it takes FOREVER to get anything done. So let's try another example.

Say we have the following collection:

{ 5, 8, 2, 4, 1 }

Bubble sort will swap elements on each pass that are not in the correct order. Here's the position of the collection elements after each swap:

{ 5, 2, 8, 4, 1 }

{ 5, 2, 4, 8, 1 }

{ 5, 2, 4, 1, 8 }

{ 2, 5, 4, 1, 8 }

{ 2, 4, 5, 1, 8 }

{ 2, 4, 1, 5, 8 }

{ 2, 1, 4, 5, 8 }

{ 1, 2, 4, 5, 8 }

To do this sort, the algorithm has to complete four passes of the array just to sort five elements. That's wildly inefficient.

...But cool to watch. Check out this "audibilization" from YouTube:

```
using System;

class BubbleSort
{
    public static void Main(string[] args)
    {
        int[] array = { 92, 28, 3, 71, 50, 14, 24, 20, 66, 70, 45, 17, 9, 99, 38 };
        int temp;
        Console.WriteLine("Bubble Sort");
        CommonFunctions.PrintInitial(array);

        //1. For each item in the array...
        for (int i = 0; i <= array.Length - 2; i++)
        {
            for (int j = 0; j <= array.Length - 2; j++)
            {
                //2. ...if two adjoining elements are in the wrong order, swap them.
                if (array[j] > array[j + 1])
                {
                    temp = array[j + 1];
                    array[j + 1] = array[j];
                    array[j] = temp;
                }
                //3. Repeat this for all pairs of elements.
            }
        }
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

Bubble Sort is, to put it frankly, the worst sorting algorithm that isn't trying to be bad. The *best-case* scenario is still *O(n^{2})*. It is inefficient, slow, and generally a poor-performing sort, despite how easy the algorithm is to implement. That simplicity is really this algorithm's only redeeming value: it's easy to teach, in a "this is what you should not do" sort of way.

In short, Bubble Sort should not be used for real-world sorting scenarios.

Bubble Sort is a highly inefficient sorting algorithm that traverses through a collection, swapping neighboring values if they are out of order. It has an implementation only a mother could love. Or, perhaps, a grandmother?

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Bogosort is adorable, when he isn't running away.

The two-year-old son of Heap Sort and Cocktail Shaker Sort, he is possibly the most mischievous scamp this side of the Mississippi, or at least that's what his grandfather would say.

A favorite trick of his happens when the family plays poker, or Uno, or any other game involving cards. He sneaks up to the table (he can be very quiet when he wants to be) and runs off with as many cards as his little hands can grab. As you might imagine, this dramatically improves the game, at least in his tiny opinion. The adults who have to spend time chasing him down, vaulting over dogs and rounding corners at dangerously high speeds, might disagree.

- **Created By:** Unknown
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n)*
- **Average-Case:** *O((n + 1)!)*
- **Worst-Case:** Unbounded
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- IF the list is not sorted...
- SHUFFLE the list randomly
- GO TO 1, CONTINUE until list is sorted.

A visualization for this particular sorting algorithm is kind of ridiculous. The procedure is just "shuffle randomly until sorted", very much like taking a deck of cards, throwing them into the air, gathering them up, and repeating this until the deck is in the correct order.

If you think this algorithm is amazingly stupid, you are not wrong.

But it's not the slowest algorithm ever. That honor goes to Bogobogosort, an algorithm that was specifically designed to be as slow as possible while still doing a ton of work. The author's own guess on that algorithm's time complexity is *O(n!^{n!})*. That's just crazy.

That said, the "audibilization" of Bogosort is pretty neat: a ton of noise for not very much gain:

```
using System;
using System.Collections.Generic;

class BogoSort
{
    //A single shared Random instance, so rapid repeated shuffles don't reuse the same time-based seed.
    static readonly Random r = new Random();

    static void Main(string[] args)
    {
        List<int> list = new List<int>() { 2, 1, 3, 0 };
        Console.WriteLine("BogoSort");
        Console.WriteLine("Sorting...");
        Sort(list);
        Console.WriteLine("Press any key to exit.");
        Console.ReadKey();
    }

    static void Sort(List<int> list)
    {
        int iteration = 0;
        while (!IsSorted(list))
        {
            PrintIteration(list, iteration);
            list = Shuffle(list); //Shuffle the numbers randomly
            iteration++;
        }
        PrintIteration(list, iteration); //Print the final, sorted iteration
        Console.WriteLine();
        Console.WriteLine("BogoSort completed after {0} iterations.", iteration);
    }

    static void PrintIteration(List<int> list, int iteration)
    {
        Console.Write("BogoSort iteration #{0}: ", iteration);
        foreach (var value in list)
        {
            Console.Write($"{value} ");
        }
        Console.WriteLine();
    }

    static bool IsSorted(List<int> list)
    {
        for (int i = 0; i < list.Count - 1; i++)
        {
            if (list[i] > list[i + 1])
            {
                return false;
            }
        }
        return true;
    }

    //Shuffle algorithm based on the Fisher-Yates method.
    static List<int> Shuffle(List<int> numbers)
    {
        //Step 1: For each unshuffled item in the collection
        for (int n = numbers.Count - 1; n > 0; --n)
        {
            //Step 2: Randomly pick an item which has not yet been shuffled
            int k = r.Next(n + 1);

            //Step 3: Swap the selected item with the last "unstruck" item in the collection
            int temp = numbers[n];
            numbers[n] = numbers[k];
            numbers[k] = temp;
        }
        return numbers;
    }
}
```

Any time you see a factorial (!) in a math equation, and you want that number to be small, you're gonna have a bad time. The average case for Bogosort is *O((n + 1)!)*. This means that, as the size of the collection increases, the time the algorithm requires to shuffle the collection increases *dramatically*. There's a reason why the implementation above has only four numbers in the starting collection; any more and I was getting hundreds of permutations before finding a solution (if one was found at all).
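To put that factorial growth in concrete terms, here's a quick sketch (my own illustration, not part of the sample project) that computes *(n + 1)!* for a few small collection sizes:

```csharp
using System;

class BogosortGrowth
{
    //Computes n! using long arithmetic; fine for the small values of n shown here.
    public static long Factorial(int n)
    {
        long result = 1;
        for (int i = 2; i <= n; i++)
        {
            result *= i;
        }
        return result;
    }

    static void Main()
    {
        //The average number of shuffles needed is on the order of (n + 1)! for n elements.
        for (int n = 4; n <= 10; n++)
        {
            Console.WriteLine("{0} elements: ~{1} shuffles", n, Factorial(n + 1));
        }
    }
}
```

Four elements means roughly 120 expected shuffles; ten elements already means roughly 39.9 *million*. That's why the sample above stops at four.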

Obviously, Bogosort isn't meant to be taken seriously as a sorting algorithm. It's not useful in the real world; the algorithm can be reduced to "shuffle until sorted" which might take forever. That said, it can be useful, in a "do not do this" kind of way. Just make sure he can't reach the cards.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Cocktail Shaker Sort would like a drink. A strong one, if you please.

The father of four rambunctious children and husband to the always-working Heap Sort, "Shaky" (a loving nickname for him which is never said to his face) is a stay-at-home dad who absolutely adores his children but just wants some alone time, is that so much to ask?

The problem is that, when he gets his alone time (rare as it is), he would greatly prefer to curl up with his beloved Heapy, but failing that, he'll make a stiff drink that the rest of the family can't be in the same room as. Not that he minds. Heapy is his best friend, his companion, his true love, but the rest of her family he could take or leave. Nevertheless, his kids love him, and the family has accepted him, even if he hasn't quite gotten around to feeling the same way.

- **Created By:** Donald Knuth (1973)
- **AKA:** Bidirectional Bubble Sort
- **Type:** Exchange
- **Stable?** Yes
- **Best-Case:** *O(n)*
- **Average-Case:** *O(n^{2})*
- **Worst-Case:** *O(n^{2})*
- **Space Complexity:** *O(1)*
- **Sample Project:** Over on GitHub

- PASS THROUGH the collection from left-to-right.
- DURING that pass, SELECT the highest encountered value.
- DEPOSIT that value at the end of the considered range.
- DECREASE the end of the considered range by 1.
- PASS THROUGH the collection from right-to-left.
- DURING that pass, SELECT the lowest encountered value.
- DEPOSIT that value at the beginning of the considered range.
- INCREASE the start of the considered range by 1.
- GO TO 1, CONTINUE until the range is 0.

Lucky for us, Cocktail Shaker sort is an easy sort to understand. On each pass through the collection, it either finds the highest or lowest encountered value (depending on pass direction) and deposits that value at the beginning or end of the considered range. In fact, Cocktail Shaker sort is really just Bubble Sort but in both directions.

The "audibilization" of this sort is particularly interesting, because it's simple to understand without knowing the underlying math and logic:

```
using System;

class CocktailShakerSort
{
    static void Sort(int[] array)
    {
        bool isSwapped = true;
        int start = 0;
        int end = array.Length;

        while (isSwapped == true)
        {
            //Reset this flag. It is possible for this to be true from a prior iteration.
            isSwapped = false;

            //Do a bubble sort on this array, from low to high. If something changed, make isSwapped true.
            for (int i = start; i < end - 1; ++i)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSwapped = true;
                }
            }

            //If no swaps were made, the array is sorted.
            if (isSwapped == false)
                break;

            //We need to reset the isSwapped flag for the high-to-low pass.
            isSwapped = false;

            //The item we just moved is in its rightful place, so we no longer need to consider it unsorted.
            end = end - 1;

            //Now we bubble sort from high to low.
            for (int i = end - 1; i >= start; i--)
            {
                if (array[i] > array[i + 1])
                {
                    int temp = array[i];
                    array[i] = array[i + 1];
                    array[i + 1] = temp;
                    isSwapped = true;
                }
            }

            //Finally, we need to increase the starting point for the next low-to-high pass.
            start = start + 1;
        }
    }

    public static void Main()
    {
        int[] array = { 15, 10, 83, 5, 7, 42, 65, 50, 58, 71, 61 };
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

You may have heard before that Bubble Sort's complexities for all cases are *O(n^{2})*. Cocktail Shaker Sort, being a direct derivative of Bubble Sort, is not much better; the average and worst-case complexities do not improve, and only the best-case scenario (where the list is already sorted) becomes *O(n)*.

About the only good thing Cocktail Shaker Sort has going for it is a space complexity of constant value, *O(1)*. But that's not very useful in the real world. In short, Cocktail Shaker Sort is not a valuable implementation of sorting for anything other than educational or theoretical problems.

Cocktail Shaker Sort is an easy-to-understand, simple-to-implement derivative of Bubble Sort. However, it's not a terribly useful sort in the real world, as it takes too long to do much of anything. Plus, he makes a drink that'll burn your eyeballs out. Don't say I didn't warn you.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Heap Sort, the second daughter of the family, has a busy life.

As the owner of a local, well-liked restaurant, Heap Sort's Heaping Helpings, she is constantly running around trying to make her restaurant better. As the mom of four kids, she and her husband Cocktail Shaker Sort are constantly trying to stay one step ahead of their children's antics, not always with success.

Heap Sort is nearly the exact opposite of her mother Merge Sort. She's a fantastic chef and her pies are legendary around town. Moreover, she absolutely adores cooking for other people, and has offered to teach her dear Mom a thing or two (she politely declined). Because of her success, the family regards her as the star child, much to older sister Insertion Sort's annoyance.

Heap Sort has a good life, and she knows it. Busy, yes; but then again, who isn't?

- **Created By:** J. W. J. Williams (1964)
- **Type:** Selection
- **Stable?** No
- **Best-Case:** *O(n log n)*
- **Average-Case:** *O(n log n)*
- **Worst-Case:** *O(n log n)*
- **Space Complexity:** *O(n)*
- **Sample Project:** Over on GitHub

- CONSTRUCT a "max" heap from the unsorted data that satisfies the heap property.
- SWAP the first and final elements. The final element is now sorted.
- DECREMENT the range of "considered" elements (those still needing to be sorted) by 1.
- RESTRUCTURE the heap to satisfy the heap property (again).
- SWAP the new root with the last item in the range of considered items.
- CONTINUE until the range of considered items is 1.

This is one of the more complicated visualizations in this series. I'm going to break it down into two steps.

The unsorted chart looks like this:

In order to even use Heap Sort, we must first satisfy something known as the heap property. The heap property (for a "max heap", which is what we will use in this post) is:

"For any given node C, if P is a parent node of C, then thekey(thevalue) of P is greater than or equal to the key of C"

The problem with the collection above is that the nodes do not satisfy this property; the value at the beginning (or "top") of the list is not the highest value. To make the property true, the algorithm sorts the collection into the following:

Now this collection satisfies the heap property: for any node C which has a parent P (a "parent" here meaning "appears earlier in this list"), said parent P has a value greater than C. This is what the next image in the animation is trying to convey:

This diagram is an outline of the heap, and the tree structure it represents. Because we now have a valid max heap, we can proceed to the next step.
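To make the heap property concrete, here's a small sketch of a checker (in Python for brevity; the helper name `satisfies_max_heap` is mine, not from the sample project). In the array representation of a binary heap, the children of the node at index `i` live at indices `2*i + 1` and `2*i + 2`:

```python
def satisfies_max_heap(items):
    """Return True if every parent is >= both of its children."""
    n = len(items)
    for parent in range(n):
        for child in (2 * parent + 1, 2 * parent + 2):
            if child < n and items[child] > items[parent]:
                return False
    return True

print(satisfies_max_heap([79, 74, 53, 42, 24, 19]))  # → True (a valid max heap)
print(satisfies_max_heap([5, 74, 53, 42, 24, 19]))   # → False (root smaller than a child)
```

The first "construct a max heap" step of the algorithm is exactly the work of rearranging the collection until this function would return `True`.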

At this point, we know that the "root" of the heap (i.e. the node in the first position) must be the highest value in the considered range. Therefore we can place it at the "back" of the list.

We must then decrease the considered range by 1 (so as not to include the just-sorted item) and rebuild the heap. We continue to do this until the considered range is 1, at which point, the collection will be sorted.

You should also check out this "audibilization" of this algorithm from YouTube:

```
using System;

public class HeapSort
{
    static void Sort(int[] array)
    {
        var length = array.Length;

        //Build the initial max heap from the unsorted data
        for (int i = length / 2 - 1; i >= 0; i--)
        {
            Heapify(array, length, i);
        }

        //Repeatedly move the root (the max) to the back of the
        //considered range, then rebuild the heap
        for (int i = length - 1; i > 0; i--)
        {
            int temp = array[0];
            array[0] = array[i];
            array[i] = temp;
            Heapify(array, i, 0);
        }
    }

    //Rebuilds the heap rooted at index i
    static void Heapify(int[] array, int length, int i)
    {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;

        if (left < length && array[left] > array[largest])
        {
            largest = left;
        }
        if (right < length && array[right] > array[largest])
        {
            largest = right;
        }
        if (largest != i)
        {
            int swap = array[i];
            array[i] = array[largest];
            array[largest] = swap;
            Heapify(array, length, largest);
        }
    }

    public static void Main()
    {
        int[] arr = { 74, 19, 24, 5, 8, 79, 42, 15, 20, 53, 11 };
        Console.WriteLine("Heap Sort");
        CommonFunctions.PrintInitial(arr);
        Sort(arr);
        CommonFunctions.PrintFinal(arr);
        Console.ReadKey();
    }
}
```

Heap sort is a consistent algorithm: the best, average, and worst-case time complexities are all *O(n log n)*. In fact, heap sort is one of the most commonly used sorting algorithms in the real world. Its efficient time complexity and low space requirements make it a good candidate for implementation in our apps.
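If you just need heap-based sorting in practice, many standard libraries already ship a binary heap. As a sketch of the same idea in Python (not from the sample project; Python's `heapq` module implements a *min* heap, so the smallest item comes off first rather than the largest):

```python
import heapq

def heap_sort(items):
    """Sort by heapifying a copy, then repeatedly popping the minimum."""
    heap = list(items)
    heapq.heapify(heap)  # O(n) heap construction
    # n pops at O(log n) each, giving the O(n log n) total discussed above
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([74, 19, 24, 5, 8, 79, 42, 15, 20, 53, 11]))
# → [5, 8, 11, 15, 19, 20, 24, 42, 53, 74, 79]
```

Using a min heap flips the direction of the swap-and-shrink phase, but the two phases (heapify, then n extractions) are the same as in the C# version.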

The goodness that is Heap Sort knows no bounds; it is an efficient algorithm that is in use today in production systems. If you're looking to sort large lists, Heap Sort just might be your best option. Plus, she makes a mean rhubarb pie.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Quick Sort is the consummate overachiever.

Graduated first in his class, runs a successful business, owns a boat, married to a brilliant accountant, with two beautiful children. He does not slow down; he does everything fast and thoroughly. Keeping up with his family is quite the endeavor. When he picks something to get done, it gets DONE.

Sometimes, though, he wonders. He wonders if maybe there's more to life than climbing the org chart, buying soulless things, or making money. He wonders if maybe, just maybe, he could slow down and enjoy the scenery for once. And then he decides that's for wimps and goes back to working all hours of the night.

**Created By:** C. A. R. "Tony" Hoare (1959) **Type:** Exchange **Stable?** No **Best-Case:** *O(n log n)* **Average-Case:** *O(n log n)* **Worst-Case:** *O(n²)* **Space Complexity:** *O(n)* **Sample Project:** Over on GitHub

- SELECT a pivot point. (Our implementation selects the last number in the collection).
- REORDER the collection such that all values less than the pivot are before the pivot, and all values greater than the pivot are after it.
- RECURSIVE CONTINUE: Return to Step 1, selecting a new pivot in each of the divided higher and lower sections, until the array is sorted.

The critical thing Quick Sort does is select a pivot point, but different varieties do this differently. In the above animation (and the below implementation), the first pivot point is merely the last item in the collection, and it continues to pick the last item in each "partition" caused by the sort as a pivot point.

Reader James Curran also contributed another visualization of this sort:

Finally, check out this audibilization on YouTube:

```
using System;

class QuickSort
{
    static int Partition(int[] array, int low, int high)
    {
        //1. Select a pivot point.
        int pivot = array[high];
        int lowIndex = (low - 1);

        //2. Reorder the collection.
        for (int j = low; j < high; j++)
        {
            if (array[j] <= pivot)
            {
                lowIndex++;
                int temp = array[lowIndex];
                array[lowIndex] = array[j];
                array[j] = temp;
            }
        }

        //Move the pivot into its final sorted position
        int temp1 = array[lowIndex + 1];
        array[lowIndex + 1] = array[high];
        array[high] = temp1;
        return lowIndex + 1;
    }

    static void Sort(int[] array, int low, int high)
    {
        if (low < high)
        {
            int partitionIndex = Partition(array, low, high);

            //3. Recursively continue sorting the array
            Sort(array, low, partitionIndex - 1);
            Sort(array, partitionIndex + 1, high);
        }
    }

    public static void Main()
    {
        int[] array = { 72, 12, 6, 33, 81, 97, 37, 59, 52, 1, 20 };
        int length = array.Length;
        Console.WriteLine("QuickSort");
        CommonFunctions.PrintInitial(array);
        Sort(array, 0, length - 1);
        CommonFunctions.PrintFinal(array);
        Console.ReadKey();
    }
}
```

The entire reason Quick Sort has that name is because, for the vast majority of circumstances, it is demonstrably quicker than other relatively-simple implementations. That said, there is some debate about how much quicker it is than, say, Merge Sort, which clearly means that Quick Sort must get along fabulously with his father-in-law Selection Sort, but maybe not his mother-in-law.

It would be valid, I believe, to say that Quick Sort is the simplest sorting algorithm that is actually in use today for real production code. Some of the upcoming algorithms are much more complex, but faster.
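That *O(n²)* worst case comes from consistently unlucky pivots; always picking the last element, as the implementation above does, degrades badly on already-sorted input. One common mitigation (sketched here in Python, and not part of the sample project) is to choose the pivot at random, which makes the worst case vanishingly unlikely in practice. This version trades the in-place swapping of the C# code for readability:

```python
import random

def quick_sort(items):
    """Quick Sort with a randomly chosen pivot to dodge the O(n^2) worst case."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    # Three-way partition around the pivot, then recursively sort each side
    lower  = [x for x in items if x < pivot]
    equal  = [x for x in items if x == pivot]
    higher = [x for x in items if x > pivot]
    return quick_sort(lower) + equal + quick_sort(higher)

print(quick_sort([72, 12, 6, 33, 81, 97, 37, 59, 52, 1, 20]))
# → [1, 6, 12, 20, 33, 37, 52, 59, 72, 81, 97]
```

Other varieties pick the median of the first, middle, and last elements ("median-of-three") to the same end.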

Also, apropos of nothing, the inventor of Quick Sort wrote possibly my favorite adage about coding:

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C. A. R. Hoare

Quick Sort is exactly what it sounds like. It gets things sorted, as quickly as possible, by subdividing the contents of a collection around an arbitrarily-selected pivot point. It then recursively selects smaller and smaller "partitions" and orders them.

Sometimes he wishes he could slow down. Maybe he will get his chance. Someday.

Don't forget to check out the sample project over on GitHub!

Happy Coding!

Comb Sort is the poster boy for the all-American child.

Unlike his elder brother Shell Sort, Comb Sort is a star athlete. He started as the quarterback for his high school football team when he was a sophomore. But more than that, he has everything he thought he could ever want, at least until recently. He's the junior prom king, the one voted "best athlete" in the yearbook, everything. Life has been easy for him so far.

Yet, something is bothering him. Something that, until recently, he couldn't describe.

He's not the smartest guy, and he knows this. His grades are mostly Cs and Ds, and he understands that that doesn't bode well for his future. His parents are high-achievers, his brother is a straight-A student, and he's... what? A star athlete? That won't last forever. What happens when he can no longer get by on physical prowess?

So, he reads. He studies. He's putting more time into making good test scores and less time into sports. It's not working yet (his last test was a C+), but he's hopeful that it will, and soon. He can't get through life on athletic skills alone. He's sorting out his life, one piece at a time.

**Created By:** Włodzimierz Dobosiewicz (1980) **Type:** Exchange **Stable?** No **Best-Case:** *O(n log n)* **Average-Case:** *O(n²)* **Worst-Case:** *O(n²)* **Space Complexity:** *O(1)* **Sample Project:** Over on GitHub

- DETERMINE the appropriate gap size *h* by using a set shrink factor *k*.
- COMPARE each pair of elements that are *h* distance from each other.
- SWAP those elements if they are in the wrong order.
- COMPLETE a pass through the array with the selected *h*.
- SHRINK the gap size *h* using shrink factor *k*.
- IF *h* is not 1, GOTO step 2.
- CONTINUE until *h* is 1 AND a pass through the array results in no swaps.
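To make the shrink factor concrete: for a collection of 11 elements and the usual *k* ≈ 1.3 (approximated with integer math as `gap * 10 / 13`, as in the C# implementation later in this post), the gaps work out as follows. A quick Python sketch (the function name is mine):

```python
def gap_sequence(length):
    """Yield comb sort's gap sizes for a collection of the given length."""
    gaps = []
    gap = length
    while gap > 1:
        # Integer approximation of dividing by the shrink factor k ~= 1.3
        gap = max(1, (gap * 10) // 13)
        gaps.append(gap)
    return gaps

print(gap_sequence(11))  # → [8, 6, 4, 3, 2, 1]
```

So an 11-element array gets one pass at each of gaps 8, 6, 4, 3, and 2, then repeats gap-1 passes (i.e. bubble sort) until no swaps occur.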

You can see from this animation why this algorithm got the name "Comb Sort": the comparison distances look like two prongs in a comb. You can also see that this is a not-very-effective sort: it has to make many passes through all the elements in order to fully sort this list. In fact, we would most likely consider Comb Sort to be a "naive" sort, on the same level as Selection Sort.

Also check out the "audibilization" of this sort from YouTube:

Finally, reader James Curran kindly created a visualization for this sort; it looks like this:

```
using System;

class CombSort
{
    static int GetNextGap(int gap)
    {
        //Shrink the gap by the "shrink factor", empirically shown to be ~1.3
        gap = (gap * 10) / 13;
        if (gap < 1)
        {
            return 1;
        }
        return gap;
    }

    static void Sort(int[] array)
    {
        int length = array.Length;
        int gap = length;

        //We initialize this as true to enter the while loop.
        bool swapped = true;

        while (gap != 1 || swapped == true)
        {
            gap = GetNextGap(gap);

            //Set swapped to false; it becomes true when two values are swapped.
            swapped = false;

            //Compare all elements with the current gap
            for (int i = 0; i < length - gap; i++)
            {
                if (array[i] > array[i + gap])
                {
                    //Swap
                    int temp = array[i];
                    array[i] = array[i + gap];
                    array[i + gap] = temp;
                    swapped = true;
                }
            }
        }
    }

    public static void Main()
    {
        int[] array = { 10, 28, 1, 55, 6, 21, 36, 3, 45, 15, 0 };
        Console.WriteLine("Comb Sort");
        CommonFunctions.PrintInitial(array);
        Sort(array);
        CommonFunctions.PrintFinal(array);
        Console.ReadLine();
    }
}
```

The worst case of *O(n²)* makes intuitive sense, at least to me. Given that comb sort is based on bubble sort, we would expect poor time complexities for it. But the average case is interesting because it introduces a form of notation I hadn't seen before: big-omega.

Remember that the average time complexity for this algorithm is *Ω(n²/2^p)*, where *p* is the number of increments (i.e. distinct gap sizes). This is read aloud as "big-omega of n-squared over 2 to the power of p".

This is not a great time, and because big-omega is a lower bound (meaning the average case can do no better than this), we can conclude that Comb Sort is not an efficient sorting algorithm.

Comb Sort is a relatively-inefficient algorithm that aims to sort a collection by comparing two elements across *h* distance, swapping them if they are out of order, and then slowly shrinking that distance. He's hoping he can get his academic life in order before his athletic skills run out. Slowly, surely, he's making that dream come true, one test at a time.

Don't forget to check out the sample project over on GitHub!

Happy Coding!
