Eight Tips For Your Programming Team's Standup Meetings

As my organization has gone further down the Agile project-management path (away from our original lean-waterfall process), one of the things we've started doing is daily standup meetings. These are short (15 minutes or less) meetings in which each team member reports what they've accomplished recently and what they plan to do today. They've been a fantastic tool for keeping our team on track and on time, and I'm rapidly becoming convinced that they're going to be (if they aren't already) an essential practice for modern software development teams.

A development team conducting a standup meeting.

Standup meetings are fast, directed conversations between team members in which everyone updates everyone else on their own status and what they are working on. My team has reached the point where we are fully comfortable conducting our standup meetings, so I thought I'd share some of the tips we've discovered for running your standups quickly and efficiently.

1. Keep it short

I cannot stress this enough: developers are busy people, and we don't like being interrupted. If a "standup" meeting is longer than 15 minutes, it is not a standup meeting. My group (six developers including me, plus a manager) aims for our meetings to be seven minutes or less. Keep the meeting short so everyone can get back to work!

2. Do it every day

Yes, every day. Even days when half the office is out on vacation and there are pressing bugs that need to be fixed right NOW. If there are people working on that day, you should do the standup meeting.

3. Use a standard template

My team's template looks like this:

  1. Here's what I accomplished yesterday (including task numbers, bug reports, etc.).
  2. Here's what I'm planning to do today (including task numbers, work requests, etc.).
  3. Blocks (things I cannot accomplish without someone else's help).
  4. Lingers (things I am waiting on someone else to do, but which are not impeding my work).
  5. Task status (e.g. whether or not our TFS tasks are up-to-date).

4. Don't allow the conversation to drift

Only pertinent work topics are allowed. If something outside your team's template needs to be discussed, it should be discussed outside the standup.

5. Everybody gets a turn

This is important because, after all, we're a team, not a group of individuals. We're only as good as the least of us.

6. Meet at the same time every day

Pick a time when everyone is expected to be working. Late morning, just before lunch, works particularly well for my team.

7. Have a designated meeting leader

In our company's case, it's the team lead (in my group, that's me) who directs these meetings. That means it falls directly on my shoulders to ensure that the meeting is short, effective, and gets everyone involved. This is critical because whoever this person is (and it does not have to be the team lead), they are responsible for ensuring that the meeting interrupts everyone as little as possible.

8. You don't have to actually stand up

My group does our standups over Slack, because many times people just aren't in the office but are still working (e.g. on work-from-home days).

The point of having a standup meeting is to prevent those soul-sucking, hour-long meetings where everything gets talked about but nothing gets done. Such meetings happen when people don't know what other people are doing, so managers or team leads call meetings to discuss who's doing what. Inevitably, these meetings get derailed: in the three weeks (or longer!) since the last one, more has happened than can be discussed in an hour, so the meeting takes three hours and nothing gets resolved. Those kinds of meetings are a drain on resources and team morale, and should be dragged out into the street and shot. Standups are a way for everyone to explain what they've done and what they need to do, and to get the team talking to one another, so that you don't need the soul-sucking meetings in the first place.

In short, standup meetings should be quick, directed, and done. That way, everyone can get back to what they want to be doing in the first place: programming!

The standup meeting tips I've listed above are what works for my team, at my company, but our way is not the only way to run effective standups. What tips do you have for your team's standup meetings? I'd love to hear them, so let me know in the comments!

Happy Coding Standups!

Image is Equipe durante um stand up meeting (daily meeting), used under license

The Sublime Joy of Continuous Integration and Continuous Delivery (CI/CD)

Once in a while a new process comes along and blows your friggin' mind. That's what's been happening with me and my team recently, now that our organization has finally implemented Continuous Integration (CI) and Continuous Delivery (CD) on a large scale. These two processes have enabled our business to merge and deploy changes to production much more quickly than we could before.

In short, now that we have CI/CD, I cannot even fathom how we got any work done and delivered without it.

What are CI and CD?

Let's use ThoughtWorks's definition of Continuous Integration:

"Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early."

By its very nature, CI tends to catch small problems before they become big problems, by forcing the developers to merge their small changes together and making them sort out any issues that arise.

One major reason you would want to use a CI process is to enable the Continuous Delivery (CD) process, which has a great definition over on puppet.com:

"Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing."

The only way CD works is if you can test changes in a production-like environment, and so CD processes often introduce a "staging" environment in which changes can be tested. Further, because the deployment and testing are automated, you can be assured that once your changes are deployed to production they will just work.

Our Team's Process

Prior to this, we'd been doing what I believe a lot of organizations are still doing: manual releases. That process generally went like this:

  1. Software developers check in their (hopefully working) code to source control.
  2. Developers then make a release package, which contains all of the code that they need to deploy to the staging environment.
  3. Developers contact our server group to make the staging deployment.
  4. Server group makes the deployment, tells the developers to check.
  5. Developers check, confirm that it looks good, notify their managers.
  6. Managers tell the stakeholders (i.e. the people who care that this code gets deployed) to check out the system in staging, connected to test data.
  7. Stakeholders confirm that the system looks good.
  8. Developers make a second release package, this time for the QA environment.
  9. Server team moves system to QA.
  10. QA engineers test the changes, then notify developers and server team if the tests pass.
  11. Developers make yet another release package, this time for production.
  12. Developers contact server group, server group schedules a deployment for sometime off-hours (e.g. late at night).
  13. Once system is deployed, developers and server group confirm that it looks good.
  14. If anything bad happens, server team rolls back to the previous deployment.

Whew! That's a lot of steps, and at any time during that process something bad could happen and we might end up needing to start the whole thing over again. It clearly wasn't an ideal process.

But Jerry (who pointed out to me that our organization at the time was not agile like we thought, but rather was running a lean-waterfall process) and his team finally finished developing our company's CI/CD process, and is it ever a joy to use. Now our deployment process looks like this:

  1. Developers check in their (hopefully working) code.
  2. Said code gets automatically deployed to a development environment.
  3. Approvals are needed from the developer and his/her manager to deploy to the QA environment.
  4. Once deployed to QA, the QA engineers test the system.
  5. Once the tests pass, the exact same code gets approval and is deployed to production.
  6. Once on production, the stakeholders are notified.

From 14 steps down to 6, and most of the process is automated. That's a huge improvement, and I can't even begin to calculate the total time it has saved me and my team, just from a development standpoint.

Is It More Work?

Yes, but only in the beginning. Let's be clear: there is some extra work involved in setting this up.

First of all, your organization needs some extra physical infrastructure (e.g. build servers) that you should probably have anyway, though not everyone does.

Secondly, somebody (most likely the developers) has to take care of things like configuration and transforms (in our particular case, because we do everything in ASP.NET, we developers end up handling the web.config transforms), but your developers are probably already doing that work, just not in a unified manner.

Third, depending on how complicated your CI/CD process is, it might take some time to teach your developers and server admins the proper way to implement CI/CD, but that's time well spent and we need more teachers anyway.

Fourth, good, comprehensive testing becomes critically important with these processes. Failing tests cause a stop in delivery, and so writing valuable tests is now a requirement more than an option when using CI/CD.
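To make that concrete, here's a minimal sketch of the kind of test that gates a pipeline. The PriceCalculator class and the choice of xUnit are illustrative assumptions on my part, not part of our actual process:

using Xunit;

// A hypothetical piece of business logic worth protecting.
public static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

public class PriceCalculatorTests
{
    // If this assertion ever fails, the automated build fails, and the
    // pipeline stops before the change reaches QA or production.
    [Fact]
    public void ApplyDiscount_TakesPercentageOffThePrice()
    {
        Assert.Equal(90m, PriceCalculator.ApplyDiscount(100m, 10m));
    }
}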

Those obstacles are temporary, even if they are difficult, and once they're set up, Continuous Integration and Continuous Delivery are truly a joy to use. They have saved me more time in the first month of our CI/CD process being operational than I care to admit.

Developers of the world: if you're not already on a CI/CD process, start bugging your bosses and coworkers about setting one up. It's becoming (if it isn't already) an integral part of running a modern software development shop, and it takes what was once a time-consuming manual process (deploying an app to production) and dramatically speeds it up. There will come a time when Continuous Integration and Continuous Delivery are no longer optional, and that time is rapidly approaching.

Are your teams using Continuous Integration or Continuous Delivery? If so, how do you like your versions of those processes? Is there anyone out there who is dissatisfied with how their CI/CD process works? Sound off in the comments!

Happy Coding!

Page image is Baggage Claim Haneda 2nd Terminal, found on Wikimedia and used under license.

Creating a Post Archive with the Ghost API and jQuery

I've long been missing an important feature in Ghost, my blog publishing platform: there's no built-in way to create a post archive, a list of all my posts in one place. I've gotten several requests for this feature, so I finally decided to just sit down and develop it using the Ghost Public API and a tiny bit of jQuery.

What follows is how I built my post archive the first time around. I have since replaced it with a Ghost-generated structure using the #get helper to solve some caching issues, but I like the look of the jQuery-based system below better. Please note that this post was written against Version 0.11.4 of Ghost, so the details may change as Ghost changes.

Using jQuery and Ghost

Setting Up the Ghost Page Template

The first problem I had was that this page (the archive page) was not going to be a post, nor a static page using my theme's default template. It was going to be a page, but it needed its own very stripped-down template.

Ghost supports custom page templates: any .hbs file in the theme's root folder that is prefixed with "page-" becomes a template for the page with the matching URL. So, in my Ghost theme's root folder, I now have a file called page-all-posts.hbs, which looks like the following:

{{!< default}}

{{! This is a page template. A page outputs content just like any other post, and has all the same
    attributes by default, but you can also customise it to behave differently if you prefer. }}
{{#post}}
    <header class="post-header">
        <h1 class="post-title">{{title}}</h1>
    </header>
    <section class="post-content">
        {{content}}    
        <div id="postLoading">
            <h3>Loading post archive...</h3>
        </div>
        <div id="postList"></div>
    </section>
{{/post}}

The template is pretty darn simple. It has the "normal" Ghost stuff, like the {{#post}} tag, as well as the page title and content. The difference is what happens at the end: the div "postList" is where we will populate the list of all posts.

However, just having the template doesn't help us; I also needed to create a static page whose URL matches the template file's suffix (the part after "page-"). In my Ghost admin window, I created a new post, marked it as a static page, and gave it the URL "all-posts".

With that template and page set up, we can now write the jQuery to get us the actual posts.

Querying for All Posts

The first thing we have to do in order to display all posts is to get all the posts. I've already blogged about something like this when I implemented the 5 random posts sidebar, and so this solution will be very similar.

Ghost exposes a public API that can be used to query posts, users, and tags. In this case, I only want posts and I specifically want all my posts, so my query is very easy:

$(document).ready(function () {
    $.get(
        ghost.url.api('posts', {limit: 'all'})
    ).done(onSuccess);
});

The onSuccess method is really just a pass-through to another method, called showArchive:

function onSuccess(data) {  
    showArchive(data.posts);
    .... //Here's the code that does the 5 random posts sidebar
}

The real tricks begin when we implement the showArchive method.

Displaying the Posts

When we query for posts using the Ghost API, the posts object which is returned looks like this (I have simplified this schema):

"posts":[{
    "id":1,
    "title":"Welcome to Ghost",
    "slug":"welcome-to-ghost",
    "markdown":"...",
    "html":"...",
    "status":"published",
    "created_at":"2014-11-17T19:02:27.147Z",
    "created_by":1,
    "updated_at":"2014-11-17T19:02:27.147Z",
    "updated_by":1,
    "published_at":"2014-11-17T19:02:27.173Z",
    "published_by":1,
    "author":1,
    "url":"/welcome-to-ghost/"
  }, {
    "id":2,
    "uuid":"ac0a0374-a43c-15c4-391b-128d6bbba7c5",
    "title":"Lorem Ipsum Dolor",
    "slug":"lorem-ipsum-dolor",
    "markdown":"...",
    "html":"...",
    "status":"published",
    "created_at":"2014-11-18T19:02:27.147Z",
    "created_by":1,
    "updated_at":"2014-11-18T19:02:27.147Z",
    "updated_by":1,
    "published_at":"2014-11-18T19:02:27.173Z",
    "published_by":1,
    "author":1,
    "url":"/lorem-ipsum-dolor/"
  }],

From this, I can use the published_at property to get the month and year each post was published.

NOTE: The query we executed earlier already loads the posts in most-recent-first order, so we don't need to do any further ordering.

So, here's the outline of what we need the showArchive function to do:

  1. For each post, get the month and year that post was published.
  2. Each time the month or year changes, output a new subheader for that month and year.
  3. For each post, output a link to that post.

Here's the complete function:

function showArchive(posts) {  
    var monthNames = ["January", "February", "March", "April", "May", "June",
      "July", "August", "September", "October", "November", "December"
    ];
    var currentMonth = -1;
    var currentYear = -1;
    if(window.location.pathname == "/your-archive-page-url/"){ //Only display on this page
        $(posts).each(function(index,value){ //For each post 
            var datePublished = new Date(value.published_at); //Convert the string to a JS Date
            var postMonth = datePublished.getMonth();  //Get the month (as an integer)
            var postYear = datePublished.getFullYear(); //Get the 4-digit year

            if(postMonth != currentMonth || postYear != currentYear)
            { //If the current post's month and year are not the current stored month and year, set them
                currentMonth = postMonth;
                currentYear = postYear;
                //Then show a month/year header
                $("#postList").append("<br><span><strong>" + monthNames[currentMonth] + " " + currentYear + "</strong></span><br>");
            }
            //For every post, display a link.
            $("#postList").append("<span><a href='" + value.url +"'>" + value.title + "</a></span><br>");
        });
    }
}

I fully realize that this is not the best HTML (or JavaScript, really), but it suits my purposes for now. Here's a screenshot of what this looks like on my blog:

Woohoo! I've got a working solution to show all my posts! Which would be great...except this isn't what I'm actually using now.

Using Ghost Only

The problem is that I use CloudFlare on this blog, and CloudFlare caches this page before the script has a chance to run. This results in an empty page, which is obviously not what I want. So, instead, I ended up going with a native Ghost solution, which looks something like this:

<div id="postList">  
    {{#get "posts" limit="all"}}
        {{#foreach posts}}
            <span>
                {{#if featured}}
                    <span class='fa fa-star'></span>
                {{/if}}
                {{date published_at format="MMM DD, YYYY"}}:&nbsp;  
                <a href="{{url}}">{{title}} </a>
            </span>
            <br>
        {{/foreach}}
    {{/get}}
</div>  

Here's a little breakdown of what this does:

  • The {{get}} helper gets me all my posts.
  • Within the {{get}} context, the {{foreach}} helper loops through each post.
  • Within the {{foreach}} context, the {{if}} helper enables me to check if a particular post is featured, and if so, output a star icon.
  • The {{date}} helper allows me to get the post's publishing date and format it.
  • Finally, the {{url}} helper and the {{title}} helper output the post's URL and title respectively.

Here's how the native Ghost solution looks:

The major downside to the native Ghost solution is that I no longer have the month and year section headers, something I would rather like to have. But, the page is no longer cached by CloudFlare, so it works. Further, this solution is much cleaner than the jQuery solution.

At any rate, now I've got a working page that lists all my blog posts! Check it out!

Summary

Please let me know if you found this post useful, and share other interesting tips about Ghost in the comments! Check out the Ghost documentation to learn about the other helpers they have available.

Happy Coding!

Mapping DataTables and DataRows to Objects in C# and .NET

My group regularly uses DataSet, DataTable, and DataRow objects in many of our apps.

(What? Don't look at me like that. These apps are old.)

Anyway, we're simultaneously trying to apply good C# and object-oriented programming principles while maintaining these old apps, so we often end up having to map data from a data set to a C# object. We did this enough times that a coworker (we'll call her Marlena) and I decided to sit down and build a reusable mapping system for these DataTable and DataRow objects.

As always with my code-based posts, there's a GitHub project with a full working example app, so check that out too!

One Jump Ahead

So here's the basic problem with mapping from DataSet, DataTable, and DataRow objects: we don't know at compile time which columns and tables exist in the set, so mapping solutions like AutoMapper won't work for this scenario. Our mapping system will have to assume which columns might exist. But, to make it more reusable, the mapping system will leave a property at its default value whenever it cannot locate a matching column.

There's also another, more complex problem: the databases we are acquiring our data from use many different column names to represent the same data. Twenty years of different maintainers and little in the way of cohesive naming standards will do that to a database. So, if we needed a person's first name, the different databases might use:

  • first_name
  • firstName
  • fname
  • name_first

This, as might be imagined, makes mapping anything rather difficult. So our system will also need to be able to map from many different column names.

Finally, this system wouldn't be worth much if it couldn't handle collections of objects as well as single objects, so we'll need to allow for that as well.

So, in short, our system needs to:

  1. Map from DataTable and DataRow to objects.
  2. Map from multiple different column names.
  3. Handle mapping to a collection of objects as well as a single object.

We'll need several pieces to accomplish this. But before we can even start building the mapping system, we must first acquire some sample data.

Mine, Mine, Mine

We're going to create some DataSet objects that we can test our system against. In the real world, you would use an actual database, but here (for simplicity's sake) we're just going to manually create some DataSet objects. Here's a sample class which will create two DataSet objects, Priests and Ranchers, each of which use different column names for the same data:

public static class DataSetGenerator  
{
    public static DataSet Priests()
    {
        DataTable priestsDataTable = new DataTable();
        priestsDataTable.Columns.Add(new DataColumn()
        {
            ColumnName = "first_name",
            DataType = typeof(string)
        });
        priestsDataTable.Columns.Add(new DataColumn()
        {
            ColumnName = "last_name",
            DataType = typeof(string)
        });
        priestsDataTable.Columns.Add(new DataColumn()
        {
            ColumnName = "dob",
            DataType = typeof(DateTime)
        });
        priestsDataTable.Columns.Add(new DataColumn()
        {
            ColumnName = "job_title",
            DataType = typeof(string)
        });
        priestsDataTable.Columns.Add(new DataColumn()
        {
            ColumnName = "taken_name",
            DataType = typeof(string)
        });
        priestsDataTable.Columns.Add(new DataColumn()
        {
            ColumnName = "is_american",
            DataType = typeof(string)
        });

        priestsDataTable.Rows.Add(new object[] { "Lenny", "Belardo", new DateTime(1971, 3, 24), "Pontiff", "Pius XIII", "yes" });
        priestsDataTable.Rows.Add(new object[] { "Angelo", "Voiello", new DateTime(1952, 11, 18), "Cardinal Secretary of State", "", "no" });
        priestsDataTable.Rows.Add(new object[] { "Michael", "Spencer", new DateTime(1942, 5, 12), "Archbishop of New York", "", "yes" });
        priestsDataTable.Rows.Add(new object[] { "Sofia", "(Unknown)", new DateTime(1974, 7, 2), "Director of Marketing", "", "no" });
        priestsDataTable.Rows.Add(new object[] { "Bernardo", "Gutierrez", new DateTime(1966, 9, 16), "Master of Ceremonies", "", "no" });

        DataSet priestsDataSet = new DataSet();
        priestsDataSet.Tables.Add(priestsDataTable);

        return priestsDataSet;
    }

    public static DataSet Ranchers()
    {
        DataTable ranchersTable = new DataTable();
        ranchersTable.Columns.Add(new DataColumn()
        {
            ColumnName = "firstName",
            DataType = typeof(string)
        });
        ranchersTable.Columns.Add(new DataColumn()
        {
            ColumnName = "lastName",
            DataType = typeof(string)
        });
        ranchersTable.Columns.Add(new DataColumn()
        {
            ColumnName = "dateOfBirth",
            DataType = typeof(DateTime)
        });
        ranchersTable.Columns.Add(new DataColumn()
        {
            ColumnName = "jobTitle",
            DataType = typeof(string)
        });
        ranchersTable.Columns.Add(new DataColumn()
        {
            ColumnName = "nickName",
            DataType = typeof(string)
        });
        ranchersTable.Columns.Add(new DataColumn()
        {
            ColumnName = "isAmerican",
            DataType = typeof(string)
        });

        ranchersTable.Rows.Add(new object[] { "Colt", "Bennett", new DateTime(1987, 1, 15), "Ranchhand", "", "y" });
        ranchersTable.Rows.Add(new object[] { "Jameson", "Bennett", new DateTime(1984, 10, 10), "Ranchhand", "Rooster", "y" });
        ranchersTable.Rows.Add(new object[] { "Beau", "Bennett", new DateTime(1944, 8, 9), "Rancher", "", "y" });
        ranchersTable.Rows.Add(new object[] { "Margaret", "Bennett", new DateTime(1974, 7, 2), "Bar Owner", "Maggie", "y" });
        ranchersTable.Rows.Add(new object[] { "Abigail", "Phillips", new DateTime(1987, 4, 24), "Teacher", "Abby", "y" });

        DataSet ranchersDataSet = new DataSet();
        ranchersDataSet.Tables.Add(ranchersTable);

        return ranchersDataSet;
    }
}

We'll test our system against this sample data.

Something There

Now we can build our actual mapping solution. First off, we need a way to declare which column names map to which object properties. It was Marlena's idea to keep those two things together, and so we came up with a class called DataNamesAttribute that looks like this:

[AttributeUsage(AttributeTargets.Property)]
public class DataNamesAttribute : Attribute  
{
    protected List<string> _valueNames { get; set; }

    public List<string> ValueNames
    {
        get
        {
            return _valueNames;
        }
        set
        {
            _valueNames = value;
        }
    }

    public DataNamesAttribute()
    {
        _valueNames = new List<string>();
    }

    public DataNamesAttribute(params string[] valueNames)
    {
        _valueNames = valueNames.ToList();
    }
}

This attribute can then be used (in fact, can only be used, due to the AttributeUsage(AttributeTargets.Property) declaration) on properties of other classes. Let's say we're going to map to a Person class. We would use DataNamesAttribute like so:

public class Person  
{
    [DataNames("first_name", "firstName")]
    public string FirstName { get; set; }

    [DataNames("last_name", "lastName")]
    public string LastName { get; set; }

    [DataNames("dob", "dateOfBirth")]
    public DateTime DateOfBirth { get; set; }

    [DataNames("job_title", "jobTitle")]
    public string JobTitle { get; set; }

    [DataNames("taken_name", "nickName")]
    public string TakenName { get; set; }

    [DataNames("is_american", "isAmerican")]
    public bool IsAmerican { get; set; }
}

Now that we know where the data needs to end up, let's start mapping out the mapper (heh).

Reflection

Our mapper class will be generic so that we can map from DataTable or DataRow objects to any kind of object. We'll need two methods: one to map a single DataRow to a TEntity, and one to map an entire DataTable to a collection of TEntity:

public class DataNamesMapper<TEntity> where TEntity : class, new()  
{
    public TEntity Map(DataRow row) { ... }
    public IEnumerable<TEntity> Map(DataTable table) { ... }
}

Let's start with the Map(DataRow row) method. We need to do three things:

  1. Figure out what columns exist in this row.
  2. Determine whether the TEntity we are mapping to has any properties whose Data Names match any of those columns.
  3. Map the values from the DataRow to the TEntity.

Here's how we do this, using just a bit of reflection:

public TEntity Map(DataRow row)  
{
    //Step 1 - Get the Column Names
    var columnNames = row.Table.Columns
                               .Cast<DataColumn>()
                               .Select(x => x.ColumnName)
                               .ToList();

    //Step 2 - Get the Property Data Names
    var properties = (typeof(TEntity)).GetProperties()
                                      .Where(x => x.GetCustomAttributes(typeof(DataNamesAttribute), true).Any())
                                      .ToList();

    //Step 3 - Map the data
    TEntity entity = new TEntity();
    foreach (var prop in properties)
    {
        PropertyMapHelper.Map(typeof(TEntity), row, prop, entity);
    }

    return entity;
}

Of course, we also need to handle the other method, the one where we can get a collection of TEntity:

public IEnumerable<TEntity> Map(DataTable table)  
{
    //Step 1 - Get the Column Names
    var columnNames = table.Columns.Cast<DataColumn>().Select(x => x.ColumnName).ToList();

    //Step 2 - Get the Property Data Names
    var properties = (typeof(TEntity)).GetProperties()
                                        .Where(x => x.GetCustomAttributes(typeof(DataNamesAttribute), true).Any())
                                        .ToList();

    //Step 3 - Map the data
    List<TEntity> entities = new List<TEntity>();
    foreach (DataRow row in table.Rows)
    {
        TEntity entity = new TEntity();
        foreach (var prop in properties)
        {
            PropertyMapHelper.Map(typeof(TEntity), row, prop, entity);
        }
        entities.Add(entity);
    }

    return entities;
}

You might be wondering just what the heck the PropertyMapHelper class is. If you are, you might also be about to regret it.

Dig a Little Deeper

The PropertyMapHelper, as suggested by the name, maps values to different primitive types (int, string, DateTime, etc.). Here's that Map() method we saw earlier:

public static void Map(Type type, DataRow row, PropertyInfo prop, object entity)  
{
    List<string> columnNames = AttributeHelper.GetDataNames(type, prop.Name);

    foreach (var columnName in columnNames)
    {
        if (!String.IsNullOrWhiteSpace(columnName) && row.Table.Columns.Contains(columnName))
        {
            var propertyValue = row[columnName];
            if (propertyValue != DBNull.Value)
            {
                ParsePrimitive(prop, entity, row[columnName]);
                break;
            }
        }
    }
}

There are two pieces in this method that we haven't defined yet: the AttributeHelper class and the ParsePrimitive() method. AttributeHelper is a rather simple class that merely gets the list of column names from the DataNamesAttribute:

public static List<string> GetDataNames(Type type, string propertyName)  
{
    var property = type.GetProperty(propertyName).GetCustomAttributes(false).Where(x => x.GetType().Name == "DataNamesAttribute").FirstOrDefault();
    if (property != null)
    {
        return ((DataNamesAttribute)property).ValueNames;
    }
    return new List<string>();
}
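For example, given the Person class from earlier, asking this helper for the data names of the FirstName property returns both of its mapped column names:

// Returns the list ["first_name", "firstName"] declared by the DataNames attribute
var names = AttributeHelper.GetDataNames(typeof(Person), "FirstName");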

The other piece we need to define is ParsePrimitive(), which, as its name suggests, parses values into primitive types. Essentially, this method assigns a value to a passed-in property reference (represented by the PropertyInfo class). I'm not going to post the full code in this post (you can see it over on GitHub), so here's a snippet of what this method does:

private static void ParsePrimitive(PropertyInfo prop, object entity, object value)  
{
    if (prop.PropertyType == typeof(string))
    {
        prop.SetValue(entity, value.ToString().Trim(), null);
    }
    else if (prop.PropertyType == typeof(int) || prop.PropertyType == typeof(int?))
    {
        if (value == null)
        {
            prop.SetValue(entity, null, null);
        }
        else
        {
            prop.SetValue(entity, int.Parse(value.ToString()), null);
        }
    }
    ...
}
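The elided portion handles the remaining types in the same fashion. As one more illustration, here's a sketch of my own (the full version on GitHub may differ) of a boolean branch that tolerates the "yes"/"y" strings used in our sample data:

else if (prop.PropertyType == typeof(bool) || prop.PropertyType == typeof(bool?))
{
    if (value == null)
    {
        prop.SetValue(entity, null, null);
    }
    else
    {
        //Treat "true", "yes", "y", and "1" (in any casing) as true
        var text = value.ToString().Trim().ToLowerInvariant();
        prop.SetValue(entity, text == "true" || text == "yes" || text == "y" || text == "1", null);
    }
}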

That's the bottom of the rabbit hole, as it were. Now, we can use the DataSet objects we created earlier and our mapping system to see if we can map this data correctly.

Two Worlds

Here's a quick program that can test our new mapping system:

class Program  
{
    static void Main(string[] args)
    {
        var priestsDataSet = DataSetGenerator.Priests();
        DataNamesMapper<Person> mapper = new DataNamesMapper<Person>();
        List<Person> persons = mapper.Map(priestsDataSet.Tables[0]).ToList();

        var ranchersDataSet = DataSetGenerator.Ranchers();
        persons.AddRange(mapper.Map(ranchersDataSet.Tables[0]));

        foreach (var person in persons)
        {
            Console.WriteLine("First Name: " + person.FirstName + ", Last Name: " + person.LastName
                                + ", Date of Birth: " + person.DateOfBirth.ToShortDateString()
                                + ", Job Title: " + person.JobTitle + ", Nickname: " + person.TakenName
                                + ", Is American: " + person.IsAmerican);
        }

        Console.ReadLine();
    }
}

When we run this app (which you can do too), we will get the following output:

Which is exactly what we want!

(I mean, really, did you expect me to blog about something that didn't work?)

Go the Distance

It concerns me that this system is overly complicated, and I'd happily take suggestions on how to make it more straightforward. While I do like that all we need to do is place the DataNamesAttribute on the correct properties and then call an instance of DataNamesMapper<T>, I feel like the whole thing could be easier somehow. Believe it or not, this version is actually simpler than the one we're using in our internal apps.

Also, check out the sample project over on GitHub, fork it, test it, whatever. If it helped you out, or if you can improve it, let me know in the comments!

Finally, extra special bonus points will go to anyone who can figure out a) what the hell those odd section titles are about and b) where I got the sample data from.

Happy Coding!