We have an ongoing debate in my team concerning what exactly a code review covers, and I am once again asking for my readers' help in deciding whether I am crazy or not.
Here's the setup: we are a five-person development team, responsible for 2 major projects and 15 minor ones. I'm the team lead, which means it is my job to conduct code reviews (except for my own code, which I delegate to my teammates). Once we are done with a code review, we notify the QA department and file a ticket so they can test the changes we made.
Recently we had a bug which both QA and I missed. Said bug ended up being pretty major, and it forced a rollback of one of our larger projects, which caused some problems for our consumers. Because we have a Continuous Integration/Continuous Deployment process in place, the overall impact of the rollback was minimized, but it was still rather embarrassing. When an app has 20k users, a bug like this is bound to get noticed by at least a few of them.
Unfortunately (or maybe fortunately), it was noticed by a few particularly loud users.
These loud users promptly complained to my boss's boss, who notified my boss, who notified me. My boss's boss (our director) was terribly concerned; from his perspective, this particular bug was catchable and should have been caught in either my code review or QA's testing. To be fair, he's right. But he stressed that I should have found this particular issue during code review, and that rebuke flies in the face of how I've been working so far.
Here's the real problem: this particular bug has to do with how the system functions. As a rule, during code review I do not run the code except to do simple sanity-checks (e.g. ensuring that I can log in, get to the appropriate page, and so on). This is primarily because my five-person team is in charge of 15+ projects and I do not have the time to be an expert on every feature and every bug in these apps. So, I divide responsibilities this way:
During code review, I:
- Check for team standards (naming, syntax, etc.)
- Check for the existence of major pieces (e.g. particular references, dependencies, etc.)
- Confirm that test instructions exist and are understandable.
During QA testing, I expect them to:
- Confirm that the code actually works as intended, and fail the ticket if it does not.
Except now a bunch of people in my company, not just the director, are saying that code review needs to also ensure that the code works as intended. This is probably because the QA team is even more limited than my team is (we have around 25 QA testers for 120+ developers). Still, I find this rather redundant; if QA is testing to confirm that the code works, why does my code review also need to do that?
(Automated tests do help this problem, to an extent. For the newer projects they help a lot. But the older projects don't have them, and the time needed to implement them is more than we have at the moment.)
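To make the automated-tests point concrete, here's a minimal sketch of the kind of behavioral test that catches "does the code actually work?" regressions without anyone running the app by hand. The function and its rules are entirely hypothetical stand-ins (our actual projects aren't Python), but the shape is what matters: the test encodes intended behavior, so neither the reviewer nor QA has to re-verify it manually.

```python
import unittest

# Hypothetical example: a discount calculator standing in for the kind of
# behavioral logic that can slip past both a checklist-style review and
# an overloaded QA team.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, clamping percent to [0, 100]."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 20.0), 80.0)

    def test_oversized_discount_is_clamped(self):
        # A regression here is exactly the "works as intended?" failure
        # that a standards-and-structure review won't surface.
        self.assertEqual(apply_discount(100.0, 150.0), 0.0)

    def test_negative_discount_is_ignored(self):
        self.assertEqual(apply_discount(50.0, -10.0), 50.0)

# Run with: python -m unittest <this_module>
```

Once a test like this exists, it runs on every CI build, which is why the newer projects (which have them) give us far fewer of these surprises than the older ones (which don't).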
So, I need your help, dear readers. Am I crazy? I don't think code review should include manual testing, i.e. ensuring that the code works as intended from a user's perspective. In my opinion, that's QA's job. But am I wrong?
I need your opinions on this question: should standard code reviews also test that the code works?
Even more than that: whether I'm right or wrong, what are some arguments I can use to back up whichever side is right? I'll need stories or facts to reinforce my position when I take this to my bosses, and the more I can gather the better off we'll be.
Let me know what you think in the comments below! I'll compile the answers into another post and, hopefully, have some insight as to why code reviews should or should not include manual testing.
Hello Dear Reader! Want to get the best C#, ASP.NET, web tech, tips, and stories anywhere on the Web? Sign up to receive my premium newsletter The Catch Block! Each Wednesday, you'll get the best reads, job listings, stories, tips, and news from around the ASP.NET and C# worlds. All for only $5/month or $50/year! Become a subscriber today!