A few weeks ago I posed a question in my Opinion Time! series that got quite a lot of conflicting opinions. The question itself was simple enough: should code reviews include some kind of manual testing? I figured this would be pretty straightforward, and that a clear and obvious winner would emerge on whether or not this was a good idea.
Hoo boy, was I wrong.
The opinions I got ran the gamut, from "nope, that's unnecessary" to "well, maybe a little" to "abso-friggin-lutely." I've compiled some of the best ones in this post in the hope that someone (maybe even future me) will stumble across them and be reminded that there are always many ways to accomplish the same task.
Just not the right way. :)
Nope, That's Unnecessary
There were very few people who came down firmly on one side or the other of this debate. Reader gauiis was one of them.
"Definitely not. Code Reviewer should be able to review the code/diff online without having to run it."
At least one person acknowledged that their team would like to do better, but the amount of effort they needed to put into doing it was too much to handle:
"We've had similar discussions, not so much due to bugs getting past QA, but because it's a lot more efficient to catch clear and obvious bugs before they get deployed for QA to test, and avoid the whole cycle of logging a defect, fixing it, redeploying, retesting...
"Our not-very-helpful conclusion was that yes, code reviewers should confirm that the change actually works as intended, but no, the developers didn't want to spend their time doing that. So it's still at an impasse."
Reader cocowalla put it a different way: if you're spending time doing manual testing, you are taking time away from doing other tasks that also need to be done. In short...
"No - you can't do everything, you must delegate.
"I've had to do this before when working with... well, let's say 'less meticulous' devs, and it's simply not possible - to properly perform manual testing takes time, so you'll have to steal from other tasks, or work overtime."
Yep, Do It!
Quite a few other people came down on the side of doing testing, at least cursory testing to ensure the "happy path" works.
"Personally, i feel doing QA during code reviews is redundant. The developer should be smoking the application to make sure at the very least the happy path works."
The most upvoted comment on the post was submitted by Marius Myburg, and he made some very good points:
"Code review is just that; code review. It is another developer checking your code changes, pretty briefly but still checking it, to make sure you did not miss something obvious. So code review should not include manual testing.
"BUT - before you even get to code review, you should have tested your code changes functionally. You can not ever, except for the simplest of changes (like a wording change on A UI for example), make a code change and not do your basic check - which is running the code to ensure that it basically works. It is not your job to do QA's work, meaning you are not responsible for full, detailed testing of the app or at least the affected parts of the app. But it is your job to do at least a surface level test of your change in the running program."
Doing The Right Review
Somewhat surprisingly (at least to me), quite a lot of commenters pointed out that maybe, just maybe, we were doing the wrong kind of review in the first place.
"Since we've got a separate environment for developers, we decided as a team, that the reviewer will do the smoke tests of the task he's checking, before passing it to QA to test/staging environment. It'll put less work on QA, which in many cases was just the same as it's done on dev environment and will also allow to find potential bugs faster, 'closer' to developers."
"Why not send the code review to all the developers on the project?
"Junior devs might pick up new tricks while mid/senior devs might catch potential issues.
"Having just 1 person aka the lead do code reviews is probably not very efficient."
Paul Curran (rightly) pointed out that expecting the team lead to do all the reviews creates an artificial bottleneck, one that doesn't need to exist.
"The flaw is in expecting a team lead to do a code review; the flaw being that a single person is the dependency on checking and validating code. ...By creating an environment where the code reviews are undertaken by the whole team there is a better chance that different views by multiple people would throw up better chances of spotting chances."
What Do You Think?
The community has spoken, and the result is a resounding... maybe!
In all seriousness, most people seemed to fall on the side of "some testing should be done" and then disagreed as to how to get more thorough testing done (automated vs manual). I'm of this opinion as well, and I probably didn't make it clear in my original post that yes, my developers do test their own changes and yes, I do run the code in the case of more complicated code reviews.
Unfortunately for you, dear readers, this means that you'll have to come up with the opinion that matters, yours, on your own. But, really, isn't that what we're all trying to do anyway?
Let me know if you think there's a side to this argument that didn't get covered here, or just need to speak your mind on why manual testing during code review is a good or bad idea. Share in the comments!
Did you enjoy this post? Then you'll love my premium newsletter The Catch Block! Issues come out every Wednesday, and they have the best links to quality stories from all around the ASP.NET and web programming worlds. Plus, original stories, tips, tutorials and sample code you won't find anywhere else. Even better, it's only $5/month, or $55/year! Check out the previous issues, and then sign up to get The Catch Block today!