(This is the fourth part in a series about working effectively to pass code review quickly and without missed defects.)
In the last post, I discussed how proactively involving your reviewers in your design and implementation process would get your reviews approved faster. The biggest risk of proactivity, though, is that one or more reviewers might disagree with you.
If you don’t manage those disagreements properly, things will come to a head at review time, when your reviewer is free to let loose all their criticisms yet again. Then you’ll reply in kind, and so on.
Time spent going back and forth in a code review is wasted time.
(This is the third part in a series about working effectively to pass code review quickly and without missed defects.)
In the first two parts of this series, we discussed why code reviews need to be small and, failing that, how we can make them seem small by managing extraneous cognitive load. There is a third way to make reviews go faster, and that’s by proactively ensuring your reviewers know a bit about what you’re working on before they review it (managing germane cognitive load).
(This is the second part in a series about working effectively to pass code review quickly and without missed defects.)
In part one, we showed that, all other factors being equal, the shorter the code review, the better. It may seem intuitive that this is the case. After all, barring an entry into a code obfuscation competition, even the worst-written code can still be understood if it’s short.
But you’ve probably also seen code that was easy to understand despite it being many lines long.
Examples of this phenomenon include: a class rename that spans 100 files; a host of new Factory classes; addressing all instances of a specific compiler warning; or new functionality in a system that you originally authored.
These changes seem small even though they actually aren’t. So how do we really quantify "small"?
(This is the first part in a series about working effectively to pass code review quickly and without missed defects. In this series I will use Git terminology with respect to source control and branch management.)
Let me tell you a code reviewer’s horror story. It’s the start of a sprint and Taylor is assigned to develop a spiffy new feature. They excitedly get to work. They carefully review all the requirements, spend a few hours on a whiteboard designing a solution, and then start by writing unit tests. After just 1.5 weeks of the two-week sprint, Taylor has a robust, performant implementation with 100% test coverage.
(shines flashlight under face) Then… they create a pull request into master for the code to be reviewed (frightened screams erupt from code reviewers around the campfire).
Because we have ranges! 🙂
We usually think of an algorithm as a single function with inputs and outputs. Our algorithms textbooks reinforce this notion: they present very concise descriptions that neatly fit in half a page. This is fine until you actually attempt to implement the algorithm as a single function; all the little details add up until you’re left with a gigantic, monolithic function.
That monolithic function lacks readability; it’s so long that you have to scroll for a full minute to get to the end. It’s nested so deeply that you have to scroll horizontally to read it. It’s nigh-impossible to trace the state of a variable that was declared at the top as it mutates every other line.
Because of that, the function also lacks maintainability. Any single-line change has the potential to affect the many lines below it, altering the behavior in unpredictable ways.
Nobody wants to touch this code because it’s such a pain to get any context. There are just too many details. It’s like trying to learn the purpose of an airplane by reading an aerospace engineering textbook.
Complex code requires abstractions. Abstractions help communicate higher level concepts, improving readability and therefore reducing the time to fix bugs or add new features.
A compiler takes code as input and spits out an executable as output. Do you think compilers are implemented as a single monolithic function?
There’s a good chance that your monolithic function should be refactored into one or more classes. It’s okay to implement an algorithm as an object. I encourage it, even.
In this blog post I’ll walk through a handful of code smells indicating that what you’ve really got is a class rather than a function, follow up with examples of code exhibiting those smells, and finally demonstrate how we might refactor the offending functions into classes.
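As a rough taste of that refactor (the problem and every name here are invented for illustration, not examples from the post itself): a monolithic "report the most frequent words in a string" function can become a small class, where the locals it used to share across hundreds of lines become data members and each nested block becomes a named private method.

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

class WordFrequencyReport {
public:
    explicit WordFrequencyReport(std::string text) : text_(std::move(text)) {}

    // The one public entry point -- callers still "just call a function".
    std::vector<std::pair<std::string, int>> topWords(std::size_t n) {
        tokenize();
        count();
        return rank(n);
    }

private:
    void tokenize() {
        std::string word;
        for (char c : text_) {
            if (std::isalpha(static_cast<unsigned char>(c))) {
                word += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
            } else if (!word.empty()) {
                words_.push_back(word);
                word.clear();
            }
        }
        if (!word.empty()) words_.push_back(word);
    }

    void count() {
        for (const auto& w : words_) ++counts_[w];
    }

    std::vector<std::pair<std::string, int>> rank(std::size_t n) const {
        std::vector<std::pair<std::string, int>> ranked(counts_.begin(), counts_.end());
        std::sort(ranked.begin(), ranked.end(),
                  [](const auto& a, const auto& b) { return a.second > b.second; });
        if (ranked.size() > n) ranked.resize(n);
        return ranked;
    }

    // What used to be a tangle of locals shared across hundreds of lines:
    std::string text_;
    std::vector<std::string> words_;
    std::map<std::string, int> counts_;
};

int main() {
    WordFrequencyReport report("the cat sat on the mat; the mat sat still");
    for (const auto& [word, count] : report.topWords(3))
        std::cout << word << ": " << count << '\n';
}
```

Callers still invoke a single entry point, but each step now has a name, its own scope, and its own slice of the state.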
In my previous blog post, I gave a tutorial on the double dispatch pattern. I mentioned that you could reuse the pattern for a variety of things, one of those being I/O. In this blog post, we’ll walk through how we can add serialization support for our class hierarchy without touching the classes themselves.
Okay, you’ve ended up in a situation where you’re either
- using dynamic_cast* to find out the actual type of a pointer, then moving on to do what you really want, or
- checking some enum in a base class to see which derived class it really is so that you can perform type-specific operations on it

* or some other form of run-time type information
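In code, that first situation tends to look something like the following contrived sketch (the Shape hierarchy and the describe function are invented purely for illustration):

```cpp
#include <iostream>

// Contrived base-class hierarchy for illustration.
struct Shape { virtual ~Shape() = default; };
struct Circle : Shape { double radius = 1.0; };
struct Rectangle : Shape { double width = 2.0, height = 3.0; };

// The smell: every place that needs type-specific behavior grows another
// chain of dynamic_casts (or a switch over some kind-enum).
void describe(const Shape& shape) {
    if (const auto* c = dynamic_cast<const Circle*>(&shape)) {
        std::cout << "circle with radius " << c->radius << '\n';
    } else if (const auto* r = dynamic_cast<const Rectangle*>(&shape)) {
        std::cout << "rectangle " << r->width << " x " << r->height << '\n';
    } else {
        std::cout << "unknown shape\n";  // silently wrong once a new type appears
    }
}

int main() {
    Circle c;
    Rectangle r;
    describe(c);
    describe(r);
}
```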
You know something is wrong with this, but you can’t think of a satisfactory solution. You’ve already considered adding another virtual function on the base class, but you’re worried about how many virtual functions are already there, or the impact that this might have on consumers of the base class.
Good on you for recognizing that. Admitting you have a problem is the first step to solving it 🙂
In this tutorial, I’ll talk about one solution to this problem that I’ve had some success with: the double-dispatch Visitor pattern. With it, you can trim down those long if-else if blocks, separate responsibilities into manageable pieces, and even keep your interface more stable.
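As a preview of the mechanics (again using invented shape classes rather than code from a real project), the cast chain above can be replaced with two virtual calls: the shape’s accept dispatches on the concrete shape type, and the visitor’s visit overloads dispatch on the operation.

```cpp
#include <iostream>

class Circle;
class Rectangle;

// One visit() overload per concrete type replaces the if-else if chain.
class ShapeVisitor {
public:
    virtual ~ShapeVisitor() = default;
    virtual void visit(const Circle&) = 0;
    virtual void visit(const Rectangle&) = 0;
};

class Shape {
public:
    virtual ~Shape() = default;
    // First dispatch: the virtual call picks the concrete shape type.
    virtual void accept(ShapeVisitor& visitor) const = 0;
};

class Circle : public Shape {
public:
    double radius = 1.0;
    // Second dispatch: overload resolution picks visit(const Circle&).
    void accept(ShapeVisitor& visitor) const override { visitor.visit(*this); }
};

class Rectangle : public Shape {
public:
    double width = 2.0, height = 3.0;
    void accept(ShapeVisitor& visitor) const override { visitor.visit(*this); }
};

// Each operation lives in its own small class; adding a new operation no
// longer means growing the base class or writing another cast chain.
class Describer : public ShapeVisitor {
public:
    void visit(const Circle& c) override {
        std::cout << "circle with radius " << c.radius << '\n';
    }
    void visit(const Rectangle& r) override {
        std::cout << "rectangle " << r.width << " x " << r.height << '\n';
    }
};

int main() {
    Circle c;
    Rectangle r;
    Describer describer;
    c.accept(describer);
    r.accept(describer);
}
```

Adding a new operation now means adding a new visitor class; the shape classes and their consumers are left alone.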
Often when you’re trying to debug a piece of code, the debugger steps into a block you didn’t expect it to. Most of the time this is because your code has a logic error and isn’t doing what your mental model said it would.
Other times it’s because you accidentally compiled with optimizations on, and the compiler worked some magic that keeps the outcome the same even though the generated code no longer matches what you wrote.
This post talks about a scenario where it’s neither of those things. Instead it’s something much more dangerous and difficult to detect — a violation of the One Definition Rule (ODR).
In this post I’ll talk a little about C++’s One Definition Rule, then discuss how a violation of it manifested itself in a project I was working on, how to detect such violations, and how I resolved mine.
(I talk a lot about violating the ODR in the context of a tool I use, SWIG, but the manner in which I violated it is applicable to any C++ library that links to another.)
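For a flavor of what a violation can look like, here is a deliberately contrived two-file sketch (not the actual code from my project): the same class name gets two different definitions in the same program, the build succeeds without a single diagnostic, and the behavior is undefined.

```cpp
// --- widget_v1.cpp (compiled into the application) --------------------------
struct Widget {                 // one definition of ::Widget
    int id;
};

Widget* makeWidget() {          // exposed elsewhere as: Widget* makeWidget();
    return new Widget{42};
}

// --- widget_v2.cpp (compiled into a library the application links) ----------
struct Widget {                 // a *different* definition with the same name
    double weight;              // different size, different layout
    int id;
};

Widget* makeWidget();           // the shared declaration again

int main() {
    // The linker resolves makeWidget() to the definition above, which built a
    // v1-layout Widget, but this translation unit reads id at the v2 offset.
    // No error, no warning -- just undefined behavior at run time.
    return makeWidget()->id;    // almost certainly not 42
}
```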