80 Characters Is Enough

It’s human nature to break the rules. “Rules are made to be broken,” you must hear at least twice a day during all that time you’re not using to floss your teeth. When the Lord God commanded Adam not to eat the fruit of the tree, all it took was one Talking Snake, and look where we are now, all these years later. And, well, you know what the doctor says about apples…

We’re rule breakers. We break them for fun. We break them for profit. We speed to work. We let our kids eat cotton candy for breakfast. And so on.

But just because we break the rules doesn’t make it acceptable. It doesn’t make the rules dismissible. And it damn well doesn’t make that orange skirt look good with the brown terracotta jacket and the purple ten-gallon hat.

And you know what else doesn’t look good? Long lines of code.

And just because we push character after character onto our lines, until we’ve passed through to the hundreds, right into the thousands, and we’re well on the way to chaining the entire text of The Great Gatsby into a one-liner… That doesn’t make it okay, or godly. No, not unless you’re some sadistic code-god determined to smite the open source community with inline lambdas…

Whether you’re a habitual chainer or not, and whether your line limit is 81 or infinity, I’m here to explain why you need to think about the standard of 80, why it’s there, and why it helps. And here’s the TL;DR:

Line Count Is Meaningless

There’s this mistaken notion that “short code” is somehow better than “long code”, as if you can measure quality by how many lines of code something is, as if fewer lines of code means a faster process, or somehow reduces complexity or increases maintainability. No, line count, in reality, means nothing. Nothing at all. When it comes to code, statements talk, and… well, nothing else does. Whether you jam thirty-seven statements onto one line, or ten, or space them out across ten million, it’s still thirty-seven statements.

Thinking about line count is absurd. And the irony that it comes from people who know better is mind-boggling.

These lengthy lines of code frequently come from people who preach about slimming down our functions. We’re all in agreement that fat functions are bad. So, all together now, let’s take ONE second, un momento, and apply what we know about functions to lines of code.

We like smaller functions because the longer a function is, intrinsically, the more likely it is to change. And this is, in the scientific world, what we call a “double whammy”. Firstly, since the function is fat/long, any change to it is poorly isolated and has too vast a reach. I.e., if you’re an object, and the responsibilities for shit()ing and faceWash()ing are both housed in the same function, and you’re not careful with a change to that function, you just might end up shitting on your face, and, well, that’s probably not what you had in mind, now is it? Secondly, as we just said, because it’s long, changes to it will happen more frequently. The name of the game here is Not Shitting on Your Face, and, as engineers, we try to win it as much as possible. But, with long functions, we lose all the time, shooting ourselves in the foot, so to speak.

Anyway, let’s apply what we know about functions to lines of code. Maybe we’ll discover something.

Any line with a lot going on is more likely to change than a line with not as much going on. And because a lot is going on, there’s a higher likelihood of someone submitting a pull request to shit on your face, or break your code, or introduce a bug. And before you butt in with, “but that’s why we have tests”, let me butt back with, “do tell me about how you never have production bugs”.

Long lines of code are just all around terrible, even more so than long functions, methinks.

A Line Has Meaning

Let’s keep applying what we know to be true about functions to lines of code, because, you know, we have a brain, and there’s this thing I heard about once in a Kurt Vonnegut book called… common sense.

Remember the awful days of functions just being there? Remember how they used to just be repositories for things to happen, with no real meaning?

Yeah, it was awful.

But then, the White Knight swept us off our feet. It told us about Single Responsibility. It said, “Yo homie, a function has meaning. It should do something. It should mean something. And that should only be one thing. Your functions oughta have a single responsibility.”

And this was divine intervention to our cesspits we called functions. After we applied it, code started to make sense. It started to be testable. It started to be readable. The smells went away. And rainbows started appearing.

But then, rebels that we are, we threw everything we learned out the window when we hopped aboard the Lambda-Chain-Train headed straight to Hell. We started jamming ten statements (read: ten responsibilities) onto a single line of code. We started making functions out of lines, and we got right back to where we started, with our functions (read: lines) having multiple responsibilities, and what’s really going on in there being a more carefully kept secret than the reveal of an Agatha Christie thriller-mystery.
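To make that concrete, here’s a hypothetical sketch (the data and variable names are invented for illustration): the same statements, first jammed onto one line, then spaced out one responsibility per line.

```javascript
const users = [{ name: "Ada", age: 36 }, { name: "Bob", age: 17 }];

// The Lambda-Chain-Train: four responsibilities crammed onto one line.
const dense = users.filter(u => u.age >= 18).map(u => u.name.toUpperCase()).sort().join(", ");

// The same four statements, one responsibility per line, each under 80
// characters. Same work, but now every step has a name and a home.
const adults  = users.filter(u => u.age >= 18);
const shouted = adults.map(u => u.name.toUpperCase());
const sorted  = shouted.sort();
const listed  = sorted.join(", ");
```

Both produce the same result. The difference is that a change to one step in the second version touches one line, not the whole train.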

Maybe it’s the suspense driving people wild, or maybe it’s the spirit of competition to see who can write the next longest line of code. Whatever it is, it’s leading to the Fat 2.0 revolution. Because fat isn’t how many lines a function is. It’s how many statements it is. Just because you snuck six hundred statements into two lines of code doesn’t mean you can call it one function, any more than you could call that triple-god-object-in-a-single-function a function.

So let’s stop the revolution right here. Let’s stop bringing fat back. We’re not Justin Timberlake.

Because, Duh

To wrap things up, I’ll just state the obvious.

Have you ever written something like:

var x = new Person();

And thought, “you know, that’s too many lines for that. What would make that good is if I changed it to:”

var x = new Person(); x.setName("Cuzzo");


Why? Because we’ve been writing one statement per line since the beginning of computers. With chaining, we see the ability to take out the extra x, and we should. But keep it on two separate lines, people. It’s two responsibilities.
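If you do want the fluent style, here’s a sketch of how that looks. (The Person internals are an assumption for illustration; the original snippet doesn’t show them, so I’m assuming setters that return this.) You lose the repeated x, but each responsibility still gets its own line:

```javascript
// Hypothetical Person: setters return `this` so calls can chain.
class Person {
  setName(name) { this.name = name; return this; }
  setAge(age)   { this.age = age;   return this; }
}

// The extra `x.` repetition is gone, but one responsibility per line:
var x = new Person()
  .setName("Cuzzo")
  .setAge(30);
```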

We might all have monitors wide enough to watch six high-def Pornhubs at once. More than 80 characters can definitely fit on the screen. So why not use all that space? Because, one, it’s a standard. And, two, if you’re going over 80 characters, you’ve probably done more than one thing wrong. You probably oughta make a function out of that ten-minute side-scrolling platform-adventure-game you call a line, so, you know, I don’t have to beat Bowser just to know what your line is doing.

The next time you’re in the middle of that epic chain, and you’re trying to remember if you just filter()ed or map()ed, take a second to think about me, the dumbass, scratching my head all the way back at the beginning, ten higher-order calls ago, trying to remember what foldRight() does in the first place. Zeesh.