One subject of possibly heated discussion among programmers is the limit on the length of lines of code. Are long lines a problem? Should we impose a limit such as the traditional 80 columns?
Long lines clearly have disadvantages. Many programming environments and tools don’t wrap long lines and instead force you to use horizontal scrollbars to see the full text, which is clumsy and makes it harder to follow the code structure: when you scroll to the right, you lose the ability to see at a glance the nesting level you are in. Visual diff tools, for example, usually present unwrapped lines and split the screen into two columns. Short lines make it much easier to spot the differences.
However, long lines also have advantages. As long as you don’t surpass the column limit of your screen (nowadays usually over 100 columns), they let you write code more naturally and, in fact, can increase the clarity of the code. In addition, the fewer lines a statement takes, the more statements fit on a screen of text. This can make the difference between viewing a whole method or function without scrolling, and thus having a clear visual perception of its structure, and having to scroll (or scroll a lot more) to achieve the same.
The attitude of some people who defend arbitrarily short limits on the length of lines of code is a bit childish, and many of them fail to understand at least three fundamental concepts, which makes it reasonable to doubt their programming and reasoning ability.
First, the average length of a line of code varies with the language. If you impose the same limit on every language, you are failing to understand that languages are different and code written in them looks different. In C, most lines of code are part of functions, so the typical obligatory indentation level of a line of code is 1. In C++, however, a well-designed piece of code might place an inline method of a class inside a namespace. The initial indentation level is then 3 or 4, depending on whether you also indent the public, private and protected keywords. If you use tab characters and display tabs as 8-column-wide gaps, that means 16 or 24 fewer columns for what may be fundamentally the same code. Furthermore, standard C functions rarely have names longer than 8 characters while, again as an example, in C++ you could be using std::transform, which is 14 characters by itself. Many modern languages use even more verbose function and method names. For them, 80 columns is simply not a reasonable limit.
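As a minimal sketch of this point (the namespace, class and method names are invented for illustration), here is a C++ inline method whose body starts three indentation levels deep before a single character of logic is written:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Hypothetical example (invented names): an inline method of a class inside a
// namespace. Its body begins at indentation level 3.
namespace billing {
    class InvoiceFormatter {
    public:
        std::string upper(std::string s) {
            // With 8-column tabs, this line would start at column 24,
            // and std::transform alone is already 14 characters.
            std::transform(s.begin(), s.end(), s.begin(),
                           [](unsigned char c) {
                               return static_cast<char>(std::toupper(c));
                           });
            return s;
        }
    };
}
```

With 8-column tabs, the std::transform call here starts at column 24 of an 80-column budget; the same logic as a plain C function would start at column 8.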
Second, a basic logic mistake: they are confusing the problem with its symptoms. A long line of code may indicate bad code because it may mean you are using too many nested expressions, making the code hard to follow due to the increased cyclomatic complexity. However, a line may not be long for that reason. It may be long because it has to be long, for example a printf of a slightly long message. What if it takes 90 columns? Nothing. It’s perfectly fine. If, on top of that, you believe that writing short lines means you are writing good code, you are not placing yourself in a good position. What makes code good or bad is how well, efficiently and clearly it solves the problem, how well you divide your application into independent modules and simple functions, and how easy it makes the initial problem look. It is perfectly possible to write an ugly application using short lines. Remember: long lines are usually not a problem, although in some languages they may indicate one. It’s not the same.
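To make the printf case concrete, here is a hedged sketch (the function name and message are invented): the line below exceeds 90 columns only because the diagnostic text itself is long, not because the code is complex:

```cpp
#include <cstdio>
#include <string>

// Hypothetical example (invented message): a long line that is long simply
// because the message it prints is long. Splitting it would gain nothing.
std::string missing_file_warning(const std::string& path, int attempts) {
    char buf[256];
    std::snprintf(buf, sizeof(buf), "warning: configuration file '%s' not found after %d attempts, using built-in defaults", path.c_str(), attempts);
    return buf;
}
```

The snprintf line has a cyclomatic complexity of 1 no matter how many columns it occupies.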
Third, the solutions they propose usually don’t fix anything. If you have a long line in which you call a function with four parameters and the line turns out to be too long, they tell you to put each parameter on its own line. Woah. What a radical change! The code would be so much better structured that way, were it not for the fact that it’s the same code. People using an arbitrary length limit will often surpass it, and their solution is to split the expression. The expression is the same. You are not making the code better by splitting the line. You are not changing your approach to the problem, or the structure of your program, or decreasing the cyclomatic complexity, or anything. You are only splitting the line. Plus, as mentioned previously, you are pushing code below the bottom of your screen. If you have space on your screen, use it.
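A minimal sketch of that “fix” (the function and its arguments are invented for illustration): the split version is the same call with the same structure and complexity, merely spread over five lines instead of one:

```cpp
#include <string>

// Hypothetical four-parameter function, invented for illustration.
std::string make_label(const std::string& name, int width, int height, bool bold) {
    return name + ":" + std::to_string(width) + "x" + std::to_string(height) + (bold ? ":b" : "");
}

// The original call, on one line:
const std::string one_line = make_label("main window", 1024, 768, true);

// The "corrected" call after splitting: identical expression, identical code.
const std::string split = make_label(
    "main window",
    1024,
    768,
    true);
```

Both variants compile to the same call and produce the same result; the only difference is how many lines of screen space they consume.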
My advice is: structure your program as best as you can. Create good code. Write simple functions. If, when doing that, you notice you have written an unusually long line, it’s fine as long as it fits on your screen and is readable. Focusing on keeping the line length below an arbitrary limit is only a distraction; it won’t make you a better programmer. Be practical. If you are working with people whose screen resolution is lower and they tell you they have trouble reading your long lines, be a good workmate and try to make them shorter. And if they are working on the same application but their lines are much shorter, maybe there is a problem with your code.
PS: Usability studies have shown that short lines of text are easier to read, because it’s easier to track the line you are reading and jump to the beginning of the next line when you reach the end of the current one. This is sometimes used as an argument in favour of short lines of code, but it fails to notice that those studies apply to normal, unstructured text, not code. Code is easier to read in this regard because it follows a structure, its lines usually differ from each other, and it is divided into short blocks instead of long paragraphs. In my humble opinion, this argument is void when talking about source code.