Error handling: return values and exceptions

My colleague edulix started a discussion on the golang list about the merits of Go’s error handling. This got me thinking about the problem of error handling in general, and about how no language seems to have gotten it quite right. What follows is a quick braindump.

Two approaches

There are two prominent approaches to error handling in software engineering today: return values and exceptions.

With return value error handling, errors are indicated through the values returned from functions; the caller has to write checks on these values to detect error conditions. Originally, error returns were encoded as values falling outside the range corresponding to normal operation.

For example, when opening a file with fopen in C, the normal return value is a pointer to an object representing the file. But if an error occurs, the function returns null instead (additional information about the error is available from a global variable, errno). Just as the caller uses the function’s normal return values, it is also the caller’s responsibility to check for and deal with errors present in the return.
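A minimal sketch of this pattern in C (the file name is made up for illustration):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("data.txt", "r");
        if (f == NULL) {
            /* The error is signaled in-band by the null return; the details
               live in the global errno. The caller must remember to check. */
            fprintf(stderr, "fopen failed: %s\n", strerror(errno));
            return 1;
        }
        /* ... use the file ... */
        fclose(f);
        return 0;
    }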

Exceptions establish a separate channel to relay error information back from functions. When using exceptions, a function’s return only holds values that correspond to normal behaviour. In this sense the values returned by functions are more immediately identified with the function’s purpose; details regarding what can and does go wrong are separated into the exceptional channel.

For example, the purpose of Java’s parseInt function is to parse an integer. So the return value’s content is exclusively a consequence of this purpose: the integer parsed from the string. If something goes wrong when attempting to achieve this purpose, it is signaled by throwing an exception which holds the error details. Whereas with return values there is no need for additional machinery beyond that provided by normal function calls, exceptions require an extra mechanism: try/catch blocks.
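A minimal sketch of the two channels at work:

    // The return value of parseInt carries only the parsed integer;
    // failure travels through the separate exception channel.
    public class ParseExample {
        public static void main(String[] args) {
            try {
                int n = Integer.parseInt("42x"); // not a valid integer
                System.out.println("parsed: " + n);
            } catch (NumberFormatException e) {
                // Error details arrive here, outside the normal return path.
                System.err.println("could not parse: " + e.getMessage());
            }
        }
    }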

Exceptions’ claimed virtues

[Image: Exceptions try to declutter code (http://mortoray.com/2012/03/08/the-necessity-of-exceptions/)]

Exceptions aim to be an improvement over return values. Here are some possible shortcomings of the return value approach:

  • The code is cluttered with error checking, which reduces readability and obscures intent.
  • Sometimes the scope at which an error occurs is not the best one to handle it. In these cases one has to manually propagate errors backwards, through several returns, until an adequate error handling scope is reached. This is cumbersome.
  • If one forgets to check for errors the program may continue to run in an inconsistent state, causing fatal errors down the line that may not be easy to understand.

Exceptions try to address these possible shortcomings:

  • With exceptions normal program flow is separated from error handling code, which lives in special catch blocks for that purpose. This declutters normal behaviour code, making intent clear and readable.
  • With exceptions, programmers have the option of handling errors at the right scope, without having to manually return across several levels. When exceptions are not handled at the scope where the error occurs, they are propagated automatically up the stack until a suitable handler is found (see the sketch below).
  • If exceptions are not handled, the program crashes immediately instead of continuing in an inconsistent state. A stack trace shows the error, and the call stack leading to the error.
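To make the propagation point concrete, here is a small Java sketch (the function names are invented for illustration):

    import java.io.FileReader;
    import java.io.IOException;

    public class PropagationExample {
        // Can fail, but contains no error-handling code of its own.
        static String readConfig(String path) throws IOException {
            try (FileReader reader = new FileReader(path)) { // may throw
                // ... read and return the contents ...
                return "";
            }
        }

        // Intermediate level: the exception passes through untouched,
        // with no manual forwarding of error codes.
        static String loadSettings() throws IOException {
            return readConfig("settings.conf");
        }

        public static void main(String[] args) {
            try {
                loadSettings();
            } catch (IOException e) {
                // Handled here, at the scope that knows what to do about it.
                System.err.println("could not load settings: " + e.getMessage());
            }
        }
    }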

But it’s not all good

The story doesn’t end here, of course: exceptions have their own problems. Some say they are worse than the approach they were meant to improve upon. This is the position that many proponents of Go’s error handling defend, stating that returns with multiple values (a feature not originally present in C, hence errno and the like) are not only sufficient but simply better than exceptions:

  • Because exceptions propagate up and can be handled elsewhere, programmers are tempted to ignore errors and “let someone else deal with it”. This lazy behaviour can lead to errors being handled too late or not at all.
  • It is in principle impossible to know whether a given function call can throw an exception, short of analyzing its entire forward call tree. As a consequence, the use of exceptions in a language is equivalent to hidden gotos that can jump control back up the call stack at any function invocation. In contrast, when using return values, error information is formally present in the function signature.

Whereas before we listed exception propagation as an improvement (allowing error handling at the right scope), here it is listed as a weakness: it can encourage bad programming practice. But it is the second criticism that I find more significant: exceptions are opaque and may lead to unpredictable behaviour. With such an unpredictable ingredient, the programmer is unable to anticipate and control the potential paths his/her code can take, as lurking under every function call is the potential for a jump in execution back up the call stack.

Of course, in real world practice, exceptions are not the randomness disaster that this description may suggest[1]. Properly documented functions do communicate to the programmer, to a reasonable approximation, what can and cannot happen exception-wise. And properly implemented functions do not whimsically throw exceptions at every opportunity, “magically” making your code jump to a random location.

Good intentions: checked exceptions

What about checked exceptions, you say? The rationale for checked exceptions in Java is precisely one that formally addresses the very problem we just described, because

With checked exceptions, functions must, in their type signature, specify what exceptions can be thrown.

Checked exceptions are part of a function’s type, and the compiler ensures that exceptions are either caught or declared to be thrown (a use of the type system that fits well with what I’ve advocated here). Isn’t this exactly what’s needed? The subtle problem is that the phrase “caught or declared to be thrown” does not mean the same as “correctly handled”. The use of checked exceptions encourages lazy behaviour from programmers that makes things even worse. When forced by the compiler to deal with checked exceptions, programmers bypass them by:

  • Indiscriminately adding throws clauses, which just add noise to the code
  • Writing empty try/catch blocks that never get populated with error handling code, potentially resulting in silent failures (sketched below)
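The second bypass might look like this (a deliberately bad sketch; the names are invented):

    import java.io.IOException;

    public class SwallowExample {
        static void saveState() throws IOException {
            throw new IOException("disk full"); // stand-in for a real failure
        }

        public static void main(String[] args) {
            try {
                saveState();
            } catch (IOException e) {
                // TODO: handle this someday. The compiler is satisfied,
                // but the error has been silently swallowed.
            }
            System.out.println("carrying on in a possibly inconsistent state");
        }
    }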

The second problem is especially dangerous: it can result in exceptions being swallowed and silenced until some larger error occurs down the line, an error which will be very hard to diagnose[3]. If you recall, this is precisely the third drawback we listed for error handling via return values. Both problems arise not from the language feature itself, but out of imperfect programming.

The bottom line for a language feature is not what some ideal programmer can do with it, but the use it encourages in the real world. In this case the argument goes that checked exceptions (and as we saw above, error return values) make it harder to write correct error handling code; it requires additional discipline not to shoot oneself in the foot. Which is a shame, since the rationale behind checked exceptions is very appealing: documentation at the type level and compiler enforcement of proper error handling.

Quick recap. Exceptions aim to be an improvement over return values, offering the benefits mentioned above: code decluttering, error handling at the right scope via propagation, and failing fast to avoid hidden errors. Of these three features, the second is questioned and undermined by the two points that proponents of return values make. Opacity in particular is a strong drawback. Checked exceptions aim to resolve this by formally including exception information in the function type and enforcing error handling through the compiler, but the industry consensus seems to be that they do more harm than good.

The way forward

It’s difficult to reach a general conclusion in favor of exceptions or return values; the matter is unclear. But let’s assume that exceptions are no good. Even granting this assumption, the problems with return values would not magically go away. Is there some way, besides exceptions, to address these problems?

One approach I find promising achieves decluttering while staying within the confines of errors as return values, using a functional style. This technique exploits several programming language features: sum types, pattern matching and monads. But those are just details; what’s important is the end result. See this example in Rust:
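(The original snippet is not preserved here; what follows is a minimal reconstruction. The from_file name comes from the surrounding text, the file-reading body is assumed, and try! is the Rust macro of the time, later superseded by the ? operator.)

    use std::fs::File;
    use std::io::{self, Read};

    fn from_file(path: &str) -> Result<String, io::Error> {
        // Both operations below can fail; on Err, try! makes from_file
        // return that error to the caller immediately.
        let mut file = try!(File::open(path));
        let mut contents = String::new();
        try!(file.read_to_string(&mut contents));
        Ok(contents)
    }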

The first two operations of the function can fail; each returns a sum type that can represent either success or failure. The desired behaviour is that if an error occurs at either of these lines, the function from_file should stop executing and return that error.

Notice how the logic that achieves this does not clutter the code; it is embedded in the try! macro, which uses pattern matching on the sum type either to yield a value to be used in the next line (the file variable), or to short-circuit program flow back to the caller. If everything works, the code eventually returns Ok. Let’s restate the main points:

  • from_file has a sum-type return that indicates the function may fail.
  • the individual invocations within the function itself also return that type.
  • the try! macro extracts successful values from these invocations, or short-circuits execution, passing the error back as the return value of from_file.
  • the caller of the from_file function can proceed the same way, or deal with errors directly (via pattern matching).

Besides the internal machinery and language features in use, I hope it’s clear that this style of handling errors mimics some of the positive aspects of exceptions without using anything but return values. Here is another example with a similar[2] intent, this time in Scala:
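(Again, the original snippet is not preserved; this reconstruction uses assumed names: a db map standing in for the database, a Row type for its rows, and a capitals lookup map.)

    // Each step returns an Option; the for-comprehension short-circuits
    // to None as soon as any step fails.
    case class Row(columns: Map[String, String])

    def capitalOf(db: Map[Int, Row], capitals: Map[String, String], id: Int): String =
      (for {
        row     <- db.get(id)                  // may fail: row not found
        country <- row.columns.get("country")  // may fail: column missing
        capital <- capitals.get(country)       // may fail: no such entry
      } yield capital).getOrElse("Unknown")    // default if any step failed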

This code shows a series of operations that can each fail: getting a row from a database, getting one of its columns, and then doing a lookup on a map. Finally, if any of the steps fails, a default value is assigned. The equivalent code with standard null checking would be an ugly, repetitive series of nested if-blocks. As in the previous example, the error checking logic does not clutter the code, but occurs behind the scenes thanks to the Option monad. Again, no exceptions, just return values.

Error handling is hard. After years of using exceptions it is still controversial whether they are a net gain or a step back. We cannot expect that a language feature will turn up and suddenly solve everything. Having said that, it seems to me that the functional style we have seen has something to offer in at least one of the areas where traditional return values fall short. Time will tell whether this approach is a net gain.


Notes

[1] If the real-world application of exceptions were in fact disastrous, software written with exceptions would just never work, and apparently it does; exceptions are in wide use in countless systems today.

So a reasonable position to take is that criticisms of exceptions are on the mark when pointing out that the exception mechanism is opaque and potentially unpredictable. This criticism does not mean that software written with exceptions is inherently flawed, but that exceptions make correct error handling hard. But then again, error handling is a hard problem to begin with. Does the opacity of exceptions nullify their advantages?

[2] In this case the problem is how to deal with failure in the form of null values.

[3] In Rust, for example, the problem of swallowing errors is also addressed with #[must_use]; see http://doc.rust-lang.org/std/result/
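A tiny sketch of the effect: since the standard library’s Result is annotated with #[must_use], silently dropping one draws a compiler warning.

    fn might_fail() -> Result<(), String> {
        Err("something went wrong".to_string())
    }

    fn main() {
        // Warning: unused `Result` that must be used. To proceed, the
        // caller has to consciously handle or discard the value,
        // e.g. `let _ = might_fail();`
        might_fail();
    }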

Further reading

Exceptions

http://www.joelonsoftware.com/items/2003/10/13.html

http://www.lighterra.com/papers/exceptionsharmful/

http://mortoray.com/2012/03/08/the-necessity-of-exceptions/

Checked exceptions

http://www.artima.com/intv/handcuffs3.html

http://googletesting.blogspot.ru/2009/09/checked-exceptions-i-love-you-but-you.html

Rust

http://doc.rust-lang.org/std/io/index.html

http://rustbyexample.com/result/try.html

http://www.hoverbear.org/2014/08/12/Option-Monads-in-Rust/

Scala

http://danielwestheide.com/blog/2012/12/26/the-neophytes-guide-to-scala-part-6-error-handling-with-try.html

http://twitter.github.io/effectivescala/#Error%20handling-Handling%20exceptions

http://blog.protegra.com/2014/01/28/exploring-scala-options/

https://groups.google.com/forum/#!topic/scala-debate/K6wv6KphJK0%5B101-125-false%5D

Live programming and the feedback loop

Continuing along the theme of tightening the feedback loop, I saw this post on LtU mentioning a paper on live programming:

Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and behavior, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible.

This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.

A cognitive approach to evaluating programming languages

Programmers have been known to engage in flame wars about programming languages (and related matters like choice of text editor, operating system or even code indent style). Rational arguments are absent from these heated debates; differences in opinion usually reduce to personal preferences and strongly held allegiances without much objective basis. I have discussed this pattern of thinking before as found in politics.

Although humans have a natural tendency to engage in this type of thought and debate for any subject matter, the phenomenon is exacerbated in fields where there is no objective evidence available to reach conclusions; no method to settle questions in a technical and precise way. Programming languages are a clear example of this, and so better/worse opinions are more or less free to roam without the constraints of well established knowledge. Quoting from a presentation I link to below:

Many claims are made for the efficacy and utility of new approaches to software engineering – structured methodologies, new programming paradigms, new tools, and so on. Evidence to support such claims is thin, and such evidence as there is, is largely anecdotal. Of proper scientific evidence there is remarkably little. – Frank Bott

Fortunately there is a recognized wisdom that can settle some debates: there is no overall better programming language; you merely pick the right tool for the job. This piece of wisdom is valuable for two reasons. First, because it is most probably true. Second, its down-to-earth characterization of a programming language as just a tool inoculates against religious attitudes towards it; you don’t worship tools, you merely use them.

But even though this change of attitude is welcome, and definitely more productive than the usual pointless flame wars, it does not automatically imply that there is no such thing as a better or worse programming language for some class of problems, or that better or worse cannot be defined in some technical yet meaningful way. After all, programming languages should be subject to advances like any other engineering tool. The question is, what approach can be used to even begin to think about programming, programs, and programming languages in a rigorous way?

One approach is to establish objective metrics on source code that reflect some property of the program relevant to writing better software. One such metric is cyclomatic complexity, a measure of source code complexity. The motivation for this metric is clear: complex programs are harder to understand, maintain and debug. In this sense, cyclomatic complexity is an objective metric that tries to reflect a property that can be interpreted as better/worse; a practical recommendation could be to write and refactor code in a way that minimizes the value of this metric.
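As a toy illustration (a common shortcut computes the metric as one plus the number of decision points in a function):

    // Three decision points give this function a cyclomatic complexity
    // of 1 + 3 = 4: there are four independent paths through it.
    fn classify(n: i32) -> &'static str {
        if n < 0 {                 // decision point 1
            "negative"
        } else if n == 0 {         // decision point 2
            "zero"
        } else if n % 2 == 0 {     // decision point 3
            "even"
        } else {
            "odd"
        }
    }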

But the problem with cyclomatic complexity, or any measure, is whether it in fact reflects some property that is relevant and has meaningful consequences. It is not enough that the metric is precisely defined and objective if it doesn’t mean anything. In the above, it would be important to determine that cyclomatic complexity is in fact correlated with difficulty in understanding, maintaining, and debugging. Absent this verified correlation, one cannot make the jump from an objective metric on code to some interpretation in terms of better/worse, and we’re back where we started.

The important thing to note is that correctly assigning some property of source code a better/worse interpretation is partly a matter of human psychology, a field whose methods and conclusions can be exploited. The fact that some program is hard to understand (or maintain, debug, etc) is a consequence both of some property of the program and some aspect of the way we understand programs. This brings us to the concept of the psychology of programming as a necessary piece in the quest to investigate programming in a rigorous and empirical way.

Michael Hansen discusses these ideas in this talk: Cognitive Architectures: A Way Forward for the Psychology of Programming. His approach is very interesting: it attempts to simulate cognition via the same cognitive architectures that play a role in artificial general intelligence. Data from these simulations can cast light on how different programming language features impact cognition, and therefore how these features perform in the real world.

[Image: The ACT-R cognitive architecture]

I have to say, however, that this approach seems very ambitious to me. First, because modeling cognition is incredibly hard to get right; otherwise we’d already have machine intelligence. Second, because it is hard to isolate the effects of anything beyond a low-granularity feature, and programming languages, let alone paradigms, are defined by the interplay of many of these features and characteristics. Both of these problems are recognized by the speaker.


[1] Image taken from http://www.lackuna.com/2013/01/02/4-programming-languages-to-ace-your-job-interviews/

Make static type checking work for you

[Image: Beta reduction]

When first using programming languages like C, C++ or Java, I never stopped to think about what the compilation phase was actually doing, beyond considering it a necessary translation from source code to machine code. The need to annotate variables and methods with types seemed an intrinsic part of that process, and I never gave much thought to what it really meant, or the purpose it served; it was just part of writing and compiling code.

Using dynamic languages changes things: you realize that code may need some form of translation or interpretation for execution, but does not absolutely require type annotations the way statically typed languages do. In this light, one reconsiders from scratch what a type system is, how it plays a part in compilation, and what compilation errors and type errors really are.

A type error (I am not talking about statistics) is an important concept to grasp well, and in fact not just for programming: it is also a useful concept for illuminating errors in natural language and thinking. Briefly, a type error is treating data as belonging to a kind to which it does not in fact belong. The obvious example is invoking operations on an object that does not support them. Or alternatively, according to this page:

type error: an attempt to perform an operation on arguments of the wrong types.

If one tries to perform such illegal operations (and there is no available conversion or coercion), the program can behave unexpectedly or crash. Statically typed languages can detect this error at compile time, which is a good thing. This is one of the main points advocates of statically typed languages make when discussing static and dynamic languages.
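A trivial sketch of the difference, in Rust:

    fn main() {
        let n: i32 = 42;
        // The next line is a type error: `i32` has no method `to_uppercase`,
        // so a static type checker rejects the program before it ever runs.
        // let shout = n.to_uppercase();
        println!("{}", n.to_string()); // fine: this operation exists for i32
    }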

But the question is, how big an advantage is this? How important are type errors? What fraction of common programming errors do they make up? And how much of program correctness (in the non-strict sense of the term) can a compiler check and ensure?

Compilers and type systems can be seen not just as requirements for writing programs that do not crash, but also as tools with which to express and check problem domain information. The more information about the problem domain is encoded in a program’s types, the more the compiler can check for you. In this mindset one adds type information in order to exploit the type system, rather than just conform to it.

Here are a few examples where problem domain information is encoded in types in order to prevent errors that otherwise could occur at runtime, and would therefore need specific checks. A concrete example taken from the first post I linked:

  • If I call move on a tic-tac-toe board, but the game has finished, I should get a compile-time type-error. In other words, calling move on inappropriate game states (i.e. move doesn’t make sense) is disallowed by the types.
  • If I call takeMoveBack on a tic-tac-toe board, but no moves have yet been made, I get a compile-time type-error.
  • If I call whoWonOrDraw on a tic-tac-toe board, but the game hasn’t yet finished, I get a compile-time type-error.

By encoding these rules of the problem domain into the type system, it becomes impossible to write a program that violates them: such logic errors simply do not compile. A sketch of this idea follows.
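The linked post is not reproduced here, but the same idea can be sketched in Rust with the typestate pattern: marker types make each operation exist only in the states where it makes sense (all names below are hypothetical).

    use std::marker::PhantomData;

    struct InProgress;
    struct Finished;

    // The game state is carried in the type, not checked at runtime.
    struct Board<State> {
        cells: [u8; 9],
        _state: PhantomData<State>,
    }

    impl Board<InProgress> {
        fn new() -> Self {
            Board { cells: [0; 9], _state: PhantomData }
        }
        // Only an in-progress game accepts moves.
        fn make_move(self, _cell: usize) -> Board<InProgress> {
            self
        }
        fn finish(self) -> Board<Finished> {
            Board { cells: self.cells, _state: PhantomData }
        }
    }

    impl Board<Finished> {
        // Asking who won only type-checks on a finished game.
        fn who_won(&self) -> Option<u8> {
            None
        }
    }

    fn main() {
        let board = Board::new().make_move(4).finish();
        let _winner = board.who_won();
        // board.make_move(0);     // compile-time error: no such method
        // Board::new().who_won(); // compile-time error: game not finished
    }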

But it is unrealistic to go all the way and claim that all the relevant information should be expressible in types, or to embrace its really seductive twin: that all program errors are type errors. Unfortunately, the real world is not that tidy and convenient, unless you’re doing specialized things like theorem proving, or programming in an exotic language like Coq.

As advocates of dynamic languages know very well, there is no substitute for unit tests and integration tests. The compiler should not make you feel safe enough to disregard testing. Compile-time checking and testing are complementary, not mutually exclusive: compile-time checking does not eliminate the need for testing, nor does testing eliminate the benefits of compile-time checking.

Still, there is most probably room left to express more relevant information in the type system, making the compiler do more work for you. And this becomes even more plausible if you’re using a language with a type system as powerful as Scala’s. More on these matters: Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages.