Programmers have been known to engage in flame wars about programming languages (and related matters like choice of text editor, operating system, or even code indent style). Rational arguments are absent from these heated debates; differences in opinion usually reduce to personal preferences and strongly held allegiances without much objective basis. I have discussed this pattern of thinking before as found in politics.
Although humans have a natural tendency to engage in this type of thought and debate for any subject matter, the phenomenon is exacerbated in fields where there is no objective evidence available to reach conclusions; no method to settle questions in a technical and precise way. Programming languages are a clear example of this, and so better/worse opinions are more or less free to roam without the constraints of well-established knowledge. Quoting from a presentation I link to below:
Many claims are made for the efficacy and utility of new approaches to software engineering – structured methodologies, new programming paradigms, new tools, and so on. Evidence to support such claims is thin, and such evidence as there is is largely anecdotal. Of proper scientific evidence there is remarkably little. – Frank Bott
Fortunately, there is a piece of recognized wisdom that can settle some debates: there is no overall better programming language, you merely pick the right tool for the job. This wisdom is valuable for two reasons. First, because it is most probably true. Second, its down-to-earth characterization of a programming language as just a tool inoculates against religious attitudes towards it; you don't worship tools, you merely use them.
But even though this change of attitude is welcome, and definitely more productive than the usual pointless flame wars, it does not automatically imply that there is no such thing as a better or worse programming language for some class of problems, or that better or worse cannot be defined in some technical yet meaningful way. After all, programming languages should be subject to advances like any other engineering tool. The question is, what approach can be used to even begin to think about programming, programs, and programming languages in a rigorous way?
One approach is to establish objective metrics on source code that reflect some property of the program relevant to writing better software. One such metric is cyclomatic complexity, a measure of source code complexity. The motivation for this metric is clear: complex programs are harder to understand, maintain, and debug. In this sense, cyclomatic complexity is an objective metric that tries to reflect a property that can be interpreted as better/worse; a practical recommendation could be to write and refactor code in a way that minimizes the value of this metric.
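To make the metric concrete: cyclomatic complexity starts at 1 for a single path of straight-line code and adds one for each decision point. The sketch below is a simplified, illustrative implementation over Python's own syntax tree; real analyzers refine which constructs count as branches, and the exact node set chosen here is my assumption, not a standard.

```python
import ast

# Simplified rule: +1 per branching construct, +1 per extra operand in a
# boolean expression ('a and b and c' contributes two branch points).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    complexity = 1  # one path through straight-line code
    for node in ast.walk(tree):
        if isinstance(node, BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1
    return complexity

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        pass
    return "positive"
"""
# Two if-branches (the elif is a nested If) plus one loop: 1 + 3 = 4
print(cyclomatic_complexity(snippet))  # → 4
```

A refactoring that removes branches (say, replacing the conditional ladder with a lookup) would lower this number, which is exactly the kind of recommendation the metric is meant to support.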
But the problem with cyclomatic complexity, or any such measure, is whether it in fact reflects some property that is relevant and has meaningful consequences. It is not enough that the metric is precisely defined and objective if it doesn't mean anything. In the above, it would be important to determine that cyclomatic complexity is in fact correlated with difficulty in understanding, maintaining, and debugging. Absent this verified correlation, one cannot make the jump from an objective metric on code to some interpretation in terms of better/worse, and we're back where we started.
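The validation step described above is, at its core, an empirical measurement: gather metric values and some observed outcome, then check how strongly they covary. The figures below are entirely made up for illustration; only the procedure, computing a Pearson correlation coefficient, is the point.

```python
import statistics

# Hypothetical data: metric values for five functions, and (invented)
# observed effort to debug each one. Real validation requires real measurements.
complexity_scores = [2, 4, 7, 10, 15]
minutes_to_debug = [5, 9, 14, 22, 30]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(complexity_scores, minutes_to_debug)
print(round(r, 3))
```

Only with a strong, replicated correlation of this kind, on real data rather than invented numbers, would the jump from "high cyclomatic complexity" to "worse code" be justified.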
The important thing to note is that correctly assigning some property of source code a better/worse interpretation is partly a matter of human psychology, a field whose methods and conclusions can be exploited. The fact that some program is hard to understand (or maintain, debug, etc) is a consequence both of some property of the program and some aspect of the way we understand programs. This brings us to the concept of the psychology of programming as a necessary piece in the quest to investigate programming in a rigorous and empirical way.
Michael Hansen discusses these ideas in this talk: Cognitive Architectures: A Way Forward for the Psychology of Programming. His approach is very interesting: it attempts to simulate cognition via the same cognitive architectures that play a role in artificial general intelligence. Data from these simulations can cast light on how different programming language features impact cognition, and therefore how these features perform in the real world.
I have to say, however, that this approach seems very ambitious to me. First, because modeling cognition is incredibly hard to get right; otherwise we'd already have machine intelligence. Second, because it is hard to isolate the effects of anything beyond a low-granularity feature, and programming languages, let alone paradigms, are defined by the interplay of many such features and characteristics. Both of these problems are recognized by the speaker.