Causal entropy maximization and intelligence

[Image: CEM. Taken from Causal Entropic Forces]

Recently I was referred to a paper titled Causal Entropic Forces published in Physical Review Letters that attempts to link intelligence and entropy maximization. You can find reviews of this paper here and here. The paper starts with

Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization….In this Letter, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we show can spontaneously induce remarkably sophisticated behaviors associated with the human ‘‘cognitive niche,’’ including tool use and social cooperation, in simple physical systems.

The authors then go on to define a causal path version of entropy. Briefly, this is a generalization from standard entropy, a measure of how many states a system can be in at a specific point in time, to causal path entropy, a measure of how many paths that system can follow during a given time horizon. In technical language, microstates are mapped to paths in configuration space, and macrostates are mapped to configuration space volumes:

In particular, we can promote microstates from instantaneous configurations to fixed-duration paths through configuration space while still partitioning such microstates into macrostates according to the initial coordinate of each path

In other words, an initial coordinate establishes a volume in configuration space which represents the possible future histories starting at that point. This is the macrostate (depicted as a cone in the image above).

Having defined this version of entropy, the authors then add the condition of entropy maximization to their model; this is what they call causal entropic forcing. For this to have a net effect, some macrostates must have volumes that are partially blocked off for physical reasons. These macrostates have fewer available future paths, and therefore less causal path entropy. The result is that macrostates with different entropies can be differentially favored by the condition of causal entropy maximization:

there is an environmentally imposed excluded path-space volume that breaks translational symmetry, resulting in a causal entropic force F directed away from the excluded volume.
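For reference, the paper defines the causal path entropy of a macrostate $X$ over a time horizon $\tau$, and the corresponding causal entropic force, roughly as follows (I am restating the formulas from the paper, so the notation may differ slightly from the original):

$$ S_c(X, \tau) = -k_B \int_{x(t)} \Pr\big(x(t)\mid x(0)\big)\, \ln \Pr\big(x(t)\mid x(0)\big)\, \mathcal{D}x(t) $$

$$ \mathbf{F}(X_0, \tau) = T_c \,\nabla_X S_c(X, \tau)\,\big|_{X_0} $$

where the path integral runs over all paths $x(t)$ of duration $\tau$ whose initial coordinate $x(0)$ lies in $X$, $k_B$ is Boltzmann’s constant, and $T_c$ is a “causal path temperature” that sets the strength of the forcing.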

Note that, unlike actual thermodynamic systems, which naturally exhibit entropy maximization for statistical reasons, causal entropic forcing is not physical: it is a thermodynamics-inspired premise the authors add to their model as a “what if” condition, to see what behaviour results. So, what happens when systems are subject to causal entropic forcing?

 we simulated its effect on the evolution of the causal macrostates of a variety of simple mechanical systems: (i) a particle in a box, (ii) a cart and pole system, (iii) a tool use puzzle, and (iv) a social cooperation puzzle…The latter two systems were selected because they isolate major behavioral capabilities associated with the human ‘‘cognitive niche’’

Before you get excited, the “tool use puzzle” and “social cooperation puzzle” are not what one might imagine. They are simple “toy simulations” that can be interpreted as tool use and social cooperation. In any case, the result was surprising: when running these simulations the authors observed adaptive behaviour that was remarkably sophisticated given the simplicity of the physics model it emerged from. What’s more, the behaviour was not only adaptive but also showed a degree of generality; the same basic model was applied to both examples without specific tuning.

The remarkable spontaneous emergence of these sophisticated behaviors from such a simple physical process suggests that causal entropic forces might be used as the basis for a general—and potentially universal—thermodynamic model for adaptive behavior.
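To make the mechanism more concrete, here is a minimal sketch of how causal entropic forcing could be approximated numerically for something like the particle-in-a-box case. This is my own rough reconstruction, not the authors’ code: it discretizes the force into a few candidate actions, estimates the causal path entropy of each successor state by sampling random future paths and binning their endpoints, and picks the action that keeps the most future paths open.

```scala
import scala.util.Random

object CausalEntropySketch {
  val rng = new Random(0)
  val horizon = 50                    // time steps in each sampled future path
  val samplesPerAction = 200          // sampled paths used to estimate path entropy
  val actions = Seq(-1.0, 0.0, 1.0)   // candidate forces on the particle

  // Toy dynamics: position follows the applied force plus noise.
  def step(x: Double, force: Double): Double = x + 0.1 * force + 0.05 * rng.nextGaussian()

  // The environmentally excluded volume: paths that leave the box are not counted.
  def insideBox(x: Double): Boolean = x > 0.0 && x < 1.0

  // Roll out one random future path; return its endpoint, or None if it left the box.
  def rollout(x0: Double): Option[Double] = {
    var x = x0
    var t = 0
    while (t < horizon && insideBox(x)) { x = step(x, rng.nextGaussian()); t += 1 }
    if (insideBox(x)) Some(x) else None
  }

  // Crude estimate of causal path entropy: bin the endpoints of the surviving sampled paths.
  def pathEntropy(x0: Double): Double = {
    val endpoints = (1 to samplesPerAction).flatMap(_ => rollout(x0))
    if (endpoints.isEmpty) Double.NegativeInfinity
    else {
      val n = endpoints.size.toDouble
      endpoints.groupBy(x => math.floor(x * 20)).values
        .map(bin => bin.size / n)
        .map(p => -p * math.log(p))
        .sum
    }
  }

  // Causal entropic forcing, discretized: pick the action whose successor state
  // keeps the most future paths open.
  def chooseAction(x: Double): Double = actions.maxBy(a => pathEntropy(step(x, a)))

  def main(args: Array[String]): Unit = {
    var x = 0.1
    for (_ <- 1 to 100) x = step(x, chooseAction(x))
    println(f"final position: $x%.3f")
  }
}
```

In the paper, a particle subject to this kind of forcing tends to move toward the centre of the box, where the largest volume of future paths remains accessible; the sketch is meant to convey why that happens, not to reproduce the paper’s quantitative results.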

How does this fit in with intelligence?

I see two different ways one can think about this new approach. One, as an independent definition of intelligence built from very simple physical principles. Two, in terms of existing definitions of intelligence, to see where it fits in and whether it can be shown to be equivalent, or at least partially recovered.

Defining intelligence as causal entropy maximization (CEM) is very appealing, as it only requires a few very basic physical principles to work. In this sense it is a very powerful concept. But like all definitions it is neither right nor wrong; its merit rests on how useful it is. The question is thus how well this version of intelligence captures our intuitions about the concept, and how well it fits with existing phenomena that we currently classify as intelligent[1]. I’ll consider a simple example to suggest that intelligence defined this way cannot be the entire picture.

That example is, unsurprisingly, life, the cradle of intelligence. The concept that directly collides with intelligence defined as CEM is negentropy. Organisms behave adaptively to keep their biological systems within the narrow range of parameters compatible with life. We would call this adaptive behaviour intelligent, and yet its effect is precisely that of reducing entropy. Indeed, maximizing causal entropy for a living being means one thing: death.

One could argue that the system is not just the living organism but the living organism plus its environment, and that in that case entropy would perhaps be maximized[2]. This could resolve the apparent incompatibility, but CEM still seems unsatisfying. How can a good definition of intelligence leave out an essential aspect of intelligent life, the entropy minimization that all living beings must carry out? Is this local entropy minimization implicit in the overall causal entropy maximization?

CEM, intelligence and goals

Although there is no single correct existing definition of intelligence, current working definitions share certain common features. Citing [3]:

If we scan through the definitions pulling out commonly occurring features we find that intelligence is:

• A property that an individual agent has as it interacts with its environment or environments.

• Is related to the agent’s ability to succeed or profit with respect to some goal or objective.

• Depends on how able the agent is to adapt to different objectives and environments.

In particular, intelligence is related to the ability to achieve goals. One of the appealing characteristics of CEM as a definition of intelligence is that it does not need to specify goals explicitly. In the simulations carried out by the authors, the resulting behaviour seemed to be directed at achieving goals that were not specified by the experimenters; it could be said that the goals emerged spontaneously from CEM. But it remains to be seen whether this goal-directed behaviour arises automatically in real, complex environments. For the example of life mentioned above, it looks to be just the opposite.

So in general, how does CEM fit in with existing frameworks where intelligence is the ability to achieve goals in a wide range of environments[3]? Again, I see two possibilities:

  • CEM is a very general heuristic[4] that aligns with standard intelligence when there is uncertainty in the utility of different courses of action
  • CEM can be shown to be equivalent if there exists an encoding that represents specific goals via blocked off regions in configuration space (macrostates)

The idea behind the first possibility is very simple. If an agent faces many possibilities and it is unclear which of them will lead to achieving its goals, then an expected-utility maximizer would tend to follow courses of action that allow it to react adaptively and flexibly as more information becomes available. This heuristic is just a version of “keep your options open”.

The second idea is just a matter of realizing that CEM’s resulting behaviour depends on how you count possible paths to determine the causal entropy of a macrostate. If one were to rule out paths that result in low utility given certain goals, then CEM could turn out to be equivalent to existing approaches to intelligence. Is it possible to recover intelligent goal directed behaviour as an instance of CEM given the right configuration space restrictions?


References

[1] Our intuitions about intelligence exist prior to any technical definition. For example, we would agree that a monkey is more intelligent than a rock, and that a person is more intelligent than a fly. A definition that does not fit these basic notions would be unsatisfactory.

[2] http://prd.aps.org/abstract/PRD/v76/i4/e043513

[3] http://www.vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf

[4] This seems related to the idea of basic AI drives identified by Omohundro in his paper http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf, in particular drive 6: “AIs will want to acquire resources and use them efficiently.” Availability of resources translates to the ability to follow more paths in configuration space, paths that would otherwise be unavailable.

Live programming and the feedback loop

Continuing along the theme of tightening the feedback loop, I saw this post on LtU mentioning a paper on live programming:

Live programming allows programmers to edit the code of a running program and immediately see the effect of the code changes. This tightening of the traditional edit-compile-run cycle reduces the cognitive gap between program code and behavior, improving the learning experience of beginning programmers while boosting the productivity of seasoned ones. Unfortunately, live programming is difficult to realize in practice as imperative languages lack well-defined abstraction boundaries that make live programming responsive or its feedback comprehensible.

This paper enables live programming for user interface programming by cleanly separating the rendering and non-rendering aspects of a UI program, allowing the display to be refreshed on a code change without restarting the program. A type and effect system formalizes this separation and provides an evaluation model that incorporates the code update step. By putting live programming on a more formal footing, we hope to enable critical and technical discussion of live programming systems.


A cognitive approach to evaluating programming languages

Programmers have been known to engage in flame wars about programming languages (and related matters like choice of text editor, operating system or even code indent style). Rational arguments are absent from these heated debates; differences in opinion usually reduce to personal preferences and strongly held allegiances without much objective basis. I have discussed this pattern of thinking before as found in politics.

Although humans have a natural tendency to engage in this type of thought and debate on any subject matter, the phenomenon is exacerbated in fields where there is no objective evidence available to reach conclusions, no method to settle questions in a technical and precise way. Programming languages are a clear example of this, and so better/worse opinions are more or less free to roam without the constraints of well-established knowledge. Quoting from a presentation I link to below:

Many claims are made for the efficacy and utility of new approaches to software engineering – structured methodologies, new programming paradigms, new tools, and so on. Evidence to support such claims is thin and such evidence, as there is, is largely anecdotal. Of proper scientific evidence there is remarkably little. – Frank Bott

Fortunately there is a piece of recognized wisdom that can settle some debates: there is no overall better programming language; you merely pick the right tool for the job. This piece of wisdom is valuable for two reasons. First, because it is most probably true. Second, because its down-to-earth characterization of a programming language as just a tool inoculates against religious attitudes towards it; you don’t worship tools, you merely use them.

But even though this change of attitude is welcome and definitely more productive than the usual pointless flame wars, it does not automatically imply that there is no such thing as a better or worse programming language for some class of problems, or that better or worse cannot be defined in some technical yet meaningful way. After all, programming languages should be subject to advances like any other engineering tool. The question is, what approach can be used to even begin to think about programming, programs, and programming languages in a rigorous way?

One approach is to establish objective metrics on source code that reflect some property of the program relevant to writing better software. One such metric is cyclomatic complexity, a measure of source code complexity. The motivation for this metric is clear: complex programs are harder to understand, maintain and debug. In this sense, cyclomatic complexity is an objective metric that tries to reflect a property that can be interpreted as better/worse; a practical recommendation could be to write and refactor code in a way that minimizes the value of this metric.
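To make the metric concrete, here is a small hypothetical snippet with its complexity counted by the common “decision points plus one” rule of thumb (both the function and the exact counting convention are my own illustration):

```scala
// Hypothetical example: cyclomatic complexity counted as decision points + 1.
def classify(score: Int, hasBonus: Boolean): String =
  if (score > 90 && hasBonus) "excellent" // the `if` and the short-circuit `&&` each add a decision point
  else if (score > 50) "pass"             // a third decision point
  else "fail"
// 3 decision points + 1 = cyclomatic complexity of 4 (using the extended, McCabe-style count)
```

Refactoring that reduces the number of independent branches lowers the metric, which is exactly the kind of recommendation the metric is meant to support.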

But the problem with cyclomatic complexity, or any measure, is whether it in fact reflects some property that is relevant and has meaningful consequences. It is not enough that the metric is precisely defined and objective if it doesn’t mean anything. In the case above, it would be important to determine that cyclomatic complexity is in fact correlated with difficulty in understanding, maintaining, and debugging code. Absent this verified correlation, one cannot make the jump from an objective metric on code to some interpretation in terms of better/worse, and we’re back where we started.

The important thing to note is that correctly assigning a better/worse interpretation to some property of source code is partly a matter of human psychology, a field whose methods and conclusions can be exploited. The fact that some program is hard to understand (or maintain, debug, etc.) is a consequence both of some property of the program and of the way we understand programs. This brings us to the psychology of programming as a necessary piece in the quest to investigate programming in a rigorous and empirical way.

Michael Hansen discusses these ideas in this talk: Cognitive Architectures: A Way Forward for the Psychology of Programming. His approach is very interesting: it attempts to simulate cognition via the same cognitive architectures that play a role in artificial general intelligence. Data from these simulations can shed light on how different programming language features impact cognition, and therefore how those features perform in the real world.

[Image: The ACT-R cognitive architecture]

I have to say, however, that this approach seems very ambitious to me. First, because modeling cognition is incredibly hard to get right; otherwise we’d already have machine intelligence. Second, because it is hard to isolate the effects of anything beyond a low-granularity feature, and programming languages, let alone paradigms, are defined by the interplay of many such features and characteristics. Both of these problems are recognized by the speaker.



Make static type checking work for you

[Image: Beta reduction]

When I first used programming languages like C, C++ and Java, I never stopped to think about what the compilation phase was actually doing, beyond considering it a necessary translation from source code to machine code. The need to annotate variables and methods with types seemed an intrinsic part of that process, and I never gave much thought to what it really meant or the purpose it served; it was just part of writing and compiling code.

Using dynamic languages changes things: you realize that code may need some form of translation or interpretation to execute, but it does not absolutely require type annotations the way statically typed languages do. In this light, one reconsiders from scratch what a type system is, how it plays a part in compilation, and what compilation errors and type errors really are.

A type error (I am not talking about statistics) is an important concept to grasp well, and not just for programming: it is also a useful concept for illuminating errors in natural language and thinking. Briefly, a type error is treating data as belonging to a kind to which it does not in fact belong. The obvious example is invoking operations on an object that does not support them. Or alternatively, according to this page:

type error: an attempt to perform an operation on arguments of the wrong types.

If one tries to perform such illegal operations (and no conversion or coercion is available), the program can behave unexpectedly or crash. Statically typed languages can detect these errors at compile time, which is a good thing. This is one of the main points advocates of statically typed languages make when comparing static and dynamic languages.
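A minimal illustration of the difference (my own toy snippet, not taken from the page quoted above):

```scala
object TypeErrorDemo {
  val ok: Int = "hello".length   // fine: String supports the length operation
  // val bad: Int = "hello" - 1  // rejected at compile time: String has no `-` operation
  // A dynamic language would typically accept the equivalent code and only fail
  // with a runtime type error when that line actually executes.
}
```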

But the question is, how big an advantage is this? How important are type errors, and what fraction of common programming errors do they make up? And how much of program correctness (in the non-strict sense of the term) can a compiler check and ensure?

Compilers and type systems can be seen not just as requirements for writing programs that do not crash, but also as tools with which to express and check problem domain information. The more information about the problem domain is encoded in a program’s types, the more the compiler can check for you. In this mindset one adds type information in order to exploit the type system, rather than just to conform to it.

Here are a few examples where problem domain information is encoded in types in order to prevent errors that otherwise could occur at runtime, and would therefore need specific checks. A concrete example taken from the first post I linked:

  • If I call move on a tic-tac-toe board, but the game has finished, I should get a compile-time type-error. In other words, calling move on inappropriate game states (i.e. move doesn’t make sense) is disallowed by the types.
  • If I call takeMoveBack on a tic-tac-toe board, but no moves have yet been made, I get a compile-time type-error.
  • If I call whoWonOrDraw on a tic-tac-toe board, but the game hasn’t yet finished, I get a compile-time type-error.

By encoding these rules of the problem domain into the type system, it becomes impossible to write a program that violates them; such logic errors simply do not compile.
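Here is a rough sketch of how the first and third rules might be encoded, using a phantom type parameter to track the game state. The names echo the quoted rules, but the code is my own Scala illustration with deliberately simplified game logic, not the original post’s implementation:

```scala
sealed trait GameState
sealed trait InProgress extends GameState
sealed trait Finished   extends GameState

sealed trait Outcome
case object XWins extends Outcome
case object OWins extends Outcome
case object Draw  extends Outcome

// The board carries its game state as a phantom type parameter.
final case class Board[S <: GameState](moves: List[Int])

object TicTacToe {
  val newGame: Board[InProgress] = Board(Nil)

  // Legal only while the game is in progress; a move may or may not finish the game.
  def move(b: Board[InProgress], square: Int): Either[Board[Finished], Board[InProgress]] =
    if (b.moves.size + 1 >= 9) Left(Board(square :: b.moves))  // simplistic end-of-game rule
    else Right(Board(square :: b.moves))

  // Legal only once the game has finished.
  def whoWonOrDraw(b: Board[Finished]): Outcome = Draw         // placeholder winner logic

  // whoWonOrDraw(newGame)  // does not compile: found Board[InProgress], required Board[Finished]
}
```

The interesting point is not the game logic, which is stubbed out here, but that the illegal calls are ruled out by the types before the program ever runs.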

But it is unrealistic to go all the way and say that all the relevant information should be expressible in types, or to embrace its really seductive twin: that all program errors are type errors. Unfortunately, the real world is not that tidy and convenient, unless you’re doing specialized things like theorem proving, or programming in an exotic language like Coq.

As advocates of dynamic languages know very well, there is no substitute for unit tests and integration tests. The compiler should not make you feel safe enough to disregard testing. Compile-time checking and testing are complementary, not mutually exclusive: compile-time checking does not eliminate the need for testing, nor does testing eliminate the benefits of compile-time checking.

Still, there is most probably room left to express more relevant information in the type system and make the compiler do more work for you. And this becomes even more plausible if you’re using a language with a type system as powerful as Scala’s. More on these matters: Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages.