Art, regularity, novelty and dopamine

Dopamine (Wikipedia)

When writing up the post on regularity I was looking for an image that would match the content. As you can see if you go back to that entry, the image shows a regular structure, a fragment of architecture. I was quite happy with it, but what turned out to be even more interesting was the article I lifted it from, which I hadn’t looked at until now.

It turns out that in that article the author explores some of the same ideas I’ve considered when attempting to apply concepts from information theory[1] to interpret artistic experience as a learning process. The notions of monotony and complexity are also given a similar (tentative) mathematical formalization in terms of algorithmic information theory, which is exactly how we discussed regularity in the previous entry.

Of special interest is the hypothesis that monotony (at the low-complexity end of the spectrum; see below on AIC) is not only boring but in fact distressing, due to a mismatch between what our neural system is tuned to observe and what is actually observed:

why is human neurological response actually negative? Some insight into the effect comes from the notion of Biophilia, which asserts that our evolution formed our neurological system within environments defined by a very high measure of a specific type of coherent complexity. That is, our neurological system was created (evolved) to respond directly and exquisitely to complex, fractal, hierarchical geometric environments. When placed in environments that have opposite geometrical features, therefore, we feel ill at ease.

In my brief essay on the matter I mentioned the hierarchical nature of processing in the visual cortex, which is mirrored in machine learning approaches such as HTMs (Hierarchical Temporal Memory) and deep learning. The hierarchical nature of the processing, as well as the dual top-down and bottom-up flow of information, are well established in neuroscience. However, it remains to be seen whether there is a neurological explanation for the observation that “we feel ill at ease” with, for example, monotonous geometry.

In any case, all this led me to try to extend my information-theoretic picture of artistic expression and experience with a grounding in neurological processes. In particular: how does the brain deal with regularity and novelty as they relate to learning and satisfaction/pleasure? Neuroscience points to dopamine as related both to pleasure and reward and, critically, to learning (temporal difference learning).
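The temporal difference learning just mentioned can be made concrete in a few lines. This is my own minimal sketch, with toy states and made-up parameters (not drawn from any of the referenced papers); the prediction error `delta` is the quantity commonly identified with phasic dopamine signaling.

```python
# Minimal TD(0) sketch: `delta` is the reward-prediction error, the
# quantity often identified with phasic dopamine signaling. The states
# and parameters here are illustrative only.

values = {"s0": 0.0, "s1": 0.0, "s2": 0.0}  # learned value estimates
alpha, gamma = 0.1, 0.9                     # learning rate, discount

def td_update(state, reward, next_state):
    """One TD(0) step; returns the reward-prediction error."""
    delta = reward + gamma * values.get(next_state, 0.0) - values[state]
    values[state] += alpha * delta
    return delta

# An unexpected reward yields a large positive error; as the reward
# becomes predicted, the error (and the dopamine response) shrinks.
first = td_update("s1", reward=1.0, next_state="s2")
later = [td_update("s1", reward=1.0, next_state="s2") for _ in range(50)][-1]
```

The point of the sketch is the asymptotic behavior: a fully predicted reward produces (almost) no error signal, which is exactly the regularity/novelty distinction in reward terms.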

I’m still thinking many of these ideas through, but I’m going to try to summarize some of them in their crude form so I can return to this as a reference; please excuse the lack of polish.

Here are the main points:

* Artistic expression can be characterized in terms of its position in a spectrum whose extremes are the obvious/boring on one side and the unintelligible on the other. This spectrum corresponds to the algorithmic information content of the expression.

* Artistic experience is a learning process in which regularities are extracted and predictions are made according to those regularities.

* Novelty is the mismatch between prediction and observation; it is a departure from regularity.

* Novelty, either as reward-prediction error, or as an intrinsic reward (indirect indicator of reward) motivating exploration[2], activates dopamine in the brain. In the latter case, this is related to the exploration vs exploitation aspect of reinforcement learning[3].

* Different dopamine response profiles yield different levels of sensation-seeking in individuals[4], which may partially account for differences in taste in terms of preferred optimum points on the AIC spectrum.

* The optimal balance between insufficient and excessive complexity on the AIC spectrum parallels the balance required between task difficulty and skill in flow.

* Artistic satisfaction/pleasure is an interplay between novelty and regularity[5]; it may be possible to explain this within some model of dopamine-mediated reward[6].


Notes/References

[1] Perhaps the canonical example of this line of work is Jürgen Schmidhuber’s treatment

[2] Absolute Coding of Stimulus Novelty in the Human Substantia Nigra/VTA [2006]; Pure novelty spurs the brain

[3] Exploration & Exploitation Balanced by Norepinephrine & Dopamine [2007]

[4] Midbrain Dopamine Receptor Availability Is Inversely Associated with Novelty-Seeking Traits in Humans [2008]

[5] Dopamine: generalization and bonuses [2002]

[6] In particular, it is unclear whether a dopaminergic model is applicable to standalone perception which is the case for artistic experience, since actions and explicit reward are absent, and the timescales may be too small to be compatible with such a model.

What we mean by regularity

Regular structure (Nikos A. Salingaros)

I’ve spoken before of regularity, but haven’t defined it exactly. Before going into that, let’s first consider the intuitive notion that comes to mind. By regularity we mean something exhibiting pattern, repetition, invariance. We say something is regular if it follows a rule. In fact, the word’s etymology matches this: regular derives from the Latin regula, rule. Repetition and invariance result from the continued applicability of the rule, over time and/or space, to that which is regular. For example

1, 3, 5, 7, 9, 11, 13, 15….

we say this sequence is regular because it follows a rule. The rule is

each number is the result of adding 2 to the number before it

As per our scheme above, the rule is applicable throughout the sequence, the +2 difference repeats, and it is invariant.

This way of looking at regularity matches the language we’ve used previously when defining the key aspect of learning as the extraction of generally applicable knowledge from specific examples. In this case the specific examples would be any subset of the sequence, the general case is the sequence in its entirety, and the extracted knowledge is the rule “+2”.
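To make this concrete, here is a toy sketch (entirely illustrative, with function names of my own invention): extract the rule from a subset of the sequence, then use it to predict the continuation.

```python
# Toy illustration of learning as rule extraction: infer the common
# difference from specific examples, then apply it to predict how the
# sequence continues.

def extract_rule(examples):
    """Return the common difference if the examples share one, else None."""
    diffs = {b - a for a, b in zip(examples, examples[1:])}
    return diffs.pop() if len(diffs) == 1 else None

def predict(last, rule, n):
    """Extend the sequence by applying the rule n more times."""
    out = []
    for _ in range(n):
        last += rule
        out.append(last)
    return out

subset = [1, 3, 5, 7]        # the specific examples
rule = extract_rule(subset)  # the extracted knowledge: +2
continuation = predict(subset[-1], rule, 3)
```

Any subset suffices to recover the rule, and the rule then reproduces the general case, which is exactly the specific-to-general structure described above.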

We can take this further and try to formalize it by noting one consequence of the repetition characteristic: something that repeats can be shortened. The reason is simple: if we know the rule, we can describe the entire object[1] just by stating the rule. The rule, applied repeatedly, will reproduce the object up to any length. Using the earlier example, note how the sequence can be described succinctly as

f(x) = 2x + 1

which is much shorter than the sequence (which can in fact be infinite). So we can think of the rule as a compression of the object or, from the other point of view, of the object as the expansion of the rule. Here’s another example

Mandelbrot set (Wikipedia)

In this case, the object is a fractal, which can be described graphically by a potentially infinite set of points. The level of detail is infinite in the sense that one can zoom in to arbitrary levels without loss of detail. This is why the description at a literal level (i.e. pixels) is infinitely long. However, like all fractals, the Mandelbrot set can be described compactly in mathematical terms. So we say the set is highly regular by virtue of the existence of a short description that can reproduce all its detail. Here’s the short (formal) description for the Mandelbrot set: it is the set of complex numbers c for which the iteration

z → z² + c, starting from z = 0

remains bounded.
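The compactness of that description is easy to appreciate in executable form. A sketch (using the standard finite approximation: an escape radius of 2 and a capped iteration count stand in for "remains bounded"):

```python
# The Mandelbrot set's short description as code: iterate z -> z*z + c
# and keep c if the orbit stays bounded. The iteration cap and escape
# radius 2 make this a finite approximation of boundedness.

def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to belong to the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # escaped: the orbit diverges, c is outside
            return False
    return True
```

For example, `in_mandelbrot(-1 + 0j)` is True (the orbit cycles between 0 and -1), while `in_mandelbrot(1 + 0j)` is False (the orbit diverges). A handful of lines reproduces arbitrary detail of the set.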

In mathematics, the length of what we have called the short description is known as Kolmogorov complexity or, alternatively, Algorithmic Information Content (AIC). It is a measure of the quantity of information in an object, and is inversely related to regularity as we have discussed it here[2]. We say that something with comparatively low AIC exhibits regularity, as it can be compressed down to something much shorter.

I’ll regularly return to the concept of regularity as it is a fundamental way to look at pretty much everything, and is thus a very deep subject.


[1] For lack of a better word, I’m using the word object to refer to that which can house regularity, which is basically anything you can think of.

[2] Note that this is not the only way to look at regularity, but rather one of two main formalizations. What we have seen here is the algorithmic approach to complexity (and regularity), but there is also a statistical view that is more suited to objects that do not have a fixed description, but rather a statistical one.