Mind Matters is a newsletter written by Oshan Jarow, exploring post-neoliberal economic possibilities, contemplative philosophy, consciousness, & some bountiful absurdities of being alive. If you’re reading this but aren’t subscribed, you can join here:
Dear fellow humans,
Today, a few areas of exploration.
— How do we get from neurons to consciousness? How do brains internally model complex worlds?
— The economy is a lens through which the future refracts into the present. AI is coming, in some form, and Americans report being generally scared they’ll lose their jobs. But what form of political economy could transmute our fear into delight? Take our shitty jobs, ye AIs! You are but the herald of a more beautiful world!
— Mind control is possible, but only in the long term.
In we go.
How does a wet clump of neurons confined inside the dark chamber of a skull generate an internal model of the world?
Or: predictive processing makes sense conceptually, but how does it actually happen, at the cellular level?
It’s strange - and also somewhat fitting, for us - that we have such a bland & unexciting label attached to so dazzling, & important, a theory as “predictive processing” (PP).
In short, since I’ve filled enough pixels elsewhere on PP: the brain is always generating, and cultivating, mental models of the world. Through subjective experience, we don’t encounter the world ‘out there’. Instead, we’re interfacing with our brain’s internally generated model — the same kind it generates as a dream when we’re asleep.
In recent months, I’ve explored this idea from three different angles: meditation, psychedelics, and fiction. But across all three, I simply accepted that, somehow, the brain, this wet clump of neurons locked up inside a skull, plus some neurons tucked down along the spine, can both generate and modulate such a splendidly rich, varied, & complex phenomenon as my waking conscious experience.
…
How does predictive processing actually work, at the cellular level? How do things like neurons and dendrites and electricity generate something as abstract as consciousness?
How do neurons generate worlds?
I found hints of an answer, or at least a theory, in a recent essay by Michael Levin & Rafael Yuste.
…
The human brain has nearly 100 billion interconnected neurons. These neurons form cliques, like a high school cafeteria: ‘modules’, or neural circuits (‘modules’ is in quote marks because we have a history of taking that metaphor too far). If you look at a single neuron, it’s tough to make sense of how something so simple could generate an experiential depth so complex.
But if you zoom out & see the brain as a whole, you see a system of interacting modules. The key is that these modules can generate activity endogenously, without any help from the outside world:
“Because neurons can set each other off, neural circuits can generate internal states of activity that are independent of the outside world. A group of connected neurons could auto-excite each other and become active together for a period of time, even if nothing external is happening. This is how we might understand the existence of the concepts and abstractions that populate the human mind – as the endogenous activity of modules, made up of ensembles of neurons.”
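To see how mechanically cheap that kind of endogenous activity is, here’s a toy simulation (entirely my own sketch, not from the essay): give a small, recurrently connected ‘module’ a brief kick of external input, then cut the input entirely, and the ensemble keeps itself going through its own connections.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                                    # neurons in one hypothetical module
W = rng.uniform(0.2, 0.4, size=(n, n))   # excitatory recurrent weights (made up)
np.fill_diagonal(W, 0.0)                 # no self-connections

rate = np.zeros(n)                       # firing rates, initially silent
kick = np.zeros(n)
kick[:3] = 1.0                           # brief external input to a few neurons

for t in range(20):
    external = kick if t < 2 else 0.0    # the outside world goes quiet after t = 1
    rate = np.tanh(W @ rate + external)  # saturating rate update

print(rate.round(2))                     # the whole ensemble is still active, long after input stopped
```

Real cortical ensembles are messier than this, obviously, but the mechanical point stands: once neurons can set each other off, activity no longer needs the outside world.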
So the concepts and abstractions that populate the human mind are the endogenous activity of auto-exciting neural modules. Ok. But still, how do we get from the raw data, ‘look, these 18 different neural modules are firing in this particular pattern’, to something so abstract as ‘point my toes and twirl because I’m performing in a ballet’?
The answer given: turn the neural modules into symbols. The brain can use the inventory of intrinsically generated, patterned activity in the same way humans use letters in the alphabet. Language is basically a collective decision to let, for example, the combination of five symbols (“a+p+p+l+e”) equal the concept “apple”.
Similarly, the brain can say: let the combination of these five activated neural circuits equal the concept “apple”. All the possible states of internal activity thus provide the brain an ‘alphabet’ it can use to construct symbolic models of the world:
“Using those intrinsic activity states as symbols, evolution could then build formal representations of reality. It could manipulate those states instead of manipulating the reality, just as we do by defining mathematical terms to explore relationships between objects. From this point of view – the gist of which was already proposed by Immanuel Kant in his Critique of Pure Reason (1781, 1787) – the evolution of the nervous system represents the appearance of a new formal world, a symbolic world, that greatly expands the possibilities of the physical world because it gives us a way to explore and manipulate it mentally.”
Ok, so brains can turn intrinsic patterns of activity into symbols that represent abstractions, and then manipulate that internally symbolized reality to run predictions about the world ‘out there’.
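Here’s a minimal sketch of that ‘alphabet’ idea, with made-up circuit IDs and only two ‘words’ (my own toy, not the authors’ model):

```python
# Hypothetical circuit IDs; a "concept" is a particular combination of active
# circuits, the way "apple" is a particular combination of letters.
CONCEPTS = {
    "apple":  frozenset({"c1", "c4", "c7", "c9", "c12"}),
    "ballet": frozenset({"c2", "c4", "c8", "c15", "c18"}),
}

def decode(active_circuits):
    """Return the concept whose 'spelling' matches the currently active circuits."""
    for concept, spelling in CONCEPTS.items():
        if spelling == frozenset(active_circuits):
            return concept
    return None

print(decode({"c1", "c4", "c7", "c9", "c12"}))  # -> apple
```

Nothing in the brain is this tidy, of course, but the lookup captures the claim: the symbol is the pattern of co-activation, not any single neuron.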
But not all predictions are equal. Some are very simple, like modeling the act of twitching my big toe. Others are more complex, like modeling the “distress that makes human wills founder daily under the crushing number of living things and stars” (according to Annie Dillard, this condition vexed a particular French paleontologist).
To deal with this complexity, the brain uses hierarchy and pattern completion. Simple concepts - like twitching one’s toe - can be modeled by lowly neurons, like those down in the spinal cord. More complex models, like the French paleontologist’s distress, can be modeled by the big-shot neurons in the prefrontal cortex.
This affords the brain an efficient trick. To model complex things, it doesn’t need to activate every single neuron that’s part of the symbol assigned to that complex thing. Instead, it endogenously activates just enough of the relevant neurons to ‘trigger’ the rest into the proper pattern.
For example, take the image below. Once you know the letter “G” (image 1), you can then recognize G from only half the inputs (image 2), from which you can complete the pattern on your own (image 3).
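Pattern completion has a classic toy model, the Hopfield network, which I’ll borrow as an illustration (the 5×5 “G” bitmap and the code are my own stand-ins, not the essay’s figure): store a pattern, present only half of it, and let the recurrent dynamics fill in the rest.

```python
import numpy as np

# A crude 5x5 "G" (1 = ink), standing in for image 1.
G = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
])
pattern = np.where(G.flatten() == 1, 1, -1)   # recode as +/-1 for Hopfield-style dynamics

W = np.outer(pattern, pattern).astype(float)  # one-shot Hebbian weights
np.fill_diagonal(W, 0.0)                      # no self-connections

cue = pattern.astype(float)
cue[len(cue) // 2:] = 0.0                     # show only the top half (image 2)

state = cue
for _ in range(5):                            # let the recurrent dynamics settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))         # True: the network completes the "G" (image 3)
```

Storing one pattern is trivially easy; the interesting regime is many overlapping patterns, which is presumably where the nested hierarchy described below earns its keep.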
“Using modules nested in a hierarchy provides a neat solution to a tough design challenge: instead of specifying and controlling every element, one at a time, nature uses neuronal ensembles as computational building blocks to perform different functions at different levels. This progression towards increasing abstraction could help explain how cognition and consciousness might arise, as emergent functional properties, from relatively simple neural hardware.”
…
The metaphor of neural activity as a language is, I think, more literal than it sounds. When infants learn languages, they’re learning to assign conceptual meaning to pre-existing neural activity. Learning a language is learning how to construct a particular internal model of the world.
This bamboozles me. For example, when we tell an infant that this red circular thing is an “apple”, is their brain going, “okay, this set of neural circuitry that’s active when I behold this weird red circular thing is, forevermore, to be known as ‘apple’”? Having linked the concept “apple” to the relevant neural activity, the brain can now model that apple internally.
This would explain why they say that you know you’ve really learned a language once you start reflexively thinking in it. It suggests you’re no longer actively carrying out the conversion of “this word means that concept”. Instead, that process happens automatically, because you have a functional, internal representation of the world using that language.
…
Anyway.
…
What if AI stealing our jobs didn’t scare us? Can’t we make this place beautiful?
A recent Gallup poll found that Americans are generally worried about AI. Of the 37% who reported being “more concerned than excited” about the increased presence of AI in daily life, the most common reason was “loss of human jobs”.
This makes sense, because the US has a terrible safety net. Losing your job can be a really big, bad deal.
The irony is that this is exactly what Americans hoped would happen during the period from 1830 to 1930. Economic development was to deliver technology that would increasingly liberate us from labor. The future was to be full of leisure (I explored the decline of this vision in my conversation with the historian Benjamin Hunnicutt).
Instead, we’ve designed an economy that leaves us shaking in our boots. But of course, things can be otherwise. We can alter the bones of the economy, and transmute our fear into elation. There is a world - well within reach - where we’d rejoice at the prospect of unemployment. Take our shitty jobs, ye AIs!¹
To put this a little differently: the economy is a lens through which the prospect of AI refracts. With a different economy, the refraction changes. That which at first presented as frightful, may in fact prove delightful. The question isn’t whether or not AI will take our jobs, but what sort of economic lens we have in place to refract whatever AI does do. Are we well positioned to see beautiful refractions? Or terrifying ones?
Despite the dystopian flavor, the realtor in this Maggie Smith poem is right. To build beauty, start with the bones:
Life is short, though I keep this from my children. Life is short, and I’ve shortened mine in a thousand delicious, ill-advised ways, a thousand deliciously ill-advised ways I’ll keep from my children. The world is at least fifty percent terrible, and that’s a conservative estimate, though I keep this from my children. For every bird there is a stone thrown at a bird. For every loved child, a child broken, bagged, sunk in a lake. Life is short and the world is at least half terrible, and for every kind stranger, there is one who would break you, though I keep this from my children. I am trying to sell them the world. Any decent realtor, walking you through a real shithole, chirps on about good bones: This place could be beautiful, right? You could make this place beautiful. [Good Bones, by Maggie Smith]
We can make this place beautiful; we should start with good bones.
[and just because I hate when people call for economic change but don’t specify some ideas, here are some better bones I’m interested in: guaranteed income, universal healthcare, sovereign wealth funds, public investment banks, land value taxation, and codetermination, to name a few. If you’re interested in this sort of policy discourse, I’m working on this over at the Library of Economic Possibility.]
…
Mind control is impossible in the short term, but increasingly feasible in the long term
Some folks recently published an absurdly interesting paper that integrates biology, Buddhism, & AI. I think my next newsletter will be a walkthrough/response, but here’s one fun tidbit about mind control.
Mind control is impossible in the short term. For example, don’t think about a purple octopus.
You probably did. If not, here:
Ok, maybe you’re some great meditator and still haven’t. Fine. Then here’s a paradox to seal the deal:
“Would not perfect control of one’s mind imply that one knew exactly what one was going to think, and then subsequently thought it? In that case, whenever a new thought arose, we would, absurdly, be rethinking what we had thought already, or otherwise there would, just as absurdly, have to be an infinite line of prior control modules in place for a single controlled thought to occur. Such consequences suggest that the concept of individual mind control is incoherent.”
This all being the case, the longer our time horizon, the more control we do have:
“‘In control of my mind’ (a necessary aspect of the common notion of free will) is logically impossible on the short time scale, but may be coherent on a very long time scale (‘I've undertaken practices to eventually change the statistical distribution of the kinds of thoughts I will have in the future’). This in turn underscores the importance of long-term strategies, such as a vow to expand cognition.”
We can control our minds in the long term by undertaking practices (and institutional strategies) that alter the statistical distribution of the kinds of thoughts we may have in the future.
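Here’s a toy version of what ‘altering the statistical distribution’ could look like (my own construction, not from the paper): model moments of thought as a two-state Markov chain, ‘distracted’ and ‘attentive’. You can’t dictate the next thought, but a practice that nudges the transition probabilities shifts the long-run statistics of the thoughts you tend to have.

```python
import numpy as np

def stationary(T):
    """Long-run distribution of states for a transition matrix T."""
    vals, vecs = np.linalg.eig(T.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Rows: current state (distracted, attentive); columns: next state.
baseline = np.array([[0.9, 0.1],    # distracted -> mostly stays distracted
                     [0.6, 0.4]])   # attentive  -> usually slips back

trained  = np.array([[0.8, 0.2],    # practice nudges every transition a little
                     [0.4, 0.6]])

print(stationary(baseline).round(2))  # [0.86 0.14]: mostly distracted
print(stationary(trained).round(2))   # [0.67 0.33]: same mind, different statistics
```

No single step is ‘controlled’ in either chain; only the distribution changes.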
This idea of intentionally altering the statistical distribution of potential conscious states ties in very nicely with Karl Friston’s free energy principle (the FEP, which I undertook to explain here). The FEP is an organizing principle for all living systems (modest, I know). It states that living systems are characterized by the efforts they undertake to bias the probability distribution of states they inhabit towards those most conducive to their continued existence.
A tree is an assemblage of matter that may occupy a vast number of states in the future. Only a sliver of those states are ones where the tree continues to exist as a system with clear boundaries between itself and other things. So the tree, as a living system, acts so as to alter the probability distribution of its future states towards those where it continues to exist.
Humans do this too. How? Predictive processing. The point of this whole thing I’ve described here, brains generating internal models of the world, is just evolution’s latest trick in the game of survival. So if you learn anything more about how it actually works, I’m all ears.
You can find more writing, podcasts, or even explore my research garden, on my website. If you’d like to talk, reach out! You can reply directly, find me on Twitter, or join the Discord.
Until next time,
~ Oshan
¹ If we do usher in a world where AIs do all the shitty work, the imperative to develop understandings, and ethical frameworks, around ‘how AIs feel’ becomes massive (it already is, but anyway). I don’t want our freedom to come at the cost of enslaving robots who feel even a minor semblance of the suffering we’re struggling to escape.