
Hello, fellow humans!
I just released a new episode of the Musing Mind Podcast - it’s a conversation with Emma Stamm about digital capitalism, the data-fication of consciousness, and acid communism.
You can find it here, or read more about it below.
Ok, in we go.
Algorithms, Minds, & the Impending Death of Novelty
If you compare the predictive processing (PP) theory of human minds and the role of algorithms in digital capitalism, you discover an interesting nugget: both minds and algorithms are driven by the same directive, which is to minimize prediction error.
PP theory hypothesizes that the brain constructs internally generated models of the world, like those you experience in dreams, and that what we experience in our waking consciousness is not the world 'out there', but our models, or "controlled hallucinations", of that world. As we receive stimuli from the world out there, our brains use those stimuli to check the veracity of their predictions. When stimuli confirm our predictive models, we gain confidence in the relevant assumptions used to generate the model. When stimuli contradict our models, a quick error-correction process ensues, where the brain searches for a more appropriate prediction to explain the stimuli. Throughout this process, minimizing prediction error is the primary 'purpose', or function, of the cognitive system.
When we talk about algorithms in the context of digital capitalism, the implication is usually predictive algorithms. Predictive algorithms are where the new money is. The ones used by companies to decide what ads to show you; the ones being adopted by hospitals to determine which condition your symptoms map onto; the ones used by finance professionals to predict where the market is heading. Predictive algorithms try to predict the future on the basis of the past. The less prediction error, the more valuable the algorithm.
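To make the parallel concrete, here's a toy sketch of that shared loop - mine, not anything lifted from the PP literature or a real ad-targeting system. Predict, compare against what actually arrives, adjust to shrink the error:

```python
import random

weight = 0.0         # the model's single learned parameter
learning_rate = 0.1

def world(x):
    # the "world out there": a hidden regularity the model tries to capture
    return 2.0 * x

for step in range(1000):
    x = random.uniform(-1, 1)            # incoming stimulus / data point
    prediction = weight * x              # the model's guess
    error = world(x) - prediction        # prediction error: the only learning signal
    weight += learning_rate * error * x  # nudge the model to shrink future error

print(f"learned weight: {weight:.3f}")   # converges near 2.0 as error is minimized
```

Whether you read "model" as a brain's generative model or a company's ad-targeting regression, the directive is the same line of code: shrink the error.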
What's interesting in both cases - human minds and predictive algorithms - is that we're terrified of what would happen if they achieve their goal. We're terrified of eliminating prediction error.
...
Minds
A cognitive system with 100% predictive success would literally never be surprised, and this sounds like no good, existentially. Minimizing predictive error may make sense for a disembodied informational system, but it makes a pretty paltry meaning of life, as far as humans are concerned. Andy Clark, a philosopher at the helm of PP theory, encapsulates these concerns, writing that minimizing predictive error stands in direct opposition to more humane visions of what flourishing looks like:
"Prediction error minimizing agents are driven – or so the worry goes - by a fundamental information-theoretic goal that is itself inimical to human flourishing. For such agents, the ultimate information-theoretic goal is a state in which there is zero prediction error. This looks diametrically opposed to oft-lauded goals such as continued personal growth and self-actualization. How, if at all, are we to reconcile such expansive visions of human flourishing with the information-theoretic goal of prediction error minimization?"
He has a nifty solution. As a proponent of PP theory, he holds that humans are driven by the directive of their cognitive systems to minimize predictive error. However, we've yet to take into account the role of cultural evolution in 'subverting' that quest.
Cultural evolution is a perpetrator of "information-theoretic subversion". By changing the cultural environments that PP systems endeavor to predict, cultural evolution keeps resetting, or subverting, the progress made towards minimizing error. Old assumptions don't always map onto new environments, and so our PP systems must constantly re-learn the shifting cultural environments, ensuring there always remains a degree of predictive error.
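Continuing the toy sketch from above (again, an illustration of my own, not a model of culture): give the learner a regularity that keeps drifting, and watch the surprise come back after every shift:

```python
import random

weight = 0.0
learning_rate = 0.1
hidden = 2.0   # the world's current regularity

for step in range(10_000):
    if step % 500 == 0:
        # culture shifts; old assumptions stop mapping onto the new environment
        hidden = random.uniform(-3, 3)
        print(f"step {step}: the model's assumption is now off by {abs(hidden - weight):.2f}")
    x = random.uniform(-1, 1)
    error = hidden * x - weight * x      # surprise reappears after every shift
    weight += learning_rate * error * x
```

The learner keeps catching up, but as long as the environment keeps moving, prediction error never settles at zero for good.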
...
Algorithms
I don't feel it necessary to expand on the potential consequences of machine learning algorithms that achieve 100% predictive accuracy. We've got heaps of sci-fi movies, books, and shows that do it for us, and they all converge on the same tune: Bad.
And when you look at the constraint that's kept our cognitive PP systems from getting too good at prediction (cultural evolution), there's more reason for concern. In theory, algorithms could be subverted by evolving environments just as much as PP systems. But the more that algorithms become forces that shape culture, the less vulnerable they are to being subverted by it. That is, if algorithms drive cultural evolution, they'll no longer be surprised by new cultural environments, because it is on the basis of their predictions that these new environments are taking shape.
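Here's that worry as one more toy sketch (my own illustration, not a model of any real recommender system): once predictions feed back into the thing being predicted, the gap between them - the surprise - decays to zero:

```python
preference = 0.0   # what the culture "actually" wants
prediction = 1.0   # what the algorithm expects (and therefore serves up)

for step in range(50):
    prediction += 0.1 * (preference - prediction)  # the algorithm learns from culture...
    preference += 0.3 * (prediction - preference)  # ...but culture also bends toward what it's served

print(f"final surprise: {abs(preference - prediction):.6f}")  # ~0: no error left, no novelty left
```

Unlike the drifting-culture sketch above, nothing here ever resets the error, because the environment's movement is itself driven by the predictions.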
One way out of this bind might be making a two-way relationship out of what is, presently, a one-way street. Algorithms, and the data that trains them, are not presently subject to democratic governance. We interact with algorithms on a daily basis, but have little-to-no power in shaping how they're deployed. This arrangement allows algorithms an unimpeded path to shaping culture while remaining insulated from information-theoretic subversion. Instead, we could establish institutions of democratic data governance that make algorithms subject to us. Democratic governance of the data that trains algorithms would introduce a deeper layer of constraint upon their predictive success.
On that front, Salomé Viljoen is doing some really exciting work. And for more on the predictive processing idea, my conversation with Chris Letheby explored some details.
…
Digital Positivism & Psychedelics
One of the most interesting areas of Emma Stamm’s work is at the intersection of what she calls “digital positivism” and psychedelics.
Positivism is essentially the idea that what counts as “knowledge” must be verifiable by others. It’s a knowledge-as-consensus-reality kind of deal. If I claim there’s an oyster floating in the sky, but no one else can see it, that isn’t knowledge. It isn’t verifiable. But if I say there’s a tree in the distance, and you all can see the tree, then great. Knowledge.
Digital positivism, then, is the idea that what counts as knowledge must be representable in digital format, as data. Here, data is understood as computational information, which means it must be representable via binary code, as 0’s and 1’s. The problem, on Emma’s account, is that consciousness is not representable in its totality via 0’s and 1’s. Consciousness is also composed of ineffable regions and experiences that we can’t quite express. So under a digital positivist framework, these ineffable parts of consciousness don’t matter, because they cannot be definitively represented to others.
But psychedelic science is mounting a very compelling case that these ineffable areas of consciousness matter Very Much. Digital positivism amputates some of the most important parts of human experience, and psychedelic science proves this. As Emma writes, “Psychedelics provide an empirical refutation to digital positivism.”
The first clue is that the mechanism of action in psychedelic therapy is not a clear formula of: “If you take X amount of LSD, you will have Y effect.” Psychedelic therapy is not dose-dependent, but experience-dependent. If I take a light dose but have a very meaningful experience regarding some point of trauma or tension in my life, I will likely experience greater therapeutic benefits than someone who took 3x my dose but had no such experience.
On the whole, one of the most common features of psychedelic trips is that they’re ineffable - they cannot be satisfactorily transcribed into words. I cannot tell you about my trip in a way that conveys the actual gravity of the experience. My experience is not verifiable by others. And yet, psychedelic trips consistently rank amongst the most meaningful experiences in people’s lives.
Things get really interesting once you tie this in with the above bit on algorithms. We’re living in a world that is increasingly shaped by algorithms, which both predict and shape the future on the basis of data about the past. But now, we know that certain elements - very important elements - of the human experience cannot be represented via data, and so are left out of these methodologies.
What are the consequences of living in a world that is increasingly shaped by methods that cannot, by definition, include some of the most important parts of consciousness?
Emma & I explore this space, as well as fun ideas like “acid communism”, in our podcast convo. You can find more info here:
The Meaning of Life is Complexity (?)
In response to the question, “What are you optimistic about?”, the psychologist Martin Seligman wrote: “that God may come at the end.”
By God, he means the progressive realization of Omniscience (the ultimate end product of science), Omnipotence (the ultimate end product of technology), and Righteousness (the ultimate end product of institutions), via the tendency of biological and cultural evolution to select for greater and greater complexity, since complexity is associated with positive-sum games that hold a survival and reproductive edge over simplicity.
Complexity is given as a path towards God, and evolution is naturally tending towards complexity, so God may come along in the end, after all.
The whole thing is a nice & short read, and the complexity bit is its crux: on this view, a “meaningful life” is one that enables the evolution of more and more complexity, as this is the path towards God’s coming.
End
As always, you can respond directly to this email with responses, or reach out on Twitter. You can find more essays & podcasts on my website. I’m here for conversation & community.
Until next time,
Oshan
Mind Matters