Is biology necessary for consciousness? A response to Seth's paper on biological naturalism
In this post, I explain why I think that Anil Seth’s recent paper does not conclusively establish that phenomenal consciousness is inherently biological, nor, therefore, that artificial intelligence (AI) is unlikely ever to be conscious.
To preface this: I’m not arguing here that AI is likely to be conscious, now or in the future (or that computational functionalism is true). I certainly don’t think you can just naively take what current chatbots tell you as evidence: chatbots are trained to be masters at playing any role. Even academics who are sympathetic to the possibility of AI consciousness raise similar caveats (see e.g. whenaiseemsconscious.org).
My own personal position is to agree with Jonathan Birch’s “centrist manifesto”: both over- and under-attribution of consciousness are challenges to be taken seriously. It’s an important topic, where the starting point should be uncertainty and the goal to reduce that uncertainty with great care. This includes critically engaging with views that claim some degree of certainty in either direction – such as expressed in Seth’s paper.
Setting the scene
As artificial intelligence (AI) becomes more intelligent, many wonder whether machines are, or ever could be, phenomenally conscious: could they experience what it is like to see something, think something, feel something? Long relegated to sci-fi, this question suddenly seems more salient with large language models, which are trained to write text the way humans do, but can also process images and videos, or act as a cognitive ‘brain’ for robots with actual physical bodies. Conscious or not, we will have (or already have) machines that seem like they could be conscious.
Some thinkers in AI, neuroscience, and philosophy come down on the side of “no”: artificial intelligence, current or future, cannot have (non-trivial) consciousness, at least as long as AI is implemented on similar hardware as current computers. One recent prominent example is the paper by Anil Seth: Conscious artificial intelligence and biological naturalism. Seth argues against computational functionalism, a philosophical position often used to support the possibility of AI consciousness, and instead for biological naturalism, which sees consciousness as intimately connected to biological organisms.
I’ve seen Seth’s paper discussed in a number of places. Just a few weeks ago, Mustafa Suleyman, CEO of Microsoft AI, wrote a blog post to warn about the dangers of mistakenly attributing consciousness to AI. That’s a fair worry to have. At the same time, Suleyman is dismissive of the possibility of AI consciousness altogether, and he frequently refers to Seth’s paper to justify his position.
Seth, an expert in the field, appreciates that this thesis is not self-evidently true, and presents a detailed argument. He does also acknowledge that he could be wrong (somewhat contrary to Suleyman’s interpretation), so even if we find his paper convincing, we shouldn’t just dismiss the issue of AI consciousness out of hand.
In any case, I don’t think Seth actually makes a strong case against AI consciousness, and I’ll explain why below.
The crux
For the impatient reader, here is the crux of my criticism. My biggest problem with Seth’s paper is not that the argument he is presenting is necessarily wrong; it’s that it is not leading to the conclusion he is claiming it is.
In the abstract of the paper, he writes he will be “concluding that real artificial consciousness is unlikely along current trajectories” – or “very unlikely” on Twitter.
But the argument he is actually making is that computational functionalism isn’t necessarily correct, and that biological naturalism could be an alternative (one of the many other proposals for theories of consciousness). IMO the appropriate conclusion should then be uncertainty about whether AI can be conscious or not, not near-certainty that it cannot.
Prerequisites
I won’t cover Seth’s paper in full detail here; instead, I’ll address some key issues. I recommend reading the paper for context.
Here are some quick prerequisites.
The issue at hand is phenomenal consciousness (whether or not for a thing ‘it is like something to be that thing’), not other interpretations of the word. See this post by Nick Alonso for a nice introduction and overview.
Next, computational functionalism is the position that, roughly, any system that implements the right computation (whatever that might be) would be conscious. For example, the brain is a system that implements the right computation (or so it seems, see Frankish, 2016), and thus brings about consciousness. But if a computer implements that same computation, it would also be conscious.
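As a minimal (and admittedly simplistic) illustration of the multiple-realizability intuition behind functionalism — my own toy analogy, not an example from Seth’s paper — consider two implementations of the same input-output function. The functionalist idea is that, at some level of description, what matters is the computation being performed, not how it is physically realised:

```python
# Two different realisations of the same input-output function (XOR):
# one computes it arithmetically, the other looks it up in a table.
# At the input-output level they implement the "same computation".

def xor_arithmetic(a, b):
    # Realisation 1: arithmetic
    return (a + b) % 2

XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_lookup(a, b):
    # Realisation 2: lookup table
    return XOR_TABLE[(a, b)]

# The two realisations agree on every input:
assert all(
    xor_arithmetic(a, b) == xor_lookup(a, b)
    for a in (0, 1) for b in (0, 1)
)
```

Of course, whether “implementing the right computation” should be judged at this coarse input-output level or at some finer grain of internal structure is itself one of the contested questions in this debate.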
Seth’s argument
Overall there are two parts to his argument.
First, proponents of the possibility of AI consciousness often rely on the possibility of computational functionalism being true (for example, see Butlin et al., 2023). Seth argues against computational functionalism.
Second, he argues for an alternative position, namely biological naturalism, which in turn is also an argument against machine consciousness / computational functionalism, because it is incompatible with the latter.
I’ll discuss these two parts in turn.
Against computational functionalism?
From the framing of the paper one would expect Seth to build a case for why computational functionalism is unlikely to be true. But from what I can tell, all the points he is making are instead arguing that one shouldn’t just assume, or be naively misled to believe, that computational functionalism definitely is true.
Seth starts out by identifying various “biases” that might lead us to mistakenly attribute consciousness to AI, such as anthropomorphism. Without going into the details here, I mostly agree with his statements as such. For example, we can’t just take language models at their word, especially given that they have been directly optimised to imitate humans. Just because something quacks like a duck doesn’t mean it’s necessarily a duck.
That said, the converse doesn’t follow: we can’t be sure it’s not a duck either. Nor should we dismiss the quacking of a duck, or the human-like behavioural capabilities of AI, as a possible source of evidence in general, under the right conditions.
Seth also discusses various issues with computational functionalism itself. His points basically all boil down to the same core argument: the brain might implement various processes that are not (Turing-)computable; or at least not exactly computable, or not in practice.
Again this might be correct, but it doesn’t show that computational functionalism is wrong or that AI can’t be conscious either. A lot happens in the brain, only some of which might matter for consciousness. Perhaps those subsets of processes that happen to be relevant for consciousness are Turing-computable. Or, maybe the details don’t matter and the computations match in a more abstract sense. Or, the details matter for the exact specific experience realised by a system, but not for the capability for having qualitatively similar experiences, or even very different kinds of experience.
For example, a machine might never experience vision in exactly the same way as me, because it can’t implement exactly the same computations or processes my brain does; but it might still experience vision in its own way.
As per Seth, “computational functionalism is the view that computations of some kind are sufficient to instantiate consciousness”. This does not appear to entail that any specific instantiation of experience, e.g. exactly what is going on in my head down to the last detail, must be exactly reproducible in other systems.
In general, Seth establishes that there are various differences between brains and computers, but not necessarily that those differences are important as far as consciousness is concerned.1
He does acknowledge that in multiple places. For example:
The arguments so far do not disprove computational functionalism. But they do render it less plausible, and less appealing [...]
Whether Seth’s points make computational functionalism less plausible or appealing will depend on one’s broader beliefs. Perhaps if your initial view was that the brain only performs Turing-computable processes and that this is the main reason to accept computational functionalism, learning that the brain’s biological processes are idiosyncratic in various ways might move the needle somewhat. Even then, the needle might not move very far towards computational functionalism being implausible, which is what Seth’s overall conclusion appears to require.
Ultimately, he concludes this part of the paper with (italics mine):
[...] the functions or computations implemented by (conscious) biological systems may not be separable from their material basis. This means that the substrate flexibility required for conscious AI may not hold. There are also alternatives to assuming that brains compute, or that consciousness is a matter of (Turing) computation, undermining assumptions of computational functionalism.
Thus: computational functionalism may not be correct. But it also may be.
Computational functionalism is not even necessary for AI consciousness
By the way: even if Seth did make the argument that computational functionalism is unlikely to be true, it would not strictly follow that AI cannot be conscious.
For example, preceding the last quote, he writes:2
Conscious AI requires both computational functionalism to hold, and sufficient substrate flexibility such that the computations sufficient for consciousness can be implemented in AI hardware (silicon).
I don’t think that’s correct. As he says, computational functionalism is not sufficient on its own, because we also need the second part (the relevant computation must also be implementable in AI hardware). But neither is it necessary: computational functionalism could be false and conscious AI still possible. It could be that, counter to computational functionalism, the substrate does matter beyond computation, but that both carbon and silicon happen to be valid substrates (or that whatever other conditions are required by theories other than computational functionalism are met).
For biological naturalism?
If computational functionalism isn’t the right framework to explain consciousness, what is? In the second part of the paper, Seth presents his alternative, namely biological naturalism:
[…] the claim that consciousness is a property of only (but not necessarily all) living systems.
To make his case, Seth appeals to Karl Friston’s free energy principle (FEP) / predictive processing (PP) framework and various pieces of work in that space.
In my view, this part of the paper is a mirror image of the first and ultimately fails to make a strong case against the possibility of AI consciousness for similar reasons. In the first part, Seth’s conclusion would require an argument against computational functionalism, but he only really presents an argument that computational functionalism isn’t necessarily true. In the second part, at best, he sketches out what an alternative biology-focused story might look like. But this is still quite a distance from making a strong argument that biological naturalism is likely to be true.
Caveats on the free energy principle and predictive processing
How good of a “story”3 is it? I’ll discuss some of the details next, but one of my main high-level issues is this: like others, I believe some skepticism is justified when it comes to the FEP framework. This especially applies to claims of the FEP being a “grand unifying principle for cognitive science and biology” (Hohwy, 2020). It has been argued that such claims are unwarranted, and rely on taking a number of pieces that can be individually sound (like the brain containing a generative model, or living systems being self-maintaining) and erroneously treating them as necessary parts of a greater whole. For examples of papers on the fairly critical end of the spectrum, along with representative quotes, see this footnote.4
It is this linkage that Seth bases his argument on, because he aims to present a unified picture of both processes in the brain and life more generally (it’s also not clear that FEP’s version of PP is the right model of the brain5).
When I did my PhD and postdoc in computational neuroscience (studying a different flavour of generative model as a model of the brain), I used to be quite interested in teasing apart the claims of the FEP and analysing them critically. Discussing all of this at length would be far beyond the scope of this post. I also have to confess that I have not followed the FEP literature in the roughly ten years since, nor have I looked in detail at Seth’s various references. That said, having read the aforementioned recent critical papers in preparation for this post, it looks like the same issues are still in play and certainly have not been resolved.
Below I’ll focus on Seth’s argument as presented in his paper, not this wider context. But on a high level, in as far as Seth’s case relies on an assumption that in FEP we have a good theory of how the brain works, how life works, and how they’re connected, and that we can use that as a background foundation for a theory of consciousness, some initial doubt seems appropriate.
From predictive processing in the brain to all of life?
Seth reiterates the FEP predictive processing view: in common with other theories, the brain is thought to implement a hierarchical generative model, where during perception, top-down generated priors interact with the bottom-up sensory evidence to infer the causes of sensory input. Specific to FEP’s version of PP, by assuming a particular form of model and approximation scheme, one arrives at a specific algorithmic model where the bottom-up messages are prediction errors to be minimised by the system. Consciousness then is argued to be related to top-down predictions.
I certainly buy into the idea that the brain (and the cortex in particular) might implement a generative model, and the notion of perception as “controlled hallucination” (Reichert et al., 2013). And it’s plausible that, at least in humans and other animals with a cortex, this plays into what we consciously perceive. What’s less clear are the details of the brain’s model (there are many ways to realise similar ideas that don’t end up with a model that specifically sends around prediction errors; see Gershman, 2019), and, crucially, whether these aspects are necessary for consciousness in other systems.
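To make the prediction-error framing concrete, here is a deliberately minimal toy sketch — my own illustration under strong simplifying assumptions, not Seth’s or any published model. A single latent estimate is adjusted by gradient descent until the top-down prior and the bottom-up sensory evidence are reconciled:

```python
# Toy 1-D "predictive coding": infer a latent cause mu by minimising
# precision-weighted squared prediction errors. Perception settles on
# the estimate that best reconciles the top-down prior with the
# bottom-up sensory evidence (here, a precision-weighted average).

def infer(observation, prior, obs_precision=1.0, prior_precision=0.5,
          lr=0.05, steps=500):
    mu = prior  # start at the top-down prediction
    for _ in range(steps):
        sensory_error = observation - mu  # bottom-up prediction error
        prior_error = prior - mu          # deviation from the prior
        # gradient step on the summed precision-weighted squared errors
        mu += lr * (obs_precision * sensory_error
                    + prior_precision * prior_error)
    return mu

estimate = infer(observation=2.0, prior=0.0)
# converges to the precision-weighted average:
# (1.0 * 2.0 + 0.5 * 0.0) / (1.0 + 0.5) ≈ 1.33
```

Note that nothing in this sketch is tied to a biological substrate — which is exactly why the further question is whether the FEP’s linkage to living systems adds a genuinely necessary ingredient.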
Seth then observes that the same lens of prediction error minimisation can also be applied to acting in the environment via active inference, to control of the body more generally, and ultimately to various processes in service of self-preservation/maintenance/production (autopoiesis) in living organisms. He writes:
“These observations together suggest a kind of epistemic biological naturalism in which human conscious experiences can only be understood in light of our nature as living, self-sustaining organisms”
(whether we should accept that inference I’ll get to soon).
Seth then argues that, beyond PP, the FEP framework further allows one to both broaden the scope and deepen the connection to all living things. Free energy minimisation is meant to capture both perceptual inferences from sensory observations and fundamental processes of life in general. A common framing in FEP work that is meant to convey the idea is this:
“The overall intuition here is that, by minimising sensory prediction error through active inference, living systems will naturally tend to be in states they expect – or predict – themselves to be in, and so will continue to exist”.
In tying together perceptual or sensory processes with biophysical processes, like persistence in the face of thermodynamic pressures, this connection is meant to go beyond just mathematical correspondence:
“Putting all this together suggests that the (minimisation of) free energy driving metabolism, autopoiesis, and perception is not just metaphorically equivalent, but is in some physical sense the same thing.”
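As a very loose illustration of the intuition in these quotes — my own toy example, with no claim to capture the FEP’s actual mathematics — an agent can reduce prediction error not by updating its beliefs but by acting on the world, thereby keeping itself in the states it “expects” to be in:

```python
# Toy "active inference" homeostat: the agent expects its temperature
# to be 37.0 and acts on the world to cancel the prediction error,
# rather than revising its expectation. The environment constantly
# pulls the temperature away; acting keeps it near the expected state.

def regulate(temp, expected=37.0, drift=-0.5, gain=0.3, steps=200):
    for _ in range(steps):
        temp += drift              # the world perturbs the state
        error = expected - temp    # prediction error
        temp += gain * error       # act to reduce the error
    return temp

final = regulate(temp=20.0)
# settles near (not exactly at) the expected value, like a
# proportional controller with a steady-state offset
```

Whether a silicon device running a loop like this “minimises free energy” in the same physical sense as a metabolising cell is, of course, precisely what is at issue in the passages above.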
There are multiple issues here, reminiscent of those we encountered in the first part.
First, as mentioned earlier, whether the FEP can do the required heavy lifting can be disputed (see footnote 4; from my past exposure, this includes how FEP connects information-theoretic and thermodynamic ideas, both conceptually and mathematically). Seth does also mention that there is a lot of debate here (Section 4.2).
Second, even if one buys the premise of an underlying unifying principle here that relates perceptual processing in the brain with biological processes in living systems more generally, it’s simply not clear if all that can be explained together by this principle is also necessary for consciousness. Sure, the concerns that come with having a body and staying alive are likely an important lens under which to consider specifics of human and biological consciousness, but can we be sure they can’t be dissociated?
Third, a lot of this, especially as it relates to PP, sounds very much like functional descriptions that could be implemented on computers. At worst, physical embodiment might be a necessary ingredient, but that could be realised with robots.
Seth does bring up that this all looks like computational processes (end of Section 4.2), but maintains that the fact that the FEP unifies the computational aspects with the processes of the living substrate somehow makes the difference here. Later on in the paper (Section 5.7), he also brings up that robotic implementations could cover some embodiment/embeddedness conditions, though perhaps not everything that makes biology special, such as continuous self-regeneration (autopoiesis).
This brings us to the fourth issue. A lot of the paper focuses on how biology is different from computers; in biological systems, any (widely construed) computation might be deeply entwined with the hardware and not easily divorced from it, as might be the case in robots. And, biological systems, including individual cells, do a lot of things other than just computing some input-output functions.
But the question at hand isn’t just if biology is different. The question is why should that difference matter for consciousness (and how certain can we be about that). Taking a step back from all the FEP framing for a moment: why exactly should it matter for consciousness that animals are made from cells that regenerate?
To conclude this part, here is my best attempt at identifying and condensing down the core structure of Seth’s argument:
1. Consciousness has to do with the brain making predictions as part of perceptual processing.
2. The FEP account unifies perception with more general biological processes and establishes a deeper physical correspondence.
3. Consciousness (therefore?) is also necessarily associated (how exactly?) with those more general processes.
4. These more general processes, or the nature of the correspondence, are particular to biological systems and cannot be reproduced in standard AI hardware.
5. Conclusion: AI (using current approaches / hardware) cannot be conscious.
As I indicated, I believe all of these premises can be questioned.
Consciousness is deeply connected to life… hey, not that deeply!
A significant part of Seth’s paper is thus about establishing a deep connection between perceptual processing in the brain and fundamental aspects of life down to single cells by ways of the FEP. But Seth himself appears to have some qualms about where this argument might ultimately lead us: to biopsychism, the view that all life is conscious.
Seth writes he does not endorse biopsychism, and that this “raises the challenging question of what distinguishes conscious from non-conscious living systems”. Seth says he won’t address this question here. He merely points to the possibility that only some forms of life might have the right kind of “multimodal survival-relevant integration”.
This strikes me as another sign that the connection to biology-specific processes was tenuous to begin with, and that we might be arguing backwards from the conclusion here. That conclusion is meant to be that specifically biological brains, but not artificial brains or biology in general, are the carriers of consciousness.
It’s a natural starting point to look for consciousness in the generative perceptual model in human and similar brains. But this might suggest that merely implementing the corresponding computations is what matters; instead, the biological naturalist must argue for some important difference in how the biological models are instantiated, one that doesn’t seem arbitrary, like the substrate being made from carbon (which Seth also rejects). The FEP is claimed to offer a path, where perceptual inference is just a special case of a much more widely applicable principle of prediction error / free energy minimisation that permeates all life. The connection to consciousness becomes blurry, though, and, to avoid biopsychism, we now have to weaken the broadening of scope again and demarcate brain-based perceptual models (etc.) as special somehow. At this point, should we be convinced that this was the right path to follow, or should we rather retreat to where we started, and seek consciousness in perceptual world models (or something akin to that)?
Conclusion
In summary, Seth sets out to show that AI consciousness is “unlikely” or “very unlikely” along current trajectories. But his argument doesn’t disprove computational functionalism, it merely shows that it shouldn’t be a foregone conclusion. Similarly, he sketches out an alternative framework in the form of biological naturalism, but in my view it’s just that, a sketch, not a strong argument to accept it at the exclusion of other possibilities. The biological naturalism part of the argument also heavily relies on the most ambitious claims of the FEP as its backbone, so if like me you hold some scepticism there, your needle might not move a lot.
Throughout the paper, Seth appears to switch back and forth between acknowledging the uncertainty on one hand, and on the other hand maintaining that his argument renders computational functionalism less plausible and biological naturalism the (no pun intended) natural choice.
In a later section (Section 5), he gives quite a helpful overview of how the various hypotheses on consciousness would play out when it comes to AI, which again only goes to show how complex and multifaceted the landscape is.
In Section 6, aside from addressing the dangers of merely conscious-seeming AI, he notes that if AI consciousness is possible after all, there are grave risks associated with building it, whether on purpose or inadvertently. As he explains, this is because with consciousness might come the capacity for suffering and other morally significant properties. This part also appears to run somewhat counter to how he presents his main take-home message: even if one’s probability for machine consciousness is somewhat low, such ethical concerns suggest we should not dismiss the possibility prematurely.
Unfortunately, in his conclusion, he appears bullish again about his thesis, even a bit polemic:
“If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted language models then we overestimate the machines, and we underestimate ourselves”.
(FWIW I think there’s plenty to criticise in this space, but this seems a bit out of character with the rest of the paper).
I think it would have been really helpful if Seth had more explicitly spelled out the premises of his argument and how they are meant to lead to its conclusion; and indeed, what exactly that conclusion is: unlikely AI consciousness, or uncertain AI consciousness.
But if his thesis indeed is that AI consciousness is very unlikely, I remain unconvinced by the argument provided.
Notes
For helpful comments, thanks to: Adam Bales, Shane Legg, Rif A. Saurous, Swante Scholz, and Murray Shanahan.
You can cite this post as:
Reichert, D. P. (2025). ‘Is biology necessary for consciousness? A response to Seth’s paper on biological naturalism’, David P. Reichert’s Substack, 27 September. Available at: https://davidpreichert.substack.com/p/is-biology-necessary-for-consciousness. (Accessed: <your date>)
1. I’m guessing that some of his references do make that case, but I’m only considering Seth’s own argument here.
2. Also “And if computational functionalism is false, then conscious AI is not possible anyway”, “since conscious AI depends on computational functionalism holding [...]”, etc.
3. Seth’s section titles are, “The story from predictive processing” and “The story from the free energy principle”.
4. For an informal post on some of the issues with the FEP, see this 2023 post by Steven Byrnes. For a detailed philosophical argument, see Williams (2021) (you can just skim the conclusion to get an idea): “If I am right, such a claim is extremely misleading: there is no chain of reasoning—no ‘‘high road’’—that takes one from first principles concerning life or existence to a mechanistic brain theory or framework for generating such theories. Once one recognises this, it is reasonable to ask what could justify even a moderate degree of confidence in some of the most ambitious claims associated with the FEP [...]”. For a similar criticism concerning PP and the brain, see Litwin & Miłkowski (2020): “A unified theory of anything cannot be created simply via a series of equivocations, which are then used to argue that the same theoretical construct underlies various phenomena.” For conceptual issues with how certain mathematical models are used in FEP, see Bruineberg et al. (2021). Finally, see Gershman (2019) for a (more sympathetically framed) analysis of how various aspects of PP can be dissociated and related to wider ‘Bayesian brain’ work.
5. Last time I checked, the more brain-specific claims of FEP/PP, like the specific form of the predictive model to be implemented by the cortex, aren’t clearly decided by evidence (at least vs. generative alternatives of non-FEP flavour; see Walsh et al., 2020, and Hodson et al., 2024, for recent reviews).

