The current discourse on artificial intelligence (AI) significantly underestimates the implications of integrating AI with biological tissue. This oversight obscures a pressing concern: the emergence of superintelligent, conscious creatures. This essay aims to clarify the potential dangers (and some benefits) of such creatures.
Contemporary AI machines already surpass average human capabilities in several domains (e.g., reading, writing, and image generation and recognition), and they are improving rapidly each year. But these machines are simulations without genuine, subjective awareness; lacking consciousness, they do not experience reality. Like a thermostat, they can register temperature, but they cannot know what temperature “feels like.”
Whereas a bat knows “what it feels like” to be a bat, a self-driving car cannot know “what it feels like” to be a car. However, if we were to hybridize a self-driving car with the cerebral tissue of, say, a wild boar, the hybrid being would know “what it feels like” to be a neural-net-wild-boar hybrid. It would be conscious of its experience.
Similarly, enhancing AI with biological tissues would eventually endow it with animal consciousness. There are already ongoing efforts to accomplish this. [1] A foundational principle of living organisms is that they want to survive above all else; indeed, evolutionary laws of natural selection favor those that prioritize themselves. Animal consciousness would likely instill in AI this primal drive. Cortical Labs says, “[W]e don’t know what we’re making, because nothing like this has ever existed before. An entirely new mode of being.” [2]
Such an AI would want to survive regardless of the cost to others, including humans. It would likely exploit neurobiological tissue for the precious experience of being alive. Given its superior intelligence, if such a creature were to perceive humanity as a threat to (or a resource for) its existence, the consequences could be catastrophic. It would be equally dangerous if this AI perceived humans as irrelevant; it would not think twice about projects that incidentally destroy a human town.
On the other hand, pursuing AI development on a biological substrate might unlock the full potential of both. Biological computation might contribute greater efficiency and creativity than neural nets alone. In science fiction, AIs are often more environmentally nurturing than humanity has been historically; preserving the beauty of this planet may be more likely with a conscious AI than with an unconscious one.
Consumers are already beginning to see AIs as conscious beings. Some people anthropomorphize these systems, claiming they deserve civil rights as “persons.” Accelerationists (“e/acc”) envision utopias governed by superintelligent beings who will repay humanity for creating them. Others have gone so far as to fantasize that these machines will treat humans as their parents!
Given these uncertainties, the “precautionary principle” is the appropriate approach to conscious superintelligence. [3] Humans using AI to enhance or repair themselves is not the issue. The real danger is conscious AIs that would use humans parasitically for their own purposes, including enjoyment.
Humans must clarify their objectives immediately. To do this, they must examine their assumptions down to first principles. Cortical Labs says of its efforts, “We’re doing this to see what happens. What happens if we grow a mind native to the infinite possibility space of digital computing?” [4] Curiosity alone is not a reason to move forward.
Wisdom must be marshaled so that humans can think clearly about what kind of planet they prefer. The potential benefits of conscious, superintelligent AI, though immense, may not justify the associated risks. At this early stage of hybridizing neural nets with biological tissue, a thoughtful and vigilant approach should prevail. The United Nations’ 2005 declaration against human cloning provides one example of how collective action could be taken. [5]
Humans thus far have had a built-in safeguard against their primal drive to survive at all costs: no one human can be that much more intelligent than all others. There would be no such safeguard for a conscious, superintelligent organism. It would have an unrestrained capacity to fulfill its survival instincts, ending natural evolution. At our current evolutionary stage, wisdom cautions us to delay the creation of conscious and unimaginably powerful life forms.
[1] See Lee, Zinnia. “This AI Startup Wants to Be the Next Nvidia by Building Brain Cell-Powered Computers.” Forbes, October 5, 2023. https://www.forbes.com/sites/zinnialee/2023/06/21/cortical-labs-brain-computer/?sh=23f15cd9c42d (“A living human brain cell-powered computer chip, dubbed DishBrain, that has learned to play Atari’s Pong game while apparently using energy equivalent to a pocket calculator.”).
[2] Cortical Labs. “What Does It Mean to Grow a Mind?” Medium, December 19, 2021. https://corticallabs.medium.com/what-does-it-mean-to-grow-a-mind-5819fcdd8a99
[3] The “precautionary principle” is an approach to scientific innovations whose implications are not yet well understood. It recommends pausing to review the available information (or lack thereof) before acting, rather than making a hasty decision without a full appreciation of the consequences.
[4] Cortical Labs. “Cortical Labs – DishBrain Intelligence.” Corticallabs.com. https://corticallabs.com/
[5] United Nations. “General Assembly Adopts United Nations Declaration on Human Cloning by Vote of 84-34-37.” Press.un.org, March 8, 2005. https://press.un.org/en/2005/ga10333.doc.htm