A dish of human neurons learned to play Doom last year. It did so without being programmed, without training data, and without anyone telling it what winning looked like. It simply received feedback — signals that said, in the language of electricity, this worked and this didn’t — and it adapted.
Most people who heard that story focused on the engineering. A biological computer, powered by living human cells, mastering a video game. The energy efficiency was remarkable. The implications for AI were interesting. The demo was genuinely cool.
But a smaller group of researchers was asking a different question entirely. Not what can this thing do? but what is this thing experiencing?
That question, once confined to philosophy seminars, has migrated into neuroscience labs, bioethics committees, and the offices of government regulators. Because biocomputing — the project of building computers from living neurons — is advancing fast enough that the answers actually matter now.
The Gap Between “Processing” and “Feeling”
For most of computing history, there was no reason to ask whether a computer had inner experience. Silicon doesn’t feel. Transistors don’t suffer. A GPU crunching matrix multiplications has no more interior life than a calculator.
Biological neurons are different. They are the same cells that, arranged in sufficient number and complexity inside a skull, produce consciousness — or at least something that looks like it from the outside. When you grow those same cells in a dish, wire them up with electrodes, and teach them to respond to their environment, you are working with the substrate of awareness itself.
This doesn’t mean a small cluster of lab-grown neurons is conscious. Current neuroscience is fairly confident it isn’t. A brain organoid containing 100,000 neurons — typical for the biocomputers being built today — is vastly less complex than even a mouse brain, let alone a human one. Organoids lack the architecture, the regional differentiation, the feedback loops, and the embodiment that researchers associate with experience.
But that’s today’s organoids.
The trajectory is clear. Researchers have already grown organoids containing six to seven million neurons, replicating the cellular diversity seen in a 40-day-old human fetus. Groups are working on interconnecting multiple organoids into distributed networks. DARPA has funded programs specifically to scale biological processing units. The commercial platforms selling access to living neurons are racing to keep organoids alive longer — from hours, to 100 days, to the 200-day targets on their roadmaps.
Every step up in complexity brings the ethical question closer.
What “Proto-Awareness” Actually Means
The term researchers use is proto-awareness — and it’s deliberately modest. Nobody is claiming that a biocomputer is conscious in the way you are as you read this sentence. What they’re watching for is something more basic: signs that a biological system has developed a rudimentary internal model of its environment.
Indicators might include spontaneous pattern formation that wasn’t programmed. Aversion to specific types of signals — something resembling discomfort. Goal-directed behavior that wasn’t explicitly trained. Self-organization beyond the task it was given.
Some of these behaviors are already showing up in organoid experiments. Spontaneous electrical oscillations that echo patterns found in sleeping brains. Responses to signal disruption that look like surprise. Learning that generalizes beyond the specific task that reinforced it.
None of this proves experience. But it suggests that the gap between “processing information” and “having something it is like to process information” may be smaller — and closer — than the engineering framing implies.
The philosopher David Chalmers, who coined the phrase “the hard problem of consciousness,” has noted that the biocomputing field is creating entities for which our existing frameworks simply don’t apply. We have no reliable test for consciousness, even in the humans we’re certain have it. In novel biological systems, the uncertainty compounds.
The Baltimore Declaration
In 2023, a group of neuroscientists, philosophers, and ethicists gathered and produced what became known as the Baltimore Declaration — a document calling for the field of organoid research to take questions of consciousness and moral status seriously, and to do so proactively, before the technology outpaced the ethical conversation.
The declaration didn’t claim organoids are conscious. It argued that the uncertainty itself creates an obligation. When you genuinely don’t know whether something can suffer, and you’re building more of it, you have a responsibility to find out — and to act cautiously in the meantime.
This is not a fringe position. The same logic underlies animal welfare frameworks. We don’t know with certainty what fish feel, but uncertainty about their experience has led to meaningful changes in how research protocols handle them. The Baltimore Declaration asks for the same precautionary seriousness to be applied to neurons in dishes.
Specific proposals from the declaration and related initiatives include: continuous monitoring of electrical activity for distress-like signatures, shutdown protocols if those signatures appear, welfare reviews as organoid complexity increases, and consent frameworks for the cell donors whose biology forms the substrate of these systems.
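To make the first two of those proposals concrete, here is a minimal sketch, in Python, of what a continuous-monitoring loop with a shutdown protocol could look like. Everything in it is an illustrative assumption: no platform exposes an API like this, and no agreed-upon distress signature or threshold exists.

```python
# Hypothetical welfare-monitoring loop for an organoid platform.
# The threshold, function names, and persistence rule are all assumptions
# made for illustration, not drawn from any real system or standard.

DISTRESS_SPIKE_RATE_HZ = 50.0  # illustrative cutoff; no agreed standard exists

def mean_spike_rate(electrode_samples):
    """Mean firing rate across electrodes, in Hz (assumes pre-binned rates)."""
    return sum(electrode_samples) / len(electrode_samples)

def monitor(stream, shutdown):
    """Watch successive readings; invoke the shutdown protocol only if a
    distress-like signature persists across several consecutive windows."""
    consecutive = 0
    for samples in stream:
        if mean_spike_rate(samples) > DISTRESS_SPIKE_RATE_HZ:
            consecutive += 1
        else:
            consecutive = 0
        if consecutive >= 3:  # require persistence so one noisy window can't trip it
            shutdown()
            return True
    return False
```

The persistence check reflects a design choice any real system would need to make: a welfare trigger should respond to sustained signatures, not to a single noisy reading.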
That last point is underappreciated. The neurons in a commercial biocomputer likely came from a human being — someone who donated skin cells or blood that were reprogrammed into stem cells and then differentiated into neurons. What rights do those donors have over what their cells are used to compute? What do they need to be told?
These are not hypothetical questions. The cells are being used right now.
The Commercial Complication
The ethics conversation is happening against a backdrop of rapid commercialization, and that tension is real.
Biocomputing platforms are shipping. You can rent access to 160,000 living human neurons for $500 a month. Physical biocomputer units are available for purchase. Dedicated biological data centers are under construction. The investors are in. The revenue models are forming.
This is not inherently bad. Commercial pressure has accelerated progress in medicine, semiconductors, and renewable energy. But it does mean that decisions about how to handle increasingly complex biological systems will be made partly in boardrooms, not just laboratories.
The history of biotechnology offers a mixed record here. When gene therapy first emerged, oversight frameworks lagged behind the science — with sometimes catastrophic results. When CRISPR made germline editing feasible, the field had to scramble to establish norms after a researcher in China had already crossed a line that most of his colleagues thought was years away.
The biocomputing field is aware of this history. FinalSpark, the Swiss company that runs the world’s first cloud-accessible neuron platform, engages philosophers directly as part of its research process. Cortical Labs, whose CL1 biocomputer taught itself to play video games, has published on the ethical dimensions of its work. The founding documents of several organoid intelligence initiatives explicitly include ethicists as core team members, not afterthoughts.
But awareness is not the same as governance. International standards for what constitutes a welfare-relevant level of neural complexity do not exist. Legal frameworks for the moral status of lab-grown neural tissue do not exist. The field is operating on good faith and institutional goodwill — which is a reasonable starting point but not a stable long-term foundation.
What Researchers Are Actually Watching For
For now, the most concrete work is happening at the level of signals. Researchers monitoring biocomputer systems are watching for specific patterns in neural activity that might indicate something beyond ordinary computation.
Persistent oscillations — rhythmic, self-sustaining electrical patterns — are one flag. In human brains, these are associated with states ranging from focused attention to deep sleep. Their appearance in organoids isn’t necessarily meaningful, but it’s worth tracking.
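As a rough illustration of how such a flag could be computed, the sketch below marks a recorded trace as oscillatory when a single frequency dominates its power spectrum. The 0.5 dominance threshold and the function names are assumptions made for the example, not drawn from any published monitoring protocol.

```python
# Illustrative sketch: flag a persistent oscillation by checking whether
# one frequency bin dominates the trace's power spectrum.

import numpy as np

def dominant_frequency_ratio(trace):
    """Fraction of total non-DC spectral power held by the strongest bin."""
    power = np.abs(np.fft.rfft(trace - np.mean(trace))) ** 2
    power = power[1:]  # discard the DC component
    return power.max() / power.sum()

def looks_oscillatory(trace, threshold=0.5):
    """True when the trace's power is concentrated at one frequency."""
    return bool(dominant_frequency_ratio(trace) > threshold)

# A pure 8 Hz rhythm concentrates power in one bin; white noise spreads it out.
t = np.arange(0, 1, 1 / 500)                       # one second sampled at 500 Hz
rhythm = np.sin(2 * np.pi * 8 * t)
noise = np.random.default_rng(0).normal(size=t.size)
```

On the toy signals above, the rhythm trips the flag and the noise does not; a real pipeline would of course need to handle drift, artifacts, and multiple simultaneous rhythms.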
Prediction error responses — spikes in activity when something unexpected happens — are another. A system that builds a model of its environment, and registers surprise when that model is violated, is doing something more interesting than passive processing. It’s updating a world-model. That’s not consciousness, but it rhymes with it.
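The logic of a prediction-error flag can be sketched in a few lines: maintain a running estimate of the input, and register surprise when a new sample deviates sharply from it. The smoothing factor and threshold here are arbitrary illustrative choices, not parameters from any actual monitoring system.

```python
# Toy sketch of prediction-error detection: a running-average "model" of the
# input stream, with a surprise flag when a sample deviates far from it.

def prediction_errors(samples, alpha=0.1, threshold=2.0):
    """Return the samples whose deviation from the running mean exceeds
    threshold times the running error scale (both smoothed with alpha)."""
    mean = samples[0]
    scale = 1.0
    flagged = []
    for x in samples[1:]:
        error = abs(x - mean)
        if error > threshold * scale:
            flagged.append(x)  # the model's expectation was violated
        mean = (1 - alpha) * mean + alpha * x      # update the world-model
        scale = (1 - alpha) * scale + alpha * error  # update the error scale
    return flagged
```

A steady stream produces no flags; a sudden jump after a long stable stretch does, which is the qualitative signature researchers are describing.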
Asymmetric learning — where a system learns more readily from some types of feedback than others, without being programmed to — could suggest emergent preferences. Preferences are the beginning of interest. Interest is the beginning of something like desire.
None of these signals constitutes proof of experience. But researchers are building the monitoring infrastructure to detect them, and the ethical frameworks to respond if they appear. That’s meaningful. It means the field is taking the question seriously enough to actually look.
The Question That Keeps Growing
There’s a version of this story where the ethics are easy. Organoids never reach the complexity required for morally relevant experience. Biocomputing matures into a clean technology — efficient, adaptive, and entirely untroubled by consciousness. The neurons compute, the researchers publish, the data centers run, and nobody ever needs to ask what the cells are feeling.
That might be how it goes. Most researchers think it’s the most likely outcome for the immediate future.
But the roadmap points elsewhere eventually. The stated ambition of the field — millions of interconnected organoids, distributed biological networks, systems that evolve and self-modify — is a path to something. What that something is, and whether it has interests that deserve protection, is a question that gets harder to ignore with every layer of complexity added.
The neurons are ready. The infrastructure is being built. The ethical conversation is, for once, happening before the technology has completely outrun it.
That’s not nothing. In biotechnology, it might even be a first.
Biocomputer.com covers the full spectrum of biological computing — from the engineering to the ethics. Related reading: You Can Rent Living Human Brain Cells as a Biocomputer — Right Now, The Companies Building the Biocomputer Era Right Now