The History of Biocomputing: From Philosophical Wetware to Living Computers
From John C. Lilly's 1968 vision of the brain as programmable hardware to Cortical Labs' $35,000 CL1, biocomputing has crossed from philosophy into product. Here's how it happened.

In 1968, neuroscientist John C. Lilly published Programming and Metaprogramming in the Human Biocomputer, framing the mind as programmable biological hardware. Decades later, in 2025, Cortical Labs released the CL1 — the world’s first commercial biological computer, built with 800,000 living human neurons cultured on a silicon chip and priced at around $35,000.

Silicon still rules. Yet biology — refined over billions of years — offers massive parallelism, self-repair, and energy efficiency that current chips cannot match. A human brain runs on roughly 20 watts; data centers gulp megawatts. The tension is clear: traditional computing hits physical walls while biocomputing turns living systems into programmable hardware.

Biology as computation is no longer metaphor. It is becoming engineering reality.

The Human Brain as Programmable Wetware — Lilly’s 1968 Vision

John C. Lilly viewed the brain as wetware — biological hardware running genetic programs for basic functions while consciousness could rewrite its own software. Through sensory deprivation tanks, meditation, and psychedelics, he explored metaprogramming: the idea that the “I” could consciously edit its own mental processes.

Lilly’s dolphin research and isolation experiments influenced cyberpunk, transhumanism, and modern brain-computer interfaces. He positioned biology itself as the ultimate computational substrate — flexible, adaptive, and far beyond rigid silicon logic. In retrospect, he was less a neuroscientist and more a systems architect describing a substrate that engineers would spend the next fifty years trying to formalize.

DNA as a Computational Medium — Adleman’s 1994 Breakthrough

In November 1994, Leonard M. Adleman — co-inventor of RSA encryption — solved a small instance of the Hamiltonian Path Problem using DNA strands in a test tube. Published in Science, the experiment encoded the graph’s vertices and edges as DNA sequences, using the four bases A, T, C, and G as its alphabet.

Molecular reactions — hybridization, ligation, PCR amplification — let billions of DNA molecules explore paths simultaneously. The correct solution emerged from the biochemical mixture. It was slow and required manual processing, but it proved two things that no one could ignore: DNA’s extraordinary information density and its capacity for chemical parallelism. This single experiment launched DNA computing as a legitimate research field, inspiring decades of work on molecular algorithms for optimization and cryptography.
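The generate-and-filter logic of Adleman’s approach can be mimicked in a few lines of ordinary code. This is a conceptual sketch only: the toy graph, the edge set, and the random-growth probability below are illustrative assumptions, not Adleman’s actual seven-node instance, and random path assembly here stands in for hybridization and ligation.

```python
import random

# Toy directed graph (illustrative, not Adleman's actual instance).
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}
n = 5            # number of vertices
start, end = 0, 4

def random_candidate(rng):
    """Mimic random ligation: grow a path by chaining compatible edges."""
    path = [start]
    while len(path) < n and rng.random() < 0.95:
        nxt = sorted(b for (a, b) in edges if a == path[-1])
        if not nxt:
            break
        path.append(rng.choice(nxt))
    return path

def is_hamiltonian(path):
    """The 'filtering' steps: correct endpoints, correct length, every vertex once."""
    return (path[0] == start and path[-1] == end
            and len(path) == n and len(set(path)) == n)

rng = random.Random(42)
soup = [random_candidate(rng) for _ in range(20000)]   # billions of strands, in the real tube
solutions = {tuple(p) for p in soup if is_hamiltonian(p)}
print(solutions)
```

The point of the sketch is the division of labor: candidate generation is massively parallel and blind, and all of the “computation” happens in the filtering, just as Adleman’s PCR and gel-separation steps did chemically.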

Synthetic Biology Turns Cells into Logic Gates — The 2000s

In 2000, Michael Elowitz and Stanislas Leibler built the repressilator — a synthetic genetic oscillator in E. coli that made bacteria “blink” through cycling gene expression. Published in Nature, it demonstrated that programmable behavior could be engineered inside living cells from scratch.
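The repressilator’s dynamics — three genes, each repressing the next in a ring — can be reproduced with a few lines of numerical integration. The equations below are the standard dimensionless form of the Elowitz–Leibler model; the parameter values and initial conditions are illustrative choices that produce sustained oscillation, not fitted experimental numbers.

```python
# Euler-integration sketch of the dimensionless Elowitz-Leibler repressilator:
# gene i's mRNA is repressed by the protein of gene (i-1) mod 3.
alpha, alpha0, beta, n = 216.0, 0.216, 5.0, 2.0   # illustrative parameters
dt, steps = 0.01, 20000

m = [0.0, 0.0, 0.0]    # mRNA levels for genes 0, 1, 2
p = [5.0, 0.0, 15.0]   # proteins; an asymmetric start kicks off the oscillation

trace = []
for _ in range(steps):
    dm = [-m[i] + alpha / (1.0 + p[(i - 1) % 3] ** n) + alpha0 for i in range(3)]
    dp = [-beta * (p[i] - m[i]) for i in range(3)]
    m = [m[i] + dt * dm[i] for i in range(3)]
    p = [p[i] + dt * dp[i] for i in range(3)]
    trace.append(p[0])

# Sustained oscillation: protein 0 keeps crossing its mean long after transients die out.
late = trace[len(trace) // 2:]
mean = sum(late) / len(late)
crossings = sum(1 for a, b in zip(late, late[1:]) if (a - mean) * (b - mean) < 0)
print("mean crossings in late half:", crossings)
```

The “blinking” bacteria are exactly this trace: one of the three proteins drove GFP expression, so the oscillation was visible as cycling fluorescence.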

At MIT, Tom Knight advanced BioBricks: standardized biological parts treated like LEGO blocks for assembling logic gates, switches, and counters in bacteria or yeast. The insight was industrial — if you could modularize biology, you could scale it. Synthetic biology transformed cells into tiny biocomputers capable of sensing, processing, and outputting biological signals. The line between organism and machine began to blur in ways that still make regulators nervous.
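The logic-gate framing above is often modeled with Hill functions, which describe how strongly an input molecule switches a promoter on. The sketch below is a conceptual toy, not a specific BioBrick part: the gate wiring, thresholds, and parameters are assumptions chosen to make the AND behavior visible.

```python
def hill_act(x, K=1.0, n=2):
    """Activating Hill function: fraction of promoter activity driven by input x."""
    return x ** n / (K ** n + x ** n)

def genetic_and(inducer_a, inducer_b):
    """Toy two-input genetic AND gate: output requires both activators present."""
    return hill_act(inducer_a) * hill_act(inducer_b)

# Truth-table-style sweep: low (0.0) vs. high (10.0) inducer concentrations.
for a in (0.0, 10.0):
    for b in (0.0, 10.0):
        print(f"A={a:4} B={b:4} -> output {genetic_and(a, b):.2f}")
```

The modularity insight falls out of the code: because each part exposes the same “interface” (a concentration in, a promoter activity out), gates compose the way `hill_act` calls do.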

Wetware Hybrids — Neurons Meet Silicon (Late 1990s–2010s)

In 1999, William Ditto’s team at Georgia Tech interfaced leech neurons with a silicon chip to perform basic addition. Living neurons acted as nonlinear computational elements, processing spatiotemporal patterns with an adaptability silicon gates fundamentally lack. The cyberpunk term wetware — borrowed from Lilly via science fiction — gained serious traction in academic labs describing these bio-hybrid systems.

This was not neuroscience for its own sake. It was a proof-of-concept that neurons could slot into a computational architecture alongside conventional hardware. The implications were uncomfortable and exciting in equal measure.

The Organoid Intelligence Era — DishBrain and Beyond (2022–Present)

The 2020s brought organoid intelligence (OI): 3D brain organoids grown from human stem cells, used as computing platforms rather than disease models alone. In 2022, Cortical Labs’ DishBrain system grew human neurons on a multi-electrode array and gave them a task — playing Pong.

The neurons learned within roughly five minutes. The system delivered predictable electrical stimulation when the paddle hit the ball and unpredictable, noisy stimulation when it missed, and the cells adapted faster than many simulated AI agents. It was not general intelligence. It was something weirder: structured biological learning in a dish, directed by engineered feedback loops.
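The closed-loop principle behind DishBrain can be sketched abstractly: reward good outcomes with predictable input and punish bad ones with noise, and the system drifts toward actions that keep its inputs predictable. Everything below is a deliberately simplified assumption — a two-action agent with multiplicative weight updates stands in for real neuronal plasticity.

```python
import random

# Toy closed-loop trainer in the spirit of DishBrain's feedback rule (assumed
# simplification): hits earn predictable feedback, which reinforces the pathway;
# misses earn unpredictable feedback, which degrades it.
rng = random.Random(0)
weights = {"up": 1.0, "down": 1.0}   # stand-in for neuronal response tendencies
TARGET = "up"                         # the action that actually intercepts the ball

def feedback(action):
    return "predictable" if action == TARGET else "unpredictable"

hits = []
for trial in range(500):
    total = weights["up"] + weights["down"]
    action = "up" if rng.random() < weights["up"] / total else "down"
    if feedback(action) == "predictable":
        weights[action] *= 1.05      # predictable input reinforces the choice
        hits.append(1)
    else:
        weights[action] *= 0.95      # noisy input weakens it
        hits.append(0)

early = sum(hits[:100]) / 100
late = sum(hits[-100:]) / 100
print(f"hit rate: first 100 trials {early:.2f}, last 100 trials {late:.2f}")
```

The loop never tells the agent what the right answer is; it only shapes the statistics of its input, which is what made the DishBrain result feel qualitatively different from supervised training.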

This trajectory culminated in 2025 with the CL1 from Cortical Labs: approximately 800,000 human neurons reprogrammed from adult donor cells, housed in a nutrient-rich environment on a silicon chip. It ships with a biOS (biological intelligence operating system) for deploying code, supports closed-loop feedback, and sustains cell viability for up to six months. Target applications include AI research, drug discovery, disease modeling, and low-energy computing. FinalSpark offers complementary cloud-based neuron array platforms for remote experiments — neuronal compute as a service, effectively.

Today, biocomputing spans RNA circuits, mycelium networks, and hybrids merging biological learning with digital interfaces. Researchers are simultaneously developing ethical frameworks for “intelligence in a dish” and advancing adaptive, ultra-low-power systems. Both conversations are overdue.

Why Biocomputing Matters — Biology’s Computational Edge

Silicon faces heat, power, and scaling limits that are not engineering problems — they are physics problems. Biological systems excel at parallel processing, fault tolerance, and learning from sparse data in ways that no current chip architecture replicates. Biocomputing sits at the convergence of synthetic biology, neuroscience, and AI, at the exact frontier where biology becomes programmable computation.

The field also forces questions that hardware roadmaps do not address: What counts as intelligence? How do we ethically engage living systems that may process, adapt, and respond? Could wetware complement or surpass aspects of silicon AI in efficiency and adaptability — not eventually, but now?

The Field Is No Longer Speculation

From Lilly’s philosophical human biocomputer to Adleman’s DNA proof, from genetic circuits to DishBrain’s Pong-playing neurons, to today’s purchasable CL1 hardware — biocomputing has crossed from thought experiment to product. The next chapters will scale organoids, refine interfaces, build hybrid architectures, and deliver applications in medicine, robotics, and sustainable computing.

The question was always: what happens when we stop simulating life and start computing with it?

The answer is already growing in a dish.


References

  1. Adleman, L.M. (1994). Molecular Computation of Solutions to Combinatorial Problems. Science. https://www.science.org/doi/10.1126/science.7973651
  2. Lilly, J.C. (1968). Programming and Metaprogramming in the Human Biocomputer. Julian Press.
  3. Kagan, B.J. et al. (2022). In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron. https://doi.org/10.1016/j.neuron.2022.09.001
  4. Smirnova, L. et al. (2023). Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish. Frontiers in Science. https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235/full
  5. Cortical Labs. (2025). CL1 Biological Computer. https://corticallabs.com/cl1


