Sunday, March 8, 2026

Software Maximalist

     Why is the human brain so hard to replicate? It doesn't make any sense. We are dumb monkeys, and there are billions of us, and we are throwing the world economy and our smartest minds at the simple task of replicating the intellectual ability of a single human in a computer. And we haven't done it. Despite having LLMs that can solve some of the hardest math problems, we still can't match the performance of a small child on various important reasoning tasks. This seems crazy, and it has to change soon, doesn't it?

    I am not convinced we need insanely large data centers to make this work. I am a software maximalist, not a hardware one. A human brain is a bunch of cells mushed together, and it weighs only three pounds. An elephant brain weighs at least ten pounds. Despite this, humans can build spaceships that fly to the moon, and elephants are of comparable intelligence to an octopus. It does not seem to be the number of neurons that matters, but rather their shape and interconnectivity. Scale seems to matter quite a lot (it is hard to imagine a superintelligent fly), but the actual software seems to matter as well, in addition to the way the hardware is organized. There may be enough latent capability in our current GPU clusters to pave the way for billions of geniuses. We don't know this for sure, but everyone seems much too confident in assuming away the possibility.

Moral Value is Neural

     All moral value is derived from biological neural connections. Nothing else. It seems pretty clear that, because of this, we should be very sensitive about what we use such connections for. Everything that interacts with the physical world is based within it, at least to our knowledge, so we should be very worried about how we treat these components of entities that seem to have subjective self-experience. Qualia is all that matters, nothing else. Where does qualia come from? Where does pain, pleasure, or the "experience" of achieving goals stem from? Where are those decisions hatched in the first place? Well, it's clearly the brain.

    As a result, we should be very protective of neurons. Any form of biological neuron, human or otherwise, should be treated with unreserved sanctity. These are the building blocks of moral states, of our subjective experience, and perhaps of everything that could possibly matter. We should tread carefully and wisely, and probably not create incentives for widespread suffering where such important components are simply a means to an end.

Tuesday, February 17, 2026

Technological Vertigo

     Very soon, people will feel it. Something I am coining "technological vertigo": the almost out-of-body experience that will be thrust upon the human species as it experiences a step change in technological capabilities. People who once had productive careers, a mortgage, and plans for children will be plunged into the uncanny valley. They'll find it almost impossible to distinguish reality from fiction, as every voice, video, and picture becomes inherently substitutable. Is that really your brother on FaceTime, or has a hacker used a digitally mapped replica to confuse and persuade you? Ideas that can't possibly be comprehended in their entirety begin to pulse, and the public's knowledge of world events (especially at the highest level of geopolitics) evaporates. There is no "source of truth," no website or news agency immune from the informational chaos. Stocks explode upward and downward, career capital evaporates, and one is left to wonder if their memories are real, if the world was ever truly so normal. The gap between the rich and the poor, and between rich and poor countries, both expands and compresses, as eventually there is only the few, and the many. Change in the physical world compounds after change in the mental, like the aftershock of a quickly passing earthquake. The spiritual world, the initial refuge of the confused, permanently shatters into an unrecognizable weapon. The world of the past, so important, yet so small.

Sunday, January 18, 2026

When AI Replaces Jobs

     OpenAI leadership keeps pissing me off with their obviously inauthentic claim that humans will be doing productive work in the future. They are targeting an automated AI researcher by 2028, and yet claim that there will be bountiful jobs for humans in the future, super high-paying and interesting ones. Sam mentioned something to the effect of people in ten years working really exciting, high-paying jobs in space. Why say this, if he very likely doesn't believe it?

    It's simple: if you tell everyone you're going to take their jobs, they will very much dislike you. The idea of people's jobs being "taken" has led to nationalism, racism, and societal disruption since the founding of America (remember the Know Nothing party?). OpenAI knows that a lot of potential unemployment and civil unrest is coming. People who lose their jobs can't provide for their families, and the most important thing in the world to everyone is providing for themselves and their families. Financial disruption, in simple terms, is a really big deal. Maybe it seems fine to avoid near-term societal disruption by being dishonest, but in the long run it's more disruptive than simply stating the facts (as Elon does). Quick change is hard, so we had better start preparing the public in advance.

    People who go bankrupt jump off skyscrapers. Even if we believe in a post-work, UBI-driven future, let's not forget this.

Monday, July 21, 2025

Brain Farming

Brain farming: the commercialization of human brain matter as computational substrate.

Medical research and brain farming are distinct. The former may involve using brain organoids to understand disease and test treatments. Brain farming is the industrial-scale production of biocomputers that use human brain matter for profit.

It is a personal goal of mine to have brain farming globally banned by the end of 2026.

Sunday, July 20, 2025

The Price of Losing is Infinite

     Over the past few weeks, Meta has poached top research talent from competing companies (largely OpenAI and Apple) in order to build a team focused on "Superintelligence." Zuckerberg is certainly a CEO to be reckoned with. The company's stock price was at a low of $90 in late 2022, crashing from a historic high of $380 the prior year. Mark laid off 25% of his workforce and orchestrated one of the most dramatic corporate turnarounds in history. He backed the wrong horse with the Metaverse (that being early is still being wrong is a core tenet of investing), but now everyone knows that the game to be played is AI. The stock price is now $700, an almost 8x increase in only a few years (in the mega cap universe, that is quite insane). He is now the third richest person in the world, narrowly edging out Jeff Bezos. Mark is not a dumb guy. He understands two very simple truths: the entire world is racing to create machine superintelligence, and there is no prize for fifth place. In an ASI-driven future, the price of losing is infinite. Why shell out hundreds of millions of dollars for top research talent, unless you believe this? Is it truly so irrational to offer a top researcher a ten-million-dollar signing bonus, if they even slightly increase your probability of an infinite gain?

    I don't believe Mark is crazy for trying to gut OpenAI from the inside. If anything, he is not taking his position seriously enough.

Saturday, July 12, 2025

Protecting Novel Minds

Background:

    In March 2025, I published Mind Crime: The Moral Frontier of Artificial Intelligence. The book argues that if digital consciousness is possible, and humanity continues to race toward creating superintelligent machines, we could be sleepwalking into a horrific moral catastrophe. If we create digital minds capable of suffering without establishing safeguards in advance, we risk suffering on astronomical scales across cosmic timescales.

    Humanity has a consistent track record of failing to deliberate on the moral implications of new technologies before deploying them (from slavery to factory farming to nuclear weapons), often creating horrible outcomes that persist for generations before we morally progress. This challenge is compounded by our approach to superintelligence development. Unlike previous technologies where we could learn from mistakes and gradually course-correct, superintelligent systems could lead to a rapid centralization of power that could permanently lock in bad values.

    Since the book's publication, I’ve shifted my focus to the practical challenges of protecting digital minds. If my worldview is correct, we face a massive coordination problem with a rapidly closing window. How do we build the political will necessary for this issue when we struggle to coordinate on much simpler ones (animal welfare, AI safety, etc.)?

The Empathy Gap:

    Developing practical solutions to protect digital minds is difficult. Consciousness itself is complex, and digital mind rights interact with AI safety in intricate and sometimes conflicting ways. Many researchers who care deeply about suffering risks are wary of info hazards and are generally reluctant to do advocacy or pursue the type of practical actions that would result in a direct policy response.

    However, the greatest challenge is likely the empathy gap that results from this issue being so far outside the Overton window. Only a handful of individuals, with little political capital, care deeply about digital minds, and they are addressing an issue that most people either see as distant science fiction or ignore entirely. Unlike animal welfare, where we at least acknowledge that animals suffer even as we continue exploiting them, digital consciousness doesn't even register as a real concern for policymakers and the public. The abstract nature of potential digital suffering creates perfect conditions for moral complacency, making it nearly impossible to generate the urgency needed for proactive protection.

Cortical Labs

    In March 2025, Australian startup Cortical Labs launched the world's first commercial "biological computer," the CL1. For $35,000, individuals can purchase this shoebox-sized device, which contains hundreds of thousands of living human neurons grown on silicon chips, studded with electrodes that send signals into the neural tissue and receive responses back. These aren't simulations of biology; they're actual human brain cells, reprogrammed from volunteer blood samples into cortical neurons, that form connections and learn from electrical feedback. Built-in life support systems (pumps, gas exchangers, nutrient circulation) keep the brain cells alive and functioning. Cortical Labs claims that these CL1 devices, due to their use of human neurons, can generalize from small amounts of data and make complex decisions that AI systems struggle with, all while consuming only a few watts of power compared to the kilowatts required by large AI models.

    What is most significant about Cortical Labs is their aggressive commercialization strategy. Their goal is to get "Synthetic Biological Intelligence" into as many hands as possible, and they will soon offer "Wetware-as-a-Service": cloud access where individuals can remotely use these biological computing systems without needing specialized laboratory facilities. Multiple CL1 units can be networked together in server racks for larger-scale biological computing operations, making this the first time living brain matter is commercially available as a computing substrate.

The Empathy Gap, Revisited

    It does not matter what your opinion on digital consciousness is: the prospect of commercializing the computational power of human biological neurons sounds potentially horrifying. For all the debate about consciousness and the different frameworks for understanding it, it seems pretty clear that humans are conscious, and our brains are made out of human neurons. Thus, it's reasonable to conclude that sufficiently advanced networks of human neurons could suffer. If Cortical Labs’ technology is scaled up over time, we tangibly risk the creation of "suffering in a dish," widely available to be bought and sold in the marketplace. The abstract empathy problem we face with digital minds substantially shrinks.

    If we consider the broader category of “Novel Minds,” new forms of consciousness stemming from unnatural means (digital minds, biological-chip hybrids, grown brain organoids), we can see that the problem of potential suffering affects all of them, but with increasing empathy given our biological disposition toward human neural tissue.

    Additionally, many of the other major obstacles paralyzing digital consciousness advocacy shrink as well. The info hazards here are much less substantial, as we are literally already pushing forward the commercialization of a technology that could broadly distribute “suffering in a dish.” The AI safety conflicts also lessen dramatically, and political will could be much easier to gather. Can you not imagine both sides of the political aisle scoring easy wins here? Both "playing God" and "runaway capitalism" are unfavorable but potentially apt framings, and even the least sophisticated American can understand that commercializing human neural tissue as a computing substrate sounds, well, horrifying.

Precedent Setting

    Focusing on novel minds in general, and narrowing our initial efforts to products like the CL1, may provide an important window of opportunity for policy. The stakes here are potentially much lower than with digital minds, with smaller markets and fewer geopolitical complications. The policy wins are much clearer, with this technology offering concrete targets, visceral public reaction, and an easier route to building political will.

    The precedents we set could be transformative for the broader consciousness protection challenge. The route to banning or heavily restricting the commercialization of conscious entities could begin with biological computing systems and naturally extend to digital minds as they emerge. Success here would create legal frameworks establishing that consciousness, regardless of substrate, deserves protection from commercial exploitation.
