Jobs are the Least of Our Problems
It's pretty interesting to see the consensus change around jobs, and to see just how much the general public hates AI. I actually hope this changes, because AI can bring such important transformative changes to the world. As mass layoffs begin to happen, my guess is the public will over-index on this eventuality (massive unemployment Armageddon), and possibly miss any critical discussion of greater long-term risks or benefits. We may be too wrapped up in personal financial crises to contemplate all getting wrecked by the superintelligent deity, or we may be so stressed about getting that UBI payment that we campaign against technological advancements that could lead to rapid scientific discovery and eventual utopia. I tend to think alignment is probably really hard (technically) and the governance concerns seem near-insurmountable, but we really have no idea yet and it might be a coin flip. And we might miss the meaning of both sides of the coin if unemployment is at 20%, and be unwilling and unable to make the correct trade-off.
Aligned Intelligence Solutions
Sunday, March 15, 2026
We're Still Early
I constantly think that I am "late" to things in the AI Safety/EA space. Meaning people start talking about AI welfare, and I think "wow, crazy I was a couple of years early to that, but now people have caught up." Then I talk to my friends from Chicago, or other "normies," and their worldview hasn't changed since 2020. AI is hyped-up nonsense, or is maybe important but will take an extremely long time to diffuse. Among actual people, the Dario worldview is still on the crazy/bleeding edge, and is almost entirely centralized in the smallest bubble of a few thousand people in San Francisco. It's why I'm here, and why I find myself entirely unable to leave until this ASI stuff is fully sorted.
Sunday, March 8, 2026
Software Maximalist
Why is the human brain so hard to replicate? It doesn't make any sense. We are dumb monkeys, and there are billions of us, and we are throwing the world economy and our smartest minds at the simple task of trying to just replicate the intellectual ability of a single human in a computer. And we haven't done it. Despite having LLMs that can solve some of the hardest math problems, we still can't match the performance of a small child on various important reasoning tasks. This seems crazy, and it has to change soon, doesn't it?
I am not convinced we need insanely large data centers to make this work. I am a software maximalist, not a hardware one. A human brain is a bunch of cells mushed together, and it weighs only three pounds. An elephant brain weighs ten pounds, at least. Despite this, humans can build spaceships that fly to the moon, and elephants are of comparable intelligence to an octopus. It does not seem to be the number of neurons, but rather their shape and interconnectivity. Scale seems to matter quite a lot; it is hard to imagine a superintelligent fly. But the actual software seems to matter as well (in addition to the way the hardware is organized). There may be enough latent capability in our current GPU clusters to pave the way for billions of geniuses. We don't know this for sure, but everyone seems much too confident in assuming away the possibility.
Moral Value is Neural
All moral value is derived from biological neural connections. Nothing else. It seems pretty clear that, because of this, we should be very sensitive about what we use such connections for. Every subjective experience, at least to our knowledge, is grounded in the physical world, so we should be very worried about how we treat these components of entities that seem to have subjective self-experience. Qualia is all that matters, nothing else. Where does qualia come from? Where does pain, pleasure, or the "experience" of achieving goals stem from? Where are those decisions hatched in the first place? Well, it's clearly the brain.
As a result, we should be very protective of neurons. Any form of biological neuron, human or otherwise, should be treated with unreserved sanctity. These are the building blocks of moral states, our subjective experience, and possibly everything that matters. We should tread carefully and wisely, and probably not create incentives for widespread suffering where such important components are simply a means to an end.
Tuesday, February 17, 2026
Technological Vertigo
Very soon, people will feel it. Something that I am coining "technological vertigo": the almost out-of-body experience that will be thrust upon the human species as it experiences a step change in technological capabilities. People who once had productive careers, a mortgage, and plans for children will be thrust into the uncanny valley. They'll find it almost impossible to distinguish reality from fiction, as every voice, video, and picture will be inherently substitutable. Is that really your brother on FaceTime, or has a hacker used a digitally mapped replica to confuse and persuade you? Ideas that can't possibly be comprehended in their entirety begin to pulse, and the public's knowledge of world events (especially at the highest level of geopolitics) evaporates. There is no "source of truth," no website or news agency immune from the informational chaos. Stocks explode upward and downward, career capital evaporates, and one is left to wonder if their memories are real, if the world was ever truly so normal. The gap between the rich and the poor, and between rich and poor countries, both expands and compresses, as eventually there is only the few, and the many. The physical world compounds change after the mental, as if the aftershock of a quickly passing earthquake. The spiritual world, the initial refuge of the confused, permanently shatters into an unrecognizable weapon. The world of the past, so important, yet so small.
Sunday, January 18, 2026
When AI Replaces Jobs
OpenAI leadership keeps pissing me off with their obviously inauthentic claim that humans will be doing productive work in the future. They are targeting, by 2028, an automated AI researcher, and yet claim that there will be bountiful jobs in the future for humans to do that are super high paying and interesting. Sam mentioned something to the effect of people in ten years working really exciting, high paying jobs in space. Why say this, if he very likely doesn't believe it?
It's simple: if you tell everyone you're going to take their jobs, they will very much dislike you. The idea of people's jobs being "taken" has led to nationalism, racism, and societal disruption since the founding of America (remember the Know Nothing party?). OpenAI knows that there is lots of potential unemployment and civil unrest coming. People who lose their jobs can't provide for their families, and the most important thing in the world to everyone is providing for themselves and their families. Financial disruption, in simple terms, is a really big deal. Maybe it's fine to avoid near-term societal disruption by being dishonest, but it's more disruptive to not simply state the facts (as Elon does). Quick change is hard, so we better start preparing the public in advance.
People who go bankrupt jump off skyscrapers. Even if we believe in a post-work, UBI driven future, let's not forget this.
Monday, July 21, 2025
Brain Farming
Brain farming: the commercialization of human brain matter as computational substrate.
Medical research and brain farming are distinct. The former may involve using brain organoids to understand disease and test treatments. Brain farming is the industrial-scale production of biocomputers that use human brain matter for profit.
It is a personal goal of mine to have brain farming globally banned by the end of 2026.