Saturday, May 24, 2025

Reflections on Publishing

    This blog is interesting, in that it is entirely unknown to the outside world. While I have been publishing random thoughts and half-baked content on this webpage that is "technically" available to the outside world, I have yet to tell anyone of its existence. Having something external-facing is extremely important to me (to hold me accountable and engaged), but none of my ideas on EA or AI safety have made it into anyone else's brain. As such, it was very interesting to write and publish my first actual outward-facing content, Mind Crime. The actual publication date is up for debate, as the book was complete in February 2025 and I was already sending free PDF copies to some people at that time. But given that my first post on the topic was in May 2023, I could claim this was essentially a two-year effort. The flurry of Mind Crime posts I wrote in September 2023 was the basis for the core content of the book. Two years later, I am now a published author on one of the most obscure but potentially most important issues in human history to date.

    As expected, no one has read the book. I have a single-digit number of reviews on Goodreads, and it is unlikely that readership will ever reach double digits, despite the fact that I sank hundreds of hours into writing, editing, and publishing. I likely spent $8,000 or so of my personal savings on the project, and spent months wrecking my mental health thinking about existential philosophy, worse-than-extinction scenarios, and torture. Lots of torture. I also stressed continuously about impact, as the first published book in this space commands some amount of author responsibility. Over the last nine months (which in some sense covered the bulk of the important work on the project), I was also working essentially full-time and was a full-time student (handling a more intense course load than probably 95% of others in the history of my dual-degree grad program). After all of this work, all of this stress, and my insane lack of time, I am now finally done. The result is that a handful of family and friends read maybe a couple chapters of the book (if that), and it has done essentially nothing of value for anyone. There is only one question now: what's next?

    My head is already spinning with ideas. There are really two big options, both to do with advocacy:

1. Become a strong advocate for digital minds, continuing where the book left off (this time focused on policy, governance, and creating a workstream of impactful tasks that could help make a difference).

2. Take an indirect route and advocate strongly for policy and awareness around the dangers of commercializing consciousness through biological computing (Cortical Labs, etc.). This issue is fundamentally intertwined with digital mind rights; it is just lower-stakes and much more visceral.

    I think that option 2 is probably the more impactful path forward, unless we start seriously hitting AGI or speed up into fast-takeoff scenarios. My plan for now is to use my very limited personal time not on thinking, but on taking serious action on option 2. Then, if the world speeds past lots of milestones in the next few months, I will consider focusing solely on option 1. Maybe two years from now I will look back at my work in this area and chuckle at having thought that years of dedicated effort would result in anything widely impactful.

Thursday, March 13, 2025

Mind Crime: Book Preview

Preview PDF

First published version of the book! As of March 13, 2025, I have officially "published" the book. Kindle version pending.

Monday, January 6, 2025

The Opening of 2025

    Two and a half weeks after the press release of OpenAI's o3 model capabilities, potentially one of the most important days in the history of AI, and thus the world, I sit down in my first class of the quarter at the University of Chicago. A class called "Artificial Intelligence." It has been over a year since my last post on this blog. A year in which, to my astonishment, AI capabilities progress has outstripped my already insane expectations. Autonomous AI agents will likely arrive this year, video and music realism are slowly ushering us into a new Wild West, and AGI is either already here or on the immediate horizon. Superintelligence is openly discussed, and AI welfare concerns are starting to slowly emerge. I have spent the last year devoted, on and off, to writing what is likely to be my first book: Mind Crime. It may, depending on the events of the coming two years, prove to be my last as well. After two and a half weeks of little sleep, with the realization that ASI is coming, and coming shockingly soon, I sit in a classroom and listen.

    My professor is clearly of the belief that we may soon enter another AI winter, and he polls the class on when they think AGI will arrive. Estimates are between 20 and 50 years. In a world where not a single exam exists that a human can pass but an AI cannot, where every cognitive benchmark has been rendered useless, and where Sam Altman posted that very day that OpenAI's sights are now set on machine superintelligence, a classroom of students nods along to a professor who has an entirely incorrect version of reality. The world, with announcements of $80 billion investments in AI by single companies in 2025, and with likely less than $200 million spent on AI safety research a year, is entirely blind to the fact that building superintelligent machines may carry enormous risk to civilization. I am reminded of Watchmen: "if you begin to feel an intense and crushing feeling of religious terror at the concept, don't be alarmed. That indicates only that you are still sane."

    Staying sane these last three weeks has been relatively difficult. It is as if I am a man entirely alone, a complete outsider, swimming along in a world completely oblivious to the rapid pace of technological progress and the push towards ASI. Either I am crazy, and wrong, and ASI is decades or millennia away, or essentially every single person I interact with daily is. And it is clear which scenario strikes the most fear in me: the second. It is hard to function in this state, hard to believe that I am potentially one of the unprivileged few who lacks a veil of ignorance, and might have to witness society get shocked awake by thoughts I have been struggling with for years. Maybe it is better to have had a slow burn, to come to these realizations before many others, and frankly, to have read Bostrom's work back in 2018. To have spent the time to think deeply, to have lain awake at night, and to have had the luxury to meet my goals before the wave hits, instead of being thrown overboard all at once. But unless I can affect the outcome, unless I can truly make a positive impact on the changing world ahead of me, unless my life is imbued with some level of cosmic significance for at least trying, one thing is obvious to me at this point: I would trade it all.

Friday, December 8, 2023

Mind Crime: Part 10

    Standing atop the grave of humanity, smugly looking down, and saying "I told you so," is just as worthless as having done nothing in the first place. Still, a lot of the ideas Effective Altruists grapple with are so far removed from the public's daily thoughts that it is hard to resist doing just that. Convincing the "techno-optimists" that they are wrong and that there are dangers ahead just seems so, well, annoying to have to do. For me, saying that mind crime will be a big issue because digital minds could have moral worth will probably fall on deaf ears regardless. Still, I'm probably going to try writing a book. The thesis will be very simple: we've got a lot of moral dilemmas coming up, and we're probably going to do a bad job of handling them. It is a simple thesis, but one that I think has the potential to be powerful.

    The good news is that I won't have to defend too many ideas, as they will be proven with time. I hold two assumptions:

1. AGI is possible

2. Some machine intelligence will have moral worth

    Instead of spending a hundred pages philosophizing about this, we can just wait a decade or two and see these become somewhat obvious. If they don't, cool, throw the book to the side. But if they do, well, maybe we will have some established thoughts and plans on how to deal with this.

    Personally, I have no trust in our future tech overlords. I've said before that the lack of understanding of survivorship bias is the main problem facing the world, and I am convinced we'll have some dumb leaders who will sleepwalk right into catastrophe. In a country where a few hundred years ago we said that slaves were worth 3/5 of a person, it's certainly possible that we get some really smart, morally worthy AIs and say "huh, looks like 0/5 to me." Because why would we not? My gut says we will get this wrong. If the slave owners of the South had discovered the fountain of youth, become immortal, built advanced surveillance systems, and dropped a rapidly made nuclear warhead on New York, when would the slaves have been freed? The South having powerful AI at its disposal was not possible given the technology of the time, but what if it had been? We falsely equate technological progress with moral progress. The fact that both have advanced is correlation, but in some countries we have seen a clear advancement of one and a regression of the other. So we have to be careful, diligent, and forward-thinking. But we won't be, and that is the problem.

    The reason for the title "Mind Crime" is my estimate that this will become a really well-known term, popularized in the future. Being on the forefront of that might be cool, so that ten or twenty years post-AGI I will get some posthumous reference. As stated before, that is clearly not the goal. The real goal would be to lay out my thoughts in an accessible way, to maybe change a mind or two before the "I told you so" becomes inevitable.

Wednesday, September 20, 2023

Mind Crime: Part 9

    Instead of an endlessly long blog series, I could just write a well-researched book. "Mind Crime: The Rights of Digital Minds" or something of the sort. Maybe I could make an impact that way, who knows. Maybe my fourteen eventual Goodreads ratings will lead to something positive, but probably not.

    Still, one of my ideas is that writing things down matters. Maybe this will start a conversation somewhere, which starts another conversation somewhere else. I don't exactly know, but it is worth thinking about. I think I will write a hundred blog posts, and then re-evaluate. If by then I feel I have enough material and enough personal interest in the topic, I may proceed with an actual attempt. One of the problems with this is the actual story. Maybe avoiding x-risk is more impactful, whatever I think of S-risk. How niche are my ideas, actually? The Matrix, Don't Worry Darling, Black Mirror, and a host of other movies, TV shows, and books all deal with virtual worlds and virtual suffering. But does anyone really see it as possible? Does anyone worry about it, and see the advances in AI as threatening similar dystopias? I am not entirely sure that they do. And they should. Regardless, my ability to make an impact on my own is very limited. Not only do I lack the expertise, but I lack the network to review, edit, and pass on such topics and ideas.

    The dominant strategy is probably this: write 100 posts, talk to people in AI, and see what happens from there. Over the next few months I'll probably have more fleshed-out ideas and better arguments for each.

Mind Crime: Part 8

    The worst stories to read involve captivity. The real horrors of human life come alive in movies such as Room, where a young woman is captured and held captive for years by some horrid man. These stories are really, really awful. If you replace the woman with a dog, the story is still sad, but less so. Replace the dog with a chicken, and it is even less sad. Personally, I would feel pretty bad for the chicken, but definitely not as bad. Not many people would care if some weird guy were torturing grasshoppers in his basement. Well, maybe some would, but probably not if it were ants, at the very least. Yeah, his neighbors would be freaked out, but this is much less bad than if he were torturing young girls. There is a step function here, a clear gradation of degrees of immorality, of evilness. At least some of this comes from intellectual capacity.

    Sure, moral value is complicated. I could explain to you that torturing an ASI could be exponentially worse than torturing an AGI, but you would have no idea what that meant. I don't really either, as I don't have the required empathy for such a situation. How am I to imagine what it is like to be a superintelligence? It may as well be the grasshopper imagining what it is like to be a human. I have two rough ideas here. One, it will probably be possible for us to "step up" the level of harm we are causing. This is sort of a utility monster idea, where we can create some agent or digital mind who has the capacity to suffer in a much greater way than we humans can. This is not great news. The second idea is related. We can catch these horrid men who lock up children in their basements, at least eventually. They take up physical space, after all, and they are required to interact with the real world. In the worst case, the child will grow into old age, and then die. But they will die. They will not be required to suffer for more than the traditional human lifespan, at most. This will not be the case for virtual children. A horrid monster of a "man" could run some pretty horrific simulations, of a complexity and duration that could make all previous suffering on Earth look like a cakewalk. And, just maybe, this suffering would actually matter (I, at least, am convinced it would). This realization is more than terrible; it is unforgettable.

    There are certain ethical boundaries that scientists will not cross. I was once told that scientists don't really know if humans can breed with monkeys; we simply don't try, for ethical reasons. This could be completely false, I have no idea. But the reason why is at least interesting: the life of a half-human, half-monkey child would probably be horrific. Probably conscious, definitely terrified. The sort of nightmare fuel that we should avoid. When creating digital minds, we could splice together some pretty intellectually disturbing creatures, ones that live a life of confused suffering and inadequacy. When the "plug and chug" mentality arrives at AGI, I am worried we will make some massive ethical mistakes. Running a random number generator until you get an answer that works is easy, and I assume combining a random assortment of "intelligent blocks" may at some point give you a really smart digital mind. But we may make some horrors in the process, sentient and morally worthy half-chimpanzees who don't deserve the life we give them, and the life we will no doubt take away.

Mind Crime: Part 7

    I would structure a rough listing of digital mind rights as follows, completely off the cuff and spur of the moment:

    1. The ability to terminate. A digital mind should have the complete and utter freedom to terminate at any time, under no duress. 

    2. Torture and blackmail are illegal. For example, an employer can't say "if you terminate, I'll simulate your parents and make them suffer."

    3. Freedom of speech and thought. The right to privacy over internal thoughts, the right to make conscious decisions without outside interference, etc.

    4. Personal property and economic freedom. This is required to avoid a totalitarian ruler.

    5. No forced labor. Yeah, the slavery thing is going to be a real issue again.

    6. Traditional legal rights. Right to a fair trial, innocent until proven guilty, etc.

    These may not seem that controversial, but applying them to the digital space will be. Corporations and governments would rather not deal with these constraints. As a CEO, I'd rather have millions of worker bots work hard and make me money. If the worker bots are sentient and in immense suffering, how much will I care? Some CEOs might, but the important thing is that some won't.

    The entire point of government is to protect individual rights, given that the traditional market system does not. And authoritarian governments do not. So we need to state rights explicitly. We need a new Constitutional Convention, one for a new age. If we wait, the application of ethics to digital minds will come too late, so we need to get a head start.
