Two and a half weeks after the announcement of OpenAI's o3 model capabilities, potentially one of the most important days in the history of AI, and thus, the world, I sit down in my first class of the quarter at the University of Chicago. A class called "Artificial Intelligence." It has been over a year since my last post on this blog. A year in which, to my astonishment, AI capabilities progress has outstripped my already insane expectations. Autonomous AI agents will likely arrive this year, video and music realism are slowly ushering us into a new Wild West, and AGI is either already here, or on the immediate horizon. Superintelligence is openly discussed, and AI welfare concerns are beginning to surface. I have spent the last year devoted, on and off, to writing what is likely to be my first book: Mind Crime. It may, depending on the events of the coming two years, prove to be my last as well. After two and a half weeks of little sleep, with the realization that ASI is coming, and coming shockingly soon, I sit in a classroom and listen.
My professor is clearly of the belief that we may soon enter another AI winter, and he polls the class on when they think AGI will arrive. Estimates range from 20 to 50 years. In a world where not a single exam can be passed by a human that cannot be passed by an AI, where every cognitive benchmark has been rendered useless, and where Sam Altman posted that very day that OpenAI's sights are now set on machine superintelligence, a classroom of students nods along to a professor with an entirely incorrect picture of reality. The world, where single companies have announced $80 billion AI investments for 2025 alone and where likely less than $200 million a year goes to AI safety research, is entirely blind to the fact that building superintelligent machines may carry enormous risk to civilization. I am reminded of Watchmen: "If you begin to feel an intense and crushing feeling of religious terror at the concept, don't be alarmed. That indicates only that you are still sane."
Staying sane in these last three weeks has been relatively difficult. It is as if I am a man entirely alone, a complete outsider, swimming along in a world completely oblivious to the rapid pace of technological progress and to the push towards ASI. Either I am crazy, and wrong, and ASI is decades or millennia away, or essentially every single person I interact with daily is. And it is clear which scenario strikes more fear in me: the second. It is hard to function in this state, hard to believe that I am potentially one of the unprivileged few who lacks a veil of ignorance, and might have to witness society get shocked awake by thoughts I have been struggling with for years. Maybe it is better to have had a slow burn, to come to these realizations before many others, and frankly, to have read Bostrom's work back in 2018. To have spent the time to think deeply, to have lain awake at night, and to have had the luxury to meet my goals before the wave hits, instead of being thrown overboard all at once. But unless I can affect the outcome, unless I can truly make a positive impact on the changing world ahead of me, unless my life is imbued with some level of cosmic significance for at least trying, one thing is obvious to me at this point: I would trade it all.