The world is full of authoritarian governments doing terrible things. Two of the three military superpowers (Russia and China) have horrible human rights track records and a strong drive toward greater power and influence. Russia recently invaded a sovereign country, and China is doing very, very bad things (oppression of the Uyghurs, the Hong Kong takeover, general police-state tendencies). The West is perpetually on the brink of nuclear war with these countries, a war that would result in billions of deaths. Engineered pandemics become both more likely and more dangerous over time, as the barriers to creating such viruses fall and the world becomes more and more interconnected.

What are our odds? If our odds of dying off soon are high, or if it will take us a long, long time to reach a place where most humans are free and thriving, maybe we make the trade. Maybe we decide that we understand the risks, and push forward. Maybe we demand utopia, now.
Well, there is another problem with AI: suffering risk. This is not often discussed, but there is a very real possibility that the development of transformative AI leaves the world in a much, much worse place than before (e.g., an ASI decides to torture physical people, or to simulate virtual hells for digital minds for research purposes). Another factor in your AI hesitancy should be your estimated probability of a perpetual dystopia. This is where I differ from other people: I believe the risk of things going really, really wrong as a result of AI (worse than AI simply killing everyone) is massively understated. We should hold off on AGI as long as possible, until we have a better understanding of how likely this risk is.