Let's assume that alignment works. Against all odds, we pull it off, and we have human-level AGI in the hands of every man, woman, and child on Earth: the type of AGI you can run on your smartphone. Well, things are going to get really weird, really fast.
Honestly, maybe the good years will all be pre-AGI. Maybe we should enjoy our uncomplicated lives while they last, because traditional life is coming to an end. From a governance standpoint, I have absolutely no idea how we will regulate any of these developments. Having an actually coherent supercomputer in my pocket, one that can do everything I can do except way faster and better, does more than just make me obsolete: it makes me dangerous. If AGI becomes cheap enough for me to run multiple copies, I now have an entire team, or an entire company. An entire terrorist cell, or an entire nonprofit organization. Really, the only constraining resource is compute. With an AGI as fast as GPT-4, I could write books in the time it now takes me to write a page. Sure, AGI will probably start out very slow, but incremental speed increases would lead to a world with trillions more minds than before.
Not only is this a logistical nightmare for governments, but it is also a human rights nightmare for effective altruists. I have no idea how we will control for mind crime, and if the shift towards fast AGI is rapid, we'll probably cause a whole lot of suffering along the way. We'll also probably break pretty much every system currently set up. Well, fortunately or unfortunately, we likely won't solve alignment at all, and we won't have an AGI that is actually useful for our needs. We'll probably hit a similar level of rapid intelligence growth that breaks everything and maybe kills everyone, but at least we won't need to worry about drafting legislation that controls the use of our digitally equivalent humans. I guess that's the good news?