The worst stories to read involve captivity. The real horrors of human life come alive in movies like Room, where a young woman is captured and held for years by some horrid man. These stories are really, really awful. If you replace the woman with a dog, the story is still sad, but less so. Replace the dog with a chicken, and it is even less sad. Personally, I would feel pretty bad for the chicken, but definitely not as bad. Not many people would care if some weird guy were torturing grasshoppers in his basement. Maybe a few would, but almost no one would care about ants. Sure, his neighbors would be freaked out, but this is far less bad than if he were torturing young girls. There is a step function here, clear gradations of immorality, of evil. At least some of this comes from intellectual capacity.
Sure, moral value is complicated. I could tell you that torturing an ASI could be exponentially worse than torturing an AGI, but you would have no idea what that meant. I don't really either, as I lack the empathy required for such a situation. How am I to imagine what it is like to be a superintelligence? It may as well be the grasshopper imagining what it is like to be a human. I have two ideas here. The first is that it will probably be possible for us to "step up" the level of harm we can cause. This is a utility monster idea: we could create some agent or digital mind with the capacity to suffer far more than any human can. This is not great news.

The second idea is related. We can catch the horrid men who lock up children in their basements, at least eventually. They take up physical space, after all, and they are required to interact with the real world. In the worst case, the child grows into old age and then dies. But they do die. They are not required to suffer for more than a traditional human lifespan, at most. This will not be the case for virtual children. A horrid monster of a "man" could run some truly horrific simulations, of a complexity and duration that would make all previous suffering on Earth look like a cakewalk. And, just maybe, this suffering would actually matter (I, at least, am convinced it would). This realization is more than terrible; it is unforgettable.
There are certain ethical boundaries that scientists will not cross. I was once told that scientists don't actually know whether humans can breed with monkeys; we simply don't try, for ethical reasons. This could be completely false, I have no idea. But the stated reason is at least interesting: the life of a half-human, half-monkey child would probably be horrific. Probably conscious, definitely terrified. The sort of nightmare fuel we should avoid creating. When building digital minds, we could splice together some intellectually disturbing creatures, ones that live lives of confused suffering and inadequacy. When the "plug and chug" mentality arrives at AGI, I worry we will make some massive ethical mistakes. Running a random number generator until you get an answer that works is easy, and I assume that assembling random configurations of "intelligent blocks" may at some point give you a really smart digital mind. But we may create some horrors in the process: sentient and morally worthy half-chimpanzees who do not deserve the life we give them, and the life we will no doubt take away.