Standing atop the grave of humanity, smugly looking down, and saying "I told you so" is just as worthless as having done nothing in the first place. Still, a lot of the ideas Effective Altruists grapple with are so far removed from the public's daily thoughts that it is hard not to just do exactly that. Convincing the "techno optimists" that they are wrong and that there are dangers ahead just seems so, well, annoying to have to do. My claim that mind crime will be a big issue, because digital minds could have moral worth, will probably fall on deaf ears regardless. Still, I'm probably going to try writing a book. The thesis will be very simple: we've got a lot of moral dilemmas coming up, and we're probably going to do a bad job of handling them. It's a simple thesis, and one that I think has the potential to be powerful.
The good news is that I won't have to defend too many ideas, as time will prove them. I'm making two assumptions:
1. AGI is possible
2. Some machine intelligence will have moral worth
Instead of spending a hundred pages philosophizing about these, we can just wait a decade or two and watch them become somewhat obvious. If they don't, cool, throw the book to the side. But if they do, well, maybe we will have some established thoughts and plans on how to deal with them.
Personally, I have no trust in our future tech overlords. I've said before that the lack of understanding of survivorship bias is the main problem facing the world, and I am convinced we'll have some dumb leaders who will sleepwalk right into catastrophe. In a country where, a few hundred years ago, we said that slaves counted as 3/5 of a person, it's certainly possible that we get some really smart, morally worthy AIs and say "huh, looks like 0/5 to me." Because why would we not? My gut says we will get this wrong. If the slave owners of the South had discovered the fountain of youth, become immortal, built advanced surveillance systems, and dropped a rapidly made nuclear warhead on New York, when would the slaves have been freed? The South having powerful AI at its disposal was not possible given the technology of the time, but what if it had been? We falsely equate technological progress with moral progress. The fact that both have advanced is correlation, and in some countries we have seen a clear advancement of one alongside a regression of the other. So we have to be careful, diligent, and forward-thinking. But we won't be, and that is the problem.
The reason for the title "Mind Crime" is that, in my estimate, the term will become really well known and widely popularized in the future. Being on the forefront of that might be cool, so that ten or twenty years post-AGI I will get some posthumous reference. As stated before, that is clearly not the goal. The real goal is to lay out my thoughts in an accessible way, to maybe change a mind or two before the "I told you so" becomes inevitable.