I've been reading Eliezer Yudkowsky's posts at Overcoming Bias religiously (sic) for a few months now, but there's still one problem that bothers me and which I've yet to see him address. (I'm sure he has addressed it somewhere; I just don't know where to find it, and if I can't find it or figure it out for myself, I'll ask at some point.) If the singularity really is near, and Unfriendly AI really is about to wipe out the human race, why should I care?
Yes, it would be a tragedy if an asteroid hit the Earth and killed us all, or if we managed to kill several billion people in a nuclear war, but is there really any moral imperative to privilege future intelligent beings that happen to be made out of the same sort of squishy stuff as we are over future beings made out of silicon, or toilet paper and stones, or whatever it might be? When I read Eliezer's story about the Alien Message, I'm on the side of the people - and not just because they're made out of the same stuff as me. Put the people outside the box, and I'd be on the side of the computers. Is trying to develop Friendly AI really a rational goal? Or is it an obvious bias?
Disclaimer: I haven't read Eliezer's wiki post on the Knowability of Friendly AI yet - some of the questions I ask here may well be answered there.
Sunday, 1 June 2008