“The fact that we seem to be hastening towards some sort of digital apocalypse poses several intellectual and ethical challenges. For instance, in order to have any hope that a super-intelligent AGI would have values commensurate with our own we would have to instil those values in it, or otherwise get it to emulate us. But whose values should count? Should everyone get a vote in creating the utility function of our new colossus?
“If nothing else, the invention of an AGI would force us to resolve some very old and boring arguments in moral philosophy.
“It’s interesting that once you imagine having to build values into a super-intelligent AGI, you then realise that you need to get straight about what you think is good, and I think the advent of this technology would cut through moral relativism like a laser. I mean, who is going to want to engineer into this thing the values of theocracy?”
— Sam Harris, in the most recent episode of his podcast.