And We Shall Evangelize Our Robots

We must skip the question of whether or not androids dream of electric sheep and get to the most crucial point of all: are we going to program our machines to have a soul?

There are machine utopians who call themselves "transhumanists" and believe (among other things) that evolution is a progression toward a higher type, and that the convergence of technology and biology will usher in a fantastical age in which people never die, as human knowledge is transferred from one body to another. In any other parlance, this would be called the Kingdom of Heaven, with the elect benefiting from this astounding turn of events.

But running through all the talk of artificial intelligence and the approach of the Singularity is a naive belief that we will have no need of religious morality, that we will program machines only with the best examples of our collective wisdom. This is incorrect. We will create thinking machines in our own image, and we will program them with a morality that stems from religion, not from the collected works of Sam Harris or Richard Dawkins. Would you rather teach a machine Milton or Hitchens? We will want to give our artificial creations a soul, that undefinable, messy essence of what we believe ourselves to be. It will be neither clean nor clear-cut.

This is not mindless speculation. Why shouldn't we evangelize our robots, those machines that will look so much like us? If scores of humans believe their pets will join them in heaven, why not our artificially intelligent creations? Or shall we consider them slaves? Whether human slaves in the Americas had souls, and whether they should (and could) be saved, was once a serious theological problem. If we bond with inanimate objects now, why should it be hard to imagine bonding with a machine that we have created and programmed to be just like us?

"This machine wants to physically join with a human? Is that possible?" asks the irascible Dr. McCoy in "Star Trek: The Motion Picture," when it becomes apparent the massive alien intruder is actually a living machine (from Earth, no less!) that has reached the limits of material knowledge. The machine itself wants to be human, given the chance "to leap beyond logic." In short, it wants a soul, it wants to believe even if it has no idea what "belief" actually means. Who is to say that a scientist will not want to program his thinking machine with a moral code that reflects his own deep-seated feelings of comfort? We will not be happy with scores of thinking robots if we think they have no soul, even if the creator is a staunch atheist.

Are we humans obligated to create machines with souls, especially when we want them to be as human as possible? Or will we purposefully keep them dumb? Moreover, if we make these fantastical leaps and bounds in technology, are we prepared to tell our thinking machines "there is a limit to where you can go"? The convergence of the human and the machine will not produce paradise or eternal life, but merely the perpetuation of the human species by other means, and part of being human is the acceptance of the irrational, the capacity to leap beyond logic. Some technological state of grace is not going to happen, but a machine that questions its own salvation will exist because we will program it thus.

Human perfectibility is a myth, along with its allegedly inevitable arrival. Not one utopian scheme in all of human history has ever come to fruition, neither as it was preached in the Galilee, nor as it flowed from Marx's pen in "Das Kapital," nor in its modern form of transhumanism. We will not become a perfect species, no matter how many biochips we insert into our bodies or how far we extend our life spans. There has never been a Golden Age and there will never be a Golden Future, only a series of rises and falls, of triumphs and failures, all accompanied by the most constant aspect of human development: stupidity.

Not arrogance or hubris, but stupidity. It is interesting that no one has ever composed a tome on the role of stupidity in human development, relegating it instead to tactical mistakes committed on the battlefield. Likewise, who is to say we will not stumble into a horrible new world because we felt the need to create thinking machines and then give them souls? How many science-fiction stories have taken up the theme of machine creation and the subsequent machine war upon humans? Yet, with few exceptions, fictional robots have not declared their own kind the New Israel, armed with the religious fervor given to them by their human creators. We will not program our artificial children with a simple base code of morality ("thou shalt not kill a human") but with a complexity mirroring our own. And in our drive to create life (seeing how unsatisfied we are with doing it the evolutionary way), we will give our machines names, hair, looks, a sense of humor, a capacity to smile and our religious beliefs. We will, in the end, want them to be saved. We *need* them to be redeemed.

And that may well be the stupidest mistake we will inevitably make.