Stephen Hawking and the Golem


A cautionary tale about artificial intelligence.

Stephen Hawking is much in the news these days. His personal story, the subject of the recently released film The Theory of Everything, is already spoken of as an Oscar contender. Diagnosed in 1963 with the dreaded Lou Gehrig’s disease and given two years to live, he went on to a brilliant career, became the author of international best-sellers, received dozens of honorary degrees and gained broad recognition as one of the most brilliant theoretical physicists since Einstein.

Hawking is clearly someone undaunted by personal fears. Yet in a recent BBC interview, Hawking confided that he was deeply concerned for the future of humanity. The cause of his concern is artificial intelligence, or AI – the creation of intelligent machines able to “outthink” their creators. What began with IBM’s Deep Blue, the supercomputer that handily beat chess grandmasters, and continued with Watson, which bested the best players on “Jeopardy!,” may in the near future, Hawking warned, end with machines that checkmate their designers to become the Earth’s rulers.

“The development of full artificial intelligence could spell the end of the human race,” Hawking said.

Science fiction already has prepared us to contemplate such a scenario. Films like The Terminator and The Matrix pit puny humans against AI-driven enemies. The upcoming Avengers movie depicts superheroes forced to battle Ultron, an AI machine determined to destroy mankind.


There’s a world of difference between the ability to create and the power to control. As Google’s director of engineering, Ray Kurzweil, has put it, “It may be hard to write an algorithmic moral code strong enough to constrain and contain super-smart software.” The greatest danger of scientific progress is the possibility that what we bring into being realizes a life of its own and is no longer subservient to its maker or to human values.

That has been the subliminal message, for centuries, of the famous legend of the golem of Prague. In Jewish tradition, the Maharal, Rabbi Judah Loew, the 16th-century rabbi of Prague, used his knowledge of Jewish mysticism to magically animate a lifeless lump of clay and turn it into a superhuman defender of the Jewish people. On its forehead he wrote the Hebrew word for truth, “emet,” which mystically gave the creature its power.

Much to his consternation, however, the Maharal soon realized that once granted its formidable strength, the golem became impossible to fully control. Versions of the story differ. In one, the golem fell in love and, when rejected, turned into a murderous monster. In another, it went on an unexplained murderous rampage. In perhaps the most fascinating account, the Maharal himself was at fault – something akin to a computer programmer’s error – when he forgot to deactivate the golem immediately before the Sabbath, as was his regular custom. This caused the golem to profane the holiness of the day, a sin punishable by death.

Whatever the cause, the Maharal came to conclude that the golem had to be put to rest. The rabbi erased the first letter of emet – the aleph, with a numerical value of one, representing the one God above who alone can give life. That left only the two letters spelling the Hebrew word for death, “met.” No longer representing the will of the ultimate creator, nor bearing the mark of God on its forehead, the golem turned into dust.

Many scholars believe that the legend of the golem inspired Mary Shelley to write Frankenstein, her famous novel about an unorthodox scientific experiment that creates life, only to reap horrifying results when the achievement goes terribly wrong.

Creation without control is a formula for catastrophe. The history of scientific achievement bears ample testimony to a simple truth: progress detached from moral and ethical restraints may grant us the knowledge to penetrate the secrets of nuclear fission, but at the cost of placing mankind in danger of universal annihilation.

The story of the golem of Prague is a paradigm for the hazard of permitting what we create to go far beyond our intent. Artificial intelligence, as an extension of our intellectual ability, certainly has many advantages. Yet it cannot really “think.” It has no moral sensitivity. It does not share the ethical limitations of its programmer. And it is not restricted by the values of those who brought it into being.

Stephen Hawking has done us a much-needed favor by alerting us to the very real dangers of AI. But what I find striking – and highly serendipitous – is the other major revelation just recently ascribed to him: Hawking publicly admitted that he is in fact an atheist. In response to a journalist questioning him about his religious leanings, he said unequivocally, “There is no God.”

Perhaps the biblical God in whom I and so much of the world believe must also deeply regret the “artificial intelligence” with which he imbued mankind. Perhaps we are the greatest illustration of the fear we now verbalize for our technology – creations capable of destroying our world because we doubt our Creator.
