Murray Shanahan


Murray Patrick Shanahan is a Professor of Cognitive Robotics at Imperial College London, in the Department of Computing, and a senior scientist at DeepMind. He researches artificial intelligence, robotics, and cognitive science.

Education

Shanahan was educated at Imperial College London and completed his PhD at the University of Cambridge in 1987, supervised by William F. Clocksin.

Career and research

In the Department of Computing at Imperial College, Shanahan was a postdoc from 1987 to 1991 and an advanced research fellow until 1995. He was then a senior research fellow at Queen Mary & Westfield College from 1995 to 1998. He subsequently joined the Department of Electrical Engineering at Imperial, and later the Department of Computing, where he was promoted from Reader to Professor in 2006.

Shanahan was a scientific advisor for Alex Garland's 2014 film Ex Machina; Garland credited Shanahan with correcting an error regarding the Turing test in his initial scripts. As of 2016 Shanahan sits on the six-person ethics board of the Texan startup Lucid.AI, and as of 2017 he is on the external advisory board of the Cambridge Centre for the Study of Existential Risk.

In 2016 Shanahan and his colleagues published a proof of concept for "Deep Symbolic Reinforcement Learning", a hybrid AI architecture that combines GOFAI with neural networks and exhibits a form of transfer learning. In 2017, citing concern about the impact of the current tech hiring "frenzy" on academia, Shanahan negotiated a joint position at Imperial College London and DeepMind. The Atlantic and Wired UK have characterized Shanahan as an influential researcher.

Books

In 2010, Shanahan published Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds, a book that helped inspire the 2014 film Ex Machina. The book argues that cognition revolves around a process of "inner rehearsal" by an embodied entity working to predict the consequences of its physical actions.
In 2015, Shanahan published The Technological Singularity, which runs through various scenarios following the invention of an artificial intelligence that makes better versions of itself and rapidly outcompetes humans. The book aims to be an evenhanded primer on the issues surrounding superhuman intelligence. Shanahan takes the view that we do not know how superintelligences will behave: whether they will be friendly or hostile, predictable or inscrutable.
Shanahan also authored Solving the Frame Problem and co-authored Search, Inference and Dependencies in Artificial Intelligence.

Views

As of the 2010s, Shanahan characterizes AI as lacking the common sense of a human child. He endorses research into artificial general intelligence to address this gap, arguing that AI systems deployed in areas such as medical diagnosis and automated vehicles need such common-sense abilities to be safer and more effective. Shanahan states that there is no need to panic about an AI takeover, because multiple conceptual breakthroughs will be needed for AGI and "it is impossible to know when [this] might be achievable". He has said: "The AI community does not think it's a substantial worry, whereas the public does think it's much more of an issue. The right place to be is probably in-between those two extremes." In 2014 Shanahan said there would be no AGI in the next ten to twenty years, but added that "on the other hand it's probably a good idea for AI researchers to start thinking about the issues that Stephen Hawking and others have raised." Shanahan is confident that AGI will eventually be achieved. In 2015 he speculated that AGI is "possible but unlikely" in the period 2025 to 2050, and becomes "increasingly likely, but still not certain" in the second half of the 21st century. Shanahan has advocated that such AGI should be taught human empathy.