Eric Horvitz


Eric Joel Horvitz is an American computer scientist and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA; Cambridge, MA; New York, NY; Montreal, Canada; Cambridge, UK; and Bangalore, India.

Biography

Horvitz received his Ph.D. and M.D. from Stanford University in 1991 and 1994, respectively. His doctoral dissertation and follow-on research introduced models of bounded rationality founded in probability and decision theory. He did his doctoral work under advisors Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes.
He is currently a Technical Fellow at Microsoft, where he serves as the company's Chief Scientific Officer. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence, the National Academy of Engineering, the American Academy of Arts and Sciences, and the American Association for the Advancement of Science. He was elected to the ACM CHI Academy in 2013 and named an ACM Fellow in 2014 "For contributions to artificial intelligence, and human-computer interaction."
He was elected to the American Philosophical Society in 2018.
In 2015, he was awarded the AAAI Feigenbaum Prize, a biennial award for sustained and high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection and action, and their application in time-critical decision making, and intelligent information, traffic, and healthcare systems.
In 2015, he was also awarded the ACM-AAAI Allen Newell Award, for "contributions to artificial intelligence and human-computer interaction spanning the computing and decision sciences through developing principles and models of sensing, reflection, and rational action."
He serves on the Scientific Advisory Committee of the Allen Institute for Artificial Intelligence, the Computer Science and Telecommunications Board of the US National Academies, and on the Board of Regents of the US National Library of Medicine. He was nominated in 2019 to serve on the U.S. National Security Commission on AI.
He has served as president of the Association for the Advancement of Artificial Intelligence (AAAI), on the NSF Advisory Board, on the council of the Computing Community Consortium, and as chair of the Section on Information, Computing, and Communications of the American Association for the Advancement of Science.

Work

Horvitz's research interests span theoretical and practical challenges with developing systems that perceive, learn, and reason. His contributions include advances in principles and applications of machine learning and inference, information retrieval, human-computer interaction, bioinformatics, and e-commerce.
Horvitz played a significant role in the use of probability and decision theory in artificial intelligence. His work raised the credibility of artificial intelligence in other areas of computer science and computer engineering, influencing fields ranging from human-computer interaction to operating systems. His research helped establish the link between artificial intelligence and decision science. As an example, he coined the concept of bounded optimality, a decision-theoretic approach to bounded rationality. The influences of bounded optimality extend beyond computer science into cognitive science and psychology.
He studied the use of probability and utility to guide automated reasoning for decision making, including methods for solving streams of problems that arrive over time. In related work, he applied probability and machine learning to identify hard problems and to guide theorem proving. He introduced the anytime algorithm paradigm in AI, in which partial results, probabilities, or utilities of outcomes are refined with ongoing computation under varying availabilities or costs of time, guided by the expected value of computation.
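The anytime paradigm described above can be illustrated with a minimal sketch (a hypothetical example for illustration, not Horvitz's own formulation): an interruptible computation that always holds a usable partial result, with the quality of that result improving as more time is allotted.

```python
import time

def anytime_pi(budget_seconds):
    """Anytime estimate of pi via the Leibniz series.

    The partial result is usable at any point, and its accuracy
    improves the longer the routine is allowed to run; the caller
    trades answer quality against the cost of time.
    """
    deadline = time.monotonic() + budget_seconds
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1) ** k / (2 * k + 1)  # add the next series term
        k += 1
    return 4 * total, k  # best estimate so far, and terms used

# A larger time budget yields a more refined estimate:
rough, _ = anytime_pi(0.001)
finer, _ = anytime_pi(0.1)
```

In a decision-theoretic setting, the time budget itself would be chosen by weighing the expected improvement in the result against the cost of further deliberation, in the spirit of the expected value of computation.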
He has issued long-term challenge problems for AI—and has espoused a vision of open-world AI, where machine intelligences have the ability to understand and perform well in the larger world where they encounter situations they have not seen before.
He has explored synergies between human and machine intelligence. In this area, he studied the value of displayed information, methods for guiding machine versus human initiative, learning models of human attention, and using machine learning and planning to identify and merge the complementary abilities of people and AI systems.
He also investigated the use of Bayesian methods to provide assistance to users and made contributions to multimodal interaction, for which he received the 2015 ACM ICMI Sustained Accomplishment Award. His work on multimodal interaction includes studies of situated interaction, where systems consider the physical details of open-world settings and can carry on dialog with multiple people.
He co-authored probability-based methods to enhance privacy, including a model of altruistic sharing of data called community sensing, and stochastic privacy.
Horvitz speaks on the topic of artificial intelligence, including on NPR and the Charlie Rose show. His online talks include both technical lectures and presentations for general audiences. His research has been featured in The New York Times and MIT Technology Review.
He has testified before the US Senate on progress, opportunities, and challenges with AI.

AI and Society

He has addressed technical and societal challenges and opportunities in fielding AI technologies in the open world, including beneficial uses of AI, AI safety and robustness, and situations where AI systems and capabilities can have inadvertent effects, pose dangers, or be misused. He has presented on caveats with applications of AI in military settings.

Asilomar AI Study

He served as President of the AAAI from 2007 to 2009. As AAAI President, he convened and co-chaired a study that culminated in a meeting of AI scientists at Asilomar in February 2009. The study considered the nature and timing of AI successes and reviewed concerns about directions of AI development, including the potential loss of control over computer-based intelligences, as well as efforts that could reduce those concerns and enhance long-term societal outcomes. The study was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI, and it attracted public interest.
In coverage of the Asilomar study, he said that scientists must study and respond to notions of superintelligent machines and concerns about artificial intelligence systems escaping from human control. In a later NPR interview, he said that investments in scientific studies of superintelligence would be valuable in guiding proactive efforts, even for those who believed the probability of losing control of AI was low, because of the high cost of such outcomes.

One Hundred Year Study on Artificial Intelligence

In 2014, Horvitz and his wife defined and funded the One Hundred Year Study on Artificial Intelligence at Stanford University. According to Horvitz, the gift, which may increase in the future, is sufficient to fund the study for a century. A Stanford press release stated that successive committees over a century will "study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play." A framing memo for the study calls out 18 topics for consideration, including law, ethics, the economy, war, and crime, as well as abuses of AI that could pose threats to democracy and freedom, and possibilities of superintelligence and loss of control of AI.
The One Hundred Year Study is overseen by a Standing Committee. The Standing Committee formulates questions and themes and organizes a Study Panel every five years. The Study Panel issues a report that assesses the status and rate of progress of AI technologies, challenges, and opportunities with regard to AI's influences on people and society.
The 2015 study panel of the One Hundred Year Study, chaired by Peter Stone, released its report in September 2016. The panel advocated increased public and private spending on the industry, recommended increased AI expertise at all levels of government, and recommended against blanket government regulation. Panel chair Peter Stone argued that AI will not automatically replace human workers but rather will supplement the workforce and create new jobs in tech maintenance. While mainly focusing on the next 15 years, the report touched on concerns, which had risen in prominence over the preceding decade, about the risks of superintelligent robots, stating, "Unlike in the movies, there's no race of superhuman robots on the horizon or probably even possible." Stone stated that "it was a conscious decision not to give credence to this in the report."

Founding of Partnership on AI

He co-founded and has served as board chair of the Partnership on AI, a non-profit organization bringing together Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft with representatives from civil society, academia, and non-profit R&D. The organization's website describes initiatives including studies of risk scores in criminal justice, facial recognition systems, AI and the economy, AI safety, AI and media integrity, and documentation of AI systems.

Microsoft Aether Committee

He founded and chairs the Aether Committee, Microsoft's internal committee on the responsible development and fielding of AI technologies. He reported that the Aether Committee had made recommendations on, and guided decisions that have influenced, Microsoft's commercial AI efforts. In April 2020, Microsoft published content on principles, guidelines, and tools developed by the Aether Committee and its working groups, including teams focused on AI reliability and safety, bias and fairness, intelligibility and explanation, and human-AI collaboration.

Publications

Books

Selected articles

Podcasts
