David Wolpert


David Hilton Wolpert is an American mathematician, physicist and computer scientist. He is a professor at the Santa Fe Institute. He is the author of three books and over one hundred refereed papers, holds three patents, and has received numerous awards. His name is particularly associated with a group of theorems in computer science known as "no free lunch".

Career

David Wolpert received a B.A. in physics from Princeton University, then attended the University of California, Santa Barbara, where he earned an M.A. and a Ph.D.
Between 1989 and 1997 he pursued a research career at Los Alamos National Laboratory, IBM, TXN Inc. and Santa Fe Institute.
From 1997 to 2011 he worked as a senior computer scientist at NASA Ames Research Center, and became a visiting scholar at the Max Planck Institute. He spent 2010–11 as Ulam Scholar at the Center for Nonlinear Studies at Los Alamos.
He joined the faculty of the Santa Fe Institute in 2011 and became a professor there in September 2013. His research interests have included statistics, game theory, machine learning applications, information theory, optimization methods and complex systems theory.

"No free lunch"

One of Wolpert's most discussed achievements is the "no free lunch" theorem in search and optimization. According to this theorem, all algorithms for search and optimization perform equally well when averaged over all problems in the class for which they are designed. The theorem holds only under certain conditions that are rarely met precisely in real life, although it has been argued that they can be met approximately. The theorem lies within the domain of computer science, but a weaker version, known as the "folkloric no free lunch theorem", has been drawn upon by William A. Dembski in support of intelligent design. This use of the theorem has been rejected by Wolpert himself and others.
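The averaging claim can be checked exhaustively in a toy setting. The sketch below (an illustrative construction, not Wolpert's formal proof) enumerates every function from a three-point domain to {0, 1} and shows that two different fixed, non-repeating search orders achieve identical average performance over that complete set of problems:

```python
from itertools import product

# Toy check of the no-free-lunch result: over ALL functions from a
# small finite domain to {0, 1}, any two non-repeating search orders
# have the same average performance. Domain size, budget, and the two
# search orders below are arbitrary illustrative choices.

domain = [0, 1, 2]

def best_found(order, f, budget=2):
    """Best (minimum) value seen after `budget` evaluations of f."""
    return min(f[x] for x in order[:budget])

order_a = [0, 1, 2]   # one search algorithm's visiting order
order_b = [2, 0, 1]   # a different visiting order

# Enumerate every function f: domain -> {0, 1} as a tuple of outputs.
all_functions = list(product([0, 1], repeat=len(domain)))

avg_a = sum(best_found(order_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(best_found(order_b, f) for f in all_functions) / len(all_functions)

assert avg_a == avg_b  # identical averages, as the theorem predicts
```

Any other pair of visiting orders gives the same equality, because averaging over the full set of functions washes out whatever structure a particular algorithm exploits.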

Limitation on knowledge

Wolpert has put forward a formal argument showing that it is in principle impossible for any intellect to know everything about the universe of which it forms a part, in effect disproving the possibility of "Laplace's demon". This has been seen as an extension of the limitative theorems of the twentieth century, such as those of Heisenberg and Gödel. In 2018 Wolpert published a proof establishing fundamental limits of scientific knowledge.

Machine learning

Wolpert made many contributions to the early work on machine learning. These include the first Bayesian estimator of the entropy of a distribution based on samples of the distribution, disproof of formal claims that the "evidence procedure" is equivalent to hierarchical Bayes, a Bayesian alternative to the chi-squared test, a proof that there is no prior for which the bootstrap procedure is Bayes-optimal, and Bayesian extensions of the bias-plus-variance decomposition. Most prominently, he introduced "stacked generalization", a more sophisticated version of cross-validation that uses held-in / held-out partitions of a data set to combine learning algorithms rather than just choose one of them. This work was developed further by Breiman, Smyth, Clarke and many others, and in particular the top two winners of the 2009 Netflix competition made extensive use of stacked generalization.
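The held-in / held-out scheme behind stacked generalization can be sketched in a few lines. In this minimal illustration (the two base learners, the leave-one-out partitioning, and the least-squares combiner are simplifying assumptions, not Wolpert's original experimental setup), base learners are trained on held-in folds, their held-out predictions form a new data set, and a second-level model learns how to combine them:

```python
# Minimal sketch of stacked generalization: level-0 learners are
# trained on held-in partitions, their held-out predictions become the
# inputs to a level-1 combiner. Data and learners are illustrative.

def fit_slope(points):
    """Least-squares fit of y = a*x (no intercept)."""
    a = sum(x * y for x, y in points) / sum(x * x for x, _ in points)
    return lambda x: a * x

def fit_mean(points):
    """Constant predictor: the mean of the training targets."""
    m = sum(y for _, y in points) / len(points)
    return lambda x: m

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1), (6, 11.9)]

# Level 0: leave-one-out partitions yield honest held-out predictions.
meta_rows = []
for i, (x, y) in enumerate(data):
    held_in = data[:i] + data[i + 1:]
    h1, h2 = fit_slope(held_in), fit_mean(held_in)
    meta_rows.append((h1(x), h2(x), y))

# Level 1: least-squares weights combining the two base learners,
# solved in closed form from the 2x2 normal equations.
s11 = sum(p1 * p1 for p1, _, _ in meta_rows)
s12 = sum(p1 * p2 for p1, p2, _ in meta_rows)
s22 = sum(p2 * p2 for _, p2, _ in meta_rows)
s1y = sum(p1 * y for p1, _, y in meta_rows)
s2y = sum(p2 * y for _, p2, y in meta_rows)
det = s11 * s22 - s12 * s12
w1 = (s1y * s22 - s2y * s12) / det
w2 = (s11 * s2y - s12 * s1y) / det

# Final model: base learners refit on all data, combined by the weights.
h1, h2 = fit_slope(data), fit_mean(data)
def stacked(x):
    return w1 * h1(x) + w2 * h2(x)
```

Because the combiner is fit by unconstrained least squares on the held-out predictions, its error on that meta data set can be no worse than either base learner's alone; this is the sense in which stacking combines learning algorithms rather than just choosing one of them.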

Academic memberships