Leave-one-out error


Leave-one-out error can refer to the following:

- Leave-one-out cross-validation stability (CVloo): an algorithm $f$ has CVloo stability $\beta_{CV}$ with respect to the loss function $V$ if, for all $i \in \{1, \ldots, m\}$,
$$\mathbb{P}_S\left\{\left|V(f_S, z_i) - V(f_{S^{|i}}, z_i)\right| \leq \beta_{CV}\right\} \geq 1 - \delta_{CV},$$
with $\beta_{CV}$ and $\delta_{CV}$ going to zero for $m \to \infty$.
- Expected-to-leave-one-out error stability ($\mathrm{Eloo}_{err}$): an algorithm $f$ has $\mathrm{Eloo}_{err}$ stability if for each $m$ there exist $\beta_{EL}^m$ and $\delta_{EL}^m$ such that
$$\mathbb{P}_S\left\{\left|I[f_S] - \frac{1}{m}\sum_{i=1}^m V(f_{S^{|i}}, z_i)\right| \leq \beta_{EL}^m\right\} \geq 1 - \delta_{EL}^m,$$
with $\beta_{EL}^m$ and $\delta_{EL}^m$ going to zero for $m \to \infty$.
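To make the stability quantity concrete, the following sketch measures the leave-one-out deviation $|V(f_S, z_i) - V(f_{S^{|i}}, z_i)|$ for a toy learner. The choice of learner (a constant mean predictor) and of loss (squared loss) are illustrative assumptions, not part of the definitions above; any deterministic, symmetric algorithm could be substituted.

```python
import random

def V(prediction, y):
    return (prediction - y) ** 2  # squared loss (illustrative assumption)

def learn(S):
    # Toy "learning algorithm": maps S to the constant hypothesis
    # f_S(x) = mean of the training labels y_i.
    mean_y = sum(y for _, y in S) / len(S)
    return lambda x: mean_y

def max_loo_deviation(S):
    # max over i of |V(f_S, z_i) - V(f_{S^{|i}}, z_i)|, where S^{|i}
    # is S with its i-th element removed.
    f_S = learn(S)
    devs = []
    for i, (x_i, y_i) in enumerate(S):
        f_i = learn(S[:i] + S[i + 1:])
        devs.append(abs(V(f_S(x_i), y_i) - V(f_i(x_i), y_i)))
    return max(devs)

random.seed(1)
for m in (10, 100, 1000):
    S = [(0.0, random.gauss(0, 1)) for _ in range(m)]
    print(m, max_loo_deviation(S))
```

For this learner the deviation shrinks as the sample size grows, which is the behaviour the stability definition asks of $\beta_{CV}$.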

Preliminary notations

With $X$ and $Y \subset \mathbb{R}$ being respectively an input space and an output space, we consider a training set

$$S = \{z_1 = (x_1, y_1), \ldots, z_m = (x_m, y_m)\}$$

of size $m$ in $Z = X \times Y$, drawn independently and identically distributed from an unknown distribution $D$. A learning algorithm is then a function $f$ from $Z^m$ into $\mathcal{F} \subset Y^X$ which maps a learning set $S$ onto a function $f_S$ from the input space $X$ to the output space $Y$. To avoid complex notation, we consider only deterministic algorithms. It is also assumed that the algorithm is symmetric with respect to $S$, i.e. it does not depend on the order of the elements in the training set. Furthermore, we assume that all functions are measurable and all sets are countable, which does not limit the interest of the results presented here.
The loss of a hypothesis $f$ with respect to an example $z = (x, y)$ is then defined as $V(f, z) = V(f(x), y)$.
The empirical error of $f$ can then be written as $I_S[f] = \frac{1}{m} \sum_{i=1}^m V(f, z_i)$.
The true error of $f$ is $I[f] = \mathbb{E}_z\, V(f, z)$.
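The empirical error and the leave-one-out estimate of the true error can be sketched in a few lines. The squared loss and the mean predictor below are illustrative assumptions (they are not prescribed by the text); the leave-one-out estimate averages the loss of each hypothesis trained with one example removed, evaluated on that held-out example.

```python
import random

def V(prediction, y):
    return (prediction - y) ** 2  # squared loss (illustrative assumption)

def learn(S):
    # Toy learning algorithm: S maps to the constant hypothesis
    # f_S(x) = mean of the labels y_i.
    mean_y = sum(y for _, y in S) / len(S)
    return lambda x: mean_y

def empirical_error(f, S):
    # I_S[f] = (1/m) * sum_i V(f, z_i)
    return sum(V(f(x), y) for x, y in S) / len(S)

def loo_error(S):
    # Average loss on z_i of the hypothesis trained on S with
    # the i-th element removed.
    total = 0.0
    for i, (x_i, y_i) in enumerate(S):
        f_i = learn(S[:i] + S[i + 1:])
        total += V(f_i(x_i), y_i)
    return total / len(S)

random.seed(0)
S = [(x, 2 * x + random.gauss(0, 0.1)) for x in (random.random() for _ in range(20))]
f_S = learn(S)
print(empirical_error(f_S, S), loo_error(S))
```

For this particular learner the leave-one-out error exceeds the empirical error by the factor $(m/(m-1))^2$, a small illustration of the optimism of the training-set estimate.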
Given a training set $S$ of size $m$, we will build, for all $i = 1, \ldots, m$, modified training sets as follows: