Radial basis function


A radial basis function is a real-valued function $\varphi$ whose value depends only on the distance between the input and some fixed point, either the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$, or some other fixed point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x} - \mathbf{c}\|)$. Any function $\varphi$ that satisfies the property $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$ is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. Radial basis functions are often used as a collection which forms a basis for some function space of interest, hence the name.
Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977.
RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.

Definition

A radial function is a function $\varphi : [0, \infty) \to \mathbb{R}$. When paired with a metric on a vector space $\|\cdot\| : V \to [0, \infty)$, a function of the form $\varphi_{\mathbf{c}}(\mathbf{x}) = \varphi(\|\mathbf{x} - \mathbf{c}\|)$ is said to be a radial kernel centered at $\mathbf{c}$. A radial function and the associated radial kernels are said to be radial basis functions if, for any finite set of distinct nodes $\{\mathbf{x}_k\}_{k=1}^{n} \subseteq V$, the interpolation matrix

$A = \begin{bmatrix} \varphi(\|\mathbf{x}_1 - \mathbf{x}_1\|) & \cdots & \varphi(\|\mathbf{x}_n - \mathbf{x}_1\|) \\ \vdots & \ddots & \vdots \\ \varphi(\|\mathbf{x}_1 - \mathbf{x}_n\|) & \cdots & \varphi(\|\mathbf{x}_n - \mathbf{x}_n\|) \end{bmatrix}$

is non-singular.
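For illustration, a minimal numerical sketch of this condition, assuming Python with NumPy, a Gaussian radial function, and an arbitrary set of distinct nodes (all names and values here are illustrative, not prescribed by the definition):

```python
import numpy as np

def gaussian_rbf(r, epsilon=1.0):
    # Gaussian radial function phi(r) = exp(-(epsilon * r)^2)
    return np.exp(-(epsilon * r) ** 2)

# A few distinct example nodes in R^2 (arbitrary choice)
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Interpolation matrix A[i, j] = phi(||x_j - x_i||)
dists = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
A = gaussian_rbf(dists)

# For distinct nodes and a strictly positive definite kernel such as the
# Gaussian, A is non-singular, so given data can be interpolated exactly.
print(np.linalg.cond(A))           # finite condition number
f = np.array([0.0, 1.0, 1.0, 0.0])
w = np.linalg.solve(A, f)          # coefficients of the radial-kernel interpolant
print(w)
```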

Examples

Commonly used types of radial basis functions include (writing $r = \|\mathbf{x} - \mathbf{x}_i\|$ and using $\varepsilon$ for a shape parameter that scales the input of the radial kernel):

Infinitely smooth RBFs. These radial basis functions are from $C^{\infty}(\mathbb{R})$ and are strictly positive definite functions that require tuning a shape parameter $\varepsilon$:
- Gaussian: $\varphi(r) = e^{-(\varepsilon r)^2}$
- Multiquadric: $\varphi(r) = \sqrt{1 + (\varepsilon r)^2}$
- Inverse quadratic: $\varphi(r) = \frac{1}{1 + (\varepsilon r)^2}$
- Inverse multiquadric: $\varphi(r) = \frac{1}{\sqrt{1 + (\varepsilon r)^2}}$

Polyharmonic splines. These take the form $\varphi(r) = r^k$ for several choices of odd $k = 1, 3, 5, \dotsc$ and $\varphi(r) = r^k \ln(r)$ with several choices of even $k = 2, 4, 6, \dotsc$; the thin plate spline $\varphi(r) = r^2 \ln(r)$ is a special polyharmonic spline.

Compactly supported RBFs, such as the bump function
$\varphi(r) = \begin{cases} \exp\left(-\frac{1}{1 - (\varepsilon r)^2}\right) & \text{for } r < \frac{1}{\varepsilon}, \\ 0 & \text{otherwise.} \end{cases}$
These RBFs are compactly supported and thus are non-zero only within a radius of $1/\varepsilon$, and thus have sparse differentiation matrices.
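A short sketch of some of these kernels as ordinary functions, assuming Python with NumPy (the default shape parameter and exponent are arbitrary):

```python
import numpy as np

def gaussian(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

def multiquadric(r, eps=1.0):
    return np.sqrt(1.0 + (eps * r) ** 2)

def inverse_quadratic(r, eps=1.0):
    return 1.0 / (1.0 + (eps * r) ** 2)

def inverse_multiquadric(r, eps=1.0):
    return 1.0 / np.sqrt(1.0 + (eps * r) ** 2)

def polyharmonic(r, k=3):
    # r^k for odd k; r^k * ln(r) for even k (taken as 0 at r = 0 by continuity)
    r = np.asarray(r, dtype=float)
    if k % 2 == 1:
        return r ** k
    return r ** k * np.log(np.where(r > 0.0, r, 1.0))

def bump(r, eps=1.0):
    # Compactly supported: non-zero only for r < 1/eps
    arg = (eps * np.asarray(r, dtype=float)) ** 2
    return np.where(arg < 1.0, np.exp(-1.0 / np.maximum(1.0 - arg, 1e-300)), 0.0)

r = np.linspace(0.0, 2.0, 5)
print(gaussian(r), bump(r))
```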

Approximation

Radial basis functions are typically used to build up function approximations of the form

$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \varphi(\|\mathbf{x} - \mathbf{x}_i\|),$

where the approximating function $y(\mathbf{x})$ is represented as a sum of $N$ radial basis functions, each associated with a different center $\mathbf{x}_i$ and weighted by an appropriate coefficient $w_i$. The weights $w_i$ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights.
Approximation schemes of this kind have been used in particular in time series prediction, in the control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, and in 3D reconstruction in computer graphics.
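As a rough sketch of such a least-squares fit, assuming Python with NumPy, Gaussian basis functions, and centers placed at the sample points (all of these choices are illustrative):

```python
import numpy as np

def fit_rbf_weights(x, y, centers, eps=1.0):
    # Design matrix Phi[j, i] = phi(|x_j - c_i|) with a Gaussian radial function
    r = np.abs(x[:, None] - centers[None, :])
    Phi = np.exp(-(eps * r) ** 2)
    # The approximant is linear in the weights, so ordinary linear least
    # squares yields the weight estimates directly.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def eval_rbf(x, centers, w, eps=1.0):
    r = np.abs(np.asarray(x)[:, None] - centers[None, :])
    return np.exp(-(eps * r) ** 2) @ w

# Approximate sin(x) on [0, 2*pi] from 20 slightly noisy samples
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 20)
y = np.sin(x) + 0.01 * rng.standard_normal(x.size)
w = fit_rbf_weights(x, y, centers=x)
print(np.max(np.abs(eval_rbf(x, x, w) - y)))  # residual at the sample points
```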

RBF Network

The sum

$y(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \varphi(\|\mathbf{x} - \mathbf{x}_i\|)$

can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number $N$ of radial basis functions is used.
The approximant $y(\mathbf{x})$ is differentiable with respect to the weights $w_i$. The weights could thus be learned using any of the standard iterative methods for neural networks.
Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically. However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.
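A minimal sketch of such iterative training, assuming Python with NumPy, Gaussian activations, fixed centers, and plain gradient descent on a squared-error loss (one of many possible schemes, not a prescribed algorithm):

```python
import numpy as np

def activations(x, centers, eps=3.0):
    # Hidden-layer outputs Phi[j, i] = exp(-(eps * |x_j - c_i|)^2)
    r = np.abs(x[:, None] - centers[None, :])
    return np.exp(-(eps * r) ** 2)

# Training data: samples of an assumed target function
x = np.linspace(-1.0, 1.0, 40)
t = np.cos(3.0 * x)

centers = np.linspace(-1.0, 1.0, 10)   # fixed centers (an assumption)
Phi = activations(x, centers)
w = np.zeros(centers.size)

# Gradient descent on E(w) = 0.5 * ||Phi w - t||^2; the gradient
# Phi^T (Phi w - t) exists because the approximant is differentiable in w.
lr = 0.01
for _ in range(2000):
    residual = Phi @ w - t
    w -= lr * (Phi.T @ residual)

print(np.mean((Phi @ w - t) ** 2))  # training error decreases toward the least-squares fit
```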