Bernstein polynomial


In the mathematical field of numerical analysis, a Bernstein polynomial, named after Sergei Natanovich Bernstein, is a polynomial expressed in Bernstein form, that is, as a linear combination of Bernstein basis polynomials.
A numerically stable way to evaluate polynomials in Bernstein form is de Casteljau's algorithm.
Polynomials in Bernstein form were first used by Bernstein in a constructive proof of the Weierstrass approximation theorem. With the advent of computer graphics, Bernstein polynomials, restricted to the interval [0, 1], became important in the form of Bézier curves.
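De Casteljau's algorithm mentioned above can be sketched as follows (a minimal illustration; the function name and coefficient convention are mine, not part of the original text):

```python
def de_casteljau(beta, x):
    """Evaluate the polynomial with Bernstein coefficients `beta`
    (degree len(beta) - 1) at x by repeated linear interpolation."""
    beta = list(beta)          # work on a copy
    n = len(beta)
    for j in range(1, n):
        for k in range(n - j):
            # each pass replaces adjacent pairs by their interpolant
            beta[k] = beta[k] * (1 - x) + beta[k + 1] * x
    return beta[0]

# Coefficients (0, 0, 1) give the degree-2 Bernstein form of x^2:
# de_casteljau([0, 0, 1], 0.5) == 0.25
```

The algorithm avoids forming large binomial coefficients, which is why it is numerically stable compared with expanding into the monomial basis.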

Definition

The n + 1 Bernstein basis polynomials of degree n are defined as

b_{\nu,n}(x) = \binom{n}{\nu} x^{\nu} (1 - x)^{n - \nu}, \qquad \nu = 0, \ldots, n,

where \binom{n}{\nu} is a binomial coefficient. So, for example,

b_{2,5}(x) = \binom{5}{2} x^2 (1 - x)^3 = 10\, x^2 (1 - x)^3.
The first few Bernstein basis polynomials for blending 1, 2, 3 or 4 values together are:

b_{0,0}(x) = 1,
b_{0,1}(x) = 1 - x, \quad b_{1,1}(x) = x,
b_{0,2}(x) = (1 - x)^2, \quad b_{1,2}(x) = 2x(1 - x), \quad b_{2,2}(x) = x^2,
b_{0,3}(x) = (1 - x)^3, \quad b_{1,3}(x) = 3x(1 - x)^2, \quad b_{2,3}(x) = 3x^2(1 - x), \quad b_{3,3}(x) = x^3.
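The defining formula translates directly into code; the sketch below (helper name mine) evaluates a basis polynomial and checks the partition-of-unity property stated later in this section:

```python
from math import comb

def bernstein_basis(nu, n, x):
    """b_{nu,n}(x) = C(n, nu) * x^nu * (1 - x)^(n - nu)."""
    return comb(n, nu) * x**nu * (1 - x)**(n - nu)

# The degree-n basis polynomials sum to 1 at every x (partition of unity):
x = 0.3
total = sum(bernstein_basis(nu, 5, x) for nu in range(6))
# total is 1.0 up to floating-point rounding
```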
The Bernstein basis polynomials of degree n form a basis for the vector space \Pi_n of polynomials of degree at most n with real coefficients. A linear combination of Bernstein basis polynomials

B_n(x) = \sum_{\nu=0}^{n} \beta_{\nu}\, b_{\nu,n}(x)

is called a Bernstein polynomial or polynomial in Bernstein form of degree n. The coefficients \beta_{\nu} are called Bernstein coefficients or Bézier coefficients.
The first few Bernstein basis polynomials from above in monomial form are:

b_{0,0}(x) = 1,
b_{0,1}(x) = 1 - x, \quad b_{1,1}(x) = x,
b_{0,2}(x) = 1 - 2x + x^2, \quad b_{1,2}(x) = 2x - 2x^2, \quad b_{2,2}(x) = x^2,
b_{0,3}(x) = 1 - 3x + 3x^2 - x^3, \quad b_{1,3}(x) = 3x - 6x^2 + 3x^3, \quad b_{2,3}(x) = 3x^2 - 3x^3, \quad b_{3,3}(x) = x^3.

Properties

The Bernstein basis polynomials have the following properties:

- b_{\nu,n}(x) = 0 if \nu < 0 or \nu > n,
- b_{\nu,n}(x) \ge 0 for x \in [0, 1],
- b_{\nu,n}(1 - x) = b_{n-\nu,n}(x) (symmetry),
- b_{\nu,n}(x) has a root of multiplicity \nu at x = 0 and a root of multiplicity n - \nu at x = 1 (for 0 \le \nu \le n),
- \sum_{\nu=0}^{n} b_{\nu,n}(x) = 1 (partition of unity),
- b_{\nu,n}(x) = (1 - x)\, b_{\nu,n-1}(x) + x\, b_{\nu-1,n-1}(x) (recurrence, underlying de Casteljau's algorithm).
Let ƒ be a continuous function on the interval [0, 1]. Consider the Bernstein polynomial

B_n(f)(x) = \sum_{\nu=0}^{n} f\!\left(\frac{\nu}{n}\right) b_{\nu,n}(x).

It can be shown that

\lim_{n \to \infty} B_n(f) = f

uniformly on the interval [0, 1].
Bernstein polynomials thus provide one way to prove the Weierstrass approximation theorem that every real-valued continuous function on a real interval [a, b] can be uniformly approximated by polynomial functions over [a, b].
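The uniform convergence B_n(f) → f can be observed numerically. The sketch below (names mine) measures the sup-norm error on a grid for f(x) = |x − 1/2|, which is continuous but not differentiable at the kink:

```python
from math import comb

def bernstein_poly(f, n, x):
    """B_n(f)(x) = sum over nu of f(nu/n) * b_{nu,n}(x)."""
    return sum(f(nu / n) * comb(n, nu) * x**nu * (1 - x)**(n - nu)
               for nu in range(n + 1))

f = lambda t: abs(t - 0.5)              # continuous, with a kink at 1/2
grid = [i / 100 for i in range(101)]    # sample points in [0, 1]
errs = [max(abs(bernstein_poly(f, n, x) - f(x)) for x in grid)
        for n in (10, 100, 1000)]
print(errs)   # the sup-norm error on the grid shrinks as n grows
```

The convergence is slow for non-smooth f (roughly like 1/sqrt(n) near the kink), which is consistent with the probabilistic view below: the error at x is governed by the spread of a binomial distribution around its mean.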
A more general statement for a function with continuous kth derivative is

\left\| (B_n f)^{(k)} \right\|_{\infty} \le \frac{(n)_k}{n^k} \left\| f^{(k)} \right\|_{\infty} \quad \text{and} \quad \left\| f^{(k)} - (B_n f)^{(k)} \right\|_{\infty} \to 0,

where additionally

\frac{(n)_k}{n^k} = \left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots\left(1 - \frac{k-1}{n}\right)

is an eigenvalue of B_n; the corresponding eigenfunction is a polynomial of degree k.

Probabilistic proof

This proof follows Bernstein's original proof of 1912. See also Feller or Koralov & Sinai.
Suppose K is a random variable distributed as the number of successes in n independent Bernoulli trials with probability x of success on each trial; in other words, K has a binomial distribution with parameters n and x. Then we have the expected value E[K/n] = x and

P(K = k) = \binom{n}{k} x^k (1 - x)^{n-k} = b_{k,n}(x).
By the weak law of large numbers of probability theory,

\lim_{n \to \infty} P\!\left( \left| \frac{K}{n} - x \right| > \delta \right) = 0

for every δ > 0. Moreover, this relation holds uniformly in x, which can be seen from its proof via Chebyshev's inequality, taking into account that the variance of K/n, equal to x(1 − x)/n, is bounded from above by 1/(4n) irrespective of x.
Because ƒ, being continuous on a closed bounded interval, must be uniformly continuous on that interval, one infers a statement of the form

\lim_{n \to \infty} P\!\left( \left| f\!\left(\frac{K}{n}\right) - f(x) \right| > \varepsilon \right) = 0

uniformly in x for each ε > 0. Taking into account that ƒ is bounded on the interval, one gets for the expectation

\lim_{n \to \infty} E\!\left( \left| f\!\left(\frac{K}{n}\right) - f(x) \right| \right) = 0
uniformly in x. To this end one splits the sum for the expectation into two parts. On one part the difference does not exceed ε; this part cannot contribute more than ε.
On the other part the difference exceeds ε, but does not exceed 2M, where M is an upper bound for |ƒ|; this part cannot contribute more than 2M times the small probability that the difference exceeds ε.
Finally, one observes that the absolute value of the difference between expectations never exceeds the expectation of the absolute value of the difference, and

\left| E\!\left( f\!\left(\frac{K}{n}\right) \right) - f(x) \right| \le E\!\left( \left| f\!\left(\frac{K}{n}\right) - f(x) \right| \right),

where E(f(K/n)) = \sum_{k=0}^{n} f(k/n)\, b_{k,n}(x) = B_n(f)(x).
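The identity B_n(f)(x) = E[f(K/n)] that drives the proof can be checked numerically. The sketch below (helper names mine) computes the expectation through the binomial pmf and compares it with the standard closed form for the second moment, E[(K/n)^2] = x^2 + x(1 − x)/n:

```python
from math import comb

def binom_pmf(k, n, x):
    """P(K = k) for K ~ Binomial(n, x); this equals b_{k,n}(x)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

f = lambda t: t * t
n, x = 20, 0.3
# E[f(K/n)] computed term by term -- this sum IS the Bernstein polynomial B_n(f)(x)
expectation = sum(f(k / n) * binom_pmf(k, n, x) for k in range(n + 1))
# Closed form from the mean and variance of the binomial distribution
closed_form = x**2 + x * (1 - x) / n
```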

Elementary proof

The probabilistic proof can also be rephrased in an elementary way, using the underlying probabilistic ideas but proceeding by direct verification:
The following identities can be verified:

(1) \sum_{k} \binom{n}{k} x^k (1 - x)^{n-k} = 1

(2) \sum_{k} \frac{k}{n} \binom{n}{k} x^k (1 - x)^{n-k} = x

(3) \sum_{k} \left(\frac{k}{n}\right)^{2} \binom{n}{k} x^k (1 - x)^{n-k} = \frac{n-1}{n} x^2 + \frac{x}{n}.

In fact, by the binomial theorem

(1 + t)^n = \sum_{k} \binom{n}{k} t^k,

and this equation can be applied twice to t \frac{d}{dt}. The identities (1), (2), and (3) follow easily using the substitution t = x/(1 - x).
Within these three identities, use the basis polynomial notation

b_{k,n}(x) = \binom{n}{k} x^k (1 - x)^{n-k},

and let

f_n(x) = \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right) b_{k,n}(x).

Thus, by identity (1),

f_n(x) - f(x) = \sum_{k=0}^{n} \left[ f\!\left(\frac{k}{n}\right) - f(x) \right] b_{k,n}(x),

so that

|f_n(x) - f(x)| \le \sum_{k=0}^{n} \left| f\!\left(\frac{k}{n}\right) - f(x) \right| b_{k,n}(x).

Since f is uniformly continuous, given ε > 0, there is a δ > 0 such that |f(a) - f(b)| < ε whenever |a - b| < δ. Moreover, by continuity, M = \sup |f| < \infty. But then

|f_n(x) - f(x)| \le \sum_{k:\, |k/n - x| < \delta} \left| f\!\left(\frac{k}{n}\right) - f(x) \right| b_{k,n}(x) + \sum_{k:\, |k/n - x| \ge \delta} \left| f\!\left(\frac{k}{n}\right) - f(x) \right| b_{k,n}(x).

The first sum is less than ε. On the other hand, since |f(k/n) - f(x)| \le 2M, the second sum is bounded by 2M times

\sum_{k:\, |k/n - x| \ge \delta} b_{k,n}(x) \le \sum_{k=0}^{n} \left( \frac{k/n - x}{\delta} \right)^{2} b_{k,n}(x) = \frac{1}{\delta^2} \cdot \frac{x(1 - x)}{n} \le \frac{1}{4 n \delta^2},

where the middle equality uses identities (1), (2), and (3).
It follows that the polynomials f_n tend to f uniformly.
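The Chebyshev-style tail bound used for the second sum above can also be checked directly for concrete parameters (a sketch; helper name and the chosen n, x, δ are mine):

```python
from math import comb

def b(k, n, x):
    """Bernstein basis polynomial b_{k,n}(x)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

n, x, delta = 50, 0.3, 0.1
# total basis mass on the indices with |k/n - x| >= delta
tail = sum(b(k, n, x) for k in range(n + 1) if abs(k / n - x) >= delta)
bound = x * (1 - x) / (n * delta**2)
# tail <= bound, as guaranteed by the inequality above
```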