Lyapunov exponent


In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector $\delta\mathbf{Z}_0$ diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by
$$|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$$
where $\lambda$ is the Lyapunov exponent.
The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a spectrum of Lyapunov exponents, equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic. Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.
The exponent is named after Aleksandr Lyapunov.

Definition of the maximal Lyapunov exponent

The maximal Lyapunov exponent can be defined as follows:
$$\lambda = \lim_{t\to\infty} \lim_{|\delta\mathbf{Z}_0|\to 0} \frac{1}{t} \ln \frac{|\delta\mathbf{Z}(t)|}{|\delta\mathbf{Z}_0|}.$$
The limit $|\delta\mathbf{Z}_0| \to 0$ ensures the validity of the linear approximation
at any time.
For a discrete-time system $x_{n+1} = f(x_n)$,
for an orbit starting with $x_0$ this translates into:
$$\lambda(x_0) = \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln \left| f'(x_i) \right|.$$
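As an illustration of the discrete-time formula (an example added here, not part of the original text), the following Python sketch estimates the maximal Lyapunov exponent of the logistic map $x_{n+1} = r x_n (1 - x_n)$; the parameter $r = 4$ and the iteration counts are arbitrary illustrative choices.

import math

def logistic_mle(r=4.0, x0=0.3, n_transient=1_000, n_iter=100_000):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x_{n+1} = r*x*(1-x) by averaging ln|f'(x_i)| along an orbit."""
    x = x0
    for _ in range(n_transient):          # discard an initial transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # f'(x) = r*(1 - 2x)
        x = r * x * (1.0 - x)
    return total / n_iter

print(logistic_mle())   # for r = 4 the exact value is ln 2 ~ 0.693

For $r = 4$ the orbit is chaotic and the average converges to $\ln 2$; for parameter values with a stable periodic orbit the same average becomes negative.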

Definition of the Lyapunov spectrum

For a dynamical system with evolution equation $\dot{x}_i = f_i(x)$ in an n-dimensional phase space, the spectrum of Lyapunov exponents
$$\{\lambda_1, \lambda_2, \ldots, \lambda_n\},$$
in general, depends on the starting point $x_0$. However, we will usually be interested in the attractor of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix
$$J_{ij}(t) = \left.\frac{\partial f_i(x)}{\partial x_j}\right|_{x(t)};$$
this Jacobian defines the evolution of the tangent vectors, given by the matrix $Y$, via the equation
$$\dot{Y} = J\,Y,$$
with the initial condition $Y_{ij}(0) = \delta_{ij}$. The matrix $Y$ describes how a small change at the point $x(0)$ propagates to the final point $x(t)$. The limit
$$\Lambda = \lim_{t\to\infty} \frac{1}{2t} \log\!\left(Y(t)\,Y(t)^{\mathsf T}\right)$$
defines a matrix $\Lambda$ (the conditions for the existence of the limit are given by the Oseledets theorem). The Lyapunov exponents $\lambda_i$ are defined by the eigenvalues of $\Lambda$.
The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system.
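For a discrete-time map the matrix $Y$ is simply the product of the Jacobians along the orbit, and the exponents are the logarithms of the singular values of $Y$ divided by the number of steps. The Python sketch below (an illustration added here, using the Hénon map with the customary parameters $a = 1.4$, $b = 0.3$) applies this definition directly; because the entries of $Y$ grow and shrink exponentially, this naive product is only usable for modest step counts, which is why practical computations use the reorthonormalization described under Numerical calculation below.

import numpy as np

def henon_spectrum_direct(a=1.4, b=0.3, n_steps=60, n_transient=100):
    """Estimate the Lyapunov spectrum of the Henon map from the product Y of
    Jacobians along an orbit: exponents = log(singular values of Y) / n_steps.
    Only feasible for small n_steps, since Y over/underflows exponentially."""
    x, y = 0.0, 0.0
    for _ in range(n_transient):                  # settle onto the attractor
        x, y = 1.0 - a * x * x + y, b * x
    Y = np.eye(2)
    for _ in range(n_steps):
        J = np.array([[-2.0 * a * x, 1.0],        # Jacobian of the Henon map
                      [b,            0.0]])
        Y = J @ Y
        x, y = 1.0 - a * x * x + y, b * x
    return np.log(np.linalg.svd(Y, compute_uv=False)) / n_steps

print(henon_spectrum_direct())   # crude estimate; reference values are about 0.42 and -1.62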

Lyapunov exponent for time-varying linearization

To introduce the Lyapunov exponent for a time-varying linearization, consider a fundamental matrix $X(t)$ of the first-approximation system (for example, for the linearization along a stationary solution, $X(t)$ consists of linearly independent solutions of the linearized system). The singular values $\{\alpha_j(X(t))\}_{j=1}^{n}$ of the matrix $X(t)$ are the square roots of the eigenvalues of the matrix $X(t)^{*}X(t)$, and the largest Lyapunov exponent is
$$\lambda_{\max} = \limsup_{t\to\infty} \frac{1}{t} \ln \alpha_{\max}\!\left(X(t)\right).$$
Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant and periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable.
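As a simple special case (added here for illustration): for a first-approximation system with constant coefficients the fundamental matrix is the matrix exponential, such systems are regular, and the Lyapunov exponents reduce to the real parts of the eigenvalues of the coefficient matrix,
$$\dot{x} = A x, \qquad X(t) = e^{At}, \qquad \lambda_i = \operatorname{Re}\,\mu_i(A),$$
where $\mu_i(A)$ are the eigenvalues of $A$.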
Later, it was stated by O. Perron that the requirement of regularity of the first approximation is substantial.

Perron effects of largest Lyapunov exponent sign inversion

In 1930 O. Perron constructed an example of a second-order system where the first approximation has negative Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. It is also possible to construct a reverse example in which the first approximation has positive Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov stable.
The effect of sign inversion of Lyapunov exponents of solutions of the original system and of the system of the first approximation with the same initial data was subsequently called the Perron effect.
Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that
a positive largest Lyapunov exponent does not, in general, indicate chaos.
Therefore, time-varying linearization requires additional justification.

Basic properties

If the system is conservative, a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative.
If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero—the Lyapunov exponent corresponding to the eigenvalue of $\Lambda$ with an eigenvector in the direction of the flow.
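For example (an illustration added here, using the well-known Lorenz system): the divergence of the Lorenz vector field is constant, so the sum of its three Lyapunov exponents is fixed by the parameters and is negative for the usual dissipative choices,
$$\dot{x} = \sigma(y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z, \qquad \nabla\!\cdot f = -(\sigma + 1 + \beta),$$
so that $\lambda_1 + \lambda_2 + \lambda_3 = -(\sigma + 1 + \beta) < 0$; with the classical values $\sigma = 10$, $\beta = 8/3$ this sum is approximately $-13.67$.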

Significance of the Lyapunov spectrum

The Lyapunov spectrum can be used to give an estimate of the rate of entropy production,
of the fractal dimension, and of the Hausdorff dimension of the considered dynamical system. In particular from the knowledge
of the Lyapunov spectrum it is possible to obtain the so-called
Lyapunov dimension $D_{KY}$,
which is defined as follows:
$$D_{KY} = k + \frac{\sum_{i=1}^{k} \lambda_i}{|\lambda_{k+1}|},$$
where $k$ is the maximum integer such that the sum of the $k$ largest exponents is still non-negative. $D_{KY}$ represents an upper bound for the information dimension of the system. Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy according to Pesin's theorem.
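A minimal sketch of this definition in Python (added here for illustration; the function name is arbitrary and the sample spectrum uses commonly quoted estimates for the Lorenz attractor):

def kaplan_yorke_dimension(exponents):
    """Compute the Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum.
    k is the largest integer such that the sum of the k largest exponents
    is still non-negative."""
    lams = sorted(exponents, reverse=True)
    partial = 0.0
    k = 0
    for lam in lams:
        if partial + lam < 0.0:
            break
        partial += lam
        k += 1
    if k == len(lams):           # all partial sums non-negative
        return float(k)
    return k + partial / abs(lams[k])

# Commonly quoted estimates for the Lorenz attractor (sigma=10, rho=28, beta=8/3):
print(kaplan_yorke_dimension([0.906, 0.0, -14.57]))   # roughly 2.06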
Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, which is based on the direct Lyapunov method with special Lyapunov-like functions.
The Lyapunov exponents of a bounded trajectory
and the Lyapunov dimension of an attractor are invariant under diffeomorphisms of the phase space.
The multiplicative inverse of the largest Lyapunov exponent, $1/\lambda_{\max}$, is sometimes referred to in the literature as the Lyapunov time, and defines the characteristic e-folding time. For chaotic orbits, the Lyapunov time is finite, whereas for regular orbits it is infinite.

Numerical calculation

Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964. Currently, the most commonly used numerical procedure estimates the matrix $\Lambda$ based on averaging several finite-time approximations of the limit defining $\Lambda$.
One of the most widely used and effective numerical techniques to calculate the Lyapunov spectrum for a smooth dynamical system relies on periodic
Gram–Schmidt orthonormalization of the Lyapunov vectors to avoid a misalignment of all the vectors along the direction of maximal expansion.
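A minimal sketch of this procedure in Python, applied to the Hénon map for concreteness (the map, parameters, and iteration counts are illustrative choices; a QR decomposition is used as the orthonormalization step, which is equivalent to Gram–Schmidt):

import numpy as np

def henon_step(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

def henon_jacobian(x, y, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

def lyapunov_spectrum_qr(n_steps=100_000, n_transient=1_000):
    """Estimate the Lyapunov spectrum of the Henon map by evolving a set of
    tangent vectors and re-orthonormalizing them with a QR decomposition at
    every step; the exponents are the averaged logs of the diagonal of R."""
    x, y = 0.0, 0.0
    for _ in range(n_transient):                 # settle onto the attractor
        x, y = henon_step(x, y)
    Q = np.eye(2)                                # orthonormal tangent vectors
    sums = np.zeros(2)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(henon_jacobian(x, y) @ Q)
        sums += np.log(np.abs(np.diag(R)))       # local stretching factors
        x, y = henon_step(x, y)
    return sums / n_steps

print(lyapunov_spectrum_qr())   # roughly [ 0.42, -1.62 ] for a = 1.4, b = 0.3

The re-orthonormalization keeps the tangent vectors from all collapsing onto the direction of maximal expansion, which is exactly the misalignment problem described above.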
For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties with applying these methods, and such problems should be approached with care. The main difficulty is that the data does not fully explore the phase space; rather, it is confined to the attractor, which has very limited extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum, provided the data has a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored.

Local Lyapunov exponent

Whereas the Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes of interest to estimate the local predictability around a point $x_0$ in phase space. This may be done through the eigenvalues of the Jacobian matrix $J_0(x_0)$. These eigenvalues are also called local Lyapunov exponents.
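As a small illustrative sketch (the map and the sample point are arbitrary choices added here, not from the original text), the local exponents at a point can be read off from the eigenvalues of the Jacobian evaluated there:

import numpy as np

def local_lyapunov_exponents(jacobian, point):
    """Return log|eigenvalues| of the Jacobian at `point`, one common way
    to quantify local (one-step) predictability around that point."""
    eigvals = np.linalg.eigvals(jacobian(*point))
    return np.log(np.abs(eigvals))

# Henon-map Jacobian, evaluated at an arbitrary sample point.
henon_jac = lambda x, y, a=1.4, b=0.3: np.array([[-2.0 * a * x, 1.0],
                                                 [b,            0.0]])
print(local_lyapunov_exponents(henon_jac, (0.6, 0.2)))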

Conditional Lyapunov exponent

This term is normally used regarding synchronization of chaos, in which there are two systems that are coupled, usually in a unidirectional manner so that there is a drive system and a response system. The conditional exponents are those of the response system with the drive system treated as simply the source of a drive signal. Synchronization occurs when all of the conditional exponents are negative.

Software