Global optimization
Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function $g(x)$ is equivalent to the minimization of the function $f(x) := (-1)\cdot g(x)$.
Given a possibly nonlinear and non-convex continuous function $f:\Omega\subset\mathbb{R}^n\to\mathbb{R}$ with the global minimum $f^*$ and the set of all global minimizers $X^*$ in $\Omega$, the standard minimization problem can be given as
$$\min_{x\in\Omega} f(x),$$
that is, finding $f^*$ and a global minimizer in $X^*$; where $\Omega$ is a compact set defined by inequalities $g_i(x)\geqslant 0,\ i=1,\ldots,r$.
Global optimization is distinguished from local optimization by its focus on finding the minimum or maximum over the given set, as opposed to finding local minima or maxima. Finding an arbitrary local minimum is relatively straightforward by using classical local optimization methods. Finding the global minimum of a function is far more difficult: analytical methods are frequently not applicable, and the use of numerical solution strategies often leads to very hard challenges.
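To make the distinction concrete, here is a minimal sketch using SciPy (the test function, bounds, and start point are illustrative assumptions, not taken from this article): a classical local method converges to whichever local minimum is nearest its start point, while a global method searches the whole interval.

```python
# A minimal sketch contrasting local and global optimization on a
# multimodal test function; assumes SciPy is available.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def f(x):
    # One-dimensional multimodal function with several local minima;
    # its global minimum lies near x ~ -1.3.
    return x[0]**2 + 10.0 * np.sin(x[0])

# Local optimization: the result depends entirely on the start point.
local = minimize(f, x0=[4.0])        # converges to the local minimum near x ~ 3.8

# Global optimization: searches the whole interval [-10, 10].
glob = differential_evolution(f, bounds=[(-10.0, 10.0)])

print("local :", local.x, local.fun)
print("global:", glob.x, glob.fun)
```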
General theory
A recent approach to the global optimization problem is via minima distribution. In this work, a relationship between any continuous function $f$ on a compact set $\Omega\subset\mathbb{R}^n$ and its global minimum $f^*$ has been strictly established. As a typical case, it follows that
$$\lim_{k\to\infty}\int_\Omega f(x)\,m^{(k)}(x)\,\mathrm{d}x=f^*,\qquad m^{(k)}(x)=\frac{e^{-kf(x)}}{\int_\Omega e^{-kf(t)}\,\mathrm{d}t};$$
meanwhile,
$$\lim_{k\to\infty}m^{(k)}(x)=\begin{cases}\dfrac{1}{\mu(X^*)}, & x\in X^*,\\[1ex] 0, & x\in\Omega\setminus X^*,\end{cases}$$
where $\mu(X^*)$ is the $n$-dimensional Lebesgue measure of the set of minimizers $X^*\subset\Omega$. And if $f$ is not a constant on $\Omega$, the monotonic relationship
$$\int_\Omega f(x)\,m^{(k)}(x)\,\mathrm{d}x>\int_\Omega f(x)\,m^{(k')}(x)\,\mathrm{d}x>f^*$$
holds for all $k<k'$, which implies a series of monotonous containment relationships, and one of them is, for example,
$$\Omega\supseteq D^{(k)}\supseteq D^{(k')}\supseteq X^*,\quad 0<k<k',\qquad\text{where } D^{(k)}=\Big\{x\in\Omega: f(x)\leqslant\int_\Omega f(t)\,m^{(k)}(t)\,\mathrm{d}t\Big\}.$$
And we define a minima distribution to be a weak limit $m_{f,\Omega}$ such that the identity
$$\int_\Omega m_{f,\Omega}(x)\,\varphi(x)\,\mathrm{d}x=\lim_{k\to\infty}\int_\Omega m^{(k)}(x)\,\varphi(x)\,\mathrm{d}x$$
holds for every smooth function $\varphi$ with compact support in $\Omega$. Here are two immediate properties of $m_{f,\Omega}$:
- normalization, $\int_\Omega m_{f,\Omega}(x)\,\mathrm{d}x=1$;
- recovery of the global minimum, $\int_\Omega m_{f,\Omega}(x)\,f(x)\,\mathrm{d}x=f^*$.
As a comparison, the well-known relationship between any differentiable convex function and its minima is strictly established by the gradient. If $f$ is differentiable on a convex set $D$, then $f$ is convex if and only if
$$f(y)\geqslant f(x)+\nabla f(x)^{\mathsf{T}}(y-x)\quad\text{for all } x,y\in D;$$
thus, $\nabla f(x^*)=0$ implies that $f(y)\geqslant f(x^*)$ holds for all $y\in D$, i.e., $x^*$ is a global minimizer of $f$ on $D$.
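The limiting behavior above can be checked numerically. The sketch below (an illustration under assumed choices, not code from the cited work) approximates the mean value $\int_\Omega f\,m^{(k)}\,\mathrm{d}x$ on a uniform grid for $f(x)=(x^2-1)^2$ on $\Omega=[-2,2]$, for which $f^*=0$ and $X^*=\{-1,1\}$; the computed values decrease monotonically toward $f^*$ as $k$ grows, as the theory predicts.

```python
# Numerical illustration of  lim_{k->oo}  ∫ f e^{-kf} dx / ∫ e^{-kf} dx  =  f*.
# Uniform-grid quadrature; the grid spacing cancels in the ratio.
import numpy as np

x = np.linspace(-2.0, 2.0, 20001)      # compact set Omega = [-2, 2]
f = (x**2 - 1.0)**2                    # f* = 0, minimizers X* = {-1, +1}

for k in [1, 10, 100, 1000]:
    w = np.exp(-k * (f - f.min()))     # shifting by f.min() leaves the ratio
                                       # unchanged but avoids underflow
    print(k, (f * w).sum() / w.sum())  # decreases monotonically toward f* = 0
```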
Applications
Typical examples of global optimization applications include:
- Protein structure prediction
- Computational phylogenetics
- Traveling salesman problem and electrical circuit design
- Chemical engineering
- Safety verification, safety engineering
- Worst-case analysis
- Mathematical problems
- Object packing problems
- The starting point of several molecular dynamics simulations consists of an initial optimization of the energy of the system to be simulated.
- Spin glasses
- Calibration of radio propagation models and of many other models in the sciences and engineering
- Curve fitting, such as non-linear least squares analysis and other generalizations, used in fitting model parameters to experimental data in chemistry, physics, biology, economics, finance, medicine, astronomy, and engineering.
Deterministic methods
Inner and outer approximation
In both of these strategies, the set over which a function is to be optimized is approximated by polyhedra. In inner approximation, the polyhedra are contained in the set, while in outer approximation, the polyhedra contain the set.
Cutting-plane methods
The cutting-plane method is an umbrella term for optimization methods which iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are popularly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable, convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory and Václav Chvátal.
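As a concrete illustration of cuts refining an objective model, here is a minimal sketch of Kelley's cutting-plane method for a smooth convex function of one variable (the test function, interval, and tolerance are illustrative assumptions; SciPy's linprog solves the linear master problems):

```python
# Kelley's cutting-plane method: minimize a piecewise-linear lower model of a
# convex function, refined with one supporting line (cut) per iteration.
import numpy as np
from scipy.optimize import linprog

f  = lambda x: np.exp(x) - 2.0 * x          # convex, minimum at x = ln 2
df = lambda x: np.exp(x) - 2.0              # derivative used to build cuts

lo, hi = -2.0, 2.0                          # feasible interval for x
pts = [lo]                                  # points where cuts were generated

for it in range(50):
    # Each cut is the supporting line t >= f(p) + f'(p)(x - p),
    # written as  f'(p) x - t <= f'(p) p - f(p)  for linprog.
    A = [[df(p), -1.0] for p in pts]
    b = [df(p) * p - f(p) for p in pts]
    res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b,
                  bounds=[(lo, hi), (None, None)])
    x_new, t = res.x                        # minimizer of the current model
    if f(x_new) - t < 1e-6:                 # model gap closed: done
        break
    pts.append(x_new)                       # add a new cut at x_new

print("minimizer ~", x_new, " minimum ~", f(x_new))
```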
Branch and bound methods
Branch and bound is an algorithm design paradigm for discrete and combinatorial optimization problems. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
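A minimal branch-and-bound sketch for continuous minimization is given below; it assumes a one-dimensional function with a known Lipschitz constant L (both illustrative choices), so that f(m) - L(b-a)/2 with m = (a+b)/2 lower-bounds f on [a, b], and intervals whose bound cannot beat the incumbent are pruned:

```python
# Branch and bound on an interval: best-first search ordered by lower bound.
import heapq, math

f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)   # multimodal test function
L = 4.34                                               # upper bound on |f'| here

a, b = 2.7, 7.5
best_x, best_f = (a + b) / 2.0, f((a + b) / 2.0)       # incumbent solution
heap = [(best_f - L * (b - a) / 2.0, a, b)]            # (lower bound, interval)

while heap:
    lb, a0, b0 = heapq.heappop(heap)
    if lb > best_f - 1e-6:                             # prune: cannot improve
        continue
    mid = (a0 + b0) / 2.0
    for lo, hi in ((a0, mid), (mid, b0)):              # branch: split interval
        m = (lo + hi) / 2.0
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm                     # update incumbent
        bound = fm - L * (hi - lo) / 2.0               # Lipschitz lower bound
        if bound < best_f - 1e-6:
            heapq.heappush(heap, (bound, lo, hi))

print("best value ~", best_f, "at x ~", best_x)        # ~ -1.9 near x ~ 5.15
```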
Interval methods
Interval arithmetic, interval mathematics, interval analysis, or interval computation is a method developed by mathematicians since the 1950s and 1960s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation, and thus to developing numerical methods that yield reliable results. Interval arithmetic helps find reliable and guaranteed solutions to equations and optimization problems.
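A toy version of the idea (ignoring the directed rounding a real interval library performs) is sketched below: evaluating an expression with interval operands yields a guaranteed, if sometimes loose, enclosure of its range over the input interval.

```python
# A minimal interval-arithmetic sketch: interval operands propagate through
# arithmetic, producing guaranteed enclosures of an expression's range.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
# Encloses (with some overestimation, the classic dependency problem)
# the true range [-1, 0] of x(x-2) over [0, 1]:
print(x * (x - Interval(2.0, 2.0)))
```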
Methods based on real algebraic geometry
Real algebra is the part of algebra which is relevant to real algebraic geometry. It is mostly concerned with the study of ordered fields and ordered rings and their applications to the study of positive polynomials and sums of squares of polynomials. It can be used in convex optimization.
Stochastic methods
Several exact or inexact Monte-Carlo-based algorithms exist:
Direct Monte-Carlo sampling
In this method, random simulations are used to find an approximate solution.
Example: The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. However, suppose that instead of minimizing the total distance traveled to visit each desired destination, we wanted to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain. As a result, to determine the optimal path we would want to use simulation optimization: first understand the range of potential times it could take to go from one point to another, and then optimize the travel decisions to identify the best path to follow, taking that uncertainty into account.
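For a continuous toy problem, direct Monte-Carlo sampling reduces to drawing random points in the feasible region and keeping the best. The sketch below (objective and search box are illustrative assumptions) applies this to Himmelblau's function, whose global minimum value is 0:

```python
# Direct Monte-Carlo sampling: uniform random search over the feasible box.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2   # Himmelblau

best_x, best_f = None, float("inf")
for _ in range(100_000):
    x = rng.uniform(-5.0, 5.0, size=2)     # uniform sample in [-5, 5]^2
    fx = f(x)
    if fx < best_f:
        best_x, best_f = x, fx             # keep the best point seen so far

print("best value ~", best_f, "at", best_x)   # true global minimum is 0
```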
Stochastic tunneling
Stochastic tunneling (STUN) is an approach to global optimization based on Monte Carlo sampling of the function to be minimized, in which the function is nonlinearly transformed to allow for easier tunneling among regions containing function minima. Easier tunneling allows for faster exploration of sample space and faster convergence to a good solution.
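A minimal sketch of the idea follows; it uses the commonly cited transformation $f_{\mathrm{STUN}}(x)=1-e^{-\gamma\,(f(x)-f_0)}$, where $f_0$ is the best value found so far, inside a plain Metropolis sampler (the objective, $\gamma$, temperature, and step size are illustrative assumptions):

```python
# Stochastic tunneling: Metropolis sampling of the transformed objective
# f_STUN(x) = 1 - exp(-gamma * (f(x) - f0)), which flattens the landscape
# above the current best value f0 and so eases barrier crossing.
import math, random

random.seed(1)
f = lambda x: x * x + 10.0 * math.sin(x)     # multimodal test function

gamma, beta, step = 1.0, 5.0, 0.5            # tunneling, inverse temp., proposal width
x = 8.0
f0 = f(x)                                    # best value seen so far
stun = lambda fx: 1.0 - math.exp(-gamma * (fx - f0))

for _ in range(20000):
    y = x + random.uniform(-step, step)      # random-walk proposal
    # Metropolis acceptance on the transformed landscape
    if random.random() < math.exp(-beta * (stun(f(y)) - stun(f(x)))):
        x = y
    if f(x) < f0:
        f0 = f(x)                            # lowering the reference re-flattens
                                             # everything above the new best

print("best value found ~", f0)
```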
Parallel tempering
Parallel tempering, also known as replica exchange MCMC sampling, is a simulation method aimed at improving the dynamic properties of Monte Carlo simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Swendsen, then extended by Geyer, and later developed further by, among others, Giorgio Parisi. Sugita and Okamoto formulated a molecular dynamics version of parallel tempering: this is usually known as replica-exchange molecular dynamics or REMD.
Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion, one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to the simulations at low temperatures and vice versa. This results in a very robust ensemble which is able to sample both low and high energy configurations. In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision.
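A minimal sketch of the scheme follows (the energy function, temperature ladder, and move sizes are illustrative assumptions): each replica performs ordinary Metropolis updates at its own temperature, and neighbouring replicas attempt an exchange accepted with probability $\min\{1,\,e^{(\beta_i-\beta_j)(E_i-E_j)}\}$.

```python
# Parallel tempering: N replicas at different temperatures with
# Metropolis moves within each replica and replica-exchange moves between
# neighbouring temperatures.
import math, random

random.seed(2)
E = lambda x: (x * x - 1.0)**2 + 0.3 * x     # asymmetric double-well "energy"

temps = [0.05, 0.2, 0.8, 3.0]                # temperature ladder (cold -> hot)
xs = [2.0 for _ in temps]                    # one replica per temperature

for sweep in range(5000):
    # Ordinary Metropolis update within each replica
    for i, T in enumerate(temps):
        y = xs[i] + random.gauss(0.0, 0.5)
        if random.random() < math.exp(min(0.0, -(E(y) - E(xs[i])) / T)):
            xs[i] = y
    # Replica-exchange move between neighbouring temperatures:
    # accept with probability min(1, exp((1/T_i - 1/T_j)(E_i - E_j)))
    for i in range(len(temps) - 1):
        d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (E(xs[i]) - E(xs[i + 1]))
        if random.random() < math.exp(min(0.0, d)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

print("coldest replica ends near a low-energy state:", xs[0], E(xs[0]))
```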
Heuristics and metaheuristics
Other approaches include heuristic strategies to explore the search space in a more or less intelligent way, including:
- Ant colony optimization
- Simulated annealing, a generic probabilistic metaheuristic (a minimal sketch appears after this list)
- Tabu search, an extension of local search capable of escaping from local minima
- Evolutionary algorithms
- Differential evolution, a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality
- Swarm-based optimization algorithms
- Memetic algorithms, combining global and local search strategies
- Reactive search optimization
- Graduated optimization, a technique that attempts to solve a difficult optimization problem by initially solving a greatly simplified problem, and progressively transforming that problem until it is equivalent to the difficult optimization problem.
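As noted above, here is a minimal simulated-annealing sketch (the objective, cooling schedule, and proposal width are illustrative assumptions): a Metropolis acceptance rule whose temperature is gradually decreased, so uphill moves become rarer as the search settles.

```python
# Simulated annealing: Metropolis acceptance with geometric cooling.
import math, random

random.seed(3)
f = lambda x: x * x + 10.0 * math.sin(x)     # multimodal test function

x, T = 8.0, 10.0
best_x, best_f = x, f(x)
while T > 1e-4:
    y = x + random.uniform(-1.0, 1.0)        # random neighbour
    # Downhill moves are always accepted; uphill moves with prob. e^{-dE/T}
    if random.random() < math.exp(min(0.0, -(f(y) - f(x)) / T)):
        x = y
    if f(x) < best_f:
        best_x, best_f = x, f(x)             # track the best state visited
    T *= 0.999                               # geometric cooling schedule

print("best ~", best_f, "at x ~", best_x)
```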
Response surface methodology-based approaches
- IOSO (Indirect Optimization based on Self-Organization)
- Bayesian optimization, a sequential design strategy for global optimization of black-box functions using Bayesian statistics (a minimal sketch appears below)
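To illustrate the sequential-design idea, the sketch below (a toy implementation under assumed kernel, jitter, and objective choices, not a reference implementation) fits a Gaussian-process surrogate with an RBF kernel to the evaluations made so far, and picks each new evaluation point by maximizing expected improvement on a grid:

```python
# Bayesian optimization in one dimension: GP surrogate + expected improvement.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
f = lambda x: np.sin(3.0 * x) + x**2 - 0.7 * x        # objective on [-2, 2]

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel with unit amplitude and length scale ell.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = list(rng.uniform(-2.0, 2.0, size=3))              # initial design points
Y = [f(x) for x in X]
grid = np.linspace(-2.0, 2.0, 400)

for _ in range(15):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-8 * np.eye(len(Xa))          # jitter for stability
    Kinv = np.linalg.inv(K)
    ks = rbf(grid, Xa)                                # cross-covariances
    mu = ks @ Kinv @ Ya                               # GP posterior mean
    var = np.clip(1.0 - np.sum((ks @ Kinv) * ks, axis=1), 1e-12, None)
    sd = np.sqrt(var)                                 # GP posterior std. dev.
    # Expected improvement over the best observation (minimization)
    imp = Ya.min() - mu
    z = imp / sd
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[int(np.argmax(ei))]                 # next evaluation point
    X.append(x_next); Y.append(f(x_next))

print("best ~", min(Y), "at x ~", X[int(np.argmin(Y))])
```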