Ant colony optimization algorithms


In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial Ants represent multi-agent methods inspired by the behavior of real ants.
The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of Artificial Ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. The burgeoning activity in this field has led to conferences dedicated solely to Artificial Ants, and to numerous commercial applications by specialized companies such as AntOptima.
Ant colony optimization is thus a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones that direct each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect.
This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and it constitutes a metaheuristic optimization technique. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result several algorithms have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms.

Overview

In the natural world, ants of some species wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep travelling at random, but instead to follow the trail, returning and reinforcing it if they eventually find food.
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over more frequently, and thus the pheromone density becomes higher on shorter paths than longer ones. Pheromone evaporation also has the advantage of avoiding the convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained. The influence of pheromone evaporation in real ant systems is unclear, but it is very important in artificial systems.
The overall result is that when one ant finds a good path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to many ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.

Ambient networks of intelligent objects

New concepts are required since “intelligence” is no longer centralized but can be found throughout all minuscule objects. Anthropocentric concepts have been known to lead to the production of IT systems in which data processing, control units and computing power are centralized. These centralized units have continually increased their performance and can be compared to the human brain. The model of the brain has become the ultimate vision of computers. Ambient networks of intelligent objects and, sooner or later, a new generation of information systems which are even more diffused and based on nanotechnology, will profoundly change this concept. Small devices that can be compared to insects do not possess high intelligence on their own. Indeed, their intelligence can be classed as fairly limited. It is, for example, impossible to integrate a high-performance calculator with the power to solve any kind of mathematical problem into a biochip that is implanted into the human body or integrated in an intelligent tag which is designed to trace commercial articles. However, once those objects are interconnected they exhibit a form of intelligence that can be compared to a colony of ants or bees. In the case of certain problems, this type of intelligence can be superior to the reasoning of a centralized system similar to the brain.
Nature offers several examples of how minuscule organisms, if they all follow the same basic rule, can create a form of collective intelligence on the macroscopic level. Colonies of social insects perfectly illustrate this model, which greatly differs from human societies. This model is based on the co-operation of independent units with simple and unpredictable behavior. They move through their surrounding area to carry out certain tasks and only possess a very limited amount of information to do so. A colony of ants, for example, exhibits numerous qualities that can also be applied to a network of ambient objects. Colonies of ants have a very high capacity to adapt themselves to changes in the environment, as well as great robustness in dealing with situations where one individual fails to carry out a given task. This kind of flexibility would also be very useful for mobile networks of objects which are perpetually developing. Parcels of information that move from a computer to a digital object behave in the same way as ants would do. They move through the network and pass from one node to the next with the objective of arriving at their final destination as quickly as possible.

Artificial pheromone system

Pheromone-based communication is one of the most effective ways of communication widely observed in nature. Pheromones are used by social insects such as bees, ants and termites, both for inter-agent and agent-swarm communication. Due to their feasibility, artificial pheromones have been adopted in multi-robot and swarm robotic systems. Pheromone-based communication has been implemented by various means, both chemical and physical. However, those implementations were not able to replicate all the aspects of pheromones as seen in nature.
Using projected light was presented in a 2007 IEEE paper by Garnier, Simon, et al. as an experimental setup to study pheromone-based communication with micro autonomous robots. Another study proposed a novel pheromone communication method, COSΦ, for a swarm robotic system based on precise and fast visual localization.
The system allows simulation of a virtually unlimited number of different pheromones and presents the result of their interaction as a gray-scale image on a horizontal LCD screen that the robots move on. To demonstrate the pheromone communication method, the Colias autonomous micro robot was deployed as the swarm robotic platform.

Algorithm and formulae

In the ant colony optimization algorithms, an artificial ant is a simple computational agent that searches for good solutions to a given optimization problem. To apply an ant colony algorithm, the optimization problem needs to be converted into the problem of finding the shortest path on a weighted graph. In the first step of each iteration, each ant stochastically constructs a solution, i.e. the order in which the edges in the graph should be followed. In the second step, the paths found by the different ants are compared. The last step consists of updating the pheromone levels on each edge.
procedure ACO_MetaHeuristic is
    while not terminated do
        generateSolutions()
        daemonActions()
        pheromoneUpdate()
    end while
end procedure
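
The three steps above can be sketched as a generic loop. The following Python skeleton is only an illustration of the control flow, not a reference implementation; the problem object and its methods (initial_pheromone, construct, cost, update_pheromone) are hypothetical placeholders that would be defined for a concrete problem.

    import random

    def aco_metaheuristic(problem, n_ants=10, n_iterations=100):
        pheromone = problem.initial_pheromone()      # one value per decision, e.g. per edge
        best = None
        for _ in range(n_iterations):
            # 1. Each ant stochastically constructs a candidate solution.
            solutions = [problem.construct(pheromone, random) for _ in range(n_ants)]
            # 2. Daemon actions: centralized steps such as local search or
            #    keeping track of the best solution found so far.
            candidate = min(solutions, key=problem.cost)
            if best is None or problem.cost(candidate) < problem.cost(best):
                best = candidate
            # 3. Evaporate and deposit pheromone according to solution quality.
            pheromone = problem.update_pheromone(pheromone, solutions)
        return best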

Edge selection

Each ant needs to construct a solution to move through the graph. To select the next edge in its tour, an ant will consider the length of each edge available from its current position, as well as the corresponding pheromone level. At each step of the algorithm, each ant moves from a state x to a state y, corresponding to a more complete intermediate solution. Thus, each ant k computes a set of feasible expansions to its current state in each iteration, and moves to one of these in probability. For ant k, the probability p^k_xy of moving from state x to state y depends on the combination of two values: the attractiveness η_xy of the move, as computed by some heuristic indicating the a priori desirability of that move, and the trail level τ_xy of the move, indicating how proficient it has been in the past to make that particular move. The trail level represents an a posteriori indication of the desirability of that move.
In general, the k-th ant moves from state x to state y with probability

p_{xy}^{k} = \frac{\tau_{xy}^{\alpha} \, \eta_{xy}^{\beta}}{\sum_{z \in \mathrm{allowed}_x} \tau_{xz}^{\alpha} \, \eta_{xz}^{\beta}}

where τ_xy is the amount of pheromone deposited for the transition from state x to y, α ≥ 0 is a parameter to control the influence of τ_xy, η_xy is the desirability of state transition xy (a priori knowledge, typically 1/d_xy, where d_xy is the distance) and β ≥ 1 is a parameter to control the influence of η_xy. τ_xz and η_xz represent the trail level and attractiveness for the other possible state transitions.
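
As an illustrative sketch of this rule, the short Python routine below samples the next state; tau, eta, alpha and beta mirror the symbols above, while the dictionary-keyed-by-edge representation is just an assumption made for the example.

    import random

    def choose_next_state(current, allowed, tau, eta, alpha=1.0, beta=2.0):
        """Sample the next state y with probability proportional to
        tau[(current, y)]**alpha * eta[(current, y)]**beta."""
        weights = [(tau[(current, y)] ** alpha) * (eta[(current, y)] ** beta)
                   for y in allowed]
        return random.choices(allowed, weights=weights, k=1)[0]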

Pheromone update

Trails are usually updated when all ants have completed their solution, increasing or decreasing the level of trails corresponding to moves that were part of "good" or "bad" solutions, respectively. An example of a global pheromone updating rule is

\tau_{xy} \leftarrow (1 - \rho)\,\tau_{xy} + \sum_{k=1}^{m} \Delta\tau_{xy}^{k}

where τ_xy is the amount of pheromone deposited for a state transition xy, ρ is the pheromone evaporation coefficient, m is the number of ants and Δτ^k_xy is the amount of pheromone deposited by the k-th ant, typically given for a TSP problem (with moves corresponding to graph edges) by

\Delta\tau_{xy}^{k} =
\begin{cases}
  Q / L_{k} & \text{if ant } k \text{ uses edge } xy \text{ in its tour} \\
  0 & \text{otherwise}
\end{cases}

where L_k is the cost of the k-th ant's tour (typically length) and Q is a constant.
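
A corresponding sketch in Python, under the same assumed dictionary-based pheromone and list-of-cities tour representation used above:

    def update_pheromone(tau, tours, costs, rho=0.5, Q=1.0):
        """Evaporate all trails, then deposit Q / L_k on every edge of each ant's tour."""
        for edge in tau:
            tau[edge] *= (1.0 - rho)                          # evaporation
        for tour, cost in zip(tours, costs):
            deposit = Q / cost
            for x, y in zip(tour, tour[1:] + tour[:1]):       # edges of the closed tour
                tau[(x, y)] = tau.get((x, y), 0.0) + deposit
        return tau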

Common extensions

Here are some of the most popular variations of ACO algorithms.

Ant System (AS)

The Ant System is the first ACO algorithm. This algorithm corresponds to the one presented above. It was developed by Dorigo.

Ant Colony System (ACS)

In the Ant Colony System algorithm, the original Ant System was modified in three aspects: the edge selection is biased towards exploitation (i.e. favoring the selection of short edges with a large amount of pheromone); while building a solution, ants change the pheromone level of the edges they are selecting by applying a local pheromone updating rule; and at the end of each iteration, only the best ant is allowed to update the trails by applying a modified global pheromone updating rule.
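
A minimal sketch of the ACS-specific rules, under the same assumed data structures as above; q0, phi and tau0 denote the usual ACS parameters (exploitation probability, local evaporation rate and initial pheromone level), and the concrete values here are arbitrary.

    import random

    def acs_choose(current, allowed, tau, eta, beta=2.0, q0=0.9):
        """Pseudo-random proportional rule: with probability q0 exploit the best
        edge, otherwise fall back to the usual probabilistic selection."""
        score = lambda y: tau[(current, y)] * (eta[(current, y)] ** beta)
        if random.random() < q0:
            return max(allowed, key=score)                       # exploitation
        weights = [score(y) for y in allowed]
        return random.choices(allowed, weights=weights, k=1)[0]  # biased exploration

    def acs_local_update(tau, edge, phi=0.1, tau0=0.01):
        """Applied to an edge as soon as an ant has crossed it."""
        tau[edge] = (1.0 - phi) * tau[edge] + phi * tau0

    def acs_global_update(tau, best_tour, best_cost, rho=0.1):
        """Only the edges of the best tour receive pheromone at the end of an iteration."""
        for x, y in zip(best_tour, best_tour[1:] + best_tour[:1]):
            tau[(x, y)] = (1.0 - rho) * tau[(x, y)] + rho * (1.0 / best_cost)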

Elitist Ant System

In this algorithm, the global best solution deposits pheromone on its trail after every iteration, along with all the other ants.

Max-Min Ant System (MMAS)

This algorithm controls the maximum and minimum pheromone amounts on each trail. Only the global-best tour or the iteration-best tour is allowed to add pheromone to its trail. To avoid stagnation of the search algorithm, the range of possible pheromone amounts on each trail is limited to an interval [τmin, τmax]. All edges are initialized to τmax to force a higher exploration of solutions. The trails are reinitialized to τmax when nearing stagnation.
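
The defining clamping step is small enough to show directly; the sketch below assumes the same dictionary-based pheromone structure, with tau_min and tau_max chosen per problem.

    def mmas_clamp(tau, tau_min, tau_max):
        """Keep every trail inside [tau_min, tau_max] after each pheromone update."""
        for edge, value in tau.items():
            tau[edge] = min(tau_max, max(tau_min, value))

    def mmas_reinitialize(tau, tau_max):
        """Reset all trails to tau_max when the search is close to stagnation."""
        for edge in tau:
            tau[edge] = tau_max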

Rank-based Ant System (ASrank)

All solutions are ranked according to their length. Only a fixed number of the best ants in each iteration are allowed to update their trails. The amount of pheromone deposited is weighted for each solution, such that solutions with shorter paths deposit more pheromone than solutions with longer paths.
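
One way to realize this rank weighting is sketched below, under the same assumptions as the earlier snippets; w (the number of ranked ants that may deposit) and the linear rank weights are illustrative choices, since the exact weighting varies between formulations.

    def rank_based_update(tau, tours, costs, w=6, rho=0.5):
        """Evaporate, then let only the w best-ranked ants deposit pheromone,
        with better-ranked (shorter) tours depositing more."""
        ranked = sorted(zip(tours, costs), key=lambda tc: tc[1])[:w]
        for edge in tau:
            tau[edge] *= (1.0 - rho)
        for rank, (tour, cost) in enumerate(ranked, start=1):
            weight = w - rank + 1                     # rank 1 receives the largest weight
            for x, y in zip(tour, tour[1:] + tour[:1]):
                tau[(x, y)] = tau.get((x, y), 0.0) + weight / cost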

Continuous Orthogonal Ant Colony (COAC)

The pheromone deposit mechanism of COAC enables ants to search for solutions collaboratively and effectively. By using an orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently, with enhanced global search capability and accuracy. The orthogonal design method and the adaptive radius adjustment method can also be extended to other optimization algorithms, delivering wider advantages in solving practical problems.

Recursive Ant Colony Optimization

It is a recursive form of ant system which divides the whole search domain into several sub-domains and solves the objective on these subdomains. The results from all the subdomains are compared and the best few of them are promoted for the next level. The subdomains corresponding to the selected results are further subdivided and the process is repeated until an output of desired precision is obtained. This method has been tested on ill-posed geophysical inversion problems and works well.

Convergence

For some versions of the algorithm, it is possible to prove that it is convergent. The first proof of convergence for an ant colony algorithm was given in 2000, for the graph-based ant system algorithm, and later for the ACS and MMAS algorithms. Like most metaheuristics, it is very difficult to estimate the theoretical speed of convergence. A performance analysis of a continuous ant colony algorithm with respect to its various parameters showed that its performance and rate of convergence are sensitive to the chosen parameter values, and especially to the value of the pheromone evaporation rate. In 2004, Zlochin and his colleagues showed that ACO-type algorithms could be assimilated to methods of stochastic gradient descent, the cross-entropy method and estimation of distribution algorithms. They proposed these metaheuristics as instances of a "model-based search" framework.

Applications

Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding or routing vehicles, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multi-target problems and parallel implementations.
It has also been used to produce near-optimal solutions to the travelling salesman problem. Ant colony algorithms have an advantage over simulated annealing and genetic algorithm approaches to similar problems when the graph may change dynamically: the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.
The first ACO algorithm was called the ant system and it aimed to solve the travelling salesman problem, in which the goal is to find the shortest round-trip linking a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips along the cities. At each stage, the ant chooses to move from one city to another according to the following rules (a compact implementation sketch follows the list):
  1. It must visit each city exactly once;
  2. A distant city has less chance of being chosen;
  3. The more intense the pheromone trail laid out on an edge between two cities, the greater the probability that that edge will be chosen;
  4. Having completed its journey, the ant deposits more pheromones on all edges it traversed, if the journey is short;
  5. After each iteration, trails of pheromones evaporate.
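
The following self-contained Python sketch puts these rules together for a small, symmetric travelling salesman instance. It follows the Ant System scheme described earlier; the parameter values (alpha, beta, rho, Q, number of ants and iterations) and the random city layout are arbitrary illustrative choices, not values prescribed by the algorithm.

    import math
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def ant_system_tsp(dist, n_ants=20, n_iterations=200,
                       alpha=1.0, beta=3.0, rho=0.5, Q=1.0):
        n = len(dist)
        # Heuristic desirability: inverse of the distance between two cities (rule 2).
        eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
        tau = [[1.0] * n for _ in range(n)]            # initial pheromone on every edge
        best_tour, best_len = None, float("inf")
        for _ in range(n_iterations):
            tours = []
            for _ in range(n_ants):
                start = random.randrange(n)
                tour, unvisited = [start], set(range(n)) - {start}
                while unvisited:                       # rule 1: visit each city exactly once
                    x = tour[-1]
                    choices = list(unvisited)
                    # rule 3: stronger pheromone means a higher selection probability
                    weights = [(tau[x][y] ** alpha) * (eta[x][y] ** beta) for y in choices]
                    nxt = random.choices(choices, weights=weights, k=1)[0]
                    tour.append(nxt)
                    unvisited.remove(nxt)
                tours.append(tour)
            for i in range(n):                         # rule 5: evaporation
                for j in range(n):
                    tau[i][j] *= (1.0 - rho)
            for tour in tours:                         # rule 4: shorter tours deposit more
                length = tour_length(tour, dist)
                if length < best_len:
                    best_tour, best_len = tour, length
                for i in range(n):
                    x, y = tour[i], tour[(i + 1) % n]
                    tau[x][y] += Q / length
                    tau[y][x] += Q / length            # symmetric problem
        return best_tour, best_len

    if __name__ == "__main__":
        random.seed(0)
        cities = [(random.random(), random.random()) for _ in range(15)]
        dist = [[math.hypot(a[0] - b[0], a[1] - b[1]) for b in cities] for a in cities]
        tour, length = ant_system_tsp(dist)
        print(round(length, 3), tour)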

Antennas optimization and synthesis

Ant colony algorithms can be used to optimize the shape of antennas; examples include the ant-colony-based synthesis of RFID-tag antennas and of 10×10 loopback and unloopback vibrators.

Image processing

The ACO algorithm is used in image processing for image edge detection and edge linking.
The graph here is the 2-D image, and the ants traverse from one pixel to another, depositing pheromone as they go. The movement of ants from one pixel to another is directed by the local variation of the image's intensity values. This movement causes the highest density of pheromone to be deposited at the edges.
The following are the steps involved in edge detection using ACO:
Step 1: Initialization:
Randomly place K ants on the image I of size M1 × M2. The pheromone matrix τ is initialized with a random value. The major challenge in the initialization process is determining the heuristic matrix η.
There are various methods to determine the heuristic matrix. For the example below, the heuristic value was calculated from the local statistics at the pixel position (i, j):

\eta_{i,j} = \frac{V_c(I_{i,j})}{Z}, \qquad Z = \sum_{i=1}^{M_1} \sum_{j=1}^{M_2} V_c(I_{i,j})

where I is the image of size M1 × M2, Z is a normalization factor and V_c(I_{i,j}) measures the intensity variation in the neighbourhood of pixel (i, j). V_c can be calculated using one of several alternative shape functions f(·) applied to the local intensity differences; the parameter λ in each of these functions adjusts the function's respective shape.
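
As one hedged illustration of such a heuristic matrix, the sketch below takes V_c to be the magnitude of the local intensity gradient; this particular choice of V_c is only an assumption for the example, since several shape functions can be used, as noted above.

    import numpy as np

    def heuristic_matrix(image):
        """eta[i, j] = V_c(I[i, j]) / Z, with V_c taken here as the magnitude
        of the local intensity variation (an assumed, simple choice of V_c)."""
        img = image.astype(float)
        gy, gx = np.gradient(img)            # local intensity differences along rows/columns
        vc = np.abs(gx) + np.abs(gy)
        Z = vc.sum()                         # normalization factor
        return vc / Z if Z > 0 else vc
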
Step 2: Construction process:
The ant's movement is based on 4-connected or 8-connected pixels. The probability with which an ant moves from one pixel to a neighbouring pixel is given by the transition probability equation above, using the pheromone and heuristic values of the candidate pixels.
Step 3 and Step 5: Update process:
The pheromone matrix is updated twice: in step 3 the trail of each ant is updated, whereas in step 5 the trails evaporate, as given by the equation below:

\tau_{new} \leftarrow (1 - \psi)\,\tau_{old} + \psi\,\tau_{0}

where ψ is the pheromone decay coefficient (0 < ψ < 1) and τ0 is the initial pheromone value.
Step 7: Decision process:
Once the K ants have moved a fixed distance L for N iterations, the decision of whether a pixel is an edge or not is based on a threshold T applied to the pheromone matrix τ. The threshold for the example below was calculated based on Otsu's method.
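
The decision step thus reduces to thresholding the final pheromone matrix. A minimal sketch, assuming the pheromone values have been accumulated in a NumPy array tau and using scikit-image's Otsu implementation:

    import numpy as np
    from skimage.filters import threshold_otsu

    def pheromone_to_edges(tau):
        """Classify a pixel as an edge when its pheromone exceeds Otsu's threshold."""
        T = threshold_otsu(tau)
        return tau >= T                      # boolean edge map

    # Example usage with a dummy pheromone matrix:
    # edge_map = pheromone_to_edges(np.random.rand(128, 128))
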
Edge detection results using ACO:
The resulting edge images differ depending on which of the shape functions f(·) described above is used to compute V_c.
With an ACO algorithm, the shortest path in a graph between two points A and B is built from a combination of several paths. It is not easy to give a precise definition of what algorithm is or is not an ant colony, because the definition may vary according to the authors and uses. Broadly speaking, ant colony algorithms are regarded as populated metaheuristics in which each solution is represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between each iteration. In their versions for combinatorial problems, they use an iterative construction of solutions.

According to some authors, the thing which distinguishes ACO algorithms from other relatives is precisely their constructive aspect. In combinatorial problems, it is possible that the best solution is eventually found even though no single ant proves effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists.

The collective behaviour of social insects remains a source of inspiration for researchers. The wide variety of algorithms seeking self-organization in biological systems has led to the concept of "swarm intelligence", which is a very general framework into which ant colony algorithms fit.

Stigmergy algorithms

There is in practice a large number of algorithms claiming to be "ant colonies", without always sharing the general framework of optimization by canonical ant colonies. In practice, the use of an exchange of information between ants via the environment (a principle called stigmergy) is deemed enough for an algorithm to belong to the class of ant colony algorithms. This principle has led some authors to create the term "value" to organize methods and behaviors based on foraging, larval sorting, division of labour and cooperative transportation.

Related methods

Early work on ant-inspired algorithms is due to Frans Moyson and Bernard Manderick. Pioneers of the field include Marco Dorigo and Luca Maria Gambardella.