Consistent hashing


In computer science, consistent hashing is a special kind of hashing technique such that when a hash table is resized, only n/m keys need to be remapped on average, where n is the number of keys and m is the number of slots.
In contrast, in most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped because the mapping between the keys and the slots is defined by a modular operation. Consistent hashing is a particular case of rendezvous hashing, which has a conceptually simpler algorithm, and was first described in 1996. Consistent hashing first appeared in 1997, and uses a different algorithm.

History

The term "consistent hashing" was introduced by Karger et al. at MIT for use in distributed caching. This academic paper from 1997 introduced the term "consistent hashing" as a way of distributing requests among a changing population of web servers. Each slot is then represented by a node in a distributed system. The addition and removal of nodes only requires items to be re-shuffled when the number of slots/nodes change. The authors mention linear hashing and its ability to handle sequential node addition and removal, while consistent hashing allows buckets to be added and removed in arbitrary order.
Teradata used this technique in their distributed database, released in 1986, although they did not use this term. Teradata still uses the concept of a hash table to fulfill exactly this purpose. Akamai Technologies was founded in 1998 by the scientists Daniel Lewin and F. Thomson Leighton to apply this algorithm, which gave birth to the content delivery network industry.
Consistent hashing has also been used to reduce the impact of partial system failures in large web applications to provide robust caching without incurring the system-wide fallout of a failure.
Consistent hashing is also the cornerstone of distributed hash tables, which employ hash values to partition a keyspace across a distributed set of nodes, then construct an overlay network of connected nodes that provide efficient node retrieval by key.
Rendezvous hashing, designed in 1996, is a simpler and more general technique. It achieves the goals of consistent hashing using the very different highest random weight algorithm.

Motivations

While running collections of caching machines, some limitations are experienced. A common way of load balancing n cache machines is to put object o in cache machine number hash(o) mod n. But this will not work if a cache machine is added or removed, because n changes and every object is hashed to a new location. This can be disastrous, since the originating content servers are flooded with requests from the cache machines. Hence consistent hashing is needed to avoid swamping of servers.
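As a rough illustration, the following is a minimal Python sketch (the object names, the number of machines, and the use of SHA-256 are arbitrary choices for the example, not part of any particular system) showing how placing objects at hash(o) mod n forces almost every object onto a new machine when a tenth machine is added to nine:

```python
import hashlib

def mod_placement(obj: str, n: int) -> int:
    """Place object obj on cache machine number hash(obj) mod n."""
    digest = hashlib.sha256(obj.encode()).hexdigest()
    return int(digest, 16) % n

objects = [f"object-{i}" for i in range(10_000)]

before = {o: mod_placement(o, 9) for o in objects}   # 9 cache machines
after = {o: mod_placement(o, 10) for o in objects}   # a 10th machine is added

moved = sum(1 for o in objects if before[o] != after[o])
print(f"{moved / len(objects):.0%} of objects changed machines")  # roughly 90%
```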
Consistent hashing maps objects to the same cache machine, as far as possible. This means that when a cache machine is added, it takes its share of objects from all the other cache machines, and when it is removed, its objects are shared among the remaining machines.
The main idea behind the consistent hashing algorithm is to associate each cache with one or more hash value intervals where the interval boundaries are determined by calculating the hash of each cache identifier. If the cache is removed, its interval is taken over by a cache with an adjacent interval while all the remaining caches are unchanged.

Technique

Consistent hashing is based on mapping each object to a point on a circle. The system maps each available machine to many pseudo-randomly distributed points on the same circle.
To find where an object should be placed, the system finds the location of that object's key on the circle; then walks around the circle until falling into the first bucket it encounters. The result is that each bucket contains all the resources located between each one of its points and the previous points that belong to other buckets.
If a bucket becomes unavailable, then the points it maps to will be removed. Requests for resources that would have mapped to each of those points now map to the next highest points. Since each bucket is associated with many pseudo-randomly distributed points, the resources that were held by that bucket will now map to many different buckets. The items that mapped to the lost bucket must be redistributed among the remaining ones, but values mapping to other buckets will continue to map to those same buckets, and therefore do not need to be moved.
A similar process occurs when a bucket is added. By adding new bucket points, we make any resources between those and the points corresponding to the next smaller angles map to the new bucket. These resources will no longer be associated with the previous buckets, and any value previously stored there will not be found by the selection method described above.
The portion of the keys associated with each bucket can be altered by altering the number of angles that bucket maps to.
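A minimal Python sketch of this technique is shown below (the class and method names, the number of points per bucket, and the use of SHA-256 are illustrative assumptions rather than a reference implementation). Each bucket is hashed to many points on the circle, and a key is assigned to the bucket owning the first point found walking clockwise from the key's position:

```python
import bisect
import hashlib

def _point(label: str) -> int:
    """Hash a string to a position ("angle") on a circle of 2**32 points."""
    return int(hashlib.sha256(label.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    def __init__(self, points_per_bucket: int = 100):
        self.points_per_bucket = points_per_bucket
        self._points = []    # sorted positions on the circle
        self._owners = {}    # position -> bucket name

    def add_bucket(self, bucket: str) -> None:
        # Map the bucket to many pseudo-randomly distributed points.
        for i in range(self.points_per_bucket):
            p = _point(f"{bucket}#{i}")
            bisect.insort(self._points, p)
            self._owners[p] = bucket

    def remove_bucket(self, bucket: str) -> None:
        # Keys that mapped to these points fall through to the next points,
        # which belong to many different remaining buckets.
        for i in range(self.points_per_bucket):
            p = _point(f"{bucket}#{i}")
            self._points.remove(p)
            del self._owners[p]

    def get_bucket(self, key: str) -> str:
        # Walk clockwise from the key's position to the first bucket point,
        # wrapping around to the start of the circle if necessary.
        if not self._points:
            raise LookupError("no buckets on the ring")
        i = bisect.bisect_right(self._points, _point(key)) % len(self._points)
        return self._owners[self._points[i]]
```

In this sketch, giving each bucket many points (often called virtual nodes) is what spreads a removed bucket's keys across many remaining buckets and evens out the share of the circle each bucket owns.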

Comparison with Rendezvous Hashing and other alternatives

Rendezvous hashing, designed in 1996, is a simpler and more general technique, and permits fully distributed agreement on a set of k options out of a possible set of n options. It can in fact be shown that consistent hashing is a special case of rendezvous hashing. Because of its simplicity and generality, rendezvous hashing is now being used in place of consistent hashing in many applications.
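As a comparison point, the following is a minimal Python sketch of the highest random weight idea (the function name and the use of SHA-256 are illustrative assumptions): each node's score is a hash of the key combined with the node's identifier, and the key is assigned to the highest-scoring node, so removing a node only reassigns the keys for which that node scored highest.

```python
import hashlib

def hrw_bucket(key: str, nodes: list[str]) -> str:
    """Rendezvous (highest random weight) hashing: return the node whose
    combined hash with the key is the largest."""
    def score(node: str) -> int:
        return int(hashlib.sha256(f"{node}:{key}".encode()).hexdigest(), 16)
    return max(nodes, key=score)

# Example: pick one of three cache machines for an object.
print(hrw_bucket("some-object", ["cache-a", "cache-b", "cache-c"]))
```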
If key values will always increase monotonically, an alternative approach using a hash table with monotonic keys may be more suitable than consistent hashing.

Complexity

The O(K/n) is an average cost for the redistribution of keys, where K is the number of keys and n is the number of nodes; the O(log n) lookup complexity for consistent hashing comes from the fact that a binary search among the node angles is required to find the next node on the ring.
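For example, with K = 1,000,000 keys spread over n = 10 nodes, removing a single node moves only about K/n = 100,000 keys on average, rather than nearly all of them as with modulo-based placement, while each lookup performs a binary search over the sorted list of node angles.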

Examples

Known examples of consistent hashing use include: