Array Based Queuing Locks


An array-based queuing lock (ABQL) is an advanced lock algorithm in which each thread spins on a unique memory location, providing fairness of lock acquisition coupled with improved scalability.

Overview

Synchronization is a major issue in the design and programming of shared-memory multiprocessors. A common problem with lock implementations is high network contention caused by processors spinning on a shared synchronization flag or memory location. As the number of contending processors grows, the scalability of such locks degrades significantly.
The array-based queuing lock is an extension of the ticket lock algorithm which ensures that, on a lock release, only one processor attempts to acquire the lock, decreasing the number of cache misses. This effect is achieved by having each processor spin on a unique memory location. An analogy often used to explain the lock mechanism is a relay race, in which each athlete passes the baton to the next athlete in the queue, so that only one athlete holds the baton at a time.
ABQL also guarantees fairness in lock acquisition by using a first-in, first-out (FIFO) queue-based mechanism. Additionally, the amount of invalidation traffic is significantly lower than in ticket-based lock implementations, since only one processor incurs a cache miss on a lock release.
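For comparison, a minimal C11 sketch of a conventional ticket lock is shown below (illustrative only; the names ticket_lock_t, ticket_acquire and ticket_release are not taken from any particular library). Every waiting processor spins on the same now_serving counter, so each release invalidates the cached copy held by every spinner; ABQL avoids this hot spot by giving each waiter its own array element, as described in the next section.

#include <stdatomic.h>

/* Illustrative ticket lock: all waiters poll the single, shared
 * now_serving counter, so every release invalidates every spinner. */
typedef struct {
    atomic_uint next_ticket;   /* next ticket number to hand out */
    atomic_uint now_serving;   /* ticket currently allowed to run */
} ticket_lock_t;

void ticket_init(ticket_lock_t *l) {
    atomic_init(&l->next_ticket, 0);
    atomic_init(&l->now_serving, 0);
}

void ticket_acquire(ticket_lock_t *l) {
    unsigned my_ticket = atomic_fetch_add(&l->next_ticket, 1);
    while (atomic_load(&l->now_serving) != my_ticket)
        ;                      /* spin on the shared location */
}

void ticket_release(ticket_lock_t *l) {
    atomic_fetch_add(&l->now_serving, 1);  /* invalidates the copy held by every spinner */
}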

Implementation

The foremost requirement of an array-based queuing lock implementation is that all threads spin on unique memory locations. This is achieved with an array whose length equals the number of threads contending for the lock. All elements of the array are initialized to 0, except the first element, which takes the value 1, ensuring successful lock acquisition by the first thread in the queue. On a lock release, the lock is passed to the next thread in the queue by setting the next element of the array to 1. Requests are granted to threads in FIFO order.
The lock is implemented with three operations: ABQL_init, ABQL_acquire and ABQL_release. A code example is listed below.
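The following is a minimal C11 sketch of the three operations, assuming a fixed number of contenders MAX_THREADS (an illustrative constant) and using atomic_fetch_add from <stdatomic.h> as the fetch_and_inc primitive; the variables next_ticket, can_serve and my_ticket are those described in the text that follows.

#include <stdatomic.h>

#define MAX_THREADS 4                 /* illustrative: number of threads contending for the lock */

atomic_int next_ticket;               /* next ticket number to hand out */
atomic_int can_serve[MAX_THREADS];    /* one flag per waiter; exactly one element is 1 */
_Thread_local int my_ticket;          /* array slot (ticket) owned by the calling thread */

void ABQL_init(void)
{
    atomic_init(&next_ticket, 0);
    for (int i = 1; i < MAX_THREADS; i++)
        atomic_init(&can_serve[i], 0);
    atomic_init(&can_serve[0], 1);    /* first thread in the queue acquires immediately */
}

void ABQL_acquire(void)
{
    /* fetch_and_inc: atomically obtain a unique ticket and advance the counter;
     * the modulo lets ticket numbers wrap onto the fixed-size array */
    my_ticket = atomic_fetch_add(&next_ticket, 1) % MAX_THREADS;
    while (atomic_load(&can_serve[my_ticket]) != 1)
        ;                             /* spin on a unique memory location */
}

void ABQL_release(void)
{
    atomic_store(&can_serve[(my_ticket + 1) % MAX_THREADS], 1); /* pass the lock to the next thread */
    atomic_store(&can_serve[my_ticket], 0);                     /* reset own slot for later reuse */
}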

To implement ABQL as in the code above, three variables are introduced: can_serve, next_ticket and my_ticket. The role of each is described below.
In the initialization method (ABQL_init), the variable next_ticket is initialized to 0, and all elements of the can_serve array except the first are initialized to 0. Initializing the first element of can_serve to 1 ensures successful lock acquisition by the first thread in the queue.
The acquire method uses an atomic fetch_and_inc operation to obtain the next available ticket number (stored in my_ticket), which determines the array location the new thread will spin on. Each thread in the queue spins on its own location until the value of can_serve[my_ticket] is set to 1 by the previous lock holder. On acquiring the lock, the thread enters the critical section of the code.
On releasing the lock, a thread passes control to the next thread in the queue by setting the next element of can_serve to 1, and resets its own element to 0 so that the slot can be reused. The next thread, which was waiting to acquire the lock, can now do so successfully.
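For illustration, a thread uses these operations by bracketing its critical section with ABQL_acquire and ABQL_release. The short example below is an assumption-laden sketch that reuses the functions and the MAX_THREADS constant from the code above, together with POSIX threads, to increment a shared counter under the lock.

#include <pthread.h>
#include <stdio.h>

/* Assumes ABQL_init, ABQL_acquire, ABQL_release and MAX_THREADS from the sketch above are in scope. */

long shared_counter = 0;               /* state protected by the lock */

void *worker(void *arg)
{
    (void)arg;
    ABQL_acquire();                    /* spin on this thread's own can_serve slot */
    shared_counter++;                  /* critical section */
    ABQL_release();                    /* hand the lock to the next thread in FIFO order */
    return NULL;
}

int main(void)
{
    pthread_t t[MAX_THREADS];
    ABQL_init();                       /* must run before any thread contends for the lock */
    for (int i = 0; i < MAX_THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < MAX_THREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", shared_counter);
    return 0;
}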
The working of ABQL is depicted in the table below, assuming four processors (P1 to P4) contending to enter the critical section and assuming that each thread enters the critical section only once.
Execution Step                         | next_ticket | can_serve    | my_ticket (P1, P2, P3, P4) | Comments
initially (after ABQL_init)            | 0           | [1, 0, 0, 0] | 0, 0, 0, 0                 | all variables are 0 except can_serve[0]
P1: fetch_and_inc                      | 1           | [1, 0, 0, 0] | 0, 0, 0, 0                 | P1 attempts and successfully acquires the lock
P2: fetch_and_inc                      | 2           | [1, 0, 0, 0] | 0, 1, 0, 0                 | P2 attempts to acquire the lock
P3: fetch_and_inc                      | 3           | [1, 0, 0, 0] | 0, 1, 2, 0                 | P3 attempts to acquire the lock
P4: fetch_and_inc                      | 4           | [1, 0, 0, 0] | 0, 1, 2, 3                 | P4 attempts to acquire the lock
P1: can_serve[1] = 1; can_serve[0] = 0 | 4           | [0, 1, 0, 0] | 0, 1, 2, 3                 | P1 releases the lock and P2 successfully acquires it
P2: can_serve[2] = 1; can_serve[1] = 0 | 4           | [0, 0, 1, 0] | 0, 1, 2, 3                 | P2 releases the lock and P3 successfully acquires it
P3: can_serve[3] = 1; can_serve[2] = 0 | 4           | [0, 0, 0, 1] | 0, 1, 2, 3                 | P3 releases the lock and P4 successfully acquires it
P4: can_serve[3] = 0                   | 4           | [0, 0, 0, 0] | 0, 1, 2, 3                 | P4 releases the lock
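The hand-off sequence in the table can be replayed with the short, single-threaded C program below (purely illustrative; it mimics the table's steps rather than running concurrent threads), which prints the can_serve array after each release.

#include <stdio.h>

#define N 4

static void print_state(const char *step, int next_ticket, const int can_serve[N])
{
    printf("%-24s next_ticket=%d can_serve=[%d, %d, %d, %d]\n",
           step, next_ticket, can_serve[0], can_serve[1], can_serve[2], can_serve[3]);
}

int main(void)
{
    int can_serve[N] = {1, 0, 0, 0};   /* state after ABQL_init */
    int next_ticket = 0;
    int my_ticket[N];

    /* Each processor performs fetch_and_inc once to obtain its ticket. */
    for (int p = 0; p < N; p++)
        my_ticket[p] = next_ticket++;
    print_state("after all fetch_and_inc", next_ticket, can_serve);

    /* Releases happen in FIFO order of the tickets, as in the table. */
    for (int p = 0; p < N; p++) {
        if (p + 1 < N)
            can_serve[my_ticket[p + 1]] = 1;   /* pass the lock to the next waiter */
        can_serve[my_ticket[p]] = 0;           /* reset own slot */
        char step[32];
        snprintf(step, sizeof step, "P%d releases", p + 1);
        print_state(step, next_ticket, can_serve);
    }
    return 0;
}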

Performance metrics

The following performance criteria can be used to analyse the lock implementations: