Selection sort


In computer science, selection sort is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.
The algorithm divides the input list into two parts: a sorted sublist of items which is built up from left to right at the front of the list and a sublist of the remaining unsorted items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest element in the unsorted sublist, exchanging it with the leftmost unsorted element, and moving the sublist boundaries one element to the right.
The time efficiency of selection sort is quadratic, so there are a number of sorting techniques which have better time complexity than selection sort. One thing which distinguishes selection sort from other sorting algorithms is that it makes the minimum possible number of swaps, n − 1 in the worst case.

Example

Here is an example of this sort algorithm sorting five elements:
Sorted sublist          Unsorted sublist          Least element in unsorted list
()                      (11, 25, 12, 22, 64)      11
(11)                    (25, 12, 22, 64)          12
(11, 12)                (25, 22, 64)              22
(11, 12, 22)            (25, 64)                  25
(11, 12, 22, 25)        (64)                      64
(11, 12, 22, 25, 64)    ()

Selection sort can also be used on list structures that make add and remove efficient, such as a linked list. In this case it is more common to remove the minimum element from the remainder of the list, and then insert it at the end of the values sorted so far. For example:

arr[] = 64 25 12 22 11
// Find the minimum element in arr[0...4]
// and place it at beginning
11 25 12 22 64
// Find the minimum element in arr[1...4]
// and place it at beginning of arr[1...4]
11 12 25 22 64
// Find the minimum element in arr[2...4]
// and place it at beginning of arr[2...4]
11 12 22 25 64
// Find the minimum element in arr[3...4]
// and place it at beginning of arr[3...4]
11 12 22 25 64
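The linked-list approach described above can be sketched in C as follows. This is an illustrative implementation, not taken from the article; the type and function names are ours. It repeatedly unlinks the minimum node from the unsorted remainder and appends it to the tail of the sorted list.

```c
#include <stddef.h>

/* Illustrative singly linked list node. */
struct node { int value; struct node *next; };

/* Selection sort on a linked list: repeatedly remove the minimum
   node from the unsorted list and append it to the sorted list. */
struct node *selection_sort_list(struct node *head) {
    struct node *sorted = NULL;
    struct node **sorted_tail = &sorted;
    while (head != NULL) {
        /* find the link that points at the minimum node */
        struct node **min_link = &head;
        for (struct node **p = &head; *p != NULL; p = &(*p)->next)
            if ((*p)->value < (*min_link)->value)
                min_link = p;
        /* unlink the minimum node from the unsorted list ... */
        struct node *min = *min_link;
        *min_link = min->next;
        /* ... and append it at the end of the values sorted so far */
        min->next = NULL;
        *sorted_tail = min;
        sorted_tail = &min->next;
    }
    return sorted;
}
```

Working with a pointer-to-pointer (`min_link`) lets the code unlink the minimum node without tracking a separate "previous" pointer; no element values are copied, only links are rewired.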

Implementations

Below is an implementation in C.

/* a[0] to a[aLength-1] is the array to sort */
int i, j;
int aLength; // initialise to a's length
/* advance the position through the entire array */
for (i = 0; i < aLength - 1; i++) {
    int jMin = i;  /* index of the minimum in the unsorted a[i..aLength-1] */
    for (j = i + 1; j < aLength; j++)
        if (a[j] < a[jMin])
            jMin = j;
    if (jMin != i) {  /* swap the minimum into position i */
        int tmp = a[i]; a[i] = a[jMin]; a[jMin] = tmp;
    }
}
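For reference, the loop above can be packaged as a self-contained function; the function name and signature below are ours, not part of the article.

```c
/* In-place selection sort of the first n elements of a[]. */
void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        /* find the index of the smallest element in a[i..n-1] */
        int jMin = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[jMin])
                jMin = j;
        /* swap it into position i */
        if (jMin != i) {
            int tmp = a[i]; a[i] = a[jMin]; a[jMin] = tmp;
        }
    }
}
```

Calling `selection_sort(arr, 5)` on the earlier example array {64, 25, 12, 22, 11} leaves it sorted as {11, 12, 22, 25, 64}.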

Complexity

Selection sort is not difficult to analyze compared to other sorting algorithms since none of the loops depends on the data in the array. Selecting the minimum requires scanning n elements (taking n − 1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n − 1 elements and so on. Therefore, the total number of comparisons is

(n − 1) + (n − 2) + ... + 1

By arithmetic progression,

(n − 1) + (n − 2) + ... + 1 = n(n − 1)/2

which is of complexity O(n²) in terms of number of comparisons. Each of these scans requires one swap for n − 1 elements (the final element is already in place).
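Because the number of comparisons never depends on the data, the n(n − 1)/2 total can be verified empirically by instrumenting the inner scan with a counter. The instrumented function below is our illustrative sketch, not part of the article.

```c
/* Run selection sort on a[0..n-1] and return the number of
   element comparisons performed. */
long count_comparisons(int a[], int n) {
    long comparisons = 0;
    for (int i = 0; i < n - 1; i++) {
        int jMin = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++;  /* one comparison per scanned element */
            if (a[j] < a[jMin])
                jMin = j;
        }
        if (jMin != i) {
            int tmp = a[i]; a[i] = a[jMin]; a[jMin] = tmp;
        }
    }
    /* equals n*(n-1)/2 regardless of the input ordering */
    return comparisons;
}
```

For any 100-element array, sorted or shuffled, this returns 100 × 99 / 2 = 4950 comparisons.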

Comparison to other sorting algorithms

Among quadratic sorting algorithms, selection sort almost always outperforms bubble sort and gnome sort. Insertion sort is very similar in that after the kth iteration, the first k elements in the array are in sorted order. Insertion sort's advantage is that it only scans as many elements as it needs in order to place the (k + 1)st element, while selection sort must scan all remaining elements to find the (k + 1)st element.
Simple calculation shows that insertion sort will therefore usually perform about half as many comparisons as selection sort, although it can perform just as many or far fewer depending on the order the array was in prior to sorting. It can be seen as an advantage for some real-time applications that selection sort will perform identically regardless of the order of the array, while insertion sort's running time can vary considerably. However, this is more often an advantage for insertion sort in that it runs much more efficiently if the array is already sorted or "close to sorted."
While selection sort is preferable to insertion sort in terms of number of writes (Θ(n) swaps versus O(n²) swaps), it almost always far exceeds the number of writes that cycle sort makes, as cycle sort is theoretically optimal in the number of writes. This can be important if writes are significantly more expensive than reads, such as with EEPROM or Flash memory, where every write lessens the lifespan of the memory.
Finally, selection sort is greatly outperformed on larger arrays by Θ(n log n) divide-and-conquer algorithms such as mergesort. However, insertion sort or selection sort are both typically faster for small arrays. A useful optimization in practice for the recursive algorithms is to switch to insertion sort or selection sort for "small enough" sublists.

Variants

Heapsort greatly improves the basic algorithm by using an implicit heap data structure to speed up finding and removing the lowest datum. If implemented correctly, the heap will allow finding the next lowest element in Θ(log n) time instead of Θ(n) for the inner loop in normal selection sort, reducing the total running time to Θ(n log n).
A bidirectional variant of selection sort is an algorithm which finds both the minimum and maximum values in the list in every pass. This reduces the number of scans of the input by a factor of two. Each scan performs three comparisons per two elements (a pair of elements is compared, then the greater is compared to the maximum and the lesser is compared to the minimum), a 25% savings over regular selection sort, which does one comparison per element. This is sometimes called double selection sort.
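The bidirectional variant can be sketched in C as follows. The function name is ours, and for clarity this sketch uses the straightforward two-comparisons-per-element scan rather than the paired-comparison trick that achieves the 25% savings.

```c
/* Double selection sort: each pass finds both the minimum and the
   maximum of a[left..right] and swaps them to the two ends,
   shrinking the unsorted range from both sides. */
void double_selection_sort(int a[], int n) {
    for (int left = 0, right = n - 1; left < right; left++, right--) {
        int iMin = left, iMax = left;
        for (int i = left; i <= right; i++) {
            if (a[i] < a[iMin]) iMin = i;
            if (a[i] > a[iMax]) iMax = i;
        }
        /* place the minimum at the left end */
        int tmp = a[left]; a[left] = a[iMin]; a[iMin] = tmp;
        /* if the maximum was at the left end, it has just been
           moved to position iMin by the swap above */
        if (iMax == left) iMax = iMin;
        /* place the maximum at the right end */
        tmp = a[right]; a[right] = a[iMax]; a[iMax] = tmp;
    }
}
```

The `iMax == left` check is the classic pitfall of this variant: the first swap may relocate the maximum, so its index must be updated before the second swap.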
Selection sort can be implemented as a stable sort. If, rather than swapping in step 2, the minimum value is inserted into the first position, the algorithm is stable. However, this modification either requires a data structure that supports efficient insertions or deletions, such as a linked list, or it leads to performing Θ(n²) writes.
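On a plain array, the stable variant amounts to shifting instead of swapping; the sketch below (our code, illustrative naming) shows the Θ(n²)-write version.

```c
/* Stable selection sort: instead of swapping, shift a[i..jMin-1]
   one place to the right and insert the minimum at position i.
   Equal elements keep their relative order, at the cost of up to
   Theta(n^2) writes overall. */
void stable_selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        /* the strict '<' picks the first of several equal minima */
        int jMin = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[jMin])
                jMin = j;
        int min = a[jMin];
        for (int j = jMin; j > i; j--)  /* shift right by one */
            a[j] = a[j - 1];
        a[i] = min;
    }
}
```

Stability follows because the shift preserves the relative order of the displaced elements, and the strict comparison always selects the earliest occurrence of the minimum.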
In the bingo sort variant, items are ordered by repeatedly looking through the remaining items to find the greatest value and moving all items with that value to their final location. Like counting sort, this is an efficient variant if there are many duplicate values. Indeed, selection sort does one pass through the remaining items for each item moved. Bingo sort does one pass for each value (not item): after an initial pass to find the biggest value, the next passes can move every item with that value to its final location while finding the next value, as in the following pseudocode:

bingo(array A)
{ This procedure sorts in ascending order by repeatedly
  moving all items with the current largest value to the
  end of the unsorted part of the array. }
begin
    max := length(A)-1;

    { The first pass only finds the largest value, without swaps. }
    nextValue := A[max];
    for i := max - 1 downto 0 do
        if A[i] > nextValue then
            nextValue := A[i];
    while (max > 0) and (A[max] = nextValue) do
        max := max - 1;

    while max > 0 do begin
        value := nextValue;
        nextValue := A[max];
        for i := max - 1 downto 0 do
            if A[i] = value then begin
                swap(A[i], A[max]);
                max := max - 1;
            end else if A[i] > nextValue then
                nextValue := A[i];
        while (max > 0) and (A[max] = nextValue) do
            max := max - 1;
    end;
end;

Thus, if on average there are more than two items with the same value, bingo sort can be expected to be faster because it executes the inner loop fewer times than selection sort.