## Jesse's Software Engineering Blog

### Sorting Algorithms

Here is a great resource animating the relative performance of the various sorting algorithms.

#### Selection Sort

**Time Complexity:** O(N^2) — roughly N^2 / 2 comparisons

**Space Complexity:** O(1)

The selection sort iterates over the array, finds the smallest remaining value and swaps it with the value in the current slot (starting at the beginning of the array), then moves on to the next slot and repeats. The main drawback of the algorithm is that no information from one search for the smallest item carries over into future iterations. This means that a nearly sorted array has the same time complexity as a completely random one. The one benefit of the selection sort is that it sorts in place, so the space complexity is constant.
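A minimal sketch of the selection sort as described above:

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the smallest remaining value."""
    for i in range(len(arr)):
        smallest = i
        # Scan the unsorted remainder for the smallest value
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[smallest]:
                smallest = j
        # Swap it into the current slot
        arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr
```

Note that the inner scan always covers the full unsorted remainder, which is why a nearly sorted input gains nothing.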

#### Insertion Sort

**Time Complexity:** O(N^2) — roughly N^2 / 4 comparisons on average

**Space Complexity:** O(1)

The insertion sort walks the array from left to right, comparing each item to the item on its left and swapping the values if the current item is smaller, continuing to move the item left until there are no smaller values. While its worst-case time complexity is equivalent to the selection sort, on arrays that already have some sorted runs, which is common in practice, it will outperform the selection sort: the insertion sort only scans as far as needed to find the current item's position, whereas the selection sort must scan all remaining values. An already sorted array, for example, takes linear time with the insertion sort but quadratic time with the selection sort. This makes the insertion sort ideal for smaller data sets that are partially sorted. As with the selection sort, the insertion sort sorts in place and has constant space complexity.
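A minimal sketch of the insertion sort, shifting larger values right rather than swapping pair by pair:

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each value into the sorted prefix to its left."""
    for i in range(1, len(arr)):
        current = arr[i]
        j = i - 1
        # Shift larger values one slot right until current's position is found
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = current
    return arr
```

On an already sorted array the inner while loop never runs, which is where the linear-time best case comes from.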

#### Merge Sort

**Time Complexity:** O(N log N)

**Space Complexity:** O(N)

The merge sort is a divide-and-conquer algorithm, breaking the array into smaller subarrays. It recursively divides the array in half until each element is alone, then merges the elements back together in sorted order. The benefit of the merge sort is its much lower time complexity than the insertion and selection sorts, at the cost of additional space for merging the subarrays. If space is limited, another sorting algorithm may be better. In practice it is important to keep the data in the original array and move elements around (with a single auxiliary array for merging), as opposed to allocating and passing copies of the subarrays at every level of the recursion. The recursive nature of the algorithm, and the small size of the subarrays near the bottom, also opens up performance improvements: for example, switching to a simple algorithm like the insertion sort on small subarrays can provide performance gains.
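A minimal top-down sketch of the merge sort (this version allocates new lists at each level for clarity, rather than using the single auxiliary array suggested above):

```python
def merge_sort(arr):
    """Recursively split arr in half, sort each half, then merge them."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into one sorted list
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```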

#### Quicksort

**Time Complexity:** O(N^2) worst case, O(N log N) typical

**Space Complexity:** O(log N) average (recursion stack), O(N) worst case

The quicksort is a divide-and-conquer algorithm, and while its worst case is O(N^2), it is typically O(N log N) in practical applications. However, some care needs to go into the implementation to avoid bad performance; there are numerous examples of poorly implemented quicksorts degrading to quadratic time. The algorithm picks a pivot value from a subset of the array and moves all greater values to its right and all lesser values to its left, in a process called partitioning. After each partition, the algorithm recurses over the remaining subsets. Quicksort is typically the native sorting algorithm in programming languages' standard libraries, e.g. PHP, Ruby, Hacklang, but some, like Java, use the merge sort, and others, like Python, use a hybrid approach (merge-insertion hybrid). There are also variations, such as three-way partitioning, that speed up data sets with few unique values.
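A minimal in-place sketch of the quicksort, using a random pivot as one common guard against the quadratic worst case mentioned above:

```python
import random

def quicksort(arr, lo=0, hi=None):
    """Sort arr in place by partitioning around a randomly chosen pivot."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    # A random pivot keeps already-sorted input from degrading to O(N^2)
    pivot_index = random.randint(lo, hi)
    arr[pivot_index], arr[hi] = arr[hi], arr[pivot_index]
    pivot = arr[hi]
    # Partition: values less than the pivot move to the left of index i
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    # Recurse over the subsets on each side of the pivot
    quicksort(arr, lo, i - 1)
    quicksort(arr, i + 1, hi)
    return arr
```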