
Essential Algorithms: The Quick Sort

by Joshua Campbell, March 25th, 2020



The Quick Sort is an interesting algorithm and a favorite among software engineers, with some unique advantages and quirks worth looking into. Quick Sort can be highly efficient, often outperforming Merge Sort, although in certain cases it can degrade to Bubble Sort-like performance. As always, we'll jump in first with a broad-strokes overview of how this particular algorithm works before exploring the finer points about why it behaves the way it does.

The Quick Sort: An Overview

We'll start with an unsorted array:

arr = [9,7,4,2,3,6,8,5,1]

The Quick Sort works by selecting an item from somewhere inside the array, and comparing all of the other items to that one. We'll call this item our pivot. When an array is sorted, everything to the left of our pivot will be smaller than the pivot, and everything to the right of it will be larger. Quick Sort makes its way from the ends of the unsorted array towards the middle. When it finds an item on the left that should be on the right, and an item on the right that should be on the left, it swaps the two.

You can think of the part of the array to the left of the pivot and the part to the right of the pivot as their own sub-arrays. For now, we'll treat them as their own distinct sub-arrays, and then recursively apply the algorithm to each sub-array. This recursive division and comparison scheme is the same divide-and-conquer approach that Merge Sort takes, and the parallels make it easy to see why it takes O(n*log(n)) time on average.
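
For a rough sense of scale (a back-of-the-envelope illustration, not a figure from the original article): with perfectly balanced splits, an array of a million items only needs about 20 levels of recursion, and each level touches roughly every element once.

import math

n = 1000000
levels = math.ceil(math.log2(n))   # depth of the recursion when every split is balanced
comparisons = n * levels           # each level compares (roughly) every element once
print(levels, comparisons)         # about 20 levels, so on the order of 20 million comparisons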

To illustrate this point and analyze how this works with a divide-and-conquer implementation, we will select the element as close to the middle of the array as possible. In the first iteration of the algorithm, we'll select the number 3, in the middle, as the pivot. With our pivot selected, this is what our sub-arrays look like before we get started:

So, how do we efficiently sort these 2 sub-arrays around the pivot? We can simply iterate over the arrays to see if anything on the right side is smaller than the pivot and move it to the left side, and vice versa. If we iterated over the left and right sides, moving the appropriate items, we'd eventually wind up with one array whose items all belong on its side of the pivot, and another array holding the remaining unsorted elements. We would then iterate over the rest of the unsorted array and push the items that belong in the other array onto it.

Once that's done, we end up with a pair of arrays that looks like this:

Now, we know that everything in the left array belongs left of the pivot, and everything in the right array belongs right of the pivot. We can now recursively apply this logic to all of these sub-arrays until each item is sorted. At least, that's how the divide-and-conquer approach works.

The actual Quick Sort algorithm doesn't break anything up into smaller sub-arrays. Here, the act of recursively dividing the array into pairs before performing comparisons is used merely to illustrate intuitively why its average complexity is O(n*log(n)), which we'll explore more later.
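
To make that intuition concrete, here is a minimal, purely illustrative sketch of the divide-and-conquer version, assuming we're happy to allocate new sub-arrays on every call (which, again, is not what the real Quick Sort built later in this article does):

def naive_quicksort(arr):
    # Illustration only: allocates new lists on every call, unlike the
    # in-place version developed later in this article
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]      # everything smaller than the pivot
    middle = [x for x in arr if x == pivot]   # the pivot (and any duplicates of it)
    right = [x for x in arr if x > pivot]     # everything larger than the pivot
    return naive_quicksort(left) + middle + naive_quicksort(right)

Each call splits the work in two, which is exactly the divide-and-conquer shape Merge Sort has; the difference is all the extra allocation, which the next section is about eliminating.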

Time and Space

While we've discussed time complexity quite a bit in previous installments, one thing we have yet to discuss with similar fervor is space complexity. The Quick Sort algorithm, when done well, does not actually carve out new sub-arrays to feed back into itself. Before jumping into what it does instead, let's look at why it doesn't do this. We can refer back to our Python code for Merge Sort from one of the previous parts of this series:

def merge_sort(unsorted):
    if len(unsorted) > 1:
        mid = len(unsorted)//2
        left = merge_sort(unsorted[:mid])    # allocates a new list for the left half
        right = merge_sort(unsorted[mid:])   # allocates a new list for the right half
        result = merge(left, right)          # merge() comes from the Merge Sort installment
        return result
    else:
        return unsorted

Here, we can start analyzing how it uses space. It takes an unsorted array and allocates two more arrays, each at half the size of the array it was passed. It then feeds both of these arrays into the same function, which again allocates space for 2 more arrays, recursively. So, for example, let's take an array with 8 elements. On each call we allocate n/2 space for new arrays, going down the entire left side before recursively moving back up and working into the right side. The exact space complexity here isn't important; what is important to understand is that it requires additional space, and allocating and deallocating memory for these operations affects performance.

Rather than allocating additional space to hold the sub-arrays being worked on, a function can be passed only the indices that outline the sub-array being worked on within the original array. This allows an array to be sorted by performing operations directly on the actual array, and is called sorting in place.

Sorting In Place

Sorting in place has the advantage of taking up only O(1) extra space. Say your function for Quick Sort only has 3 variables: the pivot, left, and right side boundaries. If you're writing in C, that means each function call only has to allocate space for 3 variables, which are probably only going to be 4-byte unsigned ints, or a total of 12 bytes. It doesn't matter if the array being passed to it is 40 items or 40,000,000, it still only needs to allocate 12 bytes when it gets called. That is why it's considered to have an O(1) space complexity when sorting in place: the amount of space needed is constant and doesn't grow.

In the earlier overview, the algorithm was explained as manually iterating over the sub-arrays and merely moving items around after comparing them to the pivot. Doing this in place requires a slightly different approach to do the same thing. Consider our original unsorted array arr, [9,7,4,2,3,6,8,5,1]. With 9 items, if we selected the middle item, arr[4], our pivot would be 3. Instead of making a separate set of left and right arrays, we'll sort in place by making a left index and a right index, which will begin on the left and right boundaries of our array.

We start with the left item and compare it to our pivot. If the item pointed to by the left index is less than the pivot, that is to say, it belongs to the left of the pivot, we move the left index forward by one and compare that number. We keep moving the left index forward until we find an item that doesn't belong to the left of the pivot. When we find such an item, we stop the left index and begin comparing the right index to the pivot. When an item on the left belongs on the right, and an item on the right belongs on the left, the two items are swapped. Since the first item the left index looks at, arr[0], is 9, which belongs to the right of the pivot, the left index stops right there at arr[0]. Since the first item the right index looks at, arr[8], is 1, which belongs to the left of the pivot, these two items switch places. After the switch, the left index increments and the right index decrements, because both of these items are now where they should be, and the process begins again.

This behavior ensures that, at all times, everything to the left of the left index belongs on the left side of the pivot, and everything to the right of the right index belongs to the right of the pivot.

This method of sorting continues until the left and right indices meet and pass each other. So in this example, the left index is now pointing to 7, which is greater than 3, so we start moving the right index down towards the left until we find an item that belongs on the left of the 3. So we move the right index down, comparing 3 to 8, then 6, then 2. The left and right indices always ignore the actual pivot and skip over it, as the pivot will be correctly placed in the last step. So, our array now looks like this:

Now, the 7 and 2 switch places. With this, the left index and right index move, but now point to the same item, arr[2], which is 4. Even though they're pointing to the same item, we continue the same logic as before. We compare 4 to our pivot, 3. It belongs on the right side of it, so we start moving the right index, looking for something smaller than 3. Since 4 is not smaller than 3, we decrement the right index.

This brings us to our final step. We know from before that everything to the right of the right index belongs to the right of the pivot, and everything to the left of the left index belongs left of the pivot. With the right index moving past the left index, we now know that everything except the pivot is in its final place.

The right index only passes the left when everything else is partitioned, so the left index is pointing to the last unplaced item, and this can simply be swapped with the pivot, giving us an array sorted in place relative to the pivot.
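
Tracing the steps described above by hand, the array passes through states like these (a reconstruction, since the original illustrations aren't reproduced here; L and R stand for the left and right indices):

arr = [9, 7, 4, 2, 3, 6, 8, 5, 1]   # pivot is 3 at arr[4]; L starts on 9, R starts on 1
arr = [1, 7, 4, 2, 3, 6, 8, 5, 9]   # 9 and 1 swap; L stops on 7, R walks left past 5, 8, 6
arr = [1, 2, 4, 7, 3, 6, 8, 5, 9]   # 7 and 2 swap; L and R meet on 4, then R passes L
arr = [1, 2, 3, 7, 4, 6, 8, 5, 9]   # 4 and the pivot 3 swap: 3 is now in its final position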

Looking at our array now, we can say that everything right of the pivot is where it belongs relative to the pivot, and everything left of it is where it belongs relative to the pivot. This algorithm can now be applied recursively to each side of the array. The left side would start its left index at the original left index again, and its right index would be at pivot-1. The right side would start its left index at pivot+1, and its right index would be the original right index.

Now that we have a high-level overview of how Quick Sort puts items into place, we can start discussing finer details and exploring other questions, such as how to determine the pivot in a way that lets us sort it with the most efficiency.

Pivot Selection

Selecting the pivot for Quick Sort is the key to efficient, or inefficient, time complexity. The worst-case scenario for Quick Sort is O(n^2), yet when done correctly, it can run in O(n*log(n)). By remembering what we did with Merge Sort, and by looking at both 1) the recursive, dividing nature of Quick Sort, and 2) the fact that the number of comparisons grows directly with the size of the input, we can easily see why Quick Sort can be O(n*log(n)). But what behavior causes it to degrade to O(n^2)?

There are two common methods of picking the pivot that are straightforward and easy, but not necessarily the best: picking the first item, and picking the last item. These are often chosen instead of using a pseudo-random number generator, since calling a PRNG repeatedly can slow things down and affect performance. Let us consider this already-sorted array:

arr = [1,2,3,4,5,6,7,8,9]. Let's also say we don't want to use a pseudo-random number generator, so, to be quick, we decide to just pick the last item in the unsorted partition of our array. Here, arr[-1] is 9. No right-side partition ever gets made; the entire left-side partition is the rest of the array. That means on the second pass, our pivot is arr[-2] = 8, and we continue. In fact, for an array of length n, we make n-1 passes over it, starting at n-1 comparisons, then n-2, and so on until we are at the last item, for a total of (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons. This reveals that this implementation works much the same as Bubble Sort, lending us the O(n^2) complexity as the size of the input grows. Of course, this happens with already-sorted or mostly-sorted lists when the first or last item is consistently selected. So, Quick Sort should not use this pivot selection scheme whenever a list may be passed to it already sorted.
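
As a quick sanity check on that count (a small illustration, not from the original article), for n = 9 the passes make 8 + 7 + ... + 1 comparisons:

n = 9
passes = list(range(n - 1, 0, -1))       # comparisons per pass: 8, 7, ..., 1
print(passes)                            # [8, 7, 6, 5, 4, 3, 2, 1]
print(sum(passes), n * (n - 1) // 2)     # both print 36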

Knowing this, we can rule out always picking the first or last item as an ideal way of pivot selection. Given this, there are a few selection methods that can be implemented. Selecting an item at random means the odds of repeatedly choosing pivots that cause O(n^2) behavior become exponentially less likely with every consecutive array passed to it. So, selecting an item completely at random can be an effective method. However, creating a "random" number with a PRNG can be computationally expensive and slow, which can cause its own set of performance issues when it has to run many times, as with huge lists.

Optimization and Scalability

In order to run the algorithm at maximum efficiency, the goal should be to create the most balanced left and right partitions possible. The behavior that causes performance to degrade to O(n^2) arises when the partitions are as unbalanced as possible, with all of the elements landing on one side. The best-case performance, O(n*log(n)), arises when the partitions are the most balanced. Therefore, to create the most efficient implementation of the algorithm, we know the following from our analysis:

  1. We should not always select the first item, as that can cause O(n^2) runtime.
  2. We should not always select the last item, as that can cause O(n^2) runtime.
  3. We should not use a pseudo-random number generator, because they are slow and will cause their own performance issues.
  4. We should end each partitioning with the most balanced partitions we can reasonably expect for our best performance.

The trick here is to figure out how to select a pivot from a partition that will leave you with 2 relatively balanced partitions. At first, it may seem to make sense to just pick the item closest to the middle of the array. However, if the value sitting in the middle position happens to be the largest or smallest in the partition, you wind up with heavily unbalanced partitions even though the ones you started with were evenly split. While this is not likely to happen every time, it's still not going to lead to the best, most consistent performance.

There is one method, called the Median of Three, which gives rise to reasonably balanced partitions. This method requires you to pick the first item, the last item, and the middle item. These 3 items need to be sorted (and since there are only 3 items, something simple like Bubble Sort can be used without worrying about performance). By taking the first, the last, and the middle item, we get a sample of the range of values we are looking at. With this sorted set of 3 items, we can select the median, knowing that the larger and smaller items would create more unbalanced partitions. Thus, the median of three tends to produce the most balanced partitions of the three candidates.

Let's look again at our first list: [9,7,4,2,3,6,8,5,1]. The first item is 9, the last item is 1, and the middle item is 3. When this gets sorted, we get [1,3,9]. By selecting the 3, we will create the least unbalanced pair of partitions possible, as the other two items are guaranteed to create partitions that are even more unbalanced.
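
Here is a small sketch of that idea, using a hypothetical median_of_three helper built on Python's list sort rather than the bubble sort used in the implementation below:

def median_of_three(arr, left, right):
    # Hypothetical helper: return the index of the median of the first,
    # middle, and last items of the partition arr[left..right]
    middle = (left + right) // 2
    candidates = [(arr[left], left), (arr[middle], middle), (arr[right], right)]
    candidates.sort()            # only three items, so the cost is negligible
    return candidates[1][1]      # the index holding the median value

median_of_three([9, 7, 4, 2, 3, 6, 8, 5, 1], 0, 8)   # returns 4, the index of the 3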

If you find yourself in a situation where you aren't actually too worried about how slow the PRNGs in your language of choice run, you could easily opt to randomly select an item from within the partition you're working with and use that as your pivot. Sometimes it will create really unbalanced partitions, but on average, it will create reasonably balanced pairs of partitions. In the real world, this is often more than sufficient for most use cases.
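
That selection scheme is essentially a one-liner; a hedged sketch, assuming a random_pivot helper of our own naming:

import random

def random_pivot(left, right):
    # Pick any index inside the partition with equal probability;
    # randint is inclusive on both ends
    return random.randint(left, right)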

However, if you find yourself having to scale up, you will want to go back into your codebase and make your sorting more efficient. Taking out the PRNG and replacing it with a Median of Three implementation can provide a small amount of optimization in two places: first, the PRNG is slow, and grabbing the first, last, and middle items, bubble sorting them, and selecting the median will likely be faster; second, the Median of Three implementation doesn't run into extremely unbalanced cases as often as a randomly selected pivot does.

While Quick Sort does decay to O(n^2) in certain cases, its ability to sort in place, unlike Merge Sort, means it can often run faster than Merge Sort because it never has to allocate and free working space in memory. Allocating, freeing, writing to, and reading from memory all take time, and minimizing those operations by sorting in place gives Quick Sort a performance advantage over Merge Sort.

An Example in Python

This example is in Python and will sort in place, using a median-of-three selection scheme. The median-of-three scheme will only ever be passed 3 numbers, so a bubble sort can easily be used to put the 3 numbers in order and select the middle one. We'll first create a function that gets passed an array of 3 values and returns them sorted.

def bubble(array):
    # Repeatedly swap adjacent out-of-order items; with only 3 values
    # this settles after at most a couple of passes
    swapped = False
    for i in range(2):
        if(array[i] > array[i+1]):
            array[i],array[i+1] = array[i+1],array[i]
            swapped = True
    if not swapped:
        return array
    else:
        return bubble(array)

With this in place, we can start our quicksort() function. It will be passed 3 arguments: the array it's sorting, the left boundary of the partition being sorted, and the right boundary of the partition being sorted.

def quicksort(arr, left, right):

    if(len(arr[left:right+1])>2):
        middle = (left + right)//2
        three = bubble([arr[left], arr[middle], arr[right]])
        if(three[1] == arr[left]):
            pivot = left
        elif(three[1] == arr[right]):
            pivot = right
        else:
            pivot = middle
    else:
        pivot = right

    left_index = left
    right_index = right

If the partition being sorted is more than 2 items long, we'll pick the first, last, and middle items and take the median value as our pivot. If we have only 2, we'll just pick the right item to be the pivot. The left_index and right_index are set to the left and right variables to keep track of the items actually being compared, whereas left and right themselves will keep track of the bounds of the partition being worked on.

Now, on to the main loop:

    while(left_index<=right_index):
        if(left_index == pivot):
            left_index += 1
        if(right_index == pivot):
            right_index -= 1
        # If skipping over the pivot made the indices cross, partitioning is done
        if(left_index > right_index):
            break

        if(arr[left_index] > arr[pivot]):
            if(arr[right_index] < arr[pivot]):
                arr[left_index], arr[right_index] = arr[right_index],arr[left_index]
                left_index += 1
                right_index -= 1
            else:
                right_index -= 1
        else:
            left_index += 1

The while loop condition keeps it running as long as the left index hasn't passed the right index. If the left_index makes it to the right boundary of the partition without finding anything that belongs to the right of the pivot, the loop stops. In that scenario, the left_index and right_index would both be stopped on the right end of the partition.

It starts by checking whether the left_index or right_index is sitting on the pivot. If so, it moves them in the appropriate direction to skip over the pivot (and if that skip makes the indices cross, the partition is already finished, so the loop breaks). With the indices guaranteed to be in a proper place, they can start comparing items to the pivot. The item at the left_index is compared to the pivot. If it's smaller, the left_index gets incremented and the comparison runs again. If it's larger, then we start looking for an item pointed to by the right_index that is smaller than the pivot; while it's not, the right_index gets decremented and the comparisons continue. When 2 swappable items are identified, they get swapped, and both the left_index and right_index move inward, because both of those items are now in place and do not need to be compared to the pivot again.

Once the left_index has passed the right_index, or run to the end of the partition, it's time to put the pivot into place:

    if(left_index < pivot):
        arr[left_index], arr[pivot] = arr[pivot],arr[left_index] 
        pivot = left_index
    elif(right_index > pivot):
        arr[right_index], arr[pivot] = arr[pivot],arr[right_index]
        pivot = right_index

Everything to the left of the left_index belongs left of the pivot. Likewise, everything to the right of the right_index belongs to the right of it. The only exception now is the pivot itself. Because the item at the left_index is larger than the pivot by the time the right_index passes it, the pivot should only be swapped with the left_index if the left_index is still to the left of the pivot, as that item needs to end up on the right of the pivot. If the left_index has passed the pivot and is now on its right, then swapping left_index with the pivot would leave an item larger than the pivot sitting to its left. Instead, the pivot is switched with the right_index, which points at the last of the items that should be on the left of the pivot. After performing the swap, it updates the index of the pivot.

And finally, we wrap up with this:

    if(len(arr[left:pivot]) > 1):
        quicksort(arr, left, pivot-1)
    if(len(arr[pivot+1:right+1]) > 1):
        quicksort(arr, pivot+1, right)
    return arr

The new left partition is arr[left:pivot], and the new right partition is arr[pivot+1:right+1]. If there is only one item in either of these, we know that item is in its proper place. However, if there are 2 or more, then those items still need to be evaluated and sorted into place. The quicksort() function can then be called again, with different left and right boundaries for the partitions, recursively until the entire list is sorted.

Our entire quicksort.py file looks like this:

def bubble(array):
    # Simple bubble sort for the 3-item median-of-three sample
    swapped = False
    for i in range(2):
        if(array[i] > array[i+1]):
            array[i],array[i+1] = array[i+1],array[i]
            swapped = True
    if not swapped:
        return array
    else:
        return bubble(array)

def quicksort(arr, left, right):
    # Median-of-three pivot selection for partitions longer than 2 items
    if(len(arr[left:right+1])>2):
        middle = (left + right)//2
        three = bubble([arr[left], arr[middle], arr[right]])
        if(three[1] == arr[left]):
            pivot = left
        elif(three[1] == arr[right]):
            pivot = right
        else:
            pivot = middle
    else:
        pivot = right

    left_index = left
    right_index = right
    while(left_index<=right_index):
        if(left_index == pivot):
            left_index += 1
        if(right_index == pivot):
            right_index -= 1
        # If skipping over the pivot made the indices cross, partitioning is done
        if(left_index > right_index):
            break

        if(arr[left_index] > arr[pivot]):
            if(arr[right_index] < arr[pivot]):
                arr[left_index], arr[right_index] = arr[right_index],arr[left_index]
                left_index += 1
                right_index -= 1
            else:
                right_index -= 1
        else:
            left_index += 1

    # Put the pivot into its final position
    if(left_index < pivot):
        arr[left_index], arr[pivot] = arr[pivot],arr[left_index]
        pivot = left_index
    elif(right_index > pivot):
        arr[right_index], arr[pivot] = arr[pivot],arr[right_index]
        pivot = right_index

    # Recurse into any partition that still holds more than one item
    if(len(arr[left:pivot]) > 1):
        quicksort(arr, left, pivot-1)
    if(len(arr[pivot+1:right+1]) > 1):
        quicksort(arr, pivot+1, right)
    return arr
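
A quick way to exercise the file (assuming it's saved as quicksort.py and run directly):

if __name__ == "__main__":
    arr = [9, 7, 4, 2, 3, 6, 8, 5, 1]
    print(quicksort(arr, 0, len(arr) - 1))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]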

Testing Speed

Now that we've gone over Quick Sort and Merge Sort, two different algorithms that typically perform in O(n*log(n)) time, we can write a test that lets us directly compare their performance and analyze it for ourselves. In this example, I'll be importing the timeit module to keep track of performance.

The test will be simple: I want to create a test array full of random numbers for each test. When the array is ready, I'll use timeit to capture the current time, then run the sorting algorithm. When it's done, I'll use timeit again to capture the ending time, and calculate the runtime. These runtimes will be kept in their own array, and a thousand tests can be performed. With all of this data, we can find the highest runtime, the lowest runtime, and the average. If we do this the same way for Quick Sort and Merge Sort, we can build an apples-to-apples comparison of performance.

import random
import timeit

def time_test():
    times = []
    for i in range(1000):
        test_arr = []
        for j in range(1000):
            test_arr.append(random.randint(1,15000))
        start = timeit.default_timer()
        quicksort(test_arr, 0, (len(test_arr)-1))
        stop = timeit.default_timer()
        exec_time = stop-start
        times.append(exec_time)
    quicksort(times,0,len(times)-1)
    average = sum(times)/len(times)
    print("Lowest exec time: %s" % min(times))
    print("Highest exec time: %s" % max(times))
    print("Mean exec time: %s" % average)

This is the timing test I wrote for Quick Sort. I essentially wrote the same thing for Merge Sort, as well, just with a few tweaks. After running the two tests, these were my results:

Here we can see that Quick Sort is faster than Merge Sort, performing nearly twice as fast.

With this function to test performance, we can also explore how the different pivot selection methods affect performance. First, we'll see how Median of 3, Random, and Right-Side pivot selection stack up on a fully randomized array:

Apparently, calculating the median of 3 is not the most efficient method of pivot selection here. Its fastest runtime was very close to the random selection's fastest runtime, but its highest and mean exec times were the highest of the 3. Randomly selected pivots had the fastest worst-case performance. Always picking the rightmost item of the partition as the pivot resulted in a much better worst-case time than the median-of-3, though not quite as good as the randomly selected pivot. However, its lowest run time was by far the fastest, taking only about 2/3 of the time of the other two, and it also had a much, much faster average runtime.

Clearly, if the array is going to be completely unsorted, then right-side pivot selection is the winner. It doesn't have to bother generating random numbers or sorting items to select the median of 3, and thus is faster. The completely random nature of the array means the function will tend to create fairly balanced partitions. But what if the array is already mostly sorted, or completely sorted?

To test this, the test function is modified slightly again: it still randomly generates an array, but sorts it first and then passes the sorted array to the sorting function (a sketch of that tweak follows below). The performance change was quite stark:
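
Here's roughly what that tweak looks like (a hedged sketch of the modification described, using a hypothetical time_test_sorted variant rather than the author's exact code):

def time_test_sorted():
    # Identical to time_test(), except the input is pre-sorted before timing
    times = []
    for i in range(1000):
        test_arr = [random.randint(1, 15000) for j in range(1000)]
        test_arr.sort()                          # hand the algorithm an already-sorted array
        start = timeit.default_timer()
        quicksort(test_arr, 0, len(test_arr) - 1)
        stop = timeit.default_timer()
        times.append(stop - start)
    average = sum(times) / len(times)
    print("Lowest exec time: %s" % min(times))
    print("Highest exec time: %s" % max(times))
    print("Mean exec time: %s" % average)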

Right-side pivot selection took 30 times longer on average than the Median of 3! Right-side pivot selection works far faster than the other methods, but only while it's passed a truly random, unsorted array. The advantage quickly disappears and becomes a severe performance penalty if it's passed an array that's mostly or completely sorted already.

While a randomly selected pivot behaves almost exactly the same way on completely random and completely sorted lists, the Median of 3 pivot selection is the clear winner when working with arrays that may already be somewhat sorted. In fact, the Median of 3 selection scheme took approximately 2/3 of the time of the random selection scheme in every category. This is because the Median of 3, in this case, always creates perfectly balanced partitions, whereas the randomly selected one makes reasonably balanced, yet still unbalanced, partitions.

With all of this in front of us, it's easy to see why the Quick Sort is a favorite among software engineers. It's reliably fast and efficient, and can be implemented with minimal space requirements. If you enjoyed this or found my examination of this algorithm useful, you can show your appreciation by sharing this or buying me a coffee (although now, I'll probably be looking for toilet paper instead of coffee). And if you've actually read this far, thank you for reading, I hope you enjoyed it.