# Merge Sorting Algorithm in Go

There are many well-known algorithms for sorting data, some more efficient than others. Some run faster, others use less memory, and when it comes to a job like sorting there is no single all-in-one solution that works best for every case.

That’s why in this article we’ll focus our attention on the Merge Sort algorithm, covering everything from how it works under the hood to implementing it step by step in Go aka Golang.

Before we dive into the topic, I wanted to let you know that I’m more active on YouTube @SteveHook, where I create videos on many other topics. Also feel free to check out the repo where I store all the resources, presentation notes, animations & coding examples.

# Overview

Before we throw in code and look at how the algorithm works technically, what I usually love to begin with are everyday examples, meaning we’re going to go back to our good old deck of cards.

The easiest way to reason about the algorithm is: imagine you want to sort a big pile of cards. What you’d typically do is keep on dividing the pile into 2 smaller piles and repeat the process, till each bigger pile is divided into very small piles consisting of 1 card, after which you start merging all the piles back into 1 pile. When merging 2 piles into 1, it’s important to keep the sorting order in mind.

This recursive process is also known as the Divide & Conquer principle: we split the big problem into smaller problems, solve each individual problem, and at the end merge the solutions, thus solving the big initial problem.

*Merge Sort on a Deck of Cards*

# The Algorithm Overview

Now that we know what the Merge Sort algorithm is in a nutshell, let us define a couple of rules and technical steps required for this algorithm to work.

1. On every call find the middle of the incoming array/slice (divide it in 2)
2. Sort the LEFT side (recursively)
3. Sort the RIGHT side (recursively)
4. Merge the LEFT & RIGHT side (recursively)
5. Array/Slice of 1 element is considered sorted
6. Repeat the process using the Divide & Conquer principle (Recursive)

Here’s a small example of how the process looks:

*Merge Sort Algorithm Overview*

# The Divide & Conquer Principle

While every problem out there can be solved using the good old procedural approach, in many cases a Divide & Conquer, or recursive, approach is just much easier and simpler.

The Divide & Conquer principle consists of 3 simple steps:

1. Divide the problem into smaller problems of the same type
2. Repeat the process till the problem can no longer be divided
3. Collect & Merge indivisible problems (solutions) or Conquer

That’s the Divide & Conquer principle in a nutshell, or what we simply call a recursive approach. This is how the recursive approach looks for our specific Merge Sort algorithm:

1. We’ll start by dividing the slice into smaller slices
2. When each slice consists of 1 element only we start the merging process
3. In the merging process we also sort out the resulting slice

This way we repeat the process till we reach the very bottom of the recursion, where we only have slices consisting of 1 element. Then we start climbing back up the recursion stairs by merging smaller slices into bigger ones, till there is nothing left to merge, resulting in a nicely sorted single slice.

*Divide & Conquer Principle*

# Merge Sort Algorithm Recipe

So, we’re getting closer to putting the Merge Sort algorithm together, we just need to establish a couple of things we’ll need and how we plan to go about them.

1. We need a Merge Function that merges and sorts the final array/slice
2. We need a recursive Sort Function
3. Call Sort recursively for the LEFT & RIGHT sides
4. Call Merge recursively for the sorted LEFT & RIGHT sides
5. Return the sorted slice back to the caller

With all of the above steps, we can start having a look at the meat of the algorithm, meaning the merge and the sort functions.

# Merge Function

Right off the bat, the merge function’s responsibility is to merge 2 given arrays/slices into 1, as well as sorting the result in the merging process.

So, the merge function that we have to create will primarily do these 3 things:

1. Loop while both arrays/slices still have unprocessed elements, meaning the comparison loop stops as soon as one of the two sides is exhausted. This keeps the number of comparisons as low as possible.
2. On each iteration we compare the current elements of the LEFT & RIGHT sides and insert the smaller one into the resulting array/slice, till we run out of iterations.
3. When the loop ends, we insert all the remaining elements. For example, when one side holds most of the smaller elements, the other side is left with elements that have not yet been inserted into the resulting array/slice.

While the merge function does all the things we enumerated above, there is one precondition it does not handle, which we, the developers, are responsible for before calling it. For the merge function to work correctly:

The RIGHT & LEFT arrays/slices MUST be Sorted

You may be thinking: if the arrays/slices have the same length, then the check we make on each iteration should result in a correctly sorted slice. Well, that’s not quite true. We start running into issues when the arrays/slices either have different lengths or the elements on one side are only mostly sorted. For example, running the following will produce incorrectly sorted results, because the RIGHT side in each call is not sorted:

```go
merge([]int{2,3,4,6}, []int{0,1,5,7,9,8}) // [0 1 2 3 4 5 6 7 9 8]
merge([]int{1,2,3,5}, []int{4,6,9,8})     // [1 2 3 4 5 6 9 8]
```

The merge function must be able to work with different array/slice lengths. However, because on every sort call we divide the array/slice in 2, the merge function will only ever be called with two sides whose lengths differ by at most 1 element.

Here’s a small overview of the merge function and how it should work when for example we call it with the following LEFT & RIGHT sides:

```go
merge([]int{4,5,7}, []int{1,3,6})
```

*Merge Function*

# Merge Function Pseudocode

```
function merge(L, R) {
  let A[0..L.length+R.length]
  i, j, k = 0, 0, 0

  // compare LEFT & RIGHT side elements before merging
  while i < L.length and j < R.length {
    if L[i] < R[j] {
      A[k] = L[i]
      i++
    } else {
      A[k] = R[j]
      j++
    }
    k++
  }

  // check if any elements from the left/right side
  // were missed in the comparison section above
  while i < L.length {
    A[k] = L[i]
    i++
    k++
  }
  while j < R.length {
    A[k] = R[j]
    j++
    k++
  }

  return A
}
```

As you can see from the pseudocode above we’re mainly doing 2 things:

1. Compare the elements and decide which one goes first in the resulting A array/slice.

```
...
while i < L.length and j < R.length {
  if L[i] < R[j] {
    A[k] = L[i]
    i++
  } else {
    A[k] = R[j]
    j++
  }
  k++
}
...
```

Also notice that the sorting condition above is hardcoded. Instead, we could receive a callback function that accepts a and b and returns a bool, giving control over the sorting order to the caller:

```go
type sortFunc func(a, b int) bool
```

2. At the end we check for any leftover elements and add them to the end of the resulting array/slice A.

```
...
while i < L.length {
  A[k] = L[i]
  i++
  k++
}
while j < R.length {
  A[k] = R[j]
  j++
  k++
}

return A
...
```

# Merge Function implemented in Go

The implementation translates almost one-to-one from the pseudocode above, so there is really not much left to explain.

If you don’t want to run the code above on your machine, feel free to experiment with the merge function repl.

To make the merge function less coupled to the sorting condition, we can go ahead and accept a callback function that returns a boolean and leave this responsibility to the caller.

If you don’t want to run the code above on your machine, feel free to experiment with the merge function with sorting func repl.

# Sort Function

You may be asking yourself: how the hell do we sort things when all we do is merge 2 already sorted arrays/slices and call a function that uses merge recursively? Well, that’s the beauty of the Divide & Conquer principle.

By dividing the initial array/slice, perhaps consisting of a huge number of elements, into smaller arrays/slices and repeating the process, we reach the very bottom of the recursion, where by definition an array/slice consisting of 1 element is trivially considered already sorted. This means that when we start merging, we meet both conditions:

1. We provide sorted LEFT & RIGHT side arrays/slices (beginning from arrays/slice of length 1)
2. Repeat the process till we’ve merged everything back to the initial big array/slice, this time in the sorted order

Have a look at how exactly the sort function will get called for a simple example like the following:

```go
sort([]int{6,2,1,3,5,4}) // [1, 2, 3, 4, 5, 6]
```

*Sort Function*

# Sort Function Pseudocode

```
function sort(A) {
  if A.length > 1 {
    n1 = A.length / 2
    n2 = A.length - n1
    let L[0..n1] and R[0..n2]

    // split A in half => L and R arrays/slices
    for i=0 to n1 {
      L[i] = A[i]
    }
    for j=0 to n2 {
      R[j] = A[n1+j]
    }

    // for languages that use clever constructs
    // m = A.length / 2
    // L = A[:m]
    // R = A[m:]

    // recursively sort & merge L & R arrays/slices
    // A starts growing when recursion reaches the very bottom
    A = merge(sort(L), sort(R))
  }

  // return when array/slice consists of 1 element
  return A
}
```

The sort function primarily consists of 3 steps:

1. Split the incoming array/slice in 2 on every recursive call & create the LEFT & RIGHT arrays/slices.

```
...
n1 = A.length / 2
n2 = A.length - n1
let L[0..n1] and R[0..n2]

// split A in half => L and R arrays/slices
for i=0 to n1 {
  L[i] = A[i]
}
for j=0 to n2 {
  R[j] = A[n1+j]
}
...
```

2. Return array/slice when it cannot be divided anymore, or it has only 1 element.

```
if A.length > 1 {
  ...
}

// Here we have an array/slice of 1 element only
return A
```

3. Call sort and merge for LEFT & RIGHT sides recursively, till exit condition is met (array/slice has 1 element only or both sides were merged).

```
...
// recursively sort & merge L & R arrays/slices
// A starts growing when recursion reaches the very bottom
A = merge(sort(L), sort(R))
...
```

# Sort Function implemented in Go

The implementation again translates almost one-to-one from the pseudocode above, so there is not much to explain in this case either.

If you don’t want to run the code above on your machine, feel free to experiment with the sort function repl.

# CAN we IMPROVE it?

We all know there is no perfect way of doing things, just like there is no perfect sorting algorithm. So here are a couple of things we should know about the Merge Sort algorithm, which suggest ways we can improve its performance/memory usage:

• Merge Sort uses temporary memory (the L and R slices), which may be a problem for huge data sets where we deal with millions of objects that take up a lot of space.
• Merge Sort works with giant data sets and is also stable & very predictable.

Now, while Merge Sort is considered a decent algorithm to deal with big data, it can be an issue for applications that heavily rely on memory.

However, a while ago we studied the Insertion Sort algorithm, about which we know that:

• Insertion Sort is faster on small data sets
• Insertion Sort does not use temporary memory

If you want to know more about the Insertion Sort Algorithm, make sure to check out the Medium article or the YouTube video, where I covered the algorithm in details. Choose whatever works better for you.

Now, if you’re like me, you may be asking yourself: why not combine the 2, so that we take the best out of both worlds and make our sorting as CPU and memory efficient as possible? Well, there is already a classic algorithm in this spirit: merge-insertion sort, aka the Ford–Johnson algorithm, which combines merging with insertions.

So here’s how we’re going to use the 2 together:

1. Use Merge Sort most of the times
2. Check the data length before splitting the array/slice
3. Use Insertion Sort instead if the data length is small enough

Now you may be asking yourself: what exactly does “small enough” mean for Insertion Sort? Well, that’s a good question. The generic answer is: use a threshold somewhere between 8 and 20 elements.

However, that is too generic; normally you want to test things out and adjust the threshold for the amount of data you are sorting. The bigger the threshold, the more trouble Insertion Sort will have when you deal with big data sets of millions or even billions of elements. So you want to keep it low, but not too low, otherwise there is no point in using Insertion Sort at all.

Here’s a different implementation of the sort function which implements all of the above:

If you don’t want to run the code on your machine, feel free to experiment with the insertion merge sort repl. Whether you run the example on your machine or inside the repl, you will see that we trivially and easily sorted 1 million records.

As I said before, both Merge Sort and, in this case, Insertion Merge Sort are very predictable, meaning we can expect roughly the same execution time for most data sets.

# Performance & Benchmarks

There’s no better proof than numbers & benchmarks, which show approximately how the algorithms would perform in the real world. These are the results I got on my personal MacBook Pro 2019 machine; yours might differ:

```
goos: darwin
goarch: amd64
pkg: github.com/algorithms-go/sorting/sort
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkInsertionSort-16                   21        52535921 ns/op
BenchmarkMergeSort-16                      747         1608912 ns/op
BenchmarkMergeSortFunc-16                  705         1614946 ns/op
BenchmarkInsertionMergeSortFunc-16         915         1293846 ns/op
PASS
```

Pay attention to the first number, which is how many times the function was executed in a small time window (bigger is better), and also to the second number, which shows how many nanoseconds per operation it took (smaller is better).

For benchmark tests have a look at the benchmarks repl, where there are regular tests, as well as the exact benchmarks I ran on my machine. Or feel free to download the code directly from the repo and run them on your machine. Enjoy.

Again, these results will vary from machine to machine, but generally the tests execute without issues on most machines, including single core test servers like repl.it or the Go Playground. Feel free to run the tests and the benchmarks on your own, even inside the repl’s console tab, and check the results for yourself.

# Merge Sort Facts

As I say in every one of my tutorials, there is really no perfect tool, solution or algorithm that does it all. You really have to use the right tool for the right problem, or I should say: use the right sorting algorithm for the right situation.

That’s why, before we wrap this up, let’s explore a couple of facts about the Merge Sort algorithm, so that you know when it is best to use it and when not:

1. Merge Sort is Efficient & Very Fast for big data sets
2. Merge Sort Time Complexity is: O(n*log(n))
3. Merge Sort Space Complexity is: O(n)

Here are a couple of facts about the Insertion Sort algorithm, in case you did not check out the article or video above:

1. Insertion Sort is Efficient for small data sets
2. Insertion Sort Best Time Complexity is: O(n)
3. Insertion Sort Worst/Average Time Complexity is: O(n*n)
4. Insertion Sort Space Complexity is: O(1)

Knowing the above facts helps us identify & diagnose the sorting problem and gives us an idea of how to proceed with sorting and what kind of sorting algorithm to use.

# Conclusion

There is no all-in-1 solution, just like there is no algorithm which works best in every case. That’s why I want to wrap up this article by saying: always choose the right algorithm for the right situation. It’s never one versus the other, it’s one or the other; in fact, most of the time problems require hybrid implementations and a mix of a bunch of algorithms.

# Resources

Make sure to check out the GitHub repo for all the presentation notes, animations, coding examples and more. Also make sure to check out this nice cheatsheet that very well explains the Time Complexity of most Sorting Algorithms & not only.

# Support Me

I recently gave up my job and made YouTube and online content creation my full-time job. If you like what I do, find value in it and want to be part of my dream, feel free to support me on Patreon or with a small PayPal donation.

I’m a passionate self taught Software Engineer, who also happens to be a YouTuber @SteveHook. https://www.youtube.com/c/SteveHook