Adam Zerner

Overview of heaps

There's a lot that we can say about heaps. Let's start with discussing how they can be used to implement a priority queue.

Priority queue

Imagine you're a credit card company. You have a security division whose job is to check transactions for fraud.

You have a constant stream of transactions coming in. Millions and millions of them.

$30  $225  $2  $8  $6000...

But your security team is only big enough to check a couple hundred transactions. What do you do? You prioritize.

You decide to start with the largest transaction and work your way down from there. It's ok if there's a fraudulent charge for $1.49, but not if there's one for $250,000, so you want to address the largest charges first.

One way you can do that would be to store all of the charges in an array, and keep the array sorted. Let's do that with the stream we have above.

[ 2, 8, 30, 225, 6000 ]

Cool! Now our agents can just pop from the end of the array in O(1) time when they have a transaction they want to verify.
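
In code, that's just a pop:

const transactions = [ 2, 8, 30, 225, 6000 ];

transactions.pop(); // 6000, in O(1)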

But remember, we have a constant stream of new transactions coming in. Each time we have a new transaction, we have to insert it into the array. And how long does that take? O(n) time. Imagine that a transaction comes in for $1. We'd want to insert it at index 0, but to do that, we have to scoot everything else over to the right.

// original array
[ 2, 8, 30, 225, 6000 ]

// scoot to the right
[ null, 2, 8, 30, 225, 6000 ]

// insert new element
[ 1, 2, 8, 30, 225, 6000 ]
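
In code, that insert might look something like this sketch (insertSorted is just a hypothetical helper name):

// find where the new value belongs, then splice it in;
// the splice shifts everything after it one slot to the
// right, which is the O(n) part
function insertSorted(arr, val) {
  let i = 0;

  while (i < arr.length && arr[i] < val) {
    i++;
  }

  arr.splice(i, 0, val);

  return arr;
}

insertSorted([ 2, 8, 30, 225, 6000 ], 1); // [ 1, 2, 8, 30, 225, 6000 ]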

O(n) isn't terrible, but since we have so many transactions, it's starting to give us trouble. Can we do better? Yes. It turns out that we can get O(log n) by using a data structure called a heap.

Heap

A heap is just a special type of tree.

1) It's a binary tree, so each node has at most two children. Well, some variants might have three or four children, but we'll talk more about that later.

2) It's a complete tree. Like this:

This means that every level, except possibly the last, is completely filled, and all nodes are as far left as possible. The below diagram would violate those rules due to the brown nodes. But in the above diagram, H, I, J, K and L are ok.

3) Most importantly, children are less than or equal to their parent. So as you go from top to bottom, the values never get bigger.
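
To make condition #3 concrete, here's one valid max heap you could build from the transaction amounts earlier (one of several valid arrangements), written as nested nodes:

// 6000 sits at the root; every child is less than or equal
// to its parent, and the bottom row fills in left to right
const heap = {
  val: 6000,
  left: {
    val: 225,
    left: { val: 2, left: null, right: null },
    right: { val: 8, left: null, right: null },
  },
  right: { val: 30, left: null, right: null },
};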

Insertion

Ok, that's straightforward enough, right? But how does this get us our O(log n) insertion time? Well, to answer that, we have to talk about how insertion into this special tree would work.

Our heap starts off like this:

Now imagine that we want to insert 6500. We'd start off inserting it into the first available slot on the bottom row. By "first available" I mean that we fill it in from left to right.

But by doing that, we've screwed up our heap. More specifically, we've violated condition #3 by having a child that is greater than its parent.

That's ok. It just means that we have to clean it up.

And how do we clean it up? We "swim" upwards until it's fixed. So here, since 6500 is greater than its parent 225, we swap them.

We're making progress. Do we want to continue to "swim" upwards? Yes. 6500 is still greater than its parent. Let's swap again.

Now that we've reached the top, we're done. Condition #3 is now satisfied again. All children are less than their parents.

This approach of swimming upwards happens in O(log n) time. Since we're making sure that our tree is complete, we know for sure that the height will be log(n). So even with a million elements in the heap, a swim takes at most about 20 swaps.

I think this is a little bit clearer when you look at a bigger input size.

The array on the bottom shows an O(n) insert. All of the elements have to be scooted over to the right.

The heap on the top shows an O(log n) insert. See how the purple path is much shorter than the purple path in the array on the bottom? That's why we use a heap to implement a priority queue instead of an array.

Removal

We're talking about a queue here, so removal only happens at one end. Ours is a priority queue, so removal will be happening at the top, whereas if it were a normal queue, removal would happen at the front. This means that we don't have to worry about removing random elements, just the top element. Here's how that works.

Let's start here. We're going to remove 6500.

First we go to the bottom row. Then we go all the way to the right and get the 250 node.

Then we replace 6500 with this value of 250, and get rid of the 250 node.

But now we've broken our invariant again (it's called an invariant because it's not supposed to vary). Children aren't supposed to be larger than their parents, but 6000 is larger than 250. So we have to correct this. We have to re-establish our invariant. We have to make sure that all children are less than their parents.

How do we do this? It's actually very similar to what we did before, just in the opposite direction: we "sink" downwards until our invariant has been re-established. Since 250 is less than 6000, we swap them.

And now our invariant has been re-established, so we're done! If it hadn't been, we'd have to continue sinking down until it was.

Notice that this is O(log n) as well.

Representation

We're using a binary tree as our datastore. Binary trees consist of nodes that look like this:

{
  val: null,
  left: null,
  right: null
}

Usually. There is actually another way to store binary tree data other than in nodes like the one above: we can use an array.

I know, I know. That's really counterintuitive. But here, I'll show you how it works.

That tree starting at root A could be stored in an array where index 0 is empty (makes the math easier), index 1 stores A, index 2 stores B, etc.

To go from a node to its left child, you do 2i, where i is its index. Right child is 2i + 1. Parent is floor(i/2). So for node C at index 3, the left child is at index 6, the right child is at index 7, and the parent is at index 1.

Using an array to store the elements in the tree has the advantage of providing better locality of reference. However, the downside is that it has the potential to waste memory.

See how indexes 4 and 5 are empty?
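
Here's a sketch of what that kind of (non-complete) tree looks like flattened into an array. The letters beyond A, B, and C are made up for illustration:

// A is the root, B and C are its children, and C has two
// children of its own (hypothetically called D and E here);
// indexes 4 and 5 are wasted because B has no children
const tree = [ null, 'A', 'B', 'C', null, null, 'D', 'E' ];

// navigating the tree is just index arithmetic
tree[2 * 3];             // 'D' (left child of C, which is at index 3)
tree[2 * 3 + 1];         // 'E' (right child of C)
tree[Math.floor(3 / 2)]; // 'A' (parent of C)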

We won't run into this problem though, because our binary tree is complete. So using an array to store the elements is a good choice for us.

Size

Sometimes it makes sense to have a fixed-size priority queue. Our credit card situation from before is a perfect example of that. We have millions of transactions coming in but only enough capacity to process a couple hundred of them. So for our priority queue, it doesn't need to hold more than a couple hundred items. Storing millions of items in memory would be wasteful, so we can just "throw away" the ones that don't fit.

Other times we want to fit an arbitrary number of elements. In that case you'd have to deal with resizing the array. (In JavaScript, arrays grow automatically; in a language like C or Java, you'd allocate a bigger array and copy the elements over when it fills up.)

Interlude: Min heap vs max heap

We've been saying that the values should get smaller as you start at the root and work your way down. That children should be smaller than their parents. But it doesn't have to be like this. It could be the opposite.

When children are smaller than their parents, like we've been assuming, it's called a max heap. But when it's the opposite, it's called a min heap. You'd want to use a min heap when smaller items have higher priorities. Like with, say, golf scores.

Interlude: Heap memory vs heap the data structure

By "heap memory", I'm referring to this (explained well here):

If you haven't heard of this "heap memory" before, don't worry about it. You can just skip this section; I don't want to confuse you.

But if you have heard of it, you may be wondering what it has to do with the heap data structure we've been talking about. The name is the same, but they seem like pretty different things.

It turns out that they have nothing to do with each other. It's just a bad accident that they have the same name, just like it is with "left" (past tense of leave) and "left" (opposite of right). The "heap" in "heap memory" is using the term in the sense of "a heap of dirty laundry", not in reference to the heap data structure.

Implementation

Now that we've gone over the array representation of our binary tree, we can move past the high-level approach and look at how the code actually works. Here's my stab at it:

class PriorityQueue {
  constructor() {
    this._data = [null]; // we're starting at index 1 so we put a null placeholder in index 0
  }
  _getParentIndex(index) {
    return Math.floor(index / 2);
  }
  _getLeftChildIndex(index) {
    return 2 * index;
  }
  _getRightChildIndex(index) {
    return 2 * index + 1;
  }
  insert(val) {
    // add the new value to the first available slot in the
    // bottom row, then swim it up into place
    this._data.push(val);
    this._swim(this._data.length - 1);
  }
  _swim(i) {
    let parentIndex = this._getParentIndex(i);

    // while the node is bigger than its parent, swap them
    while (i > 1 && this._isLess(parentIndex, i)) {
      this._swap(parentIndex, i);
      i = parentIndex;
      parentIndex = this._getParentIndex(i);
    }
  }
  _isLess(i, j) {
    return this._data[i] < this._data[j];
  }
  _swap(i, j) {
    const temp = this._data[i];
    this._data[i] = this._data[j];
    this._data[j] = temp;
  }
  remove() {
    if (this._data.length <= 1) {
      return undefined; // the queue is empty
    }

    const first = this._data[1];
    const last = this._data.pop();

    // move the last element up to the root and sink it back
    // down, unless the element we just popped was the root
    if (this._data.length > 1) {
      this._data[1] = last;
      this._sink(1);
    }

    return first;
  }
  _sink(i) {
    // while the node has at least one child...
    while (2 * i < this._data.length) {
      let indexOfLargerChild = this._getIndexOfLargerChild(i);

      // ...swap it with its larger child if it's smaller
      if (this._isLess(i, indexOfLargerChild)) {
        this._swap(i, indexOfLargerChild);
        i = indexOfLargerChild;
      } else {
        break;
      }
    }
  }
  _getIndexOfLargerChild(i) {
    const indexOfLeftChild = this._getLeftChildIndex(i);
    const indexOfRightChild = this._getRightChildIndex(i);

    if (indexOfRightChild >= this._data.length) {
      return indexOfLeftChild; // there is no right child
    } else if (this._isLess(indexOfLeftChild, indexOfRightChild)) {
      return indexOfRightChild;
    } else {
      return indexOfLeftChild;
    }
  }
}
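
And here it is in action, with the transaction stream from earlier:

const pq = new PriorityQueue();

[ 30, 225, 2, 8, 6000 ].forEach((amount) => pq.insert(amount));

pq.remove(); // 6000 (the largest charge comes out first)
pq.remove(); // 225
pq.remove(); // 30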

Interlude: Multiway heaps

We've been using a binary tree. Y'know, where parents can have two children. But it doesn't have to be this way. You could also do it with three children. Or four children. Or d children.

The reason you'd want to do this is that it spreads the tree out horizontally and makes for a shorter height (log_d(n) instead of log_2(n)), which means that there's less distance for you to sink or swim. The downside is that sink takes longer to find the largest child, because instead of checking two children, it has to check d of them.
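
If you're curious anyway, here's a rough sketch of how the index math from before generalizes to d children per node. It assumes the same 1-based array layout, with index 0 unused:

// parent of the node at index i in a d-ary heap
function getParentIndex(i, d) {
  return Math.floor((i - 2) / d) + 1;
}

// first of the (up to) d children of the node at index i;
// the rest sit in the d - 1 slots right after it
function getFirstChildIndex(i, d) {
  return d * (i - 1) + 2;
}

getParentIndex(5, 2);     // 2 (matches Math.floor(5 / 2))
getFirstChildIndex(3, 2); // 6 (matches 2 * 3)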

Don't get too bogged down in this though. It's an advanced topic that you probably don't need to know about. It's just good to understand that it's a possibility and to have a high-level idea about what the trade-offs are.

Heapsort

How can we utilize this data structure we've been talking about to get sorting in O(n log n) time? You may want to spend a little bit of time thinking about it. The answer is pretty fun and simple.

Suppose we have the following array that we want to sort:

[ 30, 225, 2, 8, 6000 ]

Let's go through each element, one by one, and insert it into a heap. We end up here:

How long did it take us to get there? Well, each insert was O(log n), and we performed n inserts, so we're at O(n log n).

Now that we have our heap, let's remove the max element and put it into a new array. (To end up with ascending order, we fill the new array from the back.)

Now let's do it again. And again. And again. Do that n times, and boom: you have a sorted array!

Due to similar reasoning, this removal process is also O(n log n). So in total, all of this took O(n log n) time.

That wasn't too bad, right? Just insert each array element into a heap and then remove each element from that heap, placing it into a new array. Now we have another O(n log n) sorting algorithm.

In practice there are optimizations you could do, but this is how it works at a high level.
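
Here's a minimal sketch of that approach, reusing the PriorityQueue class from above. (The standalone heapsort function is just for illustration; real implementations usually build the heap in place.)

function heapsort(arr) {
  const pq = new PriorityQueue();

  // n inserts, each O(log n)
  arr.forEach((val) => pq.insert(val));

  // n removals, each O(log n); remove() hands back the max,
  // so we fill the result array from the back to end up
  // with ascending order
  const result = new Array(arr.length);

  for (let i = arr.length - 1; i >= 0; i--) {
    result[i] = pq.remove();
  }

  return result;
}

heapsort([ 30, 225, 2, 8, 6000 ]); // [ 2, 8, 30, 225, 6000 ]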


If you have any thoughts, I'd love to discuss them over email: adamzerner@protonmail.com.

If you'd like to subscribe, you can do so via email or RSS feed.

#code