Binary (min) heap  

Type  binary tree/heap  
Invented  1964  
Invented by  J. W. J. Williams  

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues.^{[1]}^{: 162–163 } The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort.^{[2]}
A binary heap is defined as a binary tree with two additional constraints:^{[3]}

- Shape property: a binary heap is a complete binary tree; that is, all levels of the tree, except possibly the last one (deepest), are fully filled, and, if the last level of the tree is not complete, the nodes of that level are filled from left to right.
- Heap property: the key stored in each node is either greater than or equal to (≥) or less than or equal to (≤) the keys in the node's children, according to some total order.
Heaps where the parent key is greater than or equal to (≥) the child keys are called max-heaps; those where it is less than or equal to (≤) are called min-heaps. Efficient (logarithmic time) algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest or largest element from a min-heap or max-heap, respectively. Binary heaps are also commonly employed in the heapsort sorting algorithm, which is an in-place algorithm because binary heaps can be implemented as an implicit data structure, storing keys in an array and using their relative positions within that array to represent child–parent relationships.
Both the insert and remove operations modify the heap to conform to the shape property first, by adding or removing from the end of the heap. Then the heap property is restored by traversing up or down the heap. Both operations take O(log n) time.
To insert an element to a heap, we perform the following steps:

1. Add the element to the bottom level of the heap at the leftmost open space.
2. Compare the added element with its parent; if they are in the correct order, stop.
3. If not, swap the element with its parent and return to the previous step.
Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with its parent, are called the up-heap operation (also known as bubble-up, percolate-up, sift-up, trickle-up, swim-up, heapify-up, cascade-up, or fix-up).
The number of operations required depends only on the number of levels the new element must rise to satisfy the heap property. Thus, the insertion operation has a worst-case time complexity of O(log n). For a random heap, and for repeated insertions, the insertion operation has an average-case complexity of O(1).^{[4]}^{[5]}
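The insertion steps can be sketched in Python for a 0-indexed, array-backed max-heap (the helper names are illustrative, not part of the text; the values are chosen to be consistent with the worked example that follows):

```python
def sift_up(heap, i):
    """Up-heap: move the element at index i toward the root until the
    max-heap property is restored."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:
            break  # parent is already larger: heap property holds
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

def insert(heap, item):
    """Append at the first open position (shape property), then sift up."""
    heap.append(item)
    sift_up(heap, len(heap) - 1)

heap = [11, 5, 8, 3, 4]  # a valid max-heap
insert(heap, 15)
print(heap)  # [15, 5, 11, 3, 4, 8]
```

The new element 15 is appended as a child of 8, swaps past 8 and then past 11, ending at the root.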
As an example of binary heap insertion, say we have a max-heap
and we want to add the number 15 to the heap. We first place the 15 in the position marked by the X. However, the heap property is violated since 15 > 8, so we need to swap the 15 and the 8. So, we have the heap looking as follows after the first swap:
However, the heap property is still violated since 15 > 11, so we need to swap again:
which is a valid max-heap. There is no need to check the left child after this final step: at the start, the max-heap was valid, meaning the root was already greater than its left child, so replacing the root with an even greater value will maintain the property that each node is greater than its children (11 > 5; if 15 > 11, and 11 > 5, then 15 > 5, because of the transitive relation).
The procedure for deleting the root from the heap (effectively extracting the maximum element in a max-heap or the minimum element in a min-heap) while retaining the heap property is as follows:

1. Replace the root of the heap with the last element on the last level.
2. Compare the new root with its children; if they are in the correct order, stop.
3. If not, swap the element with one of its children and return to the previous step. (Swap with its smaller child in a min-heap and its larger child in a max-heap.)
Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with one of its children, are called the down-heap (also known as bubble-down, percolate-down, sift-down, sink-down, trickle-down, heapify-down, cascade-down, fix-down, extract-min or extract-max, or simply heapify) operation.
So, if we have the same max-heap as before
We remove the 11 and replace it with the 4.
Now the heap property is violated since 8 is greater than 4. In this case, swapping the two elements, 4 and 8, is enough to restore the heap property and we need not swap elements further:
The downward-moving node is swapped with the larger of its children in a max-heap (in a min-heap it would be swapped with its smaller child), until it satisfies the heap property in its new position. This functionality is achieved by the MaxHeapify function as defined below in pseudocode for an array-backed heap A of length length(A). A is indexed starting at 1.
// Perform a down-heap or heapify-down operation for a max-heap
// A: an array representing the heap, indexed starting at 1
// i: the index to start at when heapifying down
MaxHeapify(A, i):
    left ← 2×i
    right ← 2×i + 1
    largest ← i

    if left ≤ length(A) and A[left] > A[largest] then:
        largest ← left
    if right ≤ length(A) and A[right] > A[largest] then:
        largest ← right

    if largest ≠ i then:
        swap A[i] and A[largest]
        MaxHeapify(A, largest)
For the above algorithm to correctly re-heapify the array, no nodes besides the node at index i and its two direct children can violate the heap property. The down-heap operation (without the preceding swap) can also be used to modify the value of the root, even when an element is not being deleted.
In the worst case, the new root has to be swapped with its child on each level until it reaches the bottom level of the heap, meaning that the delete operation has a time complexity relative to the height of the tree, or O(log n).
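Root deletion can be sketched in Python for a 0-indexed, array-backed max-heap (a sketch; the function names are illustrative, and max_heapify is a 0-indexed variant of the MaxHeapify pseudocode above):

```python
def max_heapify(heap, i):
    """Down-heap: sink the element at index i until the max-heap
    property is restored."""
    n = len(heap)
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def extract_max(heap):
    """Move the last element to the root, then down-heap it."""
    root = heap[0]
    last = heap.pop()
    if heap:
        heap[0] = last
        max_heapify(heap, 0)
    return root

heap = [11, 5, 8, 3, 4]
top = extract_max(heap)
print(top, heap)  # 11 [8, 5, 4, 3]
```

As in the worked example, the last element 4 replaces the root 11 and a single swap with the larger child 8 restores the heap property.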
Inserting an element and then extracting from the heap can be done more efficiently than simply calling the insert and extract functions defined above, which would involve both an up-heap and a down-heap operation. Instead, we can do just a down-heap operation, as follows:

1. Compare whether the item we are pushing or the peeked top of the heap is greater (assuming a max-heap).
2. If the root of the heap is greater: replace the root with the new item, then down-heapify starting from the root.
3. Else, return the item we are pushing.
Python provides such a function for insertion then extraction called "heappushpop", which is paraphrased below.^{[6]}^{[7]} The heap array is assumed to have its first element at index 1.
// Push a new item to a (max) heap and then extract the root of the resulting heap.
// heap: an array representing the heap, indexed at 1
// item: an element to insert
// Returns the greater of the two between item and the root of heap.
PushPop(heap: List<T>, item: T) -> T:
    if heap is not empty and heap[1] > item then:  // < if min-heap
        swap heap[1] and item
        _downheap(heap starting from index 1)
    return item
A similar function can be defined for popping and then inserting, which in Python is called "heapreplace":
// Extract the root of the heap, and push a new item.
// heap: an array representing the heap, indexed at 1
// item: an element to insert
// Returns the current root of heap
Replace(heap: List<T>, item: T) -> T:
    swap heap[1] and item
    _downheap(heap starting from index 1)
    return item
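Python's heapq module (which maintains a min-heap) exposes both of these combined operations directly; a quick illustration:

```python
import heapq

h = [1, 3, 5]
heapq.heapify(h)

# heappushpop: push, then pop the minimum. If the pushed item is
# already <= the root, it is returned without touching the heap.
a = heapq.heappushpop(h, 0)  # 0 is smaller than the root, returned as-is
b = heapq.heappushpop(h, 4)  # 4 is pushed, the old minimum 1 is popped

# heapreplace: pop the current minimum first, then push the new item,
# even if the new item would become the new minimum.
c = heapq.heapreplace(h, 0)  # returns 3; 0 becomes the new root
print(a, b, c, h[0])  # 0 1 3 0
```

Note the asymmetry: heappushpop may return the pushed item itself, while heapreplace always returns the pre-existing root.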
Finding an arbitrary element takes O(n) time.
Deleting an arbitrary element can be done as follows:

1. Find the index i of the element to delete.
2. Swap this element with the last element, and remove the last element.
3. Down-heapify or up-heapify to restore the heap property (in a max-heap, down-heapify if the moved element is smaller than the deleted one, up-heapify if it is larger).
The decrease key operation replaces the value of a node with a given value with a lower value, and the increase key operation does the same but with a higher value. This involves finding the node with the given value, changing the value, and then down-heapifying or up-heapifying to restore the heap property.
Decrease key can be done as follows:

1. Find the node whose key we want to decrease.
2. Decrease its value.
3. Down-heapify (assuming a max-heap) to restore the heap property.
Increase key can be done as follows:

1. Find the node whose key we want to increase.
2. Increase its value.
3. Up-heapify (assuming a max-heap) to restore the heap property.
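For a known index in a 0-indexed, array-backed max-heap, both operations can be combined into one update routine (a sketch; the function name is illustrative):

```python
def update_key(heap, i, new_value):
    """Change heap[i] in a 0-indexed max-heap, then restore the heap
    property: an increased key may move up, a decreased key down."""
    old_value = heap[i]
    heap[i] = new_value
    if new_value > old_value:  # increase-key: up-heapify
        while i > 0 and heap[(i - 1) // 2] < heap[i]:
            parent = (i - 1) // 2
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
    else:  # decrease-key: down-heapify
        n = len(heap)
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < n and heap[left] > heap[largest]:
                largest = left
            if right < n and heap[right] > heap[largest]:
                largest = right
            if largest == i:
                break
            heap[i], heap[largest] = heap[largest], heap[i]
            i = largest

h = [11, 5, 8, 3, 4]
update_key(h, 0, 2)   # decrease the root's key; it sinks
print(h)  # [8, 5, 2, 3, 4]
update_key(h, 4, 12)  # increase a leaf's key; it rises to the root
print(h)  # [12, 8, 2, 3, 5]
```

The finding step is the expensive part: as noted above, locating an arbitrary element takes O(n) time, while the sift itself is O(log n).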
Building a heap from an array of n input elements can be done by starting with an empty heap, then successively inserting each element. This approach, called Williams' method after the inventor of binary heaps, is easily seen to run in O(n log n) time: it performs n insertions at O(log n) cost each.^{[a]}
However, Williams' method is suboptimal. A faster method (due to Floyd^{[8]}) starts by arbitrarily putting the elements on a binary tree, respecting the shape property (the tree could be represented by an array, see below). Then starting from the lowest level and moving upwards, sift the root of each subtree downward as in the deletion algorithm until the heap property is restored. More specifically, if all the subtrees starting at some height h have already been "heapified" (the bottommost level corresponding to h = 0), the trees at height h + 1 can be heapified by sending their root down along the path of maximum-valued children when building a max-heap, or minimum-valued children when building a min-heap. This process takes O(h) operations (swaps) per node. In this method most of the heapification takes place in the lower levels. Since the height of the heap is ⌊log n⌋, the number of nodes at height h is at most ⌈n / 2^{h+1}⌉. Therefore, the cost of heapifying all subtrees is:

Σ_{h=0}^{⌈log n⌉} ⌈n / 2^{h+1}⌉ O(h) = O(n Σ_{h=0}^{⌈log n⌉} h / 2^{h+1}) = O(n Σ_{h=0}^{∞} h / 2^{h}) = O(n)
This uses the fact that the infinite series Σ_{h=0}^{∞} h / 2^{h} converges (its value is 2).
The exact value of the above (the worst-case number of comparisons during the heap construction) is known to be equal to:

2n − 2 s_{2}(n) − e_{2}(n),
where s_{2}(n) is the sum of all digits of the binary representation of n and e_{2}(n) is the exponent of 2 in the prime factorization of n.
The average case is more complex to analyze, but it can be shown to asymptotically approach 1.8814 n − 2 log_{2}n + O(1) comparisons.^{[10]}^{[11]}
The BuildMaxHeap function that follows converts an array A which stores a complete binary tree with n nodes to a max-heap by repeatedly using MaxHeapify (down-heapify for a max-heap) in a bottom-up manner. The array elements indexed by floor(n/2) + 1, floor(n/2) + 2, ..., n are all leaves for the tree (assuming that indices start at 1)—thus each is a one-element heap, and does not need to be down-heapified. BuildMaxHeap runs MaxHeapify on each of the remaining tree nodes.
BuildMaxHeap(A):
    for each index i from floor(length(A)/2) downto 1 do:
        MaxHeapify(A, i)
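The same bottom-up construction translates directly to Python for a 0-indexed max-heap (a sketch; max_heapify here is a 0-indexed variant of the MaxHeapify pseudocode above):

```python
def max_heapify(a, i):
    """Sink a[i] in a 0-indexed max-heap stored in the list a."""
    n = len(a)
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_max_heap(a):
    """Run max_heapify on every non-leaf node, from the last internal
    node (index n//2 - 1 when 0-indexed) up to the root."""
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i)

a = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
build_max_heap(a)
print(a)  # [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
```

Because leaves need no work and a node at height h costs only O(h) swaps, this builds the heap in O(n) total time, unlike repeated insertion.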
Heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, the parent and children of each node can be found by arithmetic on array indices. These properties make this heap implementation a simple example of an implicit data structure or Ahnentafel list. Details depend on the root position, which in turn may depend on constraints of a programming language used for implementation, or programmer preference. Specifically, sometimes the root is placed at index 1, in order to simplify arithmetic.
Let n be the number of elements in the heap and i be an arbitrary valid index of the array storing the heap. If the tree root is at index 0, with valid indices 0 through n − 1, then each element a at index i has:

- children at indices 2i + 1 and 2i + 2
- its parent at index floor((i − 1)/2).
Alternatively, if the tree root is at index 1, with valid indices 1 through n, then each element a at index i has:

- children at indices 2i and 2i + 1
- its parent at index floor(i/2).
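These index relations can be written as small helper functions (a sketch covering both conventions; the names are illustrative):

```python
# Root stored at index 0 (valid indices 0 .. n-1)
def left0(i):   return 2 * i + 1
def right0(i):  return 2 * i + 2
def parent0(i): return (i - 1) // 2

# Root stored at index 1 (valid indices 1 .. n)
def left1(i):   return 2 * i
def right1(i):  return 2 * i + 1
def parent1(i): return i // 2

# Child and parent round-trip consistently in both conventions:
print(parent0(left0(5)), parent0(right0(5)))  # 5 5
print(parent1(left1(5)), parent1(right1(5)))  # 5 5
```

The root-at-1 convention trades one wasted array slot for slightly simpler arithmetic (a single shift for the parent, no ±1 adjustments for the left child).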
This implementation is used in the heapsort algorithm which reuses the space allocated to the input array to store the heap (i.e. the algorithm is done in-place). This implementation is also useful as a priority queue. When a dynamic array is used, insertion of an unbounded number of items is possible.
The up-heap or down-heap operations can then be stated in terms of an array as follows: suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b−1, b, b+1, ..., e.
Only index i = b−1 can violate the heap property.
Let j be the index of the largest child of a[i] (for a maxheap, or the smallest child for a minheap) within the range b, ..., e.
(If no such index exists because 2i > e then the heap property holds for the newly extended range and nothing needs to be done.)
By swapping the values a[i] and a[j] the heap property for position i is established.
At this point, the only problem is that the heap property might not hold for index j.
The sift-down function is applied tail-recursively to index j until the heap property is established for all elements.
The sift-down function is fast. In each step it only needs two comparisons and one swap. The index value where it is working doubles in each iteration, so that at most log_{2} e steps are required.
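This range form of sift-down is what heapsort uses; a minimal 0-indexed Python sketch (function names are illustrative):

```python
def sift_down(a, start, end):
    """Given that a[start+1 .. end] satisfies the max-heap property,
    extend it to a[start .. end] (0-based; children of i are 2i+1, 2i+2)."""
    i = start
    while 2 * i + 1 <= end:
        j = 2 * i + 1                      # left child
        if j + 1 <= end and a[j + 1] > a[j]:
            j += 1                         # prefer the larger child
        if a[i] >= a[j]:
            return                         # heap property already holds
        a[i], a[j] = a[j], a[i]
        i = j                              # continue (tail-recursively) at j

def heapsort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):  # build the heap bottom-up
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):          # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)

a = [5, 1, 9, 3, 7]
heapsort(a)
print(a)  # [1, 3, 5, 7, 9]
```

The sort is in-place: the shrinking prefix a[0..end] is the heap, and the growing suffix holds the extracted maxima in sorted order.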
For big heaps and using virtual memory, storing elements in an array according to the above scheme is inefficient: (almost) every level is in a different page. B-heaps are binary heaps that keep subtrees in a single page, reducing the number of pages accessed by up to a factor of ten.^{[12]}
The operation of merging two binary heaps takes Θ(n) for equal-sized heaps. The best that can be done is (in the case of an array implementation) simply concatenating the two heap arrays and building a heap of the result.^{[13]} A heap on n elements can be merged with a heap on k elements using O(log n log k) key comparisons, or, in the case of a pointer-based implementation, in O(log n log k) time.^{[14]} An algorithm for splitting a heap on n elements into two heaps on k and n − k elements, respectively, based on a new view of heaps as ordered collections of subheaps was presented in ^{[15]}. The algorithm requires O(log n * log n) comparisons. The view also presents a new and conceptually simple algorithm for merging heaps. When merging is a common task, a different heap implementation is recommended, such as binomial heaps, which can be merged in O(log n).
Additionally, a binary heap can be implemented with a traditional binary tree data structure, but there is an issue with finding the adjacent element on the last level of the binary heap when adding an element. This element can be determined algorithmically or by adding extra data to the nodes, called "threading" the tree—instead of merely storing references to the children, we store the in-order successor of the node as well.
It is possible to modify the heap structure to make the extraction of both the smallest and largest element possible in O(log n) time.^{[16]} To do this, the rows alternate between min-heap and max-heap. The algorithms are roughly the same, but, in each step, one must consider the alternating rows with alternating comparisons. The performance is roughly the same as a normal single-direction heap. This idea can be generalized to a min-max-median heap.
In an arraybased heap, the children and parent of a node can be located via simple arithmetic on the node's index. This section derives the relevant equations for heaps with their root at index 0, with additional notes on heaps with their root at index 1.
To avoid confusion, we define the level of a node as its distance from the root, such that the root itself occupies level 0.
For a general node located at index i (beginning from 0), we will first derive the index of its right child, right = 2i + 2.
Let node i be located in level L, and note that any level l contains exactly 2^{l} nodes. Furthermore, there are exactly 2^{l+1} − 1 nodes contained in the layers up to and including layer l (think of binary arithmetic; 0111...111 = 1000...000 − 1). Because the root is stored at 0, the kth node will be stored at index k − 1. Putting these observations together yields the following expression for the index of the last node in layer l: last(l) = 2^{l+1} − 2.
Let there be j nodes after node i in layer L, such that

i = last(L) − j = (2^{L+1} − 2) − j

Each of these j nodes must have exactly 2 children, so there must be 2j nodes separating i's right child from the end of its layer (L + 1):

right = last(L + 1) − 2j = (2^{L+2} − 2) − 2((2^{L+1} − 2) − i) = 2i + 2

Noting that the left child of any node is always 1 place before its right child, we get left = 2i + 1.
If the root is located at index 1 instead of 0, the last node in each level is instead at index 2^{l+1} − 1. Using this throughout yields left = 2i and right = 2i + 1 for heaps with their root at 1.
Every non-root node is either the left or right child of its parent, so one of the following must hold:

i = 2 × (parent) + 1, or
i = 2 × (parent) + 2.

Hence,

parent = (i − 1)/2, or
parent = (i − 2)/2.

Now consider the expression ⌊(i − 1)/2⌋.
If node i is a left child, this gives the result immediately; however, it also gives the correct result if node i is a right child. In this case, (i − 2) must be even, and hence (i − 1) must be odd, so ⌊(i − 1)/2⌋ = (i − 2)/2 = parent.
Therefore, irrespective of whether a node is a left or right child, its parent can be found by the expression:

parent = ⌊(i − 1)/2⌋
Since the ordering of siblings in a heap is not specified by the heap property, a single node's two children can be freely interchanged unless doing so violates the shape property (compare with treap). Note, however, that in the common arraybased heap, simply swapping the children might also necessitate moving the children's subtree nodes to retain the heap property.
The binary heap is a special case of the d-ary heap in which d = 2.
Here are the time complexities^{[17]} of various heap data structures. Function names assume a min-heap. For the meaning of "O(f)" and "Θ(f)" see Big O notation.
Operation  find-min  delete-min  insert  decrease-key  meld 

Binary^{[17]}  Θ(1)  Θ(log n)  O(log n)  O(log n)  Θ(n) 
Leftist  Θ(1)  Θ(log n)  Θ(log n)  O(log n)  Θ(log n) 
Binomial^{[17]}^{[18]}  Θ(1)  Θ(log n)  Θ(1)^{[c]}  Θ(log n)  O(log n) 
Skew binomial^{[19]}  Θ(1)  Θ(log n)  Θ(1)  Θ(log n)  O(log n)^{[d]} 
Pairing^{[20]}  Θ(1)  O(log n)^{[c]}  Θ(1)  o(log n)^{[c]}^{[e]}  Θ(1) 
Rankpairing^{[23]}  Θ(1)  O(log n)^{[c]}  Θ(1)  Θ(1)^{[c]}  Θ(1) 
Fibonacci^{[17]}^{[24]}  Θ(1)  O(log n)^{[c]}  Θ(1)  Θ(1)^{[c]}  Θ(1) 
Strict Fibonacci^{[25]}  Θ(1)  O(log n)  Θ(1)  Θ(1)  Θ(1) 
Brodal^{[26]}^{[f]}  Θ(1)  O(log n)  Θ(1)  Θ(1)  Θ(1) 
2–3 heap^{[28]}  O(log n)  O(log n)^{[c]}  O(log n)^{[c]}  Θ(1)  ? 