Binary search, revisited

Binary search with confidence

In the previous post on binary search, we wrote the following binary search implementation and justified its correctness:

def binary_search(array, is_green):
    left, right = 0, len(array) - 1
    if not array:
        return 0
    if not is_green(array[left]):
        return 0
    if is_green(array[right]):
        return len(array)

    # Main loop which narrows our search range.
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(array[middle]):
            left = middle
        else:
            right = middle

    return right

# Call as such:
binary_search(array, lambda x: x < 6)

A shift in perspective

I want to change the parameters slightly: instead of passing an array, let’s try passing in left and right indices. Since we no longer have access to array inside the binary_search function, we have to modify the is_green parameter to take in an index instead of a value.

def binary_search(left, right, is_green):
    assert left <= right

    if not is_green(left):
        # Entire interval is red; the first red element is left itself.
        return left
    if is_green(right):
        # Entire interval is green, pretend one past the end is red.
        return right + 1

    # Main loop which narrows our search range.
    # Invariant: is_green(left) and not is_green(right).
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle

    return right

# Usage.
binary_search(0, len(array) - 1, lambda i: array[i] < 6)

Why make this change? Many references say that binary search is an algorithm for finding the position of a target value in an array. But now our new binary_search function doesn’t even take an array anymore. Why differ from “convention”?

The generalization

Writing binary search as a function that takes a range (a low and high value) and a function reveals the generalization: binary search is an algorithm that finds the boundary between contiguous sections of “green” and “red” elements in some range.

Notice that the above definition doesn’t mention an array at all. As long as we have a known range and the requisite structure on that range (all green elements, followed by all red elements), we can use binary search.

A small example

Let’s use binary search to find the first power of two larger than 1 million. That is, we want to find an integer n such that 2^n > 1,000,000 but 2^(n−1) ≤ 1,000,000.

Consider an integer n ≥ 0. If 2^n ≤ 1,000,000, let’s call n green. Otherwise, 2^n > 1,000,000, so let’s call n red. We could then write is_exp_green as follows:

def is_exp_green(n):
    return (2 ** n) <= 1_000_000

Recall the structure that we saw in the previous article: we can perform binary search on ranges where we have green elements, then red elements, in that order. This structure is monotonic: once the array changes from green to red, it stays red. Is is_exp_green also monotonic?

Yes! First, we know that green elements must be preceded by green elements: Suppose that for some n, n is green; that is, 2^n ≤ 1,000,000. Then, for any m with m < n, it must be the case that 2^m < 2^n ≤ 1,000,000. So m must also be green.

We can make a similar argument that all red elements must be followed by more red elements. Thus, we have a region of green, followed by a region of red.
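We can also check this green-then-red shape numerically over our range. (This is just a sanity check of the structure, not a proof; is_exp_green is repeated here so the snippet runs standalone.)

```python
def is_exp_green(n):
    return (2 ** n) <= 1_000_000

# All True (green) flags must come before all False (red) flags.
flags = [is_exp_green(n) for n in range(101)]
assert flags == sorted(flags, reverse=True)  # True sorts before False
print(flags.index(False))  # 20: the index of the first red element
```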

Finally, we know that 0 is green (2^0 = 1 ≤ 1,000,000). Let’s guess that 100 is red. So we can call binary search (Godbolt):

binary_search(0, 100, is_exp_green)  # 20

And it works! Here, binary search helps us search for the first false value of is_exp_green. That function definition, plus upper and lower bounds, are all we need. In other words, to run binary search we don’t need a literal array of values to look in; instead, we can also binary search over this “virtual” array that contains powers of two. At no point do we construct a concrete array; instead, all we need to do is repeatedly ask whether we are before or after the green-to-red boundary. That is exactly what is_exp_green does.
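Putting the pieces together, the whole example fits in one standalone snippet (binary_search here is the index-based version from this post):

```python
def binary_search(left, right, is_green):
    # Find the first "red" index in [left, right], assuming all
    # green indices come before all red ones.
    assert left <= right
    if not is_green(left):
        return left          # Entire interval is red.
    if is_green(right):
        return right + 1     # Entire interval is green.
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle
    return right

def is_exp_green(n):
    return (2 ** n) <= 1_000_000

n = binary_search(0, 100, is_exp_green)
print(n)       # 20
print(2 ** n)  # 1048576, the first power of two past 1,000,000
```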

Let’s try to use binary search to solve an interview question.

A delicious chocolate bar with almonds

Suppose you have a chocolate bar with almonds. The chocolate bar can be broken up at multiple grooves. The region between grooves has a certain number of almonds:

Chocolate bar, to be split into three parts.
The first region has 6 almonds, the second has 3, and so on.

[Figure: 6 | 3 | 2 | 8 | 7 | 5]

Your job is to break up the chocolate bar into p parts along the grooves. You have p − 1 friends, who all love almonds, so they will take the p − 1 parts with the most almonds. That leaves you with the part with the fewest almonds. How can you divide the chocolate bar to maximize the number of almonds you get? (You aren’t allowed to reorder regions: you must break the chocolate at exactly p − 1 points.) In the pictures below, p = 3: you and two friends.

Chocolate bar divided into 3 partitions.
The first partition has the fewest almonds (9).
So you end up with only 9 almonds. Can you do better?

[Figure: 6 3 | 2 8 | 7 5]

Read on for a solution.


The main idea

Another way to state the problem is: find the largest value of k such that we can give everyone at least k almonds by partitioning the chocolate bar.

Concretely, it is clear that with the above chocolate bar, we can give everyone five or more almonds (with some almonds left over)!

Everyone gets 5 almonds in their piece. There are 16 almonds left over!

[Figure: the bar split into three pieces, each with at least 5 almonds]

And it is also clear that with the above chocolate bar, it’s impossible to give all three of you 31 almonds each: that’s how many there are in the entire bar!

One (very lucky) person gets all 31 almonds!

[Figure: the whole bar handed over as a single piece]

Our problem has a monotonic structure: if it’s possible to give everyone k almonds, then it must be possible to give everyone k − 1 almonds. And if it’s not possible to give everyone k almonds, then it’s also not possible to give everyone k + 1 almonds.

Because it is possible to give all three at least 7 almonds…

[Figure: 6 3 | 2 8 | 7 5, pieces of 9, 10, and 12 almonds, each at least 7]

It must also be possible to give all three at least 6 almonds.

[Figure: the same split; every piece also has at least 6 almonds]

It is not possible to give all three at least 11 almonds.

[Figure: no split of 6 | 3 | 2 | 8 | 7 | 5 gives every piece at least 11 almonds]

Therefore, it is not possible to give all three at least 12 almonds.

[Figure: 6 | 3 | 2 | 8 | 7 | 5]

You can imagine that we have a “virtual” array. Entry k of the array corresponds to attempting to give everyone k almonds. On the left, we have a region where we can give everyone k almonds. On the right, we have a region where we cannot give everyone k almonds.

We can give everyone 0, 1, 2, … almonds. We can’t give everyone 31, 30, 29, … almonds. When do we transition from can to can’t?

0   1   2   ···  ?  ···  29  30  31
G   G   G   ···  ?  ···  R   R   R

How can we find the boundary between the two regions? Binary search!

What should our is_green(k) function do? It could try to break apart the chocolate, ensuring that each part has at least k almonds. If it’s possible to create p parts each with at least k almonds, it returns true. Otherwise, it returns false.

Then we can use our binary search implementation above to find the smallest value of k for which we cannot divide the chocolate bar. Then k − 1 is the largest value for which we can divide the chocolate bar, and that is exactly the maximum number of almonds we can keep for ourselves.
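As a sketch, the whole solution might look like this. (The names can_split and max_guaranteed_almonds are mine, not from any library; can_split plays the role of is_green. The greedy check is exact: cut off a piece as soon as it reaches k almonds, and if that yields at least p pieces, merging any leftover regions into the last piece keeps every piece at or above k.)

```python
def can_split(regions, p, k):
    # Greedily cut off a piece as soon as it reaches k almonds.
    # If we can form at least p such pieces, we can also form exactly
    # p pieces each with >= k almonds.
    pieces, almonds = 0, 0
    for r in regions:
        almonds += r
        if almonds >= k:
            pieces += 1
            almonds = 0
    return pieces >= p

def max_guaranteed_almonds(regions, p):
    # Binary search over the "virtual" array of thresholds k.
    # k = 0 is green (always splittable); k = sum + 1 is red.
    left, right = 0, sum(regions) + 1
    while left + 1 < right:
        middle = (left + right) // 2
        if can_split(regions, p, middle):
            left = middle
        else:
            right = middle
    return left  # The largest green k.

print(max_guaranteed_almonds([6, 3, 2, 8, 7, 5], 3))  # 9
```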

Suppose there are A total almonds in the bar, and the bar has S sections. If we brute-force checked every threshold k from 0 to A, our algorithm would take O(A · S) time, which is exponential in the size of the input (the number A takes only about log A bits to write down). On the other hand, the binary search algorithm only takes O(S · log A) time, which is polynomial in the input.

Decision problems and complexity theory

I am (obviously) not the first person to have noticed this generalization of binary search. There are a few competitive programming articles that teach binary search from a similar perspective.

Another place where this variant of binary search appears is when “reducing” optimization problems to their decision variants. Very informally, reducing a problem A to another problem B means that you solve problem A by calling problem B one or more times 1.

We saw that above with the chocolates problem: in order to solve the chocolate optimization problem (“maximize the number of almonds I receive”), we instead reduced it to the decision problem (“can I give myself at least k almonds?”) using calls to is_green. Binary search helped us accomplish the reduction with as few calls to is_green as possible. In general, binary search can help us quickly solve many optimization problems that have decision variants with a similar monotonic structure.

It turns out that binary search can, in very many cases, show that “decision” problems (return a feasible/infeasible answer) are just as hard as their “optimization” counterparts (return the minimum/maximum feasible value). For example, a famous problem in computer science is the travelling salesman problem; its decision version is often called TSP-DECIDE. The usual statement of TSP-DECIDE is: given some cities and roads (each with a travel cost) connecting the cities, is it possible to visit every city exactly once and return to the original city with total cost at most c? Notice that this is a decision problem: the answer is either true or false. And it also has the same structure that we can use for binary search: if it’s possible to make such a trip with cost at most c, it is by definition possible with cost at most c + 1; and if it’s impossible with cost at most c, it must also be impossible with cost at most c − 1.

So, consider an optimization variant of TSP-DECIDE, which we will call TSP-OPT: given a list of cities and roads, what is the cost of the shortest tour that visits every city exactly once and returns to the original city? We can solve this with calls to TSP-DECIDE: pick some lower bound lo and some upper bound hi on the cost and binary search between them. In other words, even though TSP-OPT seems intuitively “much harder” than TSP-DECIDE, in reality, they are of approximately equal difficulty: they differ by at most a log(hi − lo) factor, which is small compared to the time required to run either TSP-DECIDE or TSP-OPT.
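A sketch of that reduction, assuming integer costs and a monotonic decide function (min_feasible_cost is my own name; the lambda below is a toy stand-in for TSP-DECIDE, not a real solver):

```python
def min_feasible_cost(decide, lo, hi):
    # Find the smallest integer c in [lo, hi] with decide(c) == True,
    # assuming decide is monotonic (all False, then all True).
    # Note the colors are flipped relative to the article:
    # here "green" (False) means infeasible, "red" (True) feasible.
    assert decide(hi)
    if decide(lo):
        return lo
    # Invariant: decide(lo) is False, decide(hi) is True.
    while lo + 1 < hi:
        middle = (lo + hi) // 2
        if decide(middle):
            hi = middle
        else:
            lo = middle
    return hi

# Toy stand-in: pretend the shortest tour costs exactly 42.
print(min_feasible_cost(lambda c: c >= 42, 0, 1000))  # 42
```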

Footnotes

  1. Not a precise definition, but the details aren’t important here.