In the previous post on binary search, we wrote the following binary search implementation and justified its correctness:
```python
def binary_search(array, is_green):
    left, right = 0, len(array) - 1
    if not array:
        return 0
    if not is_green(array[left]):
        return 0
    if is_green(array[right]):
        return len(array)
    # Main loop which narrows our search range.
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(array[middle]):
            left = middle
        else:
            right = middle
    return right

# Call as such:
binary_search(array, lambda x: x < 6)
```
A shift in perspective
I want to change the parameters slightly: instead of passing an array, let’s try passing in left and right indices. Since we no longer have access to `array` inside the `binary_search` function, we have to modify the `is_green` parameter to take an index instead of a value.
```python
def binary_search(left, right, is_green):
    assert left <= right
    if left == right:
        # 0 length interval; return one past the end.
        return left + 1
    if not is_green(left):
        # Entire interval is red, return the first red (left).
        return left
    if is_green(right):
        # Entire interval is green, pretend one past the end is red.
        return right + 1
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle
    return right

# Usage.
binary_search(0, len(array) - 1, lambda i: array[i] < 6)
```
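As a quick sanity check, here is the index-based version run on a small, hypothetical sorted array (the array and the threshold 6 are just illustrative):

```python
def binary_search(left, right, is_green):
    # The index-based binary_search from above.
    assert left <= right
    if left == right:
        return left + 1
    if not is_green(left):
        return left
    if is_green(right):
        return right + 1
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle
    return right

# Hypothetical data: "green" means "value is less than 6".
array = [1, 3, 5, 6, 8, 10]
print(binary_search(0, len(array) - 1, lambda i: array[i] < 6))  # 3, the index of the first value >= 6
```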
Why make this change?
Many references say that binary search is an algorithm for finding the position of a target value in an array. But now our new `binary_search` function doesn’t even take an array anymore. Why differ from “convention”?
The generalization
Writing binary search as a function that takes a range (a low and high value) and a function reveals the generalization: binary search is an algorithm that finds the boundary between contiguous sections of “green” and “red” elements in some range.
Notice that the above definition doesn’t mention an array at all. As long as we have a known range and the requisite structure on that range (all green elements, followed by all red elements), we can use binary search.
A small example
Let’s use binary search to find the first power of two larger than 1 million. That is, we want to find the smallest integer n such that 2^n > 1,000,000. Consider `is_exp_green` as follows:

```python
def is_exp_green(n):
    return (2 ** n) <= 1_000_000
```
Recall the structure that we saw in the previous article: we can perform binary search on arrays where we have green elements, then red elements, in that order. Does `is_exp_green` have the correct structure? Yes. First, we know that green elements must be preceded by green elements: suppose n is green, i.e. 2^n ≤ 1,000,000; then 2^(n-1) = 2^n / 2 ≤ 1,000,000, so n - 1 is green as well. We can make a similar argument that all red elements must be followed by more red elements. Thus, we have a region of green, followed by a region of red.

Finally, we know that 0 is green (2^0 = 1 ≤ 1,000,000), and an exponent like 100 is certainly red, so:

```python
binary_search(0, 100, is_exp_green)  # 20
```
And it works! Here, binary search helps us search for the first false value of `is_exp_green`. That function definition, plus upper and lower bounds, is all we need. In other words, to run binary search we don’t need a literal array of values to look in; instead, we can also binary search over this “virtual” array that contains powers of two. At no point do we construct a concrete array; all we need to do is repeatedly ask whether we are before or after the boundary, which is exactly what `is_exp_green` does.
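Putting the pieces together into one runnable snippet (repeating the `binary_search` from above so it is self-contained):

```python
def binary_search(left, right, is_green):
    # The binary_search from above.
    assert left <= right
    if left == right:
        return left + 1
    if not is_green(left):
        return left
    if is_green(right):
        return right + 1
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle
    return right

def is_exp_green(n):
    # Green while 2^n has not yet exceeded one million.
    return (2 ** n) <= 1_000_000

n = binary_search(0, 100, is_exp_green)
print(n, 2 ** n)  # 20 1048576
```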
Let’s try to use binary search to solve an interview question.
A delicious chocolate bar with almonds
Suppose you have a chocolate bar with almonds. The bar can be broken apart at any of its grooves, and each region between consecutive grooves contains some number of almonds. Your job is to break the bar into a given number of contiguous pieces, one per person sharing it; you keep the piece with the fewest almonds. How should you break the bar so as to maximize the number of almonds you get?
Read on for a solution.
The main idea
Another way to state the problem is: find the largest value of k such that the bar can be broken into the required number of pieces with every piece containing at least k almonds.

Under this formulation, the problem now has a structure on which we can apply binary search: if it’s possible to give everyone at least k almonds, then it’s certainly also possible to give everyone at least k - 1 almonds.

So suppose we have a “virtual” array. Each entry k is green if we can break the bar so that every piece has at least k almonds, and red otherwise:

```
k:      0  1  2  …  ???  …  29  30  31
color:  G  G  G  …   ?   …   R   R   R
```
How can we find the boundary between the two regions? Binary search!
What should our `is_green(k)` function do? It could try to break apart the chocolate, ensuring that each piece has at least k almonds, and check whether that yields enough pieces.

Then we can use our binary search implementation above to find the smallest value of k for which we cannot divide the chocolate bar. The value one smaller is then the largest k for which we can divide the bar, and that is exactly the number of almonds we can guarantee for ourselves!
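One possible implementation, with made-up almond counts and piece count for illustration: `can_split` greedily cuts off a piece as soon as it accumulates at least k almonds. This is a valid feasibility check because the pieces must be contiguous, and any extra or leftover regions can always be merged into a neighboring piece without shrinking it below k.

```python
def binary_search(left, right, is_green):
    # The binary_search from above.
    assert left <= right
    if left == right:
        return left + 1
    if not is_green(left):
        return left
    if is_green(right):
        return right + 1
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle
    return right

def can_split(almonds, pieces, k):
    # Greedily cut off a piece once it has at least k almonds;
    # feasible if we end up with at least `pieces` pieces.
    count, current = 0, 0
    for region in almonds:
        current += region
        if current >= k:
            count += 1
            current = 0
    return count >= pieces

almonds = [3, 1, 4, 1, 5, 9, 2, 6]  # made-up almond counts per region
pieces = 4                          # made-up number of people sharing

# Smallest k for which splitting is impossible...
first_bad = binary_search(0, sum(almonds) + 1,
                          lambda k: can_split(almonds, pieces, k))
# ...so the best guaranteed minimum piece is one less.
print(first_bad - 1)  # 6
```

Here the best we can do is pieces like [3, 1, 4], [1, 5], [9], [2, 6], whose smallest piece has 6 almonds.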
Decision problems and complexity theory
I am (obviously) not the first person to have noticed this generalization of binary search. There are a few competitive programming articles that teach binary search from a similar perspective.
Another place where this variant of binary search appears is when “reducing” optimization problems to their decision variants. Very informally, reducing a problem A to a problem B means solving A by using a solver for B. We saw that above with the chocolates problem: in order to solve the chocolate optimization problem (find the largest feasible k), we repeatedly answered its decision variant (is a particular k feasible?).
It turns out that binary search can, in very many cases, show that “decision” problems (return a feasible/infeasible answer) are just as hard as their “optimization” counterparts (return the minimum/maximum feasible value).

For example, a famous problem in computer science is the travelling salesman problem (TSP-DECIDE). The usual statement of TSP-DECIDE is: given some cities and roads (each with a travel cost) connecting the cities, is it possible to visit every city exactly once and return to the original city with total cost under some budget c?

Now consider an optimization variant of TSP-DECIDE, which we will call TSP-OPT: given a list of cities and roads, what is the cost of the shortest route that visits every city exactly once and returns to the original city? We can solve TSP-OPT with binary search, using a TSP-DECIDE solver as our is_green function.
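To make the reduction concrete, here is a toy sketch. The 4-city cost matrix is made up for illustration, and the TSP-DECIDE solver is deliberately brute-force (no efficient one is known). Small budgets are “green” (no tour fits within them), large budgets are “red” (some tour fits), and the boundary between the two is exactly the optimal tour cost.

```python
from itertools import permutations

def binary_search(left, right, is_green):
    # The binary_search from above.
    assert left <= right
    if left == right:
        return left + 1
    if not is_green(left):
        return left
    if is_green(right):
        return right + 1
    while left + 1 < right:
        middle = (left + right) // 2
        if is_green(middle):
            left = middle
        else:
            right = middle
    return right

def tsp_decide(cost, budget):
    # Brute force: is there a tour starting and ending at city 0,
    # visiting every city exactly once, with total cost <= budget?
    n = len(cost)
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        total = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        if total <= budget:
            return True
    return False

def tsp_opt(cost):
    # Any tour uses one outgoing edge per city, so this bounds its cost.
    upper = sum(max(row) for row in cost)
    # Green = budget too small for any tour; first red budget = optimum.
    return binary_search(0, upper, lambda c: not tsp_decide(cost, c))

# A made-up symmetric cost matrix for 4 cities.
cost = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(tsp_opt(cost))  # 23, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```

Note the reduction direction: each call to `binary_search` makes only O(log upper) calls to `tsp_decide`, so an efficient TSP-DECIDE solver would immediately give an efficient TSP-OPT solver.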
Footnotes

Not a precise definition, but the details aren’t important here. ↩