Floating Point Binary Search
floats, programming, ruby
Photo by Jan Antonin Kolar on Unsplash.
(This post was inspired by this other post that I thought was super cool.)
Binary Search
Binary searching is awesome. It lets you find a target among a potentially huge search space or list with relatively few comparisons, and it's something that we do in real life (more or less). Imagine searching for the name "Hagrid" in a phone book: You open up roughly to the middle and you see the name "Malfoy." That's further through the alphabet than you want, so you open up the book more or less in the middle of the first half of the phone book. Then you land at "Granger," so you open up somewhere between "Granger" and "Malfoy," and so on.
The key requirement for binary searching is that the list you're searching through must be sorted.
Ruby lets you perform a binary search on ranges using Range#bsearch. The method takes a block that should evaluate to false when it receives a value less than the target and true when the value is greater than or equal to the target. For example, to find the (positive) square root of 1000:
(0..1000).bsearch do |n|
  n * n >= 1000
end
#=> 32
Oh wait... That's not right. 32 * 32 = 1024. What's going on?
OIC. The range (0..1000) in Ruby only includes the integers, so the binary search will find the first integer greater than or equal to our answer.
Hm. Well, can Ruby do a binary search on a range of floats?
(0.0 .. 1000.0).bsearch do |n|
  n * n >= 1000
end
#=> 31.622776601683793
There we go. That's nice.
Anyway, another cool thing is exponential search.
Exponential Search
Exponential search is similar to binary search, but it's used when we don't have an upper bound (or when we're just too lazy to find an upper bound ourselves... because programmers are like that sometimes). And, luckily for this blog post, Ruby is cool enough to support unbounded ranges, so here's what it looks like:
(0.0..).bsearch do |n|
  n * n >= 1000
end
#=> 31.622776601683793
(Yeah, the method is still called bsearch, because it does basically the same thing, it just uses a different algorithm under the hood. Convenient for remembering, but I guess not strictly accurate.)
The key difference here is that exponential search finds an upper bound for us since we haven't given it one. It does this by guessing an upper bound, and then doubling it until the upper bound is bigger than our target (in Ruby's case, until the block evaluates to true). We can see that happen under the hood by printing some debugging statements.
(0..).bsearch do |n|
  puts "Testing #{n}"
  n * n >= 1000
end
# Testing 1
# Testing 2
# Testing 4
# Testing 8
# Testing 16
# Testing 32
# Testing 16
# Testing 24
# Testing 28
# Testing 30
# Testing 31
# Testing 32
# => 32
Ruby starts with 1 and doubles it until it reaches 32. With 32 the block returns true, so Ruby knows it's found an upper limit. From there, Ruby performs a regular binary search on the range (0..32).
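This doubling-then-bisecting behavior is easy to sketch by hand. Here's a minimal toy version over the non-negative integers (my own sketch, not Ruby's actual implementation, and exponential_search is a made-up name):

```ruby
# A hand-rolled sketch of exponential search over the non-negative
# integers (not Ruby's actual implementation).
def exponential_search
  hi = 1
  hi *= 2 until yield(hi)  # double until the block says true...
  lo = hi / 2              # ...so the previous guess was still too small
  # From here it's an ordinary binary search between lo and hi.
  while lo < hi
    mid = (lo + hi) / 2
    yield(mid) ? hi = mid : lo = mid + 1
  end
  lo
end

puts exponential_search { |n| n * n >= 1000 } #=> 32
```

The probes it makes (1, 2, 4, 8, 16, 32, then bisecting back down) match the debugging output above.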
Oh, but this is integers again, so the answer isn't exact. Here's the same thing with an unbounded floating-point range:
(0.0 ..).bsearch do |n|
  puts "Testing #{n}"
  n * n >= 1000
end
# Testing 1.5
# Testing 1.6759759912428246e+154
# Testing 1.5921412270130977e+77
# Testing 4.8915590244884904e+38
# Testing 2.7093655358260904e+19
# Testing 6375342080.0
# Testing 97792.0
# { snip 55 lines }
# Testing 31.622776601683793
# => 31.622776601683793
So here Ruby starts with 1.5, and then... Wait a sec. That's not doubling. The next value is 1.6e154? That's huge. What's going on?
Ruby & Unbounded Floating-point Ranges
Well, it turns out that Ruby doesn't have a maximum integer value, so it has to use the exponential algorithm there, but it does have a maximum Float value: Float::MAX #=> 1.7976931348623157e+308. So when performing a binary search on an unbounded floating-point range, maybe Ruby just sticks the maximum float value in there? If that's right, then the same binary search on (0.0..Float::MAX) should yield the same debugging output:
(0.0 .. Float::MAX).bsearch do |n|
  puts "Testing #{n}"
  n * n >= 1000
end
# Testing 1.4999999999999998
# Testing 1.6759759912428243e+154
# Testing 1.5921412270130974e+77
# Testing 4.89155902448849e+38
# Testing 2.70936553582609e+19
# Testing 6375342079.999999
# Testing 97791.99999999999
# { snip 55 lines }
# Testing 31.62277660168379
# => 31.622776601683793
Hm. Close, but this is starting with 1.49999..., and the first one started with 1.5. We're slightly off. Maybe Ruby just throws infinity in there?
(0.0 .. Float::INFINITY).bsearch do |n|
  puts "Testing #{n}"
  n * n >= 1000
end
# Testing 1.5
# Testing 1.6759759912428246e+154
# Testing 1.5921412270130977e+77
# Testing 4.8915590244884904e+38
# Testing 2.7093655358260904e+19
# Testing 6375342080.0
# Testing 97792.0
# { snip 55 lines }
# Testing 31.622776601683793
# => 31.622776601683793
There we go.
So, when using unbounded floating-point ranges Ruby skips exponential search and just throws infinity in as the upper limit. Then it follows the binary search algorithm, where it checks the midpoint of the min and max values and... Wait. How is there a meaningful midpoint between zero and infinity? And how is that midpoint 1.5?
Well... When we used Float::MAX instead of Float::INFINITY the midpoint was still close to 1.5; it was 1.4999.... So according to Ruby the maximum Float and infinity are pretty close together.
Not sure what's going on here. Let's take a small tangent.
Tangent: Stubborn Binary Searches
Here's another thought: what happens when we perform a search with a block that always, stubbornly, returns false?
(0.0 ..).bsearch do |n|
  puts "Testing #{n}"
  false
end
# Testing 1.5
# Testing 1.6759759912428246e+154
# Testing 1.7465828538382976e+231
# Testing 5.613129393316443e+269
# Testing 1.004985507425625e+289
# Testing 4.25098019208419e+298
# { snip 52 lines (I'm keeping several trailing lines so you can see the values are not doubling) }
# Testing 1.797693134862313e+308
# Testing 1.7976931348623145e+308
# Testing 1.7976931348623153e+308
# Testing 1.7976931348623157e+308
# Testing Infinity
# => nil
Okay, so the algorithm looks similar to whatever is happening above: it starts at 1.5 (for whatever reason), and that's not big enough, so it jumps up to bigger and bigger numbers until it gets to infinity and gives up. Notably, the "jumping" happens in a binary-search pattern, not an exponential search pattern. We can tell because the numbers are not doubling as with exponential search; instead, they are getting closer and closer together until they reach infinity.
And when we search on the integers?
(0..).bsearch do |n|
  puts "Testing #{n}"
  false
end
# Testing 1
# Testing 2
# Testing 4
# Testing 8
# Testing 16
# Testing 32
# Testing 64
# Testing 128
# { snip 220 lines }
# Testing 431359146674410236714672241392314090778194310760649159697657763987456
# Testing 862718293348820473429344482784628181556388621521298319395315527974912
# Testing 1725436586697640946858688965569256363112777243042596638790631055949824
# Testing 3450873173395281893717377931138512726225554486085193277581262111899648
# Testing 6901746346790563787434755862277025452451108972170386555162524223799296
# Testing 13803492693581127574869511724554050904902217944340773110325048447598592
# Testing 27606985387162255149739023449108101809804435888681546220650096895197184
# Testing 55213970774324510299478046898216203619608871777363092441300193790394368
# { ... Continuing until I kill the program... }
(I let the program continue until the numbers were over 20,000 digits long.)
In Ruby integers and floats are fundamentally different: there is no limit to the size of an integer, but there is a limit to the size of floats.
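A quick way to see that asymmetry directly (a small sketch; any recent Ruby will do):

```ruby
# Integers are arbitrary-precision: this just works, with no overflow.
big = 2 ** 10_000
puts big.bit_length #=> 10001

# Floats are fixed-width 64-bit doubles: past Float::MAX the
# arithmetic overflows to Infinity.
puts Float::MAX     #=> 1.7976931348623157e+308
puts Float::MAX * 2 #=> Infinity
```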
Back to the 1.5 Mystery: Floating-point Representation
So now we need to look into Ruby's internals: if we want to understand why Ruby starts the (0.0 .. Float::INFINITY) search at 1.5 we need to know how Ruby stores and thinks about those numbers under the hood. Nothing very mysterious is happening when using Ruby's integers (I can't find leaky abstractions, anyway), but the floats are another story.
From Ruby's Float documentation, "Float objects represent inexact real numbers using the native architecture's double-precision floating point representation."
Since basically all normal computers use the IEEE 754 standard for representing doubles, that must be what my computer is using. So let's look at the structure for a 64-bit double in IEEE 754.
Sign Bit (1) | Exponent Bits (11) | Significand bits (52)
0 | 10000000100 | 0110011000000000000000000000000000000000000000000000
Simple explanation: this is scientific notation, but made to be efficient for computers.
- The sign bit is 0 for positive and 1 for negative.
- The exponent bits are "biased" so that they don't have to deal with a sign; instead, they're all shifted up by 1023.
- The significand is the trailing fractional digits of a binary number (and there is an implicit leading 1.).
That was probably confusing. Let's walk through the number in my ASCII table above.
The sign bit is 0, so we know the number is positive.
The exponent is 10000000100, or in decimal 1028. Except it's been biased by 1023, so subtract that to get 5 for our exponent.
The significand is 0110011000000000000000000000000000000000000000000000, but we need to add the leading 1. to get 1.0110011. This evaluates in base-10 to 1.3984375.
Putting this all together we get +1.3984375 * 2^5 or 44.75.
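We can double-check that decoding in Ruby itself by unpacking the raw bits of 44.75 (pack's "G" directive gives the big-endian IEEE 754 bytes, and "B64" turns them into a bit string):

```ruby
bits = [44.75].pack('G').unpack1('B64') # 64 bits, most significant first

sign        = bits[0]      # "0" -> positive
exponent    = bits[1, 11]  # "10000000100"
significand = bits[12, 52] # "0110011000...0"

e = exponent.to_i(2) - 1023             # un-bias: 1028 - 1023 = 5
m = 1 + significand.to_i(2) / 2.0 ** 52 # add the implicit leading 1.
puts m * 2 ** e #=> 44.75
```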
(Check out this excellent site for playing with these calculations---where I double-checked my work---and the writer's post with a much better explanation of doubles.)
That was probably more detail than we strictly need to solve the mystery of where Ruby gets 1.5.
Sorry, I got distracted by how cool IEEE 754 is.
Anyway, doubles have 64 bits. What happens when we interpret them as an unsigned 64-bit integer?
Type Punning
(Apparently, that's what this is called.)
Integer bits (64)
0100000001000110011000000000000000000000000000000000000000000000
By squashing all the digits of our previous float together we get a 64-digit binary number. In this case, the number in base-10 is 4631494819913400320. What happens when we increment this number by one and re-interpret its bits as a double? Well, the bits of the significand are incremented by one, so we get the new double:
Sign Bit (1) | Exponent Bits (11) | Significand bits (52)
0 | 10000000100 | 0110011000000000000000000000000000000000000000000001
(Note the 1 at the end of the significand bits.)
Skipping over the math, this works out to the next float after 44.75: 44.75000000000000710542735760100185871124267578125.
And that's pretty cool.
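This reinterpretation is easy to reproduce in Ruby with pack and unpack1 ("D" packs a native double and "Q" unpacks the same 8 bytes as a native 64-bit unsigned integer; the helper names here are my own):

```ruby
# Reinterpret a double's 8 bytes as an unsigned 64-bit integer...
def float_to_bits(f)
  [f].pack('D').unpack1('Q')
end

# ...and back again.
def bits_to_float(i)
  [i].pack('Q').unpack1('D')
end

n = float_to_bits(44.75)
puts n                    #=> 4631494819913400320
puts bits_to_float(n + 1) # the very next float after 44.75
```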
In general, we get this neat little property when interpreting doubles as unsigned integers:
- Incrementing an integer will increment its double (away from zero).
- Adjacent doubles have adjacent integers.
There is a catch at zero, though. IEEE 754 defines two zeros: positive and negative. The positive zero has all bits set to 0; the negative zero is identical, except its sign bit is 1. This is convenient: incrementing from positive zero lets us increment through the positive floats, and incrementing from negative zero lets us increment through the negative floats. However, note that our rules above are broken here: +0 and -0 would seem to be adjacent, but they do not have adjacent integers. So our rules only hold for floats of the same sign.
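The two zeros are easy to see with the same bit-reinterpretation trick (float_to_bits is a made-up helper that reinterprets a double's 8 bytes as an unsigned 64-bit integer):

```ruby
def float_to_bits(f)
  [f].pack('D').unpack1('Q')
end

puts float_to_bits(0.0)  #=> 0
puts float_to_bits(-0.0) #=> 9223372036854775808 (2**63: only the sign bit is set)
```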
For now, let's not worry about negative numbers. Instead, let's use this "type punning" to finally solve Ruby's 1.5 mystery.
Use Type Punning
Using type punning we can re-interpret the range (0.0 .. Float::INFINITY) as a range of integers: (0..9218868437227405312). This integer range has a definite midpoint: divide the max by 2 to get 4609434218613702656, and reinterpret as a float to get: 1.5. 🎉
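Ruby will happily confirm the arithmetic (float_to_bits and bits_to_float are made-up helpers that reinterpret a double's 8 bytes as an unsigned 64-bit integer and back):

```ruby
def float_to_bits(f)
  [f].pack('D').unpack1('Q')
end

def bits_to_float(i)
  [i].pack('Q').unpack1('D')
end

hi = float_to_bits(Float::INFINITY)
puts hi                    #=> 9218868437227405312
puts bits_to_float(hi / 2) #=> 1.5
```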
So we've figured out where Ruby gets 1.5, but why does Ruby use 1.5? Why use type punning like this? Why interpret the unbounded range (0.0 ..) as (0.0 .. Float::INFINITY) when we could interpret it as (0.0 .. Float::MAX)? Then the midpoint is easy to find: divide Float::MAX by 2 to get 8.988465674311579e+307, and the binary search algorithm can work like normal.
Let's write our own binary search method to test this.
def bsearch(a, b)
  loop do
    return b if a >= b
    mid = a / 2.0 + b / 2.0 # (a + b) / 2.0 might overflow, so do the division step first
    yield(mid) ? b = mid : a = mid
  end
end
puts bsearch(0.0, Float::MAX) { |n| n * n >= 1000 }
Alrighty, that was simple. And when we run the program we get... Um. Hmm. It's stuck in an infinite loop.
Let's add some logging.
def bsearch(a, b)
  counter = 0
  loop do
    return b if a >= b
    counter += 1
    puts "[#{counter}] bsearch(#{a}, #{b})"
    mid = a / 2.0 + b / 2.0 # (a + b) / 2.0 might overflow, so do the division step first
    yield(mid) ? b = mid : a = mid
  end
end
puts bsearch(0.0, Float::MAX) { |n| n * n >= 1000 }
# [1] bsearch(0.0, 1.7976931348623157e+308)
# [2] bsearch(0.0, 8.988465674311579e+307)
# [3] bsearch(0.0, 4.4942328371557893e+307)
# [4] bsearch(0.0, 2.2471164185778946e+307)
# [5] bsearch(0.0, 1.1235582092889473e+307)
# [6] bsearch(0.0, 5.6177910464447366e+306)
# [7] bsearch(0.0, 2.8088955232223683e+306)
# [8] bsearch(0.0, 1.4044477616111841e+306)
# [9] bsearch(0.0, 7.022238808055921e+305)
# { snip a few }
# [1070] bsearch(31.622776601683764, 31.622776601683793)
# [1071] bsearch(31.62277660168378, 31.622776601683793)
# [1072] bsearch(31.622776601683785, 31.622776601683793)
# [1073] bsearch(31.62277660168379, 31.622776601683793)
# [1074] bsearch(31.62277660168379, 31.622776601683793)
# [1075] bsearch(31.62277660168379, 31.622776601683793)
# { continues printing this line forever }
What's going on? The expression for mid is somehow evaluating to exactly a, so when mid is assigned back to a its value doesn't change. That's not right. mid should not be either a or b; it should be their midpoint... OIC. Unless a and b are adjacent floats. There is no float between adjacent floats. Okay, well, let's change our return condition.
def bsearch(a, b)
  counter = 0
  loop do
    counter += 1
    puts "[#{counter}] bsearch(#{a}, #{b})"
    mid = a / 2.0 + b / 2.0 # (a + b) / 2.0 might overflow, so do the division step first
    return mid if mid == a || mid == b
    yield(mid) ? b = mid : a = mid
  end
end
puts bsearch(0.0, Float::MAX) { |n| n * n >= 1000 }
# [1] bsearch(0.0, 1.7976931348623157e+308)
# [2] bsearch(0.0, 8.988465674311579e+307)
# [3] bsearch(0.0, 4.4942328371557893e+307)
# [4] bsearch(0.0, 2.2471164185778946e+307)
# { skip a few }
# [1071] bsearch(31.62277660168378, 31.622776601683793)
# [1072] bsearch(31.622776601683785, 31.622776601683793)
# [1073] bsearch(31.62277660168379, 31.622776601683793)
# => 31.62277660168379
And there we go, the same answer as above. So why doesn't Ruby do that? Why go to the effort of converting floats to integers?
Well, we didn't count how many searches Ruby's binary search had to do. Let's count that!
counter = 0
(0.0 .. Float::MAX).bsearch do |n|
  counter += 1
  puts "[#{counter}] Testing #{n}"
  n * n >= 1000
end
# [1] Testing 1.4999999999999998
# [2] Testing 1.6759759912428243e+154
# [3] Testing 1.5921412270130974e+77
# [4] Testing 4.89155902448849e+38
# { snip 55 lines }
# [61] Testing 31.622776601683785
# [62] Testing 31.622776601683793
# [63] Testing 31.62277660168379
# => 31.622776601683793
Ah. So Ruby's implementation is more efficient: it can find the answer in 63 searches, but our naive implementation needs 1073 searches.
And that makes sense: the double type has 64 bits, so an optimal binary search algorithm should be able to find a single value within 64 searches, because each search cuts the space in half.
The naive implementation pretends that the distribution of doubles along the number line is even, but there are the same number of floats between 0 and 1.5 as there are between 1.5 and Float::INFINITY. And that's why the type punning technique is awesome: it makes it super easy to find the central float between two others.
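To close the loop, here's a sketch of a bsearch that bisects the bit representations instead of the values (my own toy version for non-negative floats only, not Ruby's actual source; the helper names are made up). Because each probe halves the number of candidate floats rather than the numeric interval, it needs at most 64 probes:

```ruby
def float_to_bits(f)
  [f].pack('D').unpack1('Q')
end

def bits_to_float(i)
  [i].pack('Q').unpack1('D')
end

# Binary search over the bit patterns: find the smallest non-negative
# float for which the block returns true.
def punning_bsearch(a, b)
  lo = float_to_bits(a)
  hi = float_to_bits(b)
  while lo < hi
    mid = (lo + hi) / 2
    yield(bits_to_float(mid)) ? hi = mid : lo = mid + 1
  end
  bits_to_float(lo)
end

puts punning_bsearch(0.0, Float::INFINITY) { |n| n * n >= 1000 }
```

The very first probe here is bits_to_float((0 + 9218868437227405312) / 2), which is exactly the 1.5 that kicked off this whole mystery.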
Note: Yes, it's a long and winding post on a site called Less Noise. Is winding noise? "Journey before destination."