Bug in Binary Search - Computerphile
- Published 30 Nov 2023
- Mike talks through a binary search bug that was undiscovered for years!
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com
Thank you to Jane Street for their support of this channel. Learn more: www.janestreet.com
Just a random little fact: You can use underscores to make numbers more readable in java (for example: 1_000_000.)
It's also a thing in Python 3.6+, JS, Rust, C++14 (though with an apostrophe instead of an underscore), and probably a bunch of others
@@Qbe_Root and C#
Niice.
That will come in handy for a project I'm working on.
Many thanks! :D
🖤
In pretty much any language! Rust, python, C#, etc.
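Since several languages are named above, here is a quick Java sketch (class name mine) showing that the underscores are purely cosmetic and ignored by the compiler:

```java
public class UnderscoreLiterals {
    public static void main(String[] args) {
        // Underscores are stripped by the compiler; they only aid readability.
        int million = 1_000_000;
        long card = 1234_5678_9012_3456L;   // grouping like a credit card number
        int mask = 0b1010_0001;             // works in binary/hex literals too
        System.out.println(million == 1000000);              // true
        System.out.println(card == 1234567890123456L);       // true
        System.out.println(mask == 161);                     // true
    }
}
```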
It's funny how so many comments point out the performance cost of adding one arithmetic operation. They overlook the fact that arithmetic operation between two already loaded registers is almost instant vs the cache miss monster which is accessing the large array at random positions. You won't measure any significant difference.
Yep. That might have been a valid concern 40/50 years ago, but not today
I'm more baffled that those people preferred speed over correctness in a log N algorithm...
@@Herio7 Speed may be a reasonable choice sometimes if it's significant and you can assert that your use case won't be problematic regarding correctness. But even then, speed won't even change by a significant amount here.
True that; it's the people who never measured performance themselves and rely on their (very) limited knowledge of the insanely sophisticated CPU
The people who will spend an hour and a megabyte to save a microsecond and part of a byte
The error is funnier in languages that have unsigned types
The sum/2 will land somewhere inside the array and not throw any errors; it just searches weird places
Takes longer to discover the bug, but it's more fun I promise!
That's exactly the comment, I was going to write. 🤝🖖
The amd64 ISA has hardware overflow flags on registers. So there is saturating arithmetic in some languages that just returns maximum value of the type in the event of a rollover.
YES! It's even sneakier if using unsigned types because the values won't even become negative but wrap around to smaller but still wrong values so that often you don't get page faults but simply wrong answers.
Strictly, it's signed integer overflow that's undefined behaviour in C++; unsigned overflow is defined to wrap around.
There is a LeetCode binary search problem specifically designed to teach you about this.
left + (right - left) / 2 is algebraically equivalent to (left + right) / 2 and avoids overflow. It's a handy identity to memorize for coding interviews.
Doing this before leetcodes
Now try (left & right) + ((left ^ right) >> 1)
Or better yet: (left>>1) + (right>>1)
note that it can still overflow in general.
A/2 + B/2 however can't ( i think )
@@__Just_This_Guy__ have you tested it on two odd numbers
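A minimal Java sketch of the identity discussed above (method and class names mine; the values are chosen so the naive sum overflows a 32-bit int):

```java
public class Midpoint {
    // Naive midpoint: overflows when l + r exceeds Integer.MAX_VALUE.
    static int midNaive(int l, int r) { return (l + r) / 2; }

    // Overflow-safe midpoint: (r - l) never overflows when 0 <= l <= r.
    static int midSafe(int l, int r) { return l + (r - l) / 2; }

    public static void main(String[] args) {
        int l = 1_500_000_000, r = 2_000_000_000;
        System.out.println(midNaive(l, r)); // negative garbage: the sum wrapped around
        System.out.println(midSafe(l, r));  // 1750000000, the real midpoint
    }
}
```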
I love that remark about not having a bug in HIS code because Python protects you from it, while also recognizing this can be a real problem sometimes and teaching us how to avoid it.
Fun fact: l + (r-l)/n is a general formula for equal division where n is the number of subsections. (l+r)/2 is just a special case that happens to work for binary search, but when you move up to ternary search (two division points), (l+r)/3 no longer cuts it and you need the general formula.
Someone please explain this with an example. It looks so obvious but I don’t know what it means 😂
@@scottbeard9603 Let's say L=10 and R=40, and you want to divide into 3 equal pieces instead of 2. The formula (L+R)/3 would tell you to split at (40+10)/3 = 50/3 = 16 (and 33 if you do 2(50/3)). The general formula L + (R-L)/N will tell you to split at 10+30/3 = 20 (and 30 if you do 10+2(30/3)), giving you a split into 3 equal parts.
@@scottbeard9603 since someone else already replied with an example, here's the working-out of why it doesn't work for 3 but does work for 2.
Let's have a look at the case of n = 2:
l + (r-l)/2 = (2l)/2 + (r-l)/2 = (2l + r - l)/2 = (l + r)/2
Now let's look at what happens when you try the same with n = 3:
l + (r-l)/3 = (3l)/3 + (r-l)/3 = (3l + r - l)/3 = (2l + r)/3 != (l+r)/3
in general:
l + (r-l)/n = (nl)/n + (r-l)/n = (nl + r - l)/n = ((n-1) l + r)/n
so technically that way of writing it down does work... as long as you don't omit the n-1
mind you, having the computer work out the division point that way will only get worse as you add more subdivisions; better to keep the numbers small
"Fun fact" -- you obviously don't know the meaning of the word "fun"
Well, I had fun.
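The general formula worked out above can be checked mechanically; a small Java sketch (hypothetical helper `split`, names mine) computing the k-th of n split points:

```java
public class Splits {
    // k-th split point when dividing [l, r] into n equal sections (k = 1..n-1).
    static int split(int l, int r, int n, int k) { return l + k * (r - l) / n; }

    public static void main(String[] args) {
        // (l + r) / 3 would give 16, which is not a third of the way from 10 to 40.
        System.out.println(split(10, 40, 3, 1)); // 20
        System.out.println(split(10, 40, 3, 2)); // 30
        // n = 2 recovers the ordinary binary-search midpoint.
        System.out.println(split(10, 40, 2, 1)); // 25
    }
}
```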
In Java in particular, you could just do: (l+r) >>> 1.
This should give the correct answer even if l+r overflows, by treating the resulting sum as unsigned during the "division by 2" step.
I think you're wrong: If l and r are both 32 bits in size, then l+r will be 33 bits. Where do you store this intermediate upper bit?
@@MichaelFJ1969 That's the thing: They're not 32 bits - they are 31 bits. Java does not have unsigned integers. And because of the properties of the twos complement bit representation used in signed numbers, l+r will have the same representation as if l and r were unsigned - but division does not have this property. Luckily we can do a bitshift to emulate it.
I also know that llvm can be weird about unsigned pointer offsets (or so the rust docs say), so this trick will probably (?) also work in C/C++ - but I'd have to dig into the docs to make sure of that.
That's actually what OpenJDK's (the reference Java implementation) binary search does too.
If anyone's curious what >>> does in Java, it's the unsigned right shift. Right shifting by 1 is the same as dividing by two, but a typical right shift (>>) or division by two will leave the leading bit intact, so if the number was negative, it will stay negative. An unsigned right shift fills the leading bits with zeroes, so when the value is interpreted as a signed value, it's always positive.
Java doesn't have unsigned integers? Wow that is terrible.
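A small Java sketch of the `>>>` trick discussed in this thread (values mine, chosen so the sum wraps negative):

```java
public class UnsignedShift {
    public static void main(String[] args) {
        int l = 2_000_000_000, r = 2_100_000_000;
        int sum = l + r;               // wraps to a negative int
        System.out.println(sum);       // -194967296
        // >> keeps the sign bit, so the "midpoint" stays negative:
        System.out.println(sum >> 1);  // -97483648
        // >>> shifts a zero into the top bit, recovering the unsigned value / 2:
        System.out.println(sum >>> 1); // 2050000000
    }
}
```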
6:13 "when your integer becomes 33 bits" He probably means when it requires 32 bits unsigned or more than 32 bits signed. The 2.1 billion he mentions before is roughly 2^31 so he's mostly talking about the range limits of signed 32-bit integers. Unsigned 32-bit integers go up to over 4 billion.
Let's try to offer an alternative that does what so many seem to be expecting: division before the addition:
m = l/2 + r/2
This breaks when l and r are both odd as there are two individual roundings, for example 3/2+5/2=3 in integer arithmetic, which prevents the algorithm from making progress beyond that point.
What you could do is add the last bit back with some bitwise operations (which boils down to "if both l and r are odd, then add 1 to the result"):
m = l/2 + r/2 + (l & r & 1)
Or just do it as in the video. The /2 will almost certainly be optimized into >>1 by the compiler if it is advantageous on the target cpu.
I'm sure most of us plebs would use the remainder operator to add the remainders back, if for some reason we wanted to complicate the code by making 4 divisions instead of only one.
I planned to use floating point to avoid the bitwise operation and then cast back to int.
@@rabinadk1 Sounds like a huge extra can of worms and also likely a bit slower (although might not be noticeable compared to the cost of memory accesses).
My first idea🤣 Thx for highlighting the problem it would have and for the solution ;0)
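A Java sketch of the per-operand halving fix described above (method name mine; it assumes non-negative indices, since Java's integer division truncates toward zero for negatives):

```java
public class HalvedSum {
    // Midpoint via per-operand halving; (l & r & 1) restores the bit lost
    // when both operands are odd and each division rounds down separately.
    static int mid(int l, int r) { return l / 2 + r / 2 + (l & r & 1); }

    public static void main(String[] args) {
        System.out.println(3 / 2 + 5 / 2); // 3: stuck at l itself, no progress
        System.out.println(mid(3, 5));     // 4: the true midpoint
        System.out.println(mid(2_000_000_000, 2_100_000_000)); // 2050000000, no overflow
    }
}
```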
Our Prof had a rubber shark he called Bruce. It was supposed to remind us of trouble with integers lurking in the deep. I can't remember the exact lesson. Prof Bill Pulling. Great guy.
TBF, when Java was written in the 90s, there weren't too many cases where arrays with 2^30 elements were being used. It's not really too surprising it went undetected for so long.
At the time, Java's demand for 32-bit processing was remarkable and often wasteful. Today, waste has been normalized to the point people get offended if you remark that a 100GB patch is excessive, and Java has been awkwardly forced into smartcards.
In the 90s, engineers would never have done l+r halved and floored; they would have done r-l, a shift, and an indexed address, because any Intel 8086 can do this in the address generation unit almost twice as fast. In fact, some processors would even fuse this instruction sequence.
Programmers should know to check their indexes before arbitrarily throwing them into an array though, that was a problem way back even in the days of C, it’s not a Java-specific bug
@@0LoneTech I always assumed they needed the 32 bit thing because they wanted to do a lot of linked lists or something, which are pretty useless in a 16-bit address space. They sure didn't do it for speed (I hope)!
Before 64 bit operating systems, you couldn't have an array that big anyway.
Integer overflow is how the infamous infinite lives bug on the original Super Mario Bros. for NES works. Lives are stored as a signed value, so if you go over what fits you get weird symbols for your number of lives, and it becomes impossible to lose because of the way the math is set up: you can't subtract from a full negative number and wrap back into the positives, but you can add to a full positive and wrap into the negatives. Since it only checks whether you're equal to 0, not lower, you end up with infinite lives. But it overflows into other important memory and causes other bugs as well.
That's weird that it corrupts other memory. I would have expected lives to be a single unsigned 8 bit variable. Especially since once you get more than 99 the game's print routine indexes out of bounds and starts showing adjacent graphics instead of digits. So obviously the game designers figured 'eh, nobody will get that many extra lives' and didn't bother to check.
I wish I had teachers like you guys on my university. Thank you for making such great quality videos.
Rust has a function for "midpoint between two numbers" for this exact situation somehow.
😂 what
C++ just recently added std::midpoint function as well.
This is called hadd or rhadd (h=half, r=right) in OpenCL, and there's mix() for arbitrary sections of floating point values. It's an ancient issue; compare e.g. Forth's */ word, which conceptually does a widening multiply before a narrowing divide.
It's almost as if this particular operation is remarkably common actually.
@@dojelnotmyrealname4018, it is very common! It's arithmetic average of two values, and any codebase is bound to have more than a few of those. What's not common at all is operating with values that risk overflowing. Especially in 64 bits; with 32 it's much more of a concern.
"That's a Python comment, not a Java comment"
THE PAIN! Every goddamn time!
Appropriate punishment for using end-of-line comment. 😂
The way of explaining this was really good.
This scenario reminds me about calculating the average using floating point values. The first instinct would be just to sum everything then divide by the amount of values, but floats get more imprecise the further from zero they go. So the average might be off if the intermediate sum is too big. A better approach might be going by partial average, since the intermediate values are smaller. But there are other techniques too that go over my head.
I remember one day finding a paper with dozens of pages just detailing how to average floating point numbers, it's one of those problems that at first appear to be simple but are anything but that.
Kahan sum is your friend.
Yep. It's really a matter of "pick your poison".
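For the curious, a minimal Java sketch of the Kahan compensated summation mentioned above (class and method names mine):

```java
public class KahanSum {
    // Kahan summation: carries the low-order bits lost by each
    // floating-point addition in a separate compensation term c.
    static double kahanSum(double[] xs) {
        double sum = 0.0, c = 0.0;
        for (double x : xs) {
            double y = x - c;    // apply the stored correction
            double t = sum + y;  // big + small: low bits of y may be lost...
            c = (t - sum) - y;   // ...recover them algebraically
            sum = t;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] xs = new double[10_000_000];
        java.util.Arrays.fill(xs, 0.1); // 0.1 is not exactly representable
        System.out.println(kahanSum(xs) / xs.length); // very close to 0.1
    }
}
```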
There is a little bit of significance to the number 17: if you ask people to pick a number from one to twenty, apparently 17 is the most popular choice. Plus, if you have an unbalanced twenty-sided die that rolls 17 all the time, people are less likely to realize something's up than with a die that keeps rolling 20, especially since it gets obfuscated by the different modifiers added to the roll, so it's usually never just a 17. Also, it's prime.
wait what why? source?
@@asynchronousongs why would someone go on the internet and confidently spread misinformation 🙃
@@asynchronousongs I have no sources, but it does make sense that 17 is the most "random-looking" number from the group. Evens are roundish, so they're out. Same with multiples of 5. Single digit numbers are too simple. 13 has a reputation for being either lucky or unlucky. 19 is too close to the maximum. 11's two digits match. The only number left is 17. Would be interested to see if someone can find an actual source though.
17 is known as "the least random number", as the Jargon File says.
I was expecting him to choose 42. 😢
I remember coding a binary-search function by hand once (and it was probably susceptible to this edge case). I specifically wanted it to search for a specific value and return its index, OR if the value was ultimately not found, return the index of the first greater-than value (for use as an insertion point). Nothing too complicated technically, but DANG was it frustrating to debug.
Sounds like you should've used a hash table
I am convinced the best teachers are really good communicators
Happened to me a decade ago in my home-built embedded application: an EEPROM with 65536 bytes, using unsigned index variables only, no signed index at all! The overflow wraparound is sufficient to cause that bug; it would have caused an endless loop at startup in the EEPROM search. Luckily I spotted it in a code review before the EEPROM was more than half full. Thanks for that video.
Ho li fuk, that takes me back. I remember EPROMs from my TSR days in the mid-1980s
I was wondering at first why just r/2 + l/2 wouldn't work, but with integers, the parts would get floored separately and that would give wrong answers when both r and l are odd.
I think it should be (r >> 1) + (l >> 1); without the parentheses, + binds tighter than >> in C-like languages
Plus, division is expensive compared to addition and subtraction. I'm not sure how much difference it would make in practice, but it makes sense to do it only once instead of twice if you can
@@dualunitfold5304 that is true, but integer division by 2 can be done by a single bitshift?
@@nimcompoo Yeah you're right, I didn't think about that :D
5:23 The graphic is wrong: it'd result in a negative sum, as r > l. It should say r - l.
Nicely explained.
Quite interesting that this was a hidden problem in Java for so long. In microarchitectures this has been a well-known problem for decades, as the numbers don't need to be absolutely huge; it's enough for both the left and right numbers to be greater than half of the maximum representable number.
There are also patterns where you divide both l and r by 2 with bit shifting every iteration and add or subtract them according to the control flow.
Great job with the lighting.
4:31 The problem is even worse if your language *doesn't* do overflow or bounds-checking, (more common than you might think!) in which case your code will be looking at memory outside your array & Very Bad Things will happen. The way to prevent this in your code is to subtract your current position (CP) from the size of your array, (integer) divide that by 2, add it to CP, giving your next CP. This will work for any size of array that is no larger than your largest possible positive integer. This, of course, is how you handle a task like this in assembler.
this comment is the whole video summed up.
In assembler (well, in x86 at least), you can just add the numbers anyway, then do the division with a `rotate right with carry` instruction and are done. :D
I like the related ternary search used to find a minimum in a convex array/function.
Binary search can be used for more complex tasks, like finding the answer to a problem where you know the range of the possible answers and the problem boils down to solving a monotonic function. I recently stumbled upon a bug with (l + r) / 2 when l and r can be negative numbers. I was using C++, and in C++ integer division rounds toward 0; so, for example, 3 / 2 = 1 and -3 / 2 = -1. But I was expecting -3 / 2 = -2, the nearest integer less than the actual result. In Python that is the behaviour of integer division: -3 // 2 = -2.
@@GeorgeValkov yep, I learned about it after I found the bug.
These are insidious bugs, I ran into one due to different rounding modes for floating points (towards zero, nearest, round up, round down). I guess the integer division is rounding towards zero.
@@GeorgeValkov what's that? I'm new to C++, so I don't really understand what that means
That's bit shifting. It exists in all languages, but it's a "low level" operation. Anyway, since each bit in a binary number is a power of two, shifting the representation left or right is the same as multiplying or dividing by 2. Most languages implement their bitshift operators independently of endianness, though.
In the case of right shifting, the units bit is discarded, so it's integer division: the remainder is thrown away. Note that the discarded digit equalling the remainder only works because the divisor matches the base (like dividing 42/10: the remainder 2 is the last digit).
Whenever he said "l+r" I already knew from experience to write some kind of code to prevent an overflow.
Has no one else had to endure a C++ class where everything is trying to make your program break?
C++ programmers have a different mindset 😂
@@worabnag _C++_ has a different mindset, it will try to break _you_
People who only know high-level languages will not likely be thinking of overflows. If they got a crazy error message about an array being indexed at negative one million or whatever, they might eventually realise what is going on, but hand on my heart, I'm sure it would take me a while: hours or days. I wouldn't have anticipated it ahead of time.
@@3rdalbum I primarily use Javascript and that language is so poorly designed that it has me writing custom code to verify an integer instead of it having that built-in type.
I think because I am so used to having to fight against the language constantly and being forced to adopt an extremely defensive style of coding, I was more aware of such an error, despite javascript being a higher level language.
More of this please.
I remember a kind-of similar bug in Intersystems Caché: 4 * 3 / 6 returned 2, but 4 / 6 * 3 returned 2.00001 (and this little butterfly ended up destroying the city of New York).
These commutative/grouping properties are basic in arithmetic, but in our code they sometimes create a lot of mess when released into production environments.
I think it is important to add comments in the code so the newer generations understand why we make these changes. E.g.:
int m = l + (r - l) / 2; // =====> Do NOT change to m = (l + r) / 2 =====> it can potentially lead to a positive int overflow
Looks like your first example is just an example of floating point coercion (assuming you expected integers), along with the fact that floating point numbers are an approximation of a value, so due to the underlying math behind them you sometimes get tiny weird rounding errors.
One has to be a fool to think computers can support ℝ numbers. They do not, it is absolutely not possible. As such they do not support associativity or commutativity, or many other properties, even the more obvious. 1+ε-1 may not be ε. etc...
Even Integer support is partial, at best.
Newbies think they can solve the problem by using "fuzzy" rounding or other tricks.
Nerds know they have to prove the code they wrote.
No need to add comment. Add a unit test with big number. It will fail when someone changes it.
You can also interpret the sum of two signed integers as unsigned (it has one more bit) and divide it by 2 to get it back into range
A topic I would love to see Computerphile cover: relocatable code. How can a binary be loaded in an arbitrary memory location and still have valid addresses for load/store/branch? How did that work in the old days vs now? What is real mode and what is protected mode? Are there different strategies on different platforms?
Yes! I support your request!
The binary is loaded at a different address each time (assuming PIE is set when compiling), and instructions within the binary are always at a constant offset from the base address at which the ELF is loaded.
As long as the ELF knows its loading address, the instructions are always at a constant offset.
Real mode is an outdated operating mode; Intel is removing it in X86S.
Modern UEFI BIOSes make it really hard to enter real mode (aka 16-bit mode) outside of SMM. Pushing UEFI boot with secure boot forward does just that.
For now, SMM (System Management Mode) starts in real mode, and the very first thing a CPU does after entering SMM is switch into long mode (64-bit mode) inside SMM.
One more thing about different modes: modern ARM cores are mostly 64-bit only (they support 32-bit neither for the whole OS, nor for applications, nor for ARM TrustZone, the ARM counterpart of x86's SMM).
I remember hitting that bug and investigating a bit on it. But it happened so rarely, that we decided not to put any effort into it.
Don't remember exactly, would assume we didn't have test cases for that because you don't test standard libraries. "Millions of people use that every single day, and you think _you_ found a bug in that?!?"
Still holds an appropriate amount of humility. But _sometimes_ ...
I can see a similar overflow problem happening when summing up for an 'average'
😬
Divide by how much if you don't know the number of items yet?
@@pvandewyngaerde
You just have to keep track of the number you're currently on. For example, if you have two numbers, the average is (a + b) / 2. Let's call this A1.
If you add a third number 'c', it's (a + b + c) / 3, which you can rewrite as (a + b) / 3 + c / 3. We can then rewrite the first term as A1 * (2 / 3), giving us A1 * (2 / 3) + c / 3, allowing us to divide before adding.
Just to continue the example, if we call our last result A2, then when we add a fourth number 'd', we can compute A2 * (3 / 4) + d / 4.
@@DanStoza That would lead to less accurate results though, computers are bad with accuracy in divisions.
If you are dealing with super large numbers, just use a 64-bit integer. If you can max out 9 quintillion in the addition, then use BCD (binary-coded decimal); it's guaranteed to max out your memory before you can max it out. It will also be a performance hog, though.
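The incremental-mean idea worked through in this thread, as a small Java sketch (names mine; as noted above, the extra divisions can cost some accuracy):

```java
public class RunningAverage {
    // Incremental mean: avg_n = avg_{n-1} * (n-1)/n + x_n / n.
    // Keeps intermediate values near the data range instead of
    // accumulating one huge sum first.
    static double mean(double[] xs) {
        double avg = 0.0;
        for (int n = 1; n <= xs.length; n++) {
            avg = avg * (n - 1) / n + xs[n - 1] / n;
        }
        return avg;
    }

    public static void main(String[] args) {
        System.out.println(mean(new double[] {2, 4, 9})); // 5.0
    }
}
```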
A crossover with Numberphile would be nice here. You could have them show why (r+L)/2 = L+(r-L)/2.
It's not too hard to show here, though. So I'll try:
(r+L)/2
= r/2 + L/2
= r/2 + L/2 + (L/2 - L/2)
= r/2 - L/2 + (L/2 + L/2)
= (r-L)/2 + L
= L + (r-L)/2
Bonus question: why not use the second step? 2 reasons:
1. Division is generally the slowest arithmetic operation, so you want to do as few of them as possible.
2. The fastest math uses integers. Integer division will mean the .5 part gets dropped. So, if both r and L are odd, the midpoint will be off by 1.
At least, those are my answers.
Funnily enough I just last week watched some C++ convention talk, might've been Kevlin Henney or someone else, mentioning this exact issue where the integers were so big they overflowed before getting the average. Might've actually been about satellite arrays. Perhaps it was Billy Hollis after all, but someone anyway.
I was thinking maybe you'd just halve them first, but then again it's possibly two float arithmetic operations which isn't as lovely. Although, you could probably get away with a >> 1 type of trick and get almost free out of jail. Anyway Pound's method is pretty obviously better when it's just one halving and addition/subtraction.
Funny, I had that problem a few months ago while implementing a binary search on a microcontroller. I had to use a Word as the index, so I only had 65535 as the max index, and I noticed that the overflow was not handled correctly in the basic binary search. It didn't even occur to me that such an obvious problem was unknown for so long.
I haven't written binary search in a long time, but I wonder if I used to write it the "right" way. It does make more sense to me that I want to find the midpoint of this leftover chunk then index it by adding it to the end of the left bit.
also you can do (r + l) >>> 1 in Java: the unsigned right shift effectively divides by two even if the sum wrapped negative (a plain >> would not, since it keeps the sign bit)
Nice. I thought the solution would be m = l/2 + r/2, but maybe you get in trouble for odd values of l and r.
Integer division can't result in a decimal. In Java, 5/2 == 2, not 2.5
So for (l, r) = (1, 3), you get (1/2) + (3/2) = 0 + 1 = 1.
I thought the same at first, but addition is a simpler operation than division is why they did it this way. You could also do r - (r - l) / 2.
@@thomasbrotherton4556 Division by 2 is a simple bit shift
This was my first thought, though I was thinking with a bit shift as that should be faster: m = (l >> 1) + (r >> 1);
@@SomeNerdOutThere ... + (l & r & 1) . odd numbers fixed.
For demonstration purposes, could also have defined l and r as byte or short integers...
Array indices are always int
@@ats10802b Not a big Java user here, but can't you index with a fewer-bit integer in Java? I thought it would be implicitly cast.
The point is: you could define the l and r variables as 1-byte or 2-byte ints; then you don't need the billion-element array to recreate the bug.
I don't think you can have anything other than int for indices
IIRC Java does arithmetic in int and long only, so shorts and bytes won't actually have this problem.
Also, this bug didn't get spotted for so long because nobody had enough RAM to hold arrays of such sizes.
@@tylerbird9301 I think what he's getting at is that the overflow doesn't actually happen at the indexing part of the code but at the addition part. So if L and R were defined as a smaller datatype, adding them would still overflow, resulting in a negative number when used as the array index.
this is more of a standard practice of embedded software engineering than a "bug" in the binary search. you always consider intermediate overflows when doing data unit conversions or working with fixed point decimal numbers.
Are there any situations when (l+r)/2 and l+(r-l)/2 give different values when (r-l)/2 is truncated?
L+R and R-L have the same remainder mod 2, so they will truncate the same way, I believe
L + (R/2 - L/2) maybe would be different than (L + R)/2?
Testing an example
L=1, R=6
(L+R)/2 = 3.5 → 3
L +( R/2 - L/2) = 1 + 3 - 0 = 4
Yup
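The parity argument above can be checked exhaustively for small values; a throwaway Java sketch:

```java
public class MidpointEquivalence {
    public static void main(String[] args) {
        // L+R and R-L have the same parity, so for non-negative L <= R
        // both formulas truncate to the same midpoint.
        for (int l = 0; l <= 200; l++) {
            for (int r = l; r <= 200; r++) {
                if ((l + r) / 2 != l + (r - l) / 2) {
                    throw new AssertionError(l + "," + r);
                }
            }
        }
        System.out.println("formulas agree on all pairs tested");
    }
}
```

Note that L + (R/2 - L/2), with the division distributed, is the variant that diverges, as the example above shows.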
l + (r - l) * 0.5
That is a standard linear interpolation function. Basically, you walk the value from 'l' to 'r', with the given progress value, which is 0.5 with the example above.
IntelliJ/JetBrains IDEs are awesome.
I wouldn't have thought to use pointers that way, obviously you subtract l from r and add that offset/2 to l. No overflows possible if you use pointer arithmetic correctly
Would it not be more efficient to just divide both by 2 inside then add them? Then you just have (r>>1) + (l>>1).
Though you might have to add the remainder back if both are odd, so
(r>>1) + (l>>1) + (1&r&l) (the parentheses matter: + binds tighter than & in C-like languages).
Pretty much the same number of operations, but you're not limited to using signed ints.
Well, why not l + ((r-l)>>1) and cut out the comparison? The addition/subtraction is nothing compared to division. (The inner parentheses matter: + binds tighter than >>.)
@@rb1471 right, I was thinking l could be bigger than r, but that can never happen :)
(l + r) >>> 1 works as long as they are non-negative signed ints (each limited to the max positive int value). >>> is unsigned shift.
5:11 You can also do ((x/2) + (y/2)). Some people may find this easier.
That has its own problems. As someone else pointed out, if l=3 and r=5 and your language's integer division operator rounds down, 3/2 + 5/2 gives you 3 and the algorithm loops forever.
I learned r-l in college on a PDP11 running COBOL as we had to keep the numbers small on a database for theatre ticket sales.
ok so some may have already posted this but couldn't you just do (R*0.5)+(L*0.5) to get the mid point? or am i missing something ?
As usual, Rust is safe with low overhead. A fun fact about this language: plain `+` on integers panics on overflow in debug builds, and if you want a specific overflow behaviour you spell it out with methods like `overflowing_add`, `wrapping_add`, or `checked_add`.
Sounds like a pain, but it's actually less of an issue than you'd expect, because in practice the only places where you're doing raw integer arithmetic like that are where an overflow would be fatal (e.g. with array indices). With float types you don't have to specify this (but you can't compare floats with `==` ad hoc; you have to know what you're doing).
If the type of the index were an unsigned int, could it just become an infinite loop without ever letting the user know? (Example: if, with the user's luck, the new m wraps around back to the first midpoint, it would loop again and again.)
Edit: to be clear, L, R, and M should be unsigned int (and length as well, I suppose?)
Java has 'unsigned bit shift right' operator '>>>' which works perfectly for division by powers of 2. (l + r) >>> 1 will work just fine. Intermediate result, the sum of two integers can at most overflow by a single bit which occupies the sign bit. No information is lost, it's just treated as negative number. So, when shifted back with single zero it produces correct positive number.
In Java and C# the array length is an int. You can easily solve the problem by casting to long first.
If you need larger arrays or matrices, they are usually sparse arrays or matrices, which need a more sophisticated datatype implementation anyway.
"You can easily solve the problem by casting to long first."...until the boundaries of long are breached, then we are back to square one.
Rule #1: if you are writing a general-purpose routine, always, always write in a 'safe' way. Never, ever assume a limit when none has been implemented.
@@ChrisM541 As I said, the array length is guaranteed to be an int in Java and C#. The ranges of int and long are exactly specified.
Doesn't make sense to program 'safer' than the language spec.
In fact, the current OpenJDK version even takes advantage of the fact that int is signed:
int mid = (low + high) >>> 1;
One other thing I would add is a defensive early return that checks if left is less than right as we assume, because if it is greater than right then it will lead to an underflow.
if (l > r) {
    throw new IllegalArgumentException("Left pointer is greater than right pointer: programming bug.");
}
int midpoint = l + (r - l) / 2;
That won't ever be the case as long as we're searching the _index_ of an array, which can only be a positive integer.
First: the expression L + (R - L) / 2 will never underflow as long as L >= 0 and R >= 0, because L is added back onto the result of (R - L) / 2. So even if (R - L) / 2 were negative, adding L back corrects for it.
Secondly, it would be a compiler/runtime bug, because the while loop counts from L up to R, which are bounded by 0...length of the array and only shrunk towards the midpoint, resulting in a positive integer.
If we could have negative indices, then we'd need to conditionally check the bound and use (R + L) / 2 instead, otherwise falling back to the previous equation.
If you use signed integers in C, it would "never happen", because signed integer overflow is undefined behaviour: the compiler CAN assume that `l + r` never overflows, since you never change them in a way that admits it. So, be careful with assumptions like that.
I wish you would link the relevant video(s) in the description... At the start he refers to another video I haven't seen and need to go look for.
I guess not that big a deal... Only 3 videos back...
Bit captivated by the lighting in this ep. Made me feel like Dr Mike was giving me a Voight-Kampff test.
Hey,
It was a really great explanation, appreciate the effort.
There is one small doubt, what if we do L/2 + R/2, this should also work as both L and R in the range so L/2 and R/2 are in the range too.
I was thinking the same, but l + (r - l) / 2 is only 1 division, so maybe it is more efficient.
L/2 + R/2 gives the incorrect result when L and R are both odd numbers. Can you think of why?
@alstuart @miamore_un
**Assuming Java:**
It gives the incorrect result for both even and odd numbers, because L and R aren't defined variables. That's easy to fix by correcting the variable names to l and r. (Java variable names are case-sensitive.)
It also gives the incorrect result for odd numbers due to integer division, but that's also easy to fix by changing it to l/2.0+r/2.0 .
Regardless, their intent is correct. Who knows, maybe they were writing code in a language that doesn't do integer division like that, and which has case-insensitive variable names!
@@alstuart Not sure my other comment notified you correctly. Sorry if this is a double-ping!
I like to just implement binary search using bitwise operations.
So, for the index that you are looking at, just go through the bits one by one (starting at most significant bit and starting with index zero), set them to 1, and then reset them to 0 if the index is too high. Just make sure to check whether you are within bounds.
This way, you don’t need math and therefore can‘t run into integer overflows.
That sounds so wasteful and also weird. Say you have an index of 12: everything up to 16 is too big (that's like at least 27 comparisons), then you reach 8, 8 is okay, 8 + 4 is too big, so we turn 4 off again (right?), then 8+2 is okay, and 8+2+1 is okay. So we end up with 11? Or do we stop at 8, which again is not correct. What reason do we ever have to turn off the 8?
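One common way to realise the bit-by-bit idea, sketched below in Java (this is an assumption about what the original commenter meant; there are several variants). Each candidate bit is accepted or rejected with a single element comparison, so the index is built from the most significant bit down without any arithmetic that could overflow:

```java
public class BitwiseSearch {
    // Returns an index of key in the sorted array a, or -1 if absent.
    // idx is assembled one bit at a time; after the loop it equals the
    // number of elements <= key.
    static int search(int[] a, int key) {
        int idx = 0;
        for (int bit = Integer.highestOneBit(Math.max(a.length, 1)); bit != 0; bit >>>= 1) {
            int candidate = idx | bit;                 // tentatively set this bit
            if (candidate <= a.length && a[candidate - 1] <= key) {
                idx = candidate;                       // keep the bit
            }                                          // otherwise leave it as 0
        }
        // idx now counts elements <= key; the last of them is at idx - 1
        return (idx > 0 && a[idx - 1] == key) ? idx - 1 : -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(search(a, 7));  // 3
        System.out.println(search(a, 4));  // -1
    }
}
```

Note that each bit costs exactly one comparison (against the element at the candidate position), so the total work is still one comparison per bit of the array length, the same as ordinary binary search.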
Why does Java use signed integers for an array structure that doesn't allow negative integers?
Java primitives are signed. Also, operating with signed numbers can be less error-prone than with unsigned numbers. For example, in C/C++ when you subtract a larger unsigned number from a smaller one, you'll get a wrapped result: with unsigned 8-bit numbers, 1 - 3 won't be -2, it will be 254.
Firstly, the problem remains with unsigned integers; the incorrectly calculated index just might access defined memory and lead to more confusing misbehaviour, such as an infinite loop or incorrect answer. Secondly, it can be useful to apply an offset to an index, such as the (r-l)/2 value here, and in other algorithms it wouldn't be odd for such a step to be negative. Thirdly, Java doesn't know the number is an index until it's used to index, and the algorithm isn't array specific. There do exist languages which can restrict index types to match, like Ada or Clash. In Python negative indices index from the right.
The even funnier part is that if they were unsigned the bug might have been harder to spot: the overflow still happens, but this time it wraps around from 0, so it gives a valid index inside the array and throws no error. The result would still be incorrect, just harder to notice, because with no exception thrown nothing would make it obvious that something was fishy.
Java doesn't do unsigned. The creators never liked the idea, and it's just the way the language is.
@@antoniogarest7516 That just shifts the over/underflow boundaries around, though. -100 - 100 isn't 56 either. Java was designed with a "32 bits is enough" attitude, Python switches to arbitrary precision, and Zig allows you to specify the width (u160 and i3 are equally valid types). Ada is also quite happy to have a type going from 256 to 511.
haha. nice. as an aside, could I request a topic? been reading a lot about the progress in SAT solvers over the last decade or so. tried to watch one of Marijn Heule's videos (think it was on the Simons Institute) but got a bit lost (too many acronyms!), though it sounded really, really useful, aside from the big numbers that make the popular-science news. the trade-off between fast and clever intrigued me. so did the idea of looking for a solution vs showing there isn't one.
needing a second to think of the number right before 1.2B is super relatable, lol
Would have thought that L/2 + R/2 would have been better than L + (R - L)/2. L/2 is just a bit shift right, so pretty fast. Handling the case where both L and R are odd is not too difficult (just add one).
So you’re performing the same number of operations, but swapping a subtraction for division. But also you’re branching to add 1/4 of the time?
@@Andersmithy I suspect Mike's implementation is more reliable across architectures and compilers. Dividing by 2 is just a bit shift, so (L >> 1) + (R >> 1), which has got to be pretty quick. The issue is you have to add (L & 1) & (R & 1) to the result to account for both being odd numbers. But there are no branches in the code.
So ans = (L >> 1) + (R >> 1) + ((L & 1) & (R & 1))
(The compiler *could* have a branch in the logical bit - after all, if L & 1 is zero it doesn't have to do the R & 1 part)
However, Mike's code is just L + ((R - L) >> 1) so, yeah - I suspect subtraction is pretty much implemented in hardware.
The thing is, L >> 1 and R >> 1 could be pre-computed and stored (as two equally long arrays), then, maybe my solution would be a femtosecond faster :)
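The halves-based formula discussed in this thread can be checked directly. A quick sketch (method names are just for illustration), showing why the naive l/2 + r/2 is off by one when both endpoints are odd, and why the single-& correction term fixes it:

```java
public class HalfSumMid {
    static int naiveHalves(int l, int r) { return l / 2 + r / 2; }              // drops two half-units when l, r both odd
    static int fixedHalves(int l, int r) { return (l >> 1) + (r >> 1) + (l & r & 1); }
    static int subtractMid(int l, int r) { return l + (r - l) / 2; }            // the fix from the video

    public static void main(String[] args) {
        System.out.println(naiveHalves(5, 9)); // 6: integer division loses 0.5 twice
        System.out.println(fixedHalves(5, 9)); // 7
        System.out.println(subtractMid(5, 9)); // 7
        // the halves form also dodges overflow, since neither half exceeds the inputs:
        System.out.println(fixedHalves(2_000_000_000, 2_100_000_000)); // 2050000000
    }
}
```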
In a C dev, are the stdlib qsort and bsearch ok to use?
The logical right shift >>> will also fix it, because it'll return from negative to positive if necessary: (l+r) >>> 1.
I'm used to, and mostly use, the minus approach to find the middle.
I don't know, that thing felt counterintuitive back then.
How do you handle finding multiple matches in a binary search? I assume when you find a match you have to start a linear search to the left and right of that, and continue until you find something that doesn't match?
You can keep going with binary search, though many implementations only return the midpoint upon match, not the high and low range. Searching for any match, left or right edge are distinct goals, often expressed as looking for the first or last index a value could be inserted while maintaining order.
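The first/last-insertion-index idea can be sketched like this (helper names `lowerBound`/`upperBound` are borrowed from the C++ standard library convention, not from Java's API):

```java
public class Bounds {
    // First index at which key could be inserted keeping a sorted.
    static int lowerBound(int[] a, int key) {
        int lo = 0, hi = a.length;        // hi is one past the end
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2; // overflow-safe midpoint
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    // Last such index; all matches occupy [lowerBound, upperBound).
    static int upperBound(int[] a, int key) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] <= key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 2, 2, 5};
        System.out.println(lowerBound(a, 2)); // 1
        System.out.println(upperBound(a, 2)); // 4
    }
}
```

Both bounds stay O(log n), so there is no need to fall back to a linear scan over a long run of equal keys.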
I don't know if its just me, but what about (ceil(l/2) + floor(r/2)), this should be correct right??
I'm not sure if I like 'r' indexing the last element, as opposed to one past the last element, similar to a sentinel index you might get in C / C++. Or is it just me? Is Java different?
It makes the code somewhat simpler as you can test for l==r rather than l+1>=r. If I remember correctly from the time I played with binary search...
I remember this error.... probably the most famous bug of all time
At 5:23 is it supposed to be R-L instead of L-R? Maybe it works both ways I don’t know.
Yeah, it should have been R-L. It won't work both ways since (L-R)/2 will be a negative value.
Is there also any issue with any optimisers which could well 'simplify' the equation back to its original, or variants there of?
Depends on how you mean "any." Any respectable compiler should not have this bug.
Respectable compilers follow the standards, and therein lies the problem. You are fighting the misunderstandings about unspecified, undefined and implementation-defined behaviour. It's the YMMV problem. Expectations and reality often crash into each other.
@@philipoakley5498 This is a very impressionistic comment. Are you suggesting that it would sometimes be correct that an optimization changed behavior if it was someone's expectation?
@@pwhqngl0evzeg7z37 it can definitely happen. If it's allowed then the compiler can do what it likes, especially when it's a marginal case (less people complaining... ;-).
It's even worse when you try to debug, because (IDE dependent) the optimisation gets switched off to allow line-by-line debugging.
@@philipoakley5498 Sure, it could happen hypothetically, but this would be a bug, hence not a respectable compiler.
Could you just do l/2 and r/2 then add both answers? Is that efficient?
If l and r are integers, and we do integer division, then l/2 + r/2 may not necessarily be the midpoint of l and r.
In Java for example, 5/2 + 9/2 = 2 + 4 = 6, but the midpoint of 5 and 9 is supposed to be 7.
The sun was setting quite beautifully through the window
Why not just divide both numbers by 2 then add them? (I know normally subtraction is faster than division but since it's dividing by 2 just bit shifting should work, plus if you use bit shifting of an integer instead of division then you can do away with flooring the result as well).
I'm really surprised that this program takes noticeable time to run, is filling the array really that slow?
(I expected that the CPU would be able to assign one index per cycle at least, maybe it is allocation that's slow?)
When you create a large array, the memory is not actually allocated in physical memory at that time. It's only when it's indexed that a physical page, usually 4K, is allocated into the virtual address space. Create a loop where you initialise a very large array, then go back and do it a second time. The second time will normally be faster, as all the memory has already been allocated and assigned to physical memory.
Well, quite honestly, you just taught a new vulnerability analysis trick :)
People nowadays completely ignore hardware and operating system, which has 80% impact on actual performance.
OOP languages like Java, Python and Javascript led us to this madness.
It's actually not a more complicated process; it's less computationally complex. Divides are computationally expensive, so the smaller the values involved, the better, and (L+R)/2 will always involve a larger value than (R-L)/2. Division in a computer is done with a series of additions, so the new method, ((R-L)/2)+L, is actually simpler for a computer to perform.
A simple int overflow was trivial but the talk and the simple solution was more interesting (as a paradigm on how to think when creating algorithms)
Just wondering, would using a long data type not work for this?
It would. It's slightly slower though. Especially on 32-bit architectures.
@@lokedhs and that's why modern languages have a specific pointer-width unsigned integer type which is exactly enough to address any object in memory on the target platform, the fastest option that provides correctness
See: Rust's usize, C/C++'s size_t
Nothing in the binary search algorithm requires your search space to be something physically stored in memory. Consider e.g. doing a binary search on a function to calculate a root.
@@lokedhs if you're on a 32-bit platform and dealing with billions of records, int overflow is just one of your concerns
Using a larger data type doesn't fix the problem though, all it does is move it somewhere else. Maybe it moves it far enough away that nobody encounters it, or maybe other restrictions in your specific use case can't encounter it. But in terms of a general implementation of a binary search, the issue of possible overflow will still exist.
You either need to be using something like Java that will sidestep the overflow issue, or you need to implement something like in the video so that overflows simply cannot happen.
What about doing 2 divisions and adding the two halfs?
What would have happened if the code had used an unsigned integer type? Just incorrect results, right? Is that why they chose a signed integer, so if something goes wrong, at least you know about it with an exception?
There was this one one-off project where I implemented a binary search in Java. I wasn't aware of this. I hope they never process more than 1.2 billion records.
what if the input itself is too large?
(is this a different topic?)
It can't ever be. If it were, it would already have overflowed, and thus the bug would be somewhere else in the code.
Does Java allow for a shift right without sign extension? Because then the eventual result will be valid again.
I didn't understand, being from a c programming background how is array size limited by a language? Isn't that an addressable restriction depending on the architecture?
some languages do a lot of data management in the background with fancy data structures. some languages use dynamic arrays by default
Why not use l/2 + r/2 ? is it only because two separate divisions are computationally more intensive?
Because if l=1 and r=3, you get m=1, then r=0 and everything crashes since r ends up less than l.
I guess I didn't notice you writing it that way in the first video, but I've known about this weird way of getting half the distance between two indices for a long time and specifically avoided it because of this known issue. I generally use C and adding a single extra instruction that takes one clock is not a big deal for a compiled language. Even at a billion entries it's more than fast enough to not warrant worrying about it. This is one of those edge cases that can ruin your year and is probably one of the reasons that Knuth came up with that stupid phrase. I guess I should've paid more attention in that first video and chided you for it, but I was fighting my Python installation that didn't want to use numpy. Bad Mike!
You could also go the more jank route and cast the l and r to longs, do the calculations, and then cast the resulting m back to an int
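That works in Java because once the operands are promoted to long, the sum of two ints cannot overflow, and the midpoint itself is guaranteed to fit back into an int. A sketch:

```java
public class LongCastMid {
    static int mid(int l, int r) {
        // compute the sum in 64 bits, then narrow; the midpoint of two
        // valid int indices always fits in an int
        return (int) (((long) l + r) / 2);
    }

    public static void main(String[] args) {
        System.out.println(mid(2_000_000_000, 2_100_000_000)); // 2050000000
    }
}
```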
An important point to remember is that this sort of calculation will often appear in the middle of some deep loop, so performance is an issue.
Couldn't you simply have done r/2+l/2 ? Or there is also an issue with that which I'm missing?
Yes you can, just confirmed it via Leetcode 278.
wouldn’t l/2 + r/2 also work? it is easier to remember, but is it more costly to compute?
It needs to be at least l/2 + r/2 + (l & r & 1)
Why the run around, isn't this simply (L/2 + R/2)?
We could use l/2 + r/2 for reading clarity. No?
If your machine model uses two's complement to represent negative numbers (i.e. left bit for the sign, like almost every computer; Java even requires it on a language level), then another solution would be, to replace the division-by-2 by an unsigned shift-right operation. This works, because an addition of non-negative numbers can at most overflow by one bit, i.e. into the sign-bit, but never beyond that, e.g. 01111111+01111111=11111110. An unsigned shift-right operation by 1 always shifts-in a zero on the left, thereby undoing the overflow while being mathematically equivalent to a truncated division-by-2 (exactly what we need), e.g. 11111110>>>1=01111111.
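A small Java sketch of that unsigned-shift repair, using the underscore digit separators mentioned at the top of the thread:

```java
public class ShiftMid {
    static int brokenMid(int l, int r) { return (l + r) / 2; }   // sum wraps into the sign bit for large inputs
    static int fixedMid(int l, int r)  { return (l + r) >>> 1; } // >>> reads that bit as 2^31, not as a sign

    public static void main(String[] args) {
        int l = 2_000_000_000, r = 2_100_000_000;
        System.out.println(brokenMid(l, r)); // negative: overflow
        System.out.println(fixedMid(l, r));  // 2050000000
    }
}
```

As the comment above notes, this only works because the overflow of two non-negative ints can spill at most one bit, into the sign position, where the unsigned shift recovers it.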
5:23 There's a typo. It should be R-L, not L-R
Would not a simpler fix be to just divide the two original numbers (L and R in this case) by two, and then add the result?
That's still 3 operations (two divisions and an addition), so no, not really.
@@xtifr Still a way to solve the problem by making sure the numbers do not overflow. May not be computationally superior, but it *IS* a solution to the problem.
I'll admit that doing an integer subtraction then a division, then an addition is computationally more efficient. I was looking at it from a pure mathematics perspective.
@@jeromethiel4323 consider what happens if you do this with the numbers 1 and 3. In integer math the result is not 2.
This only changes the place where the bug appears. If an array is defined indexed from -1,000,000,000 to 1,000,000,000 (minus a billion to plus a billion), then the very first step in a binary search will overflow with the new formula, but the original formula will not. To fully fail-safe the code would require an if statement to choose the expression that does not overflow...