Bug in Binary Search - Computerphile

  • Published 21 Nov 2024

COMMENTS • 883

  • @HouseExpertify
    @HouseExpertify 11 months ago +1017

    Just a random little fact: You can use underscores to make numbers more readable in java (for example: 1_000_000.)

    • @Qbe_Root
      @Qbe_Root 11 months ago +185

      It's also a thing in Python 3.6+, JS, Rust, C++14 (though with an apostrophe instead of an underscore), and probably a bunch of others

    • @wboumans
      @wboumans 11 months ago

      @@Qbe_Root And C#

    • @AterNyctos
      @AterNyctos 11 months ago +27

      Niice.
      That will come in handy for a project I'm working on.
      Many thanks! :D

    • @isotoxin
      @isotoxin 11 months ago +3

      🖤

    • @sly1024
      @sly1024 11 months ago +22

      In pretty much any language! Rust, python, C#, etc.
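
The underscore separators discussed in this thread are purely cosmetic grouping. A minimal Java sketch (class name is mine, for illustration only):

```java
public class Underscores {
    public static void main(String[] args) {
        int million = 1_000_000;          // same value as 1000000
        long card = 1234_5678_9012_3456L; // grouping is purely cosmetic
        int bits = 0b1010_1010;           // also works in binary and hex literals
        System.out.println(million == 1000000); // prints true
        System.out.println(bits);               // prints 170
    }
}
```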

  • @dexter9313
    @dexter9313 11 months ago +649

    It's funny how many comments point out the performance cost of adding one arithmetic operation. They overlook the fact that an arithmetic operation between two already-loaded registers is almost instant compared with the cache-miss monster that is accessing a large array at random positions. You won't measure any significant difference.

    • @axel77killer
      @axel77killer 11 months ago +62

      Yep. That might have been a valid concern 40/50 years ago, but not today

    • @Herio7
      @Herio7 11 months ago +139

      I'm more baffled that those people preferred speed over correctness in a log N algorithm...

    • @dexter9313
      @dexter9313 11 months ago +43

      @@Herio7 Speed may be a reasonable choice sometimes if it's significant and you can assert that your use case won't be problematic regarding correctness. But even then, speed won't even change by a significant amount here.

    • @lethern2
      @lethern2 11 months ago +37

      True that; it's the people who have never measured performance themselves and rely on their (very) limited knowledge of the insanely sophisticated CPU.

    • @collin4555
      @collin4555 11 months ago +36

      The people who will spend an hour and a megabyte to save a microsecond and part of a byte

  • @LarkyLuna
    @LarkyLuna 11 months ago +298

    The error is funnier in languages that have unsigned types:
    the sum/2 will end up somewhere inside the array and not throw any errors, just search weird places.

    • @rahul9704
      @rahul9704 11 months ago +51

      Takes longer to discover the bug, but it's more fun I promise!

    • @DoSeOst
      @DoSeOst 11 months ago +6

      That's exactly the comment I was going to write. 🤝🖖

    • @mytech6779
      @mytech6779 11 months ago +6

      The amd64 ISA has a hardware overflow flag, so some languages offer saturating arithmetic that just returns the maximum value of the type in the event of a rollover.

    • @FrankHarwald
      @FrankHarwald 11 months ago +4

      YES! It's even sneakier with unsigned types, because the values won't become negative but wrap around to smaller, still-wrong values, so often you don't get page faults, just wrong answers.

    • @th3hutch
      @th3hutch 11 months ago +8

      Note that in C++ unsigned integer overflow is actually well-defined (it wraps around); it's signed overflow that's undefined behaviour. The defined wrap-around is exactly why the bug stays silent.

  • @B3Band
    @B3Band 11 months ago +300

    There is a LeetCode binary search problem specifically designed to teach you about this.
    left + (right - left) / 2 is algebraically equivalent to (left + right) / 2 and avoids overflow. It's a handy identity to memorize for coding interviews.

    • @asagiai4965
      @asagiai4965 11 months ago +6

      Doing this before leetcodes

    • @feliksporeba5851
      @feliksporeba5851 11 months ago +12

      Now try (left & right) + ((left ^ right) >> 1)

    • @__Just_This_Guy__
      @__Just_This_Guy__ 11 months ago +31

      Or better yet: (left>>1) + (right>>1)

    • @chri-k
      @chri-k 11 months ago +10

      Note that left + (right - left)/2 can still overflow in general (if left and right can have opposite signs, right - left may overflow).
      A/2 + B/2, however, can't (I think).

    • @AxelStrem
      @AxelStrem 11 months ago

      @@__Just_This_Guy__ Have you tested it on two odd numbers?
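
To make the identity in this thread concrete, here is a small Java sketch (names are illustrative) showing (l + r) / 2 overflowing where l + (r - l) / 2 does not:

```java
public class Midpoint {
    // Naive midpoint: the intermediate sum l + r can exceed Integer.MAX_VALUE.
    static int naiveMid(int l, int r) { return (l + r) / 2; }

    // Safe midpoint: the intermediate value never leaves the range [l, r].
    static int safeMid(int l, int r) { return l + (r - l) / 2; }

    public static void main(String[] args) {
        int l = 2_000_000_000, r = 2_100_000_000;
        System.out.println(naiveMid(l, r)); // prints -97483648: the sum wrapped
        System.out.println(safeMid(l, r));  // prints 2050000000
    }
}
```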

  • @Blackread
    @Blackread 11 months ago +111

    Fun fact: l + (r-l)/n is a general formula for equal division where n is the number of subsections. (l+r)/2 is just a special case that happens to work for binary search, but when you move up to ternary search (two division points), (l+r)/3 no longer cuts it and you need the general formula.

    • @scottbeard9603
      @scottbeard9603 11 months ago +1

      Someone please explain this with an example. It looks so obvious but I don’t know what it means 😂

    • @thijsyo
      @thijsyo 11 months ago +5

      @@scottbeard9603 Let's say L=10 and R=40, and you want to divide into 3 equal pieces instead of 2. The formula (L+R)/3 would tell you to split at (40+10)/3 = 50/3 = 16 (and 33 if you do 2(50/3)). The general formula L + (R-L)/N tells you to split at 10+30/3 = 20 (and 30 if you do 10+2(30/3)), giving you a split into 3 equal parts.

    • @SharienGaming
      @SharienGaming 11 months ago

      @@scottbeard9603 Since someone else already replied with an example, here's the working-out of why it works for n = 2 but not for n = 3 ^^
      Let's look at the case n = 2:
      l + (r-l)/2 = (2l)/2 + (r-l)/2 = (2l + r - l)/2 = (l + r)/2
      Now let's see what happens when you try the same with n = 3:
      l + (r-l)/3 = (3l)/3 + (r-l)/3 = (3l + r - l)/3 = (2l + r)/3 != (l+r)/3
      In general:
      l + (r-l)/n = (nl)/n + (r-l)/n = (nl + r - l)/n = ((n-1)l + r)/n
      So technically that way of writing it down does work... as long as you don't omit the n-1.
      Mind you, having the computer work out the division point that way will only get worse as you add more subdivisions; better to keep the numbers small ^^

    • @firstname4337
      @firstname4337 11 months ago +4

      "Fun fact" -- you obviously don't know the meaning of the word "fun"

    • @kirillvourlakidis6796
      @kirillvourlakidis6796 8 months ago +1

      Well, I had fun.
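
The general formula from this thread can be sketched in Java; split() below is an illustrative helper giving the k-th split point when dividing [l, r] into n equal parts:

```java
public class Splits {
    // k-th interior split point when dividing [l, r] into n equal parts
    static int split(int l, int r, int n, int k) {
        return l + k * ((r - l) / n);
    }

    public static void main(String[] args) {
        // Divide [10, 40] into 3 equal parts: split points at 20 and 30.
        System.out.println(split(10, 40, 3, 1)); // prints 20
        System.out.println(split(10, 40, 3, 2)); // prints 30
        // The naive (l + r) / 3 lands at 16, which is not an equal division.
        System.out.println((10 + 40) / 3);       // prints 16
    }
}
```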

  • @TheFinagle
    @TheFinagle 11 months ago +8

    I love the remark about not having the bug in HIS code because Python protects you from it, while also recognizing this can be a real problem sometimes and teaching us how to avoid it.

    • @Freshbott2
      @Freshbott2 1 month ago

      It's kind of self-justifying though. It's often considered a bad thing that Python doesn't subject you to the horrors of systems programming. But if Python tried to make you aware that everything's secretly an object and full of smarts, we wouldn't use it!

    • @TheFinagle
      @TheFinagle 26 days ago +1

      @@Freshbott2 Python is intended to mix being beginner-friendly with being a fully capable language. Preventing oversight-type bugs like this is just part of being approachable to those who wouldn't have the experience.

  • @IARRCSim
    @IARRCSim 11 months ago +7

    6:13 "when your integer becomes 33 bits" He probably means when it requires 32 bits unsigned or more than 32 bits signed. The 2.1 billion he mentions before is roughly 2^31 so he's mostly talking about the range limits of signed 32-bit integers. Unsigned 32-bit integers go up to over 4 billion.
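
The limits this comment refers to are quick to check in Java:

```java
public class IntLimits {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE); // 2147483647, the ~2.1 billion limit
        System.out.println(1L << 31);          // 2147483648, no longer fits a signed 32-bit int
        System.out.println((1L << 32) - 1);    // 4294967295, the unsigned 32-bit maximum
    }
}
```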

  • @gtsiam
    @gtsiam 11 months ago +20

    In java in particular, you could just do: (l+r) >>> 1.
    This should give the correct answer even if l+r overflows, by treating the resulting sum as unsigned during the "division by 2" step.

    • @MichaelFJ1969
      @MichaelFJ1969 11 months ago

      I think you're wrong: If l and r are both 32 bits in size, then l+r will be 33 bits. Where do you store this intermediate upper bit?

    • @gtsiam
      @gtsiam 11 months ago +4

      @@MichaelFJ1969 That's the thing: they're not 32 bits, they're 31 bits. Java does not have unsigned integers. And because of the properties of the two's complement representation, l+r will have the same bit pattern as if l and r were unsigned; division does not share this property, but luckily we can do a bitshift to emulate it.
      I also know that LLVM can be weird about unsigned pointer offsets (or so the Rust docs say), so this trick will probably (?) also work in C/C++, but I'd have to dig into the docs to make sure.

    • @roge0
      @roge0 11 months ago +3

      That's actually what OpenJDK's (the reference Java implementation) binary search does too.
      If anyone's curious what >>> does in Java, it's the unsigned right shift. Right shifting by 1 is the same as dividing by two, but a typical right shift (>>) or division by two will leave the leading bit intact, so if the number was negative, it will stay negative. An unsigned right shift fills the leading bits with zeroes, so when the value is interpreted as a signed value, it's always positive.

    • @williamdrum9899
      @williamdrum9899 11 months ago

      Java doesn't have unsigned integers? Wow that is terrible.
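
A sketch of the >>> trick from this thread (class name is illustrative): the sum is allowed to wrap, and the unsigned shift then divides the wrapped bit pattern as if it were unsigned. This works for non-negative l and r because their true sum is at most 2^32 - 2, which still fits in 32 bits unsigned:

```java
public class UnsignedShift {
    // OpenJDK-style midpoint for non-negative l and r
    static int mid(int l, int r) { return (l + r) >>> 1; }

    public static void main(String[] args) {
        int l = 2_000_000_000, r = 2_100_000_000;
        System.out.println(l + r);     // prints -194967296: the sum wrapped negative
        System.out.println(mid(l, r)); // prints 2050000000: >>> 1 recovers the midpoint
    }
}
```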

  • @simon7719
    @simon7719 11 months ago +43

    Let's try to offer an alternative that does what so many seem to be expecting: division before the addition:
    m = l/2 + r/2
    This breaks when l and r are both odd as there are two individual roundings, for example 3/2+5/2=3 in integer arithmetic, which prevents the algorithm from making progress beyond that point.
    What you could do is add the last bit back with some bitwise operations (which boils down to "if both l and r are odd, then add 1 to the result"):
    m = l/2 + r/2 + (l & r & 1)
    Or just do it as in the video. The /2 will almost certainly be optimized into >>1 by the compiler if it is advantageous on the target cpu.

    • @Greenmarty
      @Greenmarty 11 months ago

      I'm sure most of us plebs would use the remainder operator to add the remainders back, if we for some reason wanted to complicate the code by making four divisions instead of one.

    • @rabinadk1
      @rabinadk1 11 months ago

      I planned to use floating point to avoid the bitwise operation and then cast back to int.

    • @simon7719
      @simon7719 11 months ago +2

      @@rabinadk1 Sounds like a huge extra can of worms and also likely a bit slower (although might not be noticeable compared to the cost of memory accesses).

    • @macavitymacavity126
      @macavitymacavity126 5 months ago

      My first idea🤣 Thx for highlighting the problem it would have and for the solution ;0)
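
The per-operand halving fix from this thread, sketched in Java for non-negative indices (names are illustrative): l & r & 1 is 1 exactly when both ends are odd, restoring the bit the two separate divisions dropped:

```java
public class HalvedMidpoint {
    static int mid(int l, int r) { return l / 2 + r / 2 + (l & r & 1); }

    public static void main(String[] args) {
        System.out.println(mid(3, 5)); // prints 4; plain l/2 + r/2 would give 3
        System.out.println(mid(2_000_000_000, 2_100_000_000)); // prints 2050000000, no overflow
    }
}
```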

  • @gregmark1688
    @gregmark1688 11 months ago +224

    TBF, when Java was written in the 90s, there weren't too many cases where arrays with 2^30 elements were being used. It's not really too surprising it went undetected for so long.

    • @0LoneTech
      @0LoneTech 11 months ago +23

      At the time, Java's demand for 32-bit processing was remarkable and often wasteful. Today, waste has been normalized to the point people get offended if you remark a 100GB patch is excessive, and Java has been awkwardly forced into smartcards.

    • @der.Schtefan
      @der.Schtefan 11 months ago +12

      In the 90s, engineers would have never done l+r halved and floor bs, they would have done r-l, shift, and indexed address, because any Intel 8086 can do this in the address generator unit almost twice as fast, in fact some processors would even fuse this instruction sequence. Java, Python. Estrogen for Programs! Nothing else!

    • @diamondsmasher
      @diamondsmasher 11 months ago +8

      Programmers should know to check their indexes before arbitrarily throwing them into an array, though. That was a problem way back in the days of C; it's not a Java-specific bug.

    • @gregmark1688
      @gregmark1688 11 months ago +2

      ​@@0LoneTech I always assumed they needed the 32 bit thing because they wanted to do a lot of linked lists or something, which are pretty useless in a 16-bit address space. They sure didn't do it for speed (I hope)!

    • @joshuascholar3220
      @joshuascholar3220 11 months ago +5

      Before 64 bit operating systems, you couldn't have an array that big anyway.

  • @tiagobecerrapaolini3812
    @tiagobecerrapaolini3812 11 months ago +10

    This scenario reminds me of calculating the average of floating point values. The first instinct is to sum everything and then divide by the number of values, but floats get more imprecise the further from zero they go, so the average might be off if the intermediate sum gets too big. A better approach might be a running partial average, since the intermediate values stay smaller. There are other techniques too that go over my head.
    I remember one day finding a paper with dozens of pages just detailing how to average floating point numbers; it's one of those problems that at first appear simple but are anything but.

    • @mina86
      @mina86 11 months ago +3

      Kahan sum is your friend.

    • @MichaelFJ1969
      @MichaelFJ1969 11 months ago +1

      Yep. It's really a matter of "pick your poison".
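
The Kahan summation mentioned above carries the rounding error of each addition forward in a compensation term. A sketch (names are illustrative), using ten values of 1e-16 that a naive double sum loses entirely against a running total of 1.0:

```java
public class KahanDemo {
    static double kahanSum(double[] xs) {
        double sum = 0.0, c = 0.0;   // c accumulates the lost low-order bits
        for (double x : xs) {
            double y = x - c;        // apply the previous correction
            double t = sum + y;      // big + small: low bits of y are lost here
            c = (t - sum) - y;       // algebraically zero, numerically the lost part
            sum = t;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] xs = new double[11];
        xs[0] = 1.0;
        for (int i = 1; i < 11; i++) xs[i] = 1e-16; // each below half an ulp of 1.0

        double naive = 0.0;
        for (double x : xs) naive += x;
        System.out.println(naive == 1.0);       // prints true: the tiny terms vanished
        System.out.println(kahanSum(xs) > 1.0); // prints true: compensation kept them
    }
}
```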

  • @abhishekparmar4983
    @abhishekparmar4983 11 months ago +4

    I'm convinced the best teachers are really good communicators.

  • @AnindoSarker
    @AnindoSarker 11 months ago +19

    I wish I had teachers like you guys at my university. Thank you for making such great quality videos.

  • @JohnSmith-qc4ye
    @JohnSmith-qc4ye 11 months ago +4

    Happened to me a decade ago in my home-built embedded application: an EEPROM with 65536 bytes, using unsigned index variables only, no signed index at all! The overflow wrap-around is sufficient to cause the bug. It would have caused an endless loop at startup in the EEPROM search. Luckily I spotted it in a code review before the EEPROM was more than half full. Thanks for the video.

    • @zimriel
      @zimriel 11 months ago +1

      ho li fuk that takes me back. i remember eprom's from my TSR days in the middle 1980s
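
The 16-bit unsigned wrap described above can be modelled in Java with char, its only unsigned 16-bit type. The index values here are made up for illustration, not taken from the original code:

```java
public class EepromWrap {
    public static void main(String[] args) {
        // Two 16-bit unsigned indices into a 65536-byte EEPROM
        char l = 40000, r = 50000;
        char sum = (char) (l + r);   // true sum 90000 wraps mod 65536 to 24464
        char mid = (char) (sum / 2); // 12232, below l: the search walks backwards
        System.out.println((int) sum); // prints 24464
        System.out.println((int) mid); // prints 12232
    }
}
```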

  • @JMcMillen
    @JMcMillen 11 months ago +77

    There is a little bit of significance to the number 17. If you ask people to pick a number from one to twenty, apparently 17 is the most popular choice. Plus, if you have an unbalanced twenty-sided die that rolls 17 all the time, people are less likely to realize something's up, as it's not as noticeable as a die that keeps rolling 20. Especially since it gets obfuscated by the different modifiers added to the roll, so it's usually never just a 17. Also, it's prime.

    • @asynchronousongs
      @asynchronousongs 11 months ago +8

      wait what why? source?

    • @cataclystp
      @cataclystp 11 months ago +7

      @@asynchronousongs Why would someone go on the internet and confidently spread misinformation 🙃

    • @Phlarx
      @Phlarx 11 months ago +5

      @@asynchronousongs I have no sources, but it does make sense that 17 is the most "random-looking" number from the group. Evens are roundish, so they're out. Same with multiples of 5. Single digit numbers are too simple. 13 has a reputation for being either lucky or unlucky. 19 is too close to the maximum. 11's two digits match. The only number left is 17. Would be interested to see if someone can find an actual source though.

    • @mxMik
      @mxMik 11 months ago +1

      17 is known as "the least random number", the Jargon File says.

    • @Uerdue
      @Uerdue 11 months ago +2

      I was expecting him to choose 42. 😢

  • @morwar_
    @morwar_ 11 months ago +1

    The way of explaining this was really good.

  • @Stratelier
    @Stratelier 11 months ago +6

    I remember coding a binary-search function by hand once (and it was probably susceptible to this edge case). I specifically wanted it to search for a specific value and return its index, OR if the value was ultimately not found, return the index of the first greater-than value (for use as an insertion point). Nothing too complicated technically, but DANG was it frustrating to debug.

  • @m1ch3lr0m3r0
    @m1ch3lr0m3r0 11 months ago +40

    Binary search can be used for more complex tasks, like finding the answer to a problem where you know the range of the possible answers and the problem boils down to solving a monotonic function. I recently stumbled upon a bug with (l + r) / 2 when l and r can be negative. I was using C++, and in C++ integer division rounds toward 0; so, for example, 3 / 2 = 1 and -3 / 2 = -1. But I was expecting -3 / 2 = -2, the nearest integer less than the actual result. In Python that is the behaviour of integer division: -3 // 2 = -2.

    • @m1ch3lr0m3r0
      @m1ch3lr0m3r0 11 months ago +2

      @@GeorgeValkov yep, I learned about it after I found the bug.

    • @Shubham_Chaudhary
      @Shubham_Chaudhary 11 months ago +1

      These are insidious bugs, I ran into one due to different rounding modes for floating points (towards zero, nearest, round up, round down). I guess the integer division is rounding towards zero.

    • @yeetmaster6986
      @yeetmaster6986 11 months ago

      @@GeorgeValkov What's that? I'm new to C++, so I don't really understand what that means.

    • @lodykas
      @lodykas 11 months ago +1

      That's bit shifting. It exists in all languages, but it's a "low-level" operation. Since each bit in a binary number is a power of two, shifting the value left doubles it and shifting it right halves it. (This is about the value in the register, not byte order; most languages define their shift operators independently of endianness.)

    • @lodykas
      @lodykas 11 місяців тому +1

      In this case of right shifting, the unit bit is discarded, so it's integeger division, the remainder is discarded. Note that the unit sigit being the same as the potential remainder only works because the divisor is the same of the base (like dividing 42/10 as remainder 2, the last digit. )
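
The rounding difference discussed in this thread is easy to check from Java, which truncates toward zero like C++ but also exposes Python-style floor division as Math.floorDiv:

```java
public class DivisionRounding {
    public static void main(String[] args) {
        System.out.println(-3 / 2);               // prints -1: truncation toward zero
        System.out.println(Math.floorDiv(-3, 2)); // prints -2: floor, like Python's //
        System.out.println(-3 >> 1);              // prints -2: arithmetic shift also floors
        System.out.println(3 / 2);                // prints 1: all agree for same-sign operands
    }
}
```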

  • @MrHaggyy
    @MrHaggyy 11 months ago +3

    Quite interesting that this was a hidden problem in Java for so long. In microarchitectures this has been a well-known problem for decades, as the numbers don't need to be absolutely huge; it's enough for both the left and right numbers to be greater than half of the maximum representable value.
    There are also patterns where you divide both l and r by 2 with bit shifting every iteration and add or subtract them according to the control flow.

  • @theantipope4354
    @theantipope4354 11 months ago +18

    4:31 The problem is even worse if your language *doesn't* do overflow or bounds checking (more common than you might think!), in which case your code will be looking at memory outside your array & Very Bad Things will happen. The way to prevent this in your code is to subtract your current position (CP) from the size of your array, (integer) divide that by 2, and add it to CP, giving your next CP. This works for any array no larger than your largest possible positive integer. This, of course, is how you handle a task like this in assembler.

    • @jnawk83
      @jnawk83 11 months ago +2

      this comment is the whole video summed up.

    • @Uerdue
      @Uerdue 11 months ago +2

      In assembler (well, in x86 at least), you can just add the numbers anyway, then do the division with a `rotate right with carry` instruction and are done. :D

  • @AndreuPinel
    @AndreuPinel 11 months ago +23

    I remember a kind-of-similar bug in Intersystems Caché: 4 * 3 / 6 returned 2, but 4 / 6 * 3 returned 2.00001 (and this little butterfly ended up destroying the city of New York).
    Operations that commute and regroup freely in basic arithmetic can create a lot of mess in our code when released into production environments.
    I think it is important to add comments in the code so the newer generations understand why we make these changes. E.g.:
    int m = l + (r - l) / 2; // =====> Do NOT change to m = (l + r) / 2 =====> it can potentially lead to a positive int overflow

    • @Elesario
      @Elesario 11 months ago +15

      Looks like your first example is just an example of floating point coercion (assuming you expected integers), along with the fact that floating point numbers are an approximation of a value, so due to the underlying math behind them you sometimes get tiny weird rounding errors.

    • @tatoute1
      @tatoute1 11 months ago

      One has to be a fool to think computers can support ℝ numbers. They do not; it is absolutely not possible. As such they do not support associativity or commutativity, or many other properties, even the most obvious: 1+ε-1 may not be ε, etc.
      Even integer support is partial, at best.
      Newbies think they can solve the problem by using "fuzzy" rounding or other tricks.
      Nerds know they have to prove the code they wrote.

    • @digital_hoboz
      @digital_hoboz 11 months ago +1

      No need to add a comment. Add a unit test with a big number. It will fail when someone changes it.

  • @l33794m3r
    @l33794m3r 11 months ago +1

    5:23 The graphic is wrong: it would result in a negative value, since r > l. It should say r-l.

  • @karoshi2
    @karoshi2 9 months ago

    I remember hitting that bug and investigating it a bit. But it happened so rarely that we decided not to put any effort into it.
    I don't remember exactly; I'd assume we didn't have test cases for it, because you don't test standard libraries. "Millions of people use that every single day, and you think _you_ found a bug in it?!?"
    That still holds an appropriate amount of humility. But _sometimes_ ...

  • @ProjSHiNKiROU
    @ProjSHiNKiROU 11 months ago +33

    Rust has a function for "midpoint between two numbers" for this exact situation somehow.

    • @Originalimoc
      @Originalimoc 11 months ago +2

      😂 what

    • @VioletGiraffe
      @VioletGiraffe 11 months ago +12

      C++ just recently added std::midpoint function as well.

    • @0LoneTech
      @0LoneTech 11 months ago +4

      This is called hadd or rhadd (h=half, r=right) in OpenCL, and there's mix() for arbitrary sections of floating point values. It's an ancient issue; compare e.g. Forth's */ word, which conceptually does a widening multiply before a narrowing divide.

    • @dojelnotmyrealname4018
      @dojelnotmyrealname4018 11 months ago +13

      It's almost as if this particular operation is remarkably common actually.

    • @VioletGiraffe
      @VioletGiraffe 11 months ago

      @@dojelnotmyrealname4018, it is very common! It's arithmetic average of two values, and any codebase is bound to have more than a few of those. What's not common at all is operating with values that risk overflowing. Especially in 64 bits; with 32 it's much more of a concern.

  • @mausmalone
    @mausmalone 11 months ago +1

    A topic I would love to see Computerphile cover: relocatable code. How can a binary be loaded in an arbitrary memory location and still have valid addresses for load/store/branch? How did that work in the old days vs now? What is real mode and what is protected mode? Are there different strategies on different platforms?

    • @MichaelFJ1969
      @MichaelFJ1969 11 months ago

      Yes! I support your request!

    • @agasthyakasturi6236
      @agasthyakasturi6236 11 months ago

      The binary is loaded at a different address each time (I'm assuming PIE is set when compiling), and instructions within the binary are always at a constant offset from the base address at which the ELF is loaded.
      As long as the ELF knows its loading address, the instructions are always at a constant offset.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 11 months ago +1

      Real mode is an outdated operating mode that Intel is removing in X86S.
      Modern UEFI BIOSes make it really hard to enter real mode (aka 16-bit mode) outside of SMM. Pushing UEFI boot with Secure Boot forward does just that.
      For now, SMM (System Management Mode) starts in real mode, and the very first thing a CPU does after entering SMM is switch into long mode (64-bit mode) inside SMM.

    • @volodumurkalunyak4651
      @volodumurkalunyak4651 11 months ago

      One more thing about different modes: modern ARM cores are mostly 64-bit only (no 32-bit support, neither for the whole OS nor for applications, nor for ARM TrustZone, the ARM analogue of SMM on x86).

  • @Nellak2011
    @Nellak2011 11 months ago +26

    Whenever he said "l+r" I already knew from experience to write some kind of code to prevent an overflow.
    Has no one else had to endure a C++ class where everything is trying to make your program break?

    • @ali.ganbarov
      @ali.ganbarov 11 months ago +3

      C++ programmers have a different mindset 😂

    • @Takyodor2
      @Takyodor2 11 months ago

      @@ali.ganbarov _C++_ has a different mindset, it will try to break _you_

    • @3rdalbum
      @3rdalbum 11 months ago +2

      People who only know high-level languages will not likely be thinking of overflows. If they got a crazy error message about an array being indexed at negative one million or whatever, they might eventually realise what is going on, but hand on my heart, I'm sure it would take me a while. Hours or days. I wouldn't have anticipated it ahead of time.

    • @Nellak2011
      @Nellak2011 11 months ago

      @@3rdalbum I primarily use JavaScript, and that language is so poorly designed that it has me writing custom code to verify an integer instead of having that as a built-in type.
      I think because I am so used to fighting the language constantly and being forced into an extremely defensive style of coding, I was more aware of such an error, despite JavaScript being a higher-level language.

  • @drxyd
    @drxyd 5 months ago

    Recently wrote an N-dimensional N-ary search function. It's fun to try and scale these basic algorithms, solving all the bugs that show up and improving your understanding.

  • @schoktra
    @schoktra 11 months ago +9

    Integer overflow is how the infamous infinite-lives bug in the original Super Mario Bros. for NES works. Lives are stored as a signed value, so if you go over what fits you get weird symbols for your number of lives, and it becomes impossible to lose: the way the math is set up, you can't subtract from a full negative number and wrap back into the positives, but you can add to a full positive and wrap into the negatives. Since it only checks whether you're equal to 0, not lower, you end up with infinite lives. But it overflows into other important memory and causes other bugs as well.

    • @williamdrum9899
      @williamdrum9899 11 months ago +1

      That's weird that it corrupts other memory. I would have expected lives to be a single unsigned 8 bit variable. Especially since once you get more than 99 the game's print routine indexes out of bounds and starts showing adjacent graphics instead of digits. So obviously the game designers figured 'eh, nobody will get that many extra lives' and didn't bother to check.
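
The signed wrap-around being described can be shown in a couple of lines of Java with a byte, the same width as the NES counter (the game-specific details above are the commenter's account; this only demonstrates the arithmetic):

```java
public class SignedWrap {
    public static void main(String[] args) {
        byte lives = 127;          // maximum value of a signed 8-bit counter
        lives++;                   // one more 1-Up...
        System.out.println(lives); // prints -128: wrapped into the negatives
        // A check like `lives == 0` will now never fire while counting up
    }
}
```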

  • @nio804
    @nio804 11 months ago +83

    I was wondering at first why just r/2 + l/2 wouldn't work, but with integers, the parts would get floored separately and that would give wrong answers when both r and l are odd.

    • @chingfool-no-matter
      @chingfool-no-matter 11 months ago +5

      wouldn't
      r

    • @nimcompoo
      @nimcompoo 11 months ago +5

      I think it should be r >> 1 + l >> 1

    • @dualunitfold5304
      @dualunitfold5304 11 months ago +4

      Plus, division is expensive compared to addition and subtraction. I'm not sure how much difference it would make in practice, but it makes sense to do it only once instead of twice if you can

    • @nimcompoo
      @nimcompoo 11 months ago +7

      @@dualunitfold5304 that is true, but integer division by 2 can be done by a single bitshift?

    • @dualunitfold5304
      @dualunitfold5304 11 months ago +3

      @@nimcompoo Yeah you're right, I didn't think about that :D
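
One caution on the r >> 1 + l >> 1 spelling suggested in this thread: in Java (and C), + binds tighter than >>, so without parentheses it doesn't compute a midpoint at all. A quick sketch:

```java
public class ShiftPrecedence {
    public static void main(String[] args) {
        int l = 10, r = 40;
        // Parses as (r >> (1 + l)) >> 1, not as a sum of halves:
        System.out.println(r >> 1 + l >> 1);     // prints 0
        // Parenthesized, the halves add up (still off by one when l and r are both odd):
        System.out.println((r >> 1) + (l >> 1)); // prints 25
    }
}
```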

  • @xJetbrains
    @xJetbrains 11 months ago +2

    The logical right shift >>> will also fix it, because it'll return from negative to positive if necessary: (l+r) >>> 1.

  • @tolkienfan1972
    @tolkienfan1972 11 months ago +1

    I like the related ternary search used to find a minimum in a convex array/function.

  • @AnotherPointOfView944
    @AnotherPointOfView944 11 months ago +10

    Nicely explained.

  • @Yupppi
    @Yupppi 11 months ago +3

    Funnily enough, just last week I watched some C++ convention talk, might've been Kevlin Henney or someone else, mentioning this exact issue where the integers were so big they overflowed before the average was taken. Might've actually been about satellite arrays. Perhaps it was Billy Hollis after all, but someone anyway.
    I was thinking maybe you'd just halve them first, but then again that's possibly two division operations, which isn't as lovely. Although you could probably get away with a >> 1 type of trick and get out of jail almost free. Anyway, Pound's method is pretty obviously better when it's just one halving and an addition/subtraction.

  • @willd4686
    @willd4686 11 months ago +10

    Our Prof had a rubber shark he called Bruce. It was supposed to remind us of trouble with integers lurking in the deep. I can't remember the exact lesson. Prof Bill Pulling. Great guy.

  • @ZipplyZane
    @ZipplyZane 11 months ago +1

    A crossover with Numberphile would be nice here. You could have them show why (r+L)/2 = L+(r-L)/2.
    It's not too hard to show here, though. So I'll try:
    (r+L)/2
    = r/2 + L/2
    = r/2 + L/2 + (L/2 - L/2)
    = r/2 - L/2 + (L/2 + L/2)
    = (r-L)/2 + L
    = L + (r-L)/2
    Bonus question: why not use the second step? 2 reasons:
    1. Division is generally the slowest arithmetic operation, so you want to do as few of them as possible.
    2. The fastest math uses integers. Integer division will mean the .5 part gets dropped. So, if both r and L are odd, the midpoint will be off by 1.
    At least, those are my answers.

  • @unkn0vvnmystery
    @unkn0vvnmystery 11 months ago +2

    5:11 You can also do ((x/2) + (y/2)). Some people may find this easier.

    • @dfs-comedy
      @dfs-comedy 11 months ago +1

      That has its own problems. As someone else pointed out, if l=3 and r=5 and your language's integer division operator rounds down, 3/2 + 5/2 gives you 3 and the algorithm loops forever.

  • @Ny_babs
    @Ny_babs 11 months ago

    Great job with the lighting.

  • @Gunbudder
    @Gunbudder 11 months ago

    This is more a matter of standard embedded software engineering practice than a "bug" in binary search. You always consider intermediate overflows when doing data unit conversions or working with fixed-point decimal numbers.

  • @monktoncrew
    @monktoncrew 11 months ago

    Bit captivated by the lighting in this ep. Made me feel like Dr Mike was giving me a Voight-Kampff test.

  • @kbsanders
    @kbsanders 11 months ago +6

    IntelliJ/JetBrains IDEs are awesome.

  • @dkickelbick
    @dkickelbick 11 months ago +24

    Nice. I thought the solution would be m = l/2 + r/2, but maybe you get in trouble when l and r are odd.

    • @B3Band
      @B3Band 11 months ago +13

      Integer division can't result in a decimal. In Java, 5/2 == 2, not 2.5
      So for (l, r) = (1, 3), you get (1/2) + (3/2) = 0 + 1 = 1.

    • @thomasbrotherton4556
      @thomasbrotherton4556 11 months ago +4

      I thought the same at first, but addition is a simpler operation than division, which is why they did it this way. You could also do r - (r - l) / 2.

    • @SaHaRaSquad
      @SaHaRaSquad 11 months ago +2

      @@thomasbrotherton4556 Division by 2 is a simple bit shift

    • @SomeNerdOutThere
      @SomeNerdOutThere 11 months ago +2

      This was my first thought, though I was thinking with a bit shift as that should be faster: m = (l >> 1) + (r >> 1);

    • @DFPercush
      @DFPercush 11 місяців тому

      @@SomeNerdOutThere ... + (l & r & 1) . odd numbers fixed.
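Putting this sub-thread's pieces together: the shift-based midpoint with the `(l & r & 1)` correction for the odd-odd case. A quick sketch (names are illustrative):

```java
// Shift-based midpoint: halve each index first, then restore the 1 that
// integer truncation drops when l and r are both odd.
public class ShiftMidpoint {
    static int mid(int l, int r) {
        return (l >> 1) + (r >> 1) + (l & r & 1);
    }

    public static void main(String[] args) {
        System.out.println(mid(3, 5));  // 4 == (3 + 5) / 2
        System.out.println(mid(3, 3));  // 3, where plain l/2 + r/2 gives 2
        System.out.println(mid(2_000_000_000, 2_100_000_000));  // 2050000000, no overflow
    }
}
```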

  • @pvandewyngaerde
    @pvandewyngaerde 11 місяців тому +9

    I can see a similar overflow problem happening when summing up for an 'average'

    • @warlockpaladin2261
      @warlockpaladin2261 11 місяців тому

      😬

    • @pvandewyngaerde
      @pvandewyngaerde 11 місяців тому

      Divide by how much if you dont know the number of items yet ?

    • @DanStoza
      @DanStoza 11 місяців тому +2

      @@pvandewyngaerde
      You just have to keep track of the number you're currently on. For example, if you have two numbers, the average is (a + b) / 2. Let's call this A1.
      If you add a third number 'c', it's (a + b + c) / 3, which you can rewrite as (a + b) / 3 + c / 3. We can then rewrite the first term as A1 * (2 / 3), giving us A1 * (2 / 3) + c / 3, allowing us to divide before adding.
      Just to continue the example, if we call our last result A2, then when we add a fourth number 'd', we can compute A2 * (3 / 4) + d / 4.

    • @SaHaRaSquad
      @SaHaRaSquad 11 місяців тому +1

      @@DanStoza That would lead to less accurate results though, computers are bad with accuracy in divisions.

    • @alexaneals8194
      @alexaneals8194 11 місяців тому +1

      If you are dealing with super large numbers, just use a 64-bit integer. If you can max out 9 quintillion in the addition, then use BCD (binary coded decimal); it's guaranteed to max out your memory before you can max it out. Also, it will be a performance hog.
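The divide-before-adding idea in this thread is usually done in floating point so the divisions don't truncate; a rough sketch of such a running mean (the class and its names are mine, not from the video):

```java
// Running mean that never forms the full sum, so it cannot overflow the
// way a naive total would: mean_new = mean_old + (x - mean_old) / n.
public class RunningMean {
    private long count = 0;
    private double mean = 0.0;

    void add(double x) {
        count++;
        mean += (x - mean) / count;  // incremental update, no big sum
    }

    double get() { return mean; }

    public static void main(String[] args) {
        RunningMean m = new RunningMean();
        for (int x : new int[]{2, 4, 6, 8}) m.add(x);
        System.out.println(m.get());  // 5.0
    }
}
```

As noted in the replies, doing this with integer fractions like A1 * (2 / 3) would lose accuracy; the floating-point form above keeps the error small.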

  • @svenbb4937
    @svenbb4937 11 місяців тому +1

    In Java and C# the array length is an int. You can easily solve the problem by casting to long first.
    If you need larger arrays or matrices, they are usually sparse arrays or matrices, which need a more sophisticated datatype implementation anyway.

    • @ChrisM541
      @ChrisM541 10 місяців тому

      "You can easily solve the problem by casting to long first."...until the boundaries of long are breached, then we are back to square one.
      Rule #1: if you are writing a general-purpose routine, always, always write in a 'safe' way. Never, ever assume a limit when none has been implemented.

    • @svenbb4937
      @svenbb4937 10 місяців тому

      @@ChrisM541 As i said, the array length is guaranteed to be an integer in Java and C#. The range of int and long are exactly specified.
      Doesn't make sense to program 'safer' than the language spec.
      In fact. the current OpenJDK version even takes advantage of the fact, that int is signed:
      int mid = (low + high) >>> 1;
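The OpenJDK trick quoted above can be checked directly: the sum may wrap into the sign bit, but the unsigned right shift brings it back (class name is illustrative):

```java
// (low + high) can overflow by at most one bit, into the sign bit;
// the unsigned shift >>> 1 shifts a zero back in, undoing the overflow.
public class UnsignedShiftMid {
    static int mid(int low, int high) {
        return (low + high) >>> 1;
    }

    public static void main(String[] args) {
        int low = 1_100_000_000, high = 1_300_000_000;
        System.out.println(low + high);      // -1894967296: wrapped negative
        System.out.println(mid(low, high));  // 1200000000: the shift recovers it
    }
}
```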

  • @rich1051414
    @rich1051414 11 місяців тому

    l + (r - l) * 0.5
    That is a standard linear interpolation function. Basically, you walk the value from 'l' to 'r', with the given progress value, which is 0.5 with the example above.

  • @landsgevaer
    @landsgevaer 11 місяців тому +38

    For demonstration purposes, could also have defined l and r as byte or short integers...

    • @ats10802b
      @ats10802b 11 місяців тому +1

      Array indices are always int

    • @landsgevaer
      @landsgevaer 11 місяців тому +3

      @@ats10802b Not a big Java user here, but can't you index with a fewer-bit integer in Java? I thought it would be implicitly cast.
      The point is, you could define the l and r variables as 1-byte or 2-byte ints; then you don't need the billion-element array to recreate the bug.

    • @tylerbird9301
      @tylerbird9301 11 місяців тому

      i don't think you can have anything other than int for indicies

    • @akompanas
      @akompanas 11 місяців тому +5

      IIRC Java does arithmetic in int and long only, so shorts and bytes won't actually have this problem.
      Also, this bug didn't get spotted for so long because nobody had enough RAM to hold arrays of such sizes.

    • @phiefer3
      @phiefer3 11 місяців тому +3

      @@tylerbird9301 I think what he's getting at is that the overflow doesn't actually happen at the indexing part of the code, but at the addition part of it. So if L and R are defined at a smaller datatype, then when you added them they'd still overflow resulting in a negative number when it gets used as the array index.

  • @berndeckenfels
    @berndeckenfels 11 місяців тому

    You can also interpret the sum of two signed integers unsigned (has one more bit) and divide it by 2 to get it back into range

  • @5-meo-dmt299
    @5-meo-dmt299 11 місяців тому

    I like to just implement binary search using bitwise operations.
    So, for the index that you are looking at, just go through the bits one by one (starting at most significant bit and starting with index zero), set them to 1, and then reset them to 0 if the index is too high. Just make sure to check whether you are within bounds.
    This way, you don’t need math and therefore can‘t run into integer overflows.

    • @grivza
      @grivza 10 місяців тому

      That sounds so wasteful and also weird. Say we have an index of 12, everything up to 16 is too big (that's like at least 27 comparisons), then you reach 8, 8 is okay, 8 + 4 is too big, so we turn 4 off again (right?), then 8+2 is okay, and 8+2+1 is okay. So we end up with 11? Or do we stop at 8, which again is not correct. What reason do we ever have to turn off the 8?
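One way to make the bit-by-bit idea above concrete is to build a 1-based position (so 0 can mean "nothing found yet"), keeping a bit only if the resulting position stays in bounds and doesn't overshoot the target. This sketch is my own reading of the comment, assuming a sorted int array; the names are illustrative:

```java
// Bit-building binary search: only OR and shifts, no index arithmetic
// that could overflow.
public class BitwiseSearch {
    static boolean contains(int[] a, int target) {
        int pos = 0;  // 1-based position of the largest element <= target
        for (int bit = Integer.highestOneBit(a.length); bit != 0; bit >>>= 1) {
            int candidate = pos | bit;  // tentatively set this bit
            if (candidate <= a.length && a[candidate - 1] <= target) {
                pos = candidate;        // keep it; otherwise the bit resets to 0
            }
        }
        return pos != 0 && a[pos - 1] == target;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(contains(a, 5));  // true
        System.out.println(contains(a, 4));  // false
    }
}
```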

  • @bitman6043
    @bitman6043 10 місяців тому

    also you can do (r + l) >>> 1. unsigned shifting right will effectively divide by two regardless of overflow

  • @johnswanson217
    @johnswanson217 11 місяців тому +1

    People nowadays completely ignore hardware and operating system, which has 80% impact on actual performance.
    OOP languages like Java, Python and Javascript led us to this madness.

  • @blr-Oliver
    @blr-Oliver 11 місяців тому

    Java has 'unsigned bit shift right' operator '>>>' which works perfectly for division by powers of 2. (l + r) >>> 1 will work just fine. Intermediate result, the sum of two integers can at most overflow by a single bit which occupies the sign bit. No information is lost, it's just treated as negative number. So, when shifted back with single zero it produces correct positive number.

  • @lambdaprog
    @lambdaprog 11 місяців тому

    More of this please.

  • @KaiKunstmann
    @KaiKunstmann 10 місяців тому

    If your machine model uses two's complement to represent negative numbers (i.e. left bit for the sign, like almost every computer; Java even requires it on a language level), then another solution would be, to replace the division-by-2 by an unsigned shift-right operation. This works, because an addition of non-negative numbers can at most overflow by one bit, i.e. into the sign-bit, but never beyond that, e.g. 01111111+01111111=11111110. An unsigned shift-right operation by 1 always shifts-in a zero on the left, thereby undoing the overflow while being mathematically equivalent to a truncated division-by-2 (exactly what we need), e.g. 11111110>>>1=01111111.

  • @miamor_un
    @miamor_un 11 місяців тому +2

    Hey,
    It was a really great explanation, appreciate the effort.
    There is one small doubt, what if we do L/2 + R/2, this should also work as both L and R in the range so L/2 and R/2 are in the range too.

    • @someaccount3438
      @someaccount3438 11 місяців тому +1

      I was thinking the same, but l + (r - l) / 2 is only 1 division, so maybe it is more efficient.

    • @alstuart
      @alstuart 11 місяців тому +3

      L/2 + R/2 gives the incorrect result when L and R are both odd numbers. Can you think of why?

    • @fox_the_apprentice
      @fox_the_apprentice 11 місяців тому

      @alstuart @miamore_un
      **Assuming Java:**
      It gives the incorrect result for both even and odd numbers, because L and R aren't defined variables. That's easy to fix by correcting the variable names to l and r. (Java variable names are case-sensitive.)
      It also gives the incorrect result for odd numbers due to integer division, but that's also easy to fix by changing it to l/2.0+r/2.0 .
      Regardless, their intent is correct. Who knows, maybe they were writing code in a language that doesn't do integer division like that, and which has case-insensitive variable names!

    • @fox_the_apprentice
      @fox_the_apprentice 11 місяців тому

      @@alstuart Not sure my other comment notified you correctly. Sorry if this is a double-ping!

  • @Nellak2011
    @Nellak2011 11 місяців тому

    One other thing I would add is a defensive early check that left is not greater than right, as we assume, because if it is greater than right then it will lead to an underflow.
    if (l > r) {
    throw new IllegalArgumentException("Left pointer is greater than Right pointer. It is a programming bug.");
    }
    int midpoint = l + (r - l) / 2;

    • @dealloc
      @dealloc 11 місяців тому +1

      That won't ever be the case as long as we're searching the _index_ of an array, which can only be a positive integer.
      First; the expression (L + (R - L) / 2) will never underflow, as long as L>=0 and R>=0. This is because L is added back into the result of (R - L) / 2, so even if (R - L) / 2 were a negative number, adding back L corrects for it.
      Secondly, it would be a compiler/runtime bug, because the while loop counts from L until R, which are bounded by 0...length of array, and only shrunk towards the midpoint, resulting in a positive integer.
      In case we can have negative indices, then we'd need to conditionally check the bound and use (R + L) / 2 instead, otherwise fall back to the previous equation.

    • @rogo7330
      @rogo7330 11 місяців тому

      If you use signed integers, it would never happen in C, because signed integer overflow is undefined. Basically, the compiler (CAN) assume that `l` will never be less than `r` because you never change them in that way. So, be careful with assumptions like that.

  • @anon_y_mousse
    @anon_y_mousse 11 місяців тому +4

    I guess I didn't notice you writing it that way in the first video, but I've known about this weird way of getting half the distance between two indices for a long time and specifically avoided it because of this known issue. I generally use C and adding a single extra instruction that takes one clock is not a big deal for a compiled language. Even at a billion entries it's more than fast enough to not warrant worrying about it. This is one of those edge cases that can ruin your year and is probably one of the reasons that Knuth came up with that stupid phrase. I guess I should've paid more attention in that first video and chided you for it, but I was fighting my Python installation that didn't want to use numpy. Bad Mike!

  • @Jonathan-qi9rh
    @Jonathan-qi9rh 5 місяців тому

    I love how people comment about using shifts instead of dividing by 2, not realizing that any sane compiler or interpreter of any sane programming language in the last 25 years does this anyway. Don't try to outsmart the compiler in optimizations. You will fail. Let it optimize for you and only then check if there's anything you could improve (and if you can, you probably don't need this advice).
    Anyway, shifts would probably be a better choice in this code from a style point of view: (l+r)>>>1 looks better than l+(r-l)/2. This only works in Java though.

  • @prepe5
    @prepe5 11 місяців тому +1

    funny, i had that problem a few months ago while implementing a binary search on a microcontroller. I had to use a Word as the index so i only had 65535 as max index, and i noticed that the overflow was not handled correctly in the basic binary search. It didn't even occur to me that such an obvious problem was unknown for so long.

  • @lokpokit6018
    @lokpokit6018 11 місяців тому +1

    why not simply change the formula: m = (L/2) + (R/2) instead? We know (L+R) / 2 = L/2 + R/2 and L/2 < L and R/2 < R; so, it would avoid overflow.

  • @mb-3faze
    @mb-3faze 11 місяців тому +3

    Would have thought that L/2 + R/2 would have been better than L + (R - L)/2. L/2 is just a bit shift right, so pretty fast. Handling the case where both L and R are odd is not too difficult (just add one).

    • @Andersmithy
      @Andersmithy 11 місяців тому +1

      So you’re performing the same number of operations, but swapping a subtraction for division. But also you’re branching to add 1/4 of the time?

    • @mb-3faze
      @mb-3faze 11 місяців тому +1

      @@Andersmithy I suspect Mike's implementation is more reliable across architectures and compilers. Dividing by 2 is just a bit shift, so (L >> 1) + (R >> 1), which has got to be pretty quick. The issue is you have to add (L & 1) & (R & 1) to the result to account for both being odd numbers. But there are no branches in the code.
      So ans = (L >> 1) + (R >> 1) + (L & R & 1)
      (The compiler *could* have a branch in the logical bit - after all, if L & 1 is zero it doesn't have to do the R & 1 part)
      However, Mike's code is just L + ((R - L) >> 1) so, yeah - I suspect subtraction is pretty much implemented in hardware.
      The thing is, L >> 1 and R >> 1 could be pre-computed and stored (as two equally long arrays), then, maybe my solution would be a femtosecond faster :)

  • @MartinBarker
    @MartinBarker 11 місяців тому

    It's actually not a more complicated process - it's actually less computationally complex. Divides are computationally complex, so the smaller your divide is the better; (L+R)/2 will always be larger than (R-L)/2. Division in a computer is done with N number of additions, so the new method ((R-L)/2)+L is actually much simpler for a computer to perform.

  • @johnniefujita
    @johnniefujita 11 місяців тому +1

    I remember this error.... probably the most famous bug of all time

  • @joshuascholar3220
    @joshuascholar3220 11 місяців тому +4

    If your signed integer type is big enough to index the whole array, then you can safely find the average of two indexes using the simpler (L+R)/2 by doing the arithmetic with UNSIGNED numbers. And that's the solution that programmers would more naturally use.

    • @Phlarx
      @Phlarx 11 місяців тому +1

      Java does not actually have a primitive unsigned integer type, though Integer.divideUnsigned(L+R, 2) does do the trick. However, that still only kicks the can down the road by a factor of 2.

    • @Shadow4707
      @Shadow4707 11 місяців тому

      You should be able to just do (l+r)>>1, even with signed integers.

    • @joshuascholar3220
      @joshuascholar3220 11 місяців тому

      @@Shadow4707 signed right shift is different from unsigned right shift
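The `Integer.divideUnsigned` approach mentioned above can be tried out directly: the wrapped sum is reinterpreted as an unsigned 32-bit value before dividing (class and method names are illustrative):

```java
// The sum 1.5e9 + 1.6e9 wraps to a negative int, but divideUnsigned
// treats that bit pattern as the unsigned value 3100000000.
public class DivideUnsignedMid {
    static int mid(int l, int r) {
        return Integer.divideUnsigned(l + r, 2);
    }

    public static void main(String[] args) {
        System.out.println(mid(1_500_000_000, 1_600_000_000));  // 1550000000
        System.out.println(mid(3, 5));                          // 4
    }
}
```

As noted above, this only kicks the can down the road by a factor of 2, but it stays correct for any valid pair of Java array indices.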

  • @slipperynickels
    @slipperynickels 11 місяців тому +1

    needing a second to think of the number right before 1.2B is super relatable, lol

  • @TehPwnerer
    @TehPwnerer 11 місяців тому +1

    I wouldn't have thought to use pointers that way, obviously you subtract l from r and add that offset/2 to l. No overflows possible if you use pointer arithmetic correctly

  • @nutsnproud6932
    @nutsnproud6932 11 місяців тому

    I learned r-l in college on a PDP11 running COBOL as we had to keep the numbers small on a database for theatre ticket sales.

  • @Yotanido
    @Yotanido 11 місяців тому +12

    "That's a Python comment, not a Java comment"
    THE PAIN! Every goddamn time!

    • @AnttiBrax
      @AnttiBrax 11 місяців тому +2

      Appropriate punishment for using end-of-line comment. 😂

  • @beaconofwierd1883
    @beaconofwierd1883 11 місяців тому +2

    Would it not be more efficient to just divide both by 2 inside then add them? Then you just have (r>>1) + (l>>1).
    Though you might have to add the remainder if both are odd, so
    (r>>1) + (l>>1) + (1 & r & l)
    Pretty much the same number of operations but you're not limited to using signed ints.

    • @rb1471
      @rb1471 11 місяців тому +2

      Well why not l + ((r-l)>>1) and cut out the comparison. The addition/subtraction is nothing compared to division

    • @beaconofwierd1883
      @beaconofwierd1883 11 місяців тому

      @@rb1471 right, I was thinking l could be bigger than r, but that can never happen :)

    • @jmodified
      @jmodified 11 місяців тому

      (l + r) >>> 1 works as long as they are signed (limited to max positive int value). >>> is unsigned shift.

  • @GuagoFruit
    @GuagoFruit 11 місяців тому +10

    Basically any code with a sum is prone to this bug then. I do wish languages have a built in error checker where it checks if the output sum is smaller than either input and raises an error if found, although that would also add ungodly overhead. Seems more like an implementation detail than a logical bug though.

    • @RepChris
      @RepChris 11 місяців тому +4

      I mean for one you can add negative numbers, so you cant just check if its larger. Secondly there is a way to do this, its just not the default since the overflow can be desired behavior. In lower level languages you can just check the CPU register that tells you if there has been an overflow, in java you cant do that but it has a Math.addExact(a,b) that does exactly what you want, it throws an error if there was an overflow. there isnt much overhead in that function, i assume, at least not much more than the java overhead.

    • @RepChris
      @RepChris 11 місяців тому +6

      you could also use a bigint library that doesnt have any overflow, but that will most definitely have much more overhead than using a intrinsic type.

    • @IceMetalPunk
      @IceMetalPunk 11 місяців тому +3

      Overflows are often used as clock signals, particularly in embedded electronics, so preventing them by default isn't ideal.

    • @marcogoncalves1073
      @marcogoncalves1073 11 місяців тому +7

      There are compiling flags for most languages that do exactly that, it's just that they are debugging flags and almost no one uses them unfortunately.

    • @andrewharrison8436
      @andrewharrison8436 11 місяців тому

      On the DEC PDP and VAX machines the default language was BASIC and that by default checked for overflow. It may not be trendy but that may have been a good thing.
      I did have COBOL code that used PIC 9 (i.e. a single character as a digit from 0 to 9) and that happily looped trying to get to the 10th element of an array as "9" + "1" just saves "0" into the single character.
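The overflow-checking arithmetic mentioned in this thread is available in Java's standard library as `Math.addExact`, which throws instead of silently wrapping. A minimal demonstration:

```java
// Math.addExact throws ArithmeticException on int overflow rather than
// wrapping around like the plain + operator.
public class AddExactDemo {
    public static void main(String[] args) {
        System.out.println(Math.addExact(1_000_000, 2_000_000));  // 3000000
        try {
            Math.addExact(2_000_000_000, 2_000_000_000);  // would exceed Integer.MAX_VALUE
        } catch (ArithmeticException e) {
            System.out.println("overflow caught");
        }
    }
}
```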

  • @ptl5585
    @ptl5585 11 місяців тому

    Instead of (left + right)/2 you can just do (left/2) + (right/2).
    This is way simpler and mathematically more intuitive than left + (right-left)/2

  • @lathamgreen
    @lathamgreen 11 місяців тому

    The sun was setting quite beautifully through the window

  • @transcendtient
    @transcendtient 11 місяців тому +4

    Why does Java use signed integers for an array structure that doesn't allow negative integers?

    • @antoniogarest7516
      @antoniogarest7516 11 місяців тому +3

      Java primitives are signed. Also, operating with signed numbers can be less error prone than with unsigned numbers. For example, in C/C++ when you take an unsigned number and subtract another unsigned number greater than it, you'll get the wrong result. For example, doing the operation 1-3 with unsigned 8-bit numbers won't be -2, it will be 254.

    • @0LoneTech
      @0LoneTech 11 місяців тому +2

      Firstly, the problem remains with unsigned integers; the incorrectly calculated index just might access defined memory and lead to more confusing misbehaviour, such as an infinite loop or incorrect answer. Secondly, it can be useful to apply an offset to an index, such as the (r-l)/2 value here, and in other algorithms it wouldn't be odd for such a step to be negative. Thirdly, Java doesn't know the number is an index until it's used to index, and the algorithm isn't array specific. There do exist languages which can restrict index types to match, like Ada or Clash. In Python negative indices index from the right.

    • @dan00b8
      @dan00b8 11 місяців тому +1

      The even funnier part is if they were unsigned the bug might have been harder to spot, as the overflow still happens, but this time it starts from 0, so it will give a valid index inside the array, thus not throwing an error. The result would still be incorrect, just harder to notice: since no exception was thrown, it wouldn't be obvious something was fishy

    • @rafagd
      @rafagd 11 місяців тому +2

      Java doesn't do unsigned. The creators never liked the idea, and it's just the way the language is.

    • @0LoneTech
      @0LoneTech 11 місяців тому +1

      ​@@antoniogarest7516That just shifts the over/underflow boundaries around, though. -100-100 isn't 56 either. Java was designed with a 32 bits is enough attitude, Python switches to arbitrary precision, and Zig allows you to specify (u160 and i3 are equally valid types). Ada is also quite happy to have a type going from 256 to 511.

  • @RickeyBowers
    @RickeyBowers 11 місяців тому

    Doesn't happen in assembler because we have a carry flag (RCR to divide 33-bit result by two). Also we know the indices are unsigned values. So, we can use the full range of the register size.

    • @turdwarbler
      @turdwarbler 11 місяців тому

      rubbish. Overflow happens in assembler in exactly the same way. Yes, you have an overflow flag BUT you have to check it; it doesn't stop the overflow in the first place, and I bet you a car that most assembler programmers would not bother to check for overflow unless they had some idea the numbers were going to get large. Just like it's a non-event in a language like C unless you know you will be processing large numbers.

    • @RickeyBowers
      @RickeyBowers 11 місяців тому

      @@turdwarbler sure, we need a more verbose definition of the function which matches our assumptions. Only through intrinsics does C have access to the instructions or machine state to handle function definition with the assumptions made - the compiler is not going to grok what the math intends to use the instructions designed for this purpose.

  • @TheMR-777
    @TheMR-777 11 місяців тому +2

    Well, quite honestly, you just taught a new vulnerability analysis trick :)

  • @phasm42
    @phasm42 11 місяців тому +3

    You can also use unsigned right shift instead of dividing by two, mid = (left + right) >>> 1
    I'm pretty sure integer overflow has defined behavior in Java (as opposed to C++, where it may behave as expected, but it's technically undefined behavior).

    • @dogman_2748
      @dogman_2748 11 місяців тому +5

      I would like to believe the bytecode would optimize the division into a bitshift automatically

    • @phasm42
      @phasm42 11 місяців тому

      @@dogman_2748 that misses the point. Dividing by two is a signed operation. The unsigned right shift takes the overflowed negative number and returns the correct positive number

    • @ExEBoss
      @ExEBoss 11 місяців тому +1

      ⁠@@dogman_2748 It only does that with multiplication, as `x / 2`, `x >> 1`, and `x >>> 1` all produce different results for `x = ‑1`.

    • @simon7719
      @simon7719 11 місяців тому

      That shift does nothing to solve the described bug, though.

    • @phasm42
      @phasm42 11 місяців тому

      @@simon7719 it absolutely does. You can add 2147483500 and 2147483600, get -196, do a right unsigned shift, and get 2147483550. Maybe try it out first next time 🙄

  • @mihiguy
    @mihiguy 11 місяців тому +3

    In Java, you actually could have also done `(l + r) >>> 1` (unsigned right shift), but probably the compiler would have created identical code anyway :)

  • @henrikcarlsen1881
    @henrikcarlsen1881 11 місяців тому

    A simple int overflow was trivial but the talk and the simple solution was more interesting (as a paradigm on how to think when creating algorithms)

  • @jimr7987
    @jimr7987 10 місяців тому

    This only changes the place where a bug appears. If an array is defined indexed from -1,000,000,000 to 1,000,000,000 (minus a billion to plus a billion) then the very first step in a binary search will overflow with the new formula. But the original formula will not. To fully fail-safe the code would require an if statement to choose the expression that does not overflow...

  • @brantwedel
    @brantwedel 11 місяців тому +28

    I wonder if any languages have a compiler optimization that tries to simplify mathematical operations: so it would take "l + (r - l) / 2" and turn it back into "( l + r) / 2" 🤔

    • @0LoneTech
      @0LoneTech 11 місяців тому +23

      That style of optimization exists, e.g. in gcc's -ffast-math, which enables some for floating point processing. However, the buggy code has undefined behaviour which the corrected code does not, so this erroneous change should not be produced by an optimization pass.

    • @asagiai4965
      @asagiai4965 11 місяців тому

      I think you made the code more complicated and expensive by doing that.

    • @antonliakhovitch8306
      @antonliakhovitch8306 11 місяців тому +6

      Generally speaking the answer is no. Compiler optimizations shouldn't change the behavior of the code, so something like this would be considered a bug.
      Optimizations WILL do things such as replacing multiplication with bitshifting when possible, or reordering math when it doesn't make a difference (for example, multiple consecutive addition operations)

    • @JarkkoHietaniemi
      @JarkkoHietaniemi 11 місяців тому

      @@antonliakhovitch8306 That optimization would be correct only if the numbers are ideal numbers, behaving exactly like math says. The computer language integers do not do that, they operate in modulo arithmetics.

    • @antonliakhovitch8306
      @antonliakhovitch8306 11 місяців тому +1

      @@JarkkoHietaniemi Multiple consecutive additions are fine to reorder, even with overflow

  • @Vincent-kl9jy
    @Vincent-kl9jy 11 місяців тому +2

    why not just change the order of operations? (R/2 + L/2)

    • @sasho_b.
      @sasho_b. 7 місяців тому

      Rounding. But yes.

  • @kabinettv
    @kabinettv 11 місяців тому

    You could also go the more jank route and cast the l and r to longs, do the calculations, and then cast the resulting m back to an int

    • @clickrick
      @clickrick 11 місяців тому

      An important point to remember is that this sort of calculation will often appear in the middle of some deep loop, so performance is an issue.

  • @MedEighty
    @MedEighty 9 місяців тому

    (l + r) / 2 = l/2 + r/2, so I would have just halved each of l and r (or shifted them right, which has the same effect) and then added them together.

  • @JamesRouzier
    @JamesRouzier Місяць тому

    Keep track of the starting point and length of elements to check.
    s = 0
    l = length of a
    while (l > 0) {
    m = s + l/2;
    if a[m] == needle {
    return true;
    } else if a[m] < needle {
    s = m + 1;
    l -= l/2 + 1;
    } else {
    l = l/2;
    }
    }
    return false
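A runnable version of the start+length idea above (class and method names are illustrative): no l + r sum is ever formed, so nothing can overflow.

```java
// Binary search over a window described by (start, length) instead of
// (left, right) pointers; the midpoint is start + length/2.
public class LengthBasedSearch {
    static boolean contains(int[] a, int needle) {
        int start = 0, len = a.length;
        while (len > 0) {
            int m = start + len / 2;   // offset within the current window
            if (a[m] == needle) return true;
            if (a[m] < needle) {       // go right: drop left half plus pivot
                start = m + 1;
                len -= len / 2 + 1;
            } else {                   // go left: keep only the left half
                len = len / 2;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(contains(a, 7));  // true
        System.out.println(contains(a, 8));  // false
    }
}
```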

  • @greggoog7559
    @greggoog7559 11 місяців тому

    My first intuition would've been to use 'l/2 + r/2', but your solution is probably faster and more elegant.

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 11 місяців тому

      The compiler probably optimizes them to the same method anyway

    • @greggoog7559
      @greggoog7559 11 місяців тому

      Oh? I would consider that a genuine compiler bug then, as it has the potential to cause errors that wouldn't occur without the optimization.

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 11 місяців тому

      @@greggoog7559 Yes, I didn't realize at the time that l/2 + r/2 actually does something else (assuming integer math.)

    • @greggoog7559
      @greggoog7559 11 місяців тому

      Yes, my main point is that this also avoids the overflow. The main problem with it is that it also rounds differently.

    • @pwhqngl0evzeg7z37
      @pwhqngl0evzeg7z37 11 місяців тому

      @@greggoog7559 Sure; my main point was that the compile will optimize expressions to equivalents, likely better than can be done by hand, so it's a waste to hand-optimize. Unless there is a non-equivalent expression which is equivalent over your problem domain- the compiler may not know what the domain is.

  • @SaddCat
    @SaddCat 11 місяців тому +1

    At 5:23 is it supposed to be R-L instead of L-R? Maybe it works both ways I don’t know.

    • @longbranch4493
      @longbranch4493 11 місяців тому

      Yeah, it should have been R-L. It won't work both ways since (L-R)/2 will be a negative value.

  • @snoopyjc
    @snoopyjc 11 місяців тому

    It should also work if you change the division by 2 to a logical right shift by 1: >>> 1, because logical right shift doesn't treat the top bit as a sign

  • @thevaf2825
    @thevaf2825 11 місяців тому +15

    -in some architectures it should be faster to just shift left r and l (divide both uints by 2), then subtract. depending on hardware resource, the two shift ops could happen in parallel.-
    edit: what I wrote wouldn't work, as people have pointed out in their replies below:

    • @skeetskeet9403
      @skeetskeet9403 11 місяців тому +10

      that doesn't quite work as integer division isn't -associative- distributive
      (1 + 1) / 2 = 1
      (1 / 2) + (1 / 2) = 0

    • @Bunny99s
      @Bunny99s 11 місяців тому +1

      -Right, a right shift by 1 on both r and l and then add them of course does the same without the issue. (r+l)/2 is the same as r/2 + l/2 which is the same as (r>>1) + (l>>1)-
      See below.

    • @arbazna
      @arbazna 11 місяців тому +1

      you need to save the remainder just in case both indices are odd numbers, then add 1 if so happens.

    • @B20C0
      @B20C0 11 місяців тому +7

      @@skeetskeet9403 Yep, it's usually things like this where mathematicians fail in computer science 😛

    • @skeetskeet9403
      @skeetskeet9403 11 місяців тому +3

      @@arbazna which you can do as:
      (l / 2) + (r / 2) + ((r & l) % 2)
      which can be optimized by any decent compiler to
      (l >> 1) + (r >> 1) + (r & l & 1)
      but at this point the solution shown in the video seems a lot better to me as it's simply
      l + ((r - l) >> 1)

  • @elraviv
    @elraviv 11 місяців тому +1

    I think I've never encountered it, because I've used a shift operation (>>1) instead of dividing by 2.

    • @natescode
      @natescode 11 місяців тому

      The compiler would too

  • @captainchicks
    @captainchicks 11 місяців тому +1

    Talking about Java: Why not simply embrace the "extra bit" and use unsigned shift to divide by two instead? (l+r)>>>1

  • @ArtanisKizrath
    @ArtanisKizrath 11 місяців тому

    There was this one one-off project where I implemented a binary search in Java. I wasn't aware of this. I hope they never process more than 1.2 billion records.

  • @finmat95
    @finmat95 3 місяці тому

    Hidden overflow --> easy fix (just write l + (r - l) / 2 instead of (l + r) / 2 from the textbook). Nice.

  • @theonly5001
    @theonly5001 11 місяців тому

    I would rather go with L/2 + R/2.
    That is just 2 bit-shift operations and an addition afterwards.
    Especially if I'm coding for embedded hardware. The compiler will probably catch that for me, but I can make sure that it will happen by applying the bit shift manually.
    I don't know if an embedded processor can actually perform a bit shift on load of a variable, or a bit shift after an addition. That might be a specific instruction you could perform.
    Maybe the compiler is intelligent enough to see what you're trying to do and compiles that error away.

    • @clickrick
      @clickrick 11 місяців тому

      If L and R are both odd, you'd need to compensate and add another 1. The extra test has just nullified any benefit you thought you'd managed to get.

  • @raydaypinball
    @raydaypinball 11 months ago

    What would have happened if the code had used an unsigned integer as the type? Just incorrect results, right? Is that why they chose a signed integer, so if something goes wrong, at least you know about it with an exception?

  • @martincohen8991
    @martincohen8991 11 months ago +1

    Are there any situations when (l+r)/2 and l+(r-l)/2 give different values when (r-l)/2 is truncated?

    • @LarkyLuna
      @LarkyLuna 11 months ago

      L+R and R-L should have the same remainder mod 2 and will truncate the same way, I believe.
      L + (R/2 - L/2) maybe would be different than (L + R)/2?
      Testing an example:
      L=1, R=6
      (L+R)/2 = 3.5 → 3
      L + (R/2 - L/2) = 1 + 3 - 0 = 4
      Yup
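The parity argument above checks out: (L+R) and (R-L) differ by 2L, so truncating division treats them the same way. A brute-force sketch (my naming), assuming 0 <= L <= R and no overflow:

```java
public class ParityCheck {
    // True iff (l + r) / 2 == l + (r - l) / 2 for all 0 <= l <= r <= max.
    static boolean formsAgree(int max) {
        for (int l = 0; l <= max; l++)
            for (int r = l; r <= max; r++)
                if ((l + r) / 2 != l + (r - l) / 2)
                    return false;
        return true;
    }

    public static void main(String[] args) {
        // (l + r) - (r - l) = 2l is even, so both sums have the same parity
        // and truncate identically; the forms only diverge via overflow.
        System.out.println(formsAgree(200)); // true
    }
}
```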

  • @ecjb1969
    @ecjb1969 11 months ago +1

    Why not L

  • @jonbondMPG
    @jonbondMPG 11 months ago

    17!.... No significance in the number 17! That's the bus from Bulwell to the City, absolutely critical I believe....

  • @TuanNguyen-rd1ji
    @TuanNguyen-rd1ji 11 months ago +8

    I myself would just divide L and R each by 2, then add them together, but I suppose that wouldn't be as effective as yours. Great video as always.

    • @mcmaddie
      @mcmaddie 11 months ago +3

      And also not correct as pointed out on other answer. Midpoint between 3 and 3 would be 2.

    • @vytah
      @vytah 11 months ago +2

      That yields 2 as a midpoint between 3 and 3.

    • @physcannon
      @physcannon 11 months ago +1

      Dividing L and R each is really floor(L/2) + floor(R/2) in integer arithmetic (assuming L and R are positive), which is not correct. Consider L = 1 and R = 3.

    • @AndreuPinel
      @AndreuPinel 11 months ago +1

      @@physcannon Exactly. Any pair of odd numbers will make it fail.
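A sketch of the failure mode just described: integer division floors each half separately, so when both endpoints are odd the two half-units are lost. The repair with (l & r & 1) from earlier in the thread puts the lost unit back (names are mine):

```java
public class HalvesPitfall {
    // Flooring each half separately: wrong whenever both endpoints are odd.
    static int midHalvesNoFix(int l, int r) {
        return l / 2 + r / 2;
    }

    // Adding the lost unit back when both endpoints are odd repairs it.
    static int midHalvesFixed(int l, int r) {
        return l / 2 + r / 2 + (l & r & 1);
    }

    public static void main(String[] args) {
        System.out.println(midHalvesNoFix(1, 3)); // 1, but the midpoint is 2
        System.out.println(midHalvesNoFix(3, 3)); // 2, but the midpoint is 3
        System.out.println(midHalvesFixed(1, 3)); // 2
        System.out.println(midHalvesFixed(3, 3)); // 3
    }
}
```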

    • @minilathemayhem
      @minilathemayhem 11 months ago +1

      Doing 2 divisions effectively doubles the time it takes to execute the formula, since division is already the most expensive operation being done (it's not exactly doubled, and with modern CPUs it doesn't sound like a lot of slowdown since it happens in nanoseconds, but it adds up). That's on top of your formula relying on floating-point math and conversion from float back to integer.

  • @sumdumbmick
    @sumdumbmick 11 months ago +1

    In general, the way students are taught math relies heavily on growing the values first, then bringing them back down to the desired size for the end result. This is a cultural problem which is entirely avoidable.
    For instance, when I'm tutoring I point out to students that they don't have to treat something like 15/6 * 42 as 15 * 42 later divided by 6. But there's always pushback, because 'but my teacher said...' or 'but the order of operations puts multiplication before division'. When you know what you're doing those arguments are obviously silly, but someone just learning, who's been indoctrinated with this 'grow it first, then bring it back down to size later' attitude, is going to go out into the world and write code that causes precisely this bug. It's entirely preventable by simply teaching more wisely in the first place.

    • @sumdumbmick
      @sumdumbmick 11 months ago

      When teaching how to find means, for instance, I always start off by asking the student 'what's right between the numbers?', which ends up being more or less the same as L + (R - L)/2, except that it works on human intuition about numbers instead of those arithmetic operations.
      So finding the mean of 6 and 10 might actually be done in any number of ways when posed as 'what's right between them?', but it's never going to involve (6 + 10)/2, because that's not a method they'll think of if this is the very first time they're being introduced to taking a mean.
      In this way, our education system has directly caused the problem behind this bug, since it's demonstrable that virtually nobody would use the problematic expression on their own unless told to. And yet nearly 100% of educated adults will go to the problematic expression first, and fuss over any qualms that anyone might present about it. That makes it an unambiguously cultural problem.
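The reordering point about 15/6 * 42 is easy to see in integer arithmetic: dividing early is exact precisely when the divisor actually divides one of the factors, and then it also keeps the intermediate values small. A throwaway sketch:

```java
public class ReorderArithmetic {
    public static void main(String[] args) {
        // Grow-then-shrink: correct, but the intermediate value is 630.
        System.out.println(15 * 42 / 6);   // 105

        // Shrink-then-grow: 42 is divisible by 6, so nothing is truncated
        // and the intermediate value is only 7.
        System.out.println(15 * (42 / 6)); // 105

        // Naively dividing the left operand first does truncate (15/6 = 2).
        System.out.println(15 / 6 * 42);   // 84, not 105
    }
}
```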

  • @idjles
    @idjles 11 months ago +1

    You almost created another bug by dividing an odd integer by two. It doesn't hurt you here, but you should have talked about the truncation when right-shifting (integer division) and why it doesn't matter here.

  • @RebelliousX
    @RebelliousX 11 months ago

    Instead of m = (L+R)/2, use m = (int)(L/2.0 + R/2.0). Problem solved: it won't overflow unless L/2 + R/2 > (2^32)/2 - 1 (~2 billion positive integer values).

    • @balijosu
      @balijosu 11 months ago

      This doesn't work. A 32 bit float doesn't have the precision to represent a large 32 bit int exactly.

    • @RebelliousX
      @RebelliousX 11 months ago

      @@balijosu In fact it does: a float can handle 1.1+ billion easily. I would revise that line of code for simplicity to m = (int)(L/2.0 + R/2.0); R and L will be promoted to double automatically due to division by a double (2.0 is a double, 2.0f is a float).
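For what it's worth, the double-promoted version in this thread does hold up in Java: halves of 32-bit ints are multiples of 0.5 and exactly representable in a double, so the sum equals (L+R)/2 exactly and never wraps. A sketch (naming is mine):

```java
public class DoubleHalvesMid {
    // l/2.0 and r/2.0 are exact doubles, so the sum is exact and the
    // truncating cast gives the floored midpoint for nonnegative indices.
    static int mid(int l, int r) {
        return (int) (l / 2.0 + r / 2.0);
    }

    public static void main(String[] args) {
        int l = 2_000_000_000, r = 2_100_000_000;
        System.out.println(mid(l, r));   // 2050000000
        System.out.println((l + r) / 2); // negative: the int form wrapped
    }
}
```

The cost is two floating-point divisions and a conversion per probe, which is why the pure-integer forms are usually preferred.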

  • @longbranch4493
    @longbranch4493 11 months ago +1

    For me, (l + r) / 2 is counterintuitive, so I would never write it that way; l + (r - l) / 2 is way more obvious.