The Genius Way Computers Multiply Big Numbers

  • Published Jan 4, 2025

COMMENTS •

  • @PurpleMindCS  2 days ago  +32

    To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/PurpleMind/. You'll also get 20% off an annual premium subscription.

  • @enormousearl8838  1 day ago  +285

    Cunningham's Law: "The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer." Kolmogorov demonstrated Cunningham's Law before the internet existed lol.

    • @PurpleMindCS  1 day ago  +23

      Lol

    • @edwardmacnab354  1 day ago  +4

      No, the best way to get a flood of unwanted email ads is to post an answer on Reddit.

    • @mustgreetor  1 day ago  +7

      @@edwardmacnab354 Cunningham's Law in action right here

    • @lexigan6896  1 day ago  +1

      @edwardmacnab354 wait what?? could you explain more

  • @EdgarRoock  2 days ago  +337

    8:18 Speaker: I'd encourage you to pause the video and see if you can figure that out.
    Me: No, no, please go on.

    • @DavidLindes  2 days ago  +1

      mood!

    • @louisrobitaille5810  2 days ago  +9

      8:22 I didn't need to pause the video. a*c and b*d can be calculated first, then stored in memory and used for the "middle step", saving computation time. The proof of turning 100(a*d + b*c) into 100[(a+b)*(c+d) - a*c - b*d] isn't nearly as obvious though, unless you know the mathematical trick for it I think 🤔. I haven't tried to do it yet 🤷‍♂️.
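
      A minimal sketch of that reuse in Python (the function name and 4-digit framing are illustrative, not the video's code):

          def mul_4digit(x, y):
              # Split 4-digit numbers into 2-digit halves: x = 100*a + b, y = 100*c + d.
              a, b = divmod(x, 100)
              c, d = divmod(y, 100)
              ac = a * c                         # computed once, reused twice below
              bd = b * d
              mid = (a + b) * (c + d) - ac - bd  # equals a*d + b*c, so only 3 multiplies
              return 10000 * ac + 100 * mid + bd

          assert mul_4digit(3141, 2718) == 3141 * 2718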

    • @projekcja  2 days ago  +11

      I actually paused the video, figured it out, and continued the video only to hear him suggesting I would pause the video.

    • @DavidLindes  1 day ago  +1

      @@projekcja Been there! I didn't with this one, but I've totally done that. :)

    • @klausgrnbk6862  1 day ago  +3

      @@louisrobitaille5810 yeah, It felt counter intuitive to add b and c to the terms, and then subtract the products of the addition again. That to me is the genius, it was a flashback to wonderfully simple math learned 30 years ago ;)

  • @vlc-cosplayer  2 days ago  +39

    Saying "it's impossible to improve on this" is the cheat code to get people to find a better solution, just so they can prove you wrong.
    Reminds me of posting a question and then giving yourself a wrong answer on purpose, because people would rather prove someone wrong than help someone else 😆

    • @SheelByTorn  1 day ago  +1

      that's how conjectures work tho

    • @altrag  16 hours ago  +1

      @@SheelByTorn I'd strengthen that statement: it's not just how conjectures work, it's their entire purpose for existing - a challenge to the world to either prove or disprove a hypothesis.

  • @Qfeys  1 day ago  +167

    On the graph, you can see that the bend in the Python curve starts at 2^6, which is incidentally the same as 64. This is because on your computer - assuming you use a 64-bit computer - there is a chunk of silicon that has the multiplication algorithm burnt into it. Burning it into the silicon has the advantage that it can do all the additions in parallel, so the time is no longer dependent on the number of simple operations.
    When Python has bigger numbers and needs to use the Karatsuba algorithm, it only breaks numbers down until they are 64 bits, and then sends those to the CPU, as the CPU always does its multiplication at the same speed, no matter how many digits you need.
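
    A rough sketch of that limb idea in Python (illustrative only; CPython's real representation uses 30-bit digits, as the author's clarification comment further down explains):

        def to_limbs(n, bits=64):
            # Split a big integer into base-2**bits "digits" (limbs), low limb first.
            mask = (1 << bits) - 1
            limbs = []
            while n:
                limbs.append(n & mask)
                n >>= bits
            return limbs or [0]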

    • @thomquiri9860  1 day ago  +3

      I'm dumb af, I'm not yet ready to instinctively assume that 64 "digits" can be binary digits lmao, good catch! But then he's wrong when he says it's just the basic algorithm being applied here; it just uses a hardware instruction, and any "big number" uses Karatsuba's algorithm anyway

    • @thomquiri9860  1 day ago  +3

      so after thinking more about it, there's no way Python uses a simple mul instruction, because that wouldn't handle overflow, but maybe it does multiple 32-bit multiplications in a 64-bit register so that it can't overflow?

    • @kellan5431  1 day ago  +11

      @thomquiri9860 It probably depends on the architecture, but x86_64 does have a 64-bit multiplication instruction. It stores the full result in 2 registers.

    • @christianbarnay2499  1 day ago  +10

      I'm surprised he didn't mention it since it corresponds to the Word model and "doing some basic arithmetic on single digit numbers" that he mentioned in the introduction at 1:30.
      As you described it, the 64-bit CPU architecture has physical components that ensure constant time operation on 64-bit numbers. So in this architecture, 64-bit numbers (which we call a word) are the actual "single digit" numbers that cost exactly one operation. And this graph is a direct confirmation that the 64-bit version of Python performs as it should by delegating all 64-bit operations to the CPU and cutting bigger numbers into 64-bit blocks.

    • @asandax6  1 day ago  +3

      If you are going to go down this rabbit hole you might as well mention SIMD which can do multiple 64bit or less operations per instruction and can also do multiplication and addition in one instruction.

  • @esra_erimez  2 days ago  +68

    In Knuth's "The Art of computer Programming" Volume 2 has a Chapter 4.3.3. "How Fast Can We Multiply?". It is a very interesting read.

  • @Starwort  2 days ago  +362

    FYI Python integers have a `.bit_length()` method you could have used instead of converting them to a string and taking their length
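
    For example (a quick sketch; note the two measure different things, bits versus decimal digits):

        n = 10**100
        print(n.bit_length())  # 333: number of bits, cheap to compute
        print(len(str(n)))     # 101: decimal digits, but str() on big ints is expensive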

    • @PurpleMindCS  2 days ago  +125

      Ah! I did not realize that. Was looking for something like this. Thanks :)

    • @YatharthRai  2 days ago  +3

      Thanks I'll use this

    • @mohamedahmed-rk5ez  2 days ago  +16

      However, this .bit_length() method is really good, but it gives you 1+floor(log2(n)) and can't give you a fractional value.
      It will be good for discrete cases

    • @0xDEAD_Inside  2 days ago  +1

      Damn! Nice!

    • @sdspivey  1 day ago  +5

      @@mohamedahmed-rk5ez Why? There is an exact number of bits. Even if it rounded up to the next whole byte, it would still be an integer, ie. rational. Seems like python would be faster just to count the number of bytes or whatever int-class is used.
      Right off, I could just take the max_index of the array that holds the number, multiply that by the int_size, then subtract off each zero bit at the top. Always an integer answer, no slow FP functions needed. Am I smarter than the python programmers? Probably not.

  • @Polyamathematics  1 day ago  +12

    This video is exceptionally well made and well paced. I got to the end of the video without realising 20 mins had passed. Amazing work!

    • @PurpleMindCS  22 hours ago  +2

      Thanks so much! Everyone go check out Polyamathematics by the way -- fantastic channel that you'd definitely really enjoy if you like my content.

    • @leisti  10 hours ago

      I agree. I subscribed immediately.

  • @jwpogue  2 days ago  +193

    This video is ridiculously well made! holy crap! The animations and scripts are beautiful and incredible!

    • @PurpleMindCS  2 days ago  +17

      Thanks so much! Glad you enjoyed.

    • @friedrichmyers  2 days ago  +3

      It probably uses manim

    • @TheDavidlloydjones  2 days ago  +3

      I know your type. I bet you're one of those pancake-eaters the machine seems to like so much.

    • @JordanMetroidManiac  2 days ago  +7

      It looks similar, if not identical, to 3Blue1Brown’s open source animation tool.

    • @friedrichmyers  2 days ago  +1

      @@JordanMetroidManiac It is VERY similar.

  • @drdilyor  2 days ago  +86

    btw, the Schönhage-Strassen algorithm uses FFT / NTT, which is another genius, very practical O(n log n) algorithm for multiplying polynomials, but when applying it to number multiplication, we run into precision issues or the NTT modulo becomes too small beyond some point; that's why there is an extra log log n factor.
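
    A toy illustration of the float-precision angle in Python with NumPy (digit lists are little-endian; this is a sketch, not how real libraries do it - they use an NTT or careful error bounds):

        import numpy as np

        def fft_mul(a, b, base=10):
            # Convolve digit sequences with an FFT, round back to integers, then carry.
            # The rounding step is exactly the precision issue mentioned above:
            # for huge inputs, float error outgrows the 0.5 rounding margin.
            size = 1
            while size < len(a) + len(b):
                size *= 2
            spec = np.fft.rfft(a, size) * np.fft.rfft(b, size)
            conv = np.rint(np.fft.irfft(spec, size)).astype(int)
            out, carry = [], 0
            for c in conv:
                carry, d = divmod(int(c) + carry, base)
                out.append(d)
            while carry:
                carry, d = divmod(carry, base)
                out.append(d)
            return out

        print(fft_mul([2, 1], [4, 3]))  # 12 * 34 = 408 -> [8, 0, 4, 0]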

    • @chrisalex82  1 day ago  +3

      Based pfp btw

    • @asandax6  1 day ago  +2

      Most of Physics and Mathematics is just Precision Problems. Equations tend to work until they don't.

  • @PurpleMindCS  1 day ago  +39

    Hi everyone! A lot of you are pointing to a particular spot in the video (namely around 16:00) where I'll admit I was a bit unclear with my explanation of what's going on, so let me clarify a few things here:
    First of all, on the horizontal axis, n is the number of **base 10** digits of the numbers being multiplied, not the number of bits. Secondly, the "version of the naive O(n^2) algorithm" I referred to is, on most modern computers (for up to 64-bit numbers), implemented in parallel on a CPU. That takes us up to 19 base 10 digits, which is roughly 2^4.24. On the graph, this is the section from log_2 n = 0 to log_2 n = 4.24, and there you really do see an almost perfect horizontal line, which makes sense if multiplications of this size are done within just a few clock cycles in hardware.
    Thirdly, I pointed to n = 2^7 = 128 **base 10** digits (up to 426 bits) for the part where the slope changes, indicating a switch to Karatsuba's algorithm at that point. After reading through the source code: github.com/python/cpython/blob/main/Objects/longobject.c, it appears that the exact cutoff happens at KARATSUBA_CUTOFF = 70 **base 2^30** digits on 64-bit machines, or 70 **base 2^15** digits on 32-bit machines. My computer is a 64-bit machine and (2^30)^70 is a 64 digit **base 10** number, so I think the correct place to draw the dotted red line on the graph in the video was at log_2 n = 6 instead of 7. Coincidentally, 64 just happens to be a power of 2, which is probably what was causing some of the confusion. But between roughly 4.24 and 6 on the graph, Python really is using the O(n^2) "schoolbook algorithm," as it's called in the source code linked above, because the asymptotic advantage of Karatsuba's algorithm isn't kicking in yet for numbers that small, due to large amounts of overhead. However, the "schoolbook algorithm" is also implemented with base 2^30 (for 64-bit) or base 2^15 (for 32-bit) numbers, so the video I have on screen with the base 10 representation is just to give you an approximate picture of what's going on behind the scenes, since base 2^30 numbers are obviously very hard to represent visually in an appealing way.
    Let me know if there's anything else you think I missed or that I was unclear about!
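
    For anyone who wants to poke at this themselves, a small sketch (the 30-bit digit size and cutoff are from the CPython source linked above; cpython_digits is an illustrative helper, not a real API):

        import math

        KARATSUBA_CUTOFF = 70  # from Objects/longobject.c

        def cpython_digits(n, bits=30):
            # How many base-2**30 internal digits CPython needs to store n
            # (64-bit builds; 32-bit builds use 15-bit digits instead).
            return max(1, math.ceil(n.bit_length() / bits))

        print(cpython_digits(10**19))   # 3 -- tiny, handled on the hardware-backed path
        print(cpython_digits(10**128))  # 15 -- still well under the cutoff of 70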

    • @victorvanlent1312  3 hours ago

      Can't you pin this comment?

    • @erikkonstas  3 minutes ago

      @victorvanlent1312 No, the sponsor comment most probably has to be pinned, and you can't pin two comments at the same time.

  • @ItsGray3  1 day ago  +6

    Man, how in the world do you not have more views and subscribers???? Like seriously, the animations are beautiful, the math is interesting, and the explanations are chopped down into digestible chunks.

  • @kaustubhpandey1395  2 days ago  +47

    Nice of you to Grant exposure to a small channel like 3Blue1Brown❤

    • @kaustubhpandey1395  2 days ago  +7

      P.S. this is a joke

    • @anon_y_mousse  2 days ago  +4

      @@kaustubhpandey1395 I think what you meant to say was that you *intended* for it to be a joke, but jokes have to be funny.

    • @denki2558  2 days ago  +7

      ​@@anon_y_mousse being funny is an extrinsic property (a relationship between an audience and the joke), not an intrinsic one. So, you can't define what is a joke or not based on how funny it is to you.

    • @anon_y_mousse  2 days ago  +2

      @@denki2558 There's enough universality that you often can, and let's face it, the majority of dad "jokes" are universally reviled.

    • @denki2558  2 days ago  +5

      @@anon_y_mousse Jokes can simply be unfunny to a demographic. The universality claim can also be easily disproven: for any arbitrary joke, there's at least one person who thinks it is funny - the author of it.

  • @walkergege2105  1 day ago  +2

    What dedication went into this video, I'm definitely supporting you...

  • @yashprajapati8857  2 days ago  +6

    This video is so well made! The flow of information in this is perfect. When you were talking about time complexity I was in fact recalling galactic algorithms and how the constant multiple could sometimes grow so large that seemingly faster functions could be outperformed by "slower" ones for data used in practice, and then you mentioned galactic algorithms at the end and I was utterly delighted. The flow of info was exceedingly good: introducing the algorithms and mathematics, then going into story form about the history of the breakthrough idea's discovery, deriving everything from scratch and explaining the reason behind everything, showing the practical implementation in Python and testing it, and even then not stopping there but explaining the state of development and improvement on algorithms beyond that and showing how useful research is. Amazing video, I'm impressed!

  • @raphaelmonserate1557  1 hour ago

    just finished the data structures and parallelism class at UW, and you did a great job compressing everything in that class into a wonderfully small package. 🏆

  • @Pystro  2 days ago  +27

    I wish you would have mentioned that a+b and c+d have one more digit (or bit) than each of the halves, and how that affects (or actually doesn't affect) the scaling.
    And surprisingly it doesn't even mess up the "powers of two" scheme that hardware is typically aimed at. The elementary unit that you eventually fall back to just has to be able to multiply numbers that are 1 bit longer than you'd expect from dividing your number's length by 2 raised to the number of "Karatsuba divisions" that you do.

    • @PurpleMindCS  2 days ago  +8

      That is correct! I did actually mention this in a red blurb on the screen at that time but for the sake of concision I decided to leave it as a footnote of sorts.

    • @Alphabetatralala  2 days ago  +3

      I mean, as much as this detail is interesting, about anyone can figure it out by themselves while watching the video. When there's 'one more digit' due to addition, this digit is always a 1 no matter the base, and this is especially easy to take into account in base 2.

  • @zacharyzadams  2 days ago  +24

    13:12 If you want a concrete interpretation of why O(n^log2(3)) shows up, look into the Master Theorem. Basically, for any recursive divide-and-conquer algorithm that's leaf-heavy, all you need to find the complexity class is to know the number of problems it splits into at each level and the new problem size. So every recursion of Karatsuba calls Karatsuba 3 times, each with a problem size of n/2. That's where the 2 and 3 come from, and that's also why naive multiplication which calls itself 4 times becomes O(n^log2(4))=O(n^2).
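
    You can check those exponents empirically with a toy leaf-counter (a sketch that just counts base-case multiplies, not an actual multiply routine):

        import math

        def leaves(n, branches):
            # Base-case multiplies for a divide-and-conquer multiply that halves n
            # each level: 3 branches for Karatsuba, 4 for the naive split.
            return 1 if n <= 1 else branches * leaves(n // 2, branches)

        for n in (64, 256, 1024):
            print(n, leaves(n, 4) == n**2, leaves(n, 3) == round(n**math.log2(3)))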

    • @gregorymorse8423  1 day ago  +3

      A simple divide and conquer algorithm has a log(n)-deep tree with n leaves. Karatsuba has the lo-hi vs hi-lo mul making it not a simple case. The Master Theorem proves it formally, but it was already obvious

  • @tnealclips  1 day ago  +2

    That is a very surprising and interesting result. Great video

  • @shadeblackwolf1508  17 hours ago  +3

    Funny detail: in binary, multiplication is expressible as summing one number with bitshifted copies of itself, where the bitshifts match the indices of every 1 in the other number. Imagine A times B. This algorithm iterates over A to get the indices, then for each index creates a bitshifted version of B, then needs to do some number of summations of those numbers. Not sure how well this performs, but it's fun to come up with algorithms like this. Time complexity for each step: iterating over a number's bits is O(N); iterating over the list of indices to create bitshifted copies, while fast, is still O(N^2); summing many large numbers would also be O(N^2).
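
    A small sketch of that shift-and-add idea in Python:

        def shift_add_mul(a, b):
            # Add b shifted left by the index of every set bit in a.
            # O(n) additions of up-to-2n-bit numbers -> O(n^2) overall.
            total, shift = 0, 0
            while a:
                if a & 1:
                    total += b << shift
                a >>= 1
                shift += 1
            return total

        assert shift_add_mul(37, 99) == 37 * 99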

  • @EricKolotyluk  2 days ago  +5

    I could feel the joy you have in your explanation. Thanks.

    • @markbloyd9852  20 hours ago  +1

      He does. We're climbing buddies, and I enjoy when we are resting sometimes that he shares with me some ideas that are more on my level of comprehension. He's very good at helping me understand what he's talking about.

  • @johnlehew8192  2 days ago  +45

    I figured out the two digit at a time multiplication and a few things not mentioned in the video when I was 14 y/o in 1982. I was rotating points of a tie fighter on an Apple II+ 6502 CPU. I think I got it down to 20 clock cycles to do a 16 bit multiplication on an 8 bit CPU that took 1 to 5 clocks to do a single operation on the CPU. Reached 50,000 mults per second. Then rotating 3D points required sine and cosine so I had to optimize that as well plus square roots. Lots of work and experimenting. I could rotate 35 or so 3D points on Darth’s Tie Fighter at 3 frames a second including time to clear the screen and draw the lines one pixel at a time in assembly code. This was 1 frame per second faster than Woz who built a 3D rotation utility program. I didn’t realize the importance of what I was doing, was just trying to rotate a tie fighter to show my geeky friends.

    • @TheFrewah  1 day ago  +4

      How funny! At 14, I figured out how to manually calculate square roots. My aha moment came when I did 9/3=3. That can be seen as an equilibrium: divide by something smaller than the square root and you get something larger. So 9/2=4.5. Just calculate the average and repeat: (2+4.5)/2=3.25, which is better. Similar to the Babylonian method but more intuitive. Best of all, it can be extended to do cube roots. Or fourth roots. I did nothing with it, not even telling my lousy math teacher, who I thought would say that someone had told me this.

    • @leif1075  1 day ago  +2

      That does sound like a lot of work... was it mostly fun and enjoyable? Thanks for sharing.

    • @edwardmacnab354  1 day ago  +1

      @@TheFrewah And then there's the guy who can just get the answer in his head with zero calculation

    • @ThomasPalm-w5y  1 day ago  +1

      I did something similar on the Apple II, only to realize the bottleneck was how long it took to draw lines not to calculate the end points.

    • @TheFrewah  1 day ago  +1

      @@ThomasPalm-w5y Some things may come as a surprise. Someone wanted to make a C++ program to outperform Python and it didn't work as expected. The reason was that he used a print statement which contained

  • @AdityaRaj-ki3md  1 day ago  +2

    Never expected this type of great explanation. One of the greatest channels I've found on computation.

    • @PurpleMindCS  1 day ago  +1

      Thanks so much! Glad you liked it.

  • @sidreddy7030  2 days ago  +4

    This is such a great video. Can't wait to see the CS content you will come up with next.

    • @PurpleMindCS  1 day ago  +2

      Thanks so much! I'm looking forward to it as well :)

  • @baxtermullins1842  3 hours ago  +1

    Interesting! In 1970, a couple of us in a radar lab developed code for a 16-bit, fixed point computer to multiply 2048 bit numbers as a lark. But it did allow for quad precision numbers. We used a straight-line assembly program with no recursive properties. Fast method!

  • @whiteoutbored  2 days ago  +2

    woah! as i was watching i didn't realize just how underrated this channel was! subbed, and great work on the video!!

  • @RandomBurfness  2 days ago  +27

    OH MY GOD YOU WRITE A SET INCLUSION SYMBOL INSTEAD OF EQUALITY WHEN TALKING ABOUT TIME COMPLEXITY, FINALLY SOMEONE THAT DOES IT PROPERLY!!!!!!!!!! THANK YOU!!!!!!!!

    • @drdilyor  2 days ago  +4

      🤓

    • @PurpleMindCS  2 days ago  +5

      😃😃😃😃

    • @spicybaguette7706  2 days ago  +1

      🎉🎉

    • @spicybaguette7706  2 days ago  +2

      One minor nitpick though: big O notation is an upper bound, so not all algorithms that are O(n) look like a line. For example, any algorithm that is O(log n) is also O(n), but it doesn't look like a line

    • @gregorymorse8423  2 days ago  +1

      @@spicybaguette7706 Wrong. Its worst case does look like a line, which is exactly what big O is addressing. You completely missed the point.

  • @aidanthird  2 days ago  +6

    2:07 nice touch with pi and e

  • @Derpinator01  19 hours ago  +1

    Me seeing reused terms in Karatsuba's method: "Ooh, and then we can merge the similar terms to get 9900[axc] + 100[(a+b)x(c+d)] - 99[bxd] to bring the total number of multiplications down to 3!"
    Video: *Goes straight to the math comparing O(n^log2(3)) to O(n^2)*
    Me: "Oh, that works too."

  • @MobiusHorizons  17 hours ago  +2

    Probably worth mentioning that Python doesn't "switch from the naive algorithm to Karatsuba at some size"; it's actually going from doing a single hardware multiply to multiple hardware multiplies. I.e. the inflection point on your graph is where the number goes from 1 digit to multiple digits, where each digit is the largest number the computer natively knows how to do math on. From your graph it looks like that is 2^6 (64 bits), which would make sense. Modern CPUs operate on 64-bit numbers natively.

    • @PurpleMindCS  17 hours ago  +1

      Hey, I just made a comment about this :) Hopefully that should help clear things up.

  • @TheNameOfJesus  2 days ago  +22

    @20:04 - "efficient multiplications with thousands of digits are now essential to public Key Cryptographic systems like RSA that keep your information secure." This is not accurate, because RSA is NOT used to encrypt user data. RSA is only used to encrypt the user's session key for a symmetric algorithm such as AES, Triple DES, DES, Blowfish, etc. The *average* session key used on the Internet is 200 bits, which is about 25 decimal digits. That's not a big number at all, by this video's definition of "big." Having to multiply a 25 digit number by another 25 digit number using any algorithm just once for a single SSL session (eg, one web page) is not going to make any difference to the user's experience. That's my opinion.

    • @seheyt  2 days ago  +1

      You still need the RSA keys which are routinely 4096 bits

    • @TheNameOfJesus  1 day ago  +5

      ​@@seheyt That's true, but in order to get a 4096-bit RSA key you have to multiply two 1024-bit prime numbers, p and q, then append the product of two other 1024-bit numbers, p-1 and q-1. And my point was that you have to do this process only once per session, not once per block of user text. Moreover, in this video, he said that these faster algorithms are better only when you have thousands of decimal digits, and 1024 bits is only 128 digits. So there's no performance benefit to the newest algorithms for something that's done only once every web page.

    • @seheyt  1 day ago  +2

      @@TheNameOfJesus Fair analysis. It was rather irrelevant to point out the session key though :) I wonder what the role of FFT/convolution based algorithms is in this picture. I assume they may not be suited for exact integers (rather for numerical computation)

    • @BobJewett  1 day ago  +2

      10 bits is 3 decimal digits (2^10 = 1024) so 200 bits is about 60 decimal digits.

    • @BobJewett  1 day ago  +2

      @@TheNameOfJesus No, 1024 bits corresponds to about 308 decimal digits.

  • @shadeblackwolf1508  16 hours ago  +2

    To give an illustration of why relying just on big O notation for an algorithm's runtime is dangerous, here is an example. Take a function F(x) that runs in constant time, or O(1). Now we create a new function G(x) that sleeps for 1 year before calling F(x) and returning the result of F(x). G(x) still has O(1) time complexity.
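
    Rendered literally in Python:

        import time

        def f(x):
            return x * x  # O(1)

        def g(x):
            time.sleep(365 * 24 * 3600)  # sleep one year -- still a constant
            return f(x)                  # so g is O(1) too, just uselessly slow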

  • @Blingsss  2 days ago  +2

    This video beautifully illustrates how a mathematical breakthrough can change the course of technology. Karatsuba’s Algorithm remains a cornerstone in computational theory. Pairing these insights with SolutionInn’s study tools is a fantastic way to deepen understanding.

  • @FilmscoreMetaler  1 day ago  +1

    18:27 "And _this_ is where the record stands today." 🎵

  • @IvanToshkov  2 days ago  +1

    5:45 Count Dracula! :D
    Great video btw. I really enjoyed it.

  • @pulkitsingh6966  1 day ago  +3

    amazing video! keep up the work

  • @fblua  2 hours ago

    Thank you!! Great channel!! Please continue with these enriching videos!!
    Greetings from Buenos Aires, Argentina.

  • @axelk.1782  20 hours ago

    Wow, this video was a real eye opener! The first time that I heard about Karatsuba's algorithm must have been in the late 80s or early 90s, and for all this time I have completely missed the point. Thank you so much! 🥰

    • @PurpleMindCS  19 hours ago

      Glad you enjoyed and learned something!

  • @markthart2661  2 days ago  +23

    GPUs use the naive O(n^3) for matmul, because that one can be done in parallel. Recursive (divide and conquer) does not work great with specialized circuits.

    • @gregorymorse8423  2 days ago  +1

      Wrong. Papers are published showing Strassen and Winograd outperform naive algorithms for as little as 2048×2048 matrices. Either you've been living under a rock for 20 years or rely only on free public tools which aren't providing state of the art.

    • @framegrace1  2 days ago  +1

      They use special parallel algorithms for matrix multiplication, which is the key to their speed.
      Like all hardware implementations I know of, yes, GPUs also use some sort of O(n^2) algorithm for plain multiplication.

  • @ke9tv  2 days ago  +18

    Most Python implementations use Comba multiplication rather than the schoolboy method for numbers too small to resort to the Karatsuba method. It has the same big-O time as the schoolboy method, but takes an aggressive approach to carry propagation that reduces the constant factor significantly. (The recursion in the Karatsuba logic also bottoms out into Comba, rather than going all the way down to single precision.)
    Lovely explanation! Subscribed.
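
    A toy sketch of the column-accumulation idea in base 10 (real Comba code works on machine words and carefully manages double-width accumulators):

        def comba_mul(a_digits, b_digits, base=10):
            # Column-wise multiplication: accumulate every column's partial
            # products first, then propagate carries in one final pass.
            # Digits are little-endian (ones place first).
            cols = [0] * (len(a_digits) + len(b_digits))
            for i, a in enumerate(a_digits):
                for j, b in enumerate(b_digits):
                    cols[i + j] += a * b
            carry = 0
            for k, c in enumerate(cols):
                carry, cols[k] = divmod(c + carry, base)
            return cols

        assert comba_mul([2, 1], [4, 3]) == [8, 0, 4, 0]  # 12 * 34 = 408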

    • @DjVortex-w  2 days ago  +1

      The more official name for "the schoolboy method" is "long multiplication".

  • @vlynn5695  1 day ago  +1

    Incredible Video!! You make Algorithmic Mathematics look like art!

  • @William_Scranton  1 day ago  +1

    2:01 - π and e, well played :)

  • @Matthew-eu4ps  1 day ago  +2

    In university we had a method that involved reducing the inputs modulo a series of primes, where a large power of 2 divided p-1. Then do a finite field Fourier transform, add the results, and reconstruct using the Chinese remainder theorem. I can't remember the cost, but I think the reason had to do with maximizing the computational benefit of the word size on the computer.

  • @donaldhobson8873  2 days ago  +5

    The last term in the sum is
    n*1.5^k where k=log_2(n)
    =n*(2^log_2(1.5))^k
    =n*2^(log_2(1.5)*k)
    =n*(2^k)^log_2(1.5)
    =n*n^log_2(1.5)
    =n^(1+log_2(1.5))
    =n^(log_2(1.5*2))
    =n^log_2(3)
    So the last term in the sum, the bottom row of the recursion, takes asymptotically the same amount of compute as the whole thing. (up to a factor of 3.)

    • @PurpleMindCS  2 days ago  +5

      Indeed! There's a special name for that (it's called a leaf-dominated recurrence).

  • @TallinuTV  1 day ago  +1

    This was a really good explanation. Thank you!

    • @PurpleMindCS  19 hours ago  +1

      Thanks so much! Glad you enjoyed :)

  • @pablodelafuente4810  2 days ago  +2

    This is beautifully explained. Congrats!

    • @PurpleMindCS  18 hours ago  +1

      Thanks so much! Glad you enjoyed :)

  • @Laurencio-lm7ui  2 days ago  +10

    1:59 the numbers being added are the first 6 digits of pi and e

  • @mohamedqasem  1 day ago  +1

    Amazing video. Thank you for this information.

  • @OliviaCynderAera  2 days ago  +8

    This video reminds me how I got good at multiplying by 9. Just add a 0 at the end and then subtract the original number! Somehow just trying to multiply by 9 without that was too much headache.
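
    In code form, the trick is just 9x = 10x - x:

        x = 47
        print(x * 10 - x)  # 423, i.e. 9 * 47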

    • @leif1075  1 day ago  +1

      Zero at the end of what? You left it kind of vague, sorry... or isn't it easier, or maybe as easy and as clear as your method, to just replace 9 with 10 minus 1? That's what I think of, and I think it's as effective as your method.

    • @eieieieieier  1 day ago

      ​@@leif1075 they meant the same thing
      9 * x = (10 - 1) * x = 10x - x

    • @KulaGGin  1 day ago

      Yes, it's a system. That's how a mathemagician does it; you can find the video about it here on YT.
      When I have to multiply, say, 7 by 39, you can do the same: 7x40 = 280, then 280 - 7 = 273. You can do all that in your head in 10 seconds.

  • @Roxor128  1 day ago

    The recursive splitting is something I came up with independently while trying to figure out how to build a multiplier that could be used to either do two n-bit multiplications in parallel or a single 2n-bit multiplication.
    I ended up on that track after noticing that a lot of GPU specs on Wikipedia listed half-precision performance being double that of single-precision, and wondering how Nvidia and AMD did it.
    That factor of two difference suggested the possibility of parallel operations, and while it was easy to figure out for addition, multiplication had me stumped for quite a while before I realised you could do a 2-digit multiplication as 4 single-digit ones with appropriate adding and scaling of the results, and that the same thing could be translated into circuitry by treating blocks of bits like digits for smaller multipliers.
    I eventually came up with something in Logisim Evolution that would do a single 32-bit multiplication, or two 16-bit ones, or four 8-bit ones.

  • @tristantheoofer2  2 days ago  +1

    honestly fire video, and your explanation is actually phenomenal for.. literally everything. im subbing

    • @PurpleMindCS  18 hours ago  +1

      Thanks so much! Glad you liked it :)

  • @denny141196  1 day ago  +1

    Great video. I'd just suggest one improvement - when describing the naive algorithm of scalar multiplication, I think it'd be useful to mention the O(n) process of adding the sub-results, then explaining that they get dominated by the O(n^2) process of creating the sub-results. Currently, it's stated that the total number of operations is directly proportional to n^2, which is slightly inaccurate.

    • @PurpleMindCS  19 hours ago  +1

      Yeah, that's true -- thanks for pointing this out. What I should've said at this point (and a few other times during the video too) is "looks more and more proportional to n^2 as n gets bigger."

  • @alextoppo3958  23 hours ago  +1

    I liked your video thanks for a simple yet awesome explanation.

    • @PurpleMindCS  19 hours ago  +1

      Thanks so much! Glad you enjoyed it.

  • @justmarfix  1 day ago  +1

    Thank you for the video. Quite helpful!

  • @mamiemando  2 days ago  +2

    Maybe you could be interested in the paper "addition is all you need" which describes an optimized multiplication for floats.

    • @trueriver1950  2 days ago  +1

      In the eighties there was considerable (though minority) interest in doing everything in integer arithmetic. The language Forth is probably the best place to start if you want to understand how that worked back in the day.
      Numbers like pi and e would be represented as rationals, that is as ordered pairs of integers, rather than as floats.
      The introduction of the floating point co-processor, and the later incorporation of floating point arithmetic into the standard CPU, was more programmer-friendly, so it soon won the battle. However, with sufficient programmer skill it would be possible even now to get better performance using only integer-rational arithmetic. That would entail scientists, engineers, and accountants all learning more integer number theory than they seem to want to...

  • @kart_elon_xd  2 days ago  +8

    9:54 oh no, math?? In a math video??!

    • @trueriver1950  2 days ago  +2

      It's also got maths, but that's ok because it's also a maths video 😅

    • @PurpleMindCS  1 day ago  +4

      XD

    • @leif1075  1 day ago  +2

      @@PurpleMindCS Can you PLEASE share how or why Karatsuba would come up with this, and how I can come up with something like this or greater and be a math genius? I won't settle for anything less. Thanks for sharing and hope to hear from you.

  • @punkbutcher5321  2 days ago  +12

    I am not sure about the initial behaviour of the builtin multiplication at 17:00. 2**-22 s is about 10**-7 s, which is horribly slow for a simple multiplication, which should be done in ns (10**-9 s).
    Probably some function-calling overhead is the bottleneck at that point, where it also is important how you benchmark functions (measuring around or in the loop? which time function? threading? etc.)
    In Python I would definitely compile the functions with numba or the like, to reduce the calling overhead.
    The math is presented really well though :)
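
    A sketch of a somewhat more careful measurement with the standard library (the operands and loop counts here are arbitrary):

        import timeit

        x = 123456789 * 10**50
        y = 987654321 * 10**50

        # A large inner loop amortizes per-call overhead; taking the minimum
        # of several repeats filters out scheduling noise.
        t = min(timeit.repeat("x * y", globals=globals(), number=1_000_000, repeat=5))
        print(t / 1_000_000, "seconds per multiply, roughly")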

    • @bru57000  2 days ago  +1

      My guess is this measurement is biased because of the incompressible time of API calls handling data-structure transfer or transformation from the interpreted language to machine code.

    • @JonBrase  2 days ago  +1

      10^-7 s is 100 ns, which is to be expected when you're working at arbitrary precision. Even at less than a machine word of precision, an arbitrary-precision multiplication can't just do a hardware multiply and nothing else: you have to follow a pointer to each number, determine the length of each, decide what algorithm to use, retrieve the actual value from memory (if it indeed fits in a machine word and can thereby be loaded into a register), and only then can you perform the hardware multiplication, and then you have to store the result back into memory.
      Even without any function call overhead, the bookkeeping will be expensive, as will the memory accesses if they miss in cache (especially if they get punted all the way down to L3 or RAM. A single access that misses in cache can easily eat 100 cycles).

    • @trueriver1950  2 days ago

      This doesn't matter, as the discrepancy will be constant. The demo is only looking at orders of magnitude here, hence the validity of the O() notation

    • @punkbutcher5321  2 days ago

      @@trueriver1950 I am not doubting the validity of the O notation, but people might draw wrong conclusions from the example shown. For example the location of the "kink" will be shifted due to the offset, leading to wrong conclusions about where the transition happens, which actually does impact the presented explanation.
      I did not expect an in-depth performance tutorial, but hinting at caveats would have been nice. I am just trying to provide a clearer picture, point out pit-falls and what should be expected when people try to do similar studies out of curiosity.

    • @punkbutcher5321  2 days ago

      @@JonBrase As the presented code does not show the data type used, it is absolutely possible more book keeping is done than I expected, plus I am not familiar with performance levels of arbitrary precision modules in Python.
      However, Python does have a very heavy footprint when it comes to calling functions, I would not even look at different cache levels for this issue. Also keep in mind, that even for a single-digit multiply the cost is allegedly 100ns, which to me is absurd, so I still doubt this without seeing more details.
      However, it would be important to know how the performance is measured, otherwise we are just guessing. I shot myself into the foot once, because the rng for the data was part of the loop, and as the algorithm got faster (rewriting and compiling) that was the limiting factor at some point. The amount of input data pre-calculated and stored then also affects the cache levels used during the loop, assuming we measure the sequential throughput.

  • @fblua  2 hours ago

    Great comments !! Excellent group, full of high level people.

  • @FunWithBits  14 hours ago  +1

    16:10 - the slope is flat at the start of the chart because a 64-bit CPU can multiply two 31-bit numbers directly. Then for up to two 63-bit numbers it just takes two to three instructions (basically a multiply-high and a multiply-low). After 63 bits it starts to grow at the expected rate.

    • @PurpleMindCS  3 hours ago  +1

      Hey, I just made a comment about this :) Hopefully that should help clear things up.

  • @valseedian  1 day ago  +1

    ohhh, yeah. I remember building my own LargeInt and LargeFloat types in C++ and JS. My JS arbitrary math system is still open source.
    Multiplication is actually pretty well solved; it's division that has specific potential for improvement depending on specific inputs.
    IIRC I came up with a multiplication system that used powers of 2 and an accumulator to reach O(n) speed technically... real-world testing showed that chunking multiplication was many times faster despite being O(n^2).
    I originally wrote mine when I wanted a fully homebrew RSA implementation for my fully homebrew IRC-esque chat system so I could have end-to-end encryption. Tested everything with PHP's GMP class.

  • @Memose2  2 days ago  +3

    17:46 toom really cooked 🍚

  • @markhelmick8084  23 hours ago

    Great video. May I suggest one on algorithms for finding primes? Or maybe a basic one for beginning students showing binary arithmetic? I've coded arithmetic libraries in assembly on 8 bit machines and it's cool to me how simple and elegant binary math works. I think every programming student would benefit from understanding this.

  • @ry6554  1 day ago  +2

    17:05
    But why does the blue line's slope not change instantly at 2^7 digits? If Python's algorithm switches at that point, then shouldn't we see an instant snap in slope to match Karatsuba's algorithm? Why does it curve for a little bit?
    Edit: Ok so using background knowledge, Python has a similar case with sorting algorithms. I know there is a naive O(n^2) sorting approach: insertion sort, and a refined O(n*log(n)) sorting approach: merge sort. However, insertion sort is significantly faster at small array lengths compared to merge sort. To take advantage of this, Tim Peters implemented Timsort in Python, an ingenious algorithm that essentially combines insertion sort and merge sort.
    This implementation is, interestingly, almost identical to Python's built-in multiplication. I want to ask a question regarding this, and while I know it is 100% answered somewhere online, I have other things to do. I can't take another rabbit hole. This might be a terrible question, but here goes.
    In that curved region after 2^7 digits, does Python use the naive and Karatsuba algorithms _simultaneously?_

  • @EconAtheist  1 day ago  +1

    I dunno exactly what font MANIM defaults to (TNR offshoot?) but it always makes me feel warm inside. 😌

  • @rafaelsantiagoaltoe6606  2 days ago  +3

    Quick question: At 12:14, shouldn't it be from 1 to log2(N)? I don't think I can summarize well my line of thought, but I will give it a try.
    Suppose we have two numbers of two digits (N = 2) being multiplied. To calculate it, we divide them into 3 subproblems of size N=1, so the total work would be 3 * c * 1. As the stated summation goes from K=0 to K = 1, there would be two steps (3 * c * 1 + c * 2), while in reality only one is actually made, at K = 0 we already have the result, so no work is done in that bit.
    Well, this is my counter example. Now as for logically speaking, the algorithm's last step is summing up the calculated subproducts obtained from the step before, said last step is of size 3^1*N/2, after that we have the result and there is no step further to add for the work. If we made one more step, its work would be c * N, as if we had to do some calculations with the result.
    I know it is just semantics, and in the end everything will be eaten up by the bigO notation, but I just want to make sure I got everything right.

    • @PurpleMindCS  18 hours ago

      The summation gives an upper bound. It's true that the work done at the base cases is very small compared to the linear work done to use the subproblems at all higher levels, but if you look for example at the tree depicted at 7:08 (which has the same height as the tree you would get from Karatsuba's algorithm), you'll notice that for n = 4, the height of the tree is 3 = log_2(4) + 1, so when we add up the work done at each level of the tree, we get a summation from 0 to log_2 n.

  • @Morbazan125  17 hours ago  +1

    I’ve largely forgotten most of what I learned about math due to not having to do anything other than basic calculations for nearly 30 years😂

  • @HaroonKhan-h8w  2 days ago  +1

    amazing video. very well animated and explained!

  • @rogerman65  1 day ago  +1

    6:20 in. What if x and y are uneven numbers, or even prime?

  • @FishSticker  2 days ago  +2

    Good to see you're getting attention

  • @bananacraft69  1 hour ago

    We weren't taught some algorithm for multiplication, we were told to memorise the result of every multiplication x*y for 1

  • @Ca7iburn  1 day ago  +1

    This has a 3blue1brown vibe.
    Very well done.

  • @seheyt  2 days ago  +9

    1:44 allocating is not a neutral example of "accessing memory", and is usually several orders of magnitude more costly than the other examples of prototypical "operations"

    • @nurmr  1 day ago  +3

      It depends on the memory allocator being used.

    • @KevinDay  20 hours ago  +1

      But how long each operation takes doesn't affect algorithmic complexity. It matters for your actual real-world performance in the end, but it doesn't change the O notation.

    • @seheyt  17 hours ago

      @KevinDay True. However, O(n) with a constant 2 orders of magnitude bigger still makes them incomparable. So it matters whether you can express and evaluate the complexity of the compound operation in terms of the underlying operations

  • @considerthehumbleworm  2 days ago  +2

    Fast adders are really interesting too!

  • @ericfielding668  2 days ago  +1

    This makes me more interested in division algorithms. I wrote my own division algorithm for arbitrarily large integers decades ago, but it was not efficient at all. It would have been quicker to subtract logarithms and then anti-log the result.

  • @kirillsukhomlin3036  22 hours ago  +1

    Very cool explanation (even though I had already read, understood and implemented that algorithm).
    I have one minor suggestion: when you are showing your Discord link, you have room to put a QR code with that link.
    Even if you're having it in the description, it would still be a nice addition.

    • @PurpleMindCS  19 hours ago  +2

      Ah, good point! I'll keep that in mind for the future :)

  • @lucavogels  16 hours ago  +1

    I cannot think of any O(n^2) algorithm that is faster than any other O(n) algorithm, not even for a very small n.
    But really cool video though!

  • @beaverbuoy3011  2 days ago  +2

    Amazing!

  • @glowpon3  1 day ago  +1

    I would expect the standard form of multiplication would be the fastest in binary given a number with a small bitcount (total 1s), so that might be taken into consideration when choosing Karatsuba or not. With a low enough bitcount it would end up being just a few additions in binary.

  • @chessematics  1 day ago  +1

    Nice improvement on an older video someone else made, just like what karatsuba did.

  • @venkateshanujpawar405  2 days ago  +2

    Amazing video 🔥🔥🔥

    • @PurpleMindCS  1 day ago  +2

      Thanks so much! Glad you enjoyed :)

  • @blockshift758  2 days ago  +1

    7:15 If I am understanding how this works, it's like separating each digit into its 10^n place? Like 1234 is broken down into 1000 200 30 4 and then each part is individually multiplied with 5000 600 70 8?

    • @trueriver1950  2 days ago  +1

      For explanatory purposes, yes.
      In a computer the breakdown would be into powers of two rather than ten (exception below).
      Or on some machines that are oriented toward 8-bit manipulation, the smallest grouping might turn out to be groups of 8 bits.
      However, some machines also implement BCD arithmetic (or at least used to: for example the IBM 360 and 370 series, where fixed point integers of arbitrary length were stored in decimal, using 4 bits for each digit: hence the name "binary coded decimal").
      Those machines could natively multiply arbitrarily long BCD integers in a single machine instruction, a non-interruptible instruction that typically took huge numbers of CPU cycles to complete.
      These were much favoured in the COBOL era, as there's a direct mapping from the COBOL data types that don't tend to exist in science-oriented languages.
      Whether the microcode implemented the naïve algorithm or something better is an interesting question, which I never bothered to ask back in the day. Anyone with a preserved 370 is welcome to do the timing tests ...

    • @PurpleMindCS  18 hours ago

      Yep! @trueriver1950 does a good job explaining it here :)

  • @greenstonegecko  2 days ago  +2

    Time Complexity is such a difficult topic. This video doesn't do it justice, but it's an entire field of math that specializes in optimizing math.
    It's weird how a field of math based on "being practical" could become so theoretical, it stops being practical...

  • @simdimdim  1 day ago  +2

    18:40 too bad the graph doesn't include the stuff from Alphatensor :D

  • @parthsavyasachi9348  1 day ago

    I started listening to the video and it gave me idea to multiply two large digit numbers far far faster than this method. The method is so simple that I am surprised no one is using it.

  • @michaelbauers8800  2 days ago  +3

    Even if we implemented long multiplication in a CPU, I am surprised that would be O(N*N). Because all it has to do, is a max of N additions for N bits. And addition itself doesn't do addition in serial as that would be way too slow. It does fast addition, with look ahead carry, to parallelize addition. That being said, CPU's still don't use long multiplication I think, as there's faster methods like this video talks about. I just don't think a CPU doing shift adds in binary with fast adders is O(N*N). Feel free to correct me.

    • @ccpersonguy  2 days ago  +4

      Big-O notation describes how an algorithm behaves as N trends toward infinity. Modern CPUs do have fast multiplication and addition ***for specific finite-sized inputs*** (say, 64-bit integers). On arbitrarily large inputs, long multiplication is still O(N*N). Let's assume that some CPU can do a 64-bit multiply-shift-add in 1 clock cycle. Long multiplying 128-bit still requires 4 mult-shift-adds, 256-bit requires 16, etc. The inputs double in size, time complexity quadruples. Still O(N*N).

    • @framegrace1  2 days ago  +1

      They have lots of improvements reducing the constant time to the minimum, by using fast multipliers and such. But the complexity is still O(N*N)

  • @InkLore-p3h  11 hours ago  +1

    Is Karatsuba’s method the same as using convolution?

  • @RayHorn5128088056  2 days ago  +1

    Amazingly, in five decades of professionally writing code, i never needed to know any of this. Interestingly, low-level math works differently on numbers you can express using binary bits rather than strings of digits.

  • @henbotb1178  2 days ago  +2

    I'd love to see a video talking about rng's and prngs, like Mersenne Twister 19937

  • @PauloDutra  1 day ago

    Amazing! I wonder how optimizing the algorithm vs optimizing the "hardware" itself scales. For example you can do additions "in parallel" in hardware, but bigger adders have bigger latency, though they can have the same output speed / throughput...

  • @pandorairq  2 days ago  +2

    What would interest me now is what performs better for multiplying a large number by a small number, let's say a number with more than 1000 digits by one with fewer than 2. Is the school way better, or Karatsuba's method?

    • @framegrace1  2 days ago  +1

      It becomes an O(n) complexity problem, so use the faster for low values of n.

    • @PurpleMindCS  18 hours ago  +1

      I believe that as @framegrace1 is indicating, the school way would be better for these types of inputs because it becomes an O(n) complexity problem and the extra additions and subtractions needed for Karatsuba's algorithm are expensive.

  • @justwannaknow_42  2 days ago  +3

    Love your animations man! Coming from a Manim YouTuber :)

    • @PurpleMindCS  2 days ago  +2

      Thank you so much! I remember checking out your channel too. Keep up the good work :)

  • @boycefenn  2 days ago  +1

    i need more from this channel!

    • @PurpleMindCS  18 hours ago  +1

      Glad you are enjoying the content! More coming soon :)

  • @akruijff  1 day ago

    @19:30 It is like sorting: quicksort and mergesort have a complexity of O(n log n) but sleepsort has a complexity of O(n). The latter is still a lot slower than the former.

  • @amigalemming  1 day ago  +1

    It is astonishing that Kolmogorov believed that multiplication would not be possible faster than quadratic time, since fast convolution via fast Fourier transform was already known. Ok, Gauss' fast Fourier method was not widely known and Good-Thomas maybe was also not widely known and Schönhage-Strassen came up only few years after Cooley-Tukey.

  • @The_Pariah  1 day ago

    God, you are such a nerd.
    Which is exactly why I watched the whole thing and remembered to like.

  • @raymondgabriel5724  1 day ago  +1

    Sir, could you please expand on how adding (a+b) and (c+d) works? 8:25

    • @PurpleMindCS  19 hours ago  +1

      We essentially use the algorithm described at 1:55.

  • @atomicJUMP  2 days ago  +2

    can you make a video on the Harvey-van der Hoeven algorithm? I tried reading the article but couldn't understand a word 😅 it would be a good one!

    • @PurpleMindCS  18 hours ago  +1

      I may cover this at some point in the future, but at least for the time being I have some other videos I'd like to make :)

  • @mat-hu5ys  2 days ago  +1

    You chose the better thumbnail! Thank you

    • @PurpleMindCS  18 hours ago  +1

      Thanks so much! Glad you enjoyed :)

  • @memesifoundonline  1 day ago

    19:37 do you think they might see use with the sort of astronomical things quantum computers theoretically would work with?

  • @MindGoblin41  2 days ago  +1

    Really pleasing vid. Subbed.