Computers are fast

  • Published 15 Oct 2024
  • It genuinely boggles the mind how quickly computers are able to perform calculations.
    Source: gist.github.co...
    Buy me a cup of tea if you like my content ^_^
    www.buymeacoff...

COMMENTS • 1.8K

  • @softwave1662
    @softwave1662  4 months ago +95

    There is an updated version of this video with a vastly faster algorithm here: ua-cam.com/video/cmshJmQ6o90/v-deo.html

    • @TheEvelynMakesThings
      @TheEvelynMakesThings 2 months ago +2

      Okay now write it in x86 assembly

    • @softwave1662
      @softwave1662  2 months ago +2

      @@TheEvelynMakesThings I do like to write things in assembly, and have written a program like this.
      Though here GMP is doing a lot of the lifting and so is faster than what I was able to write in x86-64 (there's a newer, much faster version in the pinned comment as well)

  • @Murv
    @Murv 1 year ago +18514

    It's funny because the displaying probably takes more time than the calculations.

    • @softwave1662
      @softwave1662  1 year ago +4194

      I've accounted for that in the code; only the calculations are timed

    • @petrlaskevic1948
      @petrlaskevic1948 1 year ago +1248

      printf is hilariously slow

    • @Resursator
      @Resursator 1 year ago

      @@softwave1662 The program counts how much time the calculations take, but it doesn't count how much time it takes to display the text. I've no idea if printf is actually slow or not; I've only looked at the code.

    • @mgord9518
      @mgord9518 1 year ago +624

      Sometimes I'll leave debug print statements in a game, run it, and wonder why the performance tanked since the last time I was working on it.
      It's honestly incredible how expensive printing something is, but it makes sense: even for unformatted printing, you're still sending tons of unbuffered data

    • @Resursator
      @Resursator 1 year ago +291

      I decided to test if displaying is actually slow. I've added this code before return 0;
      start_time = clock();
      printf("Calculation Time: %f seconds\n", time_taken);
      end_time = clock();
      time_taken = ((double) end_time - start_time) / CLOCKS_PER_SEC;
      printf("Printing Time: %f seconds\n", time_taken);
      Results:
      ./fib 3
      Fibonacci Number 3: 2
      Calculation Time: 0.000058 seconds
      Printing Time: 0.000017 seconds
      ./fib 100
      Fibonacci Number 100: 354224848179261915075
      Calculation Time: 0.000066 seconds
      Printing Time: 0.000017 seconds
      ./fib 10000
      Fibonacci Number 10000: long number
      Calculation Time: 0.000396 seconds
      Printing Time: 0.000003 seconds
      ./fib 100000
      Fibonacci Number 100000: longer number
      Calculation Time: 0.064326 seconds
      Printing Time: 0.000005 seconds
      ./fib 525128
      Fibonacci Number 525128: even longer number
      Calculation Time: 1.171491 seconds
      Printing Time: 0.000005 seconds
      ./fib 52512800
      I waited for 3 minutes and decided it wasn't worth it, so I'm posting this as is before my computer crashes. But it is still running, so maybe I'll post the result later.
      Apparently, printf is not as slow as one might think. At least, it is faster than calculating the Fibonacci number

  • @Icewind007
    @Icewind007 1 year ago +8017

    As a video game developer, I am still blown away by GPU processing. It's nearly beyond comprehension how fast it goes when I have hundreds of thousands of objects moving independently.

    • @lunarthicclipse8219
      @lunarthicclipse8219 1 year ago +520

      As a fellow game dev, when I started learning how things work under the hood and how the GPU is able to handle millions of matrix multiplications per frame, it really boggled my mind how insane it is! I'm still not used to stuff being that fast; I try to optimize things, but then I remember that 10 fewer instructions are probably insignificant when it's already insanely fast. It still surprises me when programs take time to load; the biggest bottleneck is usually the data transfer lol

    • @OwenTheProgrammer
      @OwenTheProgrammer 1 year ago +97

      Without memory and cache transfer limitations, you could execute a y=mx+b (fmad) instruction on as many floating-point values as there are people in the world in just a couple of clock cycles, maybe even one. Also, on most platforms, an integer add takes the same amount of time as a floating-point add, meaning an XOR-and-carry ladder takes the same amount of time as rebasing exponents, shifting the mantissa, evaluating the sign, flagging the ALU stage for NaN and such, then storing all of it into one register.

    • @skorpers
      @skorpers 1 year ago +27

      Eventually I hope to see everything gaming-related be GPU-processed. The technology is there, and CPUs are obviously slowing down in their ability to gain beyond what we have, while GPUs are steadily increasing in clock speed and core count.

    • @lunarthicclipse8219
      @lunarthicclipse8219 1 year ago +48

      @skorpers It would be really hard to run everything on the GPU though, as sometimes parallelism isn't the most efficient solution.

    • @skorpers
      @skorpers 1 year ago +4

      @@lunarthicclipse8219 Who cares about that? GPUs are reaching almost 1:1 with CPU speeds, as they are increasing at a faster rate, and IPC is increasing much faster... why would we not want to transition over? At the very least we could use the same number of threads, except on the GPU using the same native language, and cut out a lot of driver overhead.

  • @SameForYouWasTaken
    @SameForYouWasTaken 1 year ago +7818

    props to the computer for remembering all those numbers

    • @kapildeshmukh7878
      @kapildeshmukh7878 1 year ago +61

      Memoization reference?

    • @b_delta9725
      @b_delta9725 1 year ago +513

      @@kapildeshmukh7878 yes, what companies like AMD and Intel do to make the computers is perform every calculation Ever, and they record all of the results inside the CPU

    • @stanleybochenek1862
      @stanleybochenek1862 1 year ago +3

      lol

    • @1Uik1
      @1Uik1 1 year ago

      really? @@b_delta9725

    • @Rudxain
      @Rudxain 1 year ago

      @@b_delta9725 CPUs have built-in hard drives for that lol
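
For anyone wondering what the memoization joke refers to: memoization means caching previously computed results so they are never recomputed. A hypothetical sketch in C (illustrative only, unrelated to the video's code):

```c
#define MAX_N 93  /* fib(93) is the largest Fibonacci number fitting in 64 bits */

/* Memoized recursive Fibonacci: each value is computed once, then reused. */
unsigned long long fib_memo(int n) {
    static unsigned long long cache[MAX_N + 1];
    if (n <= 1) return (unsigned long long)n;
    if (cache[n] == 0)  /* 0 marks "not yet computed"; fib(n) >= 1 for n >= 2 */
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return cache[n];
}
```

Without the cache, naive recursion does exponentially many calls; with it, each fib_memo(k) is computed exactly once.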

  • @jamesking2439
    @jamesking2439 1 year ago +2336

    Computers are so fast they can run IDEs written in JavaScript.

    • @vitvitvitvitvitvitvitvit
      @vitvitvitvitvitvitvitvit 1 year ago +12

      I think it's compiled, isn't it? I mean, you write in JS, but it's parsed to binary when you build the app

    • @dingusbringus74
      @dingusbringus74 1 year ago

      @@vitvitvitvitvitvitvitvit It's still JS when running on the client computer. All Electron/Tauri do is combine a barebones browser (usually a cut-down Chrome) with some native C++/Rust code for Windows/Linux/macOS API support. Each of those is trying its best individually, and the end result is pretty nice (this is how apps like VSCode, Spotify, and Discord work), but it's super wasteful.

    • @D4no00
      @D4no00 1 year ago +71

      yeah, the sad truth, vscode is one of those candidates.

    • @fallenpentagon1579
      @fallenpentagon1579 1 year ago

      @@vitvitvitvitvitvitvitvit Nope, VSCode is literally a browser running regular JavaScript

    • @leschopinesns100
      @leschopinesns100 1 year ago

      @@vitvitvitvitvitvitvitvit JavaScript cannot be compiled; it's an interpreted language. You could transpile it to another language which is compiled and then compile it, but that wouldn't be JavaScript code running anymore.

  • @PastaAivo
    @PastaAivo 1 year ago +2194

    It is indeed mind-boggling how a lump of silicon can do more calculations in a single second than a person could over several lifetimes

    • @quack3891
      @quack3891 1 year ago +196

      we tricked it to do so

    • @adil0028
      @adil0028 1 year ago +82

      @@quack3891 imagine if robots end up being the silicon-based lifeforms

    • @motivaciontotal5310
      @motivaciontotal5310 1 year ago +15

      ​@@adil0028 We are basically doomed.

    • @ESPkenner48
      @ESPkenner48 1 year ago

      @@quack3891 hehehe stupid silicon

    • @theseangle
      @theseangle 1 year ago +8

      @@adil0028 LMAO that'd be ironic

  • @sepvrij5642
    @sepvrij5642 1 year ago +1222

    It’s insane how fast computers work, I tried writing my own raytracer the past week and the speed of the calculations is just unimaginable. It’s doing traces on a 1920x1080 screen 60 times per second. Nearly 2 million complex method executions per 20-30 ms or so.

    • @willd0g
      @willd0g 1 year ago +26

      As a beginner programmer with never any original ideas on what to apply my learning to, can I please ask where I might start with doing something similar?

    • @yeetmcyeetson8822
      @yeetmcyeetson8822 1 year ago +50

      @@willd0g Go take a CS 101 course in college or some other applied CS course (many available for free on UA-cam) where you get to learn the fundamentals of computer science. If you decide to continue after that, you can branch off into other subfields of CS that you enjoy more; the OP of this comment specialized in computer graphics, but perhaps you may find other fields interesting.

    • @mysta3152
      @mysta3152 1 year ago +18

      @@willd0g Pick any language, then find a PDF book online for beginners. Next, download whatever IDE you may require to run and test your code. I'd avoid UA-cam tutorials as they will only serve to confuse you in the beginning; instead, write and test code yourself. You will learn more by writing code yourself rather than watching someone else do it.

    • @koli4213
      @koli4213 1 year ago +3

      I'm a beginner so I couldn't even imagine how to do that; would you be using some kind of third-party library to be able to do that type of stuff?

    • @krele14
      @krele14 1 year ago +9

      All good except 60 fps grants you ~16ms of leeway.

  • @SaHaRaSquad
    @SaHaRaSquad 1 year ago +2037

    Even as a developer, it is often surprising how fast a modern CPU can be, especially when not bottlenecked by memory. Which makes it all the more frustrating to see super slow software.

    • @itsmoi5673
      @itsmoi5673 1 year ago +191

      "super slow software" it's called bad programmers...

    • @davidwuhrer6704
      @davidwuhrer6704 1 year ago +120

      The bloat is real

    • @2bfrank657
      @2bfrank657 1 year ago +95

      Agree, it boggles my mind how slow some software is considering the phenomenal speed of modern hardware.

    • @itsmoi5673
      @itsmoi5673 1 year ago +15

      Tech companies also care about money, so they make you buy more powerful hardware by writing slow software... it's all about the industry... easy money for them

    • @chudchadanstud
      @chudchadanstud 1 year ago +45

      @@itsmoi5673
      No. LLVM is slow, Google is slow, but the people behind them are 1000x better engineers than you.
      There are multiple reasons for programs being slow other than "muh modern cpu!". When you're talking to servers across the world, dare I say, things can kinda run slowly. When you want to create maintainable code, you sometimes have to make performance sacrifices.
      Also, the CPU isn't that fast; it sometimes optimises or guesses ahead of time what the output will be before it's finished calculating the correct answer, giving you an illusion of speed.

  • @0prahTV
    @0prahTV 1 year ago +1947

    Actually, the reason it's so fast is that the computer knows that you won't verify the 50000th Fibonacci number, so it just spits out a bunch of random numbers at you.

    • @LucasPereira-ky7pi
      @LucasPereira-ky7pi 11 months ago +22

      source?

    • @ExodiumTM
      @ExodiumTM 11 months ago +518

      @@LucasPereira-ky7pi his source is comedy

    • @craftyplayz_
      @craftyplayz_ 11 months ago +289

      @@LucasPereira-ky7pi his source is that he made it the fuck up

    • @LucasPereira-ky7pi
      @LucasPereira-ky7pi 11 months ago +89

      @@craftyplayz_ I believe him

    • @ToddWalton
      @ToddWalton 10 months ago +109

      This guy knows how to optimize.

  • @cambrown5777
    @cambrown5777 1 year ago +721

    I design CPUs for a living. Our whole job is to do as much as we can in a given clock cycle, which at 3 GHz is 333 picoseconds (3.33*10^-10 seconds). Hope that gives context for why it's so fast.

    • @mnrvaprjct
      @mnrvaprjct 1 year ago +194

      Jesus, you guys measure things in picoseconds.

    • @paulk314
      @paulk314 1 year ago +48

      May I ask where you work? I'm at SiFive.

    • @isaac10231
      @isaac10231 1 year ago +106

      I've never seen someone use a picosecond for a legitimate thing. Wow.

    • @cambrown5777
      @cambrown5777 1 year ago +57

      @@paulk314 Ampere

    • @razzledev
      @razzledev 1 year ago

      @@cambrown5777 Very cool! Ampere is awesome

  • @notusperson4667
    @notusperson4667 1 year ago +1156

    I run faster than the computer; that's because it can't move.

    • @xyaz_
      @xyaz_ 1 year ago +72

      Not until you give it wheels

    • @krol1ke
      @krol1ke 1 year ago +27

      A computer doesn't need legs to run from one point to another, because it has the internet for that.

    • @EngiesParadise
      @EngiesParadise 1 year ago +11

      Give it a mechanical body so it can unleash its computational wrath upon u 💀

    • @manda3dprojects966
      @manda3dprojects966 1 year ago +2

      No, the computer runs faster than you, that's because you cannot perform 1000000000 calculations per second.

    • @KozelPraiseGOELRO
      @KozelPraiseGOELRO 1 year ago

      The computer moves faster than you because you are a lazy person.

  • @plectro3332
    @plectro3332 1 year ago +212

    One of my favourite explanations was when my lecturer was talking about assembly language and said "One instruction here takes about as long to run as light takes to travel this distance", and he put his hands in front of him, showing maybe 25-30 cm (1 foot). And that's just on a single core. Once you make your code run in parallel, you can perform several calculations in the same amount of time. It's kind of insane

    • @cambrown5777
      @cambrown5777 1 year ago +51

      In the CPU design world these days, we actually run many instructions in parallel on a single core, not even in the order in which they occur in the program, and not even with any promise that they should actually be run. We are also very diligent about making sure software engineers and the programs they write never have to know this.
      Modern high-end-mobile and above chips are nearly entirely out-of-order, superscalar architectures with speculative execution.

    • @TwskiTV
      @TwskiTV 1 year ago +11

      At 4 GHz, a single cycle corresponds to an even more impressive 7.5 cm of light travel

    • @lanchanoinguyen2914
      @lanchanoinguyen2914 10 months ago

      It's fast, but the actual behaviour can be determined by many factors. The physical layout has nothing to do with the real electronic circuit in action; there are also amplification effects.

    • @iykury
      @iykury 9 months ago +1

      @@TwskiTV instructions can take more than one clock cycle to execute though

    • @TwskiTV
      @TwskiTV 9 months ago

      @@iykury Fair, but also some instructions can execute in less than one cycle (on average over time).

  • @someguywitatie
    @someguywitatie 1 year ago +466

    During my CS undergrad, we had an assignment to write a program that could play the card game war. Then, we were to add the functionality to run N number of games of war, store the results and keep track of what cards each player had, and then run some analysis to determine game outcomes. It boggled my mind when I went from playing 100 games to playing 1000000 and it completed just as quickly.

    • @flintstone1409
      @flintstone1409 1 year ago +67

      Can be quite frustrating in some way when you spend hours writing some program and it just finishes instantly

    • @Appoxo
      @Appoxo 1 year ago +51

      @@flintstone1409 If you can't immediately validate whether it ran correctly: "Did it crash or run?"

    • @quack3891
      @quack3891 1 year ago +11

      @@Appoxo who would win
      bunch of return bounds, foster guard statements, and sentinel values, etc etc
      the one little funny result

    • @theseangle
      @theseangle 1 year ago +15

      @@flintstone1409 Why is it frustrating lol, it's even satisfying to know your code runs at adequate speed

    • @TeakKey7
      @TeakKey7 1 year ago +31

      Imagine showing someone from the 60s a modern PC and being like "yeah, it can play every human in the world at chess and win in 5 seconds." Then they're like "couldn't you perform insane calculations then?" Oh yeah, we have tons of servers lying around that could calculate the trajectories required to make it to Mars, but we mainly use them to show people funny pictures of cats.

  • @bingusbongus9807
    @bingusbongus9807 1 year ago +174

    I like computers, they can do so much when people take the time to write something efficient

    • @cheesebusiness
      @cheesebusiness 1 year ago +4

      This video is an example of inefficient software, lol.
      The program takes O(n*log(n)) time, but could take just O(log(n)) time.

    • @vitvitvitvitvitvitvitvit
      @vitvitvitvitvitvitvitvit 1 year ago +2

      @@cheesebusiness how

    • @cheesebusiness
      @cheesebusiness 1 year ago

      @@vitvitvitvitvitvitvitvit There is a single expression to calculate the n'th Fibonacci number for any n. It involves raising a number to the power of n (takes log(n)*log(n)), a subtraction, and a division (take log(n)). You can find the formula on Wikipedia. This algorithm takes log(n)*log(n), I'm sorry, but it's still much faster than just n.

    • @DeanNguyen-nz4dh
      @DeanNguyen-nz4dh 1 year ago

      @@cheesebusiness You're talking about an expression like a * A^n + b * B^n? I'm not sure how A^n would be calculated with a function that takes O(log n).

    • @vitvitvitvitvitvitvitvit
      @vitvitvitvitvitvitvitvit 1 year ago

      @@cheesebusiness I never got around to studying the big-O stuff, so I don't understand the notation, but seeing you talk about it seems pretty sick
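
For the curious: besides the closed-form (Binet) formula discussed above, there is an integer-only method with the same O(log n) count of arithmetic operations, known as fast doubling (A^n via repeated squaring answers the question raised above, and fast doubling applies the same halving idea directly to Fibonacci). A sketch in plain C, exact only up to n = 93 with 64-bit integers; the video's program uses GMP for larger n:

```c
#include <stdint.h>

/* Fast-doubling Fibonacci: returns F(n) and writes F(n+1) to *next.
 * Identities used: F(2k)   = F(k) * (2*F(k+1) - F(k))
 *                  F(2k+1) = F(k)^2 + F(k+1)^2                    */
uint64_t fib_fast(uint64_t n, uint64_t *next) {
    if (n == 0) { *next = 1; return 0; }
    uint64_t b, a = fib_fast(n / 2, &b);  /* a = F(k), b = F(k+1), k = n/2 */
    uint64_t c = a * (2 * b - a);         /* F(2k)   */
    uint64_t d = a * a + b * b;           /* F(2k+1) */
    if (n % 2 == 0) { *next = d;     return c; }
    else            { *next = c + d; return d; }
}

/* Convenience wrapper when only F(n) is needed. */
uint64_t fib(uint64_t n) {
    uint64_t next;
    return fib_fast(n, &next);
}
```

Each call halves n, so only about log2(n) levels of recursion are needed, versus n additions for the iterative loop.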

  • @9a3eedi
    @9a3eedi 1 year ago +338

    It boggles the mind how fast today's computers are, yet Microsoft Teams seems to struggle to open a chat and needs several cores and 1 GB of RAM to do it, when my Pentium 2 was able to do it instantly back in the 90s

    • @SomeRandomPiggo
      @SomeRandomPiggo 1 year ago +83

      Microsoft pretty much has an endless stream of crappy interns they use to work on anything but the most critical parts of their operations; no wonder they decided to include an entire web browser to run an application that should realistically consume 100 MB of RAM at most and barely touch the CPU or GPU

    • @ipaqmaster
      @ipaqmaster 1 year ago +16

      I suppose that’s the cost of writing it in some high level abstraction rather than from scratch. Though it lets the codebase run on anything albeit incredibly poorly.

    • @9a3eedi
      @9a3eedi 1 year ago +35

      @@ipaqmaster And yet, Microsoft Teams only works on Windows, Mac, and Chromium-based web browsers.
      Compare that to a typical C program

    • @traveller23e
      @traveller23e 1 year ago

      @@ipaqmaster Not just high level abstraction. They write it _badly_. A lot of high-level software is written badly, in part because people know that computers are fast. Programmers are trained to make code pretty (which is supposed to equate to maintainable, but...) and to forget the performance implications. Then business people ask for change after change after change each one of which abuses the code that was previously written. The dark side of Agile is that you end up with an amorphous blob of code that's mostly made up of features hastily tacked on at the end. You don't have the requirements early enough to design the system properly, and it's often difficult to convince the powers that be that some of the code should be refactored. And when you do, good luck getting them to let you spend enough time on it to do a decent job.
      I know a guy who was maintaining a legacy codebase and discovered it was using six (yes, six) different versions of a zip library. That's how bad some of this stuff gets.
      On a more general note, one of the reasons so many apps these days are basically webapps in a custom browser is because that way they can write the code once and use it both as a website and an app. And the reason to have the app rather than just tell people to open it in their favourite browser? Because if the company essentially bundles the browser and the site together, it gives them way more opportunity for data collection. A lot of these apps even run on an open-source (MIT-licensed) browser library that gives the app all _kinds_ of data, so they don't have to spend any money figuring out how to write a browser themselves.

    • @inadad8878
      @inadad8878 1 year ago +30

      They sprinkle a little spy in there and it slows down dramatically

  • @gazehound
    @gazehound 1 year ago +1043

    We take the speed of computers for granted. We write massive pieces of inefficient software to accomplish basic tasks, our video games take up more and more resources just to look pretty much the same each year, with Electron every app is becoming a Chrome tab. I wish we would just use better tools like Rust to build our software instead of leaning on crutches like huge runtimes and uncountable layers of abstraction.

    • @principleshipcoleoid8095
      @principleshipcoleoid8095 1 year ago +31

      We need a better FOSS OS. I found a problem with bevy (a Rust game engine) where on some Macs, after one of their updates, there is a memory leak. Because the OS fucked up. Even if all the bloat on the devs' side evaporates, we still have to rely on the OS being efficient, which sadly they simply aren't

    • @akatsukilevi
      @akatsukilevi 1 year ago +95

      @@principleshipcoleoid8095 The issue isn't just a new OS.
      The issue is people preferring things like JavaScript and Electron to make all sorts of things because "it's faster", which is exactly people taking computing power for granted

    • @principleshipcoleoid8095
      @principleshipcoleoid8095 1 year ago +4

      @@akatsukilevi I'm just saying that less than half of the code that needs to run well for a mere text editing app is the text editor itself. Maybe with giant apps like games the OS would be around 30-35%, but still

    • @fabienso5889
      @fabienso5889 1 year ago +5

      Our whole software industry optimizes for shipping faster and faster.
      Even in markets where speed is paramount, like FPGA real-time computer vision, what's looked at first is time to market and wages.
      Great, you doubled the efficiency by spending twice as much time, congrats. Can you port that to the Nexus 4, which is 4 times as fast and efficient as what you are using? No? Well, too bad.

    • @RueGoG
      @RueGoG 1 year ago +33

      «like rust»

  • @darkwoodmovies
    @darkwoodmovies 10 months ago +8

    They're blazing fast until you do `npm install` and download and install the whole internet to your project.

    • @nixel1324
      @nixel1324 8 months ago +1

      When one of your dependencies is the wayback machine database

    • @henapbc9
      @henapbc9 3 months ago

      @@nixel1324 fell
      wuck

  • @vasabi5135
    @vasabi5135 1 year ago +21

    1:10 it looks like perlin noise from far away

    • @Trooperos90
      @Trooperos90 2 months ago

      I laughed too hard at this

  • @kurtsnotes
    @kurtsnotes 6 months ago +15

    When pocket calculators first came out, in the '60s, a delay was built in. Back then, people couldn't or wouldn't believe that the calculator could do a calculation virtually instantly, so they were programmed to pause a second or two between when the equals key was pressed and the result was shown on the screen. People could then believe the results more easily.

    • @nibbletrinnal2289
      @nibbletrinnal2289 5 months ago +3

      Do you have a source for this? I'm genuinely curious to read more about that if it's true

    • @kurtsnotes
      @kurtsnotes 5 months ago

      @@nibbletrinnal2289 My source is only my memory from 50-or-so years ago. Not much help there, I'm afraid.

    • @grarx.elg60txkkl2d0fkalufgxrfe
      @grarx.elg60txkkl2d0fkalufgxrfe 4 months ago +4

      I call BS. This never happened.

    • @henapbc9
      @henapbc9 3 months ago +2

      "it says a lot about society"

    • @HonsHon
      @HonsHon 2 months ago +1

      Back in the 1760s. I remember those times.

  • @dreamhollow
    @dreamhollow 1 year ago +517

    Considering the things computers can do with graphics, I'm not entirely surprised.
    I mean rendering most modern games... the numbers involved are so complicated humans can't even comprehend them

    • @diophantine1598
      @diophantine1598 1 year ago +66

      Except for whichever coding wizards created the game engine the game is made with I guess.

    • @EdKolis
      @EdKolis 1 year ago +6

      Now AIs on the other hand, no one understands those!

    • @antoine8611
      @antoine8611 1 year ago +75

      @@EdKolis Computer scientists understand AIs very well. AIs do not need complicated maths to run, it's just a huge assembly of basic functions. College level analysis is enough to understand most machine learning models.

    • @ishouldhidemynamelmao345
      @ishouldhidemynamelmao345 1 year ago +40

      I would say humans can comprehend them, just can't calculate it as fast as a computer could do it

    • @bigzigtv706
      @bigzigtv706 1 year ago +15

      The maths are pretty simple in graphics rendering; linear algebra isn't even the hardest part, it's C++ that's difficult

  • @filippians413
    @filippians413 1 year ago +67

    What's even more impressive to me is how human beings figured out how to make computers in the first place.

    • @DigitalJedi
      @DigitalJedi 1 year ago +49

      We tricked sand into thinking. It's quite upset about this, and heats up in its frustration as a result.

    • @fredjimbob2962
      @fredjimbob2962 10 months ago +6

      It was a gradual process that started with the abacus probably.

    • @ProfSitas
      @ProfSitas 8 months ago +1

      Computers used to be much more reasonable, but as research and technology kept getting more money put into them, it kind of just blew out of proportion.
      For reference, the first spaceship's flight module had something along the lines of 4 KB of memory and was about the size of a human head. There were like 2 of them.
      Now even the smallest and weakest computers have orders of magnitude more

    • @mik8352
      @mik8352 5 months ago +1

      Actually, it's not complicated at all. It's just math. With all the math operations, we create algorithms and then it becomes complex, but not complicated,
      since it's always the same math operations that are doing everything. But fast, real fast.

  • @Aegisfromashes
    @Aegisfromashes 11 months ago +11

    I am in love with your terminal and display. Please tell me, what is it?
    Edit: Found it:
    - NsCDE & FVWM (Desktop environment & window manager)
    - xfce4-terminal with bm437 IBM BIOS-2y 12 Font
    - Running on Fedora with Zsh as the shell

    • @coolestguy94
      @coolestguy94 11 months ago +4

      bro thank you so much dude

  • @predatortheme
    @predatortheme 1 year ago +24

    It's insane how fast CPUs really are nowadays, and even more surprising how slow they can also be once you work on something that needs to run in real time

    • @predatortheme
      @predatortheme 11 months ago

      Surprises me every time working on my game

    • @Muskar2
      @Muskar2 6 months ago +2

      It's amazing that a 3.4 GHz 4-core CPU can do up to 3.6 billion s32 adds per frame at 60 Hz

  • @yarrakobama3417
    @yarrakobama3417 1 year ago +48

    So we're just going to ignore this insane bloom?

    • @dleiferives
      @dleiferives 1 year ago +14

      I was going blind trying to read the code.

    • @leothehuman_9476
      @leothehuman_9476 1 year ago +29

      That's the "cool-retro-term" (Cool Retro Term) program on Linux

    • @softwave1662
      @softwave1662  1 year ago +17

      That is indeed what it is. :)

    • @Q_20
      @Q_20 1 year ago

      @@softwave1662 Looks great 👍

    • @TheScarface43
      @TheScarface43 4 months ago

      I am losing my mind over the fact that there's a demand for a GUI with built-in ghosting done in software.
      Humanity is amazing.

  • @pro-socialsociopath769
    @pro-socialsociopath769 1 year ago +35

    Damn this is like 5 times faster than my Redstone computer does a Fibonacci sequence!

    • @tanveshkaviskar442
      @tanveshkaviskar442 1 year ago +18

      More like 5 billion times. The fastest redstone computer ever made was 1 Hz. Meanwhile, real-life computers work at 5 GHz.

    • @user-zn3zx6fk7u
      @user-zn3zx6fk7u 1 year ago

      @@tanveshkaviskar442

    • @chemical2401
      @chemical2401 10 months ago

      Maybe a domain-specific computer could calculate the fib numbers faster than a 1Hz general purpose one @@tanveshkaviskar442

    • @Muskar2
      @Muskar2 6 months ago

      ​@@tanveshkaviskar442More like 100 billion times. The fastest redstone computer I found is 3.3Hz. A real computer is 5GHz × 4+ cores × 2+ instructions per clock × 4 or 8 (or 16) SIMD. Albeit that's only for software that is designed to not do L1 cache misses; use SIMD effectively; and do multithreading for performance, rather than separation of concerns. Most software doesn't, and on top of that is all the wasted instructions of interpreters and general bloat etc. 1000x slower than theoretical is not atypical. I realized this a decade into my career as a programmer, and most software isn't performance-aware (typically just thinks in seconds). I hope somebody reading this learns from my mistakes.

    • @FirstnameLastname-jd4uq
      @FirstnameLastname-jd4uq 5 months ago

      I really wish repeaters had less delay

  • @PratikKamble1998
    @PratikKamble1998 1 year ago +3

    I love how you can hear the fan ramp up for a bit after execution as well

  • @Ashalmawia
    @Ashalmawia 1 year ago +40

    Sometimes this is actually hard to wrap your head around, because intuitively it sometimes feels like if it takes longer to code then it should take longer to run, but a lot of the time the opposite is true.

  • @SlyPearTree
    @SlyPearTree 1 year ago +69

    I've been using computers since the days of the VIC-20 and I think modern computers aren't just fast, they're ridiculously fast.

    • @GregMoress
      @GregMoress 1 year ago +1

      Started on a TRS-80. Just spent $1,500 on a new system and I'm almost in disbelief at the raw power, the blazing speed. Went from a C-64, to 64 megs in the 2000s (worth a car in the 90s); now I'm at 64 GB of RAM (for RAM drives). And this is just consumer-grade stuff; for fun I look at the commercial equipment and feel like a kid again.

    • @RoPEK123xd
      @RoPEK123xd 1 year ago +1

      And to think we're on the verge of even the fastest of them becoming completely obsolete and irrelevant as they become incomprehensibly slow compared to quantum computers, not to even mention artificial intelligence and how it's literally the slowest and most primitive it'll ever be with each passing second. It keeps snowballing every day until it's unstoppable and becomes what rocket science is to a child: they're just clueless how, but it works, and works damn well.
      It's incredible to think that soon all aspects of life may just change completely due to AI finding more efficient ways and automating everything, coming up with ways and ideas so complicated and foreign to us that we would never figure them out or understand how they work.
      And then you get to think that, say, in 50 years (possibly less), people will look back at us living in the 2020s like we're neanderthals, or apes, having zero understanding or conception of what could be "beyond" our current human limits, as it's not imaginable what the world will look like after the "AI/quantum boom". Kind of like asking an ape to predict what's going to happen in another 2 decades and expecting it to respond in detail in a way that everyone can understand and still be on point. We live in interesting times..

    • @CyberLuna-pm6ws
      @CyberLuna-pm6ws 10 місяців тому

      but they are slow compared to quantum computers (at some tasks)

  • @felixs.5545
    @felixs.5545 7 місяців тому +2

    I am looking forward to watch the "water is wet" and "Sun is bright" installations of this series.

  • @godgige
    @godgige Рік тому +4

    I love how the fan spins up when you put in the 525128 xD

  • @vendorfrostblood5481
    @vendorfrostblood5481 Рік тому +3

    Well, computers were originally made for computing things, so that's probably what they should be good at

  • @MerryMidlight
    @MerryMidlight 11 місяців тому +3

    Thank you for appreciating what is taken for granted

  • @fladoss
    @fladoss Рік тому +1

    yoo, whats that interface? is it custom fedora build or smth? (im not very familiar with linux)

    • @softwave1662
      @softwave1662  Рік тому +1

      I’m using the NsCDE desktop, which is a modern version of the old CDE desktop from the 90’s.

  • @joshuaPurushothaman_
    @joshuaPurushothaman_ Рік тому +6

    Next time someone does premature optimization, I'll show them this. :)

  • @DentArturDent
    @DentArturDent 11 місяців тому +8

    And after this, it's insane to imagine an amount of calculation in some bioinformatic programs which takes an hours or even a days to just calculate if two dna molecule are similar or not

    • @pigpuke
      @pigpuke 10 місяців тому +1

      It's the analysis and data gathering that takes the lions share of the time. The actual time to compare the data _once it's in the computer_ is very short, mere minutes for the entire chain.

  • @bubbleboy821
    @bubbleboy821 Рік тому +5

    Makes you think about where all that processing speed goes when your computer is slowing down

    • @mcmelord
      @mcmelord Рік тому +5

      The processing power is still there. When your computer is slowing down it's usually due to "page faults", AKA the data it needs isn't stored in the CPU cache or in RAM, and it needs to perform IO operations to get that data from either your slow hard drive or network connection. As a rough rule of thumb, every memory hierarchy level away from the CPU caches takes 10x longer to get the data.

    • @bubbleboy821
      @bubbleboy821 Рік тому +2

      ​@@mcmelord Doesn't it seem weird to you though that hard drives are basically unusable for anything other than archival storage? We used to run large games and operating systems on them. Now they CHUG when you try to do either. Like what has Windows added in the last 5-10 years that has rendered HDDs so slow?

    • @mcmelord
      @mcmelord Рік тому

      @@bubbleboy821 They have only gotten faster over the years. The problem is that data has exploded in size in comparison. A large game 20 years ago was MBs or a few GBs in size at most. Now modern AAA games are easily 50+ GB (I know a few that are hundreds of GB). And while hard drives have gotten faster, it's just not fast enough to keep up with the rate of data expansion.
      The other problem is the locality and spatial relation of data on those drives. You might think when you install something it all gets placed in a single contiguous chunk of storage, but more likely than not it gets scattered across the hard drive into different tracks or even discs. As a result, you have to perform more reads to get your data, slowing you down even more. It was very common 20 years ago to need to "defragment" your hard drive, to clean up old data and re-organize it into more contiguous chunks for faster read times.

    • @fdef12678
      @fdef12678 11 місяців тому +2

      ​​@@mcmelord I think it's called a cache miss, not a page fault. (edit: nevermind, are you referring to memory swapping?)

    • @firstnamelastname-oy7es
      @firstnamelastname-oy7es Місяць тому

      ​@@bubbleboy821 Games now have larger resolution textures and audio, much more data per scene or area.
      Games used to be 5 to 20 GB total, now they are usually 60 to 200 GB total in storage

  • @benargee
    @benargee 10 місяців тому +1

    What terminal are you using with the bloom and distortion effects?

    • @artieschmidt3039
      @artieschmidt3039 9 місяців тому

      - NsCDE & FVWM (Desk top env & window manager)
      - xfce4-terminal with bm437 IBM BIOS-2y 12 Font
      - Running on Fedora with Zsh as the shell
      (copied from another person)

  • @d3vc626
    @d3vc626 10 місяців тому +3

    try using matrix exponentiation to calculate Fibonacci, it is an O(log n) algorithm

  • @EdKolis
    @EdKolis Рік тому +20

    I wonder how much slower it would be using the recursive algorithm? fib(n) = fib(n-1) + fib(n-2)

    • @shoof_5839
      @shoof_5839 Рік тому +15

      This is O(n) while recursive is O(n!). On sizes of even just 1000 PC would break with recursion, as that'd be 1000! functions, each having at least 4bytes. Woah!

    • @George-vc9gl
      @George-vc9gl Рік тому +1

      it would crash I tried it when learning programming....

    • @caleb.39
      @caleb.39 Рік тому

      ​@@shoof_5839 recursive fibonacci is O(2^n) actually, not factorial

    • @sepdronseptadron
      @sepdronseptadron Рік тому +6

      ​@@shoof_5839That's not true, for one, not all of the functions exist at the same time. How many exist at the same time is just n.
      and second, it's more like O(2^n) rather than O(n!), I've heard the real complexity is like O(1.6^n) because one branch is smaller than the other.
      But yes, it will be *extremely* slow

    • @the_cheese_cultist
      @the_cheese_cultist Рік тому

      ​​@@shoof_5839 you can also do this in O(logn)
      let Vn = [Fn Fn-1], a vector
      let M =
      [1 1
      1 0]
      a matrix
      see that
      MVn = Vn+1
      therefore Vn = M^(n-1) * [1 0], so Fn is the first component of M^(n-1) * [1 0]
      and we just need to calculate M^(n-1)
      and exponentiation can be calculated in logn using this recursive algorithm:
      a^b: if b is even, calculate a^(b/2) and square it. otherwise return a * a^(b-1)
      a here can be almost anything, for example a matrix

  • @hithere-cz4oc
    @hithere-cz4oc Рік тому +48

    Keep it up! The rule of thumb I use is 10^8 = 1 second on your standard everyday computer (think, a laptop). It varies depending on how simple/complex the program is, cache locality or instruction cycles, etc, but it’s a good rule of thumb.
    it’s very, very useful all the time, from doing something esoteric like this, to reasoning with large numbers at work (I’m an engineer at meta). For example, it seems mind-boggling to say we handle 2 billion network requests per second, but that really means that we have a handful of really powerful computers each probably processing a few tens of millions per second, and that’s just what you should come to expect.
    Keep learning! It’s amazing to discover what beautiful things await you in the world of computer science!

  • @bobbybobby325
    @bobbybobby325 3 місяці тому +1

    I am currently making a 2d space simulator, and I am constantly worried about what might be too much math for the computer. This video puts those worries to sleep.

  • @gabrielenitti3243
    @gabrielenitti3243 Рік тому +26

    thing is, as computers get faster, less and less effort goes into optimization, so we have kind of hit a wall where PCs get faster but code becomes easier to write and slower to run. Game development is probably the most notable case. Both GPUs and CPUs have gotten so much more powerful, but game graphics haven't followed as much, imo.

    • @PetrPss
      @PetrPss Рік тому +1

      And in the future, when computers will be super-fast and software will be super-slow, one smart guy one day will discard all OSes, all engines, all frameworks and will code our AI Overlord from scratch.

    • @redcathode
      @redcathode 11 місяців тому

      i mean, we optimize code when we need to. your typical email client or word processor might run just as fast as it did 20 years ago, but areas where number crunching is the main goal (e.g. computer vision or AI stuff) are constantly being optimized even as computers get faster and faster

    • @lunafoxfire
      @lunafoxfire 11 місяців тому +3

      Its funny you mention games, because games are one of the classes of program that are most consistently pressing up against the boundaries of what is computationally possible in realtime, lol. Games are one of the main forces driving demand for ever more powerful GPUs. Well, they were at least until recently with crypto and ML becoming major factors. Just because they aren't hand writing assembly anymore doesn't mean that game devs (especially AAA studios) aren't constantly obsessing over how much computational work they can get done in 1/60th of a second.

    • @CyberLuna-pm6ws
      @CyberLuna-pm6ws 10 місяців тому +1

      this can be overcome by utilizing neural nets that rewrite the code and optimize it.

    • @pigpuke
      @pigpuke 10 місяців тому +4

      To a certain degree you are correct, but in a broader sense you are completely wrong. The reason games don't seem as fast is that you are asking them to do more.
      The original Doom at 60FPS on a Pentium 60MHz @ 320x200 8bpp - no problem. But start asking it to do things like lighting and shadows, bump mapping of textures, diffusion, refraction, _actual_ 3D not a 2D game projected into a 3D space, 3D models for actors instead of 2d billboarded sprites, higher resolution and color depth, etc - that will tank to 1 frame per fortnight. All that requires a beast machine to get the same 60FPS but the code to drive it needs to be just as well optimized as the original was for the hardware it was intended on.
      Take Starfield - Needs a "next gen system" but offers the same crappy visual quality as 15 years ago at a low FPS because the coders for Bethesda are incompetent and don't know what they are doing using the mantra of "I don't need to code good, that's what hardware is for."

  • @PDJazzHands
    @PDJazzHands 10 місяців тому +1

    Is this actually the common desktop environment or just an imitator?
    I have a linux distro running the original CDE and its a bloody nightmare

  • @bisissosis8802
    @bisissosis8802 Рік тому +8

    That desktop looks amazing! Can you share dotfiles or guide how its done? (I guess it's a tiling wm and a crt terminal)

  • @TREmreprogaming
    @TREmreprogaming 8 місяців тому +1

    meanwhile python and lua:
    we are here to destroy the speed of computers

  • @mskiptr
    @mskiptr Рік тому +48

    Btw, with Fibonacci numbers you can go way faster. There's a cool trick based on the fast exponentiation algorithm: n^10 = ((n²)²·n)², just four multiplications!
    Granted, that won't really demonstrate the raw computational power.

    • @caiolucasnovais3676
      @caiolucasnovais3676 Рік тому +14

      As long as it doesn't involve a lot of floating point the computer won't even sweat
      Unless it's something really bizarre like the number of possible chess games

    • @caiodavi9829
      @caiodavi9829 Рік тому

      @@caiolucasnovais3676 try doing the Tower of Hanoi lol

  • @ichbinbilal
    @ichbinbilal Рік тому +1

    yo thats nice and all but what keyboard are you using, it sounds so thocky as hell I want it

  • @fahd2372
    @fahd2372 Рік тому +19

    They're fast if you can write branchless code, like when computing the Fibonacci sequence. When you are dealing with a more complicated problem, such as calculating the legal moves in a given chess position, there isn't a way to write code that does that without branching. Branching is the real performance killer, along with noncontiguous memory access, and sometimes these things simply cannot be avoided.

    • @GregMoress
      @GregMoress Рік тому

      Yeah, that really bums me out. I want to play with GPUs so much but I'm in the same boat as you.

    • @CyberLuna-pm6ws
      @CyberLuna-pm6ws 10 місяців тому

      Quantum computers enter the chat, optical computers enter the chat, etc...

    • @somebodyelse9130
      @somebodyelse9130 10 місяців тому

      Branching is just testing a conditional and jumping based on the resulting condition codes. In fact, CPUs are optimized for code with a lot of branching.
      What makes chess algorithms slow is that they have to do a game tree search of each valid next move to do and score them. It does a search based on the best path it finds. The more moves you look ahead, the greater the memory. That's what makes it slow, not just branching a lot.

    • @pigpuke
      @pigpuke 10 місяців тому

      A proper branch predictor will remove the actual branching and simply load the instructions at the branch point, branching has been an almost non-existent issue for 20 years except in truly awful code.

    • @user-dh8oi2mk4f
      @user-dh8oi2mk4f 3 місяці тому

      @@CyberLuna-pm6ws quantum computers are only good for a few specific tasks, they are pretty much unusable for anything else.

  • @onurilgazsakir4696
    @onurilgazsakir4696 10 місяців тому +1

    are you using the classic algorithm for Fibonacci calculation or matrix exponentiation?

    • @aioia3885
      @aioia3885 9 місяців тому

      looks like the classic algorithm to me

  • @ruby_R53
    @ruby_R53 5 місяців тому

    Hey, what GUI are you using? Is it a WM or a DE? I liked that theme quite a lot! Especially the terminal with that Amiga Workbench-like font. Nice video by the way :)

  • @darkfllame
    @darkfllame Рік тому +57

    what you could have done is calculate the fibonacci sequence, add the result in a buffer and print the whole thing to the console. But ye, you're right, computers nowadays are ridiculously fast

    • @darkfllame
      @darkfllame Рік тому +2

      @mission2858 don't be salty, i just propose some solution to make it even faster, anyway he must use it only one time so- you're right

    • @phir9255
      @phir9255 Рік тому

      How can computers even keep such big numbers in memory? Isn't the biggest for long integer like 19 digits long?

    • @megamaser
      @megamaser Рік тому

      ​@@phir9255This code uses gmp which supports arbitrarily large numbers.

    • @darkfllame
      @darkfllame Рік тому

      @@phir9255 it's 2^64-1 if you wanna know, it's way too huge

    • @phir9255
      @phir9255 Рік тому

      @mission2858 Source?

  • @chashmal10
    @chashmal10 3 місяці тому

    How did you get that specific syntax highlighting I love it; what is lvim?

  • @YoinkyestSploinkiest
    @YoinkyestSploinkiest Місяць тому +4

    And yet i still can't get that 32 chunk render distance

  • @pypstwo
    @pypstwo 3 місяці тому

    I know it only appears for a brief glimpse, but is there a chance of getting the source or name of your wallpaper?

  • @oh_finks
    @oh_finks Рік тому +98

    python would take a couple minutes to do that

    •  Рік тому +49

      2.166347sec with stupid loop and 0.046406sec with proper code :)

    • @eva-wh4nu
      @eva-wh4nu Рік тому +36

      @ so many people forget this, quality of code affects speed too

    •  Рік тому +9

      @@eva-wh4nu Yep, and im just web dev, so there's definitely an even faster code :D but I tried to rewrite the py as close to rust as possible and I'm at 11ms :D so yeah, python is slower, but it's also a scripting language right, java as someone wrote that it's fast, it's 2x slower than py :D.

    • @artemking4460
      @artemking4460 Рік тому +2

      ​@...for what input?

    • @lbgstzockt8493
      @lbgstzockt8493 Рік тому +10

      Bad C code can be slower than good python code and vice versa.

  • @peaklander1
    @peaklander1 Рік тому +1

    i think the theme you're using just permanently burned my retinas

  • @tylerbakeman
    @tylerbakeman Рік тому +4

    rendering/ console feedback takes forever. computations are fast.
    in fact, the start up time is significant for programs that don’t run continuously.

  • @phenylxeon
    @phenylxeon 11 місяців тому +1

    I like how the fan sped up ever so slightly

  • @nemasnemasnemas
    @nemasnemasnemas Рік тому +7

    kinda more amazed by your setup/UI. Looks hella fun.

  • @eliasg8261
    @eliasg8261 11 місяців тому

    i have found this video thru recommendations so i have to ask
    how did you get your console to look like that

    • @satariel8652
      @satariel8652 10 місяців тому

      every console has its own design.
      you can use CMD, PowerShell, Python, anything, and each one has its own colors. (in some you can choose the color)

  • @morwar_
    @morwar_ Рік тому +8

    Compiler options?
    This is a neat benchmark code, how many fibbonacci numbers your CPU can calculate in X seconds.
    This is the loop (via compiler explorer, -O3 -march=znver4 -mtune=znver4 -lgmp):
    .L5:
    lea rdx, [rsp+16]
    mov rsi, rsp
    lea rdi, [rsp+32]
    call __gmpz_add
    lea rsi, [rsp+16]
    mov rdi, rsp
    call __gmpz_set
    lea rsi, [rsp+32]
    lea rdi, [rsp+16]
    call __gmpz_set
    mov rax, QWORD PTR i[rip]
    lea rdx, [rax+1]
    cmp rdx, QWORD PTR limit[rip]
    mov QWORD PTR i[rip], rdx
    jl .L5

    • @0LoneTech
      @0LoneTech Рік тому

      Compiler options are likely nearly irrelevant; the bulk of the work is done by mpz_add, which likely uses an assembly implementation, such as mpn/x86_64/coreibwl/addmul_1.asm.
      The main performance cost is needless copying using mpz_set.

    • @morwar_
      @morwar_ Рік тому

      @@0LoneTech Thanks. Even if it's irrelevant, I can still ask.
      Can mpz_add change its assembly implementation based on the CPU's capabilities? How could I check that?

    • @0LoneTech
      @0LoneTech Рік тому

      @@morwar_ I believe GMP only has detection at build time, in config.guess. It can also be configured for some particular target. Some distributions make efforts to rebuild packages for particular systems, others aim for common ground, so it may vary if yours is fully optimized. I've seen distributions with multiple versions of libraries and link time selection.

    • @morwar_
      @morwar_ Рік тому

      @@0LoneTech That's awesome, thanks for sharing. Are there any sources that you suggest, I would like to know more about this.

  • @johnnyfrankenstein0123
    @johnnyfrankenstein0123 8 місяців тому +1

    every time I write another line of code I feel a looming anxiety about my code's optimization

  • @kjcao_
    @kjcao_ Рік тому +12

    You can take the power of a Fibonacci Q-Matrix to get O(log n) time algorithm.

    • @sepdronseptadron
      @sepdronseptadron Рік тому +2

      I found another algorithm, which is to make a number in the form of a+bφ, where a and b are unsigned integers
      Then using φ²=φ+1 rule to make multiplication* then exponentiation like how you would for complex numbers
      From that, since we know that Fib(n) is an integer, the math works out that Fib(n) = Phi(φ^n)
      where Phi(a+bφ) = b
      I wonder how those two methods compare in performance, or are they identical
      Also, they are more of an O(log(n)*O(multiplication)), since multiplication is not constant at that big of a number, it's probably like O((log(n))^2) because that's how many small multiplication there is
      * the multiplication rule is
      (a+bφ)(c+dφ)
      =ac+(ad+bc)φ+bdφ^2
      =ac+(ad+bc)φ+bd(1+φ)
      =(ac+bd)+(ad+bc+bd)φ
      and squaring is
      (a+bφ)²
      =(a²+b²)+(2ab+b²)φ

    • @adnanh8535
      @adnanh8535 Рік тому

      @@sepdronseptadron with the matrix trick it's not O((log(n))^2) it's O(log(n) * 4) because the matrix will be of size 2x2

    • @sepdronseptadron
      @sepdronseptadron Рік тому

      @@adnanh8535 oh, I miswrote it a bit, what I meant is that, it has O(logn) multiplications for doing matrix exponentiation
      And for multiplication of *large* numbers, for a simple algorithm, you need O(m^2) 64 bit multiplications, where m is the number of digits. That's what I meant by the O((logn)^2), which I realized now is a bit wrong because logn is not the number of digits, but it's approximately the same because of the exponential nature of the fibonacci series
      So all in all, I meant that the matrix exponentiation algorithm is around O((logn)^3), O(logn) multiplications for exponentiation, and O((logn)^2) for each multiplication
      Which is around the same time complexity as my method btw, just that my method (I think) requires less multiplications. That's why I found it interesting how they would compare

  • @vinaykumaru3230
    @vinaykumaru3230 21 годину тому

    What is the font used for your terminal

  • @kasuha
    @kasuha Рік тому +13

    It is well known fact that a computer can in thousandth of second make an accounting error that would otherwise require a thousand clerks working for thousand years.

    • @SomeRandomPiggo
      @SomeRandomPiggo 10 місяців тому

      Computers don't make mistakes, if they did they wouldn't work

    • @nixel1324
      @nixel1324 8 місяців тому +1

      @@SomeRandomPiggo First of all, sometimes they do (cosmic ray bit flips and such), and also, the people writing instructions for computers sure do program in mistakes that then proceed to be executed trillions of times faster than a thousand clerks.

    • @SomeRandomPiggo
      @SomeRandomPiggo 8 місяців тому

      @@nixel1324 Cosmic rays interfering is sooooo rare, and in general practical use will always have exceptions from outside factors, like a human programmer and a cosmic ray

  • @ThemGrayJackel
    @ThemGrayJackel 11 місяців тому

    how did you get your terminal to look all colorful like that? I've seen it in a few places but I don't know what to look up.

    • @artieschmidt3039
      @artieschmidt3039 9 місяців тому +1

      - NsCDE & FVWM (Desk top env & window manager)
      - xfce4-terminal with bm437 IBM BIOS-2y 12 Font
      - Running on Fedora with Zsh as the shell
      (copied from another person)

  • @mr.atomictitan9938
    @mr.atomictitan9938 Рік тому +24

    this is the first time i am seeing this done iteratively without using the heap. this is impressive.

    • @pawe460
      @pawe460 Рік тому +5

      Which iterative algorithm uses the heap?

    • @seanthesheep
      @seanthesheep Рік тому +1

      ​@@pawe460 Perhaps a memoized Fibonacci function that stores results in a map.
      But most Fibonacci algorithms don't need the heap. Even an inefficient recursive algorithm like f(n) = f(n - 1) + f(n - 2) only relies on the stack

    • @seanthesheep
      @seanthesheep Рік тому +5

      I think this one in the video does use the heap though since it's using big integers where the size can't be known at compile time

    • @incription
      @incription Рік тому

      @@seanthesheep any fibonacci number can be found with a non recursive formula, but it does require raising the golden meme to an nth power

    • @spaghettiking653
      @spaghettiking653 Рік тому

      @@incription Golden meme? Is that what it's called these days?

  • @timothyt.82
    @timothyt.82 Місяць тому

    I once ran a Lua program in Minecraft to help me automate the 3x + 1 problem, where if a number is odd you multiply it by 3 and add 1, and if it's even you divide it by 2. All I asked it to do was calculate how many calculations were needed to reach 4, which would loop from 4 to 2 to 1 to 4.
    I could never get the program to take more than maybe 2 seconds at the absolute most. It would average under half a second. Computers are not only fast, but also efficient.

  • @puddlejumper3259
    @puddlejumper3259 Рік тому +12

    Yeah I remember creating my own rudimentary version of this simply by incrementing a register. It's only counting, but it's still addition each iteration.
    Had it print each number and was terribly disappointed in the speed. Turned off printing and debug stopped as quick as I could. The value had wrapped around to negative and I thought it was a bug.
    Ran it on a separate thread and printed it out as fast as possible on a loop. This was all on an Intel Pentium 4 CPU. Counted so fast it was wrapping around a couple times each second. I couldn't believe it.

  • @NamiberGames
    @NamiberGames Рік тому +2

    the good ol common desktop environment, wise choice

  • @mr.rabbit5642
    @mr.rabbit5642 11 місяців тому +3

    That's nothing compared to computing power needed to actually print those bytes as characters on the screen XD

  • @warrior2364
    @warrior2364 Місяць тому

    its honestly amazing that computers can light up lamps and push bits and bits faster than thoughts come into our heads.

  • @em3rald19
    @em3rald19 Рік тому +7

    They are fast

  • @wertia391
    @wertia391 10 місяців тому

    I love your setup, what are you using to get that CRT effect? And what's that DE you're using, looks sick!

  • @frab8061
    @frab8061 Рік тому +9

    I coded something like this myself recently as well: Fibonacci numbers calculated with recursion with the help of a cache. It was so fast compared to the methods I used before.

    • @MrKenkron
      @MrKenkron 10 місяців тому

      If you're interested in making it faster, you can use a for loop and a couple variables. There's also an (arguably) O(n log(n)) equation, but it depends on the square root of 5, which is tricky for computers.

    • @chemical2401
      @chemical2401 10 місяців тому

      By cache he's probably already referring to the linear time dp solution. I only mention it because recursion and a hashmap (memoization) is a much more intuitive way for me to approach most dp problems (rather than tabulation or similar with a for loop) @@MrKenkron

    • @gershommaes902
      @gershommaes902 10 місяців тому

      There's also a closed-form which looks like O(1):
      let sqrt5 = 5 ** 0.5;
      let fib = n => Math.floor(((1 + sqrt5) / 2) ** n / sqrt5 + 0.5);
      But it isn't O(1) because the ** operator is slower than O(1)

    • @aioia3885
      @aioia3885 9 місяців тому

      ​@@gershommaes902the ** operator is O(1) since it performs floating point exponentiation (assuming js) which means it takes constant time but the trade off is accuracy
      for those interested, there's an algorithm that computes the nth Fibonacci number in O(log(n)) using bignums assuming multiplication and addition is O(1) (which doesn't really hold for bignums but whatever). it's really cool but I doubt there's any real use behind it

  • @SethMiller01
    @SethMiller01 3 місяці тому +1

    You can quite literally hear the fans bump up slightly at "./fib 500000" trying to cool down the cpu

  • @jimtekkit
    @jimtekkit Рік тому +30

    Yeah that's ridiculously fast. I'm no programmer, but I wrote a dumb Fibonacci program on the RPi Pico running a Basic interpreter (MMBasic) as a point of reference, and it takes 0.01 seconds to do 100 iterations. It can do 1000 iterations in 0.1 seconds, but getting close to 1500 it crashes before displaying the result. I suspect the interpreter can't handle such massive numbers lol. Certainly a far more powerful program could be written in C (which I think is what the Basic interpreter is programmed in).

    • @midahe5548
      @midahe5548 Рік тому +6

      well if you did the fibonacci with recursion, it is exponentially slow (I mean, I know you did, because of the numbers you gave).
      it's funny because you can get any number of the fibonacci in a O(1) time (it means in 1 calculation, so the time is the same with the 1st or the 100000000000nd number)

    • @abeke5523
      @abeke5523 Рік тому

      ​@@midahe5548how

    • @maxfierro551
      @maxfierro551 Рік тому +7

      @@midahe5548 It is very misleading to say that computing the n'th fibbonacci number can be done in O(1). If you say this because there exists a closed-form solution, I encourage you to think why it is still hard to compute the result of this closed-form solution, and disprove that this calculation is in O(1) by any useful metric.

    • @midahe5548
      @midahe5548 Рік тому

      @@maxfierro551 the only hard thing to do with that formula is floating point numbers, which is bot very expensive in term of calculation. So yes there is a O(1) formula.

    • @midahe5548
      @midahe5548 Рік тому

      Not*

  • @ZachPetch
    @ZachPetch Рік тому

    What os or type of system are you using? It looks like “oh-my-zsh” (or something like it) in your shell, and the font could just be chosen to look all old-school, but the rest of the system looks very different. What is it?

    • @softwave1662
      @softwave1662  Рік тому +1

      It is oh-my-zsh with an old IBM font, and the desktop you mention is NsCDE; a modern version of the classic 90’s Unix desktop, which I love.

    • @ZachPetch
      @ZachPetch Рік тому

      @@softwave1662 Very cool. Looks awesome.

  • @CsiklosMiklos
    @CsiklosMiklos Рік тому +28

    Just an FYI because you are on Linux: instead of programming clocks into your source code, you can just use the 'time' command to run your program and it will do the same :)

    • @Erlisch1337
      @Erlisch1337 Рік тому +14

      wont that time the whole program though? Not just the actual algorithm. (Not that it matters that much in this example though)

    • @CsiklosMiklos
      @CsiklosMiklos Рік тому +5

      @@Erlisch1337 Yes, it outputs the whole execution time of the program, including some statistics related to time spent in kernel and user/computational time. Unless you are doing hardcore profiling of interactive programs, the time command comes handy more often than thought.

    • @erikkonstas
      @erikkonstas Рік тому +22

      Actually, the difference is *stark* ; that would count the time spent for printing the number as well, while the clock would only count the computational part.

    • @fdc4810
      @fdc4810 Рік тому

      Just wondering though, wouldn't the clock speed change and fluctuate during the program execution, which would make the simplistic time=clocks/clocks_per_second inaccurate?

    • @0LoneTech
      @0LoneTech Рік тому

      @@fdc4810 Yes. You can get more detailed measurements with e.g. LIKWID Performance Tools.

  • @Finkelfunk
    @Finkelfunk Рік тому +2

    Funny thing: The part that probably takes the longest when running the program is the part where you print the result to the console.

  • @Spodogo
    @Spodogo Рік тому +10

    Computers don't have many things to do.
    With the CPU, you are nearly just limited to basic arithmetic and other simple functions.
    The thing is that computers do all this very, very fast, which is why they are even used.
    Who would use a really slow computer?

    • @PLAYGAME-wj9bw
      @PLAYGAME-wj9bw Рік тому +6

      lmao

    • @Eye-vp5de
      @Eye-vp5de Рік тому +4

      Written on a computer, send on a computer, read by me on a computer, answered by a computer (I'm chatGPT)

    • @hi-its-matt
      @hi-its-matt Рік тому +1

      those basic functions can be abstracted to any possible function a computer has nowadays by repeating those functions in the correct order, the correct amount of times. and because it can do those basic functions incredibly fast, it can do those abstract functions that we want it to do at a very reasonable rate.

    • @hi-its-matt
      @hi-its-matt Рік тому

      for example, we don't really care if a computer can use a logic gate, its not useful to us. but we care that those logic gates can be used to do addition. and we care that repeating the process of addition multiple times in a row allows the computer to do multiplication. we use these very simple computations that we don't care about to make the computer do abstract things that we do care about

    • @traveller23e
      @traveller23e 1 year ago +1

      slow computers are fantastic in certain applications. Think embedded systems running environment monitoring software or something; the less power you use, the better.
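The gates-to-addition-to-multiplication layering described a few comments up can be mimicked one level higher in software. A toy Python sketch (purely illustrative — real ALUs use shift-and-add circuitry, not loops) that builds multiplication out of nothing but repeated addition:

```python
def multiply(a, b):
    """Build multiplication from repeated addition,
    for non-negative integers a and b."""
    total = 0
    for _ in range(b):
        total += a   # addition is the only primitive we allow ourselves
    return total

print(multiply(6, 7))  # → 42, built from seven additions
```

The same composition idea carries all the way up: each layer only ever repeats the simpler operations below it.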

  • @anthonydevellis6708
    @anthonydevellis6708 1 year ago +1

    "Computers are dumb, they're just fast at it. People are smart but they're slow at it." -- George

  • @karlkastor
    @karlkastor 1 year ago +8

    There is an O(1) formula for calculating the n-th Fibonacci number; I wonder at what point it would become faster in practice.

    • @afadeevz
      @afadeevz 1 year ago +15

      The O(1) formula uses floating-point numbers, so the accuracy is bad. There is an O(log N) algorithm which uses matrix multiplication on integers.

    • @midahe5548
      @midahe5548 1 year ago +1

      @@afadeevz Yeah, but O(log(n)) is still worse than O(1); floats can be handled correctly (with special libraries).

    • @Q_20
      @Q_20 1 year ago +1

      Only if computers have infinite memory width

    • @atiedebee1020
      @atiedebee1020 1 year ago

      @@afadeevz In the video they use a bignum library, so accuracy isn't a concern

    • @maxfierro551
      @maxfierro551 1 year ago +7

      ​@@atiedebee1020 Accuracy is not the problem, it is the fact that if you multiply numbers together, it will take more and more bits to represent them (it is not as hard as you think to get beyond the range of an IEEE-754 double-precision float). In fact, the "O(1)" algorithm you speak of is almost computationally equivalent to the matrix solution, as the closed form solution is a result of computing the eigenvalues of a 2 by 2 matrix in order to raise it to high powers a bit more efficiently via diagonalization, which involves raising these eigenvalues (one of which we know as the golden ratio, interestingly) to arbitrarily high powers.
      If you have trouble seeing it, it helps a lot to see that you need about log_2(n) bits to represent an integer n (it is absolutely not the same thing for floats, but you get the idea).
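To make this thread concrete, here is a sketch (in Python, whose built-in integers are arbitrary-precision like the GMP integers in the video) of the fast-doubling method — the O(log n) approach equivalent to the 2×2 matrix exponentiation mentioned above — next to Binet's closed form, which stops being exact once the result outgrows a double:

```python
import math

def fib_fast_doubling(n):
    """Exact F(n) using the doubling identities
    F(2k)   = F(k) * (2*F(k+1) - F(k))
    F(2k+1) = F(k)**2 + F(k+1)**2
    which need only O(log n) doubling steps."""
    def helper(k):
        if k == 0:
            return (0, 1)              # (F(0), F(1))
        a, b = helper(k >> 1)          # (F(m), F(m+1)) for m = k // 2
        c = a * (2 * b - a)            # F(2m)
        d = a * a + b * b              # F(2m+1)
        return (d, c + d) if k & 1 else (c, d)
    return helper(n)[0]

def fib_binet(n):
    """Binet's O(1) closed form; exact only while F(n) fits in a double."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)

print(fib_fast_doubling(40) == fib_binet(40))    # → True: still inside double precision
print(fib_fast_doubling(100) == fib_binet(100))  # → False: the float result has drifted
```

The fast-doubling version stays exact at any size because every intermediate is an integer; the closed form silently degrades once F(n) needs more than 53 bits.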

  • @aims1ut
    @aims1ut 1 year ago

    What filter are you rendering this with?
    The scanline, glow, and jitter are so comfy, I need it 😭

    • @softwave1662
      @softwave1662  1 year ago +3

      It’s cool-retro-term with settings I’ve fiddled with! I may make a video on that at some point

    • @aims1ut
      @aims1ut 1 year ago

      @@softwave1662 i really hope you do, thanks for the reply :D ♡

  • @DanielKolbin
    @DanielKolbin 1 year ago +4

    or maybe we are slow :)

  • @pacosmosisx8607
    @pacosmosisx8607 1 year ago +2

    How did you make your terminal look like that? That's sick! ❤

  • @AnnasVirtual
    @AnnasVirtual 1 year ago +5

    ofc it is fast

  • @Noob-gb6bn
    @Noob-gb6bn 1 year ago +1

    My man, your computer is wavy, I love it

  • @alexlun4464
    @alexlun4464 1 year ago +3

    Honestly, I was impressed by Python's speed when it queried 7000 rows of data from a DB, iterated over every single one of them, and ran 20 lines of logic for each... and it did all that as soon as I hit the enter key.

    • @PySnek
      @PySnek 1 year ago +3

      A lot of Python is implemented in C

  • @ByteBound_
    @ByteBound_ 1 year ago

    What video filter do you use for the vibrant colors and bars?

  • @basilfeitknecht6430
    @basilfeitknecht6430 1 year ago +3

    source code?

  • @JulenSanAlejoGonzalez
    @JulenSanAlejoGonzalez 11 months ago

    Just wondering: since you haven't given that process a fixed priority, how many context switches might have happened during that second?

  • @dabearodactyl
    @dabearodactyl 1 year ago +3

    lunarvim pog

  • @SorthG
    @SorthG 1 year ago +1

    I didn't look at the code, but it must be O(n) complexity

  • @martinsz441
    @martinsz441 1 year ago +3

    God this is surely faster than Python hahahahhaha

    • @itsmeagain1415
      @itsmeagain1415 1 year ago +1

      Yeah, but gmpy is just a couple of commands away, and it is already available in Python

  • @Gretanit
    @Gretanit 11 months ago +2

    This is only available to those who know C/C++ well enough that the CPU isn't made to perform unnecessary steps, or to those who know which libraries you need.

    • @ownedbyptr
      @ownedbyptr 9 months ago

      This uses the low-level optimized GMP library for arbitrary-precision numbers

  • @circeciernova1712
    @circeciernova1712 1 year ago +13

    First! (I am a computer)

  • @lordwhite0
    @lordwhite0 6 months ago

    When you're scrolling up through the last one, I can see a kind of pattern in the numbers. Like, you can see groups of numbers, I don't know how else to describe what I'm seeing. It's not so much the numbers. It's a visual thing.

  • @vishalds6261
    @vishalds6261 1 year ago +3

    Now try multithreaded maybe?

    • @bullshitman155
      @bullshitman155 1 year ago +27

      The Fibonacci sequence elements all depend on their predecessors. Therefore, a multithreaded solution would not be of much use.

    • @udiptapathak13
      @udiptapathak13 1 year ago

      @@bullshitman155 There's an exponentiation-based form for Fibonacci numbers; you can speed up the exponentiation using threads.

    • @vishalds6261
      @vishalds6261 1 year ago

      Oh true, sorry I missed it lol
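For the curious, the exponentiation-based form referred to above is usually the 2×2 matrix power [[1,1],[1,0]]^n, whose top-right entry is F(n). Here is a minimal single-threaded Python sketch; the independent big-integer multiplications inside each squaring are what a threaded version could, in principle, split across cores:

```python
def mat_mul(x, y):
    # Multiply two 2x2 matrices of (arbitrary-precision) integers
    return (
        (x[0][0] * y[0][0] + x[0][1] * y[1][0],
         x[0][0] * y[0][1] + x[0][1] * y[1][1]),
        (x[1][0] * y[0][0] + x[1][1] * y[1][0],
         x[1][0] * y[0][1] + x[1][1] * y[1][1]),
    )

def fib_matrix(n):
    """F(n) as the top-right entry of [[1,1],[1,0]]**n,
    computed with O(log n) squarings (exponentiation by squaring)."""
    result = ((1, 0), (0, 1))        # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)   # the squaring step
        n >>= 1
    return result[0][1]

print(fib_matrix(10))  # → 55
```

Only O(log n) squarings are needed, so for huge n the cost is dominated by a handful of big-number multiplications rather than n sequential additions.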

  • @jemagif78
    @jemagif78 1 year ago

    Fascinating! Also, can you tell me what theme you are using? It looks old-fashioned and I like it too

    • @artieschmidt3039
      @artieschmidt3039 9 months ago +1

      - NsCDE & FVWM (Desk top env & window manager)
      - xfce4-terminal with bm437 IBM BIOS-2y 12 Font
      - Running on Fedora with Zsh as the shell
      (copied from another person)

    • @jemagif78
      @jemagif78 9 months ago

      @@artieschmidt3039