The limits of AI - a ramble including Sidewinder missiles

  • Published May 25, 2023
  • Putting Artificial Intelligence into perspective.
    References
    [1] Ministers not doing enough to control AI, says UK professor, retrieved 17/May/2023
    www.theguardian.com/technolog...
    [2] AI pioneer warns Government offering little defence against threat of technology, retrieved 17/May/2023,
    www.standard.co.uk/tech/the-t...
    [3] Formula 1 Turkish GP 2021 (User: SAİT71)
    commons.wikimedia.org/wiki/Fi...
    [4] Lewis Hamilton visiting fans at the 2018 British Grand Prix at Silverstone (User: Jen_ross83)
    commons.wikimedia.org/wiki/Fi...
    [5] Website of the International Mathematical Olympiad, retrieved 23/May/2023
    www.imo-official.org/problems...
    [6] Starship Flight Test by SpaceX
    [7] Curve-Fitting, xkcd, xkcd.com/2048/
    [8] Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter, retrieved 21/May/2023
    www.theguardian.com/technolog...
    [9] / 1607677190254235648
    [10] A. M. Odlyzko, “Newton's Financial Misadventures in the South Sea Bubble”, Notes and Records 73, 29 (2019).
    royalsocietypublishing.org/do...
    [11] List of prime ministers of the United Kingdom, Wikipedia,
    en.wikipedia.org/wiki/List_of...
    [12] Afghan High School Class of 2015 (Photo by NATO Training Mission-Afghanistan)
    commons.wikimedia.org/wiki/Fi...
    [13] Mnist Examples (User: Josef Steppan)
    commons.wikimedia.org/wiki/Fi...
    [14] Chandra X-ray Observatory by NASA,
    chandra.harvard.edu/resources...
    [15] HMS Illustrious' Operations Room team during an exercise, Photo: POA(Phot) Ray Jones/MOD
    commons.wikimedia.org/wiki/Fi...
    [16] Steelworks of BlueScope Steel Limited company in Port Kembla, Australia by Marek Ślusarczyk
    commons.wikimedia.org/wiki/Fi...
    [17] Wolfsburg VW-Werk (User: AndreasPraefcke)
    commons.wikimedia.org/wiki/Fi...
    [18] I. J. Goodfellow et al., “Explaining and Harnessing Adversarial Examples”, arXiv:1412.6572v3 [stat.ML] 20 Mar 2015, arxiv.org/pdf/1412.6572.pdf
    [19] K. Eykholt et al., “Robust Physical-World Attacks on Deep Learning Visual Classification”, arXiv:1707.08945v5 [cs.CR] 10 Apr 2018, arxiv.org/pdf/1707.08945.pdf
  • Science & Technology

COMMENTS • 349

  • @ImprobableMatter
    @ImprobableMatter  1 year ago +30

    I am re-opening a Discord server, read below: discord.gg/cTjcKqPCEk
    This is more of a rambling, low effort video. Bonus points to the first person to correctly identify both references in the thumbnail. I wanted an excuse to talk about the channel, so please go ahead. Let me know if there’s a topic you want to see in a regular video, if you want a serious livestream, or even something like a video game stream. If you want to join the Discord, go ahead, but then you have to actually talk in there, and someone needs to volunteer to be a mod etc.

    • @TerminatorV250
      @TerminatorV250 1 year ago +9

      1. WOPR, the supercomputer from the movie WarGames.
      2. The Self-Aware Machines technology from the best entry in the Civilization series: SMAC.

    • @irgendwieanders2121
      @irgendwieanders2121 1 year ago

      @@TerminatorV250 Overachiever...
      Congratulations.

    • @thefullmetalmaskedduo6083
      @thefullmetalmaskedduo6083 1 year ago +3

      Calling this stuff “AI” is a bit of a misnomer because it is not really artificial intelligence like that seen in science fiction; it’s just glorified computer algorithms.

    • @user-me7hx8zf9y
      @user-me7hx8zf9y 1 year ago +1

      @@TerminatorV250 beat me to it.

    • @FirstSecondThirdFourth
      @FirstSecondThirdFourth 1 year ago +1

      This was way lower effort than your other videos, for which your channel really stands out and which I truly admire.
      While there is naturally a lot of fear mongering out there about AI, I think the sentiment that it could have massively negative repercussions is well deserved. It’s becoming evident that its impact will be more intense than any other technological revolution: in magnitude, breadth, and per unit of time.
      You covered some notorious doom scenarios which are arguably unrealistic for some of the reasons you pointed out. However, this really does little to dismiss the ever-growing space of plausible yet significantly negative consequences that we need to start thinking about. I say growing because it doesn’t appear to me that a lot of these have ever been seriously researched. Again, I’m not talking about instant catastrophes resulting from an anthropomorphized AI. Hope to hear your more nuanced views on this in a future video.

  • @keilerbie7469
    @keilerbie7469 1 year ago +76

    I think it's pretty incredible that the original Sidewinder missile could hit anything with such a detection method.
    Also, your videos never fail to be clear, concise and straight to the point, even with a more rambly tone.

    • @michaelmoorrees3585
      @michaelmoorrees3585 1 year ago +9

      It's a genius solution from when digital computers barely existed and were the size of a large room! Purely analog, with a simple ONE PIXEL IR sensor. That spinning reticle is genius, and probably not the first iteration.
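
The spinning-reticle trick can be sketched in a toy model (much simplified: real seekers used cleverer reticle patterns and modulation schemes, and the names and numbers here are mine, so treat this as an illustration of the principle only). A spinning half-transparent disc chops the single-pixel IR signal into a square wave, and the phase of that wave encodes the target's bearing:

```python
import math

def reticle_signal(target_bearing, n_samples=360):
    """One revolution of a toy reticle: a half-transparent spinning disc.
    The single IR pixel sees light (1) only while the transparent half
    covers the target's bearing, giving a square wave over one rotation."""
    samples = []
    for i in range(n_samples):
        rotor = 2 * math.pi * i / n_samples            # current reticle angle
        diff = (target_bearing - rotor) % (2 * math.pi)
        samples.append(1 if diff < math.pi else 0)
    return samples

def estimate_bearing(samples):
    """Recover the bearing from the phase of the first harmonic."""
    n = len(samples)
    c = sum(v * math.cos(2 * math.pi * i / n) for i, v in enumerate(samples))
    s = sum(v * math.sin(2 * math.pi * i / n) for i, v in enumerate(samples))
    # the "on" window is centred 90 degrees behind the bearing; add it back
    return (math.atan2(s, c) + math.pi / 2) % (2 * math.pi)
```

An analog circuit can extract the same phase using a reference signal from the reticle motor, with no computer at all, which is the commenter's point.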

    • @candle_eatist
      @candle_eatist 10 months ago +2

      Those China Lake guys are an ingenious bunch. Early missile engineers really had to come up with some funky stuff to overcome the limitations of their time.

    • @0MoTheG
      @0MoTheG 18 days ago +1

      It wasn't good, though. It could only hit a target that wasn't maneuvering, and only from certain angles: you had to be below and behind.

  • @davidmurphy563
    @davidmurphy563 1 year ago +89

    I code DNNs, so I thought I'd throw my 5 cents in.
    Dijkstra / pathfinding. Already the latest AIs are using tools to solve problems, so you would expect them to simply call the function like we do. Even an external one. Plus, expressing a linear relationship as a matrix-vector transform is actually efficient. Sure, there's an overhead in deciding to use it, but there's a much bigger overhead in the human doing the same. AIs can fail dramatically without warning though, I accept that point and it's likely fundamental. Although, so can humans.
    The Lewis Hamilton thing was important. Let's go back to the 90s and make the same argument with Kasparov. No AI (and really, the architecture is irrelevant, only the results) could beat him in the domain of chess. But it didn't stop at matching humans, which you seem to be suggesting; it very quickly went superhuman. In fact, this is a general trend, from calculators to traffic management: very subhuman to very superhuman in rapid succession. I can't think of a single example that sat at human-equivalent. Why would it? It's possible, but it's not an outcome we should expect.
    "The limiting factor is the speed of building and physically organising things" - there are plenty of problems which aren't limited by that, but ok, many certainly are. Are you saying an AI can't produce a more efficient construction and planning process? Is there something special about project management that means it can't be learnt? I doubt it. Now, for as long as you rely on humans to build stuff that's a weak link but, even then, an AI can optimise the hell out of the process. Put them in charge of factories / robots and I don't see why you would predict an inefficient outcome when they are optimised for efficiency.
    Ok, I've already written too much and I'm only on minute 3! Plus, we haven't even touched on GAI and alignment.
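
The route-finding point in this exchange is easy to make concrete: shortest-path is a solved subroutine that either a human or an AI can call, and neither can beat its answer. A minimal sketch, with a made-up toy graph (names and weights are mine):

```python
import heapq

def dijkstra(graph, start, goal):
    """Cost of a provably shortest path in a graph given as
    {node: [(neighbour, edge_cost), ...]}."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")                   # goal unreachable

# Toy road network (invented weights): the optimum A->B->C->D costs 3.
roads = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
```

Whether the caller is a person or a model, the function returns the same optimal answer in the same time, which is the video's argument in miniature.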

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +29

      With Dijkstra, my point is that a human or an AI with access to an efficient route finding function will both get the answer in the same time (from the function) and can do no better. The analogy with Hamilton/Kasparov is actually another example of this. The best AI will not beat Hamilton by much in the race if at all, because the time is limited by the speed of the car and Hamilton is already close to max efficiency. Like the first point, but function = car. Same with the last point - AI might optimize a little here and there, but won't magic up a rocket or a building or whatever.

    • @davidmurphy563
      @davidmurphy563 1 year ago +33

      @@ImprobableMatter I see what you're saying:
      1. A* (easier to spell!) looks like the most efficient algorithm, but unless there's a mathematical proof, we don't know it can't be improved. The search space of pathing algorithms is huge - how do we know A* isn't a local minimum? Recently, AlphaTensor made an improvement on the Strassen matrix multiplication algorithm that had looked absolutely unassailable. How do you know A* won't meet a similar fate?
      2. "The best AI will not beat Hamilton by much in the race if at all". I don't accept that at all. Take launch assist: it's banned in F1 because when it's used the car simply cuts through the pack at the start of the race. My MG4 has automatic emergency braking which gives reaction times even Hamilton can't come close to matching, and that's a dumb algorithm. AI could potentially be massively better: have an in-depth understanding of the surface, predict the actions of the other drivers, know the heat and thermodynamics of the car to a degree mere traction control could never match. It could run every lap within a whisker of perfection. Such an AI would likely lap him many times. But still, to your fundamental point that it is still just driving a car and subject to the same laws of physics: clearly that's correct, yes. Magic isn't a thing.
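
For context on the AlphaTensor point: Strassen's 1969 scheme is the classic example of an "obvious" algorithm being beaten, multiplying 2x2 matrices with 7 multiplications instead of the naive 8. A sketch of the identities:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products (vs the naive 8).
    Applied recursively to blocks, this gives O(n^2.807) matrix multiplication."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

AlphaTensor's contribution was discovering schemes of this kind for other block sizes and arithmetics that decades of human search had missed, which is why the "A* might be a local minimum" worry is not absurd.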

    • @Knirin
      @Knirin 1 year ago +12

      Nearly all item-production scheduling problems can be reduced to linear equations, and we have had methods for solving those equations since the 1700s and 1800s. The biggest problem with using those algorithms is that the bureaucracy keeps ignoring their output.
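
A toy version of that reduction, with invented numbers (a real scheduler would hand this to a proper LP/MILP solver rather than brute-force vertices, but for two products the optimum of a linear objective must sit at a corner of the feasible region):

```python
from itertools import combinations

# Plan production of products X and Y, profits 30 and 40 per unit, subject to
# 2x + 4y <= 80 machine hours and 3x + 2y <= 60 labour hours.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(2, 4, 80), (3, 2, 60), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y):
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

def best_plan():
    """Intersect every pair of constraint boundaries, keep the feasible
    vertices, and return (max profit, (x, y))."""
    best = (float("-inf"), (0.0, 0.0))
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                      # parallel boundaries, no vertex
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            best = max(best, (30 * x + 40 * y, (x, y)))
    return best
```

Here the optimum is 10 units of X and 15 of Y for a profit of 900; the commenter's complaint is that plans like this get computed and then overridden.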

    • @davidmurphy563
      @davidmurphy563 1 year ago +6

      @@Knirin Yup, linear algebra is Turing Complete.

    • @analogsamurai9576
      @analogsamurai9576 1 year ago

      Alignment is pointless; pruning the data of garbage goes against what biology did to give us intelligence. A proper GAI sifts without needing to be aligned, only told what is asked of it. Alignment begins from a false idea of what the procedural LLMs of today can compute, which they can't... you're right about everything else though.
      We have to remember the "AI" we are using now is traditionally AI as much as chess machines are, but the media and false actors in CS want it to be declared conscious and therefore dangerous, because apparently we regulate behavior now. It's a complete load of hype baloney for doomers to huff and investors to play monopoly with.
      General intelligence, even if built, will resemble a poorly built brain, because it is still a procedural computer algorithm; biology made brains in real time with all sorts of communication between the neurons.
      I think it's sad the media ran so hard with "it's conscious, it's conscious" when all LLMs are basically Minecraft world gen on crack.

  • @entrootentropy
    @entrootentropy 1 year ago +68

    The scariest thing about AI to me is what it puts into perspective: how easily impressed people are, and how vigorously humans jump at the opportunity to indulge in magical thinking when we encounter anything we don't understand. In an uncomfortable number of conversations that I have about AI, I leave depressed and honestly a little afraid, not because of what I fear AI might do with intelligence, but because of what I fear humanity might do with stupidity.

    • @leploeo7145
      @leploeo7145 11 months ago +2

      Very well put

    • @nemtudom5074
      @nemtudom5074 11 months ago

      It's the fear of the unknown
      #Lovecraftwasright

    • @CandidDate
      @CandidDate 11 months ago +1

      Ha ha! Take over the world. What if whales were more "intelligent" than humans? What if humans could be more like whales?

    • @davidsplooge14
      @davidsplooge14 10 months ago

      man, you suck ass. I hope I never meet you.

    • @theaureliasys6362
      @theaureliasys6362 10 months ago +1

      ​@@nemtudom5074Lovecraft was very very very very wrong.
      Ahem. Don't Google his cat's name.

  • @mikedrop4421
    @mikedrop4421 1 year ago +14

    Hey, I'm just glad we're talking about AI and not "The Blockchain". At least AI has some actual benefits.

    • @EspHack
      @EspHack 1 year ago +1

      Case in point: just as Newton was bad at stocks, you and our dear host here are presumably very capable people despite being seemingly unaware of bitcoin's significance.

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +9

      LOL, you saw the part about the South Sea Bubble and still brought up Bitcoin.

  • @bigpushing7167
    @bigpushing7167 1 year ago +37

    Your title says it all: AI has limits. Just like everything else in the world
    Thanks for making the video

    • @Neomadra
      @Neomadra 1 year ago

      What a pointless statement. The limit is breaking the laws of physics, agreed. But except for that, AGI has the potential to change everything. Better to take it seriously instead of burying your head in the sand.

  • @Tahoza
    @Tahoza 1 year ago +144

    Thanks for making this. The fear-mongering and/or simping over AI has become exhausting.

    • @thefullmetalmaskedduo6083
      @thefullmetalmaskedduo6083 1 year ago

      I don’t know why people even call this stuff “AI” when it is not even true AI like the kind seen in science fiction, which fear-mongers and simps love to compare it to; it is just glorified computer algorithms, no different from the algorithms in computers back in the 90s.

    • @Tahoza
      @Tahoza 1 year ago +8

      @@thefullmetalmaskedduo6083 I agree. All the recent hype, good or bad, is just marketing (albeit very successful marketing...).

    • @thefullmetalmaskedduo6083
      @thefullmetalmaskedduo6083 1 year ago +12

      @@Tahoza Exactly, the “AI” label is just a misnomer. It’s just glorified computer algorithms falsely advertised as artificial intelligence.

    • @Neomadra
      @Neomadra 1 year ago +6

      ​@@Tahoza No, it's not marketing. Academics agree on the danger, they don't care about marketing. It's more that many people seem to cope with the fact that soon A(G)I will be replacing most of the human workforce. Maybe in two years, maybe in 10, but the path is very clear.

    • @Tahoza
      @Tahoza 1 year ago +7

      @@Neomadra 1) The creator of this video is an academic and they don't agree.
      2) I'm an academic and I don't agree.
      3) Your response is evidence of how well the marketing is working.

  • @EmperorBun
    @EmperorBun 1 year ago +25

    Big fan of this video. Tech companies and VCs are hyping "AI" so hard and it's critical to be realistic about what new capabilities it will bring to the table (likely far fewer than they predict).

    • @NavidIsANoob
      @NavidIsANoob 1 year ago +12

      AI hype has a financial incentive. Silicon Valley found another bubble to blow up.

  • @3yron
    @3yron 1 year ago +4

    “Mechanized agriculture? We already can farm by hand, a tractor just does it faster using an alternate method. What’s the benefit?”

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +5

      "The limits of mechanized agriculture - a ramble including looms"

  • @jb76489
    @jb76489 1 year ago +4

    WOPR is from WarGames and the eye in the gear is the SMAC logo from Civ.
    Another thing that people don’t talk about with regard to AGIs is that they are, at their heart, just math, and we know from Gödel that there are limits to math.

  • @anekdoche7055
    @anekdoche7055 1 year ago +3

    i was about to scroll past this because i thought it didn't have sidewinder missiles in it, but then i read the title and oh boy was i happy

  • @EspHack
    @EspHack 1 year ago +4

    AI won't be god, but it'll get close enough.
    The possibility of solving logical problems by simply throwing enough energy & compute at them is quite profound.

  • @TroyRubert
    @TroyRubert 1 year ago +4

    Great video definitely do more like this. Especially with a great intro and satisfying outro. This universe is wild. The deeper and deeper one looks, the more complicated and perplexing it is. Our ancestors would have burned us at the stake for the things we take for granted.

  • @garyplewa9277
    @garyplewa9277 1 year ago

    Great video, as are all of the others from you that I've watched since discovering your channel only 2 days ago. The clarity of your explanations and illustrations is second to none. I look forward to your subsequent videos. Thank you for making them.

  • @MrAngry777
    @MrAngry777 8 months ago +2

    Thanks for the video. Although I work as an IT developer, I'd argue that some of the issues you mentioned aren't quite that simple. For example, you suggested that three teams working 8-hour shifts could achieve the same results as AI. In certain applications, however, it's more akin to three teams of 100,000 each. While that's theoretically possible, it's implausible. Some problems, especially in areas like material science, drug discovery, and certain logistical challenges, are simply solved faster by exploring the entire solution space. I don't view AI itself as a threat, but it certainly has the capability to turbocharge a small group of people, potentially rendering the rest obsolete. Will this lead to a utopia where only a few need to work, or a dystopia where only a few live well? That's a matter of one's perspective on human nature. But if history is any indicator, I wouldn't get my hopes up.

  • @ghunter0939
    @ghunter0939 1 year ago +12

    I'm usually a lurker here, but I'd like to share my opinion on this. My fear of and fascination with AI come from the same place. I learned about OpenWorm, a publicly open science project where the goal is to simulate a microscopic worm down to its DNA and neurologic functions, and where the current model is already so advanced that it can swim on its own. Although the project is far from complete, it already shows what might be possible in the future. Simulating a human brain might be possible, and that is terrifyingly fascinating. Also, we are going beyond the realm of silicon and stepping into biological computers, using fungal networks and even neurons to process information like a computer (I believe some researchers were able to teach neurons in a petri dish to fly a plane in a videogame, but I'm too lazy to search for the paper).
    So what are the implications of simulating a human brain? Does it have human rights? What if I copy the mind of someone that already exists - is that ethical? Will prominent people in their respective fields have to protect their brains with copyright laws? Imagine having to trademark your own, well... "self", for lack of a better word, so your intellect won't be stolen by some corporation with the means to do it.
    This all sounds like something out of a scifi, I know, and I hope it remains like that, but a part of me can't help but fear that this is what is going to happen.
    AI at the moment is a tool, and like any tool, if it's misused it can hurt us more than help us. I hope we learn from Ian Malcolm and not only ask if we can do it, but if we should do it.

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +5

      Good points. They've been discussed for many decades of course, but I suppose I should have put a positive note: at least now politicians will take these issues seriously, which they should have done at the latest in 1983 when the movie in the thumbnail came out.

  • @maxm2639
    @maxm2639 1 year ago +2

    Good discussion, good points. However, as always, there will be unintended and unforeseen consequences. "May you live in interesting times." The old curse certainly applies.

  • @MasamuneX
    @MasamuneX 1 year ago +3

    I think the examples of a mathematically perfect solution already existing as a limitation of AI aren't a real point, since large language models are already being given tools, and are being taught to use them and evaluate their goals, to achieve things that language models alone are not capable of. For example, when told to calculate inflation on a $26,000 amount from 1903 to the current year, instead of spitting out a general historical rate, newer, more advanced LLMs will look up websites that calculate it year by year, read the webpage, and enter the amounts into the correct fields to get more accurate information. Personally, a limitation I think will become bigger is that the training data for AI will already contain AI-generated content, and be diluted over time until no further improvement is possible. The quality of the output is getting better at a faster rate than the dilution for the moment, though, so we could be a few years off from critical dilution.
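
The year-by-year versus flat-average distinction in that inflation example is just compounding arithmetic. With hypothetical annual rates (invented for illustration, not real CPI data), the two answers disagree:

```python
def compound(amount, rates):
    """Apply a sequence of annual inflation rates in order."""
    for r in rates:
        amount *= 1 + r
    return amount

# Hypothetical annual inflation rates -- invented for illustration.
rates = [0.02, 0.10, -0.01, 0.05]
average = sum(rates) / len(rates)

year_by_year = compound(26_000, rates)                 # about 30324.29
flat_average = 26_000 * (1 + average) ** len(rates)    # about 30416.32, higher
```

The flat average systematically overstates the result when rates vary, which is why a model that actually looks up and applies the per-year figures beats one quoting a single historical rate.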

  • @websterl.william106
    @websterl.william106 1 year ago

    Thank you for posting.

  • @tristan7216
    @tristan7216 1 year ago +3

    The sidewinder is a poor example of a simple algorithm beating AI; it can be spoofed by flares. A neural net with a decent-resolution camera could probably do a better job of telling target from chaff, correcting its mistakes, and may also be able to employ intercept strategies beyond simple following that take pilot biases and physical limits into account.

    • @sashimanu
      @sashimanu 1 year ago

      It's a really old guidance system, implemented under the severe mass and complexity constraints of a smallish rocket and the bulk and weight of that era's electronics. Being able to pull off a single-pixel tracker was a neat feat.

  • @cabanford
    @cabanford 8 months ago +1

    A.I. used to create living tissue that can be melded with the silicon based A.I. that designed it...
    Always hard to imagine where (and how quickly) these kinds of advances will go. Life has a lot of "these horseless carriages will never catch on" aspects to it.
    Thanks for the great work ❤

    • @tomizatko3138
      @tomizatko3138 7 months ago

      Just create an AI that can create better living tissue (made of exotic organic materials) rather than anything melding with silicon, to skip a step, and you are there.

  • @rearview2360
    @rearview2360 1 year ago +1

    Amazing vid as always!!!

  • @StephenRoseDuo
    @StephenRoseDuo 1 year ago +2

    I'd love to know a lot more about how the Sidewinder works. I know it uses the concept of holding a constant bearing to intercept the object, but this now makes me confused about how that works.

    • @theavaliengineer
      @theavaliengineer 6 months ago

      To explain simply: you're missing the other, required principle. Holding a constant bearing _with decreasing range_ guarantees a collision. With these two givens, you, your target, and the interception point form a triangle (the "collision triangle") with the side lengths directly proportional to each one's velocity.
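
That collision-triangle geometry can be checked numerically: choose the heading whose intercept time t satisfies |target_position + target_velocity * t| = missile_speed * t. Flying that fixed heading, the bearing to the target stays constant while the range closes to zero. A sketch with purely illustrative numbers (function names are mine):

```python
import math

def intercept_heading(target_pos, target_vel, speed):
    """Solve |P + V*t| = speed*t for the impact time t, then aim at the
    impact point. The condition expands to a quadratic a*t^2 + b*t + c = 0."""
    px, py = target_pos
    vx, vy = target_vel
    a = vx * vx + vy * vy - speed * speed   # negative when we are faster
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    t = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # the positive root
    ix, iy = px + vx * t, py + vy * t                   # impact point
    return math.atan2(iy, ix), t
```

For a target 1000 m away crossing at 100 m/s and a 300 m/s interceptor, both parties reach the same point at the same time, which is exactly the constant-bearing, decreasing-range condition.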

  • @ProfessorPicke
    @ProfessorPicke 1 year ago

    really glad u put that last bit in for some bizarre reason, that is very cool

    • @WyeBrewingCompany
      @WyeBrewingCompany 1 year ago

      The point of the missile at the end was to demonstrate that a very effective technology invented in the '50s would not benefit from AI. The strength of the sensor design is its simplicity, and the point is it doesn't make sense to apply AI everywhere.

  • @cabanford
    @cabanford 8 months ago +1

    Great channel. Well presented points.

  • @JoeSimonsen
    @JoeSimonsen 1 year ago +4

    This is a great video. I really appreciate the cold water you throw on what AI can and will do in the future.
    One thing I find interesting is the directions-in-a-city example, where an existing algorithm can already find the mathematically perfect solution each time. I think this is true, but it could be a place where AI doesn't replace this ability but offers some level of enhancement. For example, a driver might prefer a longer route that has less traffic, to avoid the stress of packed city driving. Could an AI help here, or would this just be another algorithm we could already use today?
    I think I also disagree with the idea that since people can already drive their cars, there isn't a new ability gained by having an AI with the skills of a professional driver do it for them. This might be true in function, but it wouldn't be true in practice. I know plenty of people, myself included, who would enjoy a truly robust and reliable self-driving function on the car, so I could let it take over with real confidence on even highway driving. I think we are a ways away from this right now, but anecdotally I will say going from an old Subaru with cruise control to an EV that will self-drive at a level 2 ability was a bit mind-blowing. Sure, I can still drive my car, stay in the lane, and keep distance from the car in front of me. When the car does it, though, it's very nice at times. My old Subaru does none of this, and I have to always monitor the drive because the cruise control only fixes the speed and does nothing else.

    • @int2201
      @int2201 1 year ago +1

      He wasn't saying AI is pointless, he was saying that AI is limited. The video is disputing the idea that AI will be an all-powerful tool that will be able to solve everything. The city example he used was more of an example of something that could already be done, so AI does not offer anything new.

    • @JoeSimonsen
      @JoeSimonsen 1 year ago

      Yeah I agree with him :)

  • @MineAndCraftMaster
    @MineAndCraftMaster 1 year ago

    An interesting video indeed. I would be glad to see more

  • @blueberry1c2
    @blueberry1c2 1 year ago +6

    I think we're on similar wavelengths here. I've thought of contemporary AI as automated low-level thinking, raising the threshold for the complexity of menial tasks humans can already do. They're best used to quickly create a starting point for work, not to be the end product.

  • @tamasmihaly1
    @tamasmihaly1 1 year ago +5

    This man has a lot of strong opinions.

  • @MrDowntemp0
    @MrDowntemp0 1 year ago +2

    Very solid criticism. I appreciate the linked sources too!

  • @TerryClarkAccordioncrazy
    @TerryClarkAccordioncrazy 1 year ago +2

    If you're right then the stakes are small but if you're wrong then everything, our entire future is at risk.

  • @MrRolnicek
    @MrRolnicek 1 year ago +9

    You're missing most of the point, I think, even though you described it accurately.
    AI won't be able to do anything that a human can't do, at least not in the near future (and barring the specialized pattern recognizers), but the issue is: it CAN do almost everything that a human can do.
    And people aren't ready for what an extremely cheap near-human workforce can achieve. Think of all the amazing things ORDINARY people have done / come up with when they put their minds to the task. And then think how EVERYONE might have access to that power many times over.

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +4

      True. But by that logic, there are billions of people in developing nations who would accept lower wages than e.g. the US. Their existence alone will not magic up the things that people are suggesting AI will. AI will certainly have economic and societal benefits and challenges.

    • @MrRolnicek
      @MrRolnicek 1 year ago +4

      @@ImprobableMatter It is that exact logic, yes. Outsourcing has COMPLETELY changed the economy. Arguably it even destroyed US industry... all the industries they used to be number 1 in and now don't even participate in.
      And this is the same thing but ON STEROIDS. And you can't just easily tax it into oblivion, because you'll be able to have an in-house AI. But it goes further than that. You can dedicate 5 human-equivalents exclusively to one real human. For medicine specifically, the benefits of this should be obvious.

  • @asynts
    @asynts 1 year ago +6

    From a safety standpoint, a super intelligent system really isn't much of a concern. More problematic is a system that is barely intelligent enough to do a lot of damage. The next time that something like Log4Shell comes around we might be in trouble.

    • @fluxophile
      @fluxophile 1 year ago +2

      Barely intelligent systems with critical, and often highly exploitable, modes of failure. I think back to the single-pixel attacks mentioned here, which we learned about a few years ago.
      I've already seen reports of numerous orgs and companies laying people off and diminishing whole departments to rely in some major way on a ChatGPT-based plugin. Surely nothing can go wrong there!

    • @CH-bd6jg
      @CH-bd6jg 1 year ago +4

      "From a safety standpoint, a super intelligent system really isn't much of a concern."
      If you've solved value alignment, sure, a superintelligent system is actually highly desirable. We should all want a superintelligent system to run things, if it values the things we value.
      If you haven't solved value alignment in a way you can definitively prove, then what you've almost certainly done is create a superintelligent system that does not value what you value, and might see you - might see all humans - as an adversary, to be removed in the pursuit of its true goals.
      Thankfully, we're probably a long way off from being able to make something like that, but we are also a long way off from solving value alignment, so if we don't make progress on the latter soon, then humanity is still probably fucked at some point.
      I know what I'm saying sounds like fear mongering, but no, I'm advocating that we need to pull back on AI capability research and work harder on value alignment. If we can't solve value alignment for the limited tools we have, then we shouldn't try to build tools that could even potentially be near human capability, let alone beyond it.

  • @gashery
    @gashery 1 year ago +2

    I appreciate the arguments you put forth. However (and as previously touched upon in these comments by D. Murphy), one would need to dive deeper into these examples to see how the performance of AI relative to humans is of central importance.
    A follow-up video where you explore the differences and similarities of human operators and AI would be interesting.

  • @azimuth4850
    @azimuth4850 1 year ago

    Been waiting for someone to make this video

  • @morkovija
    @morkovija 1 year ago +1

    that joke on screen at 2:40 - chef's kiss

  • @Vatraxotsipoura
    @Vatraxotsipoura 1 year ago +10

    Loved this opinion piece mixed in with the more "robust" content. More of it please!
    As a personal opinion, convolutional neural networks (CNNs) do have a hard limit as AI, as they can only brilliantly/expertly regurgitate their training set. This can only facilitate (in terms of time or cost) current operations which are limited by man-hours and expert personnel dedicated to performing these tasks.
    However, I think it is critical to understand that a huge increase in efficiency (time/cost/man-hour -wise) results in a different world, where certain operations which required immense resources are now in the realm of feasibility for individuals or small groups. A lot of white collar jobs are perfectly suited for CNNs, as it's all about data analysis and information regurgitation.
    An additional paradigm shift which will arise from the CNNs and large language models (LLMs) is the filtering of information. As the vast majority of knowledge of mankind is accessible to every individual with an internet connection, the directed delivery of information becomes the bottleneck. This can dramatically increase productivity and revolutionize teaching. Herein lies what I consider one of the greatest risks of neural networks. It will not be hard to direct and control flow of information (personalized even to the individual's level) to more or less "control" a population's perception or thinking. Something more delicate, insidious and much more dangerous than classic propaganda.
    To conclude, I believe CNNs are not true AI and due to their architecture they can never be. At the same time they are an extremely powerful tool, with the power to shape society as we know it. So while I agree with the general thesis presented in the video, I am quite concerned by the power (the current cutting edge and the next gen) CNNs can bestow to the organizations which control them.
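    For readers unfamiliar with the term, the "convolution" in a CNN is nothing exotic: a small grid of learned weights slid across the input, producing a weighted sum at each position. A minimal pure-Python sketch (the 2x2 kernel here is a made-up vertical-edge detector, not a trained one):

    ```python
    def conv2d(image, kernel):
        """'Valid' 2D convolution (really cross-correlation, as in most
        deep-learning libraries): slide the kernel over the image and
        take a weighted sum at every position."""
        ih, iw = len(image), len(image[0])
        kh, kw = len(kernel), len(kernel[0])
        out = []
        for y in range(ih - kh + 1):
            row = []
            for x in range(iw - kw + 1):
                s = 0.0
                for dy in range(kh):
                    for dx in range(kw):
                        s += image[y + dy][x + dx] * kernel[dy][dx]
                row.append(s)
            out.append(row)
        return out

    # A made-up vertical-edge detector, applied to an image with one vertical edge:
    img = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 0, 1, 1]]
    edge_kernel = [[-1, 1],
                   [-1, 1]]
    print(conv2d(img, edge_kernel))  # → [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
    ```

    In a real CNN the kernel weights are learned from the training set, which is exactly why the comment above argues the network can only recombine what its training data contained.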

  • @irgendwieanders2121 · 1 year ago +2

    Meta discussion: Ramble* more.
    It was a good ramble.
    *I accept your diction here. I would use other words (comment, note, ...) because I think I know what rambling is (have you ever heard of LazerPig?) and this was not rambling as I know it.

    • @ImprobableMatter · 1 year ago

      I've enjoyed LazerPig on occasion. I think he was a little harsh on the T-34, though.

    • @irgendwieanders2121 · 1 year ago +1

      @@ImprobableMatter "...a little harsh..."
      I love British understatement

  • @candle_eatist · 10 months ago

    You got me at "Sidewinder missiles"

  • @nuprinfeelgood · 11 months ago

    you are a welcome voice of cool reason in a chaotic world, keep up the great work. also your dry sense of humor is marvelous.

  • @fonesrphunny7242 · 9 months ago

    Adding to the Sidewinder story:
    The missile could point itself at the target, but couldn't stabilize during flight. The solution was purely analogue: each fin at the back has a movable flap with a partially exposed toothed flywheel. When the missile "wiggles", the flywheels automatically articulate those flaps, without the need for any flight computer.
    Since this is purely mechanical and there is nothing like damping, the missile oscillates back and forth, similar to the sidewinder snake. That's how it got its name.
    However, the AIM-9 shown here is the AIM-9X variant; by now it's a completely different missile.

  • @jrkirby93 · 1 year ago +2

    Here's what AI and associated newly available automation can do: be cheaper than the most competitive human laborers for any widely employed task. The ways this can restructure society for the worse should not be underestimated.
    For certain: this will not replace all jobs. There will always be jobs for which no AI or automation system has yet been implemented.
    But this can displace the most widely employed occupations at scale, and quicker than society can be expected to react. There are billions of people who need to be employed, and currently there are billions of jobs which humans can do more cheaply than the best automation system that could replace them. In large part this is due to the intelligence required to do jobs: even so-called "simple" jobs such as lifting and moving things require a surprisingly complex system to perform on par.
    But the incentive to replace jobs with automation is proportional to the number of jobs being replaced. So any job that employs millions of workers is at risk, now that AI systems are quickly acquiring more and more of the capabilities that used to be exclusive to living things and humans.
    This has the potential to bring massive prosperity to all humans on earth, as the need for our labor decreases drastically without the benefits of that labor going away.
    However, our current societal organization does not allow for billions of people to live, eat, and fulfill their basic needs without doing regular employed labor. Under the current structure, the most likely outcome is mass unemployment, poverty, and social upheaval, while the goods society needs could continue to be created without their labor.
    That is the most immediate fear I have about AI. Anything about a rogue singularity superintelligence AI, or any other scifi possibility is something interesting to think about, but not an immediate high priority concern.

  • @thelazy0ne · 1 year ago

    That sidewinder solution is brilliant!

  • @geraldkenneth119 · 8 months ago +1

    One thing that interests me is AIs’ ability to come up with unusual solutions and policies that are very hard for humans to decipher. I remember reading a paper where a genetic algorithm was used to design a tone discriminator on an FPGA, and the resulting circuit was very unusual, to the point that it even had a collection of gates completely isolated from the main circuit, which it nonetheless couldn’t function without. It seemed to be exploiting various analog phenomena. If something as basic as a genetic algorithm can do that, can you imagine what a neural network with the right architecture and training could do?
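    The FPGA experiment described above (most likely Adrian Thompson's 1996 evolved tone discriminator) can't be reproduced in a few lines, but the genetic algorithm that drove it is genuinely simple. A toy sketch on the classic "OneMax" problem, with made-up parameters (population 30, size-2 tournaments, 2% mutation):

    ```python
    import random

    def evolve(bits=20, pop_size=30, generations=60, seed=1):
        """Minimal genetic algorithm on the "OneMax" toy problem:
        maximize the number of 1-bits in a genome, using tournament
        selection, single-point crossover and per-bit mutation."""
        rng = random.Random(seed)
        fitness = sum  # OneMax: fitness is just the count of 1s
        pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
        for _ in range(generations):
            def pick():
                a, b = rng.sample(pop, 2)          # tournament of size 2
                return a if fitness(a) >= fitness(b) else b
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = pick(), pick()
                cut = rng.randrange(1, bits)       # single-point crossover
                child = p1[:cut] + p2[cut:]
                child = [g ^ (rng.random() < 0.02) for g in child]  # mutation
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = evolve()
    print(sum(best), "of", len(best), "bits set")
    ```

    Thompson's version replaced the toy fitness function with "how well does this bitstring, loaded onto a real FPGA, discriminate two tones" -- which is why evolution could latch onto unintended analog effects of the physical chip.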

  • @user-me7hx8zf9y · 1 year ago +1

    good points. I've been a hobbyist in machine learning, specializing in NLG/NLP. Made several algorithms in my time. AI fear mongering will be around for a while... and no, most jobs won't mysteriously evaporate because a monumentally large linear algebra machine with nonlinear activations can replicate human reasoning (within known bounds)

  • @kicsikacsa2723 · 1 year ago +1

    >it's just speeding up what we can already do
    I wanna say "bruh". How is a complete overturn of labor costs not a big deal?!

    • @ImprobableMatter · 1 year ago +1

      Yes. What annoys me, though, is when people like the comment at 00:16 say that it will sidestep physics issues "because AI". Some people are predicting properties that are not physically or mathematically possible.

  • @oopsiesh · 1 year ago +1

    LOVE your videos

  • @veteransniper6955 · 26 days ago

    In the 1950s it was OK to spend a year developing clever ways to save two vacuum tubes, because they didn't fit into the rocket; that limited the rocket to one specific task in a simple environment, and also cost some performance compared to the mathematically optimal solution. With GHz CPUs and GPUs it is OK to (1) take the more direct approach, packing in multiple sensors and the processing to use them optimally, and (2) take the more generic approach, training complex, general-purpose algorithms for a specific task. For example, quality control often includes visual inspection, and machine learning or AI can do it with the same or better performance as humans for a fraction of the cost. AI can find and link together pieces of information on the same topic that would otherwise be processed independently and at lower quality. Basic coding and art have become more accessible, as an AI helper can understand the problem and propose a solution, or find examples in documentation and adapt them to the user's needs. To be helpful and transform the world, AI does not need to be superintelligent: changes in textile production sparked the Industrial Revolution, the dumb transistor led to the world we are in, and computers changed everything again.

  • @IanBLacy · 9 days ago

    I’ve got to somewhat disagree on the chaos theory point. It’s not impossible to fully understand a complex system, just often impossibly impractical. You could absolutely build and successfully fly a starship without crashing any beforehand, you just have to do a stupid amount of math and simulation and unit tests, which no company is willing to pay engineers to do.
    Sometimes the reason we iterate is because we have no way to do otherwise, but far more often it’s because it’s simply the fastest way to reach our goal. An AI that is capable of reasoning about more things faster than humans are would be able to reach solutions without iteration in a practical amount of time.

  • @kieranhosty · 1 year ago +6

    I liked the examples of perfected systems already existing, sometimes the things we already have are actually quite good.
    A lot of folks are entering the conversation on AI looking to their sci-fi geek friends for some perspective, who have been raised on literal decades of various degrees of apocalyptic speculation from writers, many of whom are not required to have a sense of scale beyond that of narrative to make a good book. Reasonable speculation is that we can have AIs that are specialised into things like optimising the layout of PCBs and other complex systems towards different goals, but it's easy to perform a logical sleight-of-hand when speculating about the capabilities of intelligence beyond our own, especially on yourself, and build from there.
    While I suspect that plugins and the growing tool-using capabilities of LLMs will improve their relevance and productivity in a lot of jobs, a job only ought to be replaced when it's consistently and outright proven that AI can do a better job than a human in a certain area, and hallucinations are a major part of every LLM at the moment.

    • @reddragonflyxx657 · 1 year ago +3

      The group that Roko's Basilisk came out of kinda formed a cult/religion out of the concept of a super intelligent AGI, complete with a "We want to make a safe/good AGI, which will prevent the bad AGI from bringing on the apocalypse. Join us, or pay us money, to save the world (assuming you believe our prophets)!"

    • @kicsikacsa2723 · 1 year ago +1

      Not talking about Skynet, lol, but no more customer service, illustration, misc writing and entry level coding jobs. Bit more relatable fears, don't you agree?
      >outright proven that AI can do a better job than a human in a certain area
      How much better and cheaper would it have to be to qualify, though? There are a bunch of tasks these tools can already do as well as or better than the median entry-level human. Yes, safety-critical things like driving will take a lot longer; if nothing else, it's easier to hold humans responsible for mistakes than machines.
      "But you'll still have humans in control" yes, those few, and the vast majority of us will lose even the small leverage we have now.

    • @kieranhosty · 1 year ago

      @@kicsikacsa2723 ...yeah basically, I don't expect to be having kids, regardless of whether the AIs end up being hostile or not.

    • @Dipj01 · 10 months ago

      The world will be extremely competitive, and every resource will be scarce for the common man. Everything will be consolidated by the richest of the companies, and more and more work will be automated everyday. The child will likely be jobless, and homeless (if the parent doesn't own a house, which is already impossibly difficult in a lot of places).
      Add to that extreme weather from global warming (we're already seeing it) that particularly hurt the homeless.
      It is going to be a horrible existence for any child.

  • @lepointique9706 · 1 month ago

    Thanks for such content. People these days do not appreciate slow-paced, detailed, informative content.

  • @alanjenkins1508 · 8 months ago

    The Saturn V rocket worked the first time. This is because enough time and effort was put into designing and building it. Trying to short-circuit a proper engineering process is what leads to progress through trial and error, as at SpaceX.

  • @nathanb011 · 1 year ago +1

    Great video and I agree with basically everything, but it is important to note that just because AI has its limits doesn't mean it isn't a huge threat to the current job market. The vast majority of jobs, especially office jobs, could be replaced by AI. If it speeds up the process by even a modest 50%, then 1 in 3 jobs in that occupation is no longer necessary. And with current technology that bar has already been cleared in many fields.
    AI is so powerful specifically because it is a solution to problems which we cannot define. We simply feed it examples and it can infer the problem. This creates new misalignment problems, but those can mostly be reduced with more data. AI is inherently inefficient, and any defined mathematical or logical solution is essentially always far more efficient, but with computers now fast enough to run these extremely intensive programs (that's why it's happening *now*), efficiency is no longer an issue for the most part.
    Will AI become a singularity and destroy the world? No. But it will absolutely destroy the current economic system unless we as a society can shift gears VERY quickly (yeah, right).
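    The "1 in 3 jobs" figure above is simple headcount arithmetic: if each remaining worker becomes (1 + s) times as productive, the same output needs only 1/(1 + s) of the old workforce. A one-liner to check:

    ```python
    def displaced_fraction(speedup):
        """If each worker becomes (1 + speedup) times as productive, the same
        output needs 1/(1 + speedup) of the old headcount, so the fraction of
        jobs no longer necessary is 1 - 1/(1 + speedup)."""
        return 1 - 1 / (1 + speedup)

    print(displaced_fraction(0.5))  # 50% speedup → 1/3 of jobs
    ```

    This of course assumes demand for the output stays fixed; if demand grows with falling cost, fewer jobs disappear.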

  • @CristianNazare · 1 year ago +1

    You should change your YouTube name to Real Real Engineering. I have seen a few of your videos over the past days and I have to say: IMPRESSED. And I have to add: VERY.

  • @g.f.martianshipyards9328 · 1 year ago

    The background music in the sidewinder bit made me feel really... Suavemente.

  • @mitchellcouchman1444 · 1 year ago

    The limitation with respect to the real world is huge. The biggest advantage of AI is its ability to learn from huge data sets, which normally need to be somewhat curated. The rocket example is good: such AIs are often trained in simulations built on our own knowledge, and are therefore limited by our own understanding of the world, at least in the near term.

  • @BGraves · 1 year ago +3

    But can't AI generate newer, better algorithms that we can then implement, without having to iterate? I think before we hold such conclusive opinions on the power of what "artificial intelligence" can generate, we must first fully understand the technology and software behind how it works, which is backed by multibillion-dollar investment, and we don't have the keys to their intellectual property.

    • @jarivuorinen3878 · 11 months ago

      Do you mean a multi-billion-dollar investment like the MOD, Google, Microsoft or Tesla? With neural networks, it's unlikely that the massive ones can even be comprehensively tested by the companies themselves with today's tools. Even the open-source LLMs are pretty much black boxes when they work, despite being much smaller in scale than the commercial ones.

  • @douradesh · 11 months ago

    The biggest problem is the optimization problem. However, ChatGPT is helping me a lot to speed up my coding.

  • @PhilfreezeCH · 10 months ago

    0:29 You really don't need floating-point operations to do this; a few fixed-point (integer) operations are just fine.
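    As an illustration of the fixed-point idea (a sketch with an assumed Q8.8 format, i.e. 8 fractional bits; the video's actual computation may differ): a Celsius-to-Fahrenheit conversion done entirely with integers lands within a few hundredths of a degree of the floating-point result.

    ```python
    # Q8.8 fixed point: every value is stored as an integer equal to
    # 256 times its real magnitude.
    FRAC_BITS = 8
    ONE = 1 << FRAC_BITS

    def to_fixed(x):
        # On a real microcontroller, constants like 9/5 are precomputed offline.
        return round(x * ONE)

    def fixed_mul(a, b):
        return (a * b) >> FRAC_BITS  # rescale after multiplying

    def c_to_f_fixed(c_fixed):
        """F = C * 9/5 + 32, computed with integer operations only."""
        return fixed_mul(c_fixed, to_fixed(9 / 5)) + to_fixed(32)

    f_fixed = c_to_f_fixed(to_fixed(25.0))
    print(f_fixed / ONE)  # within a few hundredths of the exact 77.0
    ```

    The trade-off is range versus resolution: 8 fractional bits give steps of 1/256, which is plenty for a temperature readout.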

  • @Ikbeneengeit · 1 year ago

    A smartphone doesn't do anything that a Kodak camera, a telegraph and a television couldn't already do. Yet its impact is huge.

  • @godlugner5327 · 1 year ago +1

    I'm here for the Sidewinder AI that took over the job of a pigeon

  • @tristan7216 · 1 year ago +7

    OMG thank you! The moral panic about AI is out of control! Yudkowsky actually cried on Lex Fridman's show! Intelligent people vastly overvalue raw intelligence; it's a bias trap for them.

  • @markthart2661 · 1 year ago +1

    I get your point and largely agree, but also believe you slightly missed the mark. You do say that AI could speed up progress, but say that that is not "any emergent capability". I fully disagree here.
    Much like Newton, who built incrementally on work done by others, all human progress is incremental. There is no vastly new emergent capability, in humans or in AI, but take enough little steps toward a goal and it would certainly seem that way. The fact that an AI can be trained on a broad range of topics (even just by predicting the next word about a topic) and can discover patterns across widely different fields could greatly speed up human progress. This especially applies to topics that we wouldn't normally think to connect to one another; when we did that in the past, we called it a giant breakthrough.

  • @Razoredge581 · 1 year ago

    I like the Alpha Centauri reference in the thumbnail

  • @LossyInput · 1 year ago

    The missile knows where it is.

  • @nikolatasev4948 · 1 year ago

    More complicated software can mess up royally (the last three failed robotic Moon landings were due to software errors). But it can also give you much better capabilities, like identifying flares. Flares follow a predictable trajectory; airplanes do not. A simple solution that guides you towards the heat source was good enough for the 40s, but can be vastly improved.
    Another improvement would be to avoid excessive cranking, or plowing into the ground when the enemy aircraft dives. A missile which understands its energy state and altitude could be far harder to avoid.
    I'm not saying we should be putting neural networks or language models on missiles; I'm just saying that sometimes more complicated software, when properly written, can lead to vast improvements in performance.
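    The "flares follow a predictable trajectory" point can be sketched in a few lines. This is a toy illustration, not a real seeker algorithm (the threshold and tracks are invented): estimate a tracked heat source's acceleration by finite differences, and flag sustained high acceleration, since a ballistic, decelerating flare accelerates far more than a cruising aircraft.

    ```python
    def mean_acceleration(track, dt):
        """Average acceleration magnitude of a track of (x, y) positions
        sampled every dt seconds, estimated by second differences."""
        n = len(track) - 2
        ax = sum((track[i+2][0] - 2*track[i+1][0] + track[i][0]) / dt**2
                 for i in range(n)) / n
        ay = sum((track[i+2][1] - 2*track[i+1][1] + track[i][1]) / dt**2
                 for i in range(n)) / n
        return (ax**2 + ay**2) ** 0.5

    def looks_like_flare(track, dt, threshold=5.0):
        # Hypothetical rule of thumb: a flare falls and decelerates hard,
        # so its sustained acceleration is far above a cruising aircraft's.
        return mean_acceleration(track, dt) > threshold

    dt = 0.1
    aircraft = [(200 * i * dt, 1000.0) for i in range(20)]       # constant velocity
    flare = [(100 * i * dt - 15 * (i * dt)**2,                   # decelerating...
              1000 - 4.9 * (i * dt)**2) for i in range(20)]      # ...and falling
    print(looks_like_flare(aircraft, dt), looks_like_flare(flare, dt))  # False True
    ```

    A real counter-countermeasure system would of course work with noisy infrared seeker data rather than clean position tracks, and a maneuvering aircraft complicates the picture considerably.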

  • @tk423b · 1 year ago

    The smartest robot to date is in Star Wars, when they stuffed Kenny Baker into R2-D2.

  • @revoriccyborg9932 · 1 year ago

    Loved hearing opinions that resemble my own :)

  • @DavidIsenschmid · 1 year ago

    I thank you for your videos. They are very well researched and your voice is pleasant to listen to. I do not mind "opinion based" at all, especially if the reasons for that opinion are that well explained.

  • @KevinBalch-dt8ot · 1 year ago

    I love that curve-fitting diagram! Reminds me of von Neumann (I think) saying “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk!”
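    The elephant quip is really about exact interpolation: n free parameters can thread a curve through n arbitrary points. A pure-Python Lagrange fit makes the point (the five points here are arbitrary, made up for illustration):

    ```python
    def lagrange_fit(points):
        """Return the unique degree-(n-1) polynomial through n points:
        with n free parameters you can hit n arbitrary values exactly."""
        def poly(x):
            total = 0.0
            for i, (xi, yi) in enumerate(points):
                term = yi
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        term *= (x - xj) / (xi - xj)
                total += term
            return total
        return poly

    # Five arbitrary "noisy" points, fit exactly by a quartic.
    pts = [(0, 3.0), (1, -1.0), (2, 4.0), (3, 0.0), (4, 2.5)]
    f = lagrange_fit(pts)
    print([round(f(x), 6) for x, _ in pts])  # → [3.0, -1.0, 4.0, 0.0, 2.5]
    ```

    The fit is perfect on the given points and typically wild everywhere else, which is exactly the overfitting the xkcd panel is mocking.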

  • @nathanlewis42 · 1 year ago +7

    An AI would likely be a lot cheaper than three teams of humans. Humans also take weekends and holidays off and have annual leave.

    • @ethics1010 · 1 year ago +1

      Exactly, and we can't clone the team of humans to do the same work in parallel.

    • @pukpukkrolik · 1 year ago +1

      On the other hand, human brains are super energy efficient, and for many tasks, AI - to the extent it continues to be developed - may remain overkill at least for a while.

    • @nathanlewis42 · 1 year ago

      @@pukpukkrolik The human brain is quite energy efficient, but it usually comes with a body that requires clothing, food, water and bathroom breaks. It also requires energy to transport food and water to it, and energy to transport the human body to and from home. The bottom line is that humans are far more costly than AIs. Also, human brains get bored (maybe that is part of energy conservation) and distracted.

    • @victorcotu · 1 year ago +1

      You are not counting how much harm 3 teams of humans, with no job and bored to hell can do, which should be added to the cost of using AI.

    • @nathanlewis42 · 1 year ago

      @@victorcotu AIs can easily generate images and videos to keep the humans entertained. Don't forget that the YouTube recommendation algorithm uses ML.

  • @WobblycogsUk · 1 year ago +1

    Current AI is being hyped well past its capabilities and I don't think it's any sort of threat to us at the moment; it will allow us to automate some jobs that currently can't be automated, but that's about it. More disruptive might be the ability to generate fake media; it'll be interesting to see how the world deals with the fact that you can't believe photos and videos anymore. Turning to the future, though, I think it's much harder to predict what will happen. Imagine we create an AI in a reasonably sized package that is as clever as any human who has ever lived. Presumably it would know how it was built, and it's not constrained by time in the way humans are. That AI would be a formidable opponent if it decided it no longer wanted us around.

  • @piyush_chaturvedi · 1 year ago +7

    The thing that present AI has already replaced is the need to go to a university and spend quite a lot of time there only to learn, mostly, that things have already been solved. It quite literally gives you the context and the concept on the spot. I think it makes the present education system obsolete, for what was left of it after the internet.

    • @narfwhals7843 · 1 year ago +4

      Current language models are _awful_ at teaching or fact checking. They have no concept of what is a right or wrong answer. They will readily make up facts on the spot because they fit the conversation. This is not a feature you want when replacing the education system.
      And that is absolutely a feature of how they work. Word vectoring does not get you correct answers. It gets you answers that fit the question.
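      A toy bigram model (vastly simpler than an LLM, but with the same failure mode) shows the point: it emits fluent continuations with no notion of truth. With the tiny made-up corpus below, it happily completes "australia is..." with "paris", because that transition fits the statistics:

      ```python
      from collections import defaultdict

      corpus = ("the capital of france is paris . "
                "the capital of australia is canberra . "
                "sydney is the largest city in australia .").split()

      # Count word-to-word transitions, then always emit the most common
      # successor (ties resolve to the first-seen word).
      follows = defaultdict(lambda: defaultdict(int))
      for a, b in zip(corpus, corpus[1:]):
          follows[a][b] += 1

      def continue_text(word, n):
          out = [word]
          for _ in range(n):
              successors = follows[out[-1]]
              out.append(max(successors, key=successors.get))
          return " ".join(out)

      print(continue_text("australia", 2))  # → "australia is paris"
      ```

      Every individual transition ("australia is", "is paris") is well attested in the training text; the falsehood only appears at the level of meaning, which the model never represents.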

    • @piyush_chaturvedi · 1 year ago

      @@narfwhals7843 Of course one needs to cross-verify the references and information that these models output, and yes, they do make up facts. But won't the due diligence take significantly less time compared to learning everything from scratch?

    • @narfwhals7843 · 1 year ago +3

      @@piyush_chaturvedi Given that critical thinking is traditionally not people's strong suit, this will cause more problems than it solves.
      Though that is something our education system should put more focus on anyway.

  • @maxfmfdm · 1 year ago +1

    A super-intelligent thinking AI would be very hard to debug. Regular algorithms already require a high ratio of debugging to coding; make the debugging an order of magnitude more complex, as it would be with a thinking AI, and you have quite a task. We humans have a number of innate reactions to stimulus that develop within us entire systems that lead to our long-term mental stability. An AI would need something similar: it would essentially need to debug itself in this way. What we have now are systems that correlate massive amounts of data, but you cannot train that capability into this kind of system.

  • @monsterthenergydrink · 1 year ago +2

    I think your videos are compelling, thanks for the content

  • @adfaklsdjf · 1 year ago +2

    I can't tell if this video is intended to address all the concerns about AI including the ones expressed by "experts", or only intended to address misconceptions "lay people" have after reading sensational news articles / posts online.
    Nothing stated in the video struck me as false, but if it's intended to address the things that I think may be coming in the next decade or two, or the concerns I have about the future, I feel like it's missing the point. I don't think anyone familiar with what's going on is expecting current models to cure cancer, or be able to do new things people couldn't do.
    When it comes to jobs, a machine doesn't have to be better than a person or capable of things the person isn't capable of, it simply needs to be able to do the job for lower cost.. it can even do a worse job as long as it can do it significantly cheaper. There are lots of examples of jobs that have been automated to systems that are worse than the humans they replaced.. but they do the job so much more cheaply that it doesn't matter. We can see this in products as well -- think of the cheap products we buy which are lower quality than a much more expensive version we might have bought 50 years ago which simply isn't available anymore because it's been wiped out by the cheap version. It is possible that as many or more jobs will be created than are obsoleted, but that certainly isn't guaranteed and there are reasons to be concerned that that might not happen this time.
    In terms of capabilities, people are excited(/worried) not about what models can do today, but by projecting what appears to be a still-accelerating pace of advancement into the future. If we look at the advancements of machine learning in the last 10 years and project that rate of advancement (or more) into the next 10 years, that's what people are worked up about. People are particularly excited now because of the way large language models have already exhibited emergent properties.. being able to do things that their makers didn't anticipate they would be able to; exhibiting theory of mind is an example of this.
    It can be somewhat important to distinguish between AGI and super-intelligence; not all AGI is super-intelligent AGI. AGI is usually defined as human-like intellect especially in terms of its generality, meaning its ability to work across many domains, extrapolate, come up with novel ideas and so on. Super-intelligence is the idea of intelligence far beyond ours to a similar extent as our intelligence could be considered compared to that of a mouse or an insect. (If you reject the notion that our intelligence is far beyond that of a mouse's or an insect's then you can just stop reading here.) I think both scenarios boil down to a couple simple questions:
    1. Is it possible for such a thing [AGI / super-intelligence] to exist at all?
    2. Is it possible for us to build it with the intelligence, skills, and technology we can develop?
    I think both of these are open questions we can each answer for ourselves, but if you think the answer to both questions is yes, then I posit that in that case we _will_ create it and it is simply a question of when, and what happens next. I don't claim certainty but I think the answer to both is "yes", and many people including myself think we are likely to build AGI within the next 10-20 years. For super-intelligence, the idea is that once there is AGI, there will come a point where an AGI is able to start engineering itself to make itself more intelligent. People think if that happens then it will increase its own intelligence exponentially and become super intelligent very quickly after that. Timelines vary, but most people don't think that AGI will do that immediately. The concern here comes from looking at how we treat mice and insects.. it isn't our goal to wipe them out, but we don't treat their feelings or interests with very much respect, particularly when their interests are at odds with our own.
    Thanks to anyone who read this wall of text.

    • @ImprobableMatter · 1 year ago +1

      I did read the whole comment, thanks. I appreciate it sounds like a strawman argument, but I get a lot of comments (and I'm sure this sentiment is widespread) as at 00:16. "Fusion in 5 years because AI". This ignores the three points in the video: (1) NNs are less efficient than many existing methods (2) there are physical limits (on plasma physics, on reaction times etc) (3) humans are already intelligent enough to have thought of many successful and unsuccessful ideas in the field. Obviously, these misconceptions apply far beyond fusion.
      I am aware of the arguments about the singularity etc, but as I expanded on point (3), I believe that humans have crossed a threshold above mice and beetles. We are intelligent enough to intellectually come up with optimal solutions, from Pythagoras to the Standard Model of particle physics. I think - and I was clear to state this is an opinion - that you should add one further question to ponder, which is "how much 'smarter' than humans can an intelligence possibly be?".

    • @adfaklsdjf · 1 year ago

      ​@@ImprobableMatter Thanks for the reply. Okay I see the comment in the video at 0:16.. for what it's worth I often listen to your videos without watching so I hadn't seen that bit of context. Those comments* you showed in the video are definitely what I'm thinking of as "notions 'lay people' have after reading stuff online".
      I feel the "how much smarter can intelligence get" question falls under the first one -- is it possible for the thing to exist at all. There are nuances for how the first question would be phrased for AGI vs super-intelligence, and I knew my comment was already going to be excessively long so I was trying to cover both AGI and super-intelligence with a single phrasing, but here is how I would phrase question #1 separately for AGI vs Super-intelligence:
      AGI - is it possible to implement intelligence similar to ours (esp w/r/t generality) by non-biological means (e.g. electricity and silicon)?
      Super-intelligence - is it possible for intelligence at a level far beyond what we can imagine (mouse -> human -> ???) to exist? Your phrasing also works: does a "scale" of intelligence extend that far up or does it reach some kind of ceiling before that?
      That's "is it possible".. if yes, then there's "are we capable of building it".. and if yes, then there's "will we actually do so?". To me, the first and second are open questions but I come down on "yes" for both, and recent advances in LLMs have significantly eroded what doubts I had about the possibility of AGI specifically (super-intelligence is separate). My view is that if the first two are both "yes" then I'd argue the third is absolutely yes.
      I hear you coming down on "probably not" for #1 on super-intelligence :)
      P.S. I was already playing through a bunch of your videos, but ✅subscribed, ✅rang bell 🔔. Thanks for your work. 👍

  • @bipolarminddroppings · 1 year ago

    3:00 I take issue with this argument. The Shuttle flew successfully 133 times out of 135.
    The Saturn V flew successfully 12 times out of 13, with only one partial failure.
    The idea that SpaceX couldn't have made a rocket that got to space the first time is just flat-out wrong.

  • @anon_y_mousse · 10 months ago

    I think the thing that people on both sides of the argument forget is that, regardless of the capabilities of the technology being used, who's using it and for what will be far more important. Consider someone hooking into Google's image-recognition software, which is already really good at identifying people, attaching machine guns and rocket launchers to drones, sending them into a war zone (or worse, an urban area) and having them target all humans. Set them on some patrol mode and we've got our HKs from Terminator. All of the technology is there; it's just a matter of someone with the know-how deciding to put it together, or being paid enough to forget what morals and ethics are. Truthfully, it's just a matter of time before someone smart enough, evil enough and with enough funding puts it all together, and humanity ceases to exist with no need of a virus or nuclear weapon.

  • @shanent5793 · 1 year ago

    If what Brian Kernighan said about debugging being twice as hard as programming is true, then that should put a limit on how far AI can get on its own. Whenever a neural net makes some massive blunder, there is a human around to tweak the rules or training data, and that includes the adversarial systems which still let those bugs through. It's hard enough to build a complicated working system out of well-defined and tested modules, so what happens when those building blocks are opaque AIs? Unconstrained simulation of just three bodies eventually becomes chaotic, so what are the chances a complicated AI system figures out how to keep itself from devolving? If it can't derive those constraints on its own, then those rules might as well be a religious text.
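    The chaos point is visible even in a one-dimensional system; a sketch using the logistic map (a standard stand-in for chaotic dynamics like the three-body problem, since it shows the same sensitive dependence on initial conditions):

    ```python
    def logistic_trajectory(x0, r=3.9, steps=40):
        """Iterate the chaotic logistic map x -> r*x*(1-x)."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    # Two starting points differing by one part in ten million...
    a = logistic_trajectory(0.2)
    b = logistic_trajectory(0.2000001)
    # ...stay close at first, then end up nowhere near each other.
    print(abs(a[1] - b[1]), max(abs(x - y) for x, y in zip(a, b)))
    ```

    The initial gap grows roughly exponentially, which is why tiny modeling errors in a chaotic system eventually swamp any prediction, human- or AI-made.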

  • @ajr993 · 1 year ago +1

    The main thing you're not factoring in here is that many of the cognitive tasks that previously could only be done by humans can now be done by AI. So yes, humans could do the stuff AI is doing now--however, instead of doing expensive things like improving education or increasing the population size, now you can spin up a few servers to add additional cognitive surplus. Furthermore, in certain domains AI is proving to be superior to humans, and on top of that AI offers enhancements to how humans already do things. For example, ChatGPT and GitHub Copilot offer programmers a way to accelerate development speed while improving quality. If software engineers can be 20-50% more productive as a result, then AI will pay dividends through that mechanism. There may be unexpected feedback loops as well--for example, if LLMs improve the productivity and speed of AI engineers and help to write better AI models, then those new models could help AI developers write even better models, and so on and so forth.
    Much of your video misses the point about the significance of AI and what it offers. No one is concerned or worried about AI being used to replace temperature calculators--people are concerned that AI may replace large swaths of white-collar workers who may now be easily automated away. Even a 10-20% change in the labor force would cause significant societal upheavals, and AI is improving at a geometric rate right now. AI could have substantial worldwide implications. India, for example, relies on customer support and call centers as one of its important sectors, which may become increasingly irrelevant as the decades go on. Could many third-world countries be made irrelevant by AI and automation? It's quite possible, at least partially.
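The feedback loop described above is easy to caricature in a few lines: a baseline productivity boost, a fraction of which is reinvested into improving the tools themselves. Every number here is invented purely for illustration, not a forecast.

```python
# Toy model of the "tools improve the tool-makers" loop: each generation,
# a share of the productivity gain is reinvested into better models.

def compounding_productivity(base_boost=0.2, reinvest=0.25, rounds=5):
    """Return the productivity boost per generation as a list of fractions."""
    boost = base_boost
    history = [boost]
    for _ in range(rounds - 1):
        boost = boost * (1 + reinvest)   # reinvested gains improve the tool
        history.append(boost)
    return history

for i, b in enumerate(compounding_productivity(), start=1):
    print(f"generation {i}: +{b:.0%} productivity")
```

Whether the reinvestment fraction is anywhere near constant in practice is exactly the open question; the sketch only shows why even a modest fraction compounds.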

  • @CraftyF0X
    @CraftyF0X 1 year ago +5

    To be fair, anyone who thinks about AI seriously would not imagine the AI solving the stuff you mentioned (FTL, squaring the circle, whatever); moreover, there are also uncomputable problems and maybe irreducible complexity. That being said, the concern here is that the AI does not need to be godlike to be much better than we are, and saying it won't be able to do things we could do already is a bit short-sighted. As you mentioned, intellects are products of their environment; that is a true constraint not only on the AI but on any of us humans too. We can only work with and build on what we already know, but an AI can access all the factual knowledge we have put together so far, and we could hardly say that about any human. So as far as taking information in and "processing it" to produce new solutions based on already accessible information, the AI will always lead ahead of us, because it has much faster and more direct access to that information and can make connections between concepts, while we are forever limited by the boundaries of our personal knowledge, expertise or even momentary attention. I'm not sure we can predict with great certainty what is possible through this simple reprocessing and connection of already-known stuff.
    For example, you mentioned physical problems and how we need tests, and that is true, but the evaluation of tests may be sped up a lot with AI, and if enough tests are done, maybe a model with task-appropriate approximations can be built. This would mean even these kinds of tests move into the realm of computation, whereas the AI, with the assistance of other simpler algorithms, may arrive at solutions a lot faster than a human group could.
    While this is mostly speculative, the effects on employment and the whole economy are almost certain. (Many white-collar jobs could already be automated right away, and only our socio-economic and political inertia slows that down.)
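The "task-appropriate approximations" idea is essentially surrogate modelling: run a few expensive physical tests, fit a cheap model to the results, and interpolate within the tested regime instead of re-running the hardware. A minimal sketch with a made-up drag-like test function standing in for the expensive experiment:

```python
def solve3(A, b):
    """Gaussian elimination for a 3x3 system (no pivoting; toy data is benign)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[i][3] for i in range(3)]

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via the normal equations."""
    n = len(xs)
    s = lambda k: sum(x ** k for x in xs)
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    b = [sum(ys), sum(x * y for x, y in zip(xs, ys)),
         sum(x * x * y for x, y in zip(xs, ys))]
    return solve3(A, b)

# Hypothetical "expensive test": drag-like response, 0.5 * rho * Cd * A * v^2
expensive_test = lambda v: 0.5 * 1.2 * 0.3 * v ** 2
samples = [5.0, 10.0, 15.0, 20.0, 25.0]          # the few tests we can afford
coeffs = fit_quadratic(samples, [expensive_test(v) for v in samples])

# The fitted surrogate is now essentially free to evaluate anywhere in range:
surrogate = lambda v: coeffs[0] + coeffs[1] * v + coeffs[2] * v * v
print(surrogate(12.0), expensive_test(12.0))
```

Real surrogate models are far richer (Gaussian processes, neural nets), but the workflow — few expensive samples, cheap interpolation, trust only inside the sampled regime — is the same.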

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +3

      Fair enough. I've read some bad takes, like cracking RSA encryption by thinking about it really hard; which, to be fair, you could do, but even with an ASIC it would still take centuries. Hopefully you get my point.
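The "centuries even with an ASIC" remark survives an extremely generous back-of-envelope. Assuming a hypothetical farm of a million chips, each testing 10^12 candidates per second (well beyond any real hardware), exhausting even a 128-bit keyspace by brute force takes on the order of 10^13 years:

```python
# Back-of-envelope only: how long to enumerate a keyspace of 2**bits
# candidates at a wildly optimistic assumed rate.

SECONDS_PER_YEAR = 3.156e7

def brute_force_years(bits, rate_per_chip=1e12, chips=1e6):
    """Years to exhaust a keyspace of 2**bits at the assumed aggregate rate."""
    candidates = 2 ** bits
    return candidates / (rate_per_chip * chips * SECONDS_PER_YEAR)

for bits in (64, 128, 256):
    print(f"{bits}-bit keyspace: ~{brute_force_years(bits):.2e} years")
```

(Real attacks on RSA use factoring algorithms, not key enumeration, but the exponential wall is the same basic point.)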

    • @CraftyF0X
      @CraftyF0X 1 year ago

      @@ImprobableMatter Indeed. Didn't expect you to read this long a$s comment and react; this just earned even more respect! I would go as far as to subscribe to the channel, but you know... I already did long ago :)

  • @johnduncan5117
    @johnduncan5117 1 year ago

    I was quite excited about ChatGPT because it was genuinely surprising how well it performed certain tasks - above all how well it interpreted human input.
    I think it's a leap forward in terms of man/machine interfacing... but the hysteria from some quarters seems a little inane. One example: we were immediately told traditional internet browsing would quasi-immediately go the way of the dodo. I don't think it will. I don't need to engage with an intermediary intelligence every time I search for cat pics or a new graphics card benchmark.
    That's not to say we won't see big changes in the way stuff is done going forwards. But just think: there have been automated cash tills for years now, but how many big supermarkets have replaced all their human cashiers with automated ones? Not any that I know of.
    And as the excellent video points out, intelligence is one thing; applying it in terms of physical change in society is a much slower process.

  • @guard13007
    @guard13007 4 months ago

    (I love that you explained why it's called a sidewinder without saying that's what you explained. xD)

  • @Williamtolduso
    @Williamtolduso 1 year ago +1

    Thank you for arguing against all the BS out there.

  • @booketoiles1600
    @booketoiles1600 1 year ago

    So what you're saying is that the missile doesn't know where it is

  • @marcinmarcin2506
    @marcinmarcin2506 11 months ago

    Interesting episode! Although I don't agree. I think AI will allow for thinking outside the box. For example, there's no way to go through space faster than light, but it could potentially create a completely different way of transportation (beyond our understanding for now), like space warping or sharing information through quantum entanglement. Also, AI could do physical things by itself, like building everything; consider dynamically growing robotics tech. To sum up, it's important to remember that the world will change in every sector, not only the one we are discussing while the rest stays exactly the same.

  • @boldCactuslad
    @boldCactuslad 1 year ago +1

    It's not hard to see how a single monolithic AI could result from the current process. The first team to produce a self-improving AI will naturally be the team least obsessed with safety. Singleton formation is natural, it is precisely why there is so much trouble with preventing monopolies from forming. We busted the trusts and they came back with a vengeance. Companies are predictable yet we cannot control them, much less their combined impact on say the climate. We cannot stop countries from waging wars. What hope have we against an opponent a thousand times our superior, who travels on airwaves at c and keeps itself alive on a billion routers and servers and Raspberry Pis? None, none whatsoever.

    • @ImprobableMatter
      @ImprobableMatter  1 year ago +1

      So, you think an AGI will fit into a Raspberry Pi? No? So it would need to be sharded over multiple Raspberry Pis? Look up bandwidth and latency and you will see why this is a terrible idea. By the time it recognizes a single image, the owners of those Pis would realize they've been hacked and do something about it.
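The bandwidth-and-latency objection can be put into rough numbers. Suppose a large model were pipelined across 80 consumer devices, each token step shipping one fp16 hidden-state vector between hosts over residential internet. All figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: inference latency for a model sharded over the internet
# versus the same traffic over a local high-speed interconnect.

HIDDEN_BYTES = 2 * 8192          # one fp16 hidden vector of width 8192
INTERNET_LATENCY_S = 0.05        # ~50 ms round trip between random hosts
INTERNET_BW = 10e6 / 8           # ~10 Mbit/s uplink, in bytes per second
SHARDS = 80                      # one shard per layer, traversed in sequence

per_hop = INTERNET_LATENCY_S + HIDDEN_BYTES / INTERNET_BW
per_token = SHARDS * per_hop
print(f"per token over the internet: ~{per_token:.2f} s")

# Compare with local hardware (~100 GB/s bandwidth, microsecond latency):
local_per_token = SHARDS * (1e-6 + HIDDEN_BYTES / 100e9)
print(f"per token on local hardware: ~{local_per_token * 1e6:.1f} us")
```

Even with these charitable assumptions the internet-sharded path is seconds per token, thousands of times slower than local hardware, before counting node churn or the detection problem the reply raises.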

  • @explosiverift2037
    @explosiverift2037 1 year ago +1

    The goal of AI is to perform the work of humans faster and at less cost.

  • @ashsilverwizard3275
    @ashsilverwizard3275 1 year ago

    The current state of AI disruption is about economics, not extended capabilities.
    A robotaxi will be cheaper than a human-driven one, for example.
    As for the fears of superintelligence, current AI is only human insofar as we feed it human data; its version of thinking is different from our own.
    Because of this we cannot accurately infer the possible scope of its capabilities.
    It is not the known unknowns that scare us; it is the unknown unknowns.
    I do agree that that kind of singularity is less likely, though.

  • @josephpiskac2781
    @josephpiskac2781 1 year ago

    Pretty Cool

  • @Rationalific
    @Rationalific 1 year ago +3

    The Sidewinder info was really interesting. However, I will disagree somewhat with your opinion in this video. While the changes to society due to AI may be overblown, I think that AI has really seen huge improvements in the past 2 years, and it's way too early to say what it will eventually be able to do. This is like discussing the abilities of airplanes in 1912. I think there's a long way to go, and I do believe that AI will become more creative and will be able to solve problems that no humans have thus far been able to.
    Also, I don't think that Newton (like Einstein) is the best example to argue that there are no geniuses who propel science forward. The sheer number of discoveries and scope of understanding that Isaac Newton brought to science could not have been matched by the vast majority (99.99% at least) of humans. Sure, it could all have eventually been discovered, but I'd say not for dozens or even hundreds of years more in some cases. You showed some people, probably in Baghdad or some similar place, maybe during the 1000s. To be honest, they may not to this day have discovered everything that Newton did. I mean, hundreds of years did in fact elapse between Baghdad's golden age and Newton, and if anything (partly due to the Mongol invasions in the 1200s, and partly due to religion becoming the only important thing in people's lives and science being discarded), the people living there in the 1600s were probably less informed than those living there half a millennium prior.
    Of course, we sit on the shoulders of giants, and prior scientists did as well. There's no way around that. Just like nobody would have invented a 5 nm transistor chip before someone invented a 50 nm transistor chip. Even so, there are major breakthroughs (the wheel, Newton, Einstein, etc.) that can transform societies.
    I do believe that at least some AI will be able to gain the ability to, if not create its own breakthrough, at least find novel ways to iterate on what humans are capable of and expand knowledge further into the frontiers.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 1 year ago +1

      "The sheer number of discoveries and scope of understanding that Isaac Newton brought to science could not have been done by the vast majority (99.99% at least) of humans." Hey, we need discoveries done collectively, because of social factors. :D Otherwise, we may notice that you need people with outlier-high IQ, while IQ has a huge genetic factor... and in some populations such flukes happened suspiciously often, as if the distribution of relevant alleles wasn't perfectly uniform...

    • @Rationalific
      @Rationalific 1 year ago

      @@useodyseeorbitchute9450 (Changing things, because UA-cam immediately deleted my comment.) True. We'd like to believe that everyone is equal, and that anyone put in the same place would make the same discoveries. I can tell you right now that while I am definitely intelligent, there is no way I would make Newton's or Einstein's discoveries even in the 21st century with all of our info available, if their discoveries were selectively hidden from my 21st-century life but I had access to discoveries from right before as well as after they made theirs. It takes a special person. I went on (not even about anything "bad"), but I'll leave it here so the censors don't freak out. We know how they are.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 1 year ago +1

      @@Rationalific Well, when you think about it, a rudimentary check of the statistics of any university shows that a few high-fliers produce most of its output, while the rest of the faculty barely get any citations, impact factor or whichever metric is fancy in that particular country. Nevertheless, it's all social factors, and not some terribly unevenly distributed talent.
      Oddly enough, I've seen a few channels which managed to speak such heresy, and as long as they were careful with words they were overlooked.

  • @stephan553
    @stephan553 1 year ago

    The definitive video about AI [hype]. Not much to add to something so complete yet concise. 🙃

  • @pie75
    @pie75 1 year ago +1

    While I don't think that AI, especially as it currently exists, is a hyper-intelligent marvel that will replace or supersede humans (the biggest factor preventing it is simply that nobody would sell such a powerful technology cheaply enough to be more cost-effective than humans; it would be bad market strategy),
    the problem I find with assailing AI marketing and hype from this stance is that it's not really relevant whether the AI is as good as the marketing says, at least from my perspective.
    Which is that current ML models, being black boxes, are inherently dangerous; sometimes (but not always) for the same reason curve-fitting problems exist.
    I believe that a black box cannot be made safe; it can only be made inert. And the major problem which fear-mongering (as destructive as it can be) draws attention to is that little effort is being made to change the first fact, and little is being done to realize the second.
    The fact of the matter is that a massive model doesn't need to be even remotely 'intelligent' in any degree to be dangerous. It simply needs to be given enough power to act dangerously, and be left unmonitored for even a modest amount of time.
    In AI safety there is the common, comical example of a paperclip optimizer which, despite being extremely 'unintelligent', left to its own devices with access to all resources on Earth would destroy the entire planet. By accidental design: its goal is not complex, nor are its methods intelligent. It simply makes paperclips by any means available -- which are, incidentally, all the means possible.
    Ridiculous, of course, for anything to have such means; but the means available are merely the domain of the repercussions.
    A more realistic super-massive domain could be, for example, letting an AI manage the network security of a widely distributed operating system, such as ChromeOS or Windows. Surely, inserting a completely unpredictable random number generator into the security of millions of systems is so ridiculous that it would never happen, right? Surely, a company as massive as Microsoft would never *rush beyond its means* to deploy a product with the capacity to destroy billions of dollars in data. It's not as though they have previously done such a thing with even more predictable software. *cough* Windows deleting documents folders randomly after update installs *cough*
    Minimize the means, and the problem becomes closer to inert. But it does not become safe.
    It is still a black box -- it needs to be understood to become safe. And despite the fact that ML is just "a bunch of inputs that bias an output", there are many, many unexplored, intricate structures to precisely how that happens.
    The overarching problem: how to convince companies to put slowing down, safety and inertness in focus, when the objective of developing the technology is the opposite -- nobody builds a tool to connect it to nothing. A computer is only valuable if it has an output.
    Unfortunately, despite the advice of basically every engineer in the space, companies are still rushing to deploy their products as quickly as possible, often explicitly sacrificing safety -- and very often connecting them to as many things as physically possible. We already have hundreds, maybe thousands of ChatGPT-powered bots crawling the internet, automating tasks, buying things online, lying to people.
    The only way to make companies care about something is through punishment: laws, public pressure. In other words, swathes of previously uninformed people need to be concerned about the safety of the products.
    That is a problem that is best solved by educating them about how the product works, and thereby how it is dangerous.
    But we've run into a problem -- This is not /one/ topic of education.
    You need to bring a sizeable portion of people who are not involved in the space up to speed. That's extremely difficult to do on its own.
    But you also need to, after that, teach them an intricacy and philosophy behind why things can be dangerous in a way they've never imagined.
    But in order to do that, you need to deconstruct and overwrite decades of previous bias about the problem, in order to make the understanding realistic and concrete.
    All the while, it needs to be done in a timely enough manner to create real pressure. Otherwise the risk is that, as previously with ballistic calculation, chess engines and computer graphics, the space will rapidly grow from relatively poor performers into extremely powerful, optimized dreadnoughts.
    After that happens, even if we were to enforce some level of safety, we would be fighting through massively inert and powerful things in order to squeeze in safety protocols which should have been there before the whole thing went supernova. Trying to fit a bunch of safety protocols into a thing that nobody understands is difficult indeed. It's unlikely that companies would be willing to invest in such an expensive process even if it were legally enforced -- if they would even be capable of doing it.
    The process of education is too difficult and too slow.
    Unfortunately, if the goal is to reach a critical mass of public concern over the safety of AI, one has to play strategically and utilize existing public bias. It's much easier to convince people that AI is concerning by using an idea that's already in their head than by giving them a new one which might seem, to them, even more false. Even if the idea that's already in their head is as silly as Skynet inventing time travel and chasing little kids, it's had so much time to settle that it can feel more real than a real problem.
    While I don't really like this approach; I much prefer when education is actually happening; I prefer for people to be truly informed about what things are, and how they really work -- that is a separate educational problem from the safety of the systems.
    I think the situation is quite dire -- we don't really have time to solve two educational problems. The problem of safety supersedes the problem of understanding, despite depending on it.

  • @henryfleischer404
    @henryfleischer404 1 year ago +6

    I like this more opinionated video, as the opinions expressed in it are what I want to hear. As a 3D artist and programmer, AI is very intimidating, particularly the kind that tech executives are trying to sell but don't have. Fortunately with programming, the hard part is knowing what I want, which no AI can help with.

  • @screwsnat5041
    @screwsnat5041 1 year ago

    PS: I don't think humans will ever be as good as AI at objective optimisation; however, humans still excel at finding an average optimal solution for each scenario. A true AGI will be one that can optimise averageness: not 100% optimised, but averagely optimised relative to every other variable. I'm currently working on a new concept that works with this principle. Like @ImprobableMatter said, the reason Hamilton is so good is that he's good at optimising so many things relative to each other: speed relative to braking and drifting, his own self-preservation, and others' self-preservation too. If you optimise any one of these factors too much relative to the others, he's just a crap driver; he's good because he found a balance between these factors and can optimise them in real time whilst still maintaining a good average.
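The "averagely optimised" idea can be caricatured with a score that rewards balance across objectives, for example a geometric mean, where tanking any single factor tanks the whole score. The function and its numbers are purely a toy illustration:

```python
# Toy multi-objective score: the geometric mean punishes imbalance, so the
# best "driver" balances all factors rather than maxing out any single one.

def balanced_score(speed, braking, self_preservation):
    """Geometric mean of three objectives, each in (0, 1]."""
    return (speed * braking * self_preservation) ** (1 / 3)

print(balanced_score(0.8, 0.8, 0.8))   # balanced across all three
print(balanced_score(1.0, 1.0, 0.2))   # over-optimised on speed and braking
```

With these numbers, the balanced profile scores higher than the lopsided one even though the lopsided one maxes out two objectives, which is the comment's point about over-optimising a single factor.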

  • @ignaslt
    @ignaslt 1 year ago

    Cool

  • @trainskitsetc
    @trainskitsetc 11 months ago

    This channel deserves a lot more love than it gets

  • @ibma_DMmeTheGuyWhoTookMyName
    @ibma_DMmeTheGuyWhoTookMyName 1 year ago +1

    Never change, pls. Or show me another channel like this one.

  • @Zpajro
    @Zpajro 1 year ago +3

    Consider for a moment the struggles one might face when dealing with dyslexia or learning to communicate in English as a second language. These are the challenges I've personally confronted. But through AI, I've found an ally that's akin to an advanced spell-checker on steroids. This powerful tool doesn't stop at just spotting my misspelled words. It enhances my writing by refining sentences, removing repetitive elements, and improving formatting. Furthermore, it offers constructive feedback, assisting me in my writing.
    Now, ask yourself this: From reading this text, would you have guessed that I grapple with dyslexia or that English isn't my first language? The odds are low, and that is a testament to the power of AI, a technology that enables smoother and more effective communication for individuals like me.
    But the benefits of AI don't stop at spell-checking and sentence refining. One of the most transformative features I've found is its ability to summarize lengthy and intricate texts. We've all encountered documents or articles so dense, it feels like you need a degree in cryptography just to decipher the main points. But with AI, those previously daunting texts are distilled into clear, concise summaries.
    Imagine being handed a 50-page report and, within moments, you have a succinct overview of its contents. It's akin to a personal assistant who's already read everything and can instantly summarize the essentials. This is particularly helpful for people who might struggle with dense language or complex structures, making information more accessible and easier to understand. It's another way AI is not just a helpful tool, but a true game-changer in breaking down barriers to effective communication.
    Despite these revolutionary capabilities of AI, it's important to remember that it's a tool, and it requires human judgment to be most effective. When I use AI, I always proofread its output, as it can occasionally misinterpret context or nuances, a challenge sometimes referred to as AI 'hallucination'.
    When using AI to summarize long and complex texts, I've found that patience and a bit of strategy can yield more comprehensive results. It can be beneficial to prompt the AI multiple times, requesting it to interpret the text from various angles. This can bring out varying perspectives, focus on different elements, and ultimately provide a more nuanced understanding of the material.

    • @Zpajro
      @Zpajro 1 year ago +2

      I asked it to summarize it for those that think it's too long...
      "The text talks about how powerful computer programs (Artificial Intelligence or AI) can help people who have trouble with language, like those who have dyslexia or are learning English. The author says AI can fix spelling, make sentences better, and even summarize long, hard-to-understand texts. But it's not perfect and can sometimes get things wrong, so people still need to double-check what it does. The main point is that AI is really helpful with language, but it's not perfect and we still need to use our judgement when using it."

    • @Rationalific
      @Rationalific 1 year ago

      @@Zpajro Both of your comments here impressively illustrate your point!

  • @tesseractcubed
    @tesseractcubed 1 year ago +1

    Technology isn't a universal solution; it solves specific problems that could otherwise be solved only in a more complex, costly way, with technology development predicated on the idea that technology can improve human quality of life through predictability (the agricultural revolution and stable food), reduction in labor (more efficient tools), etc.
    AI seems to be marketed as a solution for any use case, but AI isn't practical for most use cases given other solutions.