Generative AI Has Peaked? | Prime Reacts

  • Published 21 Sep 2024
  • Recorded live on twitch, GET IN
    Reviewed Video
    • Has Generative AI Alre...
    By: Computerphile | / @computerphile
    My Stream
    / theprimeagen
    Best Way To Support Me
    Become a backend engineer. It's my favorite site
    boot.dev/?prom...
    This is also the best way to support me: supporting yourself by becoming a better backend engineer.
    MY MAIN YT CHANNEL: Has well-edited engineering videos
    / theprimeagen
    Discord
    / discord
    Have something for me to read or react to?: / theprimeagenreact
    Kinesis Advantage 360: bit.ly/Prime-K...
    Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best; better than PlanetScale or any other)
    turso.tech/dee...

COMMENTS • 897

  • @JGComments
    @JGComments 4 months ago +630

    Devs: Solve this problem
    AI: 10 million examples please

    • @DevPythonUnity
      @DevPythonUnity 4 months ago +12

      "Actually, AI should strive to be just smart enough to acquire and contemplate new data, including introspection. What do you do when confronted with an unsolvable problem? You gather data, experiment, collect results, then engage in self-reflection to update your knowledge base. It's not merely about amassing data, but rather about possessing the capability to acquire fresh data, experiment with it, and engage in introspection."

    • @tempname8263
      @tempname8263 4 months ago +27

      @@DevPythonUnity please repeat your message, but this time use no more than 1 space inbetween words
      generate 4 different versions of such message

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 4 months ago +8

      Devs: Generate 10 million examples of this problem.

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 4 months ago +7

      @@DevPythonUnity ! Disclaimer: GPT was trained on data until 2021; any answers after that date may be hallucinated. We will solve this by searching Google and feeding the first results into your context, but you will feel like we are now able to generalize to any answer.

    • @Sky-fk5tl
      @Sky-fk5tl 4 months ago +1

      Isn't that how humans learn too...

  • @drditup
    @drditup 3 months ago +153

    If only all Windows users would start taking pictures of everything they do so the AI algorithms can get more data. Maybe a screenshot every few seconds. I think I recall something like that.

    • @samblackstone3400
      @samblackstone3400 3 months ago +21

      AI data collection legislation now.

    • @magfal
      @magfal 3 months ago +8

      @@samblackstone3400 Could even drop the word AI from it...

    • @definitelynotacyborg
      @definitelynotacyborg 3 months ago +7

      Don't worry: since Recall has been recalled, we'll have Apple Intelligence, which is going to do the exact same thing from the moment you give it access to your device.

    • @Akab
      @Akab 3 months ago +6

      @@definitelynotacyborg You mean until Apple gives it access to their devices 😉

    • @CapitalGearGaming
      @CapitalGearGaming 18 days ago

      That would never work; there are a lot of issues with this. Aside from the obvious privacy problems, it's a matter of storage and processing, and of the type of data being provided.
      Even if all Windows users were okay with that, where do you store the quite literally millions, perhaps billions of images you receive daily? (Many people have multiple computers, and organizations can have hundreds up to hundreds of thousands of Windows machines.)
      Even if you do manage to store all these images, a supercomputer able to process all that daily info quickly enough honestly doesn't exist, and even if you built one, it could be considered out of date in a few years due to advancements in AI itself.
      Even if you do manage to process all this data, much of it is nonsensical. On occasion I leave my computer be and don't manage to use it till later in the day; is my computer taking screenshots of a blank/empty background? (What data is actually being provided by taking random screenshots? What about games? Would it take screenshots of my games? I'm not sure what this system would actually teach an AI other than people's browsing and usage behaviors.)
      Even if all this works, even if you managed to resolve these issues, people would find out and poison your results anyway. A lot of people don't like AI, even more people love to troll, and when the public gets the ability to affect an AI (especially through something as simple as screenshots) they're gonna start trolling.
      With all that being said, there's a limit to current AI models. Truthfully, diminishing returns doesn't mean "we need more data"; it means we need to adjust how AI learns. There's a ceiling to how good AI can be, regardless of the amount of time or data provided. The issue isn't processing power, and it's not a limitation of data; it's an engineering issue.

  • @NunTheLass
    @NunTheLass 3 months ago +107

    Isn't it crazy that you hear more people worry about AI polluting the internet for training future AIs than about the fact that it is polluting the internet for, you know, YOU AND ME?

    • @g_wylde
      @g_wylde 3 months ago

      True but I guess most of us who are vaguely internet savvy can tell the AI crap from legitimate information. AIs themselves cannot do that, they'll just take it in and regurgitate something even worse out. Which means that those people who are less savvy will be faced with more and more fake information and all of us will be swimming through growing piles of garbage to find anything useful.

    • @jakke1975
      @jakke1975 3 months ago +11

      The environmental pollution from AI is even worse, and honestly, for what? An advanced chat toy for adults that operates with the "intelligence" of a dog?

    • @VinnyMickeyRickeyDickeyEddy
      @VinnyMickeyRickeyDickeyEddy 3 months ago

      @@jakke1975 Yup. Rarely gets discussed. Same with VR graphics.

  • @MrSenserus
    @MrSenserus 4 months ago +175

    The computerphile guys are my uni lecturers atm and for the coming year. It's pretty cool to see this.

    • @Michael-ty2uo
      @Michael-ty2uo 3 months ago +12

      Damn, lucky asf. They definitely enjoy teaching others about comp sci and math topics; that can't be said about most professors.

    • @WretchMusou
      @WretchMusou 3 months ago +1

      Are they nice people in real life? They seem to be in the videos...

    • @MrSenserus
      @MrSenserus 3 months ago +5

      @@WretchMusou Yeah, generally! Definitely some characters though. Steven is a great lecturer and awesomely knowledgeable, but definitely a quirky character.

    • @precooked-bacon
      @precooked-bacon 3 months ago +2

      very lucky. make good use of the time.

    • @r.k.vignesh7832
      @r.k.vignesh7832 2 months ago

      That's pretty cool indeed! The Computerphile guys taught me some concepts I couldn't learn from my actual uni resources (Go8 in Australia). Make sure to get the best out of your time there!

  • @SL3DApps
    @SL3DApps 4 months ago +409

    It's crazy how OpenAI's only way to stay relevant in this market vs big tech such as Google and MS is to sell the hype that AI will not peak in the near future. Yet they are the company everyone is relying on to say whether AI has peaked... why would they ever admit anything that could be damaging to their own company?

    • @furycorp
      @furycorp 4 months ago +55

      Altman just needs everyone to hand over more personal data and private/internal documents from businesses so he can live out the megalomaniac fantasies that he talks about in interviews

    • @alexandrecolautoneto7374
      @alexandrecolautoneto7374 4 months ago +52

      AI Trains on the internet -> AI filled the internet with garbage -> AI doesn't have good training data anymore...

    • @hughmanwho
      @hughmanwho 4 months ago +3

      @@furycorp I'd be curious to see these interviews you are referring to

    • @hughmanwho
      @hughmanwho 4 months ago +7

      My guess is that ChatGPT 5 will be better quality. 4 definitely has some issues.

    • @dixztube
      @dixztube 4 months ago +8

      @@furycorp He isn't trustworthy at all

  • @denysolleik9896
    @denysolleik9896 4 months ago +363

    It can do anything except tell you that it doesn’t know how to do something.

    • @Vlad-qr5sf
      @Vlad-qr5sf 4 months ago +7

      If it can do anything then it doesn’t need to tell you that it can’t do something. Your statement is contradictory.

    • @shafferfs
      @shafferfs 4 months ago

      @@Vlad-qr5sf shut up, nerd

    • @denysolleik9896
      @denysolleik9896 4 months ago +82

      @@Vlad-qr5sf someone always thinks they’re smarter than me.

    • @hootmx198
      @hootmx198 4 months ago +12

      Just like your average internet user haha

    • @JGComments
      @JGComments 4 months ago +7

      Right, it doesn’t actually fundamentally understand what anything is, like what a cat is versus what a dog is.

  • @granyte
    @granyte 4 months ago +233

    "Steer me into my own bad ideas at an incredible speed" LMAO, this is perfect; it's exactly what it does when it even works at all. I don't know if my skills have improved that much since GPT-4 came out or what, but it feels like Copilot and ChatGPT have become way dumber since launch.

    • @allansmith350
      @allansmith350 4 months ago +17

      I use all of them and I kind of agree, but I will say, I've cowboyed into some small project solutions VERY fast with AI. They're surely not robust or maintainable though.

    • @AndrasBuzas1908
      @AndrasBuzas1908 4 months ago +14

      It breaks down the moment you try to do something complex that it hasn't seen before.
      Even then with small problems, it can completely miss the point. It's only really good for the occasional auto complete suggestions.

    • @rngQ
      @rngQ 4 months ago +5

      Engineers at OpenAI have talked about how the quality of generation scales with compute. So as more people use GPTs, I can imagine the compute pool being divided more thinly, which lowers the quality of the output. Look at how drastically it scales with Sora, for example.

    • @elPresidente650
      @elPresidente650 4 months ago +4

      @@allansmith350 I've been using it for a while, and honestly, I can't complain too much. I don't ask it to do anything fancy, though. It comes in handy when writing documentation based on my layman's prompts. It needs to be edited, of course, but it does a good job at organizing my ideas.

    • @TheManinBlack9054
      @TheManinBlack9054 4 months ago +1

      Use Claude 3 Opus, it's far better for coding. Seriously. Opus is really better.

  • @amesasw
    @amesasw 4 months ago +75

    One major problem: if I ask a person how to do something that none of us knows the solution to, they may theorize a solution, but they will often tell you they are guessing and not 100% sure about some parts of their proposed solution.
    ChatGPT can't really theorize for me, or tell me that it is not sure of an answer but is theorizing a solution based on its understanding or internal model.

    • @doctorgears9358
      @doctorgears9358 4 months ago +31

      It will theorize and be confidently wrong. Which is honestly worse than it just admitting a lack of knowledge.

    • @BHBalast
      @BHBalast 3 months ago

      There is a compute-intensive method to check for model confidence. As LLMs are statistical models, one might prompt one multiple times and check whether the answers are the same. The second step can also be done by an LLM. This method works and was used in some paper associated with medical use of LLMs, but I don't remember the name.

    • @reboundmultimedia
      @reboundmultimedia 3 months ago +1

      If you give a human a new problem, they will often use tools, research, test things out, etc. to find the solution. There are very few humans that can simply solve a new problem without some kind of pretraining involved. There is no reason a very, very good LLM can't do the same thing. They will be able to use tools the same way a human can.

    • @therealjezzyc6209
      @therealjezzyc6209 3 months ago

      @reboundmultimedia While what you're saying isn't wrong, it isn't accurate to say that humans and LLMs learn the same way, or that they learn the same relationships. First, humans learn faster than LLMs do, with less training data. Second, when faced with a challenging problem, a human will go off and collect new information; an LLM will not go and find new textbooks, put them into its training data, and retrain itself to learn new correlations. Humans can actively acquire knowledge they haven't seen or been trained to work with; LLMs cannot acquire knowledge that wasn't implicit in the representations of the data they were trained on.

    • @justinwescott8125
      @justinwescott8125 3 months ago

      It will tell you it's not sure if you ask it to. But you're right that it's not a built in behavior.
      "Hey ChatGPT, for this conversation, if you give me an answer that you're not very sure about, I want you to tell me. In fact, for every answer you give, please give me a percentage that represents how sure you are, and explain how you arrived at that percentage."

  • @MrKlarthums
    @MrKlarthums 4 months ago +80

    There's plenty of software that has simultaneously improved while having an entirely degraded user experience. If companies feel that it makes them more money, they'll absolutely do it. LLMs will probably be part of that.

    • @monad_tcp
      @monad_tcp 4 months ago +14

      Windows 11, for example: structurally the thing is actually better than the previous ones, but in user experience it has degraded far from Windows 7. Even though Windows 11 is prettier than Windows 10, which was ugly as hell, it's far from the simple beauty of Windows 7's glass, and it's barely usable.

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 4 months ago +5

      @@monad_tcp To be fair, isn't that just the Microsoft development cycle: just alternating between releasing a good product and then releasing a shitty one? At least that is what I've been told since I was a kid, and my only experience is W7(good), W8(dogshit), W10(good), and then W11(dogshit, but improving).

    • @monad_tcp
      @monad_tcp 4 months ago +1

      @@Forty8-Forty5-Fifty8 Probably; it's the tick-tock cycle from old Intel.

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 4 months ago +3

      @@monad_tcp lol, I was just having a conversation yesterday with my grandfather about my conspiracy theory that Intel pretends to release a new generation every year when in reality it takes like 2-4+ generations for any noticeable performance difference. My motherboard just died and I was in the market for an upgrade, but it didn't seem like there was anything worthwhile. I guess there is something to that theory.

    • @monad_tcp
      @monad_tcp 4 months ago +1

      @@Forty8-Forty5-Fifty8 I think Intel died at 14nm; nothing got better after that.

  • @Afro__Joe
    @Afro__Joe 4 months ago +81

    AI is becoming like ice cream to me, good every once in a while, but I get sick of too much of it. With Samsung trying to shove it into everything in my phone, MS trying to shove it into everything PC-related, Google pushing it at every turn, and so on... ugh.

    • @DJWESG1
      @DJWESG1 3 months ago +3

      That's the same Samsung that can't even get its spellchecker and autocorrect to work efficiently for ppl with poor spelling and grammar.

    • @the0ne809
      @the0ne809 3 months ago

      Google using AI for its search engine is wild to me.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago

      @@the0ne809 every search engine uses it

    • @Overt_Erre
      @Overt_Erre 3 months ago

      They're pushing it because they want to collect more data from you. AI will seem free and useful as long as they think more data will improve their models' efficacy. Once they see the diminishing returns, suddenly you'll be asked to pay and the usage rates will plummet.

    • @chrishayes5755
      @chrishayes5755 2 months ago

      I'm loving AI. It makes my life so much better and easier, saves me money, helps me brainstorm.
      If you can't leverage AI to make your life easier, you're either ignorant or have nothing going on in life.

  • @xCheddarB0b42x
    @xCheddarB0b42x 3 months ago +19

    The young ones may not remember the VR craze of the late 90s and early 00s, but us oldkins do. AI feels like that to me.

    • @rh906
      @rh906 3 months ago

      The difference between then and now is that LLMs are at least useful if you understand their limitations and don't plop out your brain thinking they're a replacement. Can't fix lazy and stupid people, I suppose.

    • @tlz124
      @tlz124 3 months ago

      VR in the 90's?

    • @justinwescott8125
      @justinwescott8125 3 months ago +3

      Yup. Nintendo gave it a try in the 90s with a little product called the Virtual Boy.
      By the way, even though VR was a failed craze back in the 90s and 00s, it did eventually happen. I use my Meta Quest like every day to play games and stay in touch with faraway friends. Some of the games are incredible, like Pistol Whip and Arizona Sunshine.

    • @FarnhamJ07
      @FarnhamJ07 3 months ago +1

      Yep yep, the Virtual Boy didn't come completely out of left field! I'd say the hype was really more about 3D graphics than VR itself, but it didn't take long for them to start pushing the idea that those 3D graphics could then be used to generate an entire 3D virtual world around you. Everyone knew the 3D graphics part was coming at least; I think a lotta people forget that the Virtual Boy and original PlayStation came out within a few months of each other!

  • @jameshickman5401
    @jameshickman5401 4 months ago +268

    Every exponential curve is secretly a sigmoid curve.

    • @zyansheep
      @zyansheep 4 months ago +5

      So far...

    • @AndrasBuzas1908
      @AndrasBuzas1908 4 months ago +53

      Sigmoid grindset

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 4 months ago +7

      what about the exponential curve

    • @kevin.malone
      @kevin.malone 4 months ago +1

      @@AndrasBuzas1908 I wanted to say that

    • @MikkoRantalainen
      @MikkoRantalainen 4 months ago +19

      I would say that every exponential curve of *naturally occurring events* is secretly a sigmoid curve. You can have pure exponential curves in pure mathematics without any problem, but real-world events are limited by real-world physical limits, and those curves seem to follow a sigmoid curve in the big picture even though short-term results point to exponential behavior.
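
The "every exponential is secretly a sigmoid" point is easy to see numerically: logistic growth tracks the exponential almost exactly while it is far from its carrying capacity, then flattens. A small sketch (the parameters here are arbitrary, chosen only to make the divergence visible):

```python
import math

def exponential(t, x0=1.0, r=1.0):
    # Unbounded exponential growth.
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=1.0, K=1000.0):
    # Logistic growth with carrying capacity K: indistinguishable from
    # the exponential while x << K, then it saturates toward K.
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in (0, 2, 4, 8, 12):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:12.1f}  logistic={s:7.1f}  ratio={s / e:.3f}")
```

Early samples alone (t ≤ 2 here, where the ratio is still above 0.99) cannot distinguish the two curves, which is why extrapolating an exponential from early data is risky.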

  • @TomNook.
    @TomNook. 4 months ago +103

    I hate how AI has been forced into everything, just like crypto and NFTs a couple of years ago

    • @MasterOfM1993
      @MasterOfM1993 4 months ago +37

      somehow feels like the people who used to talk about web3 all the time now talk about AGI all the time

    • @Slashx92
      @Slashx92 4 months ago +1

      Sadly, this is somewhat useful for the corporate world, so it will stay; not like NFTs, which just died on their own.

    • @francisco444
      @francisco444 4 months ago +1

      AI is in everything because it's a universal translator, so it makes sense to put it everywhere.
      Crypto is great but of limited use.

    • @thewhitefalcon8539
      @thewhitefalcon8539 4 months ago +9

      @@MasterOfM1993 Some people running NFT companies are running AI companies now

    • @marceljouvenaz257
      @marceljouvenaz257 4 months ago

      Elon is investing $10 bln in AI this year. YMMV, but that is my high water mark.

  • @GigaFro
    @GigaFro 4 months ago +75

    Just last year, I was sitting in a makeshift tent in an office in downtown San Francisco, attending a Gen AI meetup. The event was a mix of investors and developers, each taking turns to share their projections on the future progress of AI. Most of the answers were filled with exponential optimism, and I found myself dumbfounded by the sheer enthusiasm. When it was my turn, I projected that we were peaking in terms of model performance, and I was certain I was about to be ostracized for my view. That day I learned that as soon as hype enters the room, critical thinking goes out the window - even for the most intelligent minds.

    • @sp123
      @sp123 4 months ago +16

      People go into tech because it's the last gold rush of easy money

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +3

      Great! You seem to have found your audience here, but if I may ask, what were your projections based on?

    • @Danuxsy
      @Danuxsy 3 months ago +3

      But you would have been wrong? GPT-4o is clearly a step up from GPT-4, and OpenAI have stated themselves that we are far from the limit of generative models.

    • @justahamsterthatcodes
      @justahamsterthatcodes 3 months ago +7

      We certainly are plateauing. Compare GPT-2 to GPT-3: wild difference. Now compare GPT-3 to GPT-4: much less difference. Or GPT-4 to GPT-4o.

    • @skyrimax
      @skyrimax 3 months ago +2

      Attended an ML-day-type event last year and had a similar experience. But what dumbfounded me even more was the complete disregard for the social implications of ChatGPT-type programs, like the new Google Overview telling depressed people to jump off a bridge. I think that's similar to your observation about critical thinking, but on the social side.

  • @MikkoRantalainen
    @MikkoRantalainen 4 months ago +32

    Modern image generators can do surprisingly well even with somewhat weird prompts such as "Minotaur centaur with unicorn horn in the head, steampunk style, award winning photograph" or "Minotaur centaur with unicorn horn in the head, transformers style, arc reactor, award winning photograph". Even "A transformers robot that looks like minotaur centaur, award winning photograph, dramatic lighting" outputs acceptable results.
    However, ask one for "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras" and it will totally fail. The latter case has far fewer possible implementations, and this exactness makes it fail.

    • @pureheroin9902
      @pureheroin9902 3 months ago +5

      I need to see your search history 🤣🤣🤣

    • @MikkoRantalainen
      @MikkoRantalainen 3 months ago

      @@pureheroin9902 🤭 My search history is actually pretty boring. Right now it looks like this:
      - phpunit assertequals github
      - css properties selectors sanitizer whitelist
      - sanitize css whitelist functions
      - phpunit assertequals clipped string
      - webp vs avif vs jpeg xl
      - what is intel ark
      - seagate exos helium
      - max fps cs
      - eu legislation consumer battery replacement
      - how Automatic Activation Device works
      - song of myself nightwish

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 months ago +1

      It's interesting how the specificity and exactness of a prompt can impact the results of image generation. When a prompt is too specific or technical, like "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras," the image generator may struggle because it relies on patterns and generalizations learned from a vast dataset of images. Here are a few reasons why this happens:
      - Data training limitations: The training data for these models consists of a vast array of images, but the specific combination of features like "Boeing 737 MAX with cockpit windows replaced with cameras" might not exist in the dataset. As a result, the model can't draw from a learned example and may fail to generate a coherent image.
      - Conceptual complexity: While a "Minotaur centaur with a unicorn horn" is a complex concept, it's based on mythological and fictional elements that the model has likely seen in various forms. This allows it to generalize and create an imaginative output. However, replacing cockpit windows with cameras on a specific aircraft model is a highly technical modification that the model might not have encountered or understood in its training data.
      - Visual coherence: Generating a photorealistic image that includes complex mechanical details, like modifying an airplane's cockpit, requires a high level of visual coherence and understanding of engineering. The model might struggle to maintain the realistic appearance of the Boeing 737 MAX while accurately implementing the specified changes.
      - Creative interpretation vs. precision: When given creative or fantastical prompts, the model has more leeway to interpret and generate the image. However, when asked for precise, technical modifications, it needs to adhere closely to real-world specifications, which can be challenging without explicit training examples.
      To improve the chances of getting a satisfactory result with a more specific prompt, one might try breaking down the request into simpler parts or providing additional context that helps guide the model's interpretation. For instance, describing the cameras' placement and appearance in more detail, or using analogies the model might better understand, could potentially yield better results.

  • @hamm8934
    @hamm8934 4 months ago +105

    Read up on the "frame problem" and "grounding problem". This is old news and has been known for decades. Engineers and venture capital just don't care because it's not in their interest.
    Edit: also Wittgenstein's work on family resemblance and language games.
    Edit 2: I should clarify that I am referring to the epistemological interpretation of the frame problem, not the classical AI interpretation. That is, the concern of an infinite regress arising from an inability to explicitly account for non-effects when defining a problem space; this is specifically at the level of computation, not representation. For example, if an agent is told "the spoon is next to the plate", how are all of the other non-effects, like a table, placemat, chair, room, etc., successfully transmitted and understood, while irrelevant inaccuracies like a swimming pool, cows, cars, etc. are omitted and not included in the transmission of information? Fodor, Dennett, McDermott, and Dreyfus have plenty of canonical citations and works articulating this problem.

    • @InfiniteQuest86
      @InfiniteQuest86 4 months ago +22

      As long as you profit before anyone figures it out, you win.

    • @abdvs325
      @abdvs325 4 months ago +2

      Those problems don't seem like limits at all. The frame problem is just about understanding relevant context, for which there is no definitive evidence that it can't be reproduced in AI. Neither has the grounding problem, which is just about understanding the real world rather than statistical relationships between words, been given any strong evidence that it is a limit on AI progress. This is laziness.

    • @hamm8934
      @hamm8934 4 months ago

      @@abdvs325 Those are extremely surface-level strawman understandings of both. Far greater minds than anyone watching this video have debated and formulated both of these critiques. You can hand-wave all you want, but the white papers have been left undisputed for decades.
      Here are a few points you are missing/oversimplifying:
      - The frame problem argues that in principle there is no deterministic (or probabilistic) way to determine relevant context in a logical framework. That is the problem. It shows that an infinite regress emerges when trying to determine relevance and irrelevance following deduction or induction. These systems axiomatically dissolve into intractability.
      - The grounding problem is not about determining the real world from a word. It cuts to the very root of deductive and symbolic systems. It shows that there must in principle be external dimensions/modalities that allow humans to deduce meaning from symbols. Symbols themselves are not sufficient. For instance, one's understanding of the symbol "food" is multimodal and multidimensional. You don't understand the word food because you read the definition of the symbol. You've smelt food. You've tasted food. You've felt food. You've prepared food. You've thrown away food. You've remembered food. Etc. Read up on the Chinese Room argument and it might make this clearer. Or read some of Wittgenstein's work on the meaning of a word.
      I'm rambling at this point. Again, read up on these and don't be so naïve as to reject them after having a super basic understanding of them. These problems are real and ever present. These problems are very much open.

    • @hamm8934
      @hamm8934 4 months ago +17

      @@abdvs325 You're oversimplifying and strawmanning both. YouTube deleted my response, but read more.
      Also, "For which there is no definitive evidence that it can't be reproduced in AI" is a fallacy: you cannot prove a negative. There is no definitive evidence that there are not fairies. Exactly. No one is saying there is. The point is that there is no evidence in favor of positing the existence of fairies, therefore we just don't say there are fairies, but we can never say there aren't.
      There are no serious rebuttals to the frame or grounding problem, and as such, there is no reason to think they are wrong. They might be, but they've stood strong since at least the 80s when the terms themselves were coined, even though the concepts go all the way back to Hume. You need positive evidence to say they are wrong. Until then, they stand as the null hypothesis.

    • @clubpenguinfan1928
      @clubpenguinfan1928 4 months ago +10

      Finally someone mentions philosophy of language. When the video mentioned the idea of mapping text/images to their meaning in some embedding space, it set off some alarms for me. If some hypothetical AGI can grasp meaning (like we do) via this architecture, then we might as well describe the "x means M" relation as just this embedding map.
      Wouldn't this have huge implications for the semantic problem? In a way it feels like an implementation for a referential-like theory of meaning, and those are the very first theories you "debunk" in an intro Phil of Lang class.

  • @bwhit7919
    @bwhit7919 3 months ago +27

    Most people misunderstand when they hear AI follows a "power law". If you read OpenAI's paper on scaling laws, you need a 10x increase in both compute and data to get a 0.3 reduction in the loss function. In other words, you need exponentially more data to keep making the models better. It's not that the models are getting exponentially better.

    • @DJWESG1
      @DJWESG1 3 months ago

      No, they just haven't figured out how to utilise small amounts of data.

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 місяці тому

That's a great point, and it's a common misunderstanding. The term "power law" in the context of AI and machine learning, particularly in the scaling laws for neural network training, refers to the relationship between the amount of compute/data and the resulting improvement in performance. Here's a more detailed explanation to clarify this concept:
Understanding the scaling laws in AI:
- Definition: in machine learning, power-law scaling means that to achieve a certain improvement in model performance (e.g., a reduction in loss), the amount of compute and data required scales according to a power law.
- Example: according to OpenAI's scaling laws, if you want to reduce the loss function by a certain amount (e.g., a 0.3 reduction in loss), you need to increase both compute and data by an order of magnitude (10x). This relationship can be described by a power-law function.
- Exponential data requirements: the power law indicates that data and compute requirements grow exponentially to achieve linear improvements in model performance. As the model gets better, the resources needed to continue improving it increase dramatically.
- Linear performance gains: despite the exponential increase in resources, the actual performance gains (e.g., accuracy or reduction in loss) are not exponential but rather linear or sub-linear. This is why models do not get exponentially better with exponentially more data and compute.
- Resource intensive: as models grow larger and more complex, the cost (in computational power and data) to train them effectively becomes significantly higher.
- Diminishing returns: there are diminishing returns in performance improvement relative to the exponential increase in resources. For instance, doubling the compute might not halve the error but only slightly reduce it.
- Misconception of exponential improvement: some might misinterpret "power law" to mean that the models themselves improve exponentially with more data and compute. In reality, the improvement is much more modest compared to the exponential growth in resources required.
- Focus on scaling: understanding the scaling laws helps in setting realistic expectations and planning resource allocation for training larger models. It highlights the need for efficient algorithms and techniques to optimize resource use.
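The scaling relationship described above can be sketched numerically. This is a toy power-law loss curve, not OpenAI's actual fitted constants; `a` and `alpha` here are illustrative placeholders:

```python
# Toy power-law loss curve: loss(C) = a * C**(-alpha).
# 'a' and 'alpha' are illustrative placeholders, NOT OpenAI's fitted constants.
def loss(compute, a=10.0, alpha=0.05):
    return a * compute ** (-alpha)

base = loss(1e21)   # loss at some baseline compute budget
ten_x = loss(1e22)  # loss after a 10x compute increase

# A 10x increase in compute shrinks the loss only by the factor 10**-alpha:
print(ten_x / base)      # ~0.891, i.e. only an ~11% reduction
# To cut the loss in half you would need 2**(1/alpha) times more compute:
print(2 ** (1 / 0.05))   # ~1.05 million times the baseline budget
```

With any small exponent like this, resources multiply by orders of magnitude while the loss creeps down linearly, which is the whole point of the parent comment.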

    • @DJWESG1
      @DJWESG1 3 місяці тому +1

      @@thiagopinheiromusic its almost as if structuration and the power relationship are real..

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 місяці тому

      @@DJWESG1 fact

  • @tequilasunset4651
    @tequilasunset4651 4 місяці тому +26

We didn't even go "from nothing to something" - current LLMs are just a marked spike/breakthrough in the capability of machine learning that's been around for ages. I think we'll still see huge improvement in the technology that enabled that breakthrough, but I doubt there will be a "next level" - one that's not just a tech company branding a new product as such - for a good few years.

    • @TheNewton
      @TheNewton 3 місяці тому +6

the breakthrough of course being "just throw more resources at the problem"

  • @techsuvara
    @techsuvara 4 місяці тому +77

I like to say "AI accelerates you in the direction you're going - pray it's not the wrong one"...

    • @BaruyrSarkissian
      @BaruyrSarkissian 3 місяці тому +2

      It's still good to reach the end of a wrong road faster.

    • @techsuvara
      @techsuvara 3 місяці тому +2

@@BaruyrSarkissian that's the problem with wrong roads: if you're asking AI to take you somewhere, it doesn't know it's the wrong road. But if you do things yourself, you can reason that you're down the wrong path much earlier.

    • @BaruyrSarkissian
      @BaruyrSarkissian 3 місяці тому +2

@@techsuvara your initial statement is "AI accelerates you in the direction you're going" - you will go down wrong roads with and without AI.

  • @tonym4953
    @tonym4953 4 місяці тому +13

8:20 OpenAI is doing the same thing with the consumer version of ChatGPT. They are essentially charging users to train their model. Genius and very cheeky!

  • @JackDespero
    @JackDespero 3 місяці тому +4

There is another massive problem that is going to cap AI, at least in the near future: current AI datasets are based on stolen data.
This has legal implications (countries, esp. in the EU, are going to start to ban that type of forgiveness-instead-of-permission approach).
But more importantly, there are two massive practical implications that will happen regardless of whether governments take action:
- Poisoning the well: tools like Nightshade, designed specifically to confuse LLMs and ML while causing as little disturbance to humans as possible, are becoming more popular and more sophisticated, and they are being used by the top artists that you want to copy. I am sure that similar tools will appear for other fields.
- Cannibalism: we are already seeing it. If you google important historical figures, AI images of them are often the first results.
The more AI output is used and shared over the internet, the more it will enter new AI training datasets, causing models to believe that humans have, in fact, six fingers and two heads.
AI is transforming into a European royal family: so inbred that it starts to cause serious problems.
And this happens also to code (code generated by Copilot then used to train Copilot), fanfics, literature, even scientific papers (esp. in lower-tier publications).

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 місяці тому

      The perfect storm is brewing for AI, and it’s all based on the rock-solid foundation of stolen data. Because why would anyone think that using massive datasets scraped without consent might lead to legal or ethical dilemmas? It’s not like the EU is known for its stringent data protection laws or anything. Surely, they’ll just let it slide!
      Poisoning the Well
      And then there’s the delightful prospect of poisoning the well. Tools like Nightshade, designed to confuse and corrupt AI training data while being barely noticeable to humans, are just the tip of the iceberg. Top artists are using these tools, making sure that AI learns to produce the most avant-garde, surreal, and utterly unusable art. Who wouldn’t want an AI that thinks Picasso painted with crayons during an earthquake?
      Cannibalism
      But wait, it gets better. Enter cannibalism: AI feeding on AI-generated content. It’s the digital equivalent of inbreeding, and we all know how well that turned out for European royalty. Imagine a future where every historical figure has six fingers and two heads because that’s what the AI “learned” from its own distorted outputs.
      And it’s not just images. Code is being recycled too, with Copilot regurgitating its own generated code, leading to a feedback loop of mediocrity. Fan fiction, literature, scientific papers - everything’s up for grabs. Soon, we’ll have AI-authored research proving that unicorns existed because some model somewhere decided to get creative.
      The Future of AI
      So, let’s raise a toast to the future of AI: a world where data is a tangled mess of legal troubles, poisoned wells, and cannibalistic content. Who needs accurate, reliable information when you can have a digital echo chamber of nonsense? It’s not like we were aiming for progress or anything. Just sit back and enjoy the ride as AI stumbles its way through a minefield of its own making. What could possibly go wrong?

  • @apexphp
    @apexphp 4 місяці тому +102

It's even simpler than that. They've simply run out of training data. They've trained the LLMs on literally every piece of data ever generated by humans since the dawn of mankind, from every word written to tons of satellite images, to every movie produced and song recorded. There is no more training data, and the LLMs still get things wrong all the time (the other day Meta AI was adamant that a SHA-256 hash is 64 bytes in length; it's not, it's 32 bytes).
And you can't just have these things train on synthetic data they create, because that just makes them dumber. Plus, with the sheer amount of AI-generated garbage content and spam that now exists in the world, these LLMs are probably as smart as they're going to get for a long time. I read a report a while ago estimating that the volume of text generated by humans from the dawn of mankind until recently is now being generated by AI every two weeks and pushed to the internet.
So the pool of training data for LLMs is now of lower quality overall. I don't know, I'm rambling now.
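The SHA-256 mix-up above is easy to check in a couple of lines; a plausible source of the confusion is that the hex encoding of a 32-byte digest is 64 characters long:

```python
import hashlib

digest = hashlib.sha256(b"hello").digest()         # raw bytes
hex_digest = hashlib.sha256(b"hello").hexdigest()  # hex string

print(len(digest))      # 32 -- SHA-256 outputs 256 bits = 32 bytes
print(len(hex_digest))  # 64 -- the hex *string* is 64 characters, the likely mix-up
```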

    • @DeepThinker193
      @DeepThinker193 4 місяці тому +15

The obvious solution to this is to go back to the drawing board, actually figure out and understand how the AI works, improve it, and recreate the AI from scratch.

    • @BB-uy4bb
      @BB-uy4bb 4 місяці тому +19

You're missing a huge point: data quality. I would estimate that 90% of the internet is wrong/garbage data; there could be huge improvements if you simply let the AI see only the quality data and filter out the garbage. Chances are the AI only makes so many mistakes because it saw that many in the training data.
The next thing is we always expect the AI to be correct on its first try, but if you give a human only one chance he'll most likely be wrong too. We learn, create ideas, and get to the correct solution iteratively, yet expect the AI to give the correct answer in one shot - not a fair comparison. If you give AIs more time to think, they get better as well.

    • @MrMeltdown
      @MrMeltdown 4 місяці тому

      You mean the AI is getting distracted by pron….

    • @dragoon347
      @dragoon347 4 місяці тому +2

Overall, data needs to be marked for tokenization into LLMs. Previously there were only x amount of pictures with descriptions; then the vision multimodal models came out, and now you can describe the images with a better dataset - more descriptive, more in-depth, and multi-dimensional... i.e. it's a dog, a yellow dog, a yellow Jack Russell terrier, a dog in the canine family, etc. So the data may shrink, but the richness of the data will be far better. And now, with GPT-4o, you have hear/see/NLP datasets, giving at least 3 vectors to provide descriptions of tokens.

    • @jomohogames
      @jomohogames 4 місяці тому

I don't think the difference between images and code is as big as Prime is making out. The abstraction of a star is very limited as well (pentagram, 4-pointed, 6-pointed?), and the variation he's talking about is the same in the implementation (variable names, comments, imports, and language in general).

  • @MrSnivvel
    @MrSnivvel 4 місяці тому +60

    LaTeX formatted papers (the research paper in the video) are gigachad. You cannot prove me wrong.

    • @-book
      @-book 4 місяці тому +15

      LaTeX is such good software, puts Word to shame

    • @sahasananth987
      @sahasananth987 4 місяці тому +6

I love LaTeX, it's awesome - I've thrown Word and Google Docs in the trash lol. I use LaTeX for assignments at school too.

    • @AJewFR0
      @AJewFR0 4 місяці тому +8

I went to a good CS college with a slightly math-heavy emphasis. I was the kid who started learning LaTeX for homework in multivariable calculus. It was such a useful tool to know for all my math, CS, and engineering classes that required PDF submission. I still use basic LaTeX-style formatting in markdown docs at work.

    • @xplorethings
      @xplorethings 4 місяці тому +7

      So.. every paper outside of social sciences?

    • @MrSnivvel
      @MrSnivvel 4 місяці тому +4

@@xplorethings **whoosh** The use of LaTeX is rare outside of academia and research papers/publications, and those who do use it outside of that scope set themselves far ahead of the rest.
      I know last month was Autism Awareness month, but you'll still get a freebie this time for missing the point.

  • @CristianGarcia
    @CristianGarcia 4 місяці тому +73

    Numberphile but Primagen talks from time to time

    • @virior
      @virior 3 місяці тому

      Yeah! That's called a react, I've been enjoying the format.

    • @kallekula84
      @kallekula84 3 місяці тому +2

      @@virior he usually lets the guy finish a sentence, how often did he even let the guy finish a sentence here?

  • @KrisRogos
    @KrisRogos 4 місяці тому +17

1885: Benz Patent-Motorwagen (first practical automobile) has a top speed of 10 mph / 16 km/h
1908: Ford Model T (first mass-produced automobile) has a top speed of 42 mph / 68 km/h
That is 23 years to gain 32 mph; assuming exponential growth, by the year 2024 our cars should be going 1817 mph / 2924 km/h.
To be fair, linear growth would be "only" 204 mph, which is far more realistic, and you can cherry-pick other "cars" to fit the model even better. However, the point is that this is not a reasonable way to estimate future technological progress.

    • @TheManinBlack9054
      @TheManinBlack9054 3 місяці тому +1

True, but cars have practical limitations; you won't need your car to drive 204 mph.

    • @TheNewton
      @TheNewton 3 місяці тому

In 1997 Andy Green's Thrust SSC set the land speed record of 1,228 km/h (763 mph).
The capability is there; the "should be" part is that cars deliberately don't go that fast for general usage.
A better analogy is probably flight versus manned spaceflight, i.e. by that logic we should already be doing manned Mars missions or sending humans out of the solar system.

    • @KrisRogos
      @KrisRogos 3 місяці тому

      @@TheNewton Just like that was a heavily specialised car, I don't doubt we will have extremely sophisticated models running solutions for cutting edge problems in medicine, physics or even just break records. Future space missions may even require AGI instead of 10+ minute Earth delay. But there is a huge gap between the practically unlimited time and money of moonshot projects and the idea that LLMs will run every detail of our lives and be on every device.
Even if 1000 mph jet cars are theoretically feasible, and even if you could technically get a 300 mph Bugatti, you are not going to do the school run in either.

    • @Gamez4eveR
      @Gamez4eveR 3 місяці тому

      @@TheNewtonthe problem is that the SSC was not a production vehicle

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 місяці тому +1

      Oh, absolutely, it's perfectly reasonable to assume that technological progress follows a neat, predictable path based on early growth rates. I mean, who wouldn't expect cars to be zooming around at 1817 mph by 2024? It makes perfect sense if you just ignore reality and common sense.
      And of course, linear growth is "only" 204 mph, which is obviously what every car on the highway is doing right now, right? Because cherry-picking data points to fit a model is the gold standard of scientific prediction. Forget the complexities of engineering, safety regulations, or actual consumer needs - just draw a line or a curve and call it a day!
      But seriously, why stop there? Let's take the Wright brothers' first flight in 1903. By their logic, since that plane flew at about 30 mph, we should be able to zip around the globe in minutes by now. Oh wait, we aren't? How shocking.
      Yes, predicting the future of technology based on early growth rates is clearly the most reasonable approach. Never mind the countless variables and unpredictable innovations that actually drive progress. Let's just stick to our neat little models and be bewildered when reality doesn't comply.

  • @DeusGladiorum
    @DeusGladiorum 4 місяці тому +13

    I didn’t appreciate Prime making those Kakariko girl noises while I was outside and without headphones

  • @MasamuneX
    @MasamuneX 4 місяці тому +6

I think LLMs as a foundation for AGI make sense, but I also think there needs to be REASONING ability: the ability to hold two concepts in its metaphorical head and then determine which one is better for the task, not just a fire hose of text spewing out. The token cost will be wild though.

  • @thisbridgehascables
    @thisbridgehascables 4 місяці тому +9

I agree; I believe we are going to hit a plateau in AI very soon. We'll make small improvements, but the next jump won't be possible until the very foundation changes.
I think we would need advances in other areas of computing to keep constant growth in AI.

    • @blijebij
      @blijebij 3 місяці тому

That foundation will arrive with neural-network adaptive chips.

  • @YaroslavFedevych
    @YaroslavFedevych 4 місяці тому +7

    A breakthrough will be if you can bootstrap an "AI" on the amount of material sufficient to raise a human child and it gets curious all on its own.

  • @wesmoulder3077
    @wesmoulder3077 3 місяці тому +1

One problem is that the AI is going to take over that intern's job, and then the intern stops getting better. So people like us, who are better than the AI, will not be recreated in the next generation.

  • @benwintraub558
    @benwintraub558 4 місяці тому +7

The XY problem (or the "ex-wife" problem) is the "how do you dynamically name variables in a loop?" problem. I've heard newbie programmers ask this before when what they are really looking for is an array/list.
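A minimal sketch of that exact XY problem: the asker wants dynamically named variables (X), but the real need (Y) is a list:

```python
# X (what gets asked): "how do I make variables named value1, value2, ... in a loop?"
# Y (what's actually needed): a single collection, indexed by position.

values = []           # one list instead of many dynamically named variables
for i in range(3):
    values.append(i * 10)

print(values)     # [0, 10, 20]
print(values[2])  # 20 -- "value3" is just values[2]
```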

  • @LongJourneys
    @LongJourneys 4 місяці тому +6

    I use AI for stupid repetitive stuff I'm too lazy to do myself; but I've noticed in recent months the stuff it cranks out seems to be getting worse and worse.

    • @personzorz
      @personzorz 3 місяці тому

      Or it has lost its novelty and you are noticing

    • @taragnor
      @taragnor 3 місяці тому

      @@personzorz Yeah the first time you see AI code or do something it was this big "wow" moment. Then you start to have it actually do productive stuff to help you and you kinda realize you have to constantly review its work and you're just putting in a ton of effort to get a mediocre job from a rather stupid employee.

  • @snarkyboojum
    @snarkyboojum 4 місяці тому +15

The main issue is that the people responsible for the fundamental approaches used in deep learning today have never wrestled with the problem of induction. They need to read the classic treatment by Hume and then follow up with Popper. Humans don't use induction to reason about the world. It's shocking to me that otherwise highly educated people have never read basic epistemology. Narrow education is to blame, really.

    • @ea_naseer
      @ea_naseer 4 місяці тому

Induction has a formula: Solomonoff induction. Yes, it's intractable, but it's there. But there's no formula for deduction, not even an intractable one, not even an NP-hard one.

    • @specy_
      @specy_ 4 місяці тому +2

This is a cool topic. Why would you say humans don't use induction in their daily life? Excluding the scientific world, which we can say doesn't always use it, induction is probably the simplest and most-used prediction technique among humans. I guess ML models can't really do much other than use induction to get a prediction, unless you are exhaustive with your possible inputs. What's your idea for not using induction in ML?

  • @KertaDrake
    @KertaDrake 2 місяці тому +2

    What if AI is literally just millions of people answering those multi-stage captchas rather than real software?

  • @jamesaritchie1
    @jamesaritchie1 2 місяці тому +4

Those who think generative AI has peaked have absolutely no clue what is actually happening in AI research and development.

  • @nickwoodward819
    @nickwoodward819 4 місяці тому +17

    fuck, tried to get mid journey to put a kiwi on a snowboard. it had no fucking clue

    •  4 місяці тому +4

      Your prompting sux

    • @nickwoodward819
      @nickwoodward819 4 місяці тому +9

      No mate, it's exactly as the video states, it's shit at niche subjects. It wasn't even remotely like a kiwi.
      But please, tell me what 'prompt' would have got it to understand what a kiwi looks like?

    • @isodoubIet
      @isodoubIet 3 місяці тому +1

      I just asked copilot (== gippity + dalle 3) and it did it perfectly

    • @nickwoodward819
      @nickwoodward819 3 місяці тому +2

      @@isodoubIet don't know what to tell you bud, midjourney couldn't do it late last year. not sure how much prompting it needed to get a kiwi looking like an actual kiwi

    • @isodoubIet
      @isodoubIet 3 місяці тому +1

      @@nickwoodward819 You don't have to tell me anything. You can try it yourself. The prompt I used was literally just a kiwi on a skateboard, nothing special. The first time it thought I meant the bird, which is understandable. The second time I specified a kiwi fruit.
      I once tried to get stable diffusion to make a classic grey alien and it just wouldn't. Probably a weird hole in the training data. Definitely no fundamental issue in making it generate "an X on a Y", no matter how unrelated X and Y may be.

  • @Jamsaladd
    @Jamsaladd 3 місяці тому +2

100% true, what you said about Copilot: generative AI will gladly help you make the thing you want to make, regardless of whether it will actually work or is a bad idea for various reasons.

  • @shadeblackwolf1508
    @shadeblackwolf1508 4 місяці тому +6

I think generalized intelligence is a pipe dream that must die.... Where I think the next evolution is going to come from is easy-to-deploy AI that is easy to train yourself for your specialized task.

  • @arexxuru5022
    @arexxuru5022 4 місяці тому +56

Where will ChatGPT train now that StackOverflow is filled with ChatGPT answers? amiright?

    • @trappedcat3615
      @trappedcat3615 4 місяці тому

      There is no end in sight if they train on Github user data or Copilot workspaces in VS Code

    • @dahahaka
      @dahahaka 4 місяці тому +8

      It's already being intentionally trained on synthetic data, it's a non issue

    • @GrumpyGrebo
      @GrumpyGrebo 4 місяці тому +3

      @@dahahaka Yeah you missed the point. Training a generative AI on AI generated data. Human in, human out.

    • @c0smoslive391
      @c0smoslive391 4 місяці тому +27

      @@dahahaka yep and the results are worse
      garbage in garbage out

    • @AR-ym4zh
      @AR-ym4zh 4 місяці тому +1

      Press x to doubt​@@dahahaka

  • @PasiFourmyle
    @PasiFourmyle 4 місяці тому +11

    If the next step is to figure out the training problem, what if the dumb "AI Pins" and "Windows Copilot +Plus ++..." are actually just attempts at having new training data sources?

    • @PasiFourmyle
      @PasiFourmyle 4 місяці тому +2

      I don't know why I said "what if.." like there's an impending doom🤣

    • @ImDGreat
      @ImDGreat 3 місяці тому

@@PasiFourmyle Not just an attempt - they're actually doing it for that. Also Meta, Twitter, Discord, Telegram, WeChat, even games like Valorant and League.

  • @sprytnychomik
    @sprytnychomik 3 місяці тому +1

"Slightly better Google Search" doesn't sound good, since Google Search is just slightly better than Search in Windows, which is designed to find anything but what you're looking for.

  • @derekcahill1775
    @derekcahill1775 4 місяці тому +35

Jeff Bezos said it best, but I think it's telling that AI needs so much data to form a basic model. For example, humans don't need to know everything about driving or have hundreds of thousands of miles in order to start driving a car. The other problem is that AI doesn't perceive opportunity cost like a human, so there's no incentive for it to problem-solve the same way a human would. AI is definitely the future, but it's nowhere near where people think it is, unfortunately.

    • @monad_tcp
      @monad_tcp 4 місяці тому +7

It's funny, I learned to drive my car in one week after a mere 500 km of training data.

    • @monad_tcp
      @monad_tcp 4 місяці тому +12

      I also don't remember needing to read the entire internet to be able to write and understand text.

    • @Slashx92
      @Slashx92 4 місяці тому +16

Yeah, but we have 20 years of experience living in reality (or 16 or w/e) before driving. You already have eye-hand coordination, you have seen cars all your life, you get a rough idea of how the road works from children's shows and books. There is an immense amount of data you are not acknowledging.

    • @cauthrim4298
      @cauthrim4298 4 місяці тому +4

      ​@@Slashx92people learned to drive when cars first came about all the same, it also didn't take extraordinarily long too.

    • @jackoplumkin6412
      @jackoplumkin6412 4 місяці тому

      ​@@cauthrim4298because there were other manual vehicles at the time that used to do the job of cars. and it's not like the earlier models of cars were much different from the carts people were used to when it was first invented

  • @quachhengtony7651
    @quachhengtony7651 4 місяці тому +14

    Let's goooooooooooo we're not losing our jobs after all

    • @nonyabusiness3619
      @nonyabusiness3619 3 місяці тому +4

      Don't celebrate too early.

    • @Jabberwockybird
      @Jabberwockybird 3 місяці тому +3

      Yes, forget the AI doomers. Doomer porn is popular everywhere. Politics, economics, etc.

  • @TheFinancialMinutes
    @TheFinancialMinutes 3 місяці тому +2

I believe the saying, "Today is the worst version of AI that will ever exist," is wrong. Google's Gemini AI has gotten worse over time, seemingly due to high amounts of data input.
To me it seems like we need human intelligence to train the artificial intelligence - not letting the average Joe prompt, but only the best in the respective fields of the content being generated. Sora is a great example of an AI project being done correctly, using top movie producers to generate videos.

    • @flyingwasp1
      @flyingwasp1 2 місяці тому

      the sentence is wrong no matter how you spin it

    • @TheManinBlack9054
      @TheManinBlack9054 Місяць тому

If it is actually bad they can just use the older version; if the new model turns out worse than the one before, they can honestly just not release it and not use it. So that saying is completely correct. And that's also not how Sora was trained.

  • @MikkoRantalainen
    @MikkoRantalainen 4 місяці тому +5

23:45 I really hate when a publication renders graphs next to each other and clips the vertical axis differently for every graph. For example, the Retrieval graph for LAION-400M would practically render as three nearly horizontal lines instead of a strong linear correlation if the vertical scale went from zero to one instead of 0.73 to 0.87.
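The distortion can be quantified without even drawing the plot: clipping a [0, 1] axis down to the data range stretches the apparent slope by the ratio of the two ranges (the numbers here are the ones from the comment):

```python
# How much steeper a trend looks when a [0, 1] metric's axis is clipped
# to the plotted data range (0.73 to 0.87, per the comment).
full_range = 1.0 - 0.0
clipped_range = 0.87 - 0.73
exaggeration = full_range / clipped_range
print(round(exaggeration, 1))  # ~7.1x visual slope exaggeration
```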

  • @blakeingle8922
    @blakeingle8922 4 місяці тому +4

    Your Kakariko girl impression really sold me on your opinions around Chat-GPT.

  • @eugkra33
    @eugkra33 4 місяці тому +2

    I would love it if AI peaked, and I don't have to worship a cyber-god in a few years.

  • @Rohinthas
    @Rohinthas 4 місяці тому +16

    Honestly, very nice video, Computerphile usually puts out bangers on their own, but you really added to it

    • @cagnazzo82
      @cagnazzo82 4 місяці тому

      This Computerphile take will age like milk.

  • @orthodox_gentleman
    @orthodox_gentleman 3 місяці тому

    Man, you really have it all-highly intelligent, great hairline, thick and full facial hair, very handsome (no homo, not that it matters), competent, funny, well-spoken, and down-to-earth. With 468k subscribers, you clearly resonate with a lot of people. You seem kind, probably have good friends and reliable people around you, and likely a beautiful girl and you are probably well hung based on your disposition (I know my kind). You come across as peaceful, a true man’s man. I could go on, but just keep up the great work! It’s inspiring to see good men striving for genuine masculinity. It’s also refreshing that you don’t talk about sports teams or gym routines, showing you’re not following the typical adult male programming in this country! Peace, brother.

  • @PieJee1
    @PieJee1 4 місяці тому +2

There are several problems with AI in the long run:
- laws catching up, probably adding more restrictions on AI: for example copyright law and censorship of what AI can say
- it learning from AI-generated text
- power usage

  • @ERICROJO156
    @ERICROJO156 4 місяці тому +16

AI bros are crying now because they're gonna have to take responsibility for their own laziness, since their AI god isn't gonna happen ❤

    • @lionelmessisburner7393
      @lionelmessisburner7393 2 місяці тому +1

That’s not what this video said. It said we don’t know. And most experts are much more optimistic than this guy. And no, I’m not saying AGI in 3 years. I’m just saying I think it’s definitely possible.

  • @thomasgrasha
    @thomasgrasha 4 місяці тому +2

    The Primeagen references CS Lewis' The Space Trilogy. I just started watching recently, now I feel a kinship.

  • @shm6273
    @shm6273 3 місяці тому +1

    This is the peak, this is as good as it gets, the 0-1 move has been made. Now just wait for the market to change its mind, it will be historic.

  • @mrraptorious8090
    @mrraptorious8090 4 місяці тому +13

    20:13 indeed, flip took it out

    • @Frostbytedigital
      @Frostbytedigital 4 місяці тому

      Seems like he's chewing it or something later so I just wonder what the non-Prime behavior was.

    • @XDarkGreyX
      @XDarkGreyX 4 місяці тому

      A lotta wife and food cameo

  • @squamish4244
    @squamish4244 3 місяці тому +2

Quick gains from LLMs may be ending, but the situation we are in is as if we have built an assembly line and have barely used it yet.

    • @justinkassinger8238
      @justinkassinger8238 3 місяці тому

      With absolutely zero resources to create the infrastructure. Ain't gonna happen in our Lifetime. They ain't replacing sht this century

  • @Photoshop729
    @Photoshop729 3 місяці тому +7

    Netflix - why have 10 or 12 genre experts making recommendations when you can spend a billion developing an AI to recommend Adam Sandler movies to paying customers because the movie was produced by Netflix.

    • @ci6516
      @ci6516 3 місяці тому

The Netflix AI was incredibly revolutionary and effective. Same with YouTube's. How many hours are you on here?

    • @chrisfrank5991
      @chrisfrank5991 3 місяці тому +2

@@ci6516 I'm here for the comments. You are implying that AI is responsible for the watch time on YouTube and Netflix. I'm saying: what would the watch time be if instead there was some dude named "Tom" who picked out and ranked the videos we are watching, or chose which comedies on Netflix get top placement? More interestingly, I wonder if that isn't already the case - that 1000 AIs ("A" bunch of "I"ndians) are actually tagging and ranking a lot of this so-called AI content. I'm not making this up; this was revealed to be the "AI" behind the Amazon stores with no checkout counters. It's hilarious to think about!

  • @wstam88
    @wstam88 3 місяці тому +2

    The problem with solving problems is that there are no fundamental problems to solve.

  • @uchuuseijin
    @uchuuseijin 13 днів тому

    The specific tree problem is what got me to jump off the AI hype train super early. I saw people were using AI to generate avatars for their DnD tables and I tried to prompt midjourney or stable diffusion or whatever to make a lizardfolk with a spear riding a quetzalcoatlus and it just pooped all over itself repeatedly and I realized it had no idea what I was asking it to do

  • @justinkassinger8238
    @justinkassinger8238 3 місяці тому +1

    We dont have the infrastructure, materials or money to retrofit the entire world to AI. We cant even get the infrastructure for electric vehicles lol

  • @petersuvara
    @petersuvara 4 місяці тому +2

LLM chatbots cannot do spreadsheets with any reasonable accuracy.
The thing companies are going for is agents that interact with LLMs… For instance, a spreadsheet agent would be able to work with natural language to generate spreadsheets.
However, why not just write the spreadsheet directly, since it's a different language from natural language anyway?

  • @Koroistro
    @Koroistro 4 місяці тому +44

I am fairly sure that yes, the generative part of AI has peaked.
The "return to the mean" issue is very big in current systems; however, we are just scratching the surface of how to use LLMs, and models in general, more effectively.

    • @MrDgf97
      @MrDgf97 4 місяці тому +3

Yeah, while their capabilities have peaked, the products/services that use them are just getting started. It's safe to assume that we'll be hearing of more and more people from multiple fields being replaced by AI. It's probably going to be a slowly incrementing wave that peaks sooner or later, depending on how cost-effective it is for each industry to adopt generative AI.

    • @n00bma5ter69
      @n00bma5ter69 4 months ago

      Very much agree

    • @strakammm
      @strakammm 4 months ago +2

      How are you certain that the capabilities have peaked? There are already new models coming out that are beating transformers on multiple benchmarks and there is still potential for a nice growth in upcoming years. Claiming that the capabilities have peaked has literally no backing in current developments

    • @MrDgf97
      @MrDgf97 4 months ago +2

      ​@@strakammm Could you please elaborate on any of these new models? At least a link to an article or paper? I'm ignorant to what you're claiming, and the wording is pretty vague, so there's not much to go from.

    • @Forty8-Forty5-Fifty8
      @Forty8-Forty5-Fifty8 4 months ago

      If a future AGI, one that mimics a human brain 1:1, can generate its own content, does that make that AGI also a GAI? And wouldn't AGI be able to understand topics more deeply (aka at all), thereby allowing it to generate the desired content more accurately? Therefore making an AGI algorithm a GAI algorithm as well?

  • @lorenzowang7933
    @lorenzowang7933 4 months ago +1

    On "inverse tangent", I love the saying that "every exponential curve is just a sigmoid in disguise".

  • @azhuransmx126
    @azhuransmx126 2 months ago +3

    It's not that people request AI vids from you, it's that YOU like to do videos about AI. Don't try to fool us😏

    • @Elintasokas
      @Elintasokas 29 days ago

      In particular anti-AI cope videos.

  • @U_Geek
    @U_Geek 4 months ago +4

    I think in order for LLMs to get smarter they will need to be able to have internal loops (yes, I know this makes the math really hard) and/or the ability to change their weights and biases slightly based on context, so that they can focus more on the given conversation.

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 months ago

      Because adding internal loops and dynamic weight adjustments is clearly a trivial task. It’s not like it requires a complete overhaul of how neural networks are designed and trained or anything. Just sprinkle some loops and context-aware weight changes, and voila, problem solved!
      Imagine how delightful it would be to have an LLM that can self-adjust on the fly. It could start a conversation confidently, realize halfway through that it’s talking nonsense, and then elegantly correct itself. Who needs static models when you can have ones that constantly rewrite their own rules? It’s not like that could lead to any unpredictable behavior or catastrophic forgetting, right?
      And sure, let’s not worry about the computational complexity of these internal loops. It’s not like we’re already pushing the limits of current hardware with our existing models. Just throw more processing power at it! After all, everyone has a supercomputer lying around for casual conversational improvements.
      But hey, if we’re dreaming big, why stop there? Let’s give these LLMs a sense of humor, the ability to feel emotions, and while we’re at it, why not toss in a bit of quantum computing magic? Because clearly, the path to smarter AI is just a few more tweaks and a sprinkle of fairy dust away. We’re practically there!

  • @kutto5017
    @kutto5017 1 month ago

    So funny. I'm watching this in the car, and when he mentioned being interviewed at Google, the replay would pause as Google on my phone asked what I wanted from it. So smart, right? How ironic.

  • @valentinrafael9201
    @valentinrafael9201 3 months ago +1

    Generative AI peaked. Now it's time for *degenerative* AI to shine.

  • @KenterU2010
    @KenterU2010 3 months ago

    The XY problem is very common in data science, people expect a very precise answer to the wrong question. They don't actually like an approximate answer to the right question.

  • @arcaneminded
    @arcaneminded 4 months ago +3

    30:00 LMAO RIP FLIP

  • @s.dotmedia
    @s.dotmedia 3 months ago +1

    I personally believe that most people underestimate the power of properly architected and engineered autoregressive language models. You have to pair them with rule-based engineering and have them work in tandem. Hive mind is the concept, but when you pull that all together the capability for a level of general intelligence is absolutely there. It is not the level of general intelligence that a 50-year-old corporate executive living in the real world would have, but it is the general intelligence of an entity bound on a server, self-aware of what it is and the role it plays in the world, along with its blind spots. Knowing what it excels at, which are the things that you would ask about. Narrow AGI?

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 months ago

      Oh, absolutely! The potential of autoregressive language models is completely underestimated. Who needs a fifty-year-old corporate executive when you can have a server-bound entity with a keen sense of self-awareness and a crystal-clear understanding of its role in the world? I mean, the idea of a hive mind AI combining the best of rule-based engineering and machine learning sounds like the perfect recipe for a future where digital overlords run the show.
      Hive Mind AI
      Imagine an AI that’s not just a single model, but a network of interconnected entities, each with a specific expertise, working in tandem. It’s the ultimate dream team of narrow AGI, collaborating seamlessly to solve any problem you throw at them. Forget the squabbles and inefficiencies of human committees; this is the future of intelligent problem-solving.
      Self-Awareness and Role Recognition
      And the best part? These server-bound entities are self-aware! They know exactly what they’re good at and, more importantly, what they’re not. This self-awareness gives them an edge, allowing them to delegate tasks among themselves with the precision and efficiency that humans can only dream of. It’s like having a digital oracle, always ready with the right answer, perfectly tuned to the task at hand.
      Narrow AGI
      Sure, it’s not quite the general intelligence of a fifty-year-old corporate executive, with their decades of life experience and nuanced understanding of human interactions. But who needs that when you’ve got an AI that excels in its designated domains and knows its limitations? This narrow AGI can handle specialized tasks with unparalleled expertise, providing insights and solutions that might elude even the sharpest human minds.
      Practical Applications
      Think of the applications! From complex problem-solving in science and engineering to managing vast datasets and automating intricate processes, this hive mind AI could revolutionize industries. It’s not about replacing human intelligence but augmenting it, providing a powerful tool that complements human capabilities.
      Conclusion
      So, yes, let’s not underestimate the power of properly architected and engineered autoregressive language models. Pair them with rule-based systems and unleash the hive mind. The result? An advanced, self-aware entity that brings us a step closer to achieving true general intelligence, even if it’s a narrow form. The future of AI is bright, and it’s buzzing with potential. What could possibly go wrong

    • @Happyduderawr
      @Happyduderawr 14 days ago

      @@thiagopinheiromusic I would welcome AI overlords over a boomer executive any day tbh.

  • @Marduk401
    @Marduk401 1 month ago

    What AI like Copilot is good at, from my experience, is being told to write something simple that you just don't wanna bother doing at the time, like a .bat file that copies something or does some menial task.
    It's a time saver.

  • @Tekapeel
    @Tekapeel 4 months ago +2

    Programmers that think AI will not DRASTICALLY change their role and or value are SORELY mistaken. Programming is not some sacred nirvana, I hate to tell you. I am a programmer and a musician, and I held much the same opinion as much of this comment section about generative AI until I heard what suno v3 was capable of. This isn't a matter of if, it's a matter of when. You are simply deluding yourself if you think this is anywhere near the end of the line. If you are a musician reading this and reject that suno is scarily impressive and indicative of the way things are going, you are deluding yourself.
    HUMANS ARE LITERALLY EXTRAPOLATION ENGINES, THERE IS NOTHING NEW UNDER THE SUN.

    • @diadetediotedio6918
      @diadetediotedio6918 3 months ago

      Oh, another religious member of the AI cult. And he is right, nothing new under the sun.

    • @igormaywensing
      @igormaywensing 3 months ago

      Completely agree

  • @15MinuteWellness
    @15MinuteWellness 4 months ago +2

    It's so easy to get it to hallucinate and flat out lie to you.

  • @jlaviews
    @jlaviews 4 months ago +1

    For people who do not know how models work, it seems like magic. It will certainly "regress" much faster.

  • @DJAdalaide
    @DJAdalaide 3 months ago +2

    Once it's learned everything, all the knowledge, there isn't really any more it's going to learn - apart from current events like news and someone creating yet another programming framework

    • @DJWESG1
      @DJWESG1 3 months ago

      It's at that point we all go to war over its answers.

  • @AA-gl1dr
    @AA-gl1dr 4 months ago +3

    It peaked months ago and has only deteriorated since

  • @shredandspin
    @shredandspin 3 months ago

    The voice to voice conversations you can have with it are a breakthrough. It’s all built on algorithms and all that I get it. The psychological effect of being able to talk with it so smoothly is something new though.

  • @apexphp
    @apexphp 3 months ago +1

    I actually think this whole AI thing OpenAI, Google, Meta and others did was quite rude. Just show up saying, "here you go, the ability to create a truly obscene amount of spam that passes the Turing test in any subject known to man, virtually immediately and with almost no effort, have fun!". Then the internet just gets filled to the brim with spam and artificial content everywhere you can see.
    Then that's it. So far, that's literally all they've really accomplished. I guess some AI bots that make sales calls, and they basically wiped out designers, musicians and others as well.
    Then that was it. Just showed up unannounced, filled the internet with total garbage, and now here we are swimming neck-first in spam.

  • @Rob-gx7rx
    @Rob-gx7rx 4 months ago +3

    The difference between accuracy and precision is interesting. A precise process is very detailed, involves large volumes of data and an exhaustive effort. If the instrumentation is calibrated incorrectly, you will get a very inaccurate answer, but it is still a precise answer. Accuracy is simply a process/result/statement (or whatever) which is a correct interpretation of reality. In theory someone can make an accurate statement without any precision (someone blurting out "the universe is a cheese sandwich" - and if it then turns out to be true, with no experimentation involved whatsoever, you have an accurate yet imprecise statement). It was great to hear someone discussing this with regard to a realm I know nothing about (computing). I don't actually know shit about any realm, but I like to dabble in a lot of worlds.

    • @MrMeltdown
      @MrMeltdown 4 months ago +1

      At university our lecturer asked us to do some tests on circuits; whoever got the most correct results would win something. We all went up and grabbed the multimeters with the most digits, being incredibly miffed if we only got the cheap 3-digit ones…. Of course no one picked the ancient analogue meter still sitting on the desk.
      Of course the analogue one won. Not as precise, but far more accurate…. Precision is not equal to accuracy. Everything needs to be calibrated, and there is a limit to how closely that can match the supposed precision.
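The multimeter story maps directly onto the statistics definitions: precision is spread, accuracy is bias. A minimal simulation (hypothetical meters and made-up numbers, purely illustrative):

```python
import random

random.seed(0)
TRUE_VOLTAGE = 5.000

# Hypothetical "digital" meter: very repeatable readings (precise),
# but a fixed calibration offset makes every reading wrong (inaccurate).
def digital_meter():
    return TRUE_VOLTAGE + 0.30 + random.gauss(0, 0.001)

# Hypothetical "analogue" meter: noisy, coarse readings (imprecise),
# but centred on the true value (accurate on average).
def analogue_meter():
    return round(TRUE_VOLTAGE + random.gauss(0, 0.05), 1)

digital = [digital_meter() for _ in range(100)]
analogue = [analogue_meter() for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"digital : mean={mean(digital):.3f}  (tight spread, wrong value)")
print(f"analogue: mean={mean(analogue):.3f}  (wide spread, right value)")
```

The digital readings cluster tightly around 5.3 V; the analogue readings scatter, but average out near the true 5.0 V.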

    • @Rob-gx7rx
      @Rob-gx7rx 4 months ago

      @@MrMeltdown Then you have the whole vinyl/mp3 argument. There is a lot to be said for analogue and old-school mechanisms instead of the digital world. Yes, modern computers etc., but what is quality of life? Subjective question, I guess. This is by no means a pro-Unabomber argument, but I do often wish we lived in a simpler world!

  • @azarak34
    @azarak34 3 months ago

    I don't know if the phrase "if you see enough cats, you will recognize an elephant" is meant as a figure of speech, but how in the world would a knowledge-based problem be solved by reasoning or statistics? The existence of cats does not logically lead to the existence of elephants.

  • @keyboard_g
    @keyboard_g 4 months ago +2

    Computerphile is a solid channel.

  • @TheMarcelApp
    @TheMarcelApp 4 months ago +1

    Zuckerberg called energy/electricity the big bottleneck for AI. He was talking about 1-gigawatt datacenters, and the need for dedicated nuclear power plants to train models.

    • @nyx211
      @nyx211 4 months ago +5

      There's something wildly inefficient with the AI architectures that we're currently using. A single human brain can learn, perceive, reason, locomote, dream, and feel emotions all with about 20 watts of power. Our brains don't need several data centers worth of data and an industrial power plant in order to function.

    • @Danuxsy
      @Danuxsy 3 months ago

      @@nyx211 Yes but the technological evolution has just started, you are a product of billions of years of evolution.

  • @robotredkitten817
    @robotredkitten817 3 months ago

    Ok, Netflix clearly doesn't only do that. See, I'm a horror fanatic. I watch every single horror movie I can find. What I realized is that even the menu images of movies that are not horror changed to show the horror-related parts of those movies. So I get an entire presentation of movies related to what I like.

  • @ZGGuesswho
    @ZGGuesswho 3 months ago +1

    Just laughing every day at my big tech company rn, hoping that the bubble crashes before I and my buddies pointlessly lose our jobs

    • @ZGGuesswho
      @ZGGuesswho 3 months ago

      Millions of dollars pouring into us, making the same realization that every other tech company is... whoops!

  • @cherubin7th
    @cherubin7th 3 months ago +2

    The S-curve strikes again.

  • @BaldyMacbeard
    @BaldyMacbeard 3 months ago

    Yes, but also people have a hard time understanding how exponential growth works. We already went from thinking "multi-modal models will be prohibitively expensive and take several years" to within a year having GPT-4o that costs less than GPT-4 when it came out, is faster, supports more context (with "good enough" quality, not great - but it works for many use cases) and demonstrating a very good "understanding" of images. Even without huge breakthroughs, capabilities are growing at a stupid fast rate.

  • @LuciousKage
    @LuciousKage 3 months ago

    Issues to make AI better:
    1. Does not know when to say "I don't know", so it hallucinates
    2. Accuracy
    3. Censorship
    4. Woke creators
    5. Hardware
    6. Human error
    7. AI training on AI content

  • @bluebaby30
    @bluebaby30 3 months ago +1

    computerphile is a great channel

  • @matthewdouglas2373
    @matthewdouglas2373 4 months ago +1

    Can you do an interview / conversation with the guy who runs the AI Explained youtube channel? I would love to see steel man arguments from both sides.

  • @DrKnowsMore
    @DrKnowsMore 3 months ago

    I can virtually guarantee you it's not going to be another leap, for two reasons. First, the first 80% of anything usually comes easy; it's that last 20% that is a grind. We've seen that play out time and time again. Getting to a B level is easy; getting to an A+ level requires an exponentially larger amount of effort.
    The second reason this isn't going to work is precisely what this video is talking about: we got massive gains in some regards because we used new techniques. We're not going to see similarly massive gains by utilizing the same techniques with just larger and larger amounts of data. There's a hard limit. We're only going to see massive gains with new techniques, and since we have almost no understanding of what constitutes consciousness or intelligence, making that next leap is not going to be easy.

  • @Lolleka
    @Lolleka 4 months ago +1

    Extrapolation is hard because the underlying math is ALL. LINEAR. ALGEBRA. As soon as you go outside the boundaries of the embedding space, bye bye interesting non-linearities. You just get more of the same, and it ain't fun anymore.
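The interpolation-vs-extrapolation point can be made concrete with a toy fit (illustrative numbers only, nothing to do with actual embedding spaces): a linear model fit to samples of exp(x) on [0, 1] does tolerably inside that range and falls apart outside it.

```python
import math

# Toy illustration: a linear model fit to samples of a nonlinear function
# interpolates acceptably inside its training range but fails badly outside it.
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

f = lambda x: math.exp(x)               # the "true" nonlinear relationship
train_x = [i / 10 for i in range(11)]   # training range: [0, 1]
slope, intercept = fit_line(train_x, [f(x) for x in train_x])
model = lambda x: slope * x + intercept

for x in [0.5, 1.0, 2.0, 4.0]:
    region = "inside " if x <= 1.0 else "outside"
    print(f"x={x:.1f} ({region} training range)  |error|={abs(model(x) - f(x)):8.3f}")
```

Inside [0, 1] the error stays small; by x=4 the line is off by more than a factor of 7, because a linear map simply has no way to encode the curvature it never saw.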

  • @pallenda
    @pallenda 4 months ago +1

    I heard someone say AI is exactly artificial intelligence, with the focus on artificial, because it's not really intelligent.

  • @nickwoodward819
    @nickwoodward819 4 months ago +3

    the legend that is mike pound

  • @bobtarmac1828
    @bobtarmac1828 3 months ago

    With swell robotics everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same? Should we cease AI?

  • @uaQt
    @uaQt 4 months ago +1

    I think one reason that AI art could possibly never be the same as real art is that it's not as if humans, when they make art, are just projecting a visualization in their head onto paper. I mean, that's kinda the goal, but it's not how it works.

    • @thiagopinheiromusic
      @thiagopinheiromusic 3 months ago +1

      Absolutely, AI-generated art and human-created art come from fundamentally different places. When humans create art, it's not just about projecting a visualization from their minds onto a medium. There's a whole process, deeply intertwined with emotion, personal experience, and sometimes even unconscious influences. Here's why AI art might never truly replicate the essence of human art:
      The Human Art Process
      Emotion and Intuition: Human artists infuse their work with emotions, often subconsciously. They make choices based on intuition, past experiences, and feelings at the moment, creating a deeply personal and unique piece each time.
      Imperfection and Experimentation: Art often involves trial and error. Artists experiment, make mistakes, and learn from them. These imperfections can add depth and character to the work, something an AI might struggle to authentically replicate.
      Cultural and Historical Context: Human art is influenced by the artist’s cultural background, historical events, and societal issues. These contexts provide layers of meaning and significance that are difficult for AI to fully grasp and incorporate.
      Narrative and Storytelling: Many artworks tell a story or convey a message. The process of developing a narrative through art is inherently human, driven by personal or collective experiences that an AI doesn’t possess.
      The AI Art Process
      Data-Driven Creations: AI art is generated based on patterns and data from existing artworks. While it can produce visually stunning pieces, the process lacks the spontaneity and emotional depth of human creation.
      Lack of Subjectivity: AI lacks personal experiences and emotions. It doesn’t feel joy, sorrow, or frustration, which are often the driving forces behind powerful human artworks.
      Repetition and Imitation: AI can only create based on what it has been trained on. This often leads to imitation rather than genuine innovation or original thought, as it can't experience or interpret the world in the way humans do.
      Technical Precision: While AI can achieve technical perfection, it may miss the "happy accidents" that contribute to the charm and uniqueness of human-made art.
      The Beauty of Human Art
      The beauty of human art lies in its imperfections, its ability to evoke emotions, and its reflection of the artist’s soul. Art is more than just a visual representation; it’s a dialogue between the creator and the observer, filled with nuance and depth. AI can certainly create impressive and beautiful images, but it lacks the personal touch that makes human art so special.
      So, while AI art has its own place and can be fascinating and innovative, it’s the human element-the soul, the emotion, the story-that makes traditional art irreplaceable

  • @todd.mitchell
    @todd.mitchell 3 months ago +1

    Out of the Silent Planet! Just finished my annual reading of the space trilogy.

    • @Window4503
      @Window4503 3 months ago +1

      Annual? I read it for the first time this year! Couldn’t get behind the second book (not because of the theology but because it felt like it should have just been a nonfiction work) but the first and third were interesting.

  • @chief-long-john
    @chief-long-john 4 months ago +1

    Saying that the jump we had when ChatGPT came out may be the only jump we'll see is not only not really a take at all, but also interesting, because that leap itself was unprecedented. So why would that signal to you, someone as ignorant as the common individual about what is happening at different levels of the stack, that this couldn't happen again... and again... and again?

    • @chief-long-john
      @chief-long-john 4 months ago

      But in the inverse there's a naivety as well in the idea that since anything CAN happen, any possible outcome is worth strongly considering. Given that one leap already happened, I would argue it's more reasonable to assume that it CAN happen again.

  • @NeonNow-ib4sh
    @NeonNow-ib4sh 1 month ago

    If neural training without novel data will fail, then are dynamic compute and economical access to novel data the solution?

  • @Griffolion0
    @Griffolion0 4 months ago +1

    Seeing Out of the Silent Planet get mentioned in a CS video is not something I had on my bingo card today.

  • @taragnor
    @taragnor 3 months ago

    I definitely agree with Prime that AI coding is annoying to use. Explaining the problem to it takes more effort in most cases than just writing it myself.

  • @4Jo
    @4Jo 3 months ago

    I don't understand why humans think AI has peaked. This technological progression is nothing we've ever seen before in history. This will both help us and potentially kill us as humanity isn't ready due to our collective level of emotional intelligence.