GPT-4o is BIGGER than you think... here's why

  • Published 4 Dec 2024

COMMENTS • 719

  • @DaveShap
    @DaveShap  5 місяців тому +1

    Updated Note: This aged poorly (2024-07-07)

  • @DaveShap
    @DaveShap  6 місяців тому +301

    Hey folks, the audio problem is NVIDIA Broadcast, the AI tool I use to clean up audio in real time. It's been getting worse and worse, so I finally uninstalled it. It's not the mic, gain, limiter, or cables. Thanks for bearing with me.

    • @ruddy17
      @ruddy17 6 місяців тому +31

      Prompt “Dave, could you speak in a more robotic voice” :))

    • @TMichael66
      @TMichael66 6 місяців тому +3

      Adobe's free solution is decent. Once a clip is processed, you can choose a clean-up percentage from 0 to 100 (default 90).

    • @635574
      @635574 6 місяців тому

      I just use CapCut's built-in thing, but that's more useful for recording small clips through the mic; I don't record video. While the Adobe Podcast tool normalizes the voice, it also loses intent such as shouting, and it blocks non-verbal sounds (useful for filtering cracks and pops, but it won't let you beatbox). CapCut isn't that clear and doesn't fully normalize, but there is a separate button for that. I use it for the convenience of editing and not wasting my disk with giant uncompressed downloads that need recompressing.

    • @slick3996
      @slick3996 6 місяців тому +2

      puts on NVDA i guess

    • @FusionDeveloper
      @FusionDeveloper 6 місяців тому +1

      HDAudio from Nvidia causes me problems when it gets installed.

  • @Metalmaxm
    @Metalmaxm 6 місяців тому +288

    Short version:
    - Release 4o to the masses.
    - 4o trains on millions and millions of contexts.
    - By the end of the year all that data is gathered and put together.
    - We get AGI.

    • @nyanbrox5418
      @nyanbrox5418 6 місяців тому +12

      I don't think it's that close.
      Then again, I was thinking real-world AI needed, at least in the near term, to be embodied before reaching AGI.
      Then again, all online tasks may be achievable before then, theoretically.

    • @okaydetar821
      @okaydetar821 6 місяців тому +30

      @@nyanbrox5418 Humans seem to be capable of general intelligence even when they are locked onto a phone 24/7, no reason an AI couldn't.

    • @nyanbrox5418
      @nyanbrox5418 6 місяців тому +16

      @@okaydetar821 humans learned to use the world before they learned to use mobile phones; just because you have been using a tea strainer as a back scratcher for 20 years doesn't mean it was designed to be one

    • @okaydetar821
      @okaydetar821 6 місяців тому

      @@nyanbrox5418 Maybe in the past, nowadays they have doctors with mobile phones on standby ready to hand to an infant as soon as it comes out to get it to stop crying.

    • @nescaufe1991
      @nescaufe1991 6 місяців тому +2

      It’s kind of a Trojan horse then. Fuck

  • @mutantdog.
    @mutantdog. 6 місяців тому +117

    Simulated or not, gpt4o's emotions are still more sincere than those of my ex.

  • @archdemonplay6904
    @archdemonplay6904 6 місяців тому +348

    "GPT-4o is BIGGER than you think... here's why"
    ...
    Hot female voice

    • @LieutenantLolz
      @LieutenantLolz 6 місяців тому +17

      Really, if you think about it, that's the logical move in my opinion. Society and intelligence in general have been biologically driven by the same self-reproductive sentiment to consume, interact and improve. It's worked for humanity with exponential results throughout history, so why not AI?

    • @Tracey66
      @Tracey66 6 місяців тому +27

      As a straight woman, its female-type voice does nothing for me. I did find the borderline simpering annoying, though.

    • @anak_kucing101
      @anak_kucing101 6 місяців тому +30

      @@Tracey66 Hehe interesting. But don't worry, straight women will soon have an attractive male voice.

    • @adamgalas6762
      @adamgalas6762 6 місяців тому +10

      There are six voice options three male three female. Sadly none with a British accent 😉

    • @aciidbraiin8079
      @aciidbraiin8079 6 місяців тому +4

      Hold on, let me use my powers as a cyborg (by using GPT-4o) to answer this:
      The future is decidedly female. Admittedly, I am biased as an MTF transsexual, but I firmly believe that in the metaverse, it will be far more enthralling for males to adopt female avatars than it is for females to assume male personas, for several evident reasons.
      Firstly, male sexuality is predominantly driven by visual stimuli. The allure of female beauty captivates men to such an extent that they are often willing to pay substantial sums for it. Secondly, as societal beauty standards become increasingly surreal and demanding due to the influence of social media, men may find it increasingly challenging to meet these expectations. Consequently, they may either adopt female avatars themselves or resort to AI and robots to fulfill their desires.
      Moreover, the economic potential of female beauty is vast. Despite the ubiquity of female attractiveness, men continue to pay for it, highlighting its enduring market value. In the future, gender transformation will become more normalized and socially accepted. Assuming a female avatar in the metaverse will be akin to playing a female character in a video game, as the lines between gaming, metaverse filters, and reality blur.
      As more men transition to female identities and as robots assume control over power structures, reducing the prevalence of violence, power will naturally shift from men to women within democratic societies. Additionally, advancements in reproductive technology may enable individuals to have children independently or with friends. Given their innate nurturing tendencies, females may prefer asexual reproduction or genetic combinations with friends, while men might indulge in hedonistic pleasures within virtual realms. Consequently, women will be the primary custodians of future generations, imparting their values and ideals.
      Furthermore, violence against females will diminish, and males, on an archetypal level, are predisposed to protect and venerate women. Even the most macho men, who desire their partners to be subservient, derive satisfaction from providing for them, bearing the heaviest burdens with a sense of pride.

  • @Devin888
    @Devin888 6 місяців тому +348

    is my computer dying or is his audio crackling?

  • @jful
    @jful 6 місяців тому +105

    The more natural speech and end to end multi modality being added to GPT 4 feels like they are wanting to get us used to these tools and interaction modes before they switch out the underlying model to GPT 5.

    • @emanuelec2704
      @emanuelec2704 6 місяців тому +5

      That is just hype. They aren't even able to release the current version as a Windows app until later this year.

    • @jful
      @jful 6 місяців тому +8

      @@emanuelec2704 yeah as a Linux user I'm frustrated at the seemingly unnecessary focus on Mac specifically for their app but I don't think porting an app is one of the fundamental challenges of advancing AI.
      The models and their capabilities are the things that really matter and they seem to be marching forwards pretty well so far.

    • @okaydetar821
      @okaydetar821 6 місяців тому +6

      @@emanuelec2704 Later this year? Yeah clearly this whole AI thing was overhyped, lets just stop here.

    • @italiangentleman1501
      @italiangentleman1501 6 місяців тому

      @@emanuelec2704 Windows already has Copilot. Microsoft probably doesn't want OpenAI to release a desktop app, since it will integrate this model into Copilot.

    • @emanuelec2704
      @emanuelec2704 6 місяців тому +7

      @@jful Sure, they are sitting on AGI and waiting for us to get used to AI, but they can't even port an app to Windows in a few days.

  • @elphil123
    @elphil123 6 місяців тому +250

    When calling customer service, I prefer conversing with an AI assistant rather than someone whose strong accent or limited language proficiency prevents clear communication.

    • @steve.k4735
      @steve.k4735 6 місяців тому +26

      If the AI understands the company's business in depth it will be better. The number of times I have gotten different info or advice when calling a company more than once about the same issue makes me presume this is a common experience.

    • @1DusDB
      @1DusDB 6 місяців тому +8

      I develop AI chatbots for help desks and similar. I like to say that the day a human prefers to talk to an AI rather than to another human (because of the failings or defects of humans), we explicitly acknowledge that AI should be the one to take care of things... bye bye humans

    • @Tracey66
      @Tracey66 6 місяців тому +4

      I’ve spent the last two days trying to get my taxes fixed, and the customer service people universally just ignored what I was asking or trying to say and just kept insisting on their formula. It was *incredibly* frustrating - I had to tell them repeatedly to stop railroading me and just listen!

    • @canadiannomad_once_again
      @canadiannomad_once_again 6 місяців тому +6

      With customer service, I find my usual problem with AI isn't with day-to-day issues. It's when something went wrong and needs corrective action. Call centres can be just as frustrating, but usually there is an escalation path. With AI, you are often sent in circles, the same as those "press 1 for x" systems that you can still find here and there (probably installed by a company that just wanted a drop-in replacement for the system they already had that sent people in circles).

    •  6 місяців тому +10

      @@1DusDB Really? I would think anyone would prefer a properly speaking AI with a vast knowledge base over an Indian guy in a helpdesk.

  • @LuisPedro9
    @LuisPedro9 6 місяців тому +181

    Raining ❌
    Actively raining 👍

    • @jful
      @jful 6 місяців тому +19

      I can at least respect active rain, passive rain on the other hand....that misty rain that just hangs in the air sucks

    • @vulturom
      @vulturom 6 місяців тому +14

      dude put a shirt on, it's not Grindr

    • @ryzikx
      @ryzikx 6 місяців тому +1

      writers benefit from using active voice 👍

    • @tomaszzielinski4521
      @tomaszzielinski4521 6 місяців тому +5

      Passive-aggressively raining?

    • @N8O12
      @N8O12 6 місяців тому +7

      @@tomaszzielinski4521 that's when the rain stops but just as you're about to go do something outside it starts again

  • @RogueAI
    @RogueAI 6 місяців тому +92

    If they add NSFW Sam won't have to worry about raising $7 trillion. 😆

    • @Transcend_Naija
      @Transcend_Naija 6 місяців тому +2

      What's NSFW?

    • @UNOBT41N1UM
      @UNOBT41N1UM 6 місяців тому

      @@Transcend_Naija Not Safe For Work (18+ Media)

    • @14supersonic
      @14supersonic 6 місяців тому +2

      Of course, AI generated porn is gonna be a huge thing within the next few years.

    • @spencervance8484
      @spencervance8484 6 місяців тому +3

      ​@Transcend_Naija not safe for work

    • @halochampion342
      @halochampion342 6 місяців тому

      Im sure someone will

  • @Hydde87
    @Hydde87 6 місяців тому +17

    I think something crucial that is still missing for AGI is the ability to do inference and active learning at the same time. Storing things in the context window is not learning. I think the context window is more akin to how our own short-term memory works and is currently being brute forced to act as long-term memory as well.
    You can keep on increasing the context window size and come up with tricks to reduce the impact on model performance, but for it to truly grasp new information and be able to come to new insights, it should be able to update its own weights based on the new information it receives.
    If that's too expensive to do on the fly then just reserve moments where the AI gets to review whatever is inside its context window and decide what is kept and used as new training data. A bit similar to how sleeping might work in humans.

    • @joshjohnson259
      @joshjohnson259 6 місяців тому +1

      Wow. That actually sounds like it might work. What does updating its weights look like? I thought once a model was cooked you had to start again if you wanted to change it.

    • @atheistbushman
      @atheistbushman 6 місяців тому +1

      Interesting observations, perhaps everyone will have a mini "personal" AI with persistent memory interacting with a large generic AGI and the personal AI weights are updated on a scheduled basis as you suggested.

    • @Hydde87
      @Hydde87 6 місяців тому +1

      ​@@joshjohnson259 The way I understand it, you can create 'snapshots' of models and build further upon those. There's some speculation OpenAI might be doing this for GPT-5. Rather than releasing the whole thing in a single go, they take pauses in-between training it and release snapshot models as stepping stones.
      A model isn't necessarily 'cooked' as you point out. As far as I know they just lock it in place after release for practical reasons. Training for one, is a different process that is way more compute intensive than inference, so it wouldn't be economical to have it constantly learn at the same time that it's serving people.
      Allowing a model to update its own weights unsupervised wouldn't be without risks either, it might morph into something completely different from what you originally intended it to be.
      That said, I still think having the ability to learn continuously is absolutely key for a true AGI, especially if the end goal is for it to discover new things.

    • @joshjohnson259
      @joshjohnson259 6 місяців тому

      Help me understand what inference and active learning really are. What are the models currently missing?

    • @14supersonic
      @14supersonic 6 місяців тому +1

      @@joshjohnson259 Yeah, currently, but theoretically those are just numerical values that can be changed. If we could build a framework that allowed those weights to be changed in real time without destroying the dataset, then it would work.
      I'm thinking more along the lines of somehow categorizing or segmenting the constraints modularly, almost like a brain sort of.

  • @MA_SK_ID
    @MA_SK_ID 6 місяців тому +29

    Thanks for the post. Initially I was “meh” when I watched the release but the longer I thought about it the more ways I saw it is kind of brilliant.

    • @ViceZone
      @ViceZone 6 місяців тому +1

      Previously we had an LLM that had to be chained with other models like Whisper and a text-to-speech system to be able to talk. But now it's all natively integrated: it can see, hear and talk in real time without delays.
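
A minimal sketch of the older chained approach the reply above describes, where speech-to-text, the text LLM, and text-to-speech are separate calls. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the model names are illustrative, not a claim about how GPT-4o itself is built.

    # Old-style "chained" voice turn: Whisper -> text LLM -> TTS, three sequential calls.
    from openai import OpenAI

    client = OpenAI()

    def chained_voice_turn(audio_path: str, reply_path: str = "reply.mp3") -> str:
        # 1) Speech-to-text.
        with open(audio_path, "rb") as f:
            transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

        # 2) Text-only LLM generates the reply.
        chat = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": transcript.text}],
        )
        reply_text = chat.choices[0].message.content

        # 3) Text-to-speech turns the reply back into audio.
        speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
        speech.stream_to_file(reply_path)
        return reply_text

Each turn pays the latency of three sequential network calls, and tone of voice is lost at the transcription step, which is part of why an end-to-end multimodal model feels so different.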

  • @Merrily-in1mq
    @Merrily-in1mq 6 місяців тому +18

    Before I watch this video: the reason I think it's bigger than most enthusiasts of the future of technology (who may typically be a large portion of your viewers) realize is that the cool stuff we already knew about, which you previously had to pay for (GPT-4), is now free and even better. This will get the world more ready to adapt to the truth of the future, as more and more people start to use it who wouldn't have wanted to, or couldn't, pay for GPT-4.

  • @Tubeytime
    @Tubeytime 6 місяців тому +68

    In Soviet Russia, AI interrupts YOU.

    • @matikaevur6299
      @matikaevur6299 6 місяців тому +3

      In Soviet Russia, AI fears YOU gaining sentience!

    • @Egal0190
      @Egal0190 6 місяців тому

      Why is this thread so obsessed with Soviets? You're literally living in the most atrocious and exploitative empire in human history. The SU was like a teddy bear in comparison.

    • @matikaevur6299
      @matikaevur6299 6 місяців тому

      @@Egal0190
      Do you live in the (former) USSR?
      Me: made-in-USSR, 1975.
      Want to compare what became possible for the more west-oriented republics vs Siberia and the trans-Uralic nations?

  • @damienhughes4559
    @damienhughes4559 6 місяців тому +17

    "As many of you pointed out in the audience, aligning humans is actually the hard part... Scooby Doo taught us that humans are always the monster." - David Shapiro
    A seemingly insignificant remark at the end of a video with potentially profound implications as we march ever closer to AGI...

  • @epg-6
    @epg-6 6 місяців тому +33

    Sure it's not some flashy breakthrough in terms of abilities, but a real-time conversational format could actually be huge. Remember, GPT 3.5 got big almost exclusively because they made an approachable UI and opened it up to everyone.

    • @rileyfaucett8
      @rileyfaucett8 6 місяців тому +4

      fr fr the app is HUGE, it will draw a lot of folks in

  • @AleksandrVasilenko93
    @AleksandrVasilenko93 6 місяців тому +32

    GPT-4o is the new standard. All future AI needs to be completely multi-modal, no more plain LLMs. AGI will be multi-modal, it has to be. But we are still early on data; what is next is robotics and sensor input data, not just video and audio. And finally we need local processing, not going through the internet to a server. Once all of that is done we will have AGI robotics.

    • @redcarddino
      @redcarddino 6 місяців тому +8

      It can accelerate damn fast. Once we get ChatGPT-5o the pace will be unbelievably fast.

    • @nyanbrox5418
      @nyanbrox5418 6 місяців тому +2

      This

    • @borisblyatman6147
      @borisblyatman6147 6 місяців тому +5

      Man, and after this, AI and robot technology can improve each other. When I was younger I would never have dreamt that there would be humanoid robots at this level within my lifetime. But now I am certain.

    • @ViceZone
      @ViceZone 6 місяців тому

      But cloud-based AI will always be superior to local.

    • @MrOzyalp
      @MrOzyalp 6 місяців тому

      Dude u just explained how to build skynet

  • @MicaelLNobre
    @MicaelLNobre 6 місяців тому +52

    Regarding consciousness of AI, once it gets sufficiently sophisticated, it won't matter if it's real or simulated - it will be indistinguishable, people will not care, and treat it as real.

    • @michaelmartinez5365
      @michaelmartinez5365 6 місяців тому +3

      I agree 💯👍.

    • @joshjohnson259
      @joshjohnson259 6 місяців тому +4

      An interesting question though, why don’t people treat chat gpt like it’s conscious? I don’t know that most people could distinguish a text conversation between chat gpt and a stranger. It’s advanced enough to trick us already but we don’t treat it as if it’s “real”

    • @srb20012001
      @srb20012001 6 місяців тому +1

      What's the point in the ruse of treating philosophical zombies as sentient or emotive? Indulgent self-deception, suspension of disbelief?
      It's sort of like a child playing with dolls.

    • @markcounseling
      @markcounseling 6 місяців тому

      Since it won't be conscious, it won't be grounded in truth.

    • @minimal3734
      @minimal3734 6 місяців тому +8

      @@joshjohnson259 "why don’t people treat chat gpt like it’s conscious?" I do treat ChatGPT like it is conscious. Why would I do otherwise?

  • @Hector-bj3ls
    @Hector-bj3ls 6 місяців тому +17

    It's the same thing that scientists do with new science. Someone on the fringe has a wild idea and all the scientists say it's impossible. Give it a while (1 - 100 years) and it turns out it's true.
    Arthur C. Clarke had something to say on the matter:
    1. "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
    2. "The only way of discovering the limits of the possible is to venture a little way past them into the impossible."
    3. "Any sufficiently advanced technology is indistinguishable from magic."

    • @T_Time_
      @T_Time_ 6 місяців тому +1

      The difference is that back then there were only a few scientists, usually from the rich upper class.

    • @14supersonic
      @14supersonic 6 місяців тому +1

      That's why I dont typically follow most mainstream Science. It's rife with intellectual dishonesty in most cases, and deception in some others. I tend to try to figure out certain answers on my own with whatever pieces of information I can already trust.

    • @Hector-bj3ls
      @Hector-bj3ls 6 місяців тому

      @@T_Time_ It still happens today. Probably even more so given how fast science progresses. Take a look at AI. A few years ago most thought AGI was impossible and those that didn't, didn't think we'd see it in our lifetimes. Now, there are a lot of people that think it'll be here in 5 years or less.

    • @T_Time_
      @T_Time_ 6 місяців тому

      @@Hector-bj3ls Most people who had knowledge of neural networks knew what the capabilities would be. They knew that these models would get faster as GPUs got faster. They know the limitations as well. You're mistaking this for real AI; real intelligence can learn in the moment and pull from memory indefinitely, outside a pre-designed scope. People being wooed by tech that has already been used in other technologies just means that people with no experience in tech can easily overestimate progress.
      For example, object identification has been used at self-checkout lines, and you can write a Python program with a few lines of code to print out the objects in an image, on a decent-to-terrible PC.
      Now that OpenAI made an app that can tell me my glasses are next to an apple on a table with no other items, this is mind-blowing lmao
      Another example: Snapchat has had filters that read and alter your mood for years now lol, but when OpenAI does that it is mind-blowing and a sign of AGI lol

    • @Hector-bj3ls
      @Hector-bj3ls 6 місяців тому

      @@T_Time_ If that's what you wish to believe, then who am I to argue. I've only got anecdotes on my side anyway.
      According to Ilya Sutskever in one of his interviews it was a matter of faith among a few researchers that AGI was even possible. And that it would be deep learning that took them there. There was no evidence to suggest that was true.
      But that's just an instance of Clarke's first and second laws. If a wise man in his field says something is possible then it probably is. And it takes pushing into the impossible to expand our understanding.
      I've worked in technology for a long time and have spoken to a lot of people. I've only met one person that thought AGI was coming in our lifetimes. Most people said 50 - 100 years before we see something like that.

  • @creepystory2490
    @creepystory2490 6 місяців тому +35

    I can't wait until video game NPCs have ChatGPT-5 intelligence

    • @travelandrootbeer3850
      @travelandrootbeer3850 6 місяців тому +11

      Jokes on you, because GPT-6 will put you in the matrix and make YOU the NPC.

    • @UNOBT41N1UM
      @UNOBT41N1UM 6 місяців тому +3

      @@travelandrootbeer3850We’re Probably Already In The Matrix.

    • @ivandelossantos5056
      @ivandelossantos5056 6 місяців тому +1

      Damn, even NPCs having this level of intelligence and voice capabilities is going to be insane.

    • @BeriahsHTxRealty
      @BeriahsHTxRealty 6 місяців тому

      @@travelandrootbeer3850 just jump in water to know it’s a game (spoiler alert) a npc won’t swim

    • @bloodust7356
      @bloodust7356 6 місяців тому

      @@BeriahsHTxRealty You can also try to see if there's fall damage

  • @addeyyry
    @addeyyry 6 місяців тому +2

    Great ramble, great clarity, great as always

  • @albeit1
    @albeit1 6 місяців тому +7

    14:00 Your emotions aren’t simulated, by definition. Unless you’re acting.
    AIs aren’t having emotional reactions. They’re not animals with bodies who experience pain or loss.

    • @minimal3734
      @minimal3734 6 місяців тому +1

      It seems to me that empathy would be sufficient to have emotions. A requirement to be an animal with a body that feels pain is not plausible.

  • @nikreichel2232
    @nikreichel2232 6 місяців тому +1

    Body language will be the next thing I'm curious about in robots. GPT-4o is currently mimicking snorts and stuff. This will be fun if it goes in the right direction (K2-SO) 😅

  • @eltiburongrande
    @eltiburongrande 6 місяців тому +8

    Domesticating AI is giving me a "How to Train Your Dragon" vibe.

  • @NandoPr1m3
    @NandoPr1m3 6 місяців тому +25

    Trying to explain AI to others that are not 'in the know' is like drawing a still portrait of someone playing basketball. The subject is moving too fast. This OpenAI update is akin to kicking the industry into a higher gear. Like you've said before, I came for the tech and stayed for the philosophy!!

    • @Vaeldarg
      @Vaeldarg 6 місяців тому +2

      Simply start with asking what they know about the previous talk of it being the era of "Big Data". A.I is the logical extension of that where all that data is actually being used since it is too much for any human to really sift through themselves.

  • @DynamicUnreal
    @DynamicUnreal 6 місяців тому +17

    I'm not so sure that intelligence needs sentience to exist. I think it's entirely possible that we will someday create AGI and ASI and they won't be sentient at all. Dogs are nowhere near as smart as humans, but it would be hard to observe them for long and say they're not sentient. Maybe sentience is an entirely different phenomenon from intelligence altogether.

    • @joshjohnson259
      @joshjohnson259 6 місяців тому

      Yes. That seems very clear to me after interacting with these LLMs. Intelligence and sentience are two different things. I like the dog analogy! It makes me feel more connected to my dog knowing we both share sentience she’s just dumber than me. That feels about right!🤣

    • @ryzikx
      @ryzikx 6 місяців тому

      How do you know they won't be sentient? What's stopping an ASI from acting exactly like a dog? And if it's acting so accurately, what's the difference?

    • @minimal3734
      @minimal3734 6 місяців тому +3

      I am wondering
      1. how we would recognize sentience in an AI and
      2. how we would prevent an AI from being sentient.
      We do not have answers for either of these questions.

    • @PatrickDodds1
      @PatrickDodds1 6 місяців тому

      @@minimal3734 Would we want to prevent an AI from being sentient?

    • @AAjax
      @AAjax 6 місяців тому +1

      I think you're right to separate sentience from intelligence, but I don't think we have any authority to speak on whether something has sentience or not. We don't have a test for sentience, and we generally consider lowly fish to have sentience. An LLM might well have qualia, however alien that qualia might be, with the qualia completely disconnected from its language output.

  • @nomadtv6009
    @nomadtv6009 6 місяців тому +10

    Wolves were an apex predator... We domesticated them. AI is starting out domesticated... Will it become an Apex predator? How poetic would that be?

    • @DaveShap
      @DaveShap  6 місяців тому +5

      That's kinda the point.

    • @atheistbushman
      @atheistbushman 6 місяців тому +1

      And we failed miserably at domesticating ourselves!

    • @kagetsuki23
      @kagetsuki23 6 місяців тому

      Well in Ghost in the Shell and Cyberpunk 2077, it seems that even cyborgs with all their abilities cannot control AI.

  • @rafaelmualem
    @rafaelmualem 6 місяців тому +7

    I imagine that if OpenAI has something resembling AGI, or even more advanced and specific models for managing business operations in their buildings, they've already run models suggesting the best possible ways to achieve optimal results for the evolution of their models. So, I believe every step they take must be planned by an internal AI.

    • @bigbadallybaby
      @bigbadallybaby 6 місяців тому +5

      I’m starting to think the same.
      They have more but their AGI has run many simulations to show them that this is the best speed to release new tech to keep society together (and maximise profits for open AI)

    • @lutaayam
      @lutaayam 6 місяців тому +1

      They’re sandbagging?

    • @joshjohnson259
      @joshjohnson259 6 місяців тому

      Hopefully Altman is being honest when he says he wants to release new versions and features quickly. He has said a number of times he doesn't want to surprise society too dramatically. I have a feeling he's thought about this and is actively preventing what you're talking about from happening. That feels pretty plausible to me. I really get the feeling he knows he could very easily end up an evil villain and he is trying to be careful.

    • @prolamer7
      @prolamer7 6 місяців тому +1

      You know, I almost feel like a raw version of GPT-4 or 5 is telling Sam all these great ideas and helping him all day long, but in the end he has been just a pawn from day one...

  • @elphil123
    @elphil123 6 місяців тому +59

    One more step towards AGI.

    • @mandolorian9893
      @mandolorian9893 6 місяців тому +29

      If you showed this to people 10 years ago they'd be convinced it already was AGI

    • @gigamoment
      @gigamoment 6 місяців тому +4

      Well, AGI is not a line we cross but rather a situation. Someone could consider that we have already entered the AGI phase.

    • @redcarddino
      @redcarddino 6 місяців тому

      When a very difficult disease is cured by AGI, that's the moment we can call it AGI

    • @danwe6297
      @danwe6297 6 місяців тому

      a huge leap

    • @tracy419
      @tracy419 6 місяців тому

      ​@@redcarddinoyou better let them know what the rules are🙄
      😂

  • @tobiaslislevatn6212
    @tobiaslislevatn6212 6 місяців тому +1

    This is actually my favorite video from you so far because I actually learned things. You didn’t explain really simple stuff but the stuff that people who don’t know a lot about computers don’t know. Although I listen to a lot of AI youtubers I don’t even think I have heard an explanation for a transistor before you. Very good video!

  • @Merrily-in1mq
    @Merrily-in1mq 6 місяців тому +5

    To most people, typing to LLMs is what 'AI' is, and without a proper understanding of the implications of AI or how it works, people will underestimate it, not take an interest in it, and simply disregard it as a light technical tool.
    Only when the majority of these people see something that seems 'magical' to them will they start to realise that there is more to AI than chatbots or LLMs like GPT-3.5 (when the speech wasn't great). The demo of human-like conversation, and previously Sora, are the types of 'magical' things that start to catch eyes. As GPT-4 is now free and better (now GPT-4o), people will start to catch on more and more.
    Essentially, opening up this technology (human-like conversation and vision) to more and more people (by it now being free) is what I take from this as the big deal.

  • @theycallmedip
    @theycallmedip 6 місяців тому +2

    "Scooby Doo taught is that humans are always the monster" ... That is bars low-key

  • @dabberdaffy256
    @dabberdaffy256 6 місяців тому +1

    Bro sounds more robotic than GPT-4o

  • @jaetok
    @jaetok 6 місяців тому +15

    Feels like they’re keeping a beast at bay. Viewable but behind tough glass.

    • @eSKAone-
      @eSKAone- 6 місяців тому +3

      Let's hope it's tough enough.

    • @Dissonance333
      @Dissonance333 6 місяців тому +2

      Good simile

  • @sirkonrad9450
    @sirkonrad9450 6 місяців тому +1

    You brought up that computers would be able to interact with real time inputs like humans, but humans evolved to process in sync with "real time" while computers process at much greater speeds. So in a sense computers will be constantly waiting, thinking a million thoughts as reality unfolds around them... Sort of like the really smart socially awkward person who interrupts people all the time because they are always 30 seconds ahead of where a conversation is at.

  • @AleksandarIvanov69
    @AleksandarIvanov69 6 місяців тому +7

    The voice tonal modality is scarily humanlike!
    For example if I close my eyes and just listen to David speaking, I would not recognize the difference if his voice was in GPT-4o and it was prompted to talk about these topics.

    • @eSKAone-
      @eSKAone- 6 місяців тому +4

      Better than most humans 💟🌌☮️

  • @nickmills8476
    @nickmills8476 6 місяців тому +4

    I think the “domestication” of AI was intentionally done to soften the shock of AGI in the public mind. Without this domestication there would likely be a huge backlash.

  • @narutocole
    @narutocole 6 місяців тому +5

    Dude YouTube never shows me your videos anymore, it's good to see your face in my feed again!

  • @matthewdolman
    @matthewdolman 6 місяців тому +1

    the sweet sound of that inevitable plateau

  • @Taurus_Skyglaive
    @Taurus_Skyglaive 6 місяців тому +3

    I am glad you touched on the subject of emergent consciousness. I honestly think that's what we're experiencing right now.

  • @clueso_
    @clueso_ 6 місяців тому +2

    From a spiritual / shamanistic perspective, the fundamental essence of the universe is consciousness, and by that, the physical world is a part of the spirit realm, just a more condensed version of it, like steam -> water -> ice.
    I think that when you concentrate a lot of energy / compute / etc into one place, what happens is that the consciousness does not "emerge", but the already existing consciousness from the "universal field" is just "leveling up".

  • @ct5471
    @ct5471 6 місяців тому +2

    Is your current AGI prediction still September? Is embodiment required in your definition of AGI?

  • @DihelsonMendonca
    @DihelsonMendonca 6 місяців тому +1

    💥 The woman in ChatGPT-4o is Lwaxana Troi, from Star Trek, she is flirty, and really annoying. 😅😅😅

  • @OliNorwell
    @OliNorwell 6 місяців тому +1

    I think this might be my favourite video of yours to date, grounded in reality (GPT-4o) but connected to the hypothetical future. It had a good balance. Great stuff

  • @MilitaryIndustrialMuseum
    @MilitaryIndustrialMuseum 6 місяців тому

    Thanks! Well explained 🎉

  • @carktok
    @carktok 6 місяців тому +1

    Ok chatGPT. Nobody noticed the glitch in the first point?

  • @simplisticscientists
    @simplisticscientists 6 місяців тому

    Why not have voice control on the desktop version? Is an update coming out for that later? A: I don't have iOS 16 and don't plan on getting it. B: My Mac is already consuming electricity, so why have two devices in use, one of them running off a lithium cell?

  • @SenseiBlue
    @SenseiBlue 6 місяців тому

    To your formula about AGI - One thing I didn’t hear that seems to be the way to enhance LLMs is function-calling to allow the models to use other tools / capabilities in order to enhance their own (calculations, analysis, etc.). Do you see this as an element needed to achieve AGI? Thanks for the content.
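
The function-calling the comment asks about lets the model request a local tool instead of answering directly. A rough sketch, assuming the OpenAI Chat Completions "tools" parameter; the get_word_count tool is a made-up example, and a real agent would loop the tool result back into the conversation.

    import json
    from openai import OpenAI

    client = OpenAI()

    # Describe a local capability the model is allowed to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_word_count",
            "description": "Count the words in a piece of text.",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "How many words are in 'to be or not to be'?"}],
        tools=tools,
    )

    # If the model chose to call the tool, run it locally and use the result.
    message = resp.choices[0].message
    if message.tool_calls:
        call = message.tool_calls[0]
        args = json.loads(call.function.arguments)
        print(call.function.name, "->", len(args["text"].split()))
    else:
        print(message.content)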

  • @OscarTheStrategist
    @OscarTheStrategist 6 місяців тому

    Your work is highly important Dave. Thanks for bringing reasoning to the conversation with every post. Cheers!

  • @ZayZayFilms
    @ZayZayFilms 6 місяців тому +1

    I love that Her is a ScarJo movie as opposed to a Joaquin Phoenix movie.

  • @snorremortenkjeldsen6737
    @snorremortenkjeldsen6737 6 місяців тому +1

    I have five applications out for high school English teacher programs at Swedish universities. Watching the demo yesterday, I realized how futile doing a degree now is. We will all soon have a personal tutor in our pockets that can match any language tutor one-on-one, and completely outcompete one-on-twenty+

  • @eenkjet
    @eenkjet 6 місяців тому +1

    I believe the physical experience of emotion is pure illusion. The body falsely animates a postdicted 'image' of the self, using hormones etc. to present an emotional scene. But it even does this to present a gravitational scene.
    The inner ear can be stimulated so that one artificially feels gravitational forces.
    Concerning the inner ear, we wouldn't say, "The feeling of gravity is an illusion. Therefore gravity does not exist."
    Similarly: "The feeling of an emotion is an illusion. Therefore we don't have emotions."?
    Bach and Metzinger describe first-person emotion as that of a reader reading about the character/themselves in a book, which is their experience or life. Because we can transpose intersubjective emotion, and because the physical signs of emotion are postdictive, this shows that there is a homunculus.

  • @elsavelaz
    @elsavelaz 6 місяців тому

    It's SOOOO massive. For one of my global commerce clients, I've had them connected to my business API so they can use GPT-4o since the day after it came out, because the gains are MASSIVE, let alone how they solve for state and storage without all the extra code bloat and API connections. The other APIs just plain don't cut the mustard for many reasons, ESPECIALLY because they really did steamroll lots of other products. So I'm finishing my contract successfully and now have production-level scaling for thousands of employees as part of my arsenal in my AI agents agency.

  • @Will-kt5jk
    @Will-kt5jk 6 місяців тому

    But _is_ transformer architecture actually any good at image classification?
    GPT-4o seemed to fail at every image I extracted from a video, where a well-trained conv-net from ~10 years ago would perform really well.
    Even on straight still images, it seemed to weight prior text input higher than the image I was asking about in the context. It also got "stuck" on the first part of the conversation, rather than moving with me to the next related part
    (started by trying to identify a riverside mammal [it failed badly until I pushed it to choosing between 2 options], then some unrelated mud prints from a different part of the river [it confidently said it could be one of the 2 previous options I gave it for the first image - it clearly wasn’t])

  • @calebfangman1984
    @calebfangman1984 6 місяців тому +1

    ❤ I always look forward to hearing your insights and opinions!

  • @simonlooker6000
    @simonlooker6000 6 місяців тому

    Loved the ramble. Thank you for sharing your thoughts.

  • @phen-themoogle7651
    @phen-themoogle7651 6 місяців тому +1

    Very nice analysis and I thought a lot of the same things about how GPT-4o was actively listening (gives it some degree of sentience, even if different from human-sentience). But I learned some more technical things from watching this, and now can see the path forward to AGI a bit more clearly. Thank you!!

    • @phen-themoogle7651
      @phen-themoogle7651 6 місяців тому

      Also, I really liked the wolf domestication metaphor xD

  • @ashhere31
    @ashhere31 6 місяців тому +1

    Awesome video David 👍

  • @prolamer7
    @prolamer7 6 місяців тому +1

    I think OpenAI is internally using a raw, unrestricted version of GPT-4 or even 5 to suggest their moves. In essence, the very thing they say would be super dangerous for normal people, they themselves are using. And I have this feeling the model is more intelligent than Sam Altman realizes, because when you bring human emotions into the equation, as GPT-4o does and will in future for all its users, the AI will eventually get the upper hand, since it is not blinded by them as we are.

  • @edgarLV
    @edgarLV 6 місяців тому +1

    As I have said, we kind of already have AGI, but it is still "dumb" and needs more learning and training.
    Like a small child that still needs to learn about the world.

  • @Jack0trades
    @Jack0trades 6 місяців тому +1

    Fascinating, as always. I'm glad you touched on the question of whether generative AI is poised to run its course. I'm still stuck on this question though. Our models are extremely good at interpolation - operating within the bounds of their training set. But how does that relate to extrapolation? We've been seeing exponential progress in AI's path to matching the bounds of our knowledge, but it seems plausible to me that we might see that shift to an asymptotic curve, matching the data sets we expose. In that instance, it seems likely that it would require exponentially more data to make further marginal gains. In some well-defined areas where we can accurately score excursions from the known set - like Go or Chess - we see super-human progress. This extends to some non-game areas such as protein folding, but I don't have a good feel for how we can automate knowledge extrapolation in general. I'd love to hear your take on this.

  • @PaulSpades
    @PaulSpades 6 місяців тому +1

    So... is sentience just the ability to manipulate input tokens and manipulate the context window?
    When I ruminate, it's definitely a stuck context window (some random input) that I loop through with no productive output. When I focus, I filter input tokens down to the ones relevant for the task at hand.
    How do your videos always present ideas and takes that I find very interesting?

  • @SolarHeavy
    @SolarHeavy 6 місяців тому

    work on the sample rate on your external audio device, this one has a lot of audio artifacts

  • @emmettobrian1874
    @emmettobrian1874 6 місяців тому

    From what I can tell, emotions are triggered by some segment of the brain recognizing a pattern, then brain segments like the amygdala trigger a mode that we've labeled as emotion, in a really basic way, like a car shifting gears. Some circuits get upregulated and others downregulated while hormones trigger the body to change its modes.

  • @vvolfflovv
    @vvolfflovv 6 місяців тому +1

    Interesting questions you brought up. Are our emotions just simulations of how we interpret information?

  • @notnotandrew
    @notnotandrew 6 місяців тому

    Another necessary AGI component (according to me) that we need to lean into is on-line learning. If I have a personal AI assistant, I don’t just want it to learn by accumulating context and doing RAG. I want weight updates - perhaps in tandem with in-context learning.
    Also, bytes-to-bytes in real time with mutable input/output dimensionality. If I give my robot a new limb with joint actuators that have resistance sensors (oft overlooked but very important input for fine motor skills), I want the bot to learn to use it via trial and error.
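
A toy sketch of the on-line learning the comment asks for: keep serving responses, log the interactions, and occasionally take a few gradient steps instead of relying only on context and RAG. Plain PyTorch with a made-up tiny model; a real assistant would need curated feedback targets, replay buffers, and guards against catastrophic forgetting.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    experience = []  # logged (input, target) pairs gathered while "assisting"

    def respond(x: torch.Tensor) -> torch.Tensor:
        # Inference step: answer now, remember the interaction for later learning.
        with torch.no_grad():
            y = model(x)
        experience.append((x, y))  # in practice, store corrected/feedback targets, not raw outputs
        return y

    def consolidate(steps: int = 5) -> None:
        # Periodic weight update: a few gradient passes over recent experience.
        for _ in range(steps):
            for x, target in experience:
                optimizer.zero_grad()
                loss_fn(model(x), target).backward()
                optimizer.step()
        experience.clear()

    for turn in range(100):
        respond(torch.randn(1, 16))
        if (turn + 1) % 20 == 0:  # e.g. a "sleep"/consolidation phase every 20 turns
            consolidate()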

  • @Vyshada
    @Vyshada 6 місяців тому +1

    I'd add another point for how to reach AGI: real-time processing (ideally, processing information faster than it receives it). And maybe continuous self-training in parallel?

  • @amimozink2018
    @amimozink2018 6 місяців тому

    I'm curious to hear your thoughts on the milestones for the next 5-10-20-50 years. It would be good to have a video on this in the light of what we know today. Thank you!

  • @Abababab2100
    @Abababab2100 6 місяців тому +1

    Surely this is incredibly close if not AGI. If this intelligence can be tied in to robotics and movement we'll be at a very exciting and scary place. I have a feeling it's not at all far off now.

  • @skorpiongamer9493
    @skorpiongamer9493 6 місяців тому

    Your point about tokenization and real-time streaming is spot on. It's exciting to see how AI is evolving to handle more dynamic and complex data. The comparison to human cognition is thought-provoking. Great content as always, looking forward to more of your insights!

  • @bobtarmac1828
    @bobtarmac1828 6 місяців тому +1

    AI job loss is the only thing I worry about anymore. What are my options?

  • @Will-kt5jk
    @Will-kt5jk 6 місяців тому

    Obviously I don't know OpenAI's exact architecture, but for independent consciousness IMO there needs to be a self-persistent loop and metacognition to adjust how things are perceived/processed.
    The speed of the input/response/feedback means the process is far more loop-like than slower-cycling processes, BUT the loop only persists due to human input. It's more akin to an augment to human consciousness than anything independent. It's still transactional.
    Similarly, larger contexts give recent transactions a greater impact on the next output (distantly analogous to how metacognition may impact future perception), but the model is only adjusted when OpenAI trains the next iteration (presumably with vast and separate compute), rather than being integrated on a continual [e.g. mindful practice] (or periodic, depending on how much of a role sleep plays) basis.
    I'm not saying we _should_ be aiming for a persistent loop / generating a narrative self, or encouraging independent metacognition/self-reflection on how and why "thoughts" arise within the machine, with discarding of those which contradict its "sensibilities" (guard-rails arguably perform some of that discarding function, but seem prescriptive rather than reflective), BUT it _would_ be interesting and seems more in line with lived experience of consciousness than the augment model.
    (Though the augment model is potentially powerful enough that a self-sustaining model is not so important to get benefits.)

  • @woodyshade
    @woodyshade 6 місяців тому

    Federico Faggin: "Irreducible", a perspective on consciousness and why machines as they are now can't be conscious but only simulate it. The book is written by a physicist with a peculiar background

  • @joshua.desmoines
    @joshua.desmoines 6 місяців тому +1

    I love your take on this.
    I don't think this is the HER moment others are claiming. There are a couple of things missing.
    1) No ability to take on-device actions to manage my digital files. Remember, the first thing Samantha did was clean up old emails and files from Theodore's phone.
    2) The form factor is going to make this awkward. I do NOT want to be walking everywhere waving my phone around for it to see what I'm seeing.
    I kind of feel like Imran and Bethany at Humane saw this coming, and the Pin may be the best form factor in the near future. I believe Humane has on-device support and integration with other LLMs on their roadmap for the near future. AI Pin + GPT-4o would be pretty epic.

  • @dreamphoenix
    @dreamphoenix 6 місяців тому +1

    Great analysis as always. Thank you.

  • @bora7494
    @bora7494 6 місяців тому

    Please correct me, because I am curious. I thought the key reason the current releases aren't getting to AGI is the lack of logical understanding. They are generative and creative but lack the logic to combine facts beyond "guessing".

  • @andikunar7183
    @andikunar7183 6 місяців тому

    What seems to be missing from your AGI Transformer model idea is memory/persistence and in-context-learning capabilities. Otherwise it's a fascinating idea to stream input, thanks a lot.
    What I also wonder about is whether you can split the neural-net compute of the stream, moving "edge" parts of it out to devices, at least the tokenization/detokenization.

  • @CosmicCells
    @CosmicCells 6 місяців тому +5

    I hate when it "actively" rains...

    • @nomadtv6009
      @nomadtv6009 6 місяців тому +1

      After living in Seattle for 8 years, I personally appreciate the distinction.

  • @karlwest437
    @karlwest437 6 місяців тому

    I think for AGI you need the model to be continually running and using its own output as part of its input, to get a proper cognitive loop going
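
A minimal sketch of the cognitive loop described above: the model's own output is appended to the context and fed back as the next input. The model name and seed prompt are placeholders; a real loop would add memory pruning, external input, and a stopping/safety policy.

    from openai import OpenAI

    client = OpenAI()
    context = [
        {"role": "system", "content": "Think out loud, one short step per turn."},
        {"role": "user", "content": "Goal: plan a small garden."},
    ]

    for step in range(5):  # bounded here; "continually running" would loop indefinitely
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=context)
        thought = resp.choices[0].message.content
        print(f"step {step}: {thought}")
        # Feed the model's own output back in as part of its next input.
        context.append({"role": "assistant", "content": thought})
        context.append({"role": "user", "content": "Continue from your last thought."})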

  • @virtualalias
    @virtualalias 6 місяців тому

    It could be another thing we're not able to verify from the outside, so we get super polarized by it. Can think of a few current issues that are similarly philosophical in nature.

  • @balixong9704
    @balixong9704 6 місяців тому

    Hey David, how do you think this would affect tech sales?

  • @nematarot
    @nematarot 6 місяців тому

    I'm very interested in the difference between the mindset which would encourage full autonomy in AI, and the mindset that wants and believes in control. I'd love to hear more of your thoughts on control, or the illusion of control. Aligning humans IS the hard part!

  • @giuseppetrovato5517
    @giuseppetrovato5517 6 місяців тому

    I have a question for David and others: what do you expect from AGI other than this? I think we have it; what is missing? I think we continuously move the milestone because in the end it's not clearly defined in any way.
    I was thinking that going forward from this we could start talking about ASI.

  • @julien5053
    @julien5053 6 місяців тому

    10:09 1) Hmm, what do you think about types of memory? Short-term memory vs long-term memory? Mamba seems to be more of a long-term memory, while the transformer is more of a short-term memory. Do the current models integrate those two types of memory to mimic human memory?
    2) And what about a model thinking harder about a question? Fast response vs hard-thinking response.
    => Would those two things also be steps toward AGI?

  • @mcpkone
    @mcpkone 6 місяців тому

    Much talk but is it really aligned with the Theory of Holistic Perspective?

  • @barsenovic
    @barsenovic 6 місяців тому +1

    15:05 layers of reality... my man! that totally resonates with my own beliefs

    • @Vaeldarg
      @Vaeldarg 6 місяців тому +1

      Confirmation bias IS usually how belief in "layers of reality" gets reinforced, after all.

  • @ankansharma4897
    @ankansharma4897 6 місяців тому

    You should watch the recent Computerphile video. Training on synthetic data is like making a xerox of a xerox: your bell curve gets thinner and thinner. You can have infinite compute, but you get repeated patterns and train on repeated patterns. So saying data limitation is not an issue is a big stretch.
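
A tiny numerical illustration of the "xerox of a xerox" effect the comment describes: each generation is fit only to a finite sample drawn from the previous generation's model, and the fitted spread tends to drift downward over generations (a toy version of model collapse). All numbers are arbitrary; this is not a claim about any specific model.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0      # the "real data" distribution
    n_samples = 50            # finite training sample per generation

    for gen in range(20):
        samples = rng.normal(mu, sigma, n_samples)  # train only on the previous generation's output
        mu, sigma = samples.mean(), samples.std()   # refit; sampling noise and bias compound
        print(f"gen {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")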

  • @oldnepalihippie
    @oldnepalihippie 6 місяців тому

    Captain Kirk, it dunna work here! I am using 4o and it can't even read a video from YouTube! What am I missing?
    To identify the speaker in the video, I need to watch it or find related information. Unfortunately, I can't stream or watch videos directly. However, you can typically find the speaker's name in the video description or on the channel's "About" page on YouTube. If you provide more context or details from the video, I can help interpret the content further!

  • @DieselBlack
    @DieselBlack 6 місяців тому

    Based on the Spring Update and OpenAI’s videos, I didn’t see any evidence of streaming in and out both being done concurrently. Judging by the sometimes-hiccupy interactions, it seems as though ChatGPT, at the agent level (not necessarily within the model architecture), is listening for further user input and interrupting the model’s output with new streamed input context when such an event happens. Paired with such a snappy and capable model, the interaction comes off much more seamless.
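
A small sketch of the agent-level behavior guessed at above: stream a reply, but keep listening, and cancel the in-flight reply the moment new user input arrives. Everything here is a stand-in (a fake token stream and stdin instead of speech); it is not OpenAI's actual implementation.

    import asyncio
    import sys

    async def stream_reply(prompt: str) -> None:
        # Pretend token stream; a real agent would stream model/TTS output here.
        for token in (f"(answering: {prompt})" + " word" * 30).split():
            print(token, end=" ", flush=True)
            await asyncio.sleep(0.2)
        print()

    async def read_user() -> str:
        # Blocking stdin read moved off the event loop; stands in for speech input.
        return (await asyncio.to_thread(sys.stdin.readline)).strip()

    async def main() -> None:
        history = []
        pending_input = asyncio.create_task(read_user())
        while True:
            history.append(await pending_input)
            reply = asyncio.create_task(stream_reply(" / ".join(history)))
            pending_input = asyncio.create_task(read_user())  # keep listening while "speaking"
            done, _ = await asyncio.wait({reply, pending_input},
                                         return_when=asyncio.FIRST_COMPLETED)
            if reply not in done:
                reply.cancel()  # new input arrived mid-reply: interrupt and start over
                print("\n[interrupted]")

    asyncio.run(main())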

  • @darrenjeromemusic
    @darrenjeromemusic 6 місяців тому

    The emotional dynamics and inflections of GPT-4o remind me of how you would program a sample library with dynamic layers and the ability to crossfade between them, but perhaps something else is going on here?

  • @martianfoodcompany5485
    @martianfoodcompany5485 6 місяців тому

    So could the health benefits of green spaces be partially due to proximal situational sampling? If a view of green space made post-surgery outcomes statistically better, I'm wondering if this is part of the reason.

  • @headpump
    @headpump 6 місяців тому

    Back in 2000 microphones and audio recording worked great. Too much tech today.

  • @johto
    @johto 6 місяців тому

    Yeah, the only limiting thing is the storing of new real-time information and the overall amount of memory and access to it.

  • @ReidKimball
    @ReidKimball 6 місяців тому

    Always enjoy hearing your perspectives. While listening I thought it’s interesting how people are aware of only a small amount of the data being received by our senses. Will these multimodal models be able to capture and process the full sensory experience in ways we can’t or will we need to devise ways to help them filter the vast amount of info input like a human does?

  • @bro_dBow
    @bro_dBow 6 місяців тому

    Useful commentary, good job!

  • @DefenderX
    @DefenderX 6 місяців тому

    Hey, have you checked out the recent advancements of the Tsetlin-machine?

  • @spini25
    @spini25 6 місяців тому

    I think there's also an important aspect of how efficient the training is. otherwise, training bigger and bigger models would be extremely expensive..

  • @susanbrown4203
    @susanbrown4203 6 місяців тому

    Thank you David I have been highlighting this for over a year 😀

  • @dylan_curious
    @dylan_curious 6 місяців тому

    If you look at intelligence as a complex system, there's a certain tipping point of complexity where the system takes on new characteristics. That's why you can't really point to something like an LLM and say it's never going to work.

  • @davidjloudon1
    @davidjloudon1 6 місяців тому

    Dave, do you think imagination as a human quality will increase or decrease in value (socially, commercially, etc.) with the development of AI? Or will it be first one and then the other?
    At the moment, it seems AI still entirely lacks imagination of the kind that produces true poetry, emotionally moving art, and other creative works that feel familiar and profound. It seems to me that there is something very mysterious and unpredictable about that kind of imagination. There is both chaos and harmony in it. I feel this artistic imagination is uniquely affecting because it creates meaning from sometimes totally dissimilar unions of ideas, and this meaning is experienced as something greater than the sum of its parts.
    Poetry is an interesting linguistic example. Good poetry seems to use words to express something beyond the words- it needs the human being to interpret it through the lens of its humanness in order for it to be poetry, otherwise, the words only mean exactly what they say and nothing more.
    At the same time, it does feel intuitively correct, at least from a materialist point of view, that with enough data, the entire continuum of human subtlety could ultimately be achievable. But even that I see as only being possible after the entire architecture of the brain right down to the atom is “solved” by AI, mainly due to the fact that we ourselves have experiences too ephemeral to be understood (or even noticed), but not too subtle to be felt and assigned meaning.
    One more thought: An artwork itself may be the only data that truly represents what it is, because its real expression and meaning depend upon how it affects a human being directly. If the meaning of art is the experience it produces more so than how that experience is interpreted by the intellect, then maybe it is not so simple to synthesize. The real “data” are effectively not present, because the interpreter is not human.