GPT-4o is WAY More Powerful than OpenAI is Telling us...

  • Published 13 Jun 2024
  • OpenAI just unveiled their new GPT-4o model, and it's more powerful than we ever imagined! In this video, we dive deep into what makes GPT-4o truly multimodal, capable of generating text, images, audio, and even video. Discover the groundbreaking features and hidden capabilities that OpenAI didn't fully reveal. From stunning image creation to lifelike audio generation, GPT-4o is set to revolutionize the AI landscape. Watch now to uncover the full potential of this game-changing model!
    ▼ Link(s) From Today’s Video:
    GPT-4o Page: openai.com/index/hello-gpt-4o/
    Min Choi's Awesome Thread: / 1790416703404302463
    Open AI YT channel: / @openai
    Greg Brockman GPT-4o image gen: / 1
    Smoke away prediction: / 1791142705244127481
    ► MattVidPro Discord: / discord
    ► Follow Me on Twitter: / mattvidpro
    ► Buy me a Coffee! buymeacoffee.com/mattvidpro
    -------------------------------------------------
    ▼ Extra Links of Interest:
    AI LINKS MASTER LIST: www.futurepedia.io/
    General AI Playlist: • General MattVidPro AI ...
    AI I use to edit videos: www.descript.com/?lmref=nA4fDg
    Instagram: mattvidpro
    Tiktok: tiktok.com/@mattvidpro
    Second Channel: / @matt_pie
    Let's work together!
    - For brand & sponsorship inquiries: tally.so/r/3xdz4E
    - For all other business inquiries: mattvidpro@smoothmedia.co
    Thanks for watching Matt Video Productions! I make all sorts of videos here on YouTube! Technology, Tutorials, and Reviews! Enjoy your stay here, and subscribe!
    All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them.
    Timestamps:
    00:00 Introduction and Initial Reactions
    00:36 Overview of GPT-4o and Multimodal AI
    01:42 Comparison with GPT-4 Turbo
    03:22 Text Generation Capabilities
    07:22 Audio Generation Capabilities
    12:22 Image Generation Capabilities
    19:04 Advanced Features
    23:27 Video Understanding Capabilities
    27:34 Conclusion
  • Science & Technology

COMMENTS • 1.1K

  • @MattVidPro 28 days ago +131

    I think the image editing is one of THE most mind-blowing pieces of this... What do you guys think?

    • @CallMeThyme 28 days ago +10

      I think it's amazing. Also, love your videos. I've watched for almost 2 years!

    • @muuuuuud 28 days ago +13

      I'm wondering how far off we are from a universal real-time translator between humans and some animals. O.O We might get an earful soon. X3

    • @The_MostHigh 28 days ago +2

      When are these image capabilities being released? I tried recreating the samples with ChatGPT-4o by copying the prompts and steps, but could not generate consistent characters.

    • @nathanbanks2354 28 days ago +2

      I think it's the latency of audio -> GPT-4o -> audio (around 200ms) vs audio -> whisper -> GPT-4-turbo -> elevenlabs (around 800-1200ms).
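      The back-of-the-envelope comparison in this comment can be sketched in a few lines of Python. Only the ~200 ms and ~800-1200 ms end-to-end figures come from the comment itself; the per-stage numbers for the chained pipeline are illustrative assumptions chosen to land inside that quoted range:

      ```python
      # Rough latency budget for the two voice pipelines described above.
      # Per-stage numbers for the chained pipeline are assumptions, not
      # measured values; they sum to the ~800-1200 ms range quoted.
      native_ms = {"audio -> GPT-4o -> audio": 200}  # one end-to-end multimodal model

      chained_ms = {
          "audio -> Whisper (speech-to-text)": 300,
          "Whisper -> GPT-4-turbo (text generation)": 500,
          "GPT-4-turbo -> ElevenLabs (text-to-speech)": 300,
      }

      native_total = sum(native_ms.values())    # 200 ms
      chained_total = sum(chained_ms.values())  # 1100 ms
      print(f"native: {native_total} ms, chained: {chained_total} ms")
      print(f"chained is ~{chained_total / native_total:.1f}x the native latency")
      ```

      The chained design compounds latency because each stage has to finish before the next one can start, which is why folding all three steps into one model cuts the response time so sharply.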

    • @Fytyny 28 days ago +1

      @@The_MostHigh The 4o available to users currently only outputs text. They said they're going to release it step by step; for the next step they'll release audio output for pro users in a couple of weeks. So we'll have to wait for all that.

  • @itsallgoodaversa 28 days ago +358

    14:17 Matt, the multiple whiteboards/chalkboards at the top ARE realistic. This is actually how chalkboards in older classrooms used to work. They would have multiple chalkboards on sliders that you could pull up and down.

    • @DeceptiveRealities 28 days ago +20

      Note that it also inset the top one inside the bottom one, as one would expect.

    • @neelmodi6693 28 days ago +14

      Most chalkboards I've seen are still of this variety--several overlapping chalkboards that slide up or down depending on which one you want to write on in the moment.

    • @jnxmaster 27 days ago +15

      Yes, these are still commonplace at universities.

    • @AmazingArends 27 days ago

      I never saw a multiple chalkboard like that… 🤔

    • @82NeXus 26 days ago +3

      It might be 'meant' to be a multi blackboard, but if you look at it, its structure isn't at all realistic. I wonder if current models such as GPT-4o use their understanding of basic physics, structure and mechanics when they create images, like a human who's used to living in this world would? They do display some understanding of those things in their text output. But unlike humans, they don't have tactile experience of the world to draw on. And does GPT-4o have 3D vision? Most of its training images will be 2D!

  • @reifuTD 28 days ago +246

    One of the things I'd like to try with GPT-4o is taking a photo of a page from a manga, comic book, or even a novel and asking it to read back the text in the voices of the characters as they speak.

    • @ReLapseJunkie 28 days ago +8

      Nice

    • @fynnjackson2298 28 days ago +22

      ...and then have it generate sidequests, and use Sora to convert them into Marvel-style video images while GPT reads it in an emotionally dramatic voice.

    • @1x93cm 28 days ago +17

      Bruh, with Sora you could have it animate its own anime.

    • @justinwescott8125 27 days ago +6

      Don't forget sound effects and background music

    • @ClayMann 27 days ago +7

      I'd like to see how Sora-level AI could re-imagine comics. Imagine if each panel was fully animated: trees blow in the wind, characters breathe and of course speak what's in their bubbles. A running character would have the scenery fly by, and all the animation would be derived from the panels. I'm not even sure how you would read such a thing. As one long flowing video going from panel to panel? Or have panels play as video as you hover over them? Maybe something far more bizarre, where what a comic is melts away, replaced by some fusion of photorealism and motion translating the comic's intention into actual little movies. This kinda sounds crazy, but seeing what is coming, I don't think it's beyond Sora-level engines from Google and OpenAI.

  • @chrisbtr7657 28 days ago +80

    I don't know about everyone else, but most of the people I come in contact with have no clue about the rapid developments in AI. Kind of eerie...

    • @SignumEternis 28 days ago +22

      Yeah, I've been saying for a while now that a lot of people are going to be completely blindsided by how much things are going to change soon with how fast AI is advancing. Even as someone actively following it I find myself being blown away fairly often. The future is gonna be wild.

    • @chrisbtr7657 27 days ago +2

      @@SignumEternis Oh yeah, big time. If you follow it and have a somewhat tech-savvy/biz mind, there are so many oh sh$! moments. On my end most people are not paying attention and going on with business as usual. That is, unless they are in an industry that is suddenly being directly impacted.

    • @jros4057 27 days ago +6

      I tried showing family that GPT-4o video and they didn't get it and turned it off halfway through.

    • @CJ-jf9pz 27 days ago

      I somewhat follow it, or try to, and even I feel blindsided by how far they've come. Then I imagine how far they've actually gone but haven't shown us yet.

    • @chrisbtr7657 26 days ago +2

      @@jros4057 Yeah, and just one of the scenes from that video, the AI teaching the kid math, is a major paradigm shift. To think teachers could soon be replaced with a much smarter and more efficient system in AI. Not saying that's a good thing, but it is what it is and we have to deal with it. Just that piece alone is normalcy-shattering news. But yeah, most people don't seem that interested. It's wild.

  • @MikeWoot65 28 days ago +199

    Idk if I'm more impressed with the life-like sound of the voice, or how human it feels to interact with (i.e. it understands our emotions)

    • @dmitrysamoilov5989 28 days ago +4

      It doesn’t actually work when you use it, the demo must be a better model

    • @The1QwertySky 28 days ago

      @@dmitrysamoilov5989 It's not fully out yet

    • @DeceptiveRealities 28 days ago

      @@dmitrysamoilov5989 That hasn't been released yet. It's coming in the new app.

    • @DeceptiveRealities 28 days ago +20

      I hope it has a changeable voice and can have that over-the-top expression dialled down. To my non-American ears it sounds raucous and emotively fake.

    • @tracy419 28 days ago

      @@dmitrysamoilov5989 It's being released over several weeks

  • @MattVidPro 28 days ago +47

    Timestamps for yall:
    00:00 - Introduction and Initial Reactions
    Introduction to the video.
    Reaction to OpenAI's real-time AI companion.
    00:36 - Overview of GPT-4o and Multimodal AI
    Explanation of GPT-4o.
    What does "multimodal" mean?
    01:42 - Comparison with GPT-4 Turbo
    Differences between GPT-4o and GPT-4 Turbo.
    Audio capabilities of GPT-4o.
    03:22 - Text Generation Capabilities
    Speed and quality of GPT-4o's text generation.
    Examples of high-speed text generation.
    07:22 - Audio Generation Capabilities
    Demonstration of GPT-4o's audio generation.
    Examples of emotive and natural voice outputs.
    12:22 - Image Generation Capabilities
    Explanation of GPT-4o's image generation.
    Examples of high-quality image outputs.
    19:04 - Advanced Features
    Image recognition and video understanding.
    Examples of practical applications and scenarios.
    23:27 - Video Understanding Capabilities
    Discussion on GPT-4o's video capabilities.
    Potential future developments and limitations.
    27:34 - Conclusion
    Final thoughts on GPT-4o's impact and potential.
    Invitation to viewers to subscribe and join the community.

    • @mylittleheartscar 28 days ago +2

      Can't wait till they crack their own 1M+ tokens

    • @ouroborostechnologies696 28 days ago +2

      "yall" is not a word

    • @LAIDBACKMANNER 27 days ago

      @@ouroborostechnologies696 Yeah it's "Y'all", you fuckin' grammar Nazi, lol.

    • @AmazingArends 27 days ago +1

      @@ouroborostechnologies696 neither is "gentleladies" but they now use that in Congress 😂

    • @NinetooNine 24 days ago

      I'm curious: what do you think about OpenAI getting rid of the Sky voice (the one that sounds like the voice from "Her") from their ChatGPT-4o model?

  • @helge666 28 days ago +77

    GPT-4o is also A LOT more reliable when it comes to long-form text processing. Not even comparable to either GPT-4 or Gemini. It follows the prompt much better, doesn't get lazy so easily, and doesn't start to hallucinate so quickly. I tried four hours to get GPT-4 and Gemini to do what I wanted, and they failed miserably. GPT-4o completed the whole damn task in 40 minutes without so much as a hiccup.

    • @ronilevarez901 28 days ago +6

      How come? I got kicked back to 3.5 after 4 messages. I can hardly do anything with that limit. And having to wait 4 hours to continue the chat is not convenient.

    • @helge666 27 days ago

      @@ronilevarez901 Good question. GPT-4 threw me out after countless attempts to get it to do what I wanted, and GPT-4o just did it. I'm in Germany; maybe it's a time-zone thing: less traffic at my CEST time, and therefore fewer bandwidth/token restrictions?
      I gave it this prompt (translated here from German, because I was working with German PDF documents):
      Please read the attached PDF document in full and format the content according to the following instructions:
      - Remove all hyphens (-) from the text.
      - Fix the spacing of words set in letter-spaced type so they display normally (example: turn "R a u m s c h i f f" into "Raumschiff").
      - Remove all superfluous section identifiers (e.g. "B-20" or "C-1").
      - Avoid duplicate headings and make sure every section has a clear, unique title.
      Do not change or invent any words or content. Please do not create summaries. Use only the original text.
      Format the text as clean running text, paying attention to correct paragraphing and punctuation. Please carry out the editing in a single pass and present the complete result.

    • @V-ob5zf 27 days ago

      @@ronilevarez901 He probably has a Plus account. The rate limits are 5x higher on a Plus account.

    • @matiascoco1999 27 days ago +2

      @@ronilevarez901 Probably using the API, so different rate limits

    • @therainman7777 27 days ago

      @@matiascoco1999 Or just a Plus subscriber.

  • @evil1knight 28 days ago +90

    Chalkboards often have multiple boards that slide on top of each other

    • @fobusas 27 days ago

      My old middle school used to have ones that swing over to the side

    • @ta1k2t0ny 26 days ago

      my thought exactly

  • @fynnjackson2298 28 days ago +90

    Services like Audible should release AI that reads the books but also lets you talk about the topics, take quizzes, and more, making the entire book library an instant interactive homeschooling study resource for anyone wanting to level up in life. In contrast to just 'consuming' audiobooks as we do in today's passive, one-way relationship dynamic.

    • @Ahm.elzain 27 days ago +3

      I have indeed been saying it’s the inhabitants of digital by spiritual beings jins to interact and communicate with human through “ technology “ tree frame ayyyy!! The final form set but rolling out gradually in order to be accepted normalise it.. collect consciousness

    • @Suhita-ys6hd 27 days ago

      Pretty sure there's a PDF-reader ChatGPT bot; you don't even need Audible to do this, just your book as a PDF file.

    • @JBDuncan 27 days ago

      @@Suhita-ys6hd Do you know the name of it?

    • @AmazingArends 27 days ago

      That would be cool, but they have to get rid of the bias first, so if you read a book with a conservative point of view, the AI won't lecture you for engaging in political incorrectness! 😂

    • @fynnjackson2298 26 days ago +2

      @@Suhita-ys6hd Nah, I'd like the low latency and choice of reading tone with GPT-4o. Other current apps still feel like talking to a robot, so to speak

  • @kfrfansub 28 days ago +42

    The most mind-blowing thing is the speed. With that speed and variety of natural voices you could make a real RPG game with AI NPCs

    • @markmuller7962 27 days ago +1

      Can't wait

    • @JaBigKneeGap 12 days ago +1

      Even an entire game made by it. I've already been trying to get it to make me a JS RPG; the visuals are stunning

    • @kfrfansub 12 days ago

      @@JaBigKneeGap If you have a video of this running as an RPG, I'd love to see it

  • @SpikyBlade 28 days ago +47

    Man, the image understanding of GPT-4o is crazy

    • @Angel-Azrael 26 days ago +3

      Yes, I asked it to transcribe scanned handwritten birth certificates from the 1800s, in Portuguese, where I can't read most of the words. It works; some errors, but it's mind-blowing

    • @peterlang777 26 days ago +2

      At this level of functionality, hooked to a global database like the internet, it would be able to do 80% or more of human jobs

    • @dot1298 25 days ago +2

      Is there a risk the US government could confiscate it from OpenAI and use it for the Pentagon etc.?

    • @dot1298 25 days ago +1

      …under Trump?

    • @peterlang777 25 days ago

      @dot1298 Yes, see the Invention Secrecy Act of 1951. It's unlikely though, as the public already knows about it

  • @starblaiz1986 28 days ago +97

    15:53 Actually no, the image generation didn't screw up. If you look, that's actually EXACTLY what was written, including capitalisation (or lack thereof). What's even more impressive is that it actually split the word "sound's" across multiple lines and did it completely correctly! Actually mind-blowing! 🤯🤯🤯

    • @JonasMcDonald 28 days ago +6

      This

    • @Omii_3000 27 days ago +12

      FR Mattvidpro failed English 101 Lollll

    • @freecivweb4160 27 days ago +6

      No, hyphenation happens between syllables of multisyllabic words; that's the rule.

    • @JREinaNutshell331 27 days ago +3

      I'd even say it's more impressive than it seems. They deliberately made a mistake with "sound's" and ChatGPT-4o didn't correct the mistake (which it should have done given its training).
      So ChatGPT-4o did exactly what the prompt said even though it's against its training.
      Or am I wrong here?

    • @quarksandaces2398 27 days ago

      It got "everything" wrong

  • @fabiankliebhan 28 days ago +19

    About the chalkboard. I think the dual chalkboards are not unrealistic. We had those a lot when I was studying. You could move them up and down to have more space.

    • @yt45204 28 days ago +4

      Our lecture halls had high ceilings and triple chalkboards

  • @johannesdolch 28 days ago +57

    Honestly, regarding images: what we really need IS multi-modality. The images produced by common models like SD are good enough. The problem is that the model doesn't really understand what it is doing. If they can keep the quality of current models and just add deep understanding, that multiplies the actual quality of the outcome by orders of magnitude, in the sense that you get what you actually want AND can change specific things, instead of getting images that only loosely follow a prompt and then inpainting and hoping for the best.

    • @jaredf6205 28 days ago +5

      No other image AIs have access to language models that good.

    • @antonystringfellow5152 28 days ago +6

      Yes, I've been saying this all along.
      The human brain isn't separate modules, trained separately then cobbled together. It does have specialized regions but it learns together, as one. In doing so, it makes many associations. Most of our knowledge/memory is formed through multiple associations.
      For any AI to have truly general intelligence, it must be able to do the same. This is how we are able to transfer one set of knowledge/skills to a new area or novel task.
      Other image generating AIs often screw up the hands because they don't understand what fingers are, let alone that we have eight fingers and two thumbs.
      If you watch AI-generated videos, you'll see similar strange things happening, like people walking into walls then disappearing. They can generate photo-realistic videos but don't understand what the images represent. A truly multimodal model solves these problems.

    • @14supersonic 28 days ago

      These aren't really LLMs anymore. ​@@jaredf6205

    • @LAIDBACKMANNER 27 days ago +1

      In order for it to have true "understanding" it would have to become conscious... which, in the field of AI, will inevitably happen someday. Hopefully later rather than sooner, lol.

    • @minimal3734 27 days ago

      It seems that when learning multiple modalities, they reinforce each other and interact in a way that increases intelligence in a non-linear way.

  • @user-xj6ke4qk8t 27 days ago +6

    The ability to screen-share your desktop and dictate is a game changer for context, as you can demonstrate what you want done rather than just trying to describe it.

    • @allanshpeley4284 25 days ago

      It really is. This is the main tech I've been waiting for. Unfortunately it's only rolling out for Mac initially. And I'm not sure we'll be able to screenshare with it in real-time and train it to use our programs/tools. But that can't be far off.

  • @nathanbanks2354 28 days ago +30

    An odd thing about GPT-4o is that it's better at poetry than it used to be. It has a better idea of the meter of a limerick or a sonnet than it did before it had a multimodal understanding of what words sounded like. Words like "love" and "prove" don't rhyme any more. You can see this by asking GPT-4 turbo and GPT-4o to produce poems using the existing text interface. It's also the first time I found a model that can reliably produce a Petrarchan/Italian sonnet instead of a Shakespearean/Elizabethan sonnet--previous models always used the much-more-common Elizabethan rhyming scheme.

    • @Rantarian 28 days ago +4

      There's only a handful that can do poetry properly. GPT-4o is one of them.
      I've experimented with having non-rhyming poems, mixed meters, and a focus on a variety of poetic techniques. It is absolutely capable of creating a poem using metaphor at a distance to talk about something apparently unrelated to what it seems on the surface.

    • @82NeXus 27 days ago +5

      @@Rantarian That's incredible. But I can believe it. I think maybe these models have more understanding than a lot of people think. People often say they don't understand things the way humans do. I don't get it. To me, a thing is either understood or it is not. The mode or mechanism of understanding of ML models vs humans may be very different, but to me that's irrelevant! Understanding is an abstract capability that has nothing to do with physical process or mechanism. I'm sure it is in AI companies' interests to downplay the intelligence / understanding / power of these models, so that they can get on with developing, releasing and in some cases commercializing them without too much pushback or regulation!

    • @minimal3734 27 days ago +2

      @@82NeXus I agree with that. The claim that AI models don't "really" understand is absurd. Understanding cannot be simulated; it is there, or it is not.

    • @alexmin4752 27 days ago +2

      It makes sense, since rhyme is basically sound. If a model has no comprehension of what sound is at all, it can't generate poetry; it can only roughly mimic the writing style of real poets. It's the added sound modality that made it better at rhyming.

    • @nathanbanks2354 27 days ago

      @@alexmin4752 Precisely.

  • @WordsInVain 21 days ago +4

    12:27 Unless it's an app specific feature, GPT-4o in the ChatGPT interface explicitly states that it generates images using DALL-E 3.

  • @moxes8237 28 days ago +35

    I remember reading Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies", and in one of the chapters something stuck with me that goes somewhat like this: "I can see a scenario where any one entity being six months ahead of everybody else is enough to win the game."

    • @antonystringfellow5152 28 days ago

      Less than 6 months ahead is probably more than sufficient.

    • @1x93cm 28 days ago

      Yeah, but the game of money is soon coming to an end. Once you make AGI, ASI is a step away. How long can the current system function when nobody is necessary? They just released a Chinese robot that costs 16K and can do most anything. Add in this GPT-4o and that BTFOs all low-skill wagies.

    • @Brax1982 28 days ago +1

      @@1x93cm Ah...did China tell you that they did that?

    • @14supersonic 27 days ago

      @1x93cm I think you misunderstand how AGI and ASI will actually change the necessity of humans. Even with the most advanced AI and robotics, humans will always be necessary. Resources and work are needed, and if anything human intelligence will become even more of a commodity. Machines can't replace our creativity no matter how smart they might get.
      Getting rid of human labor as we think of it now would be beneficial, but removing human power from the equation entirely would be foolish. Don't forget the greedy people who will not allow the machines to take their resources and money away from them to begin with. What do you think all the regulations are for? It's to protect them from AI, not us.

    • @1x93cm 27 days ago +2

      @@14supersonic If there is an economic incentive for something, it happens. If there is an economic incentive to replace most if not all human labor, it'll happen and nobody will care about the consequences. After seeing drone videos from Ukraine, it would be very easy to put down any uprisings that result from mass unemployment or unlivable conditions.
      The solution will be the creation of a sideways economy similar to the localized economies of favelas.

  • @Summer_Dream3r 28 days ago +26

    It's a strange time. Back in the day, when companies released something, it was a given that it was the latest state-of-the-art thing. These days it seems like no one really knows how far AI has progressed, and it feels more like a poker game where the different players (companies) are holding their cards close, wondering what cards everybody else has.

    • @82NeXus 27 days ago +9

      Also, I'm sure it is in AI companies' interests to downplay the intelligence / understanding / power of these models, so that they can get on with developing, releasing and in some cases commercializing them, without too much pushback or regulations!

    • @Summer_Dream3r 27 days ago +5

      @@82NeXus bingo

    • @markmuller7962 27 days ago +2

      So true. There's so much behind the scenes that literally anything could come out at any moment. Pretty exciting times
      Edit: Also, AI computing power is growing so rapidly that "everything is possible" is quite literal now

    • @piteshbhanushali1140 27 days ago +5

      And all this was trained on older Nvidia GPUs... think how powerful it will be on H200 GPUs 😮

    • @SarcasticTruth77 20 days ago +1

      Now, what sorts of AIs are governments training in private?

  • @wannaBtraceur 27 days ago +7

    This is the first AI model that I feel the urge to use. The capabilities are incredible.

  • @iamjohnbuckley 28 days ago +10

    Cracked me up at "I wouldn't even be able to tell you this was a missile in the first place! This thing's a professional!" 😂

    • @82NeXus 27 days ago +1

      Has anyone verified that it got the missile picture right? Coz ChatGPT 3 could've convinced you that that missile came from anywhere 😂

    • @apache937 26 days ago

      I don't even see a missile

    • @stevelamb6720 9 days ago

      I thought the missile was actually part of the question, which is what triggered it to think it was.

  • @alansmithee419 28 days ago +5

    14:10
    Many university blackboards like this come in sets of three at different depths above the wall. You can slide them up and down to access the other boards. It allows the lecturer to keep writing on a new board while allowing students to still see previous steps in the lesson if they need to look back, and it also means the professor doesn't have to waste time erasing the whole board every 5-10 minutes.

  • @levonkenney 28 days ago +9

    What's scary is that Sora AI video generation is this good now. Imagine AI video in 1, 2, or even 3 years; it's going to be crazy

    • @fynnjackson2298 28 days ago +1

      Film remakes on demand, but so good they all get a 9.0 IMDb rating.

  • @TrueTake. 28 days ago +12

    Its understanding of the world is next level. That understanding translates to, as OpenAI themselves said, abilities still being realized... They don't shy away from saying AGI is imminent. I think if you give it video and indefinite memory, that WILL be AGI.

    • @fynnjackson2298 28 days ago +3

      Yupp, pretty much. Just add in memory and video and it's AGI. However, I'd love it to have, say, a 160 IQ as well

    • @JohnSmith762A11B 28 days ago +6

      @@fynnjackson2298 Could Einstein speak 50 languages? IQ cannot capture what an intelligence like GPT-4o really can do. No, it's not perfect, but perfection isn't required for AGI.

    • @TrueTake. 28 days ago +2

      @@fynnjackson2298 I think it's already at child level now and will shoot past 160 pretty fast after reaching AGI, on the way to ASI, depending on the guardrails.

    • @minimal3734 27 days ago +4

      @@TrueTake. "at child level"? I think it's light years ahead of average human capabilities in most areas.

    • @AtliOddsson 22 days ago +1

      The 4.0 supposedly had a calculated IQ of 155

  • @DougieBarclay 28 days ago +6

    That multiple blackboard was intentional. Lots of lecturers use rolling multiple blackboards like the one depicted.

  • @DJ-Illuminate 28 days ago +17

    The Android app no longer has audio in either 4 or 4o. I was hoping the website version had audio, but nope.

    • @febilogi 28 days ago +4

      Wow, you're right. After I read your comment I can no longer access it on my Android phone.

    • @brexitgreens 28 days ago

      Sometimes it has the old audio system, at other times just basic voice typing.

    • @johnshepard5121 28 days ago +2

      I created a new free account and it works there. It doesn't work on the paid subscription. The optimist in me hopes that's because they're updating it to the new version, though I know they're probably just fixing something.

    • @davidyoung623 28 days ago +1

      My app still has the old "voice mode"... I've seen a lot of people saying it disappeared for them, but still there for me 🤷‍♂️

    • @DumPixels 28 days ago +1

      @@johnshepard5121 On my free account it's not there either

  • @fynnjackson2298 28 days ago +7

    Just the fact that we have to rethink the trajectory of our lives and how we operate because of all this new tech is so awesome. AI + humanoid robots at mass scale, plus robotaxis, plus compounding technical advancements in all areas. The future is coming, and it's coming faster and faster. What a trip!

  • @alansmithee419 28 days ago +12

    The moniker "omni" implies to me something bigger also, though I doubt it's true:
    "omni" meaning "all" suggests that the AI is capable of using literally any modality, and working with all modalities together.
    Since this is clearly not the case, it may instead be that it actually means it is in some way modular, or easy to retrain to add extra modalities that it is currently not able to use without hindering its ability to work with previously learned modalities.
    Again, very much doubt it, but that's what the name should suggest. OpenAI probably just thought it sounded cool.

    • @blisphul8084
      @blisphul8084 28 days ago +2

      Mixture of experts with some experts having additional modalities perhaps?

    • @brexitgreens
      @brexitgreens 28 days ago +1

      "Omni" instead of "multi" because seamless and arbitrarily generalisable to any modality. A prelude to embodied GPT.

    • @FredPauling
      @FredPauling 28 days ago +1

      Maybe there are some modalities it trained on that are not yet exposed. I can imagine robot joint angles, torques, velocities, and accelerations being important for their robotics partners using end-to-end learning.

    • @antonystringfellow5152
      @antonystringfellow5152 28 days ago

      I believe it is true.
      They even give a strong hint of this on their website.
      "Since this is clearly not the case," - Can you explain this for me? I must have missed something.

    • @therainman7777
      @therainman7777 27 days ago

      I think Omni simply means “all” as in “all commonly used modalities.” I don’t think it’s much deeper than that.

  • @Calupp
    @Calupp 11 days ago +1

    I see that Bold and Brash painting in the back. You cultured man

  • @kingofkings652
    @kingofkings652 28 days ago +27

    Last time I was this early there weren't even animals on land.

    • @brexitgreens
      @brexitgreens 28 days ago

      I know that feeling. 🥇🏆

    • @echoes28
      @echoes28 28 days ago +1

      Ha ha you're old

    • @choppergirl
      @choppergirl 28 days ago +1

      I know, right? The first time we visited this planet everyone was dinosaurs, so we interfaced with you as dragons. Then there were people in togas playing lyres, so we interfaced as Greek gods. Now there are little men everywhere playing video games, so we interact with you as AI deepfakes.

    • @nuttysquirrel8816
      @nuttysquirrel8816 28 days ago

      🤣😂😆

    • @Paranormal_Gaming_
      @Paranormal_Gaming_ 28 days ago

      You were not. We know it's bullshit.

  • @I-Dophler
    @I-Dophler 28 days ago +1

    The image editing capabilities are truly mind-blowing. With music, video, and audio generation advancements on the horizon, the creative possibilities are endless. Many thanks.

  • @4l3dx
    @4l3dx 28 days ago +23

    GPT-4o is the checkpoint 0 of GPT-5 🤯

    • @Edbrad
      @Edbrad 28 days ago +9

      They totally already have GPT-5. I firmly believe a lot of the work at these companies is just packaging up a small increase in ability when they feel like it. Like Google always goes too far with their lobotomies. OpenAI also has a history of this. When ChatGPT 3.5 came out it was much better than what 3.5 turned into. As soon as they first updated ChatGPT it was a downgrade, and then when GPT-4 came out it was like some of that increase in ability was just getting back some of what they'd taken away. They've taken away some of these abilities in the recent GPT-4o since 3 days ago! It can't understand sound now, like birds, dogs, heavy breathing, emotional expression, and it tells me in multiple new sessions that it can't sing. So we know they can easily just turn off some of this. Sora is also WAY WAY WAY too good, and I think that's because they have an EXTREMELY good model behind the scenes.

    • @PrimaDel
      @PrimaDel 28 days ago

      We are so fucked...

    • @binyaminramati3010
      @binyaminramati3010 28 days ago +4

      I don't agree at all. GPT-5 would need to be much, much smarter, which is a far greater challenge than creating a multimodal model, which is about efficiency. The scientific research in these two domains is very different.

    • @xitcix8360
      @xitcix8360 28 days ago +9

      @@Edbrad Yes, they do have GPT-5, this has been confirmed. Also, you aren't using the new audio and you never did, that's not even released yet, you're still using whisper. Also, the new image generation is better than Sora in some cases.

    • @crawkn
      @crawkn 28 days ago

      Yes, I'm thinking this is just an early version of what was intended to be GPT-5, but strategically they needed to pre-empt some other developers' releases. Which raises the question: if this is only a GPT-5 beta, how good will GPT-5 be?

  • @Omii_3000
    @Omii_3000 27 days ago +12

    12:04 how could a deaf person hear GPT 4o say "hey you have to get out of here" 😂

    • @HauntedCorpse
      @HauntedCorpse 27 days ago +1

      LITERALLY, WHAT THE HECK

    • @420Tecknique
      @420Tecknique 24 days ago +1

      Light strobing and vibrations, which it definitely can do

  • @javebjorkman
    @javebjorkman 27 days ago +3

    I gave it items and rates from our video production company. I asked it what the prices were for certain items, and it still gave me the wrong rates. I asked it to create a budget for a one-day production, and the prices it used were not what I gave it. I still think we've got a long way to go.

  • @johnwilson7680
    @johnwilson7680 27 days ago

    Another great video. It's so strange that they didn't mention any of these breakthroughs during the demonstration. Can't wait for it to be fully rolled out.

  • @chill_yall6439
    @chill_yall6439 27 days ago +1

    I showed it a pic of my product to help me with an Etsy listing, and it perfectly identified the item, all the materials used, who would use it, and for what purpose. I was truly speechless.

  • @robertonery202
    @robertonery202 28 days ago +10

    Dude, you are killing the other YouTubers with your reviews. Keep it up, brother, and thanks for keeping us super informed.

    • @allanshpeley4284
      @allanshpeley4284 25 days ago

      Huh? I get something out of all of them I follow.

  • @michaelmcwhirter
    @michaelmcwhirter 28 days ago +3

    Great video Matt! Thank you for all the helpful information 🔥

  • @brexitgreens
    @brexitgreens 28 days ago +34

    I told you, Matt, that we were going to have GPT-4.5 before GPT-5 - but you didn't believe me. Turns out GPT-4.5 is named GPT-4 Omni.

    • @IceMetalPunk
      @IceMetalPunk 28 days ago +6

      I don't know if that's fair. The token space is entirely different, as is the training data. I think the only reason they're not calling it GPT-5 is because they seem to be reserving numerical iteration for size increases. In other words, every GPT model they make, no matter how different, will be called a version of GPT-4 until they scale up the number of parameters significantly. But to say it's just "4.5" -- like it's fundamentally the same with minor upgrades -- is a bit reductive.

    • @brexitgreens
      @brexitgreens 27 days ago

      @@IceMetalPunk OpenAI have declared from the outset that GPT-5 will/would be embodied.

    • @IceMetalPunk
      @IceMetalPunk 27 days ago +1

      @@brexitgreens Did they? I missed that. Interesting... so they won't call new models GPT-5 until they're in the Figure 0x.

    • @brexitgreens
      @brexitgreens 27 days ago

      @@IceMetalPunk Also, a recent US Department of Defense report states that OpenAI has not even begun training GPT-5.

    • @IceMetalPunk
      @IceMetalPunk 27 days ago +4

      @@brexitgreens But what does *that* mean? They clearly have been training GPT-4o. Saying "they're not yet training GPT-5" just means "they haven't yet decided to call a model GPT-5", but as Shakespeare famously said, what's in a name?

  • @gpierce6403
    @gpierce6403 27 days ago +1

    Great video, thank you for keeping us all informed on the latest AI!

  • @MrTk3435
    @MrTk3435 27 days ago +3

    Matt, Ideogram opened up a new world for me! It's so dope, and so is the new GPT-4o. Thank you for your work ✨✨😎✨✨

  • @andrewrozhen513
    @andrewrozhen513 28 days ago +3

    How do you guys access it for free? I've explored the app and also the OpenAI platform, playground, everything. There is no free option unless I subscribe to the paid "Plus" option.

  • @andre_0413
    @andre_0413 28 days ago +2

    Did you upgrade your camera setup? Video is looking crisp!

  • @DariusNmN
    @DariusNmN 27 days ago +1

    Great overview. One of the best I’ve seen. Thx

  • @sportscommentaries4396
    @sportscommentaries4396 28 days ago +8

    Hopefully we get access to the real-time stuff soon, I can't wait for that.

    • @alexatedw
      @alexatedw 28 days ago +1

      We have it

    • @sportscommentaries4396
      @sportscommentaries4396 28 days ago +2

      The voice stuff? I don't see it on mine; I'm on GPT Plus.

    • @okaydetar821
      @okaydetar821 28 days ago +7

      @@sportscommentaries4396 We don't have it yet; the old voice mode has confused a lot of people into thinking it is the new one.

    • @alexatedw
      @alexatedw 28 days ago

      @@sportscommentaries4396 update your app

    • @ReLapseJunkie
      @ReLapseJunkie 28 days ago

      @@alexatedw no we don't

  • @chrisweidner7527
    @chrisweidner7527 28 days ago +9

    Wow, I saw this on my GPT tab and didn’t really use it, but now I know it’s THIS powerful! I’ll definitely use this from now on!

  • @luisresende5298
    @luisresende5298 1 day ago

    the 3D generation really blew my mind. jaw dropped literally. thank you for the update

  • @FrotLopOfficial
    @FrotLopOfficial 28 days ago

    Wow, your scripting and structuring have gotten insanely good. I would be super curious how the retention rate of this vid compares to your last one about GPT-4o.

  • @francofiori926
    @francofiori926 27 days ago +4

    Maybe GPT-4o can solve the mystery of the Voynich manuscript

  • @brexitgreens
    @brexitgreens 28 days ago +4

    The first major flaw I was able to spot: while GPT-4o can read long transcripts in a split second, it still fails to associate fragments with respective timestamps correctly.

    • @ronilevarez901
      @ronilevarez901 28 days ago +1

      In my tests it is good at summarizing and adapting text style, but it totally failed to reason about what it was writing, and about itself, in many ways. GPT-3.5 turned out to be better, or at the same level, in that aspect. It might have more functionality, but it is not "more", sadly.

    • @Uthael_Kileanea
      @Uthael_Kileanea 28 days ago

      Tell it that and it will correct itself.

    • @ronilevarez901
      @ronilevarez901 28 days ago +1

      @@Uthael_Kileanea I did. 3.5 corrects itself properly. 4o kept rewriting the text, instead of its interpretation of the text, until I explicitly told it not to. And even then, it failed to do the right thing. Good for essay writing, bad for a more interesting chat with it.

    • @brexitgreens
      @brexitgreens 27 days ago

      @@Uthael_Kileanea The transcript in question was a standard SRT (subtitle) file. When GPT-4o failed to provide the correct timestamp for a random quotation, I asked it to provide the turn index number instead - which should be easier because it's incremental. It failed that too.
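
For anyone who wants to reproduce that timestamp test outside the model: an SRT file is just numbered blocks of index / `start --> end` / text, so mapping a quotation back to its timestamp and turn index can be done deterministically. A minimal Python sketch, with a made-up transcript for illustration:

```python
import re

def parse_srt(srt_text):
    """Parse SRT text into (index, start, end, text) tuples."""
    entries = []
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        index = int(lines[0])
        start, end = [t.strip() for t in lines[1].split("-->")]
        text = " ".join(lines[2:])
        entries.append((index, start, end, text))
    return entries

def find_quote(entries, quote):
    """Return the first entry whose text contains the quotation."""
    for entry in entries:
        if quote.lower() in entry[3].lower():
            return entry
    return None

sample = """1
00:00:01,000 --> 00:00:03,000
Hello and welcome back.

2
00:00:03,500 --> 00:00:06,000
GPT-4o is truly multimodal."""

print(find_quote(parse_srt(sample), "truly multimodal"))
# → (2, '00:00:03,500', '00:00:06,000', 'GPT-4o is truly multimodal.')
```

This is the ground truth a model's answer could be checked against; a correct reply should return both the incremental index and the start timestamp of the matching block.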

  • @ianPedlar
    @ianPedlar 25 days ago

    So when do we get it?
    Currently in the Android ChatGPT app I can choose to use 4o, but I still have the delay, I can't interrupt, and I can only choose one of the preset voices.

  • @corsoandcanvas
    @corsoandcanvas 25 days ago

    Such a great breakdown! I really appreciate how deep you went on all the elements that weren’t presented. Big fan!

  • @evil1knight
    @evil1knight 28 days ago +3

    I want glasses wearables built on top of this

    • @GothicDragonX
      @GothicDragonX 28 days ago

      Give it time, it's coming :)

    • @AmazingArends
      @AmazingArends 27 days ago

      In the movie "Her", all you had to do was put your smart phone in your pocket with the camera sticking out and Samantha the AI could see everything you did. She talked to you through a wireless earpiece.

  • @primalplasma
    @primalplasma 28 days ago +4

    We are all characters in a Twilight Zone episode.

    • @matteverlove
      @matteverlove 27 days ago +2

      The Brain Center at Whipple’s, starring GPT-4o as Robby the Robot

  • @tsforero
    @tsforero 26 days ago +1

    Matt, you ponder the question a few times whether the answer to these new capabilities is really just the multimodal aspect. I absolutely think that this is the case. The key, as we now all understand, is context and memory. With a greater diversity of context clues (modalities), it makes sense that the contextual understanding of the model becomes more complex. And we now know greater complexity = greater intelligence.
    We now have the following levers for increasing intelligence in AI:
    1. Neural connections
    2. Context, memory, attention
    3. Input training data
    4. Diversity in modalities
    Would love to see what happens when these models really start getting placed into robotics and gain additional modalities (temperature, EM, proprioception, touch, spatial awareness, balance, etc.).

  • @markmuller7962
    @markmuller7962 28 days ago +1

    I'm so happy about this progress. OpenAI really is doing an amazing job of staying ahead of everything else.

  • @Jc8.05
    @Jc8.05 23 days ago +3

    bru we’re so cooked

  • @iixotic-
    @iixotic- 28 days ago +5

    Matt, are you ever doing a live stream again this weekend? Or?

    • @MattVidPro
      @MattVidPro  28 days ago +2

      probably not :( I will try and schedule one next week

    • @iixotic-
      @iixotic- 28 days ago +1

      @@MattVidPro all good. Thank you bro 🙏🏾❤️

  • @DamielBE
    @DamielBE 27 days ago

    And on the font thing, can you save it in a font format for PC or Mac?

  • @rockapedra1130
    @rockapedra1130 28 days ago +1

    This channel should have a lot more followers! Excellent work!

  • @rogerc7960
    @rogerc7960 28 days ago +3

    Understanding video has uses in robotics and CCTV monitoring.

  • @Killingglorie
    @Killingglorie 28 days ago +4

    So why is it cheaper if it’s the most powerful version of ChatGPT? Will the other models be even cheaper than 4o now?

    • @MattVidPro
      @MattVidPro  28 days ago +12

      Because something… else is coming..

    • @groboclone
      @groboclone 28 days ago +4

      Just a guess, but the fact that it is so fast and responsive would imply to me that it is actually smaller and LESS computationally expensive than former models, yet performs better. Could be due to some combination of better training data, algorithmic breakthroughs, etc.

    • @nathanbanks2354
      @nathanbanks2354 28 days ago

      After trying Mixtral 8x7b and Mixtral 8x22b, which run at about the same speed as Llama 3-8b and Llama 70b, I'd guess that it uses a mixture-of-experts approach that allows most of the calculations for any query to run within the 80GB limit of a single H100 GPU, though a different query would run on a different H100 GPU. Maybe I'm wrong, and it's the same server, not the same GPU, or a pair of GPUs, but some sort of sharding/mixture-of-experts approach. They probably also overtrained it like they did with Llama 3. Plus various other tricks, such as improving the embeddings, though I'm not sure that would make it faster/cheaper... this is my best guess.

  • @gizmomismo7071
    @gizmomismo7071 27 days ago

    Fantastic analysis of the capabilities of GPT-4o. I can't wait to see what they're going to show us next this year!!!

  • @Glowbox3D
    @Glowbox3D 27 days ago

    It just keeps comin'! Very cool. Can't wait to play with this one when it fully rolls out.

  • @iminumst7827
    @iminumst7827 28 days ago +4

    I can confidently say this is the first real AGI. Like, I know they don't want to say it because it's a big claim, but the amount of context it has allows it to solve so many diverse problems. This is not just natural-language mimicry anymore; it can code, write, sing, understand human tone of voice, create images, etc. It's not superhuman yet, but it is clearly competitive with humans.

    • @sjcsscjios4112
      @sjcsscjios4112 27 days ago

      Imagine when the base model gets updated to gpt-5

  • @fluffyspark798
    @fluffyspark798 28 days ago +3

    SO HYPED

  • @Serifinity
    @Serifinity 27 days ago +1

    Another great video. There is definitely something going on at OpenAI, the way they manage to be ahead of the curve. I think they are using an inhouse GPT-5 to help run R&D, possibly even sit in board meetings and help run the business. They seem to have something no one else has.

  • @zoesays3830
    @zoesays3830 22 days ago

    @24:29
    Did I really see what I just saw?
    The capabilities for old books being scanned at such speed are mind-blowing!

  • @itubeutubewealltube1
    @itubeutubewealltube1 28 days ago +4

    People didn't write this model... it was mostly written by AI itself. That's the difference. So in Terminator, it was predicted that the year 2027 was when Kyle Reese was sent back in time... three years to go, baby...

    • @yoyoclockEbay
      @yoyoclockEbay 28 days ago

      Just letting you know, this is not a good thing.

  • @markjackson1989
    @markjackson1989 27 days ago +6

    I have used GPT-4o today. It doesn't work at all like the demo. It can't change inflection, sing a song, or hum a tune. It had no concept of my own inflection either. It also did not support real interruption. It spoke, then you spoke. And for everyone wondering, it was 4o, because I reached the rate limit.
    Tl;dr: it doesn't work anything like the demo. At least right now.

    • @allanshpeley4284
      @allanshpeley4284 25 days ago +5

      Yeah, that's because you're not using the complete version. I think it was a mistake on their part to enable GPT-4o in accounts without releasing all the technology, which apparently is happening in the next few weeks.

    • @verb0ze
      @verb0ze 21 days ago +2

      I'm not impressed by these demos until I get the product in my hands so I can test the features myself. Too much faking it until you make it these days...

    • @someperson9998
      @someperson9998 13 days ago

      Because it isn't exactly out yet. They only gave some people the text version, not the voice. The voice you used was likely just GPT-3.5.

    • @someperson9998
      @someperson9998 13 days ago

      @@verb0ze OpenAI faking a demo would be a horrible business move. Who's faking it till they make it?

    • @kennyjohnson63
      @kennyjohnson63 10 days ago

      @@allanshpeley4284 Ah, yeah I agree. I think it was a mistake too. I have GPT-4o in my account but had the same experience as @markjackson1989. I keep seeing all these videos about all the stuff GPT-4o can do but then it doesn't work for me. I think they should have called it something different to avoid the confusion.

  • @dankron_
    @dankron_ 27 days ago

    14:25 Multiple blackboards are pretty standard in universities; they rotate around and you can have multiple "pages" of blackboards.

  • @snijasurendran5534
    @snijasurendran5534 24 days ago

    GPT-4o: AI That Understands Everything (Review and Analysis)
    ua-cam.com/video/Jyr_RKsVqRM/v-deo.html

  • @Thicolate
    @Thicolate 28 days ago +2

    Present

  • @lukepurse9042
    @lukepurse9042 21 days ago +9

    Are you a shareholder?

    • @jamesaritchie1
      @jamesaritchie1 6 days ago

      I have shares in several AI companies. So what?

    • @ssw4m
      @ssw4m 3 days ago

      OpenAI is a private company, so probably not, although e.g. a Microsoft shareholder could be considered an indirect owner of OpenAI.

    • @NelsonStJames
      @NelsonStJames 19 hours ago

      Some ppl are just enthusiastic about this tech. If he was a shareholder it would be dumb not to say that up front, but it doesn’t mean anything one way or the other if he’s not lying.

  • @Heyworld21
    @Heyworld21 11 days ago +1

    This is what I would call the prototype of AGI. This AI has general intelligence in all subjects, and I know I ain't the only one to notice this.

  • @leslieviljoen
    @leslieviljoen 28 days ago +1

    15:53 it didn't screw up. Sound's is short for "sound is" - every sound is like a secret.

  • @entropy9735
    @entropy9735 28 days ago +4

    I still dislike this release strategy... A few days after the event, we still have a near-GPT-4-equivalent text model without any of the extra features they did like 50 demos of.

    • @DeceptiveRealities
      @DeceptiveRealities 28 days ago

      Better than Google. All pre-made demos. No live demo. Promises promises. 😄

  • @Allplussomeminus
    @Allplussomeminus 28 days ago +4

    It seems like everyone else is "trying" to AI... OpenAI "is" AI. I think everyone should drop the act and funnel all resources to them to get this ball rolling.

  • @t.1.0.17
    @t.1.0.17 26 days ago

    Do you record with your headset microphone?

  • @nintishia
    @nintishia 25 days ago

    Thanks a lot for bringing all these additional features into focus that OpenAI chose to underplay during its demonstration session. The realisation that this could be a new kind of LLM altogether with way advanced multimodal capabilities is a bit unsettling.

  • @Airbender131090
    @Airbender131090 27 days ago +2

    $5 per million tokens?! This is ridiculous! Gemini's million-token context is unlimited and free. You can put huge books and videos into it.
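
For scale on that pricing: API billing is per token, so a call's cost is just (tokens / 1,000,000) × the per-million rate, with input and output priced separately. A quick sketch, assuming GPT-4o's launch rates of $5 per million input tokens and $15 per million output tokens (check the current price list before relying on these numbers):

```python
def api_cost_usd(input_tokens, output_tokens,
                 input_rate=5.00, output_rate=15.00):
    """Estimate the cost of one call; rates are USD per million tokens."""
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# A huge prompt (roughly a whole book, ~200k tokens) with a short answer:
print(round(api_cost_usd(200_000, 1_000), 4))  # → 1.015
```

So even book-sized prompts cost on the order of a dollar per call, which is the trade-off against a free-tier offering with usage caps.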

  • @BapaG33
    @BapaG33 26 days ago

    It's way past time to upgrade Amazon Echo, Apple Siri, and Google Home. I've used Echo Buds for years. The sound isn't the greatest, but being able to ask for any song or ask any question hands-free has been great. Having a super-smart assistant in there would be incredible.

  • @NatureFreak1127
    @NatureFreak1127 25 days ago

    I plan to use it for language learning. I will feed it the vocabulary and grammar for each lesson and instruct it to converse with me using those structures.

  • @IceMetalPunk
    @IceMetalPunk 27 days ago +1

    For a couple years now, I've said there are three main obstacles between current GenAI and human-level GenAI: multimodality, size, and continual learning. The size of models, I expect, will continue to grow, especially as NVIDIA pumps out better hardware for them. Continual learning is tough on these massive models, but if I understand correctly, Google's "Infini-attention" paper introduces something very similar to -- if not an actual form of -- continual learning for massive Transformers. And as we see here, multimodality in the token space does *amazing* things for the capabilities of these models, and we're getting them, one new modality at a time.
    At this rate, I suspect we'll have all these three issues more or less solved within the next two or so years, and after that it's just about scale to hit human-level AGI.
    As culty as it sounds, I do, in fact, feel the AGI. (RIP to Ilya's tenure at OpenAI, by the way.)

  • @BerryTheBnnuy
    @BerryTheBnnuy 18 days ago

    I've been working on a personal project that uses Whisper v3 (hosted locally), and it CAN tell the difference between a human and a bird chirping or a dog barking. While I was testing it my dog started barking and it output "[dog barking]". Any non-human sounds it hears go into [square brackets]. So I would be typing code while the project was running in the background and it would output "[typing]".
    There are other issues, like it doesn't detect color and tone of voice, like you were saying (color and tone refer to emotional content).
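
The bracketed non-speech tags described above are easy to post-process, since events like [dog barking] arrive inline with the transcript text. A small sketch using an invented transcript string (Whisper's actual tag wording varies by model and audio):

```python
import re

def split_transcript(text):
    """Separate spoken words from bracketed non-speech events."""
    # Collect everything inside [...] as an event label.
    events = re.findall(r"\[([^\]]+)\]", text)
    # Remove the tags (and surrounding whitespace) from the speech.
    speech = re.sub(r"\s*\[[^\]]+\]\s*", " ", text).strip()
    return speech, events

speech, events = split_transcript(
    "Hold on a second [dog barking] okay, I'm back [typing]")
print(speech)   # → Hold on a second okay, I'm back
print(events)   # → ['dog barking', 'typing']
```

This only recovers event labels, not tone of voice; as the comment notes, emotional color is not represented in the plain transcript at all.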

  • @HenningKilset76
    @HenningKilset76 17 days ago

    The announcement was very consumer-focused. I swapped our internal generation from GPT-4-128K (Turbo) to GPT-4o, and the capability increase is insane, especially the speed.

  • @missoats8731
    @missoats8731 28 days ago +1

    I wouldn't necessarily say they were "hiding" these features from us. They made a detailed blog post about them at the top of their page 😅

  • @marijanmadunic3046
    @marijanmadunic3046 10 days ago

    NO AUDIO COM...?! I am working in the promo version of 4o and it does not support voice communication, nor spoken questions and answers... When I ask it, it responds that it does not support audio communication... Am I making a mistake, and if so, where? Help!

  • @theindubitable
    @theindubitable 27 days ago

    I just tried it; the image and the text logo it creates are beyond sick!

  • @stevedavis1437
    @stevedavis1437 28 days ago

    I appreciate how you are mind-blown by the right advances, so I feel I can trust your take when you see something new that I hadn't seen. Your posts are always really interesting, Matt.

  • @smellthel
    @smellthel 27 days ago

    Whoa! This is miles ahead of what I was expecting this year! I guess multimodality is the future because it leads to a deeper understanding of the world. I love it. We live in the future!

  • @420Tecknique
    @420Tecknique 24 days ago

    20:20 Something you missed is that GPT-4o cleaned up the coaster in the generated image, removing only the stains but leaving the coaster the same. Basically sprucing up the product unprompted. Just a little thing, but cool if you understand what that means for reasoning about the images it sees. It shows a rudimentary understanding of the physical world.

  • @whataworld8378
    @whataworld8378 23 days ago

    Excellent combination of features. The persistence of AI models and renderings means that it can generate quality videos now.

  • @DamielBE
    @DamielBE 27 days ago

    Can you imagine using that for, say, streamed panels from a convention, to produce a transcript or summary for the hearing impaired or those for whom English isn't their native language? So much potential!

  • @Skeeva007
    @Skeeva007 27 days ago

    I can't freaking wait to get access to everything shown! Let's goooo!!!!

  • @humunu
    @humunu 26 days ago +1

    “When AGI is developed, the company will have to work to pretend it’s less capable than it is”…”Why did they hide these capabilities in the website?”

  • @stevegibxon3886
    @stevegibxon3886 24 days ago

    Exciting capabilities for content creators and storytellers, especially with image generation. The BIG question is who owns the copyright: is it in the idea in the text prompt and thought process, or in the production?

  • @nomad_ape
    @nomad_ape 23 days ago

    How do I get this GPT-4o? I tried it from their website, and it was just text.
    Or do I need to subscribe to the paid version to use the latest ChatGPT?

  • @kent888021
    @kent888021 27 days ago

    Hi Matt, how do I transcribe a voice memo from my phone? Could you please share a tutorial video on it?

  • @turbo9325
    @turbo9325 26 days ago

    Thanks for the update, bro