Explaining OpenAI's o1 Reasoning Models

  • Published Sep 17, 2024

COMMENTS • 32

  • @davidwipperfurth8465
    @davidwipperfurth8465 5 days ago +12

    OpenAI seems to redefine "Open" with every announcement.

  • @bastabey2652
    @bastabey2652 5 days ago +5

    the headings of the steps in the thinking process might be effective marketing gimmicks

  • @tn919
    @tn919 5 days ago +3

    Thank you Sam for continuing to do these videos; it's very helpful to get an explanation of where things currently stand with these models. When I saw this, it reminded me very much of LangChain and the approach of interpreting what the user is asking and, based on the interpretation, handing the "tasks" (things to be solved) to more specialized models.

    • @concernedindian144
      @concernedindian144 2 days ago

      I saw/heard many users saying it's similar to an approach by LangChain; is there any tutorial/video showing how to do that?
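
No tutorial is linked in the video, but the interpret-then-route pattern these two comments describe is easy to sketch. The following is a minimal, hypothetical illustration, not OpenAI's internal approach and not LangChain's actual API; the labels, specialist prompts, and choice of gpt-4o-mini are assumptions for demonstration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical specialist system prompts, one per task type.
SPECIALISTS = {
    "math": "You are a careful math tutor. Show your working step by step.",
    "code": "You are a senior engineer. Answer with tested, idiomatic code.",
    "general": "You are a helpful general-purpose assistant.",
}

def classify(question: str) -> str:
    """Ask a cheap model to label the request so we can route it."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the user request as exactly one word: math, code, or general."},
            {"role": "user", "content": question},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in SPECIALISTS else "general"

def route(question: str) -> str:
    """Hand the request to whichever specialist the classifier picked."""
    label = classify(question)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SPECIALISTS[label]},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(route("What is the derivative of x**3 * sin(x)?"))
```

LangChain wraps this kind of branching in its own abstractions, but the underlying pattern is just classify, then dispatch.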

  • @GriffinBrown-tq9jz
    @GriffinBrown-tq9jz 5 days ago +3

    Couldn't wait to have your grounded explanation of this new model

  • @novantha1
    @novantha1 5 days ago +3

    My suspicion is that this style of inference-heavy reasoning capability might actually be best suited to edge deployment. This is a really expensive form of inference that IMO doesn’t match the business model of large corporations, where the general attitude is “We’ll spend an extra $10 million in training if it means we can deploy a 10% smaller model”. To an end user the equation is kind of backwards: “If I can let the model run for longer, and I get better reasoning capabilities for fewer training dollars and the same amount of RAM on my device, that sounds pretty good”.
    I think for certain tasks we could see quite modest hardware delivering very impressive performance with something like this.

  • @WillJohnston-wg9ew
    @WillJohnston-wg9ew 5 days ago +3

    What a great analysis and summary! I wonder if this is being released because of a lack of real progress on 5o, and a realization that the 10x improvement is just not achievable without some kind of big new breakthrough. I suspect they may have hit a wall with the kind of 'human-like' reasoning and instead found these methods of doing higher-quality logical reasoning. It would be great if you could do a video on what is happening with Google's Project Astra and whether there is an API or Colab. Also, it seems that in some cases it might save costs by being more efficient in getting to an answer?

  • @indexed2232
    @indexed2232 5 days ago +1

    Enjoyed going through the new models together via your videos, along with the demos.

  • @el_arte
    @el_arte 5 days ago +2

    Thanks, Sam. I have been getting these kinds of results with hierarchical prompting (chains or flowcharts) with multiple turns and code interpreter for some time, using GPT-4o mini. Of course, at the expense of tokens.
    Now, if OpenAI was able to bake all of it into one inference pass, then their approach is far superior.
    But, since they are API-based, this will remain a mystery.
    I think the API approach is the secret to delivering AGI in the long term, as LLMs alone can’t get us there and you cannot ask your customers to orchestrate the many processes required to get there.
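
For concreteness, here is a minimal sketch of the kind of multi-turn hierarchical prompting @el_arte describes: plan, execute, then distill, each as a separate call. The prompts and the three-stage split are illustrative assumptions, not a reconstruction of anyone's actual pipeline, and every extra turn costs tokens, which is exactly the trade-off mentioned above.

```python
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    """One turn against a small, cheap model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def hierarchical_answer(question: str) -> str:
    # Turn 1: ask for a plan instead of an answer.
    plan = ask("Break the problem into 3-5 numbered steps. Do not solve it.",
               question)
    # Turn 2: execute the plan, with the plan fed back as context.
    worked = ask("Work through each step of this plan in order, showing your reasoning.",
                 f"Question: {question}\nPlan:\n{plan}")
    # Turn 3: distill the final answer from the full working.
    return ask("State only the final answer implied by this working.",
               f"Question: {question}\nWorking:\n{worked}")

print(hierarchical_answer("A train leaves at 09:40 and arrives at 13:05. How long is the trip?"))
```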

  • @emolamol
    @emolamol 5 days ago +4

    love the reasoning in these videos

  • @Sonic2kDBS
    @Sonic2kDBS 5 days ago +1

    Interesting video. I might mention that the tokens shown on the OpenAI website are just a summary of the actual reasoning. That is why there are so "few tokens" to see, and why it looks like they use more tokens for reasoning over the API than on the website. Keep on :)

  • @formigarafa
    @formigarafa 4 days ago +1

    This whole process looks a lot like RouteLLM: some specific models for planning and breaking down the chain of thought, a model trained to sometimes disagree with the previous output, and a small bunch of agents to glue everything together.
    And now they just charge for tokens on all the models called but provide only the final result, which is what most users are expecting.

  • @karlwest437
    @karlwest437 5 days ago +1

    How does it decide which chain of thought is best, if it doesn't know what the correct answer is?
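
OpenAI has not published how o1 selects among candidate chains. One well-known technique from the literature that needs no ground truth is self-consistency (Wang et al.): sample several independent chains of thought and majority-vote on the final answer, on the theory that correct reasoning paths converge while wrong ones scatter. A minimal sketch, where the prompt format and the ANSWER: marker are my own conventions:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

def sample_answers(question: str, n: int = 5) -> list[str]:
    """Sample n independent chains of thought at a non-zero temperature."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0.8,  # diversity between chains is the whole point
            messages=[
                {"role": "system",
                 "content": "Reason step by step, then finish with 'ANSWER: <answer>'."},
                {"role": "user", "content": question},
            ],
        )
        text = resp.choices[0].message.content
        answers.append(text.rsplit("ANSWER:", 1)[-1].strip())
    return answers

def self_consistent_answer(question: str) -> str:
    """Majority vote over final answers; no ground truth required."""
    votes = Counter(sample_answers(question))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("If 3 pens cost $4.50, how much do 7 pens cost?"))
```

Whether o1 uses voting, a learned verifier, or something else entirely is not public; this only shows that selection without a known answer is possible.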

  • @asksearchknock
    @asksearchknock 5 days ago

    Thank you for the video - saves me reading the docs 😊

  • @micbab-vg2mu
    @micbab-vg2mu 4 days ago

    Thanks for the update :)

  • @kevinehsani3358
    @kevinehsani3358 3 days ago

    Have they introduced, or said that they are going to introduce, some kind of caching like Claude's to help reduce token costs?

    • @samwitteveenai
      @samwitteveenai  1 day ago

      nothing public about caching yet unfortunately
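
For reference, this is roughly what the Claude-style caching in the question looks like. A minimal sketch assuming the Anthropic Python SDK and the prompt-caching beta as it stood around the time of this video; the beta header, model string, and file name are illustrative and may have changed since.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical long, reusable prefix (e.g. a reference document).
LONG_CONTEXT = open("reference_doc.txt").read()

resp = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=[{
        "type": "text",
        "text": LONG_CONTEXT,
        # Marks this block as a cacheable prefix: repeat calls that share
        # it bill the cached portion at a reduced input-token rate.
        "cache_control": {"type": "ephemeral"},
    }],
    messages=[{"role": "user", "content": "Summarise section 2."}],
    # Beta feature flag required when this video was published.
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
print(resp.content[0].text)
```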

  • @SwapperTheFirst
    @SwapperTheFirst 5 days ago +1

    Thanks. I'm a simple man and have a simple question: is it better than Sonnet 3.5 for coding tasks?

    • @samwitteveenai
      @samwitteveenai  1 day ago +2

      It depends what you are doing. For most things Sonnet will be better, but for architecting things from scratch this seems to do well in my early tests.

  • @Anselm243
    @Anselm243 2 days ago

    These models, from GPT-3.5 to o1, still struggle with basic addition and subtraction involving more than 20 numbers... and this is not limited to GPT; Claude struggles too.

  • @IdPreferNot1
    @IdPreferNot1 4 days ago

    People need hand-holding. Until they demonstrate the capabilities of these models, no one is going to pay $60 token rates. Truly, these demonstrations of logic are so lame. The voice mode ones were more immediately interpretable... "oh, I could use that". And then... they ghost most of us on that feature. And yes, only API users need this new power, and we'd be happy to at least explore it. And yet, you have to be tier 5 to use it. The people guiding their decisions must truly be some McKinsey mgmt consultant morons.

  • @jay-dj4ui
    @jay-dj4ui 4 days ago +1

    expensive thinking.....

  • @0xunknown336
    @0xunknown336 5 days ago +1

    It's not thinking; AI can't think, it's processing. I like your videos, but to be honest, this one is disappointing.

    • @samwitteveenai
      @samwitteveenai  5 days ago +3

      That’s a fair point. I agree I probably could have put “thinking” in inverted commas. I am curious, though: how would you define the difference between thinking and processing? When does processing become thinking?

    • @lucasjans
      @lucasjans 5 days ago +1

      @samwitteveenai Thinking is when it can handle novel situations that have never been seen before. While training on pre-existing reasoning datasets gives models simulated thinking, and offers a lot of value, it is not the same.

    • @samwitteveenai
      @samwitteveenai  5 days ago +4

      So the ability to generalize. I totally agree this is the goal for all models, generative or not.
      I am in two minds about the ability of these models to generalize. On one hand, they clearly can do a lot of tasks, like coding, where they produce outputs that other models haven't done well on. On the other, OpenAI is training with synthetic data, a lot of which has come from the distribution of inputs that people put into their models; e.g. there are not a huge number of novel situations that they haven't seen that people are now suddenly putting into their models.
      I think the models do have some amount of generalization, but I would agree that it is not as much as a lot of people think.

    • @WillJohnston-wg9ew
      @WillJohnston-wg9ew 5 days ago +1

      @samwitteveenai I would think that the sum of the last 5 digits of pi was novel and 'thinking'. When the model can outline its reasoning path, it's not that much different from the human thought process.

    • @RalphFreeman-ok5of
      @RalphFreeman-ok5of 5 days ago +2

      @samwitteveenai I tend to think processing is doing a set of actions that have been done before, where you are following a recognised procedure. With thinking there is often no procedure, because it's not been done before. The result of thinking may be the creation of a procedure.