What is the LLM's Context Window?

  • Published Jan 15, 2025

COMMENTS • 36

  • @dkonu2b
    @dkonu2b 1 month ago +2

    I like that this is high-level. Perfect for those of us dabbling with various platforms who don’t want just another low-level tutorial.

    • @NewMachina
      @NewMachina  1 month ago

      Thanks for your feedback... follow along with me as I go high-level and then one layer down to demystify these topics... appreciate you sharing ...

  • @aritzolaba
    @aritzolaba 5 months ago +4

    Crystal clear explanation. Thanks! More please :)

    • @NewMachina
      @NewMachina  5 months ago +1

      You got it! Working to make each video better and better….

  • @sidraijaz2755
    @sidraijaz2755 2 months ago +1

    Very nice video and easy to understand, sir. Excellent.

    • @NewMachina
      @NewMachina  2 months ago

      Thank you for the feedback… appreciate it.

  • @ParlonIA
    @ParlonIA 2 months ago +2

    Thanks bro, clear and nice info

    • @NewMachina
      @NewMachina  2 months ago

      Thanks for the feedback....

  • @BioHazarddasdadfasfsad
    @BioHazarddasdadfasfsad 1 month ago

    Clear and nice! Exactly the answers I was looking for.
    Now I have to somehow evaluate how many tokens I am passing to a model through Ollama.

    • @NewMachina
      @NewMachina  1 month ago

      Glad it helped... hoping to cover Ollama soon... this space is evolving so quickly.... (The sketch just below this thread shows one way to count the tokens you're passing in.)
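
One way to check how many tokens you are sending: a minimal sketch, assuming a local Ollama server on its default port and an already-pulled model ("llama3" below is just an example name). Ollama's /api/generate response reports prompt_eval_count (tokens in your prompt) and eval_count (tokens generated):

    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

    def count_tokens(model: str, prompt: str) -> dict:
        # Send a non-streaming request so the reply is a single JSON object.
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
        req = urllib.request.Request(
            OLLAMA_URL,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        # prompt_eval_count may be omitted when the prompt is served from cache.
        return {
            "prompt_tokens": body.get("prompt_eval_count"),
            "output_tokens": body.get("eval_count"),
        }

    print(count_tokens("llama3", "Why is the sky blue?"))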

  • @KI4ASK
    @KI4ASK 15 hours ago

    Perfect

  • @veerabalajayaraj4459
    @veerabalajayaraj4459 4 months ago

    Best explanation!

    • @NewMachina
      @NewMachina  4 months ago

      Glad it was helpful! Trying to get better with each video…. Thanks for the feedback…

  • @rrrubanno
    @rrrubanno 4 months ago

    Great content!

    • @NewMachina
      @NewMachina  4 months ago

      Thank you for your feedback… trying to get better with each video…. 🙏

  • @ParthivShah
    @ParthivShah 4 months ago

    Thanks.

    • @NewMachina
      @NewMachina  4 months ago

      Glad you liked it…. Working to get better with each video… let me know if you have any ideas for videos …🙏

  • @ramakrishnay9887
    @ramakrishnay9887 5 months ago +1

    Thanks for the explanation. Does it mean that the context window is shared or separate between input and output?

    • @NewMachina
      @NewMachina  5 months ago

      For LLMs, the context window is for input tokens... There is normally an LLM setting, called "maxLength" or something similar, that controls the maximum number of tokens that will be generated for a response... Thanks for the feedback and question ....

    • @paultparker
      @paultparker 5 months ago

      @@NewMachina I’m going to disagree here. I believe the context window typically includes both the LLM input and output, especially in a chat session like your examples. This is in most cases primarily how the LLM knows what it said before.

    • @NewMachina
      @NewMachina  5 months ago +1

      @paultparker You are right... I was going through some documentation that was ambiguous about this... and assumed it didn't include output... I have found several other documents aligned with the context window including both input and output. Thanks for helping clarify this... (the budgeting sketch just after this thread shows how input and output share the window)

    • @paultparker
      @paultparker 4 months ago

      @@NewMachina you’re welcome!
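
Since the thread above settled that the context window covers both input and output, here is a minimal budgeting sketch using OpenAI's open-source tiktoken tokenizer (pip install tiktoken). The 8192-token window and the cl100k_base encoding are illustrative assumptions, not any specific model's published figures:

    import tiktoken

    CONTEXT_WINDOW = 8192  # assumed window size, for illustration only
    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models

    def output_budget(prompt: str, context_window: int = CONTEXT_WINDOW) -> int:
        # Tokens left for the reply once the prompt is counted against the window.
        prompt_tokens = len(enc.encode(prompt))
        return max(context_window - prompt_tokens, 0)

    prompt = "Summarize the history of the telescope in three paragraphs."
    print("prompt tokens:", len(enc.encode(prompt)))
    print("output tokens that still fit:", output_budget(prompt))

This is also why a "maxLength"/max_tokens setting matters: asking for more output tokens than the remaining budget will, depending on the provider, either raise an error or truncate the reply.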

  • @paultparker
    @paultparker 5 months ago

    The question on tooling is a good question. In my personal case, I don’t know enough here to know what tool I would prefer to use: my inclinations would be VS Code and/or notebooks, but I don’t really understand Jupyter notebooks, to be honest, having never used them. I believe Colab and the like use notebooks?

    • @NewMachina
      @NewMachina  5 months ago +1

      I am likely going to be showing examples just running in VS Code, and maybe some in AWS using Lambdas, and will likely do a simple one with Jupyter Notebooks to see how viewers like it ... thanks for providing your feedback on this....

    • @NewMachina
      @NewMachina  5 months ago

      @paultparker Check out the frameworks LangChain and LlamaIndex... I think these two open source frameworks will continue to get more traction... I am working on some videos in this area next ... I would be interested if you have an opinion or thoughts on these frameworks ... not urgent, I suspect you are busy as we all are... but if you get a chance to check these out, let me know what you think ...

    • @paultparker
      @paultparker 4 months ago

      @@NewMachina I thought that there was a different successor to LangChain, and LlamaIndex doesn’t sound right. But I have not had time to mess with doing any of this myself.

    • @NewMachina
      @NewMachina  4 months ago

      @paultparker Are you maybe thinking about LangGraph or LangServe? Looks like there are some additional extensions to LangChain... some are driven by LangChain while others are from other teams.... Still getting a sense of all of these...

  • @paultparker
    @paultparker 5 months ago +1

    Can you substantiate the claim that LLM providers do this primarily to make the models cheaper to run? I ask this because my understanding is that this is actually how the models work and have worked since the initial research. So it seems incorrect to say that this is an optimization chosen for performance at scale.

    • @NewMachina
      @NewMachina  5 months ago +1

      Thanks for reaching out with your question... Can I get a quick clarification... In the video "What is the LLM's Context Window", are you talking about the line "While larger context windows improve the LLM’s performance on longer text blocks, they also demand more computational resources"... I wanted to make sure I was following up on the same part of the video you were inquiring about ...

    • @paultparker
      @paultparker 4 months ago +1

      @@NewMachina No, I think that was towards the beginning of the video, whereas what I am remembering was towards the end.
      Yes, currently larger context windows require quadratically more computation. However, there is a new approach that just came out for infinite context windows. We will have to see if that is any good. Most errors in this post come from Siri’s broken dictation.

    • @NewMachina
      @NewMachina  4 months ago +1

      Ok, I will look into that... if you have a reference to this approach on infinite context please share.... New stuff happening quickly ... (the quick sketch below this thread shows why attention cost grows quadratically with context length)
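
On the quadratic point raised above: in standard self-attention every token attends to every other token, so the score matrix alone is n × n per head, per layer. A back-of-the-envelope sketch (the head count is an illustrative assumption, not any specific model's figure):

    def attention_score_entries(n_tokens: int, n_heads: int = 32) -> int:
        # One n x n attention score matrix per head, per layer.
        return n_heads * n_tokens * n_tokens

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens -> {attention_score_entries(n):,} score entries per layer")
    # 10x the context length => ~100x the attention work per layer.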

  • @debojitmandal8670
    @debojitmandal8670 5 months ago

    Then I have another question: what's this concept?
    Say for e.g. "Annie loves jam but she hates bread and she also loves fruits".
    So if I say the context window is 2,
    then I take 2 words to the left and 2 words to the right as input.
    So for e.g. "Annie loves but she" is the input and "jam" is the output.
    My second question is: what's the difference between context length and context window? To me, whatever you explained sounded like context length rather than window, so please help me clarify.

    • @NewMachina
      @NewMachina  5 months ago

      Yes, the context window is measured in tokens. If the context window is 2, then you could get one token in and one token out.
      For the second question, I should have been consistent and used "context window" throughout... for this topic, context length is the same as the context window and is measured in tokens.

    • @NewMachina
      @NewMachina  5 months ago

      Thanks for taking the time to ask me these questions ....

    • @debojitmandal8670
      @debojitmandal8670 5 months ago

      @@NewMachina But sir, what I have studied is that context window and context length are different: the context window is the small window where your focus is, but the two terms get interchanged very often.

    • @NewMachina
      @NewMachina  4 months ago

      Ahh... I see what you are saying... I will try to be more precise with my terminology as well... thank you for sharing... (the sketch just below contrasts the word2vec-style sliding window you are describing with the LLM sense of the term)
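
The thread above mixes two senses of "context window", so a small sketch may help keep them apart: the word2vec/CBOW-style sliding window from the question (window = 2 means up to two words on each side of a target word) versus an LLM's context window (the total token budget per request). Using the commenter's own sentence:

    def cbow_pairs(sentence: str, window: int = 2):
        # Yield (context words, target word) pairs, word2vec/CBOW style.
        words = sentence.split()
        for i, target in enumerate(words):
            left = words[max(0, i - window):i]
            right = words[i + 1:i + 1 + window]
            yield left + right, target

    sentence = "Annie loves jam but she hates bread and she also loves fruits"
    for context, target in cbow_pairs(sentence):
        if target == "jam":
            print(context, "->", target)
    # ['Annie', 'loves', 'but', 'she'] -> jam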