SBTB23: Omar Khattab, DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines

  • Published Nov 29, 2024

COMMENTS • 11

  • @beedr.metwallykhattab115 · 6 months ago

    Very good Omar Khattab
    May God protect you, watch over you, and bless you.

    • @user-0j27M_JSs · 26 days ago

      Why do you think he's Muslim? He might have already passed this stage in human development.

  • @vbywrde · 9 months ago

    Great! Thank you!

  • @420_gunna · 10 months ago +6

    dank

  • @julianrosenberger1793 · 10 months ago +2

    🙏

  • @pensiveintrovert4318 · 6 months ago +1

    The whole point of LLMs is the ability to interact with them directly in natural language. If that is gone, then FMs should be built around automation and NOT around English.

    • @sathishgangichetty685 · 5 months ago

      You are still interacting (as an end user) via natural language. This is showing how to best make SOTA prompting techniques available to the masses without their having to learn them.

  • @campbellhutcheson5162 · 9 months ago +6

    I've met a lot of people skeptical of DSPy, and these kinds of videos do nothing to dispel the skepticism. I'm 10 minutes in, and we haven't seen any examples of how this is different from ordinary prompting with an LLM. The "goal" he describes is literally just the prompt without explicit CoT language, and CoT language will probably be unnecessary with stronger models, which will better infer when they need CoT to reach a good result (excluding cases where output is coerced into JSON mode, etc.).

    • @campbellhutcheson5162 · 9 months ago

      I literally paused at 16:44 to read the produced prompt. It's fine. But you literally had to do all the work to get there, and I'm not sure that's substantially less than writing the prompt yourself, especially when you're going to get GPT-4 to write the first version of the prompt for you (remember, turbo-preview knows what LLM prompts are).

    • @GURUPRASADIYERV · 9 months ago +3

      The magic is in the compiling engine underneath. The optimization will get better with open-source contributions.
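      The "compiling" idea the reply refers to can be loosely illustrated with a toy sketch. This is pure Python and NOT DSPy's actual implementation or API; the `compile_signature` function and its prompt-template format are hypothetical, showing only the general notion of expanding a declarative signature like `"question -> answer"` into a chain-of-thought prompt:

      ```python
      # Toy illustration (not DSPy's real engine): expand a declarative
      # signature string such as "question -> answer" into a fill-in
      # prompt template, optionally inserting a chain-of-thought step.

      def compile_signature(signature: str, cot: bool = True) -> str:
          """Turn 'in1, in2 -> out' into a prompt template string."""
          inputs_part, output = (part.strip() for part in signature.split("->"))
          inputs = [name.strip() for name in inputs_part.split(",")]

          # One labeled slot per input field, e.g. "Question: {question}".
          lines = [f"{name.capitalize()}: {{{name}}}" for name in inputs]
          if cot:
              # Explicit reasoning step before the final answer field.
              lines.append("Reasoning: let's think step by step.")
          lines.append(f"{output.capitalize()}:")
          return "\n".join(lines)

      prompt = compile_signature("question -> answer")
      print(prompt)
      # Question: {question}
      # Reasoning: let's think step by step.
      # Answer:
      ```

      The point of the real framework, as described in the talk, is that an optimizer can then rewrite and tune such generated prompts against a metric, rather than the template being fixed as in this sketch.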

  • @thannon72 · 4 months ago +1

    Another rubbish presentation on DSPy. Do these people really understand it? Just a regurgitation of the documentation.