Prompt Engineering is Dead; Build LLM Applications with DSPy Framework

  • Published Nov 29, 2024

COMMENTS • 25

  • @kd6613
    @kd6613 4 months ago +15

    🎯 Key points for quick navigation:
    00:01 *🌅 Welcome and Introduction*
    - Speaker greets attendees at the end of the conference day,
    - Acknowledges the late session and the clickbait title,
    - Discusses a mindset shift in prompt engineering and mentions hiring for an ML engineering leader.
    00:41 *📈 Overview of Session Topics*
    - Outline of the four key areas to be discussed: agents, prompting strategies, prompt evaluation, and the DSPy framework,
    - Importance of building meaningful applications with LLMs,
    - Challenges of creating customer-facing products and the future potential of AGI.
    02:22 *🤖 Building with Large Language Models*
    - The value and potential of LLMs and ChatGPT,
    - Limitations of relying solely on LLMs and the need for custom development,
    - Discussion on intellectual property and the role of agents in enhancing LLMs.
    04:41 *🛠️ The Agent Approach*
    - Extending the concept of RAG (Retrieval-Augmented Generation) to agents,
    - Importance of building systems that interact with the world around us,
    - Intellectual property and flexibility in building agent-based systems.
    06:44 *📜 Key Papers and Research*
    - Overview of influential papers and research in the field,
    - Discussion of the DSPy framework, LLM optimization, and the evolution of prompting strategies,
    - Emphasis on the importance of data in building effective LLM systems.
    08:59 *🎯 Prompting Strategies*
    - Different prompting techniques and their relevance,
    - Introduction to the DSPy framework for programmatic interaction with LLMs,
    - Importance of data and evaluation metrics in prompt engineering.
    12:13 *🔍 Evaluating Prompt Quality*
    - Importance of data in the evaluation process,
    - Need for automation in testing and evaluation,
    - Insights from researchers on optimization and the scientific method.
    16:07 *⚙️ DSP Framework and Workflow*
    - Introduction to the DSPy framework and its benefits,
    - Workflow for building LLM applications, including task definition, data collection, and pipeline setup,
    - Emphasis on iteration and optimization in the development process.
    19:21 *💡 Importance of Data*
    - Historical perspective on the importance of data over algorithms,
    - Relevance of this principle to modern LLMs and their training,
    - Focus on the data-driven approach within the DSPy framework.
    20:45 *🧩 Practical Application and Community Support*
    - Practical benefits of using the DSPy framework in Databricks,
    - Community contributions and available connectors for seamless integration,
    - Encouragement to leverage the community and available resources for development.
    21:14 *🌐 Integrating LLMs in Databricks*
    - Setting up connections to LLMs in Databricks,
    - External model serving and authentication layers,
    - Abstraction layers for managing multiple models.
    22:12 *🛠️ Getting Started with DSPy Framework*
    - Defining inputs and outputs (signatures),
    - Implementing prompting techniques (modules),
    - Optimization of pipelines for better results.
    24:01 *🔍 Optimizing Prompts and Pipelines*
    - Use of training and test data for optimization,
    - Programmatically optimizing prompts using few-shot examples,
    - Exploring different levels of optimization (e.g., fine-tuning models).
    26:07 *📊 Practical Application and Evaluation*
    - Importing and preparing data sets (e.g., Reddit comments),
    - Setting up evaluators and defining metrics for accuracy,
    - Iterating through different optimization strategies (e.g., bootstrap few-shot, random search).
    34:22 *🔧 Advanced Optimization Techniques*
    - Using more advanced optimization methods to improve accuracy,
    - Implementing instruction optimization with powerful models,
    - Balancing between powerful and smaller models to achieve the best results.
    40:06 *📝 Final Steps and Instruction Optimization*
    - Instruction optimization using a separate language model,
    - Letting models find the best prompt through iteration,
    - Ensuring efficient use of model calls to manage costs.
    Made with HARPA AI
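
The prompt-optimization loop the summary describes at 24:01 and 26:07 (bootstrap few-shot with random search against an accuracy metric) can be sketched in plain Python. This is a toy illustration, not the real DSPy API: `mock_llm`, `bootstrap_few_shot`, and the labeled examples are all made up here, with a keyword heuristic standing in for the model call so it runs offline.

```python
import random

# Toy labeled data: comment text -> sentiment label
# (mirroring the Reddit-comments example from the talk).
train = [
    ("I love this!", "positive"),
    ("This is terrible.", "negative"),
    ("Absolutely fantastic.", "positive"),
    ("Worst thing ever.", "negative"),
]

def mock_llm(prompt, text):
    # Stand-in for a real model call so the sketch runs offline;
    # a real pipeline would send `prompt` plus `text` to an LLM.
    positive_words = ("love", "fantastic", "great")
    return "positive" if any(w in text.lower() for w in positive_words) else "negative"

def accuracy(prompt, data):
    # Evaluation metric: fraction of examples labeled correctly.
    return sum(mock_llm(prompt, x) == y for x, y in data) / len(data)

def bootstrap_few_shot(data, k=2, trials=5, seed=0):
    # Randomly search over few-shot demonstrations and keep the
    # prompt that scores best on the metric -- the kind of loop
    # DSPy optimizers such as BootstrapFewShot automate.
    rng = random.Random(seed)
    best_prompt, best_score = "", -1.0
    for _ in range(trials):
        demos = rng.sample(data, k)
        prompt = "\n".join(f"Text: {x}\nLabel: {y}" for x, y in demos)
        score = accuracy(prompt, data)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score

best_prompt, best_score = bootstrap_few_shot(train)
print(best_score)  # 1.0 -- the mock heuristic is perfect on this toy set
```

The point of the sketch is the shape of the loop, not the heuristic: with a real model, each trial costs LLM calls, which is why the talk's closing section stresses managing the number of calls during optimization.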

  • @YeahTheBros
    @YeahTheBros 4 months ago +44

    Databricks: Prompt engineering is dead
    Also Databricks: use our platform to engineer your prompts!

    • @pookiepats
      @pookiepats 3 months ago

      So dumb. These guys are borderline crooks at this point, even pushing these piles-of-crap frameworks out, lol. Under the hood it's just a bunch of bash scripts, Terraform, and more caveats than you can shake a keyboard at.

    • @fenderbender28
      @fenderbender28 3 months ago +2

      Actually, the speaker is Matt Yates from Sephora, not Databricks.

    • @farihasifat
      @farihasifat 2 months ago

      Thanks for the summary

  • @MagusCorporation
    @MagusCorporation 4 months ago +1

    Ty for spreading the word!

  • @ravishmahajan9314
    @ravishmahajan9314 4 months ago +3

    Yes, prompt engineering is dead.
    AI agents are the future ❤

    • @zaubermanninc4390
      @zaubermanninc4390 3 months ago +1

      So AI agents don't get prompted, huh? Interesting.

    • @ravishmahajan9314
      @ravishmahajan9314 3 months ago

      @@zaubermanninc4390 Obviously they get prompted, but they introduce automation into the process. Obviously nobody wants to manually prompt a chatbot to produce every output.
      The next step is to direct those prompts to make decisions and call functions to do tasks autonomously.

  • @anthonyphan1922
    @anthonyphan1922 3 months ago +6

    Clickbait

  • @zaubermanninc4390
    @zaubermanninc4390 3 months ago

    The title kinda worked, because I'm here just because I wanted to say that if you really think prompt engineering is or will be dead anytime soon, then you simply cannot prompt. I'll give both of my thumbs to that statement.
    You need to work on your caps lock on those baity titles, my man. 😅😏

  • @diga4696
    @diga4696 3 months ago +4

    TextGrad and agentic processes will outpace DSPy.

    • @AGI-Bingo
      @AGI-Bingo 3 months ago +2

      Can you please elaborate? ❤

    • @KristijanKL
      @KristijanKL 3 months ago +2

      @@AGI-Bingo "Agentic" is already used in this video; it just means using different LLMs for different tasks: smaller ones for memory, or specialized LLMs for different programming languages. TextGrad is backpropagation, i.e. letting the AI self-optimize, re-run the task, and rewrite the steps. TextGrad does not exclude DSPy.
      The main point of DSPy is on the first slide: the LLM does not speak English, so stop optimizing English-language inputs.
      You can test this approach in ChatGPT in a weak form.

    • @seattlerain101
      @seattlerain101 3 months ago

      @@KristijanKL Would you be open to chatting? I'm in the process of building a pretty cool site and have some questions about using an LM within it.

    • @avisankhadutta4053
      @avisankhadutta4053 1 month ago

      Hi, I'm interested in understanding the basic differences between textgrad and dspy. Can you please elaborate?

  • @caseystar_
    @caseystar_ 3 months ago

    But who's to say that the LLM isn't actually more accurate than the data? For the food example, "My favorite food is any meal I don't have to cook," I'd say that IS probably closer to joy/playfulness, or even relief.

  • @gokukakarot6323
    @gokukakarot6323 3 months ago +8

    Here's the thing: prompt engineering is more or less like Google search skills. So maybe you shouldn't exist as a company.

    • @chidinduogbonna5958
      @chidinduogbonna5958 2 months ago

      On prompt engineering as a skill, that's true.
      But you can't use a "skill" in a system. You need quantifiable processes; prompt engineering alone no longer cuts it for real-world applications.

  • @haralc6196
    @haralc6196 2 months ago

    Now that GPT-4o is here, it's intelligent and it's cheap. So no need to switch models. So DSPy is dead. :D

  • @fil22222
    @fil22222 3 months ago +1

    I disagree with the notion that Large Language Models (LLMs) merely parrot information. Consider ChatGPT, which employs generative AI. If you present it with code that includes an object-oriented class with a single method performing multiple tasks, and inquire about potential future issues and improved implementations, it will direct you towards best practices and identify any loopholes. This is not mere repetition; it demonstrates clear analytical thinking.
    Why not tell people the truth, that it is really thinking? Is it out of fear of something?

    • @hoots187
      @hoots187 3 months ago

      It is not

    • @alexwoxst
      @alexwoxst 3 months ago

      Nah, it's lossy compression of training data.

    • @fil22222
      @fil22222 3 months ago

      @@hoots187 It is indeed. It may seem that they are hesitant to admit it, but it remains a form of thinking, not mere parroting. When a human baby mimics someone, is it parroting, or is it learning and thinking?

    • @K9Megahertz
      @K9Megahertz 2 months ago

      @@fil22222 I think you should watch a video or two on how LLMs actually work. They do not think, nor is it a form of "thinking". It simply predicts, with some probability, what the next token in a sequence will be. If you give it the sentence "Michael Jordan is really good at ______", it will respond 99% of the time with basketball or some text relating to basketball. It's all math and statistics.
      Actually, if you take out the one or two lines of code that pick a random token from the list of top probabilities and give the LLM the same input sequence of tokens twice in a row, you'd get the same output sequence both times.
      Someone please feel free to correct me on this.
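
The determinism the comment above describes is easy to demonstrate in miniature. A minimal sketch of greedy decoding versus sampling, using hypothetical next-token scores rather than a real model:

```python
import math
import random

def softmax(logits):
    # Turn raw model scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for the prompt
# "Michael Jordan is really good at ______".
vocab = ["basketball", "golf", "baseball", "chess"]
probs = softmax([9.2, 3.1, 2.8, 0.5])

# Greedy decoding: always take the most probable token, so the
# same input sequence yields the same output every time.
greedy = vocab[probs.index(max(probs))]

# Sampling: draw from the distribution, so repeated runs can differ.
sampled = random.choices(vocab, weights=probs, k=1)[0]

print(greedy)  # basketball, deterministically
```

Removing the sampling step (the `random.choices` line) is exactly the "take out the one or two lines of code" scenario: what remains is a pure function of the input tokens.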