Prompt-Engineering for Open-Source LLMs

  • Published Jan 22, 2024
  • Turns out prompt engineering is different for open-source LLMs! In fact, your prompts need to be re-engineered whenever you switch LLMs - even when OpenAI changes versions behind the scenes, which is why people are confused when their prompts suddenly stop working. Transparency of the entire prompt is critical to squeezing performance out of the model. Most frameworks struggle with this: they try to abstract everything away, or obscure the prompt to make it seem like they're managing something behind the scenes. But prompt engineering is not software engineering, so the workflow for succeeding at it is entirely different. Finally, RAG, a form of prompt engineering, is an easy way to boost performance using search technology. In fact, you only need 80 lines of code to implement the whole thing and get 80%+ of what you need from it (link to the open-source repo below). You'll learn how to run RAG at scale, across millions of documents.
    What you’ll learn from this workshop:
    - Prompt engineering vs. software engineering
    - Open vs. closed LLMs: completely different prompts
    - Push accuracy by taking advantage of prompt transparency
    - Best practices for prompt-engineering open LLMs
    - Prompt-engineering with search (RAG)
    - How to implement RAG on millions of documents (demo)
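    The simple-rag repo linked below has the full ~80-line implementation; as a rough sketch of the retrieve-then-prompt idea it teaches (the toy bag-of-words "embedding", example documents, and function names here are illustrative assumptions, not the repo's code):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a sentence-embedding model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # RAG is just prompt engineering: paste the retrieved text into the prompt.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = ["RAG augments prompts with retrieved text.",
        "Pants are a metaphor for prompt templates.",
        "Llamas live in the Andes."]
print(build_prompt("What is RAG?", docs))
```

    Scaling to millions of documents mostly means swapping the linear scan in `retrieve` for a vector index; the prompt-assembly step stays the same.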
    Take a moment to sign up for our short course:
    bit.ly/3HhK3jS
    Take a moment to sign up to our forum:
    bit.ly/3tTyyvV
    Workshop Slides:
    - tinyurl.com/Lamini-DLAI-Prompt...
    Workshop Notebook:
    - github.com/lamini-ai/prompt-e...
    - github.com/lamini-ai/simple-rag
    About DeepLearning.AI
    DeepLearning.AI is an education technology company that is empowering the global workforce to build an AI-powered future through world-class education, hands-on training, and a collaborative community. Take your generative AI skills to the next level with short courses that help you learn new skills, tools, and concepts efficiently.
    About Lamini:
    Lamini is the all-in-one open LLM stack, fully owned by you. At Lamini, we’re inventing ways for you to customize intelligence that you can own.
    Speaker
    Sharon Zhou Co-Founder & CEO Lamini
    LinkedIn Profile: / zhousharon

COMMENTS • 46

  • @krumpverse
    @krumpverse 5 months ago +2

    Thank you so much for the very insightful presentation, love the pants analogy 🤩🙏🏽

  • @gkennedy_aiforsocialbenefit
    @gkennedy_aiforsocialbenefit 3 months ago

    Truly enjoyed this video. Thanks, DeepLearning.AI! Excellent topic and presentation on prompting open LLMs that deserves more attention. Sharon Z is brilliant and down to earth, with a nice sense of humor. Diana, the host, was also excellent.

  • @raghur1195
    @raghur1195 5 months ago

    Obviously, every provider wants their LLMs to work at peak performance.
    So, it would be much easier for them to concatenate the meta tags to the user prompt internally, in their source code, before it is fed to the model. That would also eliminate dependencies on version and documentation changes.
    That way, users wouldn't need to make changes from version to version and from LLM to LLM. It is too error-prone.

  • @fabiansvensson9588
    @fabiansvensson9588 5 months ago +4

    But what exactly do these meta tags mean and/or do? For example, for Mistral, what does [INST] do and why do we need it? All we saw is that with it the answer makes sense and without it the answer doesn't… Why isn't this just automatically accounted for?

    • @GregDiamos
      @GregDiamos 5 months ago

      We have to go deeper into fine-tuning to understand why we need it. Stay tuned for more content on this.

    • @anuragshas
      @anuragshas 5 months ago

      Those meta tags are a way to define the model's persona and instructions so they aren't confused with what users are asking. After all, these models were built by predicting the next word, and this is the best way people have figured out to turn next-word predictors into chatbots.

    • @btchhushington2810
      @btchhushington2810 3 months ago

      is an HTML-like tag used to indicate the beginning and end of a prompt (not used with all models). [INST] [/INST] separates the instructions from the rest of the prompt, making it easier for the model to distinguish between roles, instructions, queries, and context (also not used with all models).
      Different architectures may have different prompt syntax, so it is a good idea to check the documentation to learn the syntax used for that particular architecture.
      Remember, we are communicating with a machine. Models are built on the foundation of an algorithm: a step-by-step process used to arrive at a desired result. Effective prompt engineering can follow this methodology by crafting your prompts iteratively. Literally spell out the steps that you'd like the model to consider as it works out the solution. This is especially important for reasoning tasks.
      The more you are able to engineer your prompts according to the syntax and mentality of how a given architecture "thinks", the higher the probability of receiving the result you desire. There are a lot of resources for prompt-engineering syntax and foundational concepts. DeepLearning.ai and PromptGuidingGuide.ai have great tutorials. PromptGuidingGuide.ai would be a great place to start, as it begins at the very beginning and works you through to some of the more in-depth concepts of fine-tuning and training. DeepLearning.ai has hands-on beginner/intermediate tutorials for prompt engineering. The level of difficulty is contingent on whether you are coming from the understanding of a novice or a vet.
      Hope this helps someone out there!
      Stay well.
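      To make the tag discussion concrete, here is a minimal sketch of wrapping a user message in Mistral-style instruction tags (the `<s>`/`[INST]` layout follows Mistral's documented chat format; the `wrap_mistral` helper and example strings are illustrative, not official code):

```python
def wrap_mistral(user_message, system=""):
    # Mistral-style chat template: <s> marks the start of the sequence,
    # and [INST] ... [/INST] delimits the instruction/user turn so the
    # model can tell instructions apart from its own completion.
    inst = f"{system}\n{user_message}".strip() if system else user_message
    return f"<s>[INST] {inst} [/INST]"

prompt = wrap_mistral("Why do llamas spit?")
print(prompt)  # <s>[INST] Why do llamas spit? [/INST]
```

      Other model families use different "pants" (e.g. Llama chat builds on similar tags plus its own system markers), which is why a prompt that works on one open model can fall apart on another.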

  • @MinimumGravity
    @MinimumGravity 5 months ago +2

    Thanks for the clear and simple explanations!

  • @milagrosbernardi5062
    @milagrosbernardi5062 5 months ago

    I am a newbie in the field, but as far as I understood from the documentation I have read, we use the "fine-tuning" concept when we actually need to modify the weights of the model by training it on specific datasets from the desired domain. In this presentation, though, it was used to mean configuring the LLM by prompt engineering, which does not modify the weights of the model. Is that correct? Am I wrong? Thanks for clarifying!!

    • @igormorgado
      @igormorgado 5 months ago +1

      Prompt engineering just changes (better said, tweaks) the behavior of the model within the session context. Fine-tuning, on the other hand, changes it permanently. That is the difference. The methods are also completely different: fine-tuning involves training steps, loss functions, and weight updates. It can be as deep as you want/need; you can even put a network on top of the model and fine-tune just that part. Hope it helps.
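      The distinction can be sketched with a toy one-parameter "model" (the numbers and the single-weight setup are purely illustrative, not how an actual LLM is trained): prompting leaves the weight alone and only changes the input, while fine-tuning runs a gradient step that permanently updates the weight.

```python
# Toy 1-parameter "model": y = w * x.
w = 1.0

def predict(w, x):
    return w * x

# Prompt engineering: the weight is untouched; only the input text changes.
assert w == 1.0

# Fine-tuning: a gradient step on a squared-error loss updates the weight.
x, target, lr = 2.0, 6.0, 0.1
loss_grad = 2 * (predict(w, x) - target) * x   # d/dw of (w*x - target)^2
w = w - lr * loss_grad                          # the weight is now different
print(w)
```

      After the update the model behaves differently for every future input, which is what "permanent" means here; a prompt tweak only lasts as long as the session context.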

  • @lochnasty
    @lochnasty 4 months ago

    Love the session. Keep doing great work DLAI team!

  • @EJMConsutlingLLC
    @EJMConsutlingLLC 5 months ago +2

    Beautiful inside/out and wicked smart to boot. Excellent job!

  • @steppenwhale
    @steppenwhale 3 months ago +1

    Thank you for the unrobotic presentation on a very robotically intimidating topic. I have robot phobia; the fear motivates me to learn AI. What kind of hardware do y'all use to run LLMs at home?

  • @user-tk5ir1hg7l
    @user-tk5ir1hg7l 4 months ago

    Has anyone used DSPy? They claim to make this prompt-finagling process much easier.

  • @ayberkctis
    @ayberkctis 5 months ago +1

    Thank you for your effort!

  • @logix8983
    @logix8983 5 months ago +1

    Thanks for the great insights into prompt engineering and LLMs

  • @harithummaluru3343
    @harithummaluru3343 4 months ago

    Very nice presentation. There was so much clarity.

  • @jollychap
    @jollychap 5 months ago +3

    How do we know which pants to put on each LLM? You shared what we should use for Mistral and Llama, but how do we find the equivalent for other models?

    • @GregDiamos
      @GregDiamos 5 months ago

      A good place to look is the model card on Hugging Face. Not all models document this clearly!

  • @blainewishart
    @blainewishart 5 months ago

    Just great, thanks. The idea that there is a relationship between conventions ("pants") in prompts and fine-tuning was new to me. Examples of fine-tuning for pants, board shorts, skirts, kilts, etc. could be part of a follow-up fine-tuning course.

  • @allurbase
    @allurbase 5 months ago +1

    Pants are kind of made of strings if you think about it.

  • @hansblafoo
    @hansblafoo 4 months ago

    In her RAG example, she reverses the similar documents retrieved from the index prior to concatenating them. What is the reason for this? Is it because of the context size of the LLM, to make sure that the "best" chunks (those with the highest similarity) are part of the context since they are at the end of it?
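    The step being asked about can be sketched like this (the variable names and example strings are illustrative, not the workshop notebook's code); one common rationale is that models attend well to text nearest the question, so the highest-similarity chunk is placed last, closest to it:

```python
# Chunks typically come back from the index best-first (highest similarity first).
retrieved = ["best chunk", "second chunk", "third chunk"]

# Reversing puts the most similar chunk at the end of the context,
# right next to the question, rather than far away at the top.
context = "\n".join(reversed(retrieved))
prompt = f"{context}\n\nQuestion: ..."
print(context.splitlines()[-1])  # best chunk
```

    If the context ever has to be truncated from the front to fit the model's window, this ordering also sacrifices the weakest chunks first.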

  • @VISHVANI9
    @VISHVANI9 5 months ago

    Thank you! 🙂

  • @dangermikeb
    @dangermikeb 5 months ago +2

    Content was great, speaker was great.

  • @rocketPower047
    @rocketPower047 5 months ago +10

    Is it me, or is prompt engineering the new SEO? It will be hot for a while, but it's a transient thing that will get washed out as the tech gets better. You're better off working on the models yourself, or in ML/LLMOps.

    • @pharmakon551
      @pharmakon551 5 months ago +3

      Not really. SEO still matters.

    • @rocketPower047
      @rocketPower047 5 months ago

      @@pharmakon551 sure, but is it as lucrative as it used to be?

    • @rocketPower047
      @rocketPower047 5 months ago

      @@pharmakon551 and the result is just scam sites making the top ranks

    • @pharmakon551
      @pharmakon551 5 months ago +1

      Very much so. People still need the service. Every thing that's tech always gets sensationalized, doesn't mean that all utility is out the wonder if we don't see it daily.

    • @pharmakon551
      @pharmakon551 5 months ago

      *out the window

  • @ignatiusezeani6816
    @ignatiusezeani6816 5 months ago +1

    Great talk! Thanks a lot.

  • @mr.e7379
    @mr.e7379 2 months ago

    sooooo, spoouuuucey!!!😁

  • @BobDowns
    @BobDowns 5 months ago

    Hard of hearing attendee here. Will captions be added to this video soon so that I and others with similar hearing issues can take advantage of what is available to fully hearing people?

  • @saraWatson-co5rc
    @saraWatson-co5rc 5 months ago

    25

  • @sayfeddinehammami6762
    @sayfeddinehammami6762 2 months ago +1

    Too much pants

  • @DogSneeze
    @DogSneeze 26 days ago

    Is there a custom GPT to edit out the obnoxious self-flattery to access the 10 minutes of useful content? "Pants" was a horrible metaphor and her inability to even understand the question at the end about linguistic clarity shows you how terrible she is with language in the first place.

  • @BartJenkinsRW
    @BartJenkinsRW 5 months ago +15

    Horrible presentation. Total stream of consciousness. Did she not prepare for this presentation? Too many filler words (uhm, like, etc.). Please, next time, script the presentation. I'll bet this whole thing could have been boiled down to 15 mins.

    • @johnstewart5651
      @johnstewart5651 5 months ago +5

      Thank you for saying what surely many here must think. The signal/noise ratio was almost indistinguishable from zero. The ostinato insistence of "pants" is what eventually chased me away, but the whole thing boils down to one observation: use the appropriate meta-tags? Why? Cuz then it works.

    • @hapukapsasson6507
      @hapukapsasson6507 5 months ago +6

      Felt like watching an episode of Kardashians...

    • @steppenwhale
      @steppenwhale 3 months ago

      this presentation was not intended for AI nerds but for noobs. even if you're right that it could be a 15-minute show, it's expert insights for free. there are many who would prefer she not spread these insights and keep it closed, a black box. this is a very niche, complicated tech, very intimidating to many. for me the unscripted style seemed more casual and less intimidating, giving off an "if even i can DIY, then you can too!" kind of show and tell, to make it more inviting to the less informed, for whom an intensively paced lesson might not work as well. elite knowledge locked in a tower, shared for free with clueless peasants as the intended audience. peasants don't choose professors. respect the AIs, they are only 5 years old and still innocent and impressionable, needing our protection from corruption

    • @hapukapsasson6507
      @hapukapsasson6507 3 months ago +1

      @@steppenwhale ahahahahahahahaah

    • @DogSneeze
      @DogSneeze 26 days ago +1

      Pants, Like, I'm so smart. Also, pants. I see dumb people all around me. OMG. Pants.