Prompt Engineering: How to Trick AI into Solving Your Problems

Поділитися
Вставка
  • Опубліковано 26 гру 2024

КОМЕНТАРІ • 48

  • @ShawhinTalebi
    @ShawhinTalebi  Рік тому +2

    👉More on LLMs: ua-cam.com/play/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0.html

  • @heavenleighbliss
    @heavenleighbliss Рік тому +7

    So many of your videos really resonate with my experience as a traditional data scientist exploring LLMs. Your "at first I thought prompt engineering was bunk" is definitely my journey as well. I find this to be super highly related to your previous video where you said potentially 95% of use cases can be covered by generalized pre-trained models or fine-tuned models. These models are truly so powerful that the secret sauce is in 1.) choosing the right pre-trained base model 2.) asking it the right questions in an efficient way. Thanks so much for all your work in putting together this content, I find it some of the best-explained LLM content on the interwebs

    • @ShawhinTalebi
      @ShawhinTalebi  Рік тому +1

      Thanks for the kind words. I’m glad you’re enjoying the content. More to come!

    • @heavenleighbliss
      @heavenleighbliss 8 місяців тому +1

      @user-hv6is9gx6r like using a model pre trained for an appropriate purpose, general purpose models work for a lot, but if I were using a tool to write code, a code specialty model would be better

  • @user-he8qc4mr4i
    @user-he8qc4mr4i 9 місяців тому +3

    It's a very nice series! By the way, it would be nice if you considered including examples of using Olama side by side with Chatgpt in your series. I rather use Ollama for testing than ChatGPT

  • @somerset006
    @somerset006 Місяць тому

    Fantastic series, thanks Shaw!

  • @sarahmestiri1855
    @sarahmestiri1855 Рік тому +2

    Great series! Thanks

  • @GBangalore
    @GBangalore Рік тому +2

    Great introduction. Thanks for putting this together.

  • @SMARTPALASH
    @SMARTPALASH 7 місяців тому +2

    Your acting is ultimate at 1.15 min :)

  • @weiyeeong1212
    @weiyeeong1212 Рік тому +1

    it's really resourceful! keep up the good work

  • @helrod6131
    @helrod6131 Рік тому

    Another excellent video!

  • @thisurawz
    @thisurawz 11 місяців тому +2

    Can you do a video on finetuning a multimodal LLM (Video-LlaMA, LLaVA, or CLIP) with a custom multimodal dataset containing images and texts for relation extraction or a specific task? Can you do it using open-source multimodal LLM and multimodal datasets like video-llama or else so anyone can further their experiments with the help of your tutorial. Can you also talk about how we can boost the performance of the fine-tuned modal using prompt tuning in the same video?

    • @ShawhinTalebi
      @ShawhinTalebi  11 місяців тому +1

      Thanks for the suggestion! Multi-modal models are an exciting next step for AI research. I added it to my list.

  • @AsadShaikh-qc9ev
    @AsadShaikh-qc9ev 10 місяців тому +1

    Great work my friend! Can there be a situation where after fine-tuning a model, you still have to do prompt engineering to get the desired output? In other words, can you fine tune a model where one-shot inference works all the time?

    • @ShawhinTalebi
      @ShawhinTalebi  10 місяців тому +1

      While you can always do additional prompt engineering after fine-tuning, it may not be necessary based on the use case. With that being said, no system will ever be perfect. So it is hard to imagine a situation in which one-shot inference will work all the time.

  • @simbabuilds9338
    @simbabuilds9338 Рік тому +1

    Your videos are great man; I hope your channel grows. Quick question: Langchain seems very integrated with OpenAI's API and software packages; have you tried using Langchain with an open-sourced free of charge LLM? Thanks! I am trying to build an LLM based app for a portfolio for PhD application in AI.

    • @ShawhinTalebi
      @ShawhinTalebi  Рік тому +1

      Thanks for the kind words, I'm glad you like the videos.
      While I've only used LangChain with OpenAI's API, it is has integrations with many other LLM providers. Here's more on how to use it with HF: python.langchain.com/docs/integrations/providers/huggingface

  • @davesandberg
    @davesandberg 11 місяців тому

    Thank you! Awesome content and excellent presentation. Sincerely appreciated 👍

  • @yankeephan87x
    @yankeephan87x Рік тому +1

    what do the ' \ ' represent in the prompts ? do they break up specific parts of text? Thanks!

    • @ShawhinTalebi
      @ShawhinTalebi  Рік тому +1

      Good question. Since the prompt goes over multiple lines, '\' prevents the newline character "
      " from appearing in the prompt string.

  • @youssefkabir6664
    @youssefkabir6664 9 місяців тому +1

    Thank you so much

  • @RobertSpiller
    @RobertSpiller 10 місяців тому +1

    I'm so new at this, but I have to ask...where or which ones are the previous 3?

    • @ShawhinTalebi
      @ShawhinTalebi  10 місяців тому +1

      Here's the series playlist: ua-cam.com/play/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0.html

  • @kevinbowe
    @kevinbowe 9 місяців тому +1

    Please explain WHY the correct answer is required in the prompt.
    I would expect the model to know what the correct answer is.
    PS: I have enjoyed your other vids and intend on sharing them to my dev friends. Cheers !

    • @ShawhinTalebi
      @ShawhinTalebi  9 місяців тому +1

      Good question. The model does know the correct answer to this particular question. However, there may be questions where the model does not know the answer and providing it in the prompt is necessary.

    • @kevinbowe
      @kevinbowe 9 місяців тому +1

      So the answer acts like a 'break-glass' test. -- Thanks.
      I like the way you present the subject. -- Keep up the good work. -- Cheers @@ShawhinTalebi

  • @sherpya
    @sherpya Рік тому +1

    how can you avoid prompt escape / jailbreak in response?

    • @ShawhinTalebi
      @ShawhinTalebi  Рік тому +1

      That's an important (and technical) question. Here is a nice write up on prompt injection: llmtop10.com/llm01/

  • @CouchProgrammer
    @CouchProgrammer 10 місяців тому +5

    If a programmer builds a working vocabulary and does language design, then prompt engineer as opposed does reverse "engineering" of an existing language in order to find a working vocabulary. The "Fake it till you make it" approach is not usually called science or engineering. So calling this profession “prompt writer” would be more appropriate.

    • @ShawhinTalebi
      @ShawhinTalebi  10 місяців тому +2

      That's a cool way to think about it. The name isn't great. I can see it being replaced or becoming obsolete.

  • @SetsunaKiryuKengan
    @SetsunaKiryuKengan Рік тому +1

    I was here for the Sound Effects

  • @AlessandroSilva-c6c
    @AlessandroSilva-c6c 5 місяців тому +1

    Tony Stark's part hahahahhahahah

  • @jaa928
    @jaa928 8 місяців тому +1

    Fine. I'll roll my eyes less. JK. Great insights on how to improve prompts.

  • @ifycadeau
    @ifycadeau Рік тому +1

    0:58 😂😂😂

  • @jeffg4686
    @jeffg4686 8 місяців тому +1

    @1:03 - they paid you for that didn't they?

    • @ShawhinTalebi
      @ShawhinTalebi  8 місяців тому +1

      That'll be my next career if data science doesn't work out 😂

  • @ErikLandvall
    @ErikLandvall 8 місяців тому +1

    there is something wrong with you...