So many of your videos really resonate with my experience as a traditional data scientist exploring LLMs. Your "at first I thought prompt engineering was bunk" is definitely my journey as well. I find this to be super highly related to your previous video where you said potentially 95% of use cases can be covered by generalized pre-trained models or fine-tuned models. These models are truly so powerful that the secret sauce is in 1.) choosing the right pre-trained base model 2.) asking it the right questions in an efficient way. Thanks so much for all your work in putting together this content, I find it some of the best-explained LLM content on the interwebs
@user-hv6is9gx6r like using a model pre trained for an appropriate purpose, general purpose models work for a lot, but if I were using a tool to write code, a code specialty model would be better
It's a very nice series! By the way, it would be nice if you considered including examples of using Olama side by side with Chatgpt in your series. I rather use Ollama for testing than ChatGPT
Can you do a video on finetuning a multimodal LLM (Video-LlaMA, LLaVA, or CLIP) with a custom multimodal dataset containing images and texts for relation extraction or a specific task? Can you do it using open-source multimodal LLM and multimodal datasets like video-llama or else so anyone can further their experiments with the help of your tutorial. Can you also talk about how we can boost the performance of the fine-tuned modal using prompt tuning in the same video?
Great work my friend! Can there be a situation where after fine-tuning a model, you still have to do prompt engineering to get the desired output? In other words, can you fine tune a model where one-shot inference works all the time?
While you can always do additional prompt engineering after fine-tuning, it may not be necessary based on the use case. With that being said, no system will ever be perfect. So it is hard to imagine a situation in which one-shot inference will work all the time.
Your videos are great man; I hope your channel grows. Quick question: Langchain seems very integrated with OpenAI's API and software packages; have you tried using Langchain with an open-sourced free of charge LLM? Thanks! I am trying to build an LLM based app for a portfolio for PhD application in AI.
Thanks for the kind words, I'm glad you like the videos. While I've only used LangChain with OpenAI's API, it is has integrations with many other LLM providers. Here's more on how to use it with HF: python.langchain.com/docs/integrations/providers/huggingface
Please explain WHY the correct answer is required in the prompt. I would expect the model to know what the correct answer is. PS: I have enjoyed your other vids and intend on sharing them to my dev friends. Cheers !
Good question. The model does know the correct answer to this particular question. However, there may be questions where the model does not know the answer and providing it in the prompt is necessary.
If a programmer builds a working vocabulary and does language design, then prompt engineer as opposed does reverse "engineering" of an existing language in order to find a working vocabulary. The "Fake it till you make it" approach is not usually called science or engineering. So calling this profession “prompt writer” would be more appropriate.
👉More on LLMs: ua-cam.com/play/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0.html
So many of your videos really resonate with my experience as a traditional data scientist exploring LLMs. Your "at first I thought prompt engineering was bunk" is definitely my journey as well. I find this to be super highly related to your previous video where you said potentially 95% of use cases can be covered by generalized pre-trained models or fine-tuned models. These models are truly so powerful that the secret sauce is in 1.) choosing the right pre-trained base model 2.) asking it the right questions in an efficient way. Thanks so much for all your work in putting together this content, I find it some of the best-explained LLM content on the interwebs
Thanks for the kind words. I’m glad you’re enjoying the content. More to come!
@user-hv6is9gx6r like using a model pre trained for an appropriate purpose, general purpose models work for a lot, but if I were using a tool to write code, a code specialty model would be better
It's a very nice series! By the way, it would be nice if you considered including examples of using Olama side by side with Chatgpt in your series. I rather use Ollama for testing than ChatGPT
Thanks for the suggestion :)
Fantastic series, thanks Shaw!
Great series! Thanks
Glad you enjoyed it!
Great introduction. Thanks for putting this together.
Glad it was helpful!
Your acting is ultimate at 1.15 min :)
Thank you 😂😂
it's really resourceful! keep up the good work
Thanks, glad it helped!
Another excellent video!
Thanks :)
Can you do a video on finetuning a multimodal LLM (Video-LlaMA, LLaVA, or CLIP) with a custom multimodal dataset containing images and texts for relation extraction or a specific task? Can you do it using open-source multimodal LLM and multimodal datasets like video-llama or else so anyone can further their experiments with the help of your tutorial. Can you also talk about how we can boost the performance of the fine-tuned modal using prompt tuning in the same video?
Thanks for the suggestion! Multi-modal models are an exciting next step for AI research. I added it to my list.
Great work my friend! Can there be a situation where after fine-tuning a model, you still have to do prompt engineering to get the desired output? In other words, can you fine tune a model where one-shot inference works all the time?
While you can always do additional prompt engineering after fine-tuning, it may not be necessary based on the use case. With that being said, no system will ever be perfect. So it is hard to imagine a situation in which one-shot inference will work all the time.
Your videos are great man; I hope your channel grows. Quick question: Langchain seems very integrated with OpenAI's API and software packages; have you tried using Langchain with an open-sourced free of charge LLM? Thanks! I am trying to build an LLM based app for a portfolio for PhD application in AI.
Thanks for the kind words, I'm glad you like the videos.
While I've only used LangChain with OpenAI's API, it is has integrations with many other LLM providers. Here's more on how to use it with HF: python.langchain.com/docs/integrations/providers/huggingface
Thank you! Awesome content and excellent presentation. Sincerely appreciated 👍
Glad you liked it!
what do the ' \ ' represent in the prompts ? do they break up specific parts of text? Thanks!
Good question. Since the prompt goes over multiple lines, '\' prevents the newline character "
" from appearing in the prompt string.
Thank you so much
I'm so new at this, but I have to ask...where or which ones are the previous 3?
Here's the series playlist: ua-cam.com/play/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0.html
Please explain WHY the correct answer is required in the prompt.
I would expect the model to know what the correct answer is.
PS: I have enjoyed your other vids and intend on sharing them to my dev friends. Cheers !
Good question. The model does know the correct answer to this particular question. However, there may be questions where the model does not know the answer and providing it in the prompt is necessary.
So the answer acts like a 'break-glass' test. -- Thanks.
I like the way you present the subject. -- Keep up the good work. -- Cheers @@ShawhinTalebi
how can you avoid prompt escape / jailbreak in response?
That's an important (and technical) question. Here is a nice write up on prompt injection: llmtop10.com/llm01/
If a programmer builds a working vocabulary and does language design, then prompt engineer as opposed does reverse "engineering" of an existing language in order to find a working vocabulary. The "Fake it till you make it" approach is not usually called science or engineering. So calling this profession “prompt writer” would be more appropriate.
That's a cool way to think about it. The name isn't great. I can see it being replaced or becoming obsolete.
I was here for the Sound Effects
I hope it was worth it 😂😂
Tony Stark's part hahahahhahahah
Fine. I'll roll my eyes less. JK. Great insights on how to improve prompts.
LOL!
0:58 😂😂😂
😂😂 thanks
@1:03 - they paid you for that didn't they?
That'll be my next career if data science doesn't work out 😂
there is something wrong with you...
LOL what gave it away?