LangChain Crash Course - Build apps with language models
- Published 3 Jul 2024
- In this LangChain Crash Course you will learn how to build applications powered by large language models. We cover all the important features of this framework.
Timeline:
00:00 - Introduction
01:09 - Installation
01:22 - LLMs
03:08 - Prompt Templates
04:58 - Chains
06:02 - Agents and Tools
09:54 - Memory
11:15 - Document Loaders
12:24 - Indexes
Resources:
Written guide: www.python-engineer.com/posts...
Colab: colab.research.google.com/dri...
Docs: python.langchain.com/en/latest/
GitHub: github.com/hwchase17/langchain
Chatbot example: github.com/hwchase17/chat-lan...
Get my Free NumPy Handbook:
www.python-engineer.com/numpy...
✅ Write cleaner code with Sourcery, instant refactoring suggestions in VS Code & PyCharm: sourcery.ai/?... *
⭐ Join Our Discord : / discord
📓 ML Notebooks available on Patreon:
/ patrickloeber
If you enjoyed this video, please subscribe to the channel:
▶️ : / @patloeber
~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
🖥️ Website: www.python-engineer.com
🐦 Twitter - / patloeber
✉️ Newsletter - www.python-engineer.com/newsl...
📸 Instagram - / patloeber
🦾 Discord: / discord
▶️ Subscribe: / @patloeber
~~~~~~~~~~~~~~ SUPPORT ME ~~~~~~~~~~~~~~
🅿 Patreon - / patrickloeber
#Python
----------------------------------------------------------------------------------------------------------
* This is an affiliate link. By clicking on it you will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏
Great walk-through - good examples, like the colab page. Very helpful to bring all these parts together with relevant live examples.
By far the most comprehensive and clear video about LangChain. Keep doing things like this!
Great walkthrough, thank you for breaking this all down!
Thank you man! Love your teaching style. Pls do more about langchain 🙏🏼
Thank you Patrick for such a precious tutorial!
Everything is explained so systematically and in a very concise manner. ❤
Thank you Patrick for sharing your knowledge with us!
Remarkably coherent and concise. Thank you!
This is one of the best Videos I found on Langchain.
thank you very much for this. it's very helpful to see these concepts and step-by-step walkthroughs. :)
It's always nice to see you Patrick. Great Content🤝
Great summary! we need a second video on this.
Very nice. Short and concise.
This video is short but CLEAR. VERY GOOD. KEEP GOING BRO.
Thank you. I like it so much. I can already see a lot of potential here. Looking forward to more of this.
Phenomenal, such a blessing to be alive in this generation and to come across your video.
I am truly grateful..
Super Teacher.
Thank you for making this video. It was helpful and concise. Good examples, too.
Very clear explanation, Thank you!
Hi Patrick. Thank you for this. You are undeniably in a league of your own when it comes to teaching. You set a high bar for others to follow. Well done.
Second this. There is so much info packed in, and it's so clear. This has been an absolute game changer for me.
Indeed, supreme video
Great tutorial, the first I've seen that's this clear. Great job!
you are doing the lord's work Patrick
Thank you for the crash course !
informative. diverse spectrum of code examples. builds the bigger picture. -> recommended video.
Thanks for this great tutorial. 😊
I was waiting for this video, man you are a legend
Wowsers... I've been listening to a lot of people talk about langchain. Your teaching style really helped. Keep up the amazing work you do!
Thank you :)
Excellent work, Thanks!!
Amazing video, everything was well explained, thank you so much!
Nice !
thanks for those clear explanation
great recap. thank you
Super useful, thank you so much !
The best video I've seen about langchain! Please make more on langchain!
thank you!
If only every single tutorial on youtube was written and executed like this
Brilliant guide!
Best video so far; all the rest are one-trick ponies, with all of them doing the same one thing. Here we see multiple options. Thank you for your time making this.
Thanks so much man! You are awesome!
Already subscribed, liked, and shared the video 😃🙏
I remember the struggle and effort of working with the text-davinci-003 model, with lots of API calls just to keep the model updated with current knowledge, and modifying scripts for short- or long-term memory in chat. But now we have this amazing library, packed with several remarkable features.
Well done. To the point
Great video. Thanks
An excellent video as always, and the Colab is much appreciated. I sense we could do a whole series on langchain and end up building some epic LLMs.
love the creativity that went into making this. It's clear that a lot of thought and effort went into this. One clarification: we have some sensitive data which should not be sent externally. In a case such as langchain.vectorstores, which stores locally, what sort of information goes externally to OpenAI or elsewhere?
Great video!!
Hi Patrick, your videos are great. I really enjoyed them.
Amazing tutorial. I would love to see Obsidian with Stable Vicuna integration.
Happy Easter. Thanks for the very nice vid. Would be nice to see a real use-case project with Langchain. Several videos with a common goal (e.g. a personal assistant like Jarvis or Open Assistant) would be superb! Thanks for the current input.
Thank you
wow, incredible library! amazing explanation!
thank you very much
Well done, sir.
I’ve been banging my head against a vector database (milvus) for three days, thank you sir for pulling me out of purgatory
I haven’t found a full step-by-step tutorial on langchain importing several documents, then storing them on Pinecone, and finally using a chat app to interact with it in a deployment environment. Could you make an example about that?
This video is pretty close
ua-cam.com/video/h0DHDp1FbmQ/v-deo.html
Just ask the AI
@@TheGuillotineKing GPT has a knowledge cutoff of 2021, and langchain did not exist at that time, therefore GPT Codex has no clue about langchain
@@DvACtOid While that is true, you can copy and paste in new information and it will understand it; you just need to feed it the relevant information.
cool!!
great and very comprehensive as usual, Patrick, thank you. Now how can we use the new ChatGPT API (GPT-3.5 Turbo and GPT-4) and ChatOpenAI within the Agent framework?
❤
great explanation. One thing I don't understand is how my application will know whether to call the LLM directly or accept a response from the agent. For example: "what's the weather?" The app must be smart enough to know that the LLM doesn't know, then go and query an API using the agent... but how does it switch back and forth?
👍
Can't thank you enough. I appreciate you.
Hi Patrick, thanks for such amazing content. I have a question, though: if I am trying to build a Q&A bot trained on custom data, say a PDF, how would the system react or understand if the PDF contains images or mathematical formulations? Example: say I have a document on photosynthesis and I query my bot with "What is photosynthesis?" In return I want an explanation from the provided doc, along with any relevant image and formulae. How should we build something like this? Thanks
any more tutorials/crash courses people suggest for langchain and long-term memory for LLMs? Furthermore, how do you set it up for your users to use?
So using the OpenAI API key or any other keys, the requests we send to generate text are quite limited, right?
So whenever users ask their queries, will the model be able to answer them all, or will it throw the API limit error?
Hello Patrick, I am asking here as this is your latest video. I have seen your videos regarding Langchain, and I am curious how to optimize inference while using LLMs in Langchain with TensorRT or ONNX Runtime. In industry it's very important to save time as well as computation cost. In TensorRT, while using open-source models, we have techniques like quantization and a few more. So in Langchain, is there any way to do this?
Thank you so much for your video, would be great if you share a way to deploy it to a real website 🙏
so we can use more than one model in a single run?
Is there a way to combine document loader with google api? The only way I got it to work was to pipe the output from the google api into my vector db then do a query on the db.
Thanks for the video. Please correct me if I am wrong: with this tool we can easily build a chatbot for clients, for example when they ask about their order or a file update. The bot can answer based on memory and the documents uploaded regarding this client? I definitely request you to make a video on daily-life usage with details, please.
yes this is a good use case for it
Langchain makes much more sense to me now. My question is: how do we turn this into a chatbot on the web for someone to use?
Is there any limit on tokens in the Memory? How long can a conversation get before the model forgets or starts to hallucinate?
How can we use a ConversationChain with LLM chain and prompt templates?
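One way to picture it: a ConversationChain is essentially an LLMChain whose prompt template gets the running chat history injected into a {history} variable on every turn. A rough stdlib-only sketch of that idea (TinyConversationChain and echo_llm are made-up names for illustration, not LangChain code):

```python
# Template with a slot for the accumulated history, like
# ConversationChain's default prompt.
TEMPLATE = "The following is a friendly conversation.\n{history}\nHuman: {input}\nAI:"

class TinyConversationChain:
    def __init__(self, llm):
        self.llm = llm
        self.history = []  # the "memory": past turns as plain text

    def predict(self, user_input):
        # Render the template with everything said so far, call the model,
        # then append both sides of the turn to the buffer.
        prompt = TEMPLATE.format(history="\n".join(self.history), input=user_input)
        reply = self.llm(prompt)
        self.history.append(f"Human: {user_input}")
        self.history.append(f"AI: {reply}")
        return reply

def echo_llm(prompt):
    # Stand-in model: just counts how many human turns it has seen.
    return f"reply #{prompt.count('Human:')}"

chain = TinyConversationChain(echo_llm)
print(chain.predict("Hi"))     # -> reply #1
print(chain.predict("Again"))  # -> reply #2  (the history grew)
```

So combining memory with custom prompt templates mostly comes down to making sure your template has a history slot for the memory to fill.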
How does this compare to Haystack which has been around for years?
Is there a way to avoid using open ai API keys for the lang chain chatbot to work?
I keep getting the error: "ModuleNotFoundError: No module named 'langchain'"
It's definitely installed. Any idea why?
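A common cause of that error is that pip installed langchain into a different Python than the one running your script or notebook (easy to hit with multiple Pythons or virtual envs). One way to check — this is a general Python tip, nothing langchain-specific:

```python
import sys

# Print which interpreter is actually running this code. If pip
# installed langchain for a *different* interpreter, the import fails
# here even though "pip install" reported success.
print(sys.executable)

# Fix: install against this exact interpreter from a terminal, e.g.
#   <that printed path> -m pip install langchain
# Using "python -m pip" instead of bare "pip" ties the install to the
# interpreter you invoke it with.
```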
Hello Patrick (servus, moin), good content. Could you show how to get a German LLM, as in one for the German language?
appreciate
I think you should be masking your API keys in the videos
I would say this is a fake one
Can I have my offline models (e.g. downloaded Hugging Face models) talk with tools like Google Search via Agents through Langchain? Is that possible, or is it limited to the online models via API keys?
Why did you share your tokens in the clear?
I’m thinking of making tutorials, and I’m mindful of not including the credentials in the clear, but maybe I’m missing an easy solution.
I’m just too lazy editing them out. Obviously refreshed them after recording
then it's just an OpenAI API wrapper?
hi. please help me: how to create a custom model from many PDFs in the Persian language? thank you.
why is your token so exposed? what if someone uses it?
I'm pretty sure he revoked it before uploading the video...
I don't understand why prompt.format(question="Can Barack Obama have a conversation with George Washington?") is introduced and shown not working, and then with the chains you assign the same question to the question variable again and use it in an llm_chain. Is the prompt format useless?!
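It isn't useless: prompt.format(...) only renders the template into a string, which you would then have to pass to the LLM yourself; LLMChain bundles the template and the model so that formatting step happens for you. A stdlib-only sketch of the difference (fake_llm is a made-up stand-in for the real model, not LangChain code):

```python
# A plain format string standing in for a PromptTemplate.
template = "Question: {question}\nAnswer:"

def fake_llm(prompt):
    # Stand-in for the real model call.
    return f"(model saw {len(prompt)} chars)"

# 1) Manual route: format the prompt, then call the model yourself.
rendered = template.format(question="Can Barack Obama talk to George Washington?")
manual = fake_llm(rendered)

# 2) Chain route: template + model bundled; call it with the variable directly.
def llm_chain(question):
    return fake_llm(template.format(question=question))

chained = llm_chain("Can Barack Obama talk to George Washington?")

assert manual == chained  # same result -- the chain just saves the plumbing
print(chained)
```

So the video shows prompt.format first to demonstrate what the template produces, and the chain afterwards as the convenient way to wire that template to the model.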
One mistake: line 49 should be changed to: llm = OpenAI(openai_api_key='your_api_key', model_name='text-davinci-003', temperature=0.9). I used your code, but it failed.
Why is it using Wikipedia to look up the release date of a movie from 2006? It already has that information in its training data.
*facilitate (with a t) = erleichtern
Is this what Auto_GPT uses?
why not just use Open AI API directly?
now if only I can find a way to collect all the data I need, and the money to buy all the GPUs; in the meantime, Microsoft will integrate AI into all of their products for free.
test
What's the hype about this library? I don't get it. I coded half of this library myself for a small side project; it's so trivial what it does.
Just out of curiosity: is it not worth knowing? I'm not a developer, but I find this interesting, though. I assume you are an experienced developer and find this library not so helpful?
@@techsavvy9258 Yes, but the hype around it is crazy. I mean, I can't read a twitter/reddit post without hearing about it. It's everywhere.
Why does everyone in AI videos use notebooks? Why use a text file when we can create a tutorial with unnecessary functional and visual clutter, and present it in a way that's as far away from a production environment as possible?
Hello Patrick. I so appreciate your time on our behalf. I copied over your [LangChain.ipynb] and it works as shown in your video when I use my Google Colab account, until your step 81, where you construct the prompt. I see references elsewhere to issues with HuggingFace lately, and I am wondering if I should import a specific version of HuggingFace to precisely recreate your demonstration?
Really clear explanation, thank you.