LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners
- Published May 5, 2024
- In this video, we're going to explore the core concepts of LangChain and understand how the framework can be used to build your own large language model applications.
Code for the video is available here:
github.com/rabbitmetrics/lang...
▬▬▬▬▬▬ V I D E O C H A P T E R S & T I M E S T A M P S ▬▬▬▬▬▬
0:00 Introduction and overview
0:38 Why LangChain?
3:40 The value proposition of LangChain
4:50 Unpacking LangChain
5:42 LLM Wrappers
6:58 Prompts and Prompt Templates
7:45 Chains
9:00 Embeddings and VectorStores
11:40 An example of a LangChain Agent
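The LLM wrapper, prompt template, and chain components covered in the video fit together as one simple pattern. A minimal pure-Python sketch of that pattern (the class names and stub model here are illustrative, not LangChain's actual API):

```python
# Sketch of the prompt-template -> chain pattern; names are illustrative,
# not LangChain's real API.

class PromptTemplate:
    """Fills named variables into a template string."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM wrapper; returns a canned completion."""
    return f"[completion for: {prompt}]"

class Chain:
    """Pipes a formatted prompt into the model."""
    def __init__(self, template: PromptTemplate, llm):
        self.template = template
        self.llm = llm

    def run(self, **kwargs) -> str:
        return self.llm(self.template.format(**kwargs))

chain = Chain(PromptTemplate("Explain {concept} in one sentence."), fake_llm)
print(chain.run(concept="embeddings"))
```

The real framework adds model configuration, output parsing, and composition of multiple chains on top of this same prompt-format-then-call core.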
90% (or more) of tech tutorials start with code, without providing a conceptual overview, as you have done. This video is phenomenal...
Appreciate it! 🙏 Thanks for watching
I've noticed a significant lack of comprehensive resources that cover LangChain thoroughly. Your work on the subject is highly valued. Thank you
Yes, there's not enough books on it. The documentation is sparse
Agreed. This was the perfect introduction to LangChain for me at this time.
Your video really helps with understanding the basics of LangChain and provides good context as well. I'm looking forward to more such videos!
Thank you for the video. I think it gives a really good introduction to the topic without much distraction. Absolutely pleasant to follow even for a non-native speaker.
This is the best 101 video I found on the subject. Most of the other videos assume you're already somewhat familiar with the tools or aren't that beginner friendly.
I immediately subscribed to your awesome channel.
The explanation of LangChain was clear and concise. I really learned a lot in just 12 minutes.
Wow, this video on LangChain has all the pieces I have been searching for.
Thank you so much for taking time and making this awesome video.
This was an awesome and very straightforward video. I believe it's the most useful video about LangChain I've seen so far. Even people who don't know much about programming can follow it. Thanks so much!
One of the best QuickStart videos I've seen. A clear explanation combined with images. Many thanks.
Thank you! 🙏
Solid instructor. Good intro to LangChain at the right level of depth. For as quickly as he rips through a huge amount of information, he is still pretty easy to follow.
I've been watching a lot of AI videos, and this is definitely one of the best - well-organized and very clear
Thank you so much for covering all the components in just 13 mins. Though, it took an hour to learn and absorb everything :D
I found this to be very comprehensive and indeed useful.
I have been searching and searching for an explanation of how to do this exact thing!! Yasssssss thank yooouuu! ❤
Having read through LangChain's conceptual documentation, I must say this video is a great accompaniment. Very clear and well presented, and for a non-coder like myself, easy to understand. (I'd pay for a LangChain manual for 5-year-olds!) Subscribed.
Thank you! 🙏 Glad it was helpful
Companion*
Excellent intro, especially for an experienced programmer who can start using it after a single watch. Learned a lot in a short time with it. Thanks for making it.
You're welcome! Thanks for watching
Thank you. I have watched a lot of videos that attempt to explain LLMs and LangChain, but they fail to do it as succinctly as you have here. I was looking for a video I can share with my clients that explains what LLMs and LangChain are without being too dumbed down or too 'over their heads', and this video is perfect for that! So, again - thank you.
Glad it was helpful! I really appreciate the comment, thank you very much 🙏
This is an absolutely wonderful video on LangChain, and it's clear and concise. Could you do a tutorial for beginners??? 🙏🏼
I never comment on any video, but your flawless explanation made me. Thank you for such a masterpiece.
Appreciate the kind words! 🙏 Thanks for watching
Amazing tutorial and explanation, thank you!
This is gold! Thank you!❤
Thanks for the clarity, all the best
Thank you this is the info I was looking for.
This is a cool explanation of how LangChain works.
The coolest thing about enhancing LLMs like this is that locally-runnable models will be very interesting (no huge API call costs) and smarter than by default.
I would love local LLMs! Though I doubt that one as advanced as GPT-3.5/4 will be able to run locally for a few years because of the required computational power. I still look forward to the day that it becomes a thing though!
The costs are not the advantage. Hosting things on your own hardware is usually more expensive, especially if you need multiple models (an embedding model, an LLM, maybe text-to-speech). The advantage I see is that you could use custom models trained on your data
Enter neuromorphics: ua-cam.com/video/EXaMQejsMZ8/v-deo.html
Thank you very much for watching the video, a very well-structured clarification. 👍
Much appreciated! Thanks for watching
Thank you for explaining all the components. Highly appreciate it.
You're welcome! Thanks for watching
Very good explanation with a simple example to understand how it works! Thanks for this content
You're welcome! Thanks for watching
Great explanation! I learned a ton with your video
Simply fantastic. Thank you very much for explaining it so well.
Appreciate the comment! 🙏 Thanks for watching
This is amazing stuff. Would love to see a deeper dive into it.
Thanks for watching! I'm already working on some deep dive videos
Thanks for sharing the knowledge 👍
Fascinating. Thank you for this.
Excellent introduction! Thanks a lot :-)
Really fantastic, crisp explanation of LLMs, nothing more, nothing less.
Thank you!
Excellent! I've spent hours looking for this 13-minute tutorial. You da man! Thanks! 💪😁🌴🤙
Glad you found it! 😊 Thanks for watching
I inspected LangChain's code as soon as it was released, ran some tests, and never used it since. I'm surprised so many consider its limitations acceptable. Using embedding similarity as a query filter is like trying to answer a prompt by comparing every chunk of text to your prompt. It makes absolutely no sense, because oftentimes an answer looks nothing like a question, and/or the data needed to answer a question looks nothing like the question.
The purpose of the embedding layer in a transformer neural network is to prepare the prompt tensor for further processing through the remaining model layers. It's like bringing your prompt to the starting line of a long process to be answered, but instead of bringing just the prompt to the starting line, LangChain brings the entire text you're asking the question of to the starting line with your question, and asks them to look at each other and be like "hey, whoever looks like me, stand over here with me. OK, now the rest of you go away, and I'm going to ask ChatGPT to see which of you remaining can help answer me".
This is a sleight-of-hand trick, trying to replace everything that happens after the starting line with ChatGPT, but it doesn't really work for 2 big reasons: (1) ChatGPT's context is not large enough to transform both the entire text you're asking a question of plus your prompt, and the same limitation applies to batching; (2) your embeddings are incomplete because they were not created by the network, but by hacking only the first layer, in a sense.
Interesting take. I suspect most people don't understand the technology enough to see how it works. Would be helpful if you could make a video explanation
The biggest limitation right now, which we can't get around, is ChatGPT's context length. There is no way past that unless the context is greatly increased by OpenAI themselves, or we can train our own GPT-4 model on large texts
@@albertocambronero1326 I agree. It would be cool if there was a sort of "short-term memory model" that could hold personal data. I don't see expanding context length as a parsimonious solution. Model queries produce the best results when they are short and poignant. Any time you need to bring a ton of context to the prompt, it reduces the relative weight of the primary question. Imagine a patient friend who accepts questions with an unrestricted context length. They have never read The Great Gatsby (i.e. this would be like your personal data) - so to ask them a question about Jay Gatsby, the question must begin by reading them the entire Great Gatsby novel, followed by "the end... Where did Jay Gatsby go to college?" Then to ask them another Gatsby question requires reading them the novel again, and again. It would be awesome if there was a way to side-load a small personalized model that can plug into an LLM for extended capabilities.
@@dendrites Amazing response. I did not know what was going on behind the scenes with the context, and did not know model queries produce the best results when they are short and poignant.
I believe that if you send the novel, it would be stored in the context of the model, and then you would be able to ask multiple questions (?), or would the novel lose importance (weight) as more and more context is added?
Referring to the comment that started this thread, the complicated bit is training the model on a certain topic. Let's say we train the existing GPT-4 model on the book The Great Gatsby: it would probably know how to answer questions about the book, but it could not analyze the whole book to find linguistic trends (like what is the most talked-about topic in the book) unless you ALSO feed the model an article about "the most talked-about topic in the book".
I mean, I want my GPT-4 model to read the book and analyze the whole picture of what the book is about without needing extra articles about the book.
(My use case is to make GPT-4 analyze thousands of reviews and answer questions about them, but right now using NLP techniques sounds like a more doable option, at least until we have a way to extend GPT-4's knowledge.)
You can't simply say "it doesn't really work". It really depends on the use case. There are true limitations, and some creativity might be required to leverage it. The context size might be sufficient for smaller use cases, or it might be enough to break bigger questions down into smaller questions with their own contexts and then summarize, etc.
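For readers following this thread, the retrieval step being debated boils down to ranking pre-embedded chunks by cosine similarity to the question's embedding. A toy sketch with hand-made stand-in vectors (a real system would get these from an embedding model):

```python
# Toy illustration of embedding-similarity retrieval, the mechanism debated
# above. The vectors are invented stand-ins, not real model embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend chunk embeddings, keyed by the chunk text they came from.
chunks = {
    "Gatsby attended Oxford after the war.": [0.9, 0.1, 0.2],
    "The weather in the valley of ashes was grey.": [0.1, 0.8, 0.3],
}
# Pretend embedding of the question "Where did Gatsby study?"
question_vec = [0.85, 0.15, 0.25]

# Retrieval = pick the chunk whose embedding is most similar to the question's.
best = max(chunks, key=lambda c: cosine(chunks[c], question_vec))
print(best)
```

The critique above is precisely that this similarity ranking can miss chunks whose wording differs from the question; the counterpoint is that for many use cases the question and its answer are close enough in embedding space for it to work.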
EXCELLENT OVERVIEW: Please note that as of a week ago, Pinecone is NOT allowing new free accounts to do any operations! Please consider doing a similar vid FOSS end to end; there is a lot of interest. THANK YOU
Great content! Just what someone who just jumped into Gen AI would need to solve diverse use cases. Subscribed!
Appreciate it! Thanks for watching
Thank you very much, Rabbitmetrics! This tutorial is absolutely a gem for someone looking for a clear and concise overview of the main concepts!
Thank you! I'm glad it was helpful
Excellent video. THank you for sharing. Would love to see a video on Langchain Agents. Thank you
You're welcome! Thanks for watching
Awesome work thanks a lot!
This video really explains LangChain from A to Z. This is damn good, man.
Appreciate the comment! Thanks for watching
Excellent video for beginners who want to start with LangChain. Well explained.
Thanks! Glad it was useful
Your approach on this LangChain vid garnered you a subscriber! Thanks!
Appreciate the support! Thanks for watching
great overview and slides
Subscribed. Others have clamored for the notebook. I do as well. Thank you.
Fantastic overview of LangChain! Thank you @Rabbitmetrics
Excellent work!
Great explanation, thanks!
Great video! Thank you.
Thank you very much for the video! Really helpful to kickstart with LangChain
Glad it was helpful!
This is very insightful and straight to the point.
Thank you!
Wonderful video. Thanks.
Great explanatory video! Would you provide a link to the Jupyter notebook?
Thank you for this video. Now I can start working with LangChain. I have subscribed!
You're welcome! Thanks for watching
What a beautiful video. You Sir are a great teacher ! Thank You !
Thank you!
Amazing short video packed with knowledge. Just smashed that subscribe button!
Appreciate the support, thanks for watching!
this video was nice and gives a good intro to the topic
Great video, clear and simple. I wonder, if it were possible, how we could use this with Azure OpenAI
Great video! Do you know if pinecone works with other languages? For example to store and then retrieve?
Great explanation!
Awesome Explanation
Absolutely love the way you explained.
Thank you!
Your explanation is super clear to understand for me as a beginner. I want to know the brief steps of the code flow as titles, like:
1. Creating the environment to get keys, 2. etc. Can anyone answer?
Thank you for your contribution to the YouTube space
Appreciate it! Thanks for watching
This is really great video!
Highly appreciated video
amazing tutorial. thank you. you are amazing
great! I can use this video to teach my friend
Bloody brilliant!
This is excellent - I have a question re the splitting. Let's imagine you have email templates that average around 2000 tokens apiece, or IG captions with around 500 tokens - should things like this be embedded as one chunk, or what is the advantage of splitting them up into, say, 100-token splits?
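To make the splitting trade-off in that question concrete, here is a minimal character-based splitter with overlap, a common baseline (the sizes are arbitrary for illustration, not a recommendation, and this is not LangChain's own splitter):

```python
# Minimal fixed-size splitter with overlap between adjacent chunks.
# Sizes are illustrative; token-based splitting works the same way in spirit.

def split_text(text: str, chunk_size: int, overlap: int):
    """Return chunks of at most chunk_size characters; each chunk shares
    its first `overlap` characters with the end of the previous chunk."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "".join(chr(65 + i % 26) for i in range(250))  # 250-char dummy document
chunks = split_text(doc, chunk_size=100, overlap=20)
```

The trade-off: one big chunk keeps related sentences together (better for a self-contained email template), while small chunks give finer-grained retrieval and leave more room in the context window, at the risk of separating a match from the surrounding text that explains it. Overlap softens that boundary effect.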
Excellent overview - Thanks!
You're welcome, thanks for watching!
Great video, what is the first app that you were using to explain the diagram?
Brilliant. Structured and clear.
Thank you!
👍 Your explanation is so structured and clear. I can understand how LangChain works now, even though I don't know your Python code at all.
Thanks! 🙏 Glad it was helpful
Excellent intro. Harrison would approve!
Thank you!
Great job, what is the software that you use to draw these magic things?
Great. Would love to have access to the code as well. Thanks!
Thank you a lot, it really helped
This was so helpful! What are your thoughts on connecting LangChain and FlutterFlow?
Thanks a lot. Very good explanation.
Thanks!
Hi there, is there a way to combine steps 4 and 5? I assumed you would be using the Agent to answer questions on the autoencoder that we had focused on for the whole video, but then we just used it to do some maths. I think it would be useful if it could answer questions based on the embeddings we have in our index?
Really good video!
Great!!! Fantastic! Awesome! Thank you for sharing!
Thanks for watching!
just found your channel. Excellent Content - another sub for you sir!
Thank you I appreciate the support!
How are the relevant info (as a vector representation) and the question (as a vector representation) combined into a prompt to query the LLM? The example you show is a standard ChatGPT textual prompting scenario. The LLM will spit out what it knows and not what it does not know. So what application will this info be useful for? Also, is there any associated paper or benchmark that investigates the performance of extracting "relevant information" using this chunking method, or is it implementing some DL-based Q/A paper?
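On the first question here: in the usual retrieval-augmented pattern, only the retrieval step happens in vector space; the prompt sent to the LLM is plain text, with the retrieved chunks (as text, not vectors) stuffed ahead of the question. A hypothetical sketch of that prompt assembly (the wording of the instruction is invented):

```python
# Sketch of "prompt stuffing": the retrieved chunks are inserted as plain
# text ahead of the question. The instruction wording is an invented example.

def build_prompt(retrieved_chunks, question):
    """Combine retrieved text chunks and the question into one text prompt."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    ["Autoencoders compress inputs into a latent code."],
    "What does an autoencoder do?",
)
print(prompt)
```

This also answers "what is it useful for": the LLM is grounded in text it never saw in training (your documents), rather than relying on what it already knows.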
great video !
that's so amazing !!!
good instruction ...
Super helpful. I think a LangChain engineer could hold significant value in the current job market
I agree!
Impressive video, thanks! I will subscribe to your channel!
so well explained! :)
Thanks!
I am finding the challenge is the splitting of documents. Chunks need to be large enough to cater for the search but small enough for context windows. I tried using large pieces and then another split when trying to extract information. Not sure if it is the "right" way.
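The two-level split described in that comment (large pieces for context, a smaller split for matching) is in fact a recognized approach, often called parent/child chunking: index small chunks for retrieval, but return the large parent chunk they belong to. A minimal sketch with arbitrary sizes:

```python
# Sketch of parent/child chunking: match on small chunks, return large ones.
# Sizes are arbitrary for illustration.

def two_stage_split(text: str, parent_size: int, child_size: int):
    """Split into large parent chunks, then index each parent's small child
    chunks with a pointer back to the parent they came from."""
    parents = [text[i:i + parent_size] for i in range(0, len(text), parent_size)]
    index = []  # list of (child_chunk, parent_id)
    for pid, parent in enumerate(parents):
        for j in range(0, len(parent), child_size):
            index.append((parent[j:j + child_size], pid))
    return parents, index

parents, index = two_stage_split("x" * 300, parent_size=150, child_size=50)
# retrieval would embed and match the child chunks in `index`,
# then hand parents[pid] to the LLM as context
```

So the instinct in the comment is sound: small chunks make the similarity search precise, while the parent chunk preserves enough surrounding text for the extraction step.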
very nice
thank you
Very interesting... can we do this for image search? Query and similarity search for images, and image matching? Can we see embeddings of images like the text embeddings you presented? Thanks
Great video
Good 👍🏻
Detailed explanation. I'm looking for a solution for an application; can you please update your About page with a communication channel address. Thank you
Thank you