90% (or more) of tech tutorials start with code without providing a conceptual overview the way you have done here. This video is phenomenal...
Appreciate it! 🙏 Thanks for watching
Totally agree with this. I love the way this guy teaches the concepts.
I disagree. I almost never find good code examples, only concepts for dummies.
I've noticed a significant lack of comprehensive resources that cover LangChain thoroughly. Your work on the subject is highly valued. Thank you
Yes, there aren't enough books on it. The documentation is sparse.
Agreed. This was the perfect introduction, for me at this time, to LangChain.
I never comment on any video, but your flawless explanation made me. Thank you for such a masterpiece.
Appreciate the kind words! 🙏 Thanks for watching
This is the best 101 video I found on the subject. Most of the other videos assume you're already somewhat familiar with the tools or aren't that beginner friendly.
We need more videos like this, comprehensive for the general public and for newbies like me. Thank you!
Excellent intro, especially for an experienced programmer to start using after a single watch. Learned a lot in a short time with it. Thanks for making.
You're welcome! Thanks for watching
Thank you. I have watched a lot of videos that attempt to explain LLMs and LangChain but fail to do it as succinctly as you have here. I was looking for a video that I can share with my clients that explains what LLMs and LangChain are without being too dumbed down or too 'over their heads', and this video is perfect for that! So, again - thank you.
Glad it was helpful! I really appreciate the comment, thank you very much 🙏
Best video I have ever seen explaining LangChain so far 💯
The coolest thing about enhancing LLMs like this is that locally-runnable models will become very interesting (no huge API call costs) and smarter than they are by default.
I would love local LLMs! Though I doubt that one as advanced as GPT-3.5/4 will be able to run locally for a few years because of the required computational power. I still look forward to the day that it becomes a thing though!
The costs are not the advantage. Hosting things on your own hardware is usually more expensive, especially if you need multiple models (embedding model, LLM, maybe text-to-speech). The advantage I see is that you could use custom models trained on your data.
Enter neuromorphics: ua-cam.com/video/EXaMQejsMZ8/v-deo.html
One of the best quickstart streams I've seen. A clear explanation in combination with images. Many thanks.
Thank you! 🙏
EXCELLENT OVERVIEW: Please note that as of a week ago Pinecone is NOT allowing new free accounts to do any operations! Please consider doing a similar video FOSS end to end; there is a lot of interest. THANK YOU
Having read through LangChain's conceptual documentation, I must say this video is a great accompaniment. Very clear and well presented and, for a non-coder like myself, easy to understand. (I'd pay for a LangChain manual for 5-year-olds!) Subscribed.
Thank you! 🙏 Glad it was helpful
Companion*
Excellent video for beginners who want to start on Langchain. Well explained.
Thanks! Glad it was useful
"Great video! This explanation of LangChain's core concepts is super helpful for beginners looking to build LLM applications. Thanks for sharing the code link as well-makes it easy to follow along and experiment!"
One of the best 101 videos on LangChain out there, kudos to you!
This is an absolutely wonderful video on LangChain, and it's clear and concise. Could you do a tutorial for beginners??? 🙏🏼
Thank you for the video. I think it gives a really good introduction to the topic without much distraction. Absolutely pleasant to follow even for a non-native speaker.
This was an awesome and very straightforward video. I believe it's the most useful video about LangChain I've seen so far. Even people who don't know much about programming can follow. Thanks so much!
I've been watching a lot of AI videos, and this is definitely one of the best - well-organized and very clear.
Excellent coding examples. Please do more of these.
Please do a tutorial on how to summarise comments received on a UA-cam video.
I inspected LangChain's code as soon as it was released, ran some tests, and never used it since. I'm surprised so many consider its limitations acceptable. Using embedding similarity as a query filter is like trying to answer a prompt by comparing every chunk of text to your prompt. It makes absolutely no sense, because oftentimes an answer looks nothing like a question, and/or the data needed to answer a question looks nothing like the question.
The purpose of the embedding layer in a transformer neural network is to prepare the prompt tensor for further processing through the remaining model layers. It's like bringing your prompt to the starting line of a long process to be answered, but instead of bringing just the prompt to the starting line, LangChain brings the entire text you're asking the question of to the starting line with your question and asks them to look at each other and be like "hey, whoever looks like me, stand over here with me. Ok, now the rest of you go away and I'm going to ask ChatGPT to see which of you remaining can help answer me".
This is a sleight-of-hand trick, trying to replace everything that happens after the starting line with ChatGPT, but it doesn't really work for 2 big reasons: (1) ChatGPT's context is not large enough to transform both the entire text you're asking a question of + your prompt, and the same limitation applies to batching; (2) your embeddings are incomplete because they were not created by the network, but by simply hacking the first layer, in a sense.
Interesting take. I suspect most people don't understand the technology enough to see how it works. Would be helpful if you could make a video explanation
The biggest limitation right now that we can't get past is ChatGPT's context length. There is no way around it unless the context is greatly increased by OpenAI themselves, or we could train our GPT-4 model on large texts.
@@albertocambronero1326 I agree. It would be cool if there was a sort of "short-term memory model" that could hold personal data. I don't see expanding context length as a parsimonious solution. Model queries produce the best results when they are short and poignant. Any time you need to bring a ton of context to the prompt, it reduces the relative weight of the primary question. Imagine a patient friend who accepts questions with an unrestricted context length. They have never read The Great Gatsby (i.e. this would be like your personal data) - so to ask them a question about Jay Gatsby, the question must begin by reading them the entire Great Gatsby novel, followed by "the end... Where did Jay Gatsby go to college?" Then to ask them another Gatsby question requires reading them the novel again, and again. It would be awesome if there was a way to side-load a small personalized model that can plug into an LLM for extended capabilities.
@@langmod Amazing response. I did not know what was going on behind the scenes with the context, and did not know that model queries produce the best results when they are short and poignant.
I believe that if you send the novel it would be stored in the context of the model, and then you would be able to ask multiple questions (?), or would the novel lose importance (weight) as more and more context is added?
Referring to the comment that started this thread, the complicated bit about training the model on a certain topic: let's say we train the existing GPT-4 model on the book The Great Gatsby. It would probably know how to answer questions about the book, but it could not analyze the whole book to find linguistic trends (like what the most talked-about topic in the book is) unless you ALSO feed the model an article about "the most talked-about topic in the book".
I mean, I want my GPT-4 model to read the book and analyze the whole picture of what the book is about without needing extra articles about the book.
(My use case is to make GPT-4 analyze thousands of reviews and answer questions about them, but right now using NLP techniques sounds like a more doable option, at least until we have a way to extend GPT-4's knowledge.)
You can't simply say "it doesn't really work". It really depends on the use case. There are real limitations, and some creativity might be required to leverage it. The context size might be sufficient for smaller use cases, or it might be enough to break bigger questions down into smaller questions with their own contexts and then summarize, etc.
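For readers following the thread above, here is a minimal, self-contained sketch of the embedding-similarity filtering being debated. The `embed` function is a stand-in for a real embedding model and the chunk texts are made up; the point is only the mechanism: the query and each chunk get a vector, the closest chunks win, and only those are sent to the LLM along with the question.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would call an embedding model/API.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

chunks = [
    "A chunk about autoencoders and latent representations.",
    "A chunk about cooking pasta.",
    "A chunk about embedding vectors.",
]
question = "How does an autoencoder compress its input?"

q_vec = embed(question)
ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
top_chunks = ranked[:2]  # only these get pasted into the LLM prompt with the question
print(top_chunks)
```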
With immediate effect I have subscribed to your awesome channel.
The explanation of LangChain was clear and concise. I really learnt a lot in just 12 minutes.
Your video really helps understand the basics of langchain and provides a good context as well. I'm looking forward to more such videos !
Zero clutter. A Guru (remover of darkness) is one who can create chunks of knowledge in a sequence that is easier for the Shishya (student) to learn with ease and get into their neocortex without having to decode the vectors, which allows for carrying it to their multiple incarnations. Thank you Guru-ji.
I appreciate the comment - thanks for watching!
Your explanation is super clear and easy to understand for me as a beginner. I want to know the brief steps of the code flow as titles, e.g.
1. Creating the environment to get keys, 2. etc. Can anyone answer this?
I found this to be very comprehensive and indeed useful.
Really fantastic, crisp explanation of LLMs, nothing more, nothing less.
Thank you!
Thanks! This is the best high-level LangChain video I have watched. I'm not a programmer, but this overview is invaluable... it's clearly explained and demystified the dark arts of LangChain 😂😂... Question: what's the most straightforward way of converting website data into vectors? Is there some way to scrape URLs? Looking to create simple Q&A agents for small websites... thanks
I’m glad it was helpful, I appreciate the comment! Regarding scraping urls, take a look at the latest video I’ve uploaded ua-cam.com/video/I-beHln9Gus/v-deo.html In that video I’m using LangChain’s integration with Apify to extract content from my own webpage
@@rabbitmetrics Thanks. Yes, took a look. Will see what I can do. Came across Apify in my research yesterday! Will try to run this with LlamaIndex... I'm teaching myself! There aren't many Apify videos around, so thanks.
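For anyone else asking how to get website data into vectors, here is a hedged, illustrative sketch that swaps the Apify flow from the linked video for LangChain's generic WebBaseLoader. The URL, chunk sizes, and index are placeholders, and import paths differ in newer LangChain releases; this is the classic layout.

```python
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load the page, split it into chunks, embed the chunks into a local FAISS index.
docs = WebBaseLoader("https://example.com").load()            # example URL
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(docs)
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())  # needs OPENAI_API_KEY
print(vectorstore.similarity_search("What is this site about?", k=2))
```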
This video really explains A-Z about langchain. This is damn good man.
Appreciate the comment! Thanks for watching
Thank you very much, Rabbitmetrics! This tutorial is absolutely a gem for someone looking for a clear and concise overview of the main concepts!
Thank you! I'm glad it was helpful
Solid instructor. Good intro to LangChain at the right level of depth. For as quickly as he rips through a huge amount of information, he is still pretty easy to follow.
Thank you so much for covering all the components in just 13 mins. Though, it took an hour to learn and absorb everything :D
👍 Your explanation is so structured and clear. I can understand how LangChain works now, even though I don't know your Python code at all.
Thanks! 🙏 Glad it was helpful
I have been searching and searching for an explanation of how to do this exact thing!! Yasssssss thank yooouuu! ❤
Wow, this video on LangChain has all the pieces I have been searching for.
Thank you so much for taking time and making this awesome video.
I think you have to create the index in Pinecone explicitly. I did this with the following command 'pinecone.create_index(index_name, dimension=1024, metric="euclidean")' just before calling the search. I wonder if anyone else noticed this...
ty sir
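For readers hitting the same issue, a minimal sketch of the explicit index creation described two comments up, assuming the older pinecone-client v2 API (newer clients use a Pinecone class instead of module-level init). The index name and credentials are placeholders, and the dimension must match whichever embedding model you use, e.g. 1536 for OpenAI's text-embedding-ada-002.

```python
import pinecone

# Placeholders: fill in your own key and environment.
pinecone.init(api_key="<PINECONE_API_KEY>", environment="<PINECONE_ENV>")

index_name = "langchain-demo"  # hypothetical index name
if index_name not in pinecone.list_indexes():
    # Dimension must match the embedding model's output size.
    pinecone.create_index(index_name, dimension=1536, metric="cosine")
```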
Thank you very much for the video, a very well-structured clarification. 👍
Much appreciated! Thanks for watching
Excellent unpack! Can you please provide a link to this notebook?
Your approach in this LangChain vid garnered you a subscriber! Thanks!
Appreciate the support! Thanks for watching
Very good explanation with a simple example to understand how it works! Thanks for this content
You're welcome! Thanks for watching
Super helpful. I think "LangChain engineer" could hold significant value in the current job market.
I agree!
This is a cool explanation of how langchain works.
This video explains it better than some Udemy courses.
Excellent video. Thank you for sharing. Would love to see a video on LangChain Agents. Thank you
You're welcome! Thanks for watching
This is very insightful and straight to the point.
Thank you!
Thank you for explaining all the components. Highly appreciate it.
You're welcome! Thanks for watching
Thank you for your contribution through the UA-cam space
Appreciate it! Thanks for watching
What a beautiful video. You, sir, are a great teacher! Thank you!
Thank you!
This is amazing stuff. Would love to see a deeper dive into it.
Thanks for watching! I'm already working on some deep dive videos
Excellent intro. Harrison would approve!
Thank you!
Hi, this video is one of the best, but LangChain has since changed its modules and classes. Please update us with a new video - e.g. SimpleSequentialChain is not supported now!!
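For anyone hitting the same deprecation, a hedged sketch of the newer LCEL pipe style that replaces SimpleSequentialChain in recent LangChain releases. Package names, prompts, and the model are illustrative; it assumes the langchain-openai package and an OPENAI_API_KEY in the environment.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

name_prompt = ChatPromptTemplate.from_template(
    "Suggest one company name for a business that makes {product}."
)
slogan_prompt = ChatPromptTemplate.from_template(
    "Write a short slogan for the company {company_name}."
)

# The first step's string output is mapped into the second prompt's variable,
# which is what SimpleSequentialChain used to do implicitly.
chain = (
    name_prompt
    | llm
    | StrOutputParser()
    | (lambda name: {"company_name": name})
    | slogan_prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke({"product": "eco-friendly water bottles"}))
```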
Great content! Just what someone who just jumped into Gen AI would need to solve diverse use cases. Subscribed!
Appreciate it! Thanks for watching
How are the relevant info (as a vector representation) and the question (as a vector representation) combined into a prompt to query the LLM? The example you show is a standard ChatGPT textual prompting scenario. The LLM will spit out what it knows and not what it does not know. So what applications will this info be useful for? Also, is there any associated paper or benchmark that investigates the performance of extracting "relevant information" using this chunking method, or is it implementing some DL-based Q/A paper?
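On the first question: the vectors are only used to find the most similar chunks; what actually reaches the LLM is plain text, with the retrieved chunks pasted into the prompt above the question. A self-contained, illustrative sketch (the function name and example chunks are made up, not the video's code):

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    # Stuff the retrieved chunk texts and the question into one plain-text prompt.
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

chunks = [
    "An autoencoder compresses its input into a low-dimensional code.",
    "The decoder reconstructs the input from that code.",
]
print(build_rag_prompt("What does the encoder part of an autoencoder do?", chunks))
```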
Thanks for the clarity, all the best
Your summary of LangChain is very accurate. Do you have a PPT to share?
Thanks! Unfortunately, I don’t have a PPT. The video is made with FCPX
Great. Would love to have access to the code as well. Thanks!
Thank you for this video. Now I can start working with LangChain. Have subscribed!
You're welcome! Thanks for watching
Wonder how useful this might be to use with repos? Imagine if you could chat with GPT and it knew your entire codebase and could use specific examples in your conversations. Of course there are some security concerns, but the trade-off might be worth it.
I want to explore doing exactly this but with a private LLM instance rather than shipping data to GPT or elsewhere. I've been using gpt-engineer, which is super fun. When it can create a codebase and then iterate on it, more fun.
Thank you very much for the video! Really helpful to kickstart with LangChain.
Glad it was helpful!
This is so interesting. We (a German insurance company) want to develop our own copilot for employees. But we can't use the GPT-4 API given the fact that our company's data is sensitive and we don't want it to be public at OpenAI. Do you have a tip for this issue?
Yes, you would use a local (possibly fine-tuned) language model instead of GPT-4 - planning a video on this
@@rabbitmetrics I would be more than happy about a video covering this topic. Maybe using GPT4All.
If you look at OpenAI's privacy policy, you'll find that they explicitly state that data provided through the API is not recycled into the training data for OpenAI's systems unless you explicitly enable it; it's off by default. So yes, you can use OpenAI's systems through the API with proprietary information and it won't end up in the training data. A quick search will let you find their official announcements about this.
@@thebluriam you believe them ??? :D :D: D :D :D
@@markschrammel9513 Yes - they would be in breach of their own terms of service and legally liable. Also, the API has many fewer restrictions and controls vs ChatGPT; it's a totally different animal.
Thanks, friend. You answered a lot of questions here and in the repo, which helped me understand your presentation much better. Please share more. Have a great day.
You're welcome! Thanks for watching
Great explanation, thank you!
Would you mind sharing the code in a Colab notebook?
You're welcome! I've updated the video description with a link to the notebook
Amazing short video packed with knowledge. Just smashed that subscribe button!
Appreciate the support, thanks for watching!
Absolutely love the way you explained.
Thank you!
Excellent! I've spent hours looking for this 13-minute tutorial. You da man! Thanks! 💪😁🌴🤙
Glad you found it! 😊 Thanks for watching
Can you do a video on Autogen and LangChain? Maybe throw in SuperAgent as well.
Will likely be covering this in upcoming videos
Detailed explanation. I'm looking for a solution for an application; can you please update your About page with a contact address? Thank you
Simply fantastic. Thank you very much for explaining it so well.
Appreciate the comment! 🙏 Thanks for watching
Really appreciate this. For clarity though, the scheme you presented at 1:56 had nothing to do with the rest of the presentation. Correct?
The flowchart visualizes how you can extract information with LLMs from vector storage in LangChain
this video was nice and gives a good intro to the topic
Great explanatory video! Would you provide a link to this Jupyter notebook?
Great explanation! I learned a ton with your video
Brilliant. Structured and clear.
Thank you!
Excellent overview - Thanks!
You're welcome, thanks for watching!
just found your channel. Excellent Content - another sub for you sir!
Thank you I appreciate the support!
Thanks for sharing the knowledge 👍
Great video, clear and simple. I wonder, if it were possible, how we could use this with Azure OpenAI?
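A hedged sketch of one way to do that, assuming the langchain-openai package (parameter and environment-variable names differ across versions, so treat these as illustrative): swap the OpenAI chat model for an Azure OpenAI deployment. All values below are placeholders.

```python
import os
from langchain_openai import AzureChatOpenAI

# Placeholders: point these at your own Azure OpenAI resource.
os.environ["AZURE_OPENAI_API_KEY"] = "<your-azure-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"

llm = AzureChatOpenAI(
    azure_deployment="<your-gpt-deployment-name>",  # the deployment you created in Azure
    api_version="2024-02-01",                       # check your resource's supported versions
)
print(llm.invoke("Say hello from Azure OpenAI").content)
```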
This is gold! Thank you!❤
Awesome Explanation
Amazing tutorial and explanation, thank you!
Excellent work!
🎉🎉🎉 Great overview of LangChain. Can you do a similar video on using LangChain with Open Assistant and the Weaviate vector database?
Thanks! That’s a good idea for a video
Thank you very much for your video, it's so well explained! One question: is it really necessary to connect tools like Zapier with an API? Thanks to Zapier we can do a lot of things, but if we can already do it natively with the LangChain API, in what context is it useful?
Thanks again for your video, I'm very excited about what we can create!
It really depends on your use case, you can do a lot with only LangChain/OpenAI. If you are already using Zapier in your flow it might make sense to use Zapier AI Actions.
Great video, what is the first app that you were using to explain the diagram?
Thank you this is the info I was looking for.
The hack: we reduce what we have to feed to the LLM by filtering our data down using on-demand similarity search [with embeddings].
This was so helpful! What are your thoughts on connecting langchain and flutterflow?
Great explanation, thanks!
Great!!! Fantastic! Awesome! Thank you for sharing!
Thanks for watching!
Great! I can use this video to teach my friend.
Highly appreciated video
Nice video. Can it be updated to not use any external services? Think of dealing with sensitive data - you don't want to feed it to OpenAI for embeddings, or use online models.
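A hedged sketch of keeping the embedding step local: sentence-transformers embeddings plus a local FAISS index, so no document text is sent to OpenAI for embedding. Import paths vary across LangChain versions; these are the classic ones, and they require the sentence-transformers and faiss packages. The document text is illustrative.

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Embed locally with a small sentence-transformers model and store in FAISS.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_texts(
    ["Sensitive internal document text goes here."],
    embeddings,
)
print(vectorstore.similarity_search("internal document", k=1))
```

Note this only keeps the embedding step local; for a fully offline setup the LLM itself would also need to run locally (e.g. via GPT4All or llama.cpp).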
Thanks a lot. Very good explanation.
Thanks!
Subscribed. Others have clamored for the notebook. I do as well. Thank you.
This is excellent - I have a question re the splitting. Let's imagine you have email templates that average around 2000 tokens apiece, or IG captions with around 500 tokens - should things like this be embedded as one chunk, or what is the advantage of splitting them up into, say, 100-token splits?
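One common rule of thumb (not from the video): size chunks so a single retrieved chunk is self-contained enough to answer a question on its own. A 500-token caption may be fine as one chunk; 2000-token templates often retrieve more precisely when split with some overlap. A hedged sketch with LangChain's splitter, using made-up sizes (the classic import path; newer releases moved it to langchain_text_splitters):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=400,    # measured in characters, roughly ~100 tokens of English text
    chunk_overlap=50,  # overlap so sentences aren't cut mid-thought
)
email_template = "Subject: Welcome!\n\nHi there, thanks for signing up. " * 20
chunks = splitter.split_text(email_template)
print(len(chunks), repr(chunks[0][:60]))
```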
OpenAI API key usage is not free. I had to add a payment method before the keys started working. Without a valid payment method the keys don't return any results.
It was free at that time.
Early API users got an $18 credit, and one of my friends got a $5 credit about a month ago.
but now it's not free.
Yes, and I tried yesterday, then realised how quickly charges can add up if you don't control your usage.
You don't need to pay. You need to earn. Check out Bittensor and work there.
Elon 's working on that 😂
Use local LLMs.
Awesome work thanks a lot!
Great explanation!
Hi there, is there a way to combine steps 4 and 5? I assumed you would be using the Agent to answer questions on the autoencoder that we had focused on for the whole video, but then we just used it to do some maths. I think it would be useful if it could answer questions based on the embeddings we have in our index.
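One way to combine them - a hedged sketch using the classic LangChain agent APIs from around the video's era (newer releases moved these imports): wrap the vector-store Q&A chain as a tool, so the same agent can do maths and answer questions from the index. The index contents, tool name, and question are illustrative; it assumes an OPENAI_API_KEY and the faiss package.

```python
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.agents import AgentType, Tool, initialize_agent, load_tools

llm = OpenAI(temperature=0)

# A tiny stand-in index; in the video this would be the autoencoder notes.
vectorstore = FAISS.from_texts(
    ["An autoencoder learns a compressed representation of its input."],
    OpenAIEmbeddings(),
)
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())

# Give the agent both the maths tool and a retrieval tool over the index.
tools = load_tools(["llm-math"], llm=llm) + [
    Tool(
        name="autoencoder-notes",
        func=qa_chain.run,
        description="Answers questions about the indexed autoencoder notes.",
    ),
]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What does an autoencoder learn, and what is 17 * 23?")
```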
Excellent introduction! Thanks a lot :-)