LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners

  • Published 22 Dec 2024

COMMENTS •

  • @imtanuki4106
    @imtanuki4106 1 year ago +194

    90% (or more) of tech tutorials start with code, without providing a conceptual overview, as you have done. This video is phenomenal...

    • @rabbitmetrics
      @rabbitmetrics 1 year ago +6

      Appreciate it! 🙏 Thanks for watching

    • @ThangTran-hi3es
      @ThangTran-hi3es 6 months ago +1

      Totally agree with this. I love the way this guy teaches the concepts

    • @PatchworxStudios
      @PatchworxStudios 1 month ago +1

      I disagree. I almost never find good code examples, only concepts for dummies.

  • @adamgkruger
    @adamgkruger 1 year ago +261

    I've noticed a significant lack of comprehensive resources that cover LangChain thoroughly. Your work on the subject is highly valued. Thank you

    • @artic4873
      @artic4873 1 year ago +1

      Yes, there aren't enough books on it. The documentation is sparse

    • @andrewflewelling4294
      @andrewflewelling4294 10 months ago

      Agreed. This was the perfect introduction, for me at this time, to LangChain.

  • @HarshGupta-sf4rj
    @HarshGupta-sf4rj 9 months ago +8

    I never comment on any video, but your flawless explanation made me. Thank you for such a masterpiece.

    • @rabbitmetrics
      @rabbitmetrics 8 months ago

      Appreciate the kind words! 🙏 Thanks for watching

  • @zerorusher
    @zerorusher 1 year ago +10

    This is the best 101 video I found on the subject. Most of the other videos assume you're already somewhat familiar with the tools or aren't that beginner friendly.

  • @jayhu6075
    @jayhu6075 1 year ago +12

    One of the best QuickStart videos I've seen. A clear explanation in combination with images. Many thanks.

  • @sitedev
    @sitedev 1 year ago +67

    Thank you. I have watched a lot of videos that attempt to explain LLMs and LangChain as successfully as you have here but fail to do it as succinctly as you have. I was looking for a video that I can share with my clients that explains what LLMs and LangChain are without being too dumbed down or too 'over their heads', and this video is perfect for that! So, again - thank you.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago +9

      Glad it was helpful! I really appreciate the comment, thank you very much 🙏

  • @guitarcrax127
    @guitarcrax127 1 year ago +6

    Excellent intro, especially for an experienced programmer to start using after a single watch. Learned a lot in a short time. Thanks for making it.

  • @GerryRCS
    @GerryRCS 3 months ago

    We need more videos like this, comprehensive for the general public and for newbies like me. Thank you!

  • @danquixote6072
    @danquixote6072 1 year ago +59

    Having read through LangChain's conceptual documentation, I must say this video is a great accompaniment. Very clear and well presented, and for a non-coder like myself, easy to understand. (I'd pay for a LangChain manual for 5-year-olds!) Subscribed.

  • @manrajsingh8617
    @manrajsingh8617 1 month ago +1

    Best video I have ever seen explaining LangChain so far 💯

  • @GeoAIGREL
    @GeoAIGREL 5 months ago

    One of the best 101 videos on LangChain out there. Kudos to you!

  • @postnetworkacademy
    @postnetworkacademy 1 month ago

    "Great video! This explanation of LangChain's core concepts is super helpful for beginners looking to build LLM applications. Thanks for sharing the code link as well - makes it easy to follow along and experiment!"

  • @steve_wk
    @steve_wk 1 year ago +20

    I've been watching a lot of AI videos; this is definitely one of the best - well-organized and very clear

  • @ranjithpals
    @ranjithpals 1 year ago +4

    Your video really helps understand the basics of LangChain and provides good context as well. I'm looking forward to more such videos!

  • @maya-akim
    @maya-akim 1 year ago +15

    This was an awesome and very straightforward video. I believe it's the most useful video about LangChain I've seen so far. Even people who don't know much about programming can follow. Thanks so much!

  • @4.0.4
    @4.0.4 1 year ago +22

    The coolest thing about enhancing LLMs like this is that locally-runnable models will be very interesting (no huge API call costs) and smarter than they are by default.

    • @ignfishiv
      @ignfishiv 1 year ago +4

      I would love local LLMs! Though I doubt that one as advanced as GPT-3.5/4 will be able to run locally for a few years because of the required computational power. I still look forward to the day that it becomes a thing though!

    • @Ltsoftware3139
      @Ltsoftware3139 1 year ago +9

      The costs are not the advantage. Hosting things on your own hardware is usually more expensive, especially if you need multiple models (embedding model, LLM, maybe text-to-speech). The advantage I see is that you could use custom models trained on your data

    • @oryxchannel
      @oryxchannel 1 year ago +1

      Enter neuromorphics: ua-cam.com/video/EXaMQejsMZ8/v-deo.html

  • @chukypedro818
    @chukypedro818 1 year ago +5

    I have subscribed to your awesome channel with immediate effect.
    The explanation of LangChain was clear and concise. I really learned a lot in just 12 minutes.

  • @anandakumar31
    @anandakumar31 10 months ago +1

    Excellent video for beginners who want to start on LangChain. Well explained.

  • @johnshaff
    @johnshaff 1 year ago +6

    I inspected LangChain's code as soon as it was released, ran some tests, and never used it since. I'm surprised so many consider its limitations acceptable. Using embedding similarity as a query filter is like trying to answer a prompt by comparing every chunk of text to your prompt. It makes absolutely no sense, because often an answer looks nothing like a question, and/or the data needed to answer a question looks nothing like the question.
    The purpose of the embedding layer in a transformer neural network is to prepare the prompt tensor for further processing through the remaining model layers. It's like bringing your prompt to the starting line of a long process to be answered, but instead of bringing just the prompt to the starting line, LangChain brings the entire text you're asking the question of to the starting line with your question and asks them to look at each other and be like "hey, whoever looks like me, stand over here with me. OK, now the rest of you go away, and I'm going to ask ChatGPT to see which of you remaining can help answer me".
    This is a sleight-of-hand trick, trying to replace everything that happens after the starting line with ChatGPT, but it doesn't really work for 2 big reasons: (1) ChatGPT's context is not large enough to transform both the entire text you're asking a question of + your prompt, and the same limitation applies to batching; (2) your embeddings are incomplete because they were not created by the full network, but by hacking just the first layer, in a sense
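
    The retrieval pattern being debated in this thread can be sketched in a few lines of plain Python. The vectors below are made-up toy embeddings, not real model output; real systems get them from an embedding model and store them in a vector database:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" of three document chunks and a query.
chunks = {
    "chunk_a": [0.9, 0.1, 0.0],
    "chunk_b": [0.1, 0.9, 0.2],
    "chunk_c": [0.8, 0.2, 0.1],
}
query = [1.0, 0.1, 0.0]

# Rank chunks by similarity to the query and keep the top 2 - this is all
# that embedding-based retrieval does before the surviving chunks are
# stuffed into the LLM prompt.
top = sorted(chunks, key=lambda k: cosine_similarity(chunks[k], query),
             reverse=True)[:2]
print(top)  # → ['chunk_a', 'chunk_c']
```

    Whether this nearest-neighbour filtering is "good enough" is exactly the disagreement above: it only works when relevant chunks happen to sit near the question in embedding space.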

    • @MeatCatCheesyBlaster
      @MeatCatCheesyBlaster 1 year ago

      Interesting take. I suspect most people don't understand the technology enough to see how it works. Would be helpful if you could make a video explanation

    • @albertocambronero1326
      @albertocambronero1326 1 year ago

      The biggest limitation right now that we can't get around is ChatGPT's context length. There is no way around that unless the context is greatly increased by OpenAI themselves, or we could train our GPT-4 model on large texts

    • @langmod
      @langmod 1 year ago +3

      @@albertocambronero1326 I agree. It would be cool if there was a sort of "short-term memory model" that could hold personal data. I don't see expanding context length as a parsimonious solution. Model queries produce the best results when they are short and poignant. Any time you need to bring a ton of context to the prompt, it reduces the relative weight of the primary question. Imagine a patient friend who accepts questions with an unrestricted context length. They have never read The Great Gatsby (i.e. this would be like your personal data) - so to ask them a question about Jay Gatsby, the question must begin by reading them the entire Great Gatsby novel, followed by "the end... Where did Jay Gatsby go to college?" Then to ask them another Gatsby question requires reading them the novel again, and again. It would be awesome if there was a way to side-load a small personalized model that can plug into an LLM for extended capabilities.

    • @albertocambronero1326
      @albertocambronero1326 1 year ago +1

      @@langmod Amazing response. I did not know what was going on behind the scenes with the context, and did not know model queries produce the best results when they are short and poignant.
      I believe that if you send the novel it would be stored in the context of the model, and then you would be able to ask multiple questions (?), or would the novel be losing importance (weight) as more and more context is added?
      Referring to the comment that started this thread, the complicated bit about training the model on a certain topic: let's say we train the existing GPT-4 model on The Great Gatsby. It would probably know how to answer questions about the book, but it could not analyze the whole book to find linguistic trends (like what is the most talked-about topic in the book) unless you ALSO feed the model an article about "the most talked-about topic in the book".
      I mean, I want my GPT-4 model to read the book and analyze the whole picture of what the book is about without needing extra articles about the book.
      (My use case is to make GPT-4 analyze thousands of reviews and answer questions about them, but right now using NLP techniques sounds like a more doable option, at least until we have a way to extend GPT-4's knowledge)

    • @ugaaga198
      @ugaaga198 1 year ago

      You can't simply say "it doesn't really work". It really depends on the use case. There are true limitations, and some creativity might be required to leverage it. The context size might be sufficient for smaller use cases, or it might be sufficient to break down bigger questions into smaller questions with their own contexts and then summarize, etc.

  • @ALEJANDV1
    @ALEJANDV1 1 year ago +11

    Thank you very much, Rabbitmetrics! This tutorial is absolutely a gem for someone looking for a clear and concise overview of the main concepts!

  • @garratygarret8559
    @garratygarret8559 1 year ago +3

    Thank you for the video. I think it gives a really good introduction to the topic without much distraction. Absolutely pleasant to follow even for a non-native speaker.

  • @sujoyroy3157
    @sujoyroy3157 1 year ago +4

    How is the relevant info (as a vector representation) and question (as a vector representation) combined as a prompt to query the LLM? The example you show is a standard ChatGPT textual prompting scenario. The LLM will spit out what it knows and not what it does not know. So what application will this info be useful for? Also is there any associated paper or benchmark that investigates the performance of extracting "relevant information" using this chunking method or is it implementing some DL based Q/A paper?

  • @ejclearwater
    @ejclearwater 10 months ago

    I have been searching and searching for an explanation of how to do this exact thing!! Yasssssss thank yooouuu! ❤

  • @Janeilliams
    @Janeilliams 1 year ago

    Wow, this video on LangChain has all the pieces I have been searching for.
    Thank you so much for taking the time to make this awesome video.

  • @nickfergis1425
    @nickfergis1425 1 year ago

    Solid instructor. A good intro to LangChain at the right level of depth. For as quickly as he rips through a huge amount of information, he is still pretty easy to follow.

  • @rakeshmr3329
    @rakeshmr3329 11 months ago

    Really fantastic, crisp explanation of LLMs - nothing more, nothing less.

  • @spicer41282
    @spicer41282 1 year ago

    Your approach on this Langchain vid garnered you a Subscriber! Thanks!

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      Appreciate the support! Thanks for watching

  • @ZorroS-b9z
    @ZorroS-b9z 23 days ago +1

    Thank you. The information is presented really well for a 5-year-old like me

  • @dudefromsa
    @dudefromsa 1 year ago +1

    I found this to be very comprehensive and indeed useful.

  • @mnava4290
    @mnava4290 3 months ago

    Excellent coding examples. Please do more of these.
    Please do a tutorial on how to summarize comments received on a YouTube video.

  • @shyama5612
    @shyama5612 1 year ago

    Excellent intro. Harrison would approve!

  • @bharatpanchal8582
    @bharatpanchal8582 1 year ago

    Thank you for explaining all the components. Highly appreciate it.

    • @rabbitmetrics
      @rabbitmetrics 11 months ago

      You're welcome! Thanks for watching

  • @Thisnthat979
    @Thisnthat979 1 year ago +1

    Hi, I am new to Python. How do I get to the screen at 5:00 to edit the environment file? I installed all the components, then got stuck. Thank you!

  • @daffertube
    @daffertube 1 year ago

    How do you store an API key in the .env? I created the .env file in the root and I get error 500 when trying to open the .env, and even ChatGPT doesn't know why.
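
    For context on this question: a .env file is never opened in the browser (requesting it over HTTP is what typically produces errors); it is read at process startup. Most LangChain tutorials use the third-party python-dotenv package (`from dotenv import load_dotenv`), but the mechanism can be sketched with the standard library alone. The key name and value below are made up for illustration:

```python
import os
import tempfile

# A .env file is just KEY=VALUE lines.
env_text = 'OPENAI_API_KEY="sk-example-not-a-real-key"\n'

# Write a throwaway .env for demonstration.
path = os.path.join(tempfile.mkdtemp(), ".env")
with open(path, "w") as f:
    f.write(env_text)

# Minimal parser: split each line on the first '=' and strip quotes,
# then export the pair as an environment variable.
with open(path) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            os.environ[key] = value.strip('"')

print(os.environ["OPENAI_API_KEY"])  # → sk-example-not-a-real-key
```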

  • @ernikitamalviya
    @ernikitamalviya 1 year ago

    Thank you so much for covering all the components in just 13 minutes. Though it took an hour to learn and absorb everything :D

  • @repairstudio4940
    @repairstudio4940 1 year ago

    This is an absolutely wonderful video on LangChain, and it's clear and concise. Could you do a tutorial for beginners??? 🙏🏼

  • @PhoebePhuu
    @PhoebePhuu 9 months ago

    Your explanation is super clear to understand for me as a beginner. I want to know the brief steps of the code flow as titles, like:
    1. Creating the environment to get keys, 2. etc. Can anyone answer this?

  • @alaad1009
    @alaad1009 1 year ago

    What a beautiful video. You, Sir, are a great teacher! Thank you!

  • @ratral
    @ratral 1 year ago +1

    Thank you very much for the video, a very well-structured clarification. 👍

  • @monkeyy992
    @monkeyy992 1 year ago +3

    This is so interesting. We (a German insurance company) want to develop our own copilot for employees. But we can't use the GPT-4 API given the fact that our company's data is sensitive and we don't want it to be public at OpenAI. Do you have a tip for this issue?

    • @rabbitmetrics
      @rabbitmetrics 1 year ago +1

      Yes, you would use a local (possibly fine-tuned) language model instead of GPT-4 - planning a video on this

    • @monkeyy992
      @monkeyy992 1 year ago

      @@rabbitmetrics I would be more than happy about a video concerning this topic. Maybe using GPT4All

    • @thebluriam
      @thebluriam 1 year ago

      If you look at OpenAI's privacy policy, you'll find that they explicitly state that data provided through the API is not recycled into the training data for OpenAI's systems unless you explicitly enable it; it's off by default. So yes, you can use OpenAI's systems through the API with proprietary information and it won't end up in the training data. A quick search will let you find their official announcements about this.

    • @markschrammel9513
      @markschrammel9513 2 months ago

      @@thebluriam you believe them ??? :D :D :D :D

    • @thebluriam
      @thebluriam 2 months ago

      @@markschrammel9513 Yes, they would be in breach of their own terms of service and legally liable. Also, the API has many fewer restrictions and controls vs ChatGPT; it's a totally different animal

  • @saddam_codes
    @saddam_codes 1 year ago

    This video really explains A-Z about LangChain. This is damn good, man.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      Appreciate the comment! Thanks for watching

  • @tosinlitics949
    @tosinlitics949 1 year ago

    Amazing short video packed with knowledge. Just smashed that subscribe button!

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      Appreciate the support, thanks for watching!

  • @TheAlokgupta83in
    @TheAlokgupta83in 1 year ago

    This is a cool explanation of how LangChain works.

  • @dozieweon
    @dozieweon 1 year ago

    This is very insightful and straight to the point.

  • @miguelangelromerogutierrez9626

    Very good explanation with a simple example to understand how it works! Thanks for this content

  • @conne637
    @conne637 1 year ago +1

    Can someone explain to me how the question and the relevant (personal) data are combined when prompting the model? Also, if I understand this correctly, using LangChain would enlarge the prompt and hence the number of tokens needed / cost? Thanks in advance!
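
    On the combining step: the usual retrieval-augmented pattern simply concatenates the retrieved chunks and the question into one text prompt, which also answers the cost question (every retrieved chunk adds tokens). A library-free sketch; the template wording and example data here are illustrative, not LangChain's actual internal prompt:

```python
# Chunks returned by the vector-store similarity search (toy data).
retrieved_chunks = [
    "An autoencoder learns a compressed representation of its input.",
    "The encoder maps the input to a lower-dimensional latent space.",
]
question = "What does the encoder part of an autoencoder do?"

# The retrieved text and the question are stitched into one string -
# the LLM never sees the vectors themselves, only this prompt.
prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n".join(retrieved_chunks) + "\n\n"
    "Question: " + question
)
print(prompt)
```

    Because the context is prepended as plain text, the final prompt is strictly longer (and thus costs more tokens) than the question alone.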

  • @Swanidhi
    @Swanidhi 1 year ago +2

    Great content! Just what someone who just jumped into Gen AI would need to solve diverse use cases. Subscribed!

  • @prometheususa
    @prometheususa 1 year ago +15

    I think you have to create the index in Pinecone explicitly. I did this with the following command 'pinecone.create_index(index_name, dimension=1024, metric="euclidean")' just before calling the search. I wonder if anyone else noticed this...

  • @limster5
    @limster5 1 year ago

    Thank you for this video. Now I can start working with LangChain. Have subscribed!

  • @zenfoil
    @zenfoil 10 months ago

    👍 Your explanation is so structured and clear. I can understand how LangChain works now, even though I don't know your Python code at all.

    • @rabbitmetrics
      @rabbitmetrics 10 months ago

      Thanks! 🙏 Glad it was helpful

  • @hardikmehta8308
    @hardikmehta8308 1 year ago

    Fantastic overview of Langchain! Thank you @Rabbitmetrics

  • @ramp2011
    @ramp2011 1 year ago

    Excellent video. Thank you for sharing. Would love to see a video on LangChain Agents. Thank you

  • @MrAloha
    @MrAloha 1 year ago +1

    Excellent! I've spent hours looking for this 13 minute tutorial. You fa man! Thanks! 💪😁🌴🤙

    • @rabbitmetrics
      @rabbitmetrics 1 year ago +1

      Glad you found it! 😊 Thanks for watching

  • @mohajeramir
    @mohajeramir 1 year ago

    Really appreciate this. For clarity though, the scheme you presented at 1:56 had nothing to do with the rest of the presentation. Correct?

    • @rabbitmetrics
      @rabbitmetrics 1 year ago +1

      The flowchart visualizes how you can extract information with LLMs from vector storage in LangChain

  • @cymaticchaos2425
    @cymaticchaos2425 1 year ago +1

    Zero clutter. A Guru (remover of darkness) is one who can create chunks of knowledge in a sequence that is easier for the Shishya (student) to learn with ease and get it to their neocortex without having to decode the vectors, that allows for carrying it to their multiple incarnations. Thank you Guru-ji.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      I appreciate the comment - thanks for watching!

  • @CinematicHeartstrings
    @CinematicHeartstrings 10 months ago

    Thank you very much for the video! Really helpful for kickstarting with LangChain

  • @Bragheto
    @Bragheto 1 year ago +2

    This is gold! Thank you!❤

  • @Mr.D4yz
    @Mr.D4yz 1 year ago +1

    How safe is proprietary data when using this? Is the data saved by OpenAI?

  • @DrAIScience
    @DrAIScience 11 months ago

    Very interesting. Can we do this for image search? Query and similarity search for image matching? Can we see embeddings of images like the text embeddings you presented? Thanks

  • @Amr-fc1kd
    @Amr-fc1kd 2 months ago

    Very interesting information, but I didn't get what the advantage would be of using LangChain + Pinecone vs using OpenAI Assistants + vector storage?

  • @ciaranryan9485
    @ciaranryan9485 1 year ago +2

    Hi there, is there a way to combine steps 4 and 5? I assumed you would be using the Agent to answer questions on the autoencoder that we had focused on for the whole video, but then we just used it to do some maths. I think it would be useful if it could answer questions based on the embeddings we have in our index?

  • @emptiness116
    @emptiness116 1 year ago +1

    Thank you for your contribution to the YouTube space

  • @youngsdiscovery8909
    @youngsdiscovery8909 1 year ago +1

    Super helpful. I think a LangChain engineer could hold significant value in the current job market

  • @musumo1908
    @musumo1908 1 year ago +1

    Thanks! This is the best high-level LangChain video I have watched. I'm not a programmer, but this overview is invaluable... it's clearly explained and demystified the dark arts of LangChain 😂😂... Question: what's the most straightforward way of converting website data into vectors? Is there some way to scrape URLs... looking to create simple Q&A agents for small websites... thanks

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      I’m glad it was helpful, I appreciate the comment! Regarding scraping urls, take a look at the latest video I’ve uploaded ua-cam.com/video/I-beHln9Gus/v-deo.html In that video I’m using LangChain’s integration with Apify to extract content from my own webpage

    • @musumo1908
      @musumo1908 1 year ago

      @@rabbitmetrics Thanks. Yes, took a look. Will see what I can do. Came across Apify in my research yesterday! Will try to run this with LlamaIndex... I'm teaching myself! There aren't many Apify videos around, so thanks

  • @jimg8296
    @jimg8296 10 months ago

    Nice video. Can it be updated to not use any external services? Think of dealing with sensitive data: you don't want to feed it to OpenAI for embeddings, or use online models.

  • @torontoyes
    @torontoyes 1 year ago +1

    Can you do a video on Autogen and LangChain? Maybe throw in SuperAgent as well.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      Will be likely covering this in upcoming videos

  • @HannesSmit-gl7qq
    @HannesSmit-gl7qq 8 months ago

    Hello, I just ran your script around 05:53 with python3 and pip3. However, it says ` Could not import openai python package. Please install it with `pip install openai`. (type=value_error)`. Which version of that dependency should I add to get a coherent project with your code?

  • @bwilliams060
    @bwilliams060 1 year ago +6

    Excellent unpack! Can you please provide a link to this notebook?

  • @axelrein9901
    @axelrein9901 1 year ago +1

    This is amazing stuff. Would love to see a deeper dive into it.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago +2

      Thanks for watching! I'm already working on some deep dive videos

  • @auslei
    @auslei 1 year ago

    I am finding the challenge is the splitting of documents. Chunks need to be large enough to cater for the search but small enough for context windows. I tried to use large pieces and another split when trying to extract information. Not sure if it is the "right" way.
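
    The size/overlap trade-off described here is what LangChain's text splitters manage; the basic sliding-window idea can be sketched without any libraries (the chunk_size and overlap values below are arbitrary, not recommendations):

```python
def split_text(text, chunk_size=40, overlap=10):
    """Split text into fixed-size character chunks with overlap, so a
    sentence cut at one boundary still appears whole in a neighbor."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "LangChain splits documents into chunks before embedding them."
chunks = split_text(doc, chunk_size=30, overlap=10)
print(chunks)
```

    Larger chunks keep more context per retrieved hit but eat into the model's context window; the overlap is what keeps boundary sentences searchable in at least one chunk.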

  • @muhammadhaseeb2895
    @muhammadhaseeb2895 1 year ago

    Absolutely love the way you explained it.

  • @محمودجلیلی-ش1م
    @محمودجلیلی-ش1م 1 year ago

    I want to translate a PDF file larger than 5 MB using LangChain while preserving the formatting of the text and images within the PDF. Please guide me on how to do this, and if possible, provide a tutorial on this matter. Thank you.

  • @PremkumarD
    @PremkumarD 7 months ago

    Maybe this is a dumb question: at 7:54, when you say llm=llm in that line, did you define a variable called llm somewhere?

  • @Glimmer-t44
    @Glimmer-t44 1 year ago +116

    OpenAI API key usage is not free. I had to add a payment method before the keys started working. Without a valid payment method the keys don't return any results.

    • @AbdullahWins
      @AbdullahWins 1 year ago +15

      It was free at that time.
      Early API users got an $18 credit, and one of my friends got a $5 credit about a month ago.
      But now it's not free.

    • @jameswei2253
      @jameswei2253 1 year ago +7

      Yes, and I tried yesterday, then realized how quickly charges can add up if there's no control over usage.

    • @AntonioLop14
      @AntonioLop14 10 months ago

      You don't need to pay. You need to earn. Check Bittensor and work there

    • @collinspo
      @collinspo 9 months ago +1

      Elon's working on that 😂

    • @bazookaman1353
      @bazookaman1353 9 months ago +3

      Use local llms.

  • @xGogita
    @xGogita 10 months ago

    Brilliant. Structured and clear.

  • @shadhumydee9730
    @shadhumydee9730 11 months ago

    I have a question to understand the difference between RAG and LangChain. Since you have used Pinecone to store data which can later be used as context for the LLM, can I call this an implementation of RAG using LangChain?

  • @evam796
    @evam796 10 months ago +1

    How is LangChain still relevant now that knowledge, LLMs, and actions are available in OpenAI's Custom GPTs?

    • @user-pp4gh4gb2w
      @user-pp4gh4gb2w 4 months ago +1

      LangChain offers much more flexibility in comparison. Moreover, ChatGPT isn't the only model out there. LangChain acts as a good base for all apps based on open-source models

  • @babakbandpey
    @babakbandpey 1 year ago +1

    Thanks, friend. You answered a lot of questions here, and the repo helped me understand your presentation much better. Please share more. Have a great day.

  • @hectorprx
    @hectorprx 1 year ago

    Thanks for the clarity, all the best

  • @ChrisHarasty
    @ChrisHarasty 1 year ago

    Excellent overview - Thanks!

  • @KayYesYouTuber
    @KayYesYouTuber 1 year ago

    Simply fantastic. Thank you very much for explaining it so well.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      Appreciate the comment! 🙏 Thanks for watching

  • @anandthanumalayan
    @anandthanumalayan 7 months ago

    How is similarity search returning 4 chunks an answer? How is this extending GPT-3.5 to work on top of my data? Even if I imagine those texts are my custom data, LangChain just returns similar chunks of text? Which Elasticsearch can do as well?!?

  • @JoshuaDawson-e6x
    @JoshuaDawson-e6x 1 year ago +1

    I wonder how useful this might be to use with repos. Imagine you could chat with GPT and it knows your entire codebase and could use specific examples in your conversations. Of course there are some security concerns, but the trade-off might be worth it.

    • @twinlens
      @twinlens 1 year ago

      I want to explore doing exactly this, but with a private LLM instance rather than shipping data to GPT or elsewhere. I've been using gpt-engineer, which is super fun. When it can create a codebase and then iterate on it, even more fun.

  • @bunnihilator
    @bunnihilator 1 year ago +1

    Can these LLMs return entity data with all its attributes, or do they only return conversation text?

  • @mhm7129
    @mhm7129 1 year ago

    Excellent work!

  • @SokoBuilds
    @SokoBuilds 1 year ago

    One thing I noticed is that the number of dimensions in the vector store isn't the amount required for ada-002. Wondering why that is, as it could inhibit performance.

  • @gnanaprakash-ravi
    @gnanaprakash-ravi 7 months ago +1

    Hi, this video is one of the best, but LangChain has since changed its modules and classes. Please update us with a new video; for example, SimpleSequentialChain is not supported now!!

  • @leonardosouzaconradodesant6213

    Great!!! Fantastic! Awesome! Thank you for sharing!

  • @Skandawin78
    @Skandawin78 7 months ago

    How do you fetch only the relevant chunk from the vector database and then send it to the LLM along with the user query for the answer? That part is not clear

  • @ledjon
    @ledjon 1 year ago

    I didn't understand the interaction at the end. What is doing the "thoughts", and how is the Python code that needs to run determined?

  • @WilmanArambillete
    @WilmanArambillete 1 year ago

    Great video, thanks for sharing. I have a question; I am a newbie at this. Why do we need to do the query in the vector DB? I mean, the idea is to use an LLM, inject my data which could be stored in a DB, and then ask the model, which would include my data, to get a response, right? But why do I need to do a semantic search on my DB then? I am confused

    • @florinfilip6355
      @florinfilip6355 8 months ago +1

      Wilman, embeddings must be stored somewhere (typically a vector database) in order to retrieve the documents relevant to the question quickly using the indexes.

  • @RobbieMraz
    @RobbieMraz 8 months ago

    Thank you, this is the info I was looking for.

  • @transcenderstarship1254
    @transcenderstarship1254 1 year ago

    At 6:38 you say ChatGPT 3.5 was used instead of ChatGPT 4, which is the latest version. You say this is because of some current limitations. What are those limitations? I have used both and there would need to be a really good reason not to use version 4. Please advise.

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      At the time of the recording, the OpenAI service was unstable when using GPT-4. Now everything seems to work fine and I only use GPT-4

  • @peanuts4132
    @peanuts4132 1 year ago

    So what do I do when I want to use a chat model + a non-chat model? I want to have an OpenAI chat model with the ability to output audio files, but the audio LLM does not output text, which Chainlit expects. It outputs a bytes object, which could be converted to base64. Am I supposed to chain it with the OpenAI chat model or create a BaseTool that calls the other LLM?

  • @mwonderlin
    @mwonderlin 1 year ago +2

    This is excellent - I have a question re the splitting. Let's imagine you have email templates that average 2000 tokens apiece, or IG captions with around 500 tokens - should things like this be embedded as one chunk, or what is the advantage of splitting them into, say, 100-token splits?

  • @임성준-t4t
    @임성준-t4t 11 months ago

    Thanks for the video, but I think it might need some money. Do I have to use the OpenAI API for the model? Can I use Llama?

  • @p80mod
    @p80mod 1 year ago

    I would like to use LangChain to ask questions in human language about our company's data and get answers from our SQL database. Does that mean exposing the data to the LLM?

    • @rabbitmetrics
      @rabbitmetrics 1 year ago

      Hi, thanks for watching! That depends on your implementation. You can have the LLM write the SQL and then handle the data processing without revealing anything about the data to the LLM. But if you want to go further than that, and have the LLM help you extract insights you should think about using open source alternatives where you can keep your data private.

  • @lachevre6421
    @lachevre6421 10 months ago

    Thank you very much for your video, it's so well explained! One question: is it really necessary to connect tools like Zapier with an API? Thanks to Zapier we can do a lot of things, but if we can already do it natively with the LangChain API, in what context can it be useful?
    Thanks again for your video, I'm very excited about what we can create!

    • @rabbitmetrics
      @rabbitmetrics 10 months ago

      It really depends on your use case, you can do a lot with only LangChain/OpenAI. If you are already using Zapier in your flow it might make sense to use Zapier AI Actions.

  • @piyushkumartiwari1207
    @piyushkumartiwari1207 1 year ago +1

    How can I implement all of this in my app?

  • @aiartrelaxation
    @aiartrelaxation 1 year ago

    Big question: how safe will my personal data be? I'm feeding it my personal information; what stops it from sharing my information?