LangChain101: Question A 300 Page Book (w/ OpenAI + Pinecone)

  • Published 24 Apr 2024
  • Twitter: / gregkamradt
    Or get updates to your inbox: mail.gregkamradt.com/signup
    In this tutorial we will load a PDF book, split it up into documents, get vectors for those documents as embeddings, then ask a question.
    --AI Generated Description--
    In this tutorial, I am discussing how to query a book using OpenAI, LangChain, and Pinecone, an external vector store, for semantic search.
    I'm demonstrating how to split up the book into documents, use OpenAI embeddings to change them into vectors, and then use Pinecone to store them externally.
    I'm then showing how to ask a question and get an answer back in natural language. This technique can be used to query books as well as internal documents or external data sets.
    --AI Generated Description--
    0:00 - Intro
    1:31 - Diagram Overview
    3:33 - Code Start
    5:46 - Embeddings
    6:33 - Pinecone Index Create
    7:45 - First Question
    9:33 - Ask Questions w/ OpenAI
    Code: github.com/gkamradt/langchain...
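
The pipeline described above (load, split, embed, store, then ask) can be sketched in a few lines of dependency-free Python. The bag-of-words "embedding" and in-memory store below are toy stand-ins for the OpenAI embeddings and Pinecone index used in the video:

```python
import math
from collections import Counter

def split_text(text, chunk_size=1000, overlap=0):
    """Naive character splitter, mimicking the chunk_size/chunk_overlap idea in the video."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text):
    """Toy stand-in for OpenAI embeddings: a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

class ToyVectorStore:
    """Stand-in for Pinecone: holds (vector, chunk) pairs, returns the k nearest."""
    def __init__(self):
        self.items = []

    def add(self, chunks):
        self.items += [(embed(c), c) for c in chunks]

    def similarity_search(self, query, k=3):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]), reverse=True)
        return [chunk for _, chunk in ranked[:k]]

store = ToyVectorStore()
store.add(split_text("the rocket landed on mars. potatoes grow in soil.", chunk_size=26))
docs = store.similarity_search("where did the rocket land", k=1)
# docs holds the most relevant chunk(s); in the video these retrieved chunks are
# handed to the LLM chain, which phrases the final natural-language answer.
```
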

COMMENTS • 617

  • @edzehoo
    @edzehoo 1 year ago +143

    So even Ryan Gosling's getting into this now.

    • @DataIndependent
      @DataIndependent  1 year ago +7

      It's a fun topic!

    • @blockanese3225
      @blockanese3225 1 year ago +8

      @@DataIndependent he was referring to the fact you look like Ryan Gosling.

    • @Author_SoftwareDesigner
      @Author_SoftwareDesigner 1 year ago +13

      @@blockanese3225 I think he understands that.

    • @blockanese3225
      @blockanese3225 1 year ago +3

      @@Author_SoftwareDesigner lol I couldn’t tell if he understood that when he said it’s a fun topic.

    • @nigelcrasto
      @nigelcrasto 1 year ago

      yesss

  • @nigelcrasto
    @nigelcrasto 1 year ago +3

    you know it's something big when The GRAY MAN himself is teaching you AI!!

  • @krisszostak4849
    @krisszostak4849 1 year ago +5

    This is absolutely brilliant! I love the way you explain everything and just give away all the notes in such a detailed and easy-to-follow way. 🤩

  • @64smarketing57
    @64smarketing57 11 months ago +1

    This is exactly what I was looking to do, but I couldn't sort it out. This video is legit the best resource on this subject matter. You're a gentleman and a scholar. I tip my hat to you, good sir.

  • @blocksystems202
    @blocksystems202 1 year ago +3

    No idea how long i've been searching the web for this exact tutorial. Thank you.

  • @NaveenVinta
    @NaveenVinta 1 year ago +18

    Great job on the video. I understood a lot more in 12 mins than from a day of reading documentation. It would be extremely helpful if you could bookend this video with 1. dependencies and setup and 2. turning this into a web app. If you can make this into a playlist of 3 videos, even better.

  • @davypeterbraun
    @davypeterbraun 11 months ago +10

    Your series is just so so good. What a passionate, talented teacher you are!

  • @401kandBeyond
    @401kandBeyond 11 months ago

    This is a great video and Greg is awesome. Let's hope he puts together a course!

  • @lostnotfoundyet
    @lostnotfoundyet 10 months ago +1

    thanks for making these videos! I've been going through the playlist and learning a lot. One thing I wanted to mention that I find really helpful in addition to the concepts explained is the background music! Would love to get that playlist :)

    • @DataIndependent
      @DataIndependent  10 months ago +1

      Thank you! A lot of people gave constructive feedback that they didn't like it, especially when they sped up the track and listened to it at 1.2x or 1.5x.
      Here is where I got the music!
      lofigenerator.com/

  • @virendersingh9377
    @virendersingh9377 1 year ago

    I like the video because it was to the point and the presentation with the initial overview diagram is great.

  • @sarahroark3356
    @sarahroark3356 1 year ago +78

    OMG, this is exactly the functionality I need as a long-form fiction writer, not just to be able to look up continuity stuff in previous works in a series so that I don't contradict myself or reinvent wheels ^^ -- but then to also do productive brainstorming/editing/feedback with the chatbot. I need to figure out how to make exactly this happen! Thank you for the video!

    • @DataIndependent
      @DataIndependent  1 year ago +3

      Nice! Glad it was helpful

    • @areacode3816
      @areacode3816 1 year ago +3

      Agreed. Do you have any simplified tutorials, like explaining LangChain? I fed my novel into ChatGPT page by page and it worked... ok, but I kept running into roadblocks: memory cache limits and more.

    • @thebicycleman8062
      @thebicycleman8062 1 year ago

      @@areacode3816 Maybe from your Pinecone reaching its limit? Or your 4000-token GPT-3 limit? I would check these first. If it's Pinecone the fix is easy, just buy more space; if it's GPT, try GPT-4, which doubles the tokens to 8k. If that doesn't work, I would add an intermediary step to run another summarizing algorithm before passing it to GPT-3.

    • @gjsxnobody7534
      @gjsxnobody7534 1 year ago +2

      How would I use this to make a smart chatbot for our company's chat support, specific to our company's items?

    • @shubhamgupta7730
      @shubhamgupta7730 10 months ago

      @@gjsxnobody7534 I have the same query!

  • @Crowward92
    @Crowward92 1 year ago

    Great video man. Loved it. I had been looking for this solution for some time. Keep up the good work.

  • @ninonazgaidze1360
    @ninonazgaidze1360 7 months ago

    This is super awesome!!! And so easily explained! You made my year. Please keep up the greatest work

  • @MrWrklez
    @MrWrklez 1 year ago +2

    Awesome example, thanks for putting this together!

    • @DataIndependent
      @DataIndependent  1 year ago

      Nice! Glad it worked out. Let me know if you have any questions

  • @haouasy
    @haouasy 7 months ago

    Amazing content man, love the diagrams and how you deliver, absolutely professional.
    Quick question: is the text returned by the chain exactly the same as in the book, or does the OpenAI engine make some touches and improve it?

  • @HelenJackson-pq4nm
    @HelenJackson-pq4nm 1 year ago

    Really clear, useful demo - thanks for sharing

  • @tunle3980
    @tunle3980 11 months ago +1

    Thank you very much for doing this. It's absolutely awesome!!! Also can you do a video on how to improve the quality of answers?

  • @davidzhang4825
    @davidzhang4825 11 months ago +2

    This is gold ! please do another one with data in Excel or Google sheet please :)

  • @Mr_Chiro_
    @Mr_Chiro_ 11 months ago

    Thank you soooo much I am using this knowledge soo much for my school projects.

  • @vinosamari
    @vinosamari 1 year ago +22

    Can you do a more in-depth Pinecone video? It seems like an interesting concept alongside embeddings, and I think it'll help stitch together the understanding of embeddings for more 'web devs' like me. I like how you used relatable terms while introducing it in this video and I think it deserves its own space. Please consider an Embeddings + Pinecone fundamentals video. Thank you.

    • @DataIndependent
      @DataIndependent  1 year ago +5

      Nice! Thank you. What's the question you have about the process?

    • @ziga1998
      @ziga1998 1 year ago

      @@DataIndependent I think a general Pinecone video would be great, and connecting it with LangChain and building similar apps to this would be awesome

    • @ko-Daegu
      @ko-Daegu 11 months ago

      Weaviate is even better

  • @nsitkarana
    @nsitkarana 10 months ago +1

    Nice video. I tweaked the code and split the index part and the query part so that I can index once and keep querying, like we would do in the real world. Nicely put together!!

    • @babakbandpey
      @babakbandpey 10 months ago +1

      Hello, do you have an example of how you did that? This is the part that I have become confused about: how to reuse the same indexes. Thanks

    • @karimhadni9858
      @karimhadni9858 9 months ago

      Can you please provide an example?

  • @nickpetolick4358
    @nickpetolick4358 1 year ago

    This is the best video i've watched explaining the use of pinecone.

  • @DanielChen90
    @DanielChen90 1 year ago +1

    Great tutorial bro. You're really doing good out here for us, the ignorant. Took me a while to figure out that I needed to run pip install pinecone-client to install Pinecone. So this is for anyone else who is stuck there.

  • @ShadowScales
    @ShadowScales 6 months ago

    bro thank you so much honestly this video means so much to me, I really appreciate this all the best in all your future endeavors

  • @ThomasODuffy
    @ThomasODuffy 1 year ago

    Thanks for this very helpful practical tutorial!

  • @PatrickCallaghan-yf2sf
    @PatrickCallaghan-yf2sf 7 months ago

    Fantastic video thanks. I obtained excellent results (accuracy) following your guide compared to other tutorials I tried previously.

    • @DataIndependent
      @DataIndependent  7 months ago

      Ah that's great - thanks for the comment

    • @aaanas
      @aaanas 6 months ago

      Was the starter tier of pinecone enough for you?

    • @PatrickCallaghan-yf2sf
      @PatrickCallaghan-yf2sf 6 months ago

      It's one project only on the starter tier; that one project can contain multiple documents under one vector db. For me it was certainly enough to get an understanding of the potential.
      From my limited experience, to create multiple vector dbs for different project types you will need to go premium/paid, and the cost is quite high.
      There may be other competitors offering a cheaper entry level if you wish to develop apps, but for a hobbyist/learning, the starter tier on Pinecone is fine IMO.

  • @caiyu538
    @caiyu538 8 months ago

    Great series.

  • @thespiritualmindset3580
    @thespiritualmindset3580 7 months ago

    this helped me a lot, thanks, and for the updated code in the description as well!

  • @3278andy
    @3278andy 1 year ago

    Amazing tutorial Greg! I'm able to reproduce your result in my env. I think that to ask follow-up questions, chat_history should be handy

  • @____2080_____
    @____2080_____ 1 year ago

    This is such a game changer. Can’t wait to hook all of this up to GPT-4 as well as countless other things

  • @user-xp2ym1ng2h
    @user-xp2ym1ng2h 6 months ago

    Thanks as always Greg!

  • @sunbisoft9556
    @sunbisoft9556 1 year ago

    Got to say, you are awesome! Keep up the good work, you got a subscriber here!

    • @DataIndependent
      @DataIndependent  1 year ago +1

      Nice! Thank you. I just ordered upgrades for my recording set up so quality will increase soon.

  • @walter7812
    @walter7812 6 months ago

    Great tutorial, thanks so much!

  • @agiveon1999
    @agiveon1999 1 year ago +1

    This is great, thanks! Have you thought about how to extend it to be able to CHAT about the book (as opposed to one question at a time)? I am running into problems figuring out when to keep a chain of chat and when to realize it's a new or related question that needs a new pull of similar docs

  • @luisarango-jm8eq
    @luisarango-jm8eq 1 year ago

    Love this brother!

  • @Juniorventura29
    @Juniorventura29 10 months ago +3

    Awesome tutorial, brief and easy to understand. Do you think this could be an approach for semantic search on private client data? My concern is data privacy, so I guess that by using Pinecone and OpenAI, OpenAI only processes what we send (to respond in natural language) but doesn't store any of our documents.

  • @kelvinromero
    @kelvinromero 7 months ago

    Hey Greg, amazing content, learning a lot from your videos!
    But I'm running into a problem: looking into the source code, I noticed that the Pinecone.from_texts method indexes/stores the data, so it's not ideal to run it multiple times, right? Do you have any suggestions to improve this?

  • @waeldimassi3355
    @waeldimassi3355 11 months ago

    Amazing work ! thank you so much !!

  • @guilianamustiga2962
    @guilianamustiga2962 6 months ago

    thank you Greg! very helpful tutorial!!

  • @josevl8678
    @josevl8678 1 year ago +3

    Great video! Thanks a lot for sharing! One question: once you have already loaded the vectors into Pinecone and closed your environment, how can you query the Pinecone DB if you no longer have the docsearch object?

  • @lukaszwiktor
    @lukaszwiktor 1 year ago

    This is gold! Thank you so much!

  • @JoanSubiratsLlaveria
    @JoanSubiratsLlaveria 8 months ago

    Excellent video!

  • @philipsnowden
    @philipsnowden 1 year ago +2

    Your videos are amazing. Keep it up and thanks!

    • @DataIndependent
      @DataIndependent  1 year ago

      Thanks Philip. Anything else you want to see?

    • @philipsnowden
      @philipsnowden 1 year ago

      @@DataIndependent I'm curious what's a better option for this use case and would love to hear your thoughts. Why LangChain over Haystack? I want to pass through thousands of text documents into a question answering system and am still learning the best way to structure it. Also, an integration into something like Paperless would be cool!
      I'm a total noob so excuse my ignorance. Thanks!

    • @DataIndependent
      @DataIndependent  1 year ago +1

      @@philipsnowden I haven't used Haystack yet so I can't comment on it.
      If you have 1K text documents you'll definitely want to get embeddings and store them, retrieve them, then pass them into your prompt for the answer.
      Haven't used paperless yet either :)

    • @philipsnowden
      @philipsnowden 1 year ago

      @@DataIndependent Good info, thank you.

    • @philipsnowden
      @philipsnowden 1 year ago

      @@DataIndependent Could you do a more in-depth explainer on this? I'm struggling to take a directory of text files and get it going. I've been reading and trying the docs for LangChain but am having a hard time. And can you use the new turbo 3.5 model to answer the questions? Thanks for your time; have a tip jar?

  • @RomuloMagalhaesAutoTOPO
    @RomuloMagalhaesAutoTOPO 1 year ago

    Great explanation. Thank you.

  • @svgtdnn6149
    @svgtdnn6149 1 year ago +1

    thanks for the great content! Do you know how to better control the cost of such a retrieval-based chatbot? In my experience, it is quite costly to run QnA on just the simple PDF provided in the LangChain repo, using the default embeddings and LLM models from the langchain example

  • @AtulThakorPeppercorn
    @AtulThakorPeppercorn 8 months ago

    Brilliant video

  • @sabashioyaki6227
    @sabashioyaki6227 1 year ago +2

    This is definitely cool, thank you. There seem to be several dependencies left out. It would be great if all dependencies were shown or listed...

    • @DataIndependent
      @DataIndependent  1 year ago +1

      ok, thank you and will do. Are you having a hard time installing them all?

    • @benfield1866
      @benfield1866 1 year ago

      @@DataIndependent hey I'm stuck on the dependency part as well

  • @ininodez
    @ininodez 1 year ago +2

    Great video!! Loved your explanation. Could you create another video on how to estimate the costs? Is the process of turning the documents into embeddings with OpenAI run every time you ask a new question, or just the first time? Thanks!

    • @silent.-killer
      @silent.-killer 1 year ago

      Pinecone is basically a search engine for AI. It doesn't need the entire book, just segments of it. This saves a lot of tokens because only segments of information end up in the prompt.
      It's like adding some information into GPT's short-term memory.

  • @fareedbehardien
    @fareedbehardien 1 year ago

    Would love to see an example of adding another book after you've done this one. What would be some of the considerations and fine-tuning you'd make as a result of the second upload?

    • @DataIndependent
      @DataIndependent  1 year ago +3

      You could add more documents to your existing index and it shouldn't be a problem.
      However once you start to add a bunch of information, pre-filtering your vectors will become more important.
      Ex: If you know the answer comes from 1 of your 3 books then you can tell Pinecone to only return docs from that 1 book
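
The pre-filtering idea in that reply can be illustrated with a toy, dependency-free sketch: each stored vector carries a metadata dict (the "book" field here is made up for illustration), and the filter narrows the candidate pool before similarity is scored, much like the metadata filter of a Pinecone query:

```python
# Each record pairs a vector with metadata, as in a vector-store upsert.
records = [
    {"id": "a1", "vector": [0.9, 0.1], "metadata": {"book": "book_one"}},
    {"id": "b1", "vector": [1.0, 0.0], "metadata": {"book": "book_two"}},
    {"id": "a2", "vector": [0.0, 1.0], "metadata": {"book": "book_one"}},
]

def query(records, query_vector, top_k=1, metadata_filter=None):
    """Keep only records whose metadata matches the filter, then rank by dot product."""
    pool = [r for r in records
            if metadata_filter is None
            or all(r["metadata"].get(k) == v for k, v in metadata_filter.items())]
    score = lambda r: sum(a * b for a, b in zip(r["vector"], query_vector))
    return sorted(pool, key=score, reverse=True)[:top_k]

# Unfiltered, b1 is the closest match; restricted to book_one, a1 wins instead.
best_any = query(records, [1.0, 0.0])
best_book_one = query(records, [1.0, 0.0], metadata_filter={"book": "book_one"})
```
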

  • @JuaniPisula
    @JuaniPisula 1 year ago

    Great video! Do you know how Pinecone deals with similarity between sequences of different lengths? For example, matching the 1K-token documents in the video's DB with the short query questions you ask.

  • @bartvandeenen
    @bartvandeenen 1 year ago

    I actually scanned the whole Mars trilogy to have something substantial, and it works fine. The queries generally return decent answers, although some of them are way off.
    Thanks for your excellent work!

    • @DataIndependent
      @DataIndependent  1 year ago

      Nice! Glad to hear it. How many pages/words is the mars trilogy?

    • @bartvandeenen
      @bartvandeenen 1 year ago

      @@DataIndependent About 1500 pages in total.

    • @keithprice3369
      @keithprice3369 1 year ago

      Did you look at the results returned from Pinecone so you could determine if the answers that were off were due to Pinecone not providing the right context or OpenAI not interpreting the data correctly?

    • @bartvandeenen
      @bartvandeenen 1 year ago +1

      @@keithprice3369 No I haven't. Good idea to do this. I now have GPT-4 access so I can use much larger prompts

    • @keithprice3369
      @keithprice3369 1 year ago

      @@bartvandeenen I've been watching a few videos about LangChain and they did bring up that the chunk size (and overlap) can have a huge impact on the quality of the results. They not only said there hasn't been much research on an ideal size, but also that it should likely vary depending on the structure of the document. One presenter suggested 3 sentences with overlap might be a good starting point. But I don't know enough about LangChain yet to know how you specify a split on the number of sentences vs just a chunk size.

  • @rodrigomarques7128
    @rodrigomarques7128 7 months ago

    This is awesome!!!!

    • @DataIndependent
      @DataIndependent  7 months ago +1

      Nice! Glad it worked out

    • @rodrigomarques7128
      @rodrigomarques7128 7 months ago

      @@DataIndependent What open-source alternatives would you recommend for the embedding model and the QA model?

  • @cheunghenrik7041
    @cheunghenrik7041 11 months ago

    Thanks for the tutorial series! May I ask whether I could work with multiple different PDFs at the same time (other than combining them)?

  • @ritik1857
    @ritik1857 11 months ago

    Thanks Ryan!

  • @alvaromseixas
    @alvaromseixas 1 year ago +3

    Hey, Greg! I'm trying to connect the dots on GPT + LangChain and your videos have been excellent sources! To give it a try, I'm planning to build some kind of personal assistant for a specific industry (i.e. law, healthcare), and down the road the vector database will become pretty big. Any guideline on how to sort the best results and also how to show the source where the information was pulled from?

    • @DataIndependent
      @DataIndependent  1 year ago +2

      Nice! Check out the LangChain documentation for "q&a with sources"; you're able to get them back pretty easily.

  • @dogchaser520
    @dogchaser520 1 year ago

    Succinct and easy to follow. Very cool.

  • @saburspeaks
    @saburspeaks 1 year ago

    Amazing stuff with these videos

  • @roberthahn9040
    @roberthahn9040 1 year ago

    Really awesome video!

  • @rayxiao460
    @rayxiao460 11 months ago

    Very impressive. Great job.

  • @lnyxiux9654
    @lnyxiux9654 1 year ago +1

    Thanks for sharing !

    • @DataIndependent
      @DataIndependent  1 year ago

      Nice! Glad it worked out

    • @lnyxiux9654
      @lnyxiux9654 1 year ago

      @@DataIndependent Yep! It was a bit of a pain to get unstructured properly set up, but after that it's all good. Impressive results very quickly!

    • @DataIndependent
      @DataIndependent  1 year ago +1

      @@lnyxiux9654 I shared the same pain... that part didn't make it into the video

  • @sovopl
    @sovopl 11 months ago +1

    Great tutorial. I wonder how to generate questions based on the content of the book? I would probably have to pass the entire content of the book to the GPT model.

  • @johnsmith21170
    @johnsmith21170 9 months ago

    awesome video, very helpful! thank you

  • @geethaachar8495
    @geethaachar8495 9 months ago

    That was fabulous, thank you

  • @user-vc2sc9rq7t
    @user-vc2sc9rq7t 1 year ago

    Thanks for your tutorials on LangChain; they certainly help a lot and I appreciate what you're doing here! I would like to better understand how Pinecone helps in this use case compared to your previous tutorial, 'custom files + chatgpt'. Would I be able to upload multiple documents to query in that previous tutorial, or would Pinecone be necessary?

    • @DataIndependent
      @DataIndependent  1 year ago

      Pinecone is good when you want to store your vectors in the cloud. This can help when you're building a more robust app. In the previous tutorial I was using Chroma which is more local based.

  • @rajivraghu9857
    @rajivraghu9857 1 year ago

    Excellent 👍

  • @quantum_ocean
    @quantum_ocean 11 months ago

    Thanks for sharing. Could you elaborate on why you didn’t use overlap?

  • @satvikparamkusham7454
    @satvikparamkusham7454 1 year ago

    Excellent video! Thanks for this!
    Is there a way to use conversational memory while doing generative Q&A?

    • @DataIndependent
      @DataIndependent  1 year ago

      Big time - check out the latest webinar on this exact topic. It should be on the langchain twitter

  • @jonathancrichlow5123
    @jonathancrichlow5123 9 months ago +1

    this is awesome! My question is: what happens when the model is asked a question outside of the knowledge base that was just uploaded? For example, what would happen if you asked a question about who the best soccer player is?

  • @RodolphoPortoSantista
    @RodolphoPortoSantista 1 year ago

    This video is very good!

  • @pramodm6168
    @pramodm6168 10 months ago

    Thank you - super helpful for understanding how to use external data sources with OpenAI. What are some of the limitations of this approach, i.e. the size of content being indexed in Pinecone? Are there limits on correlating and summarizing data across multiple documents/sources? Can I combine multiple types of sources about a certain topic (documents, databases, blogs, cases, etc.) into a single large vector index?

  • @mlg4035
    @mlg4035 1 year ago

    Short, but very sweet video! Question: does this work for documents in other languages? Say, Japanese, for example?
    And, is there a text splitter for Japanese? (a la ChaSen, Kuromoji, etc.)

  • @carlosbenavides670
    @carlosbenavides670 7 months ago

    Thanks for sharing, pretty good.
    QQ: did you make a version of this using Chroma?

  • @thepracticaltechie
    @thepracticaltechie 1 year ago

    Awesome video! Is there a way to embed the prompt and response interface into a website, more like a chatbot experience?

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    Great content

  • @HerroEverynyan
    @HerroEverynyan 1 year ago +3

    Hi! Awesome tutorial. This is exactly what I was looking for. I really love this series you've started and hope you'll keep it up. I also wanted to ask:
    1. What's the difference between using Pinecone or another vector store like Chroma, FAISS, Weaviate, etc.? And what made you choose Pinecone for this particular tutorial?
    2. What was the cost for creating embeddings for this book? (time & money)
    3. Is there a way to estimate the cost of embeddings with LangChain beforehand?
    Thank you very much and looking forward to more vids like this! 🤟

    • @DataIndependent
      @DataIndependent  1 year ago +1

      For your questions:
      1. The difference with Pinecone/Chroma, etc.: not much. They store your embeddings and they run a similarity calc for you. However the space is super new; as things progress, one may be a no-brainer over another. Ex: you could also do this in GCP, but you'd have to deal with their overhead as well.
      2. Hm, unsure about the book, but here is the pricing for Ada embeddings: $0.0004 / 1K tokens. So if you had a 120K-word book, which is ~147K tokens, it would be about $0.06. Not very steep...
      3. Yes, you can calc the number of tokens you're going to use and the task, then look up their pricing table and see how much it'll be.
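
The arithmetic in that reply can be checked in a couple of lines (using the Ada embedding rate quoted in the thread; current OpenAI pricing may differ):

```python
def embedding_cost_usd(n_tokens, usd_per_1k_tokens=0.0004):
    """Embedding cost at the Ada rate quoted in this thread: $0.0004 per 1K tokens."""
    return n_tokens / 1000 * usd_per_1k_tokens

book_cost = embedding_cost_usd(147_000)         # a ~147K-token book -> about $0.06
pages_per_dollar = 1 / embedding_cost_usd(800)  # at ~800 tokens/page -> ~3,000 pages per dollar
```

The second line matches the ~3,000 pages/dollar figure mentioned later in the thread.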

    • @DataIndependent
      @DataIndependent  1 year ago

      ​@@myplaylista1594 This one should help out
      help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them

    • @klaudioz_
      @klaudioz_ 1 year ago

      @@DataIndependent It can't be so expensive. text-embedding-ada-002 is about ~3,000 pages per US dollar (assuming ~800 tokens per page).

    • @DataIndependent
      @DataIndependent  1 year ago +1

      @@klaudioz_ Ya, you're right, my mistake. I didn't divide by the extra thousand in the previous calc. Fixing now

    • @klaudioz_
      @klaudioz_ 1 year ago

      @@DataIndependent No problem. Thanks for your great videos !!

  • @adamsnook5135
    @adamsnook5135 1 year ago

    Hi, great video; I look forward to diving into some more of your stuff. I just wanted to ask about using this method to query something like Airtable information. I have a hotel company and it would be really useful for users to be able to ask questions about the data I've collected about the hotels. Thank you! Also, have you looked into Xata?

    • @DataIndependent
      @DataIndependent  1 year ago

      Check out the CSV loader, which can be used when you extract data from Airtable

  • @tazahglobal8662
    @tazahglobal8662 1 year ago

    Loved it. One question: which OpenAI model does this approach use? For example, davinci, etc.?

  • @calebsuh
    @calebsuh 9 months ago

    Another great tutorial Greg!
    Curious if you've played around with Faiss. And if so, what you think of Pinecone vs Faiss?

    • @DataIndependent
      @DataIndependent  9 months ago +1

      Yep! I've played around with it and love it for local use cases. I had a hard time with a supporting library the last time I used it

    • @calebsuh
      @calebsuh 9 months ago

      @@DataIndependent Pinecone was getting expensive for us, so we're trying out Faiss now

  • @sunil_modi1
    @sunil_modi1 1 year ago

    Your videos are really awesome and very helpful.
    What approach should I take if I want to do semantic search over structured (tabular) data instead of free text, using OpenAI and LangChain?

    • @DataIndependent
      @DataIndependent  1 year ago +1

      There might be a better answer out there... but my take is that, since you'll need to feed text into OpenAI, you can make documents out of your rows first, get embeddings for those documents, then do your similarity search.
      It'll take some translation and file formatting.
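
A minimal sketch of that row-to-document step, using Python's csv module (the table and column names are made up for illustration); each flattened row can then be embedded and searched like any other document:

```python
import csv
import io

def row_to_document(row):
    """Flatten one table row into a small text document ready for embedding."""
    return "; ".join(f"{col}: {val}" for col, val in row.items())

# A tiny in-memory stand-in for a real CSV export.
table = io.StringIO(
    "name,city,rooms\n"
    "Hotel Aurora,Oslo,120\n"
    "Casa del Mar,Valencia,45\n"
)
documents = [row_to_document(r) for r in csv.DictReader(table)]
# documents[0] == "name: Hotel Aurora; city: Oslo; rooms: 120"
```
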

  • @kennethodamtten2836
    @kennethodamtten2836 1 year ago

    Thank you sir...

  • @roberthuff3122
    @roberthuff3122 1 year ago

    Great stuff! What GUI wrapper do you recommend?

  • @knallkork700
    @knallkork700 1 year ago

    Hey, great video! What do you mean when you say that it's going to be more expensive with additional documents? What drives the cost?
    Thank you!

  • @mosheklein3373
    @mosheklein3373 10 months ago +1

    This is really cool, but I haven't yet seen a query for a specific information store (in your case, a book) that ChatGPT can't natively answer. For example, I queried ChatGPT with the questions you asked and got detailed answers that echoed the answers you received, and then some.

  • @nattapongthanngam7216
    @nattapongthanngam7216 10 days ago

    Appreciate it!

  • @nathanburley
    @nathanburley 1 year ago

    This is a great video - succinct and easy to follow.
    Two questions:
    1) How easy is it to add more than one document to the same vector db?
    2) Is it possible to append an additional ... field(?) to that database table, so that the provenance of the reference can be reported back with the synthesised result?

    • @DataIndependent
      @DataIndependent  1 year ago +1

      1) Super easy. Just upload another.
      2) Yep you can; it's the metadata field, and you can add a whole bunch. People will often do this for document IDs.

    • @nathanburley
      @nathanburley 1 year ago

      @@DataIndependent Amazing (and thanks for the reply). One final follow-up then: is it easy/possible to delete vectors from the db too? (I assume yes, but wanted to ask.) I assume this is done by using a query, e.g. if metadata contains "Document ID X" then delete?

  • @kennt7575
    @kennt7575 1 year ago

    These are incredible instructions. In my case, I have some documents in Vietnamese; will Pinecone support UTF-8? OpenAI + LangChain + Pinecone is very helpful in many fields, especially customer service.

  • @niharraut8195
    @niharraut8195 8 months ago

    How do we scale this up for, let's say, 300 books?
    Can we create two layers of search? First, maybe a superficial word-match search, and then a contextual search on the shortlisted documents? What models should we use for the superficial search in that case? Thanks Greg

    • @DataIndependent
      @DataIndependent  8 months ago +1

      Big time. You could also do search on metadata too, like the book title.
      Searching on both embeddings and keywords is called hybrid search.
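
A toy illustration of that hybrid idea: a literal keyword-overlap score is blended with a dense vector score via an alpha weight. This is a simplified, dependency-free stand-in for real sparse-dense hybrid search, and the documents and vectors are made up:

```python
def keyword_score(query, text):
    """Fraction of query terms that appear literally in the text (the 'sparse' side)."""
    terms = query.lower().split()
    return sum(t in text.lower() for t in terms) / len(terms)

def hybrid_score(query, doc, query_vec, alpha=0.5):
    """alpha * keyword score + (1 - alpha) * dense (dot product) score."""
    dense = sum(a * b for a, b in zip(query_vec, doc["vec"]))
    return alpha * keyword_score(query, doc["text"]) + (1 - alpha) * dense

docs = [
    {"text": "Red Mars, book one of the trilogy", "vec": [0.9, 0.1]},
    {"text": "A cookbook of potato recipes",      "vec": [0.2, 0.8]},
]
best = max(docs, key=lambda d: hybrid_score("red mars trilogy", d, [1.0, 0.0]))
```
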

  • @yonathan310393
    @yonathan310393 11 months ago

    This is a great video; it helped a lot. I have a question: I am new to this, and I am having trouble splitting this code so the queries go directly to the previously uploaded data instead of uploading the vectors again. I want to use what I already have in Pinecone. How do I do that?

  • @daryladhityahenry
    @daryladhityahenry 1 year ago

    Hi. I'm kind of curious: with so many open-source ChatGPT-like models right now, can we use one of those instead of the OpenAI API? For example, using Dolly with only about 8B parameters. Is that possible?
    And also, about the embeddings: we can use another embedding too, right? Is it the same as a bag-of-words kind of thing?
    Thank you. Great video!

  • @andytesii
    @andytesii 9 months ago

    love it!

  • @kennethleung4487
    @kennethleung4487 1 year ago

    Awesome video as always. I noticed that there is the standard load_qa_chain, and on the other hand we also have VectorDBQA. Which one should I go for?

    • @DataIndependent
      @DataIndependent  1 year ago +1

      Depends on your task. VectorDBQA is a convenient way to handle the document similarity for you.
      Or you could do it manually yourself with load_qa_chain.
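
To make that contrast concrete, here is a toy, dependency-free sketch of the two patterns (the function names and the stand-in store/LLM are illustrative, not the LangChain API): the "manual" route retrieves documents yourself and stuffs them into the prompt, while the wrapper route bundles retrieval and answering into one call.

```python
def stuff_and_answer(llm, docs, question):
    # "Manual" pattern: you fetch docs yourself, then stuff them into a prompt.
    context = "\n".join(docs)
    return llm(f"Context:\n{context}\n\nQuestion: {question}")

def qa_wrapper(llm, store, question, k=2):
    # "Wrapper" pattern: similarity search and answering handled in one call.
    return stuff_and_answer(llm, store.similarity_search(question, k), question)

class TinyStore:
    """Stand-in store: returns the first k docs sharing any word with the query."""
    def __init__(self, docs):
        self.docs = docs

    def similarity_search(self, query, k):
        words = set(query.lower().split())
        return [d for d in self.docs if words & set(d.lower().split())][:k]

echo_llm = lambda prompt: prompt  # stand-in LLM that just echoes its prompt
store = TinyStore(["mars is red", "potatoes are tubers"])
answer = qa_wrapper(echo_llm, store, "why is mars red")
```
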

  • @PizzaLord
    @PizzaLord 1 year ago +4

    Nice!
    I was working with Pinecone / GPT code recently that gave your chat history basically infinite memory of past chats by storing them in Pinecone, which was pretty sweet, as you can use it to give your chatbot more context for the conversation; it then remembers everything you ever talked about.
    I will be combining this with custom-dataset Pinecone storage this week (like a book) to create a super-powered custom GPT with infinite recall of past convos.
    I would be curious on your take, particularly how to keep the book data universally available to all users while keeping the past chat data of a particular user totally private, and still being able to store both types of data on the free-tier Pinecone, which I can see you are using (and I will be using too).

    • @DataIndependent
      @DataIndependent  1 year ago

      Nice! That's great. Soon, if you have too much information (like in the book example above), you'll need to get good at picking which pieces of previous history you want to parse out. I imagine that won't be too hard in the beginning, but it will be later on.

    • @PizzaLord
      @PizzaLord 1 year ago

      @DataIndependent Doesn't the k variable take care of this? It only returns the top k results in order of relevance, and those are what you end up querying.
      Or are you talking about the chat history and not the corpus?
      I see no reason why you would not just specify a k value of 5 or 10 for the chat history too. For example, if a user was seeking relationship advice and the system knew their entire relationship history, and the user said something like "this reminds me of the first relationship that I told you about", it would be easy for the system to do an exact recall of the relationship and the name of the partner, and from there recall everything very quickly using the k variable on the chat history.
      I use relationships as an example because I just trained my system on a book that I wrote called Sex 3.0 (something that GPT knows nothing about), and I am going to be giving it infinite memory and recall this week.

    • @DataIndependent
      @DataIndependent  1 year ago

      @PizzaLord Yes, the k variable will help with this. My comment was about the chance for more noise to get introduced the more data you have, e.g. more documents creep in that share a close semantic meaning but aren't actually what you're looking for. For small projects this shouldn't be an issue.
      Nice! That's cool about the project. Let me know how it goes.
      The LangChain Discord #tools channel would love to see it too.
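The top-k discussion above can be sketched in a few lines: rank a corpus by cosine similarity to the query and keep the k best. The 2D vectors below are hand-made stand-ins for real embeddings, chosen to show how an unrelated document would creep in if k were set too high:

```python
# Toy top-k retrieval, illustrating the "noise" point: near-misses start
# ranking close to real hits as the corpus grows.
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

corpus = {
    "relationship history with Alex": (1.0, 0.1),
    "relationship history with Sam":  (0.9, 0.2),
    "favorite pizza toppings":        (0.1, 1.0),
}
query = (1.0, 0.0)  # stand-in embedding for "tell me about my relationships"

ranked = sorted(corpus, key=lambda d: cos_sim(corpus[d], query), reverse=True)
top_k = ranked[:2]  # k=2 keeps the pizza doc out; k=3 would pull it in
print(top_k)
```

Choosing k is the trade-off: too small and relevant history is missed, too large and loosely related documents dilute the prompt.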

    • @PizzaLord
      @PizzaLord 1 year ago

      @DataIndependent Another thing I will look at, and I think it would be cool if you looked at it too, is certain chat questions triggering an event, like a graphic or a video link being shown, where the video can be played without leaving the chat. This can be done either by embedding the video in the chat response area or by having a separate area of the same HTML page, a multimedia pane, that gets updated.
      After all, the whole point of LangChain is to be able to chain things together, no? Once you chain things together, you get workflow.
      This gets around one of ChatGPT's main limitations right now, which is that it's text-only in terms of what you can teach it, and the internet loves its visuals and videos.
      Once this event flow is in place, you can easily use it to drive all kinds of workflows with GPT at the centre, like collecting data in forms or doing quick surveys, so you can store users' preferences and opinions about what they might want to get out of an online course you are teaching, and then store that in a vector DB. It can become its own platform at that point.

    • @DataIndependent
      @DataIndependent  1 year ago

      @PizzaLord You could likely do that by defining a custom tool, grabbing an image based off a URL (or generating one), and then displaying it in your chat box. Custom tools are interesting, and I'm going to look into a video for that.

  • @deanshalem
    @deanshalem 10 months ago +1

    Greg, you are INCREDIBLE! Your channel and GitHub are a goldmine. Thank you 🙏. At 9:09, what install on Mac is necessary to access methods like that?

    • @deanshalem
      @deanshalem 10 months ago

      Also, I've been trying to make some type of "theorems, definitions, and corollaries" assistant which extracts all the math theorems, definitions, and corollaries from my textbook. The goal there was to create textbook summaries to reference when I work through tough problems that require me to flip back and forth through my book all day long.
      But more interestingly, I am struggling to create a "math_proofs" assistant. Your approach in this video is awesome, but I can't find any of your resources in which you query markdown, LaTeX, or any mathematical textbook. I use MathPix to convert my textbooks to LaTeX, a Word doc, or markdown. But when I use my newly converted markdown text, despite working hand-in-hand with the LangChain documentation, I still fail to get a working agent that proves statements.
      I feed the model:
      "Prove the zero vector is unique" and it replies nonsense, even though this proof is explicitly written in the text. It is not even something it had to "think" to produce (a simple example for the sake of illustration; these proofs are matrix theory, so they get crazy). Could you please chime in?

    • @DataIndependent
      @DataIndependent  10 months ago +1

      Pulling all of that information out could be tough. I have a video on the playlist about "topic modeling", which is really just pulling structured information out of a piece of text. That one may be what you're looking for.

  • @tfhighlander2280
    @tfhighlander2280 1 year ago

    Great video as usual! What do you think about hosting the vector database on Firebase?

    • @DataIndependent
      @DataIndependent  1 year ago

      I think it sounds great: if it works for your use case, then it's solid. The goal is impact, not necessarily the 100% optimal solution.

  • @ninonazgaidze1360
    @ninonazgaidze1360 7 months ago

    Greg, I currently use FAISS for QA-ing PDFs and want more accurate results. Would you recommend trying Chroma or Pinecone over FAISS for the same task, QA with PDFs?

    • @DataIndependent
      @DataIndependent  7 months ago +1

      Are you concerned that the documents being returned aren't similar enough? If so, that's likely down to the similarity algorithm you're using rather than the vector DB itself.
      Or are you concerned the reasoning ability over the docs isn't good enough? Then that's a model thing.
      Or perhaps you need a different retrieval method to get better docs.

    • @ninonazgaidze1360
      @ninonazgaidze1360 7 months ago

      @DataIndependent Thanks, that helps a lot! The second case is my struggle, and I will try different models.

  • @shaunchen5054
    @shaunchen5054 1 year ago

    Great video. I am wondering, is there a way to use PDFs made from photocopies of a document (you'd need to convert the images to text)?

  • @valdinia-office2910
    @valdinia-office2910 11 months ago

    In LangChain, is "similarity search" used as a synonym for "semantic search", or do they refer to different types of search?
    To my knowledge, similarity search focuses on finding items that are similar based on their features or characteristics, while semantic search aims to understand the meaning and intent behind the query to provide contextually relevant results.

  • @danilovaccalluzzo
    @danilovaccalluzzo 1 year ago +1

    Great video, thanks so much.
    How do you query the index without creating the embeddings every time? Is that possible?
    Thanks.

    • @nihonkeizaishinbun2254
      @nihonkeizaishinbun2254 10 months ago

      Hi, I found this: docsearch = Pinecone.from_existing_index(index_name, embeddings)
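For intuition, the pattern behind Pinecone.from_existing_index is: embed and store once, then on later runs reconnect to the stored vectors instead of re-embedding everything. A toy local analogue of that pattern, using a JSON file as the "index" (embed() here is a made-up stand-in, not a real embedding model, and the file path is arbitrary):

```python
# Compute-once / reload-later pattern: the analogue of reconnecting to an
# existing Pinecone index rather than re-uploading vectors every run.
import json
import os
import tempfile

def embed(text):
    # Hypothetical embedding: real code would call an embedding model once.
    return [float(len(w)) for w in text.split()][:3]

index_path = os.path.join(tempfile.gettempdir(), "toy_index.json")

if os.path.exists(index_path):      # later runs: just reload the stored vectors
    with open(index_path) as f:
        index = json.load(f)
else:                               # first run: embed and persist
    docs = ["chapter one text", "chapter two text"]
    index = {d: embed(d) for d in docs}
    with open(index_path, "w") as f:
        json.dump(index, f)

print(len(index))
```

With Pinecone the vectors already persist server-side, so from_existing_index skips the embed-and-upsert step entirely and simply attaches to the named index.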

  • @cnmoro55
    @cnmoro55 1 year ago

    How does LangChain wrap the history of the chat? Or doesn't it?
    Internally, how does it send the prompt to OpenAI?
    Thanks for the amazing tutorial.

  • @yashkhd1100
    @yashkhd1100 1 year ago

    Excellent...!! Just one question, Once we load data is this data now belongs to OpenA/ChatGPT. ? In other words can they use this uploaded book data to answer questions that other users may ask?