Chat with Multiple PDFs | LangChain App Tutorial in Python (Free LLMs and Embeddings)

  • Published 27 Jun 2024
  • In this video you will learn to create a LangChain app to chat with multiple PDF files using the ChatGPT API and Hugging Face language models.
    Welcome to our comprehensive step-by-step tutorial on building a powerful chatbot that lets you ask questions about multiple PDFs using LangChain and the ChatGPT API. In this project-based video tutorial, we will guide you through harnessing the capabilities of LangChain, a framework for developing language-model-powered applications.
    -----------------------
    USEFUL LINKS
    👉 Github repo: github.com/alejandro-ao/ask-m...
    💬 Join the Discord Help Server - link.alejandro-ao.com/HrFKZn
    ❤️ Buy me a coffee... or a beer (thanks): link.alejandro-ao.com/l83gNq
    ✉️ Join the mail list: link.alejandro-ao.com/AIIguB
    --------------------------
    Powered by ChatGPT, an advanced AI language model, our chatbot implementation enables you to interact with your PDF documents in a whole new way. We will also explore the utilization of Huggingface language models to enhance the chatbot's performance.
    Throughout this Python tutorial, you'll learn how to integrate LangChain into your application, leveraging its data-awareness and agentic features, allowing your language model to tap into various data sources and interact with its environment seamlessly.
    Key Topics Covered:
    - Introduction to LangChain and its principles of data awareness and agency
    - Exploring the capabilities of ChatGPT API and its potential for artificial intelligence applications
    - Step-by-step walkthrough on setting up the LangChain framework in Python
    - Integration of Huggingface language models for enhanced chatbot functionality
    - Building a project-based chatbot application that answers questions based on your PDFs
    - Optimizing your chatbot for efficient and accurate responses
    - Best practices for working with GPT models such as GPT-3.5 and GPT-4, as well as open-source alternatives
    - Unlocking the potential of ChatGPT and LangChain to create innovative applications
    Whether you're a developer, AI enthusiast, or simply curious about the world of artificial intelligence, this tutorial is perfect for you. Join us on this exciting journey to develop your very own PDF-powered chatbot using LangChain, ChatGPT API, and the power of Python.
    ------------------------------------------------------------------------------
    TIMESTAMPS
    0:00 Intro
    1:31 Setup
    3:26 Create GUI
    8:50 Add your API Keys
    11:46 How this works (Diagram)
    16:41 Handle process button
    19:43 Extract text from PDFs
    24:33 Split text into Chunks
    29:26 Embeddings
    32:30 OpenAI Embeddings
    36:24 Instructor Embeddings
    40:57 Create ConversationChain
    46:18 Make conversation persistent
    50:14 HTML templates
    55:04 Display Chat History
    1:02:05 Free Huggingface LLM
    1:06:10 Conclusion
    Keywords:
    #ChatGPT #streamlit #langchain #OpenAI

COMMENTS • 1K

  • @alejandro_ao
    @alejandro_ao  5 months ago +7

    💬 Join the Discord Help Server: link.alejandro-ao.com/981ypA
    ❤ Buy me a coffee (thanks): link.alejandro-ao.com/YR8Fkw
    ✉ Join the mail list: link.alejandro-ao.com/o6TJUl

    • @qwadwojohn2628
      @qwadwojohn2628 2 months ago

      Hi Alejandro, any help on how I can setup the remote GitHub repository?

  • @erniea5843
    @erniea5843 10 months ago +18

    Well done! That overview diagram is very helpful and I appreciate that you referred back to it often. Too often tutorial videos neglect the system overview aspects, but you made it easy to see how it all fits together.

  • @user-cs9qn4il6x
    @user-cs9qn4il6x 9 months ago +39

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 The video tutorial aims to guide the building of a chatbot that can chat with multiple PDFs.
    00:38 ❓ The chatbot answers questions related to the content of the uploaded PDF documents.
    01:33 🔧 The video tutorial also covers the setting up of the environment, including the installation of necessary dependencies like Python 3.9.
    02:14 🔑 After setting up the environment and installing dependencies, the video progresses to explain the installation of other needed components to execute the task.
    03:38 👩‍💻 The video demonstrates the design of a graphical user interface (GUI) using Streamlit imported as 'St'.
    05:44 🎨 The sidebar of the GUI contains a file-upload feature for the chatbot to interact with PDF documents.
    07:11 🗳️ A 'Process' button is added to the sidebar as an action trigger for the uploaded PDF documents.
    08:57 🗂️ The tutorial explains how to create and store API keys for OpenAI and Hugging Face in an .env file.
    12:26 📄 The video further explains how the chatbot operates: it divides the PDF's text into smaller chunks, converts them into vector representations (embeddings), and stores them in a vector database.
    14:17 🧲 Using these embeddings, similar text can be identified: when a question is asked by a user, it converts the question into an embedding and identifies similar embeddings in the vector store.
    15:28 📚 The identified texts are passed to a language model as context to generate the answer for the user's question.
    19:54 🧩 The video guides the viewers to create functions within the application to extract the raw text from the PDF files.
    23:37 📋 The video further shows how to encode the raw extracted text into the desired format.
    25:03 ✂️ The tutorial provides guidance on creating a function to split the raw text into chunks to feed the model.
    25:28 📜 The presenter explains how to create a function that divides the text into smaller chunks using the LangChain library's CharacterTextSplitter class.
    29:58 🌐 The presenter introduces OpenAI's embedding models for creating vector representations of the text chunks for storage in the Vector store.
    31:37 🏷️ The Instructor model from Hugging Face is introduced as a free alternative to OpenAI's embeddings and ranks higher on the official Hugging Face embeddings leaderboard.
    33:59 💽 The speaker explains how to store the generated embeddings locally rather than in the cloud using FAISS via LangChain, a vector database that stores the numeric representations of text chunks.
    36:06 ⏱️ Demonstrates how long it could take to embed a few pages of text locally with the instructor model compared to the Open AI model.
    40:07 🔄 The host introduces conversation chains in Langchain, which allow for maintaining memory with chatbot and enabling follow-up questions linked to previous context.
    44:17 🧠 The presenter details how to use conversation retrieval chains for creating chatbot memory and how it aids in generating new parts of a conversation based on history.
    48:05 🔄 The speaker covers how to make variables persistent during a session using Streamlit's session state, useful for using certain objects outside their initialization scope.
    50:23 🎨 The presenter proposes a method of generating a chatbot UI by inserting custom HTML into the Streamlit application, offering fine-tuned customization.
    51:05 📝 The presenter introduces a code pre-prepared to manage CSS styles of two classes - chat messages and bots. Styling is discussed with reference to images and HTML templating for distinct user and bot styles.
    53:07 🔂 The presenter shows how to replace variables within HTML templates, using Python's replace function. By replacing the message variable, personalized messages can be displayed using the pre-arranged template.
    57:42 🗣️ The speaker demonstrates how to handle user input to generate a bot's response using the conversation object. The response is stored in the chat history and makes use of previous user input to generate context-aware responses.
    01:00:14 🔄 A loop is introduced to iterate through the chat history. Messages are replaced in both the user and bot templates resulting in a more dynamic conversation history displayed in the chat.
    01:03:14 💬 The host highlights how the chatbot is able to recall context based on the user's previous queries. The AI remembers the context from previous messages and appropriately answers new queries based on that.
    01:03:27 🔄 The speaker introduces how to switch between different language models, using Hugging Face models as an example. These models from Hugging Face can be used interchangeably with OpenAI's with minor adjustments in the code.
    01:06:00 🔁 The presenter demonstrates how the system works using different models. The response from the Hugging Face model is fetched in the same manner as the previous OpenAI model.
    Made with HARPA AI
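
    For reference, the pipeline in the takeaways above condenses to roughly the sketch below. It follows the classic (pre-0.1) LangChain API in use when the video was made; the function names are illustrative rather than copied from the repo, and newer LangChain releases have since moved or renamed several of these imports.

    from PyPDF2 import PdfReader
    from langchain.text_splitter import CharacterTextSplitter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.chat_models import ChatOpenAI
    from langchain.memory import ConversationBufferMemory
    from langchain.chains import ConversationalRetrievalChain

    def get_pdf_text(pdf_docs):
        # concatenate the raw text of every page of every uploaded PDF
        text = ""
        for pdf in pdf_docs:
            for page in PdfReader(pdf).pages:
                text += page.extract_text() or ""
        return text

    def get_text_chunks(text):
        # split the raw text into overlapping ~1000-character chunks
        splitter = CharacterTextSplitter(separator="\n", chunk_size=1000,
                                         chunk_overlap=200, length_function=len)
        return splitter.split_text(text)

    def get_vectorstore(chunks):
        # embed each chunk and index the vectors in an in-memory FAISS store
        return FAISS.from_texts(texts=chunks, embedding=OpenAIEmbeddings())

    def get_conversation_chain(vectorstore):
        # retrieval chain with memory, so follow-up questions keep their context
        memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
        return ConversationalRetrievalChain.from_llm(
            llm=ChatOpenAI(), retriever=vectorstore.as_retriever(), memory=memory)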

  • @wapoipei
    @wapoipei 28 days ago

    I've been searching for this topic with working samples and you gave us a full working project. You have a gift in teaching, keep it up mate. Thank you Alejandro!

  • @AdegbengaAgoroCrenet
    @AdegbengaAgoroCrenet 7 months ago +18

    I rarely comment on YT videos and I must say your sequencing and delivery of this content is really good. It's informative, clear, concise and straight to the point. No fluff or hype, just good-quality content with exceptional delivery. I couldn't help but subscribe to your channel and smash the like button. I have seen a lot of videos about this and they don't deliver the kind of value you do.

    • @alejandro_ao
      @alejandro_ao  5 months ago

      thank you man, it means a lot!

  • @weimai6553
    @weimai6553 10 months ago +4

    Nice video! I like how you mention all the little details people would miss. Video delivery is clear throughout. Keep up the good work!

  • @shivamroy1775
    @shivamroy1775 11 months ago +48

    Great quality content. I absolutely love that you took the time to explain everything in such great detail and walk us through the coding process, unlike the many other videos on YouTube that compromise explainability and knowledge for pace. Please keep up the good work. Also, the explanation of the system diagram of the application was by far the best I have ever seen.

    • @WildFire49
      @WildFire49 10 months ago

      Is your project working? When I process my PDFs, the text is not getting converted into chunks. What should I do?

    • @martinkrueger937
      @martinkrueger937 8 months ago

      Does anyone know how to use Azure OpenAI instead of OpenAI?

    • @MachineLearningZuu
      @MachineLearningZuu 7 months ago

      Yes, I am using it. What is the issue? @martinkrueger937

  • @pathmonkofficial
    @pathmonkofficial 10 months ago +42

    The use of Huggingface language models takes this to another level, enhancing performance and functionality. The tutorial's step-by-step approach to setting up LangChain and building the chatbot application is truly valuable.

    • @alejandro_ao
      @alejandro_ao  10 months ago +16

      you are truly valuable

    • @kaushikas4764
      @kaushikas4764 4 months ago

      What huggingface model is he using here?

    • @maximus3159
      @maximus3159 3 months ago +1

      This comment sounds suspiciously AI generated

  • @Pramesh37
    @Pramesh37 3 months ago +11

    Mate, you're a legend. I was searching for tutorials on Langchain framework, HuggingFace, LLM and Embeddings to understand the concept. But this one practical implementation gave me the entire package. Great pace, clear explanation of concepts, overall amazing tutorial. You are a gifted teacher and I hope you continue to teach such rare topics. Earned yourself a subscriber, looking forward to more such videos.

    • @xspydazx
      @xspydazx 2 months ago

      In reality we should not be using any form of cloud AI system unless it is free. That's point 1.
      We should also be focusing on Hugging Face models. All of these tasks can be performed with any model.
      Even the embeddings can be extracted from the model itself, so there is no need for external embedding providers; the embeddings used should always come from the model. The RAG content can be added to the tokenized prompt and injected as context, so pre-tokenized datasets are useful, reducing search time and RAG latency for local systems. (We cannot be held to ransom by using the internet as a backbone for everything and making these companies richer each day.)
      The services provided by a vector store are easily recreated in Python without third-party libraries, but any library that is completely open source and local is fine.
      In fact we should be looking to our AI researchers to fill our RAG store based on our expectations, and after examining and filtering it, it should be possible to fine-tune it into the LLM (talking to the LLM does NOT teach it!).

  • @speerunscompared
    @speerunscompared 11 months ago +19

    This tutorial is excellent. It's nice that you also explained some of the smaller details, like the environment variable setup, and how this works with git.

  • @theophilus4723
    @theophilus4723 8 months ago +2

    Thank you so much Alejandro! The content was great. The explanation was clear and concise. Looking forward to more content like this. Great job!

  • @RickeyBowers
    @RickeyBowers 11 months ago +20

    Your pacing and coverage of the material are excellent! A progressive external database seems like a natural next step. It could support multiple applications, with caching at the file level. I can imagine querying a project (a selection of files). I suppose it could get more meta, making decisions based on response content.
    Really looking forward to wherever you take us!

    • @alejandro_ao
      @alejandro_ao  11 months ago +1

      absolutely, there are so many ways that these applications can be scaled up for your own projects! keep it up :)

  • @junyang1710
    @junyang1710 1 year ago +4

    you are such a good teacher, everything is explained so clearly. Thank you!

  • @alejandro_ao
    @alejandro_ao  1 year ago +11

    Hey there! Let me know what you want to see next 👇

    • @EntertainmentDoseByAkash
      @EntertainmentDoseByAkash 1 year ago

      I'm also doing the same. However, what do you charge, approximately, per project?

    • @pyw
      @pyw 1 year ago +2

      Amazing. Can the app's responses include the original PDF context?

    • @EntertainmentDoseByAkash
      @EntertainmentDoseByAkash 1 year ago

      Yes, anything can be answered except images. But accuracy and speed are low.

    • @alejandro_ao
      @alejandro_ao  1 year ago +1

      ​@@pyw hey there, yes that's the idea. the app responds only with the context in your PDF files. regarding images, it would depend on the images in your doc, but in some cases we could make the app read that too :)

    • @sushantraikar1
      @sushantraikar1 1 year ago

      I have dropped you an email with the request. Please have a look and let me know

  • @rainbowtrout8331
    @rainbowtrout8331 7 months ago

    The way you explain each step is so helpful! Thank you

  • @seanjames1626
    @seanjames1626 1 year ago +2

    I have definitely subscribed! Great work. Thank you!

  • @dendrites
    @dendrites 8 months ago +26

    Perfectly executed tutorial. Definitely worth a coffee. If you are taking suggestions, I'd be interested in a tutorial (or just exploring potential solutions) on comparing content between two documents; or more specifically answering questions about changes/updates between document versions and revisions. There are many situations where changes are made to a document (e.g. new edition of a book; documentation for python 2 vs 3; codebase turnover; etc.), and while 'diff' can show you exactly what changed in excruciating detail, it would be nice to have an LLM copilot that can answer semantic questions about doc changes. For example a bioinformatics professor might want to know how they should update their course curriculum as they transition from edition 3 to edition 4 of a textbook (e.g. Ch4 content has been moved to Ch5 to make room for a new Ch4 on advances in gene editing; Ch7 has major revisions on protein folding models).

    • @alejandro_ao
      @alejandro_ao  5 months ago +5

      hey there! sorry for the late reply, this is a great idea! i started recording videos again a couple weeks ago and they are going up soon. this is totally something that could be very useful to a lot of people. i will look into that! and thanks for the coffee, you are amazing!!

  • @VladimirBalko
    @VladimirBalko 11 months ago +18

    🎯 Key Takeaways for quick navigation:
    00:00 📝 This video tutorial demonstrates building a chatbot application that allows users to interact with multiple PDFs simultaneously.
    04:20 🛠️ The tutorial uses Streamlit to create the graphical user interface for the application, enabling users to upload PDFs and ask questions.
    10:20 🔐 API keys from OpenAI and Hugging Face Hub are used to connect to their APIs for language models and embeddings.
    16:39 📚 The application processes PDFs by converting them into chunks of text, creating embeddings, and storing them in a vector store.
    24:07 🔢 The large text from PDFs is split into smaller chunks to be fed into the language model for answering user questions.
    25:28 🧩 The tutorial demonstrates how to divide text into chunks using the "character text splitter" class from the "LangChain" library.
    29:31 📚 Two ways to create vector representations (embeddings) of text chunks: OpenAI's paid embedding models and the free "Hugging Face Instructor" embeddings.
    32:35 🏭 Demonstrates how to create a vector store (database of embeddings) using OpenAI's embeddings or Hugging Face's Instructor embeddings. The Instructor option is free but can be slower without a GPU.
    35:51 🕑 Processing time comparison: OpenAI's embeddings processed 20 pages in about 4 seconds, whereas Instructor embeddings on CPU took around 2 minutes for the same task.
    41:00 💬 Utilizing "conversation chain" in LangChain to build a chatbot with context and memory for a more interactive experience. Demonstrates how to create and use the conversation object.
    51:05 💻 The video demonstrates how to create templates for styling chat messages (CSS) in a Python app for displaying chatbot conversations.
    52:15 📜 CSS is imported and added to the HTML template for styling the chat messages in the Python app.
    54:10 🔄 The Python function `replace` is used to personalize the chat messages and display user-specific messages in the bot template.
    56:41 📝 User inputs are handled to generate responses using a language model (OpenAI or Hugging Face) and displayed with a chat-like structure.
    01:04:07 🏭 The tutorial shows how to switch from using OpenAI to Hugging Face language models in the Python app for chatbot interactions.
    Made with HARPA AI
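
    As a companion to the takeaways above, swapping the paid OpenAI components for free Hugging Face ones comes down to changing two objects. This is a hedged sketch against the classic LangChain API: the model IDs (hkunlp/instructor-xl, google/flan-t5-xxl) are the ones commonly paired with this tutorial and may differ from the video, `chunks` is assumed to come from the text-splitting step, and HuggingFaceHub needs a HUGGINGFACEHUB_API_TOKEN in the .env file (the Instructor embeddings also need the InstructorEmbedding and sentence_transformers packages installed).

    from langchain.embeddings import HuggingFaceInstructEmbeddings
    from langchain.llms import HuggingFaceHub
    from langchain.vectorstores import FAISS

    # free embeddings, computed locally (slow on CPU, much faster on a GPU)
    embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
    vectorstore = FAISS.from_texts(texts=chunks, embedding=embeddings)

    # free hosted LLM via the Hugging Face Hub inference API
    llm = HuggingFaceHub(repo_id="google/flan-t5-xxl",
                         model_kwargs={"temperature": 0.5, "max_length": 512})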

    • @alejandro_ao
      @alejandro_ao  11 months ago +3

      cool

    • @texasfossilguy
      @texasfossilguy 10 months ago

      wow

    • @Sahil-ev5pm
      @Sahil-ev5pm 9 months ago

      @alejandro_ao Good project, but how can we host this to showcase it in our resume? Please guide us on this.

  • @armandopena3272
    @armandopena3272 4 months ago

    Well done! Congratulations. So far, this has been the clearest tutorial on the topic.

    • @alejandro_ao
      @alejandro_ao  4 months ago

      thank you! i'm glad to hear that :)

  • @gbengaomoyeni4
    @gbengaomoyeni4 6 months ago +2

    Wow! This guy is simply brilliant! Continue the good work bruh. You just got a subscriber!

  • @fishbyte
    @fishbyte 1 year ago +9

    Hi Alejandro, thank you for making the series of LangChain tutorials. I have learned a lot! I wonder if you could show us how to ask a question over multiple uploaded files with different formats (e.g., PDFs + CSV files).

    • @francoislepron2301
      @francoislepron2301 11 months ago

      This would be really helpful. Do you think such a tool set is able to recognize the fields in an invoice, such as the provider, the date, the invoice reference, the amounts and quantities for each item, and the total price? And could we then query the tool for all invoices received from a specific provider, and so on?

  • @sandorkonya
    @sandorkonya 1 year ago +23

    Nice project! Since langchain's pdf reader saves the page as metadata, if you ask something, the results (the pages of the pdf) could be shown in an embedded canvas next to the chat. This way one could see the relevant pages of the corresponding PDFs, not just the straight answer.

    • @maxbodley6452
      @maxbodley6452 1 year ago +14

      Yeah that sounds like a great idea. Do you know how you would go about doing that?

    • @kaiserchief500
      @kaiserchief500 10 months ago

      @@maxbodley6452 have you got some information of how that works?

    • @xt3708
      @xt3708 9 months ago

      bump

    • @oleum5589
      @oleum5589 9 months ago

      how would you do this

    • @sandorkonya
      @sandorkonya 9 months ago

      @oleum5589 langchain.document_loaders.pdf.PyPDFLoader --> this loader also stores page numbers in metadata.
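
    A minimal sketch of that idea, assuming the classic langchain.document_loaders API: PyPDFLoader returns one Document per page with the page number in its metadata, so retrieved chunks can be traced back to the pages they came from. The filename is illustrative, and `vectorstore` is assumed to have been built from these page Documents (e.g. with FAISS.from_documents).

    from langchain.document_loaders import PyPDFLoader

    pages = PyPDFLoader("report.pdf").load()   # one Document per page
    print(pages[0].metadata)                   # e.g. {'source': 'report.pdf', 'page': 0}

    # after indexing these Documents, the retriever hands back the same metadata,
    # which a UI could use to display the relevant pages next to the chat
    docs = vectorstore.as_retriever().get_relevant_documents("What is chapter 2 about?")
    for doc in docs:
        print(doc.metadata.get("page"), doc.page_content[:80])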

  • @donconkey1
    @donconkey1 10 months ago +2

    Excellent video!! You are a great teacher and a master of the material you present. Thanks, your videos really help and save me a lot of time.

  • @laurentlemaire
    @laurentlemaire 7 months ago

    Excellent video! Thanks for describing it so clearly and with the helpful git repo.

  • @iftrejom
    @iftrejom 1 year ago +15

    Thank you, man! I had so much fun replicating this project, I feel I learnt a lot with it. I am an AI student and this is the kind of content that makes candidates appealing to employers. I will try to build up some projects of my own with all the great stuff I just learnt.

    • @alejandro_ao
      @alejandro_ao  11 months ago +3

      that's awesome mate! keep building side projects and don't forget to look back to see your progress 💪

    • @deekshithkumar2153
      @deekshithkumar2153 9 months ago +1

      Can you please answer this: why am I not getting any output as shown in the video, other than this:
      load INSTRUCTOR_Transformer
      max_seq_length 512
      load INSTRUCTOR_Transformer
      max_seq_length 512
      Is it a problem with my system specifications or anything else?

    • @alangeorge1090
      @alangeorge1090 9 months ago

      I'm currently facing the same issue too, still unresolved :( @deekshithkumar2153

    • @mohammedalqaisi7114
      @mohammedalqaisi7114 8 months ago +1

      @deekshithkumar2153 I'm having the same problem. Have you found a solution? Maybe the data is not being loaded into FAISS correctly, idk?

    • @aishu2623
      @aishu2623 6 months ago +1

      Sir, a small doubt about this project: can we upload any PDF and ask questions, or do we need to upload the same PDF that the presenter uploaded?

  • @svenst
    @svenst 1 year ago +38

    Hey, thanks for this tutorial. Small hint: it’s recommended to use pypdf instead of PyPDF2, since this branch was merged back into pypdf. ;-)
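
    For anyone following along, pypdf is close to a drop-in replacement for the PyPDF2 calls used in the video; a small sketch (the filename is illustrative):

    from pypdf import PdfReader   # pip install pypdf

    text = ""
    for page in PdfReader("example.pdf").pages:
        text += page.extract_text() or ""   # extract_text() can return None on empty pages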

  • @ShikharDadhich
    @ShikharDadhich 9 months ago

    Awesome video! I am able to follow and run exactly what you did, thanks a lot man!

  • @techandprogramming4688
    @techandprogramming4688 8 months ago +1

    Great content! Thanks for sharing all the knowledge so beautifully and smartly, without getting things complicated.
    Also, please make more and more COMPLEX projects for us: an LLM as a product or a complete software product, and also something on LLMOps.

  • @MZak-js7oy
    @MZak-js7oy 1 year ago +5

    Thank you so much for the detailed explanation. One curious question, as I'm planning to use the Instructor model locally:
    how do you store the embeddings database locally instead of reprocessing it every time you initialize the app?

  • @tictaco31530
    @tictaco31530 1 year ago +3

    Very nice, and thanks very much for sharing!! With little experience I got this to work and I see a lot of potential.
    It should be possible to save and load a FAISS index file, but I'm not able to get this to work. Instead of uploading a lot of PDFs each time, you could then access an already generated (and saved) vector store. An option to append PDFs later on would also be nice. And... does the vector store have info on what comes from which PDF? And some metadata about the PDFs? Goal: to see the creation date or modified date, to see when that info was created (and may be outdated now ;-), or to determine which info is newer and older.
    And a plus one on Dr. Kónya's question. It would be nice to see the references the answer was based on.
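
    Saving and reloading the index is supported by LangChain's FAISS wrapper; a hedged sketch against the classic API (the directory name is illustrative, `chunks` and `new_chunks` are assumed to come from the splitting step, and newer LangChain versions may additionally require allow_dangerous_deserialization=True when loading):

    from langchain.vectorstores import FAISS
    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()

    # first run: embed the chunks once and write the index to disk
    vectorstore = FAISS.from_texts(texts=chunks, embedding=embeddings)
    vectorstore.save_local("faiss_index")

    # later runs: reload instead of re-embedding every PDF
    vectorstore = FAISS.load_local("faiss_index", embeddings)

    # appending more PDFs later: embed the new chunks, add them, and save again
    vectorstore.add_texts(new_chunks)
    vectorstore.save_local("faiss_index")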

  • @pickelbarrelofficial1256
    @pickelbarrelofficial1256 11 months ago +1

    You are so good at explaining this, you've got a real talent there.

    • @alejandro_ao
      @alejandro_ao  11 months ago

      the student gets 50% of the merit ;)

  • @jugjiwanseewooruttun7198
    @jugjiwanseewooruttun7198 6 months ago

    Thank you Alejandro, it is very well explained succinctly. Your clarity in explaining the steps made it easy. You are valuable.

  • @BrandonFoltz
    @BrandonFoltz 1 year ago +4

    I cannot believe I got this running (because I am a coding idiot). EXCELLENT work.
    Do you know if there is a simple way to get the chat to display in reverse? I.e. the latest query/response is at the top so you don't have to scroll down each time?
    Keep up the great content. You are on your way.

    • @alejandro_ao
      @alejandro_ao  1 year ago +8

      thank you man! i'm glad you got this to work 💪 to display the chat in reverse, you just need to reverse the array containing the messages before displaying it. you can add these 2 lines and then loop through this new array:
      reversed_messages = list(st.session_state.messages)
      reversed_messages.reverse()
      copying the list first with `list(...)` and then reversing the copy keeps the original message history intact.
      ps. your videos are gold btw

    • @BrandonFoltz
      @BrandonFoltz 1 year ago +2

      @@alejandro_ao I will give that a try!
      Very kind of you to say my friend. Lots of us out here just trying to do good work and help others learn.
      Our viewers are the gold; we just provide the light so they can shine.

    • @riyajatar6859
      @riyajatar6859 1 year ago

      def handle_userinput(user_question):
          response = st.session_state.conversation({'question': user_question})
          st.session_state.chat_history = response['chat_history']
          chat_list = st.session_state.chat_history

          # user messages sit at even indices, bot replies at odd indices;
          # reverse both index lists so the latest exchange is shown first
          user_idx = list(range(0, len(chat_list), 2))
          bot_idx = list(range(1, len(chat_list), 2))
          user_idx.reverse()
          bot_idx.reverse()
          for i, j in zip(user_idx, bot_idx):
              st.write(user_template.replace(
                  "{{MSG}}", chat_list[i].content), unsafe_allow_html=True)
              st.write(bot_template.replace(
                  "{{MSG}}", chat_list[j].content), unsafe_allow_html=True)

    • @MirthaJosue
      @MirthaJosue 10 months ago +2

      ha, ha, ha... I felt the same way until I watched this video

  • @thiagocorreaNT
    @thiagocorreaNT 1 year ago +5

    Congrats, great content!
    How can I show the PDF link that the response refers to?

  • @learnthetech7152
    @learnthetech7152 11 months ago +1

    Hi Alejandro, this is a superb tutorial and thanks so very much for it. Like me, I am sure many have been inspired by this. And you know what, I saw it was an hour-long video, but at no point did it feel that long; it's super engaging.

    • @alejandro_ao
      @alejandro_ao  11 months ago

      you are amazing, thank you for being around! i have more videos coming :)

  • @karannesh7700
    @karannesh7700 6 months ago

    This video is pure gold! Thanks @Alejandro great work! helped me a lot !

  • @scottregan
    @scottregan 1 year ago +7

    Hey mate, thanks so much. This is my first ever coding and I am thrilled to have it working.
    However, like many others, I am hitting the token limit. I know this is super obvious to anyone with tacit knowledge, but you've made a beginner's guide, so bear with us. I assumed LangChain would take care of this and only "call" the LLM with the relevant chunks? Otherwise, what is the point of this whole project? This is my error: "This model's maximum context length is 4097 tokens. However, your messages resulted in 20340 tokens. Please reduce the length of the messages."

    • @charlesd774
      @charlesd774 1 year ago +1

      you can't send the entire conversation each time; you have to cut it off at some point. Another option is to generate some kind of summary of each message so you can send summaries instead. This is from a thread on the OpenAI forums.
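
    Two knobs that usually resolve that context-length error, sketched against the classic LangChain API (the k values are illustrative, and `vectorstore`/`llm` are assumed from the main pipeline): cap how much history is kept, and pass fewer retrieved chunks into each prompt.

    from langchain.memory import ConversationBufferWindowMemory
    from langchain.chains import ConversationalRetrievalChain

    # keep only the last few exchanges instead of the whole conversation
    memory = ConversationBufferWindowMemory(k=3, memory_key="chat_history",
                                            return_messages=True)

    # retrieve fewer chunks per question so the prompt stays under the context limit
    retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

    conversation = ConversationalRetrievalChain.from_llm(
        llm=llm, retriever=retriever, memory=memory)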

  • @guanjwcn
    @guanjwcn 1 year ago +11

    Thanks for the insightful videos as always, Alejandro! Could you also do a tutorial on persistent vectorstores? For the same set of docs, if the app is refreshed, the embeddings of the docs would need to be re-done, which might not be cost-effective if the OpenAI embeddings are used. Not sure whether a persistent vectorstore like Pinecone would allow embeddings to be saved on local disk from the first use so the app can just read from there subsequently.

    • @alejandro_ao
      @alejandro_ao  1 year ago +28

      hey there, thanks :) sure. indeed, in this example, the vectorstore is in memory, which means that it will be deleted when you refresh the app. pinecone, as far as i know, works only on the cloud. but for local storage i'd probably go for either qdrant or chroma. i'll make a video about that soon!
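
    A small sketch of the local-persistence option mentioned above, using Chroma through LangChain (the directory name is illustrative, `chunks` is assumed from the splitting step, and older Chroma versions need the explicit persist() call while newer ones persist automatically):

    from langchain.vectorstores import Chroma
    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()

    # first run: embed once and write the store to disk
    vectorstore = Chroma.from_texts(texts=chunks, embedding=embeddings,
                                    persist_directory="./chroma_db")
    vectorstore.persist()

    # later runs: reopen the persisted store, no re-embedding needed
    vectorstore = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)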

    • @lordmelbury7174
      @lordmelbury7174 1 year ago +6

      @@alejandro_ao A Langchain + Qdrant vid would be really useful! 👍👍

    • @Sergio-rq2mm
      @Sergio-rq2mm 1 year ago

      @@alejandro_ao Could you not write the vectorstore variable to file and then source it later?

    • @mairex9978
      @mairex9978 1 year ago +1

      chroma could be a solution, you can try it out

    • @tictaco31530
      @tictaco31530 1 year ago

      +1

  • @maria-wh3km
    @maria-wh3km 9 months ago

    You are awesome, well presented and the code is so clean and perfect. Big thank you!

  • @samsquamsh78
    @samsquamsh78 10 months ago

    Fantastic video, great pace, and great explanations of each step and function. I subscribed!

  • @GrahamAndersonis
    @GrahamAndersonis 11 months ago +4

    Great video! Question: when you have a mixed PDF (text and tables), do you need to preprocess the tabular data in some way, like formatting/converting the inline table to a CSV string, or does PyPDF do enough preprocessing that the table rows can be ingested?

    • @alejandro_ao
      @alejandro_ao  10 months ago +4

      hey there! pypdf works pretty well with pdfs that are only text, ideally compiled directly from a text editor. if you have more complicated files, with tabular data (or scanned documents from a photo), i recommend you perform OCR on them to be sure that you get all the data from them.
      when the file contains tabular data or is hard to process, i usually use pdf2image to convert the file to an image and then use pytesseract.image_to_string to do OCR on it. i hope this helps!
      sorry for the late reply, i'm out on summer vacation! and thanks for the tip 💪
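
    A minimal sketch of that OCR path (both libraries need system binaries installed: poppler for pdf2image and the tesseract engine for pytesseract; the filename is illustrative):

    from pdf2image import convert_from_path
    import pytesseract

    # render each PDF page to an image, then OCR it
    text = ""
    for image in convert_from_path("scanned.pdf"):
        text += pytesseract.image_to_string(image)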

    • @GrahamAndersonis
      @GrahamAndersonis 10 months ago

      @@alejandro_ao myself, I’ve been pre-converting pdfs to MS Word (direct word import) and then exporting table objects to pandas dataframes. Text objects are treated normally. Every object has an index for inline ordering.
      I haven’t tried it-you might be able to use Adobe Extract API.
      Question: have you tried the pdf-to-Word pre-conversion approach? This can be automated, btw; iterating with python-docx is easy.
      If so, does that behave better than converting to image? Thanks for a great channel!

  • @topanimespro
    @topanimespro 1 year ago +6

    Hello, I wanted to express my gratitude for this tutorial. I'm curious to know if the concepts discussed here can also be applied to PDFs that are not primarily written in English (applicability to other languages such as Arabic or French)?

  • @ronan4681
    @ronan4681 1 year ago +1

    Thank you Sir, one of the clearest instructional videos I have watched. Look forward to following your videos

  • @ssgoh4968
    @ssgoh4968 11 months ago

    Best tutorial ever. Very organised and easy to follow and understand.

    • @alejandro_ao
      @alejandro_ao  11 months ago +1

      probably cause you’re the best learner ever 😎

  • @qwerto-ye5pe
    @qwerto-ye5pe 1 year ago +2

    Hello and thank you for this project. I just wanted to ask if there's a better way to split the text; for example, wouldn't it be better to break the text after a "." or a ","?

    • @rulesmen
      @rulesmen 11 months ago

      Breaking the text after a "\n" means you are splitting by paragraphs instead of sentences.
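
    One way to get splits that end at natural boundaries is LangChain's RecursiveCharacterTextSplitter, which falls back through a list of separators; a sketch (the separator list and sizes are illustrative, and `raw_text` is assumed from the extraction step):

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    # tries paragraphs first, then lines, then sentence ends, then words
    splitter = RecursiveCharacterTextSplitter(
        separators=["\n\n", "\n", ". ", " ", ""],
        chunk_size=1000,
        chunk_overlap=200,
    )
    chunks = splitter.split_text(raw_text)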

  • @DadCooks4Us
    @DadCooks4Us 1 month ago +3

    Some of the content is deprecated. Following along with the content as I am trying to learn becomes a bit difficult. Are you planning on updating this?

  • @antarikshverma8999
    @antarikshverma8999 10 months ago

    Thank you for clean and lucid explanation

  • @MrBekimpilo
    @MrBekimpilo 10 months ago +1

    This is one of the best tutorials ever, caters to a wide audience. The explanations and everything were on point.

    • @alejandro_ao
      @alejandro_ao  9 months ago +1

      thanks mate, i appreciate it

    • @MrBekimpilo
      @MrBekimpilo 9 months ago

      @@alejandro_ao you welcome. I will reach out sometime via email.

  • @user-el7qg2gd8d
    @user-el7qg2gd8d 11 months ago +3

    Hello Sir, Thank you for this amazing tutorial.
    I have implemented using the HuggingFaceInstructEmbeddings for embeddings and HuggingFaceHub for the conversation chain.
    I am getting the below error:
    ValueError: Error raised by inference API: Input validation error: `inputs` must have less than 1024 tokens. Given: 1080
    Please guide on how we can resolve this issue.
    Thanks :)
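
    That error means the prompt sent to the hosted Hugging Face model (retrieved chunks + question + history) exceeds its 1024-token input limit. A hedged sketch of one common workaround, smaller chunks and fewer retrieved chunks per question (the numbers are illustrative; `raw_text` and `vectorstore` are assumed from the main pipeline):

    from langchain.text_splitter import CharacterTextSplitter

    # smaller chunks keep the assembled prompt under the model's input limit
    text_splitter = CharacterTextSplitter(separator="\n", chunk_size=500,
                                          chunk_overlap=100, length_function=len)
    chunks = text_splitter.split_text(raw_text)

    # and retrieve fewer chunks per question
    retriever = vectorstore.as_retriever(search_kwargs={"k": 2})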

  • @GraceLiying
    @GraceLiying 1 year ago +6

    Hi Alejandro. Thank you so much for making this video. This is extremely helpful to me. I followed your tutorial and made my own PDF chatbot. I also made a cool test if you are interested: ua-cam.com/video/EynIc0Shgrw/v-deo.html. I utilized a fictitious document to prevent the LLM from accessing its existing knowledge, and it did well. I noticed some problems with the current code: once the conversation gets longer, the session_state may lose chat_history. But overall this is a very fun project to work on. Keep up the excellent work!

  • @sahiljamadar7324
    @sahiljamadar7324 3 months ago

    I was interested in getting a taste of LLMs and this video fulfilled exactly that. I completed this project and it works fine, and it taught me a lot about the vector store and the LLM itself, which is very much appreciated. THANKS A LOT MAN!!!

  • @sfisothecreative99
    @sfisothecreative99 7 months ago

    I just had to subscribe. Great quality content!

  • @adriangheorghe8814
    @adriangheorghe8814 6 months ago +1

    I have been dreaming of something like this for months, great work. I can't wait for the video on persistent vector stores, a real game changer.

    • @alejandro_ao
      @alejandro_ao  5 months ago +1

      in next week’s video i use a persistent vector store :)

    • @akarunx
      @akarunx 4 months ago

      @@alejandro_ao Any updates on persistent vector stores? Eagerly waiting for.

  • @dswithanand
    @dswithanand 4 months ago

    Explained in a very simple way; anyone from beginner to advanced can easily digest the content of the video. Successfully completed the project. Thanks bro.

    • @alejandro_ao
      @alejandro_ao  4 months ago

      very glad to hear this! keep it up!

  • @Tejas07777
    @Tejas07777 9 months ago

    best video so far on LLMs 🔥🔥🔥🔥

  • @ermax7
    @ermax7 1 year ago

    You are simply the best. Thanks for sharing us valuable knowledge, bruh.
    ✌️

  • @jeffg56
    @jeffg56 10 months ago

    Dude amazing job on this! Keep em coming!

    • @alejandro_ao
      @alejandro_ao  10 months ago

      thanks a ton! i will as soon as i come back from summer vacation!

  • @wolfrowell9435
    @wolfrowell9435 6 months ago

    Outstanding tutorial! Congrats 🚀🚀

  • @tonyww
    @tonyww 6 months ago

    Thank you so much for your high-quality technical walk through of the project. I found it very fascinating.

  • @paule7656
    @paule7656 9 months ago

    Thank you sooo much!! That's a great piece of educational content!

  • @woojay
    @woojay 7 months ago

    Thank you so much. This was super helpful for my own project that I was building.

  • @DG-je7ed
    @DG-je7ed 1 year ago

    Thanks! You are a great teacher!

  • @Mr_Chiro_
    @Mr_Chiro_ 1 year ago

    Thank you so much. Keep making more tutorials. You are really helpful

  • @TheVinaysuneja
    @TheVinaysuneja 1 year ago

    Love your way of explaining 🎉🎉🎉 thank you very much

  • @CesarVegaL
    @CesarVegaL 1 year ago

    Great Job Alejandro. Congrats. Thanks

  • @techlifejournal
    @techlifejournal 1 year ago

    Great idea. Thanks for sharing your knowledge.

  • @aboudezoa
    @aboudezoa 11 months ago

    Nicely explained ! Thanks.

  • @Robert-vj6up
    @Robert-vj6up 11 months ago

    Great tutorial on LLM! 🔥

  • @berendjdejong
    @berendjdejong 8 months ago

    Great content, enjoyed watching it, explained very clearly

  • @harshmunshi6362
    @harshmunshi6362 2 months ago

    Really good tutorial! Had to adapt and make some changes for my use case, but good intro!

  • @kirthiramaniyer4866
    @kirthiramaniyer4866 5 months ago

    Very thorough in explaining - good tutorial! Thanks

  • @crystal14w
    @crystal14w 10 months ago +2

    This was great! I was able to build it with no problem 😄 the only issue I had was the human photo being outdated so I tried to upload a new photo but it didn’t update.
    Major warning ⚠️ to those who test their apps a lot: don't waste your free API credits, because OpenAI will ask you for your card number and take $5 😢 I didn't know that was a thing until now. I built another project with the OpenAI API, so just keep tabs, everyone 🙏
    This was a great video! Thanks so much 👏

    • @alejandro_ao
      @alejandro_ao  5 months ago +2

      hey there, that's a good point! oh that's strange. anyways, you can now use the latest streamlit chat module, which allows you to create a chat-like UI with a few lines instead of building it all in HTML and CSS like we did here :)
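
    For reference, the built-in chat elements mentioned above (available since roughly Streamlit 1.24) reduce the HTML/CSS templates to something like this sketch; the session-state key name is illustrative:

    import streamlit as st

    if "messages" not in st.session_state:
        st.session_state.messages = []

    # render the conversation so far
    for msg in st.session_state.messages:
        with st.chat_message(msg["role"]):
            st.write(msg["content"])

    # chat input box pinned to the bottom of the page
    if question := st.chat_input("Ask a question about your documents"):
        st.session_state.messages.append({"role": "user", "content": question})
        with st.chat_message("user"):
            st.write(question)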

  • @GEORGEBELG
    @GEORGEBELG 1 year ago +1

    Excellent explanation and coding. Thank you

  • @ronicksamuel2912
    @ronicksamuel2912 5 months ago +1

    that was a great detailed and direct tutorial, you are a good teacher.💪💪

  • @swithmerchan92
    @swithmerchan92 9 months ago

    you are a master sensei .... masters of masters THANKS

  • @sammriddhgupta5614
    @sammriddhgupta5614 5 months ago

    Awesome video!! Concise explanations, and it works with openai, thank you!

  • @Patrick-hl1wp
    @Patrick-hl1wp 10 months ago

    Awesome video tutorial, super clear explanation, thank you 😊

  • @jamesallison9725
    @jamesallison9725 11 months ago +1

    Terrific tutorial, you are a born teacher :)

    • @alejandro_ao
      @alejandro_ao  11 months ago

      you are just amazing, thanks 🤓

  • @Sulls58
    @Sulls58 10 months ago

    You are an amazing teacher. well done!

    • @alejandro_ao
      @alejandro_ao  9 months ago

      i appreciate a lot, thanks 😊

  • @RandyHawkinsMD
    @RandyHawkinsMD 1 year ago

    Easy to follow. Thank you🙂

  • @duanxn
    @duanxn 6 months ago

    Great job, extremely helpful, Keep up, Thanks!

  • @veranium24
    @veranium24 1 month ago

    Great video dude. Really well explained

  • @ninocrudele
    @ninocrudele 10 months ago

    Amazing content, very well explained. I immediately subscribed to your channel, please keep going!

    • @alejandro_ao
      @alejandro_ao  10 months ago

      awesome, thank you! i totally will :)

  • @EDUARDOCAPANEMAecapanema
    @EDUARDOCAPANEMAecapanema 8 months ago

    Great tutorial. Thank you!

  • @AzfarImtiaz
    @AzfarImtiaz 11 months ago

    Excellent video!

  • @abagatelle
    @abagatelle 11 months ago

    Brilliant video and so very helpful. Thanks very much 😊

  • @Xavirue
    @Xavirue 11 months ago

    Really great video, thanks!

  • @pdelosrioslorenzo
    @pdelosrioslorenzo 10 months ago

    Thank you very much. What a first-class tutorial ❤

  • @mateosabando
    @mateosabando 10 months ago

    Amazing video!!! Keep it up!

  • @lalitaawasthi8201
    @lalitaawasthi8201 7 months ago

    Thank you so much for such amazing content. It was really helpful.

  • @rakeshmk281
    @rakeshmk281 9 months ago

    Cool... Nice presentation and deployment!! Thanks Alejandro

  • @kolawoleagoro6893
    @kolawoleagoro6893 4 months ago

    Amazing Video!! 👏🏾🎉 Thank you 🙏🏾

  • @haiderkhalilpk
    @haiderkhalilpk 1 year ago

    Very clear instructions

  • @AkritiKumari-kc8fr
    @AkritiKumari-kc8fr 2 months ago

    Thank you Alejandro, love your videos :)

  • @deerajspritle2264
    @deerajspritle2264 6 months ago

    Great Work!!

  • @smash666
    @smash666 10 months ago

    very clear. Thank you!

  • @Arkantosi
    @Arkantosi 22 days ago

    Keep up the good work, you're great!

  • @Matepediaoficial
    @Matepediaoficial 1 year ago

    You are incredible!!!

  • @hamza-kacem
    @hamza-kacem 10 months ago

    Excellent tutorial ❤ keep it up.

  • @dakshpatel4557
    @dakshpatel4557 2 months ago

    Thank you, this was the best ❤