323 - How to train a chatbot on your own documents?

  • Published 19 Aug 2024

COMMENTS • 46

  • @Ethan-gs5ib • 1 year ago • +6

    Better than most paid courses online! Thanks.

  • @alisonwright2189 • 1 year ago • +12

    I've been using the function ChatOpenAI() rather than OpenAI() to call the model "gpt-3.5-turbo", which costs $0.002 rather than $0.025. It's cheaper and more powerful, and can still be used for standard querying.
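
    For reference, a minimal sketch of what this comment describes, assuming the classic LangChain imports (newer releases move these classes to langchain_openai) and an OPENAI_API_KEY in the environment; the model names and prompt are only examples:

```python
# Sketch of the commenter's suggestion, not the video's exact code.
# Assumes classic LangChain imports and an OPENAI_API_KEY environment variable.
from langchain.llms import OpenAI            # completion-style endpoint
from langchain.chat_models import ChatOpenAI # chat endpoint

# Completion-style call (older, pricier models such as text-davinci-003)
llm = OpenAI(model_name="text-davinci-003", temperature=0)

# Chat-style call to the cheaper gpt-3.5-turbo model
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Both can be used for standard querying
print(chat_llm.predict("Summarize retrieval-augmented generation in one sentence."))
```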

    • @vishnuvardhanvaka • 1 year ago

      Hello ma'am, can you please make a video on usage costs and other cost factors of the OpenAI API?

  • @robosergTV • 1 year ago • +9

    Would be nice to make the same video but for Llama-2. Llama-2 can run in our private cloud. Many companies don't want to use OpenAI because of data privacy concerns. Also, Llama-2 is completely free and can be run locally.

  • @pabolusatyavivek9481 • 1 year ago • +1

    Thanks, Sreeni. Your content is always the best!

  • @souravran • 1 year ago • +2

    GPT is general purpose and it's been trained on millions of pieces of text so that it can understand human language. Sure, it might be able to answer specific questions based on the information it was trained on - for example, "Who is the CEO of Google?" - but as soon as you need specific results based on your product, the results will be unpredictable and often just wrong. GPT-3 is notorious for confidently making up answers that are just plain wrong.
    There are two approaches to address this:
    1) Fine-tune the model - you need to retrain the model with your own custom data, and again every time new data is added.
    2) Context injection - pre-process your knowledge base into embeddings, store them as objects or in a database, search the knowledge base for the most relevant pieces based on the user's query, and inject the top matches into the actual prompt as context (see the sketch below).
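
    A rough sketch of approach 2, assuming classic LangChain with FAISS as the vector store; the PDF name, chunk sizes, and query are placeholders rather than the video's actual code:

```python
# Rough sketch of "context injection" / RAG: embed a document, retrieve the
# most relevant chunks for a query, and inject them into the prompt as context.
# Assumes classic LangChain, faiss-cpu and pypdf installed, and an OpenAI key set.
# "my_docs.pdf" and the query are placeholders.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Pre-process the knowledge base: load the PDF and split it into chunks
pages = PyPDFLoader("my_docs.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(pages)

# 2. Embed the chunks and store them in a vector database
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. On each query, retrieve the most relevant chunks and inject them
#    into the prompt as context for the model
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
)
print(qa.run("What does the document say about the warranty period?"))
```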

    • @carlos.duclos • 1 year ago

      For very specific data extraction, do you think it'd be better to train your own model, for instance using LayoutLMv3?

  • @TLogan-eu7qt • 11 months ago • +1

    Great vid. Thank you for your time and effort on these vids.

  • @humaitrix • 4 months ago

    Great material! Thanks for sharing, good job 🚀

  • @amnn8507 • 26 days ago

    Thank you for your great videos. Just a quick note: you are not training anything here, you're building a RAG system. You could say "training" if you were optimizing the parameters of a model (e.g. neural nets) to minimize a loss function.

  • @deanstarkey4375 • 1 year ago

    This was awesome! I never do any coding, and I was able to follow along and do it.

  • @BlazeArteryak • 1 year ago • +4

    I have a PDF with thousands of pages. Is GPT-4 able to understand and memorize all of it? My questions about this big PDF need to correlate all the information.

  • @AlexDerBar • 1 year ago • +1

    Hi Sreeni! Love the content, everything's always amazingly explained. I was wondering if you were planning on covering the YOLOv7 algorithm. It would be really interesting to see a video of you covering it and hear your take on it.
    Keep up the good content :)

  • @drayhancolak • 1 year ago

    You are amazing, mate. Thank you for the awesome lectures.

  • @develom_ai • 9 months ago • +1

    Great video. Thanks!👍

  • @91255438 • 1 year ago

    Thank you! It's exactly what I was looking for.

  • @mdabdullahalhasib2920 • 1 year ago

    Always appreciate your work. Thanks sir...

  • @AdnanKhan-mi2kf • 1 year ago • +1

    Hi Sreeni,
    I enjoy your content every time I see it.
    Just a question: why did you jump from 311 to 323?

    • @DigitalSreeni • 1 year ago

      Good observation. I have already created content and written code for the remaining videos (312-322) and they focus on image analysis and optimization techniques. I recorded a couple more language model videos based on viewer questions so I had to assign them new numbers that do not follow the sequence. I don't want to reshuffle all numbers or wait a few months to release another language model video.

  • @happyg8682 • 1 year ago • +1

    Thank you very much for this great video! Could you please let me know whether we used ChatGPT or GPT-4 here? And it's not fine-tuning here, it's embedding, right? Which one do you think is better, fine-tuning or embedding? Thank you very much!

  • @vishnuvardhanvaka • 1 year ago

    Sir, can you please make a video on the usage costs of the API and other cost factors!

  • @telexiz • 3 months ago

    Thanks!

  • @romanemul1 • 1 year ago

    The biggest problem is the API key. Try to make it work without this whole OpenAI company. What happens if you don't extend your API key subscription? Will the pipeline just stop working?

  • @amedyasar1021 • 9 months ago

    Nice tutorial... how could I limit the topics to only the PDFs? For example, in cases where the chatbot must not answer.

  • @anshikak3 • 4 months ago

    Does it work for a CSV filled with numeric data, converted to PDF and then imported into the file?

  • @a3hindawi • 4 months ago

    Thanks

  • @elibrignac8050 • 1 year ago

    Can you link the txt file you used?

  • @BlazeArteryak • 1 year ago

    Is it better than the chatwithpdf plugin model?

  • @bropocalypseteam3390 • 1 year ago

    Where's the training?

  • @kai-yihsu3556 • 1 year ago

    May I ask if this tutorial example simply extracts the content from the PDF article as context and sends it along with the question to the OpenAI API? Or is there any training being done locally? I'm curious about this because the video mentioned the use of an API key. Thank you.

    • @guiomoff2438 • 1 year ago

      Regarding tokenization, when you use the OpenAI API, both your PDF data and your question will go through tokenization processes. The text from your PDF file will be tokenized to prepare it for input to the model, and your question will also be tokenized to match the model's input format. The tokenization ensures that the text is divided into smaller units that the model can process.
      The tokenizations for your PDF data and question are independent of each other. The model doesn't directly compare the tokenizations to extract relevant content from your PDF file. Instead, the model processes the tokenized input and generates responses based on its understanding of the language and context. The model doesn't have direct access to the original PDF data or its specific tokenization.
      OpenAI doesn't have access to your data!
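
      For illustration only, a tiny example of that tokenization step using OpenAI's tiktoken library; the question and PDF snippet here are made-up placeholders, not the video's code:

```python
# Small illustration of tokenization with OpenAI's tiktoken library.
# The question and PDF snippet are made-up placeholders.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")  # cl100k_base encoding

question = "What is the warranty period?"          # example query
pdf_snippet = "The warranty period is 24 months."  # example chunk from a PDF

q_tokens = enc.encode(question)
doc_tokens = enc.encode(pdf_snippet)

print(len(q_tokens), q_tokens)      # token count and IDs for the question
print(len(doc_tokens), doc_tokens)  # token count and IDs for the PDF snippet
print(enc.decode(doc_tokens))       # decoding recovers the original text
```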

    • @guiomoff2438 • 1 year ago

      You need an API key to add the OpenAI API layer to your model.

    • @DigitalSreeni • 1 year ago • +2

      No training is happening, just a vector match of embeddings. I've used the term 'training' in the tutorial but what I should have said was that embeddings are being matched.
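
      To illustrate the vector match mentioned here, a toy cosine-similarity example; the vectors below are invented, and a real system would get them from an embedding model (e.g. text-embedding-ada-002):

```python
# Toy illustration of "a vector match of embeddings": cosine similarity between
# a query embedding and stored chunk embeddings. The vectors are made up.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.1, 0.9, 0.2])
chunk_vecs = {
    "chunk about warranty terms": np.array([0.2, 0.8, 0.1]),
    "chunk about shipping costs": np.array([0.9, 0.1, 0.3]),
}

# The best-matching chunk is what gets injected into the prompt as context.
best = max(chunk_vecs, key=lambda name: cosine_similarity(query_vec, chunk_vecs[name]))
print(best)  # -> "chunk about warranty terms"
```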

    • @kai-yihsu3556 • 1 year ago

      @DigitalSreeni Thank you so much! 😊

  • @shubhamdubey9181 • 5 months ago

    But LangChain is free?

  • @user-uu7te1ob1b • 5 months ago

    But IDK how to code😢😢😢😢😢😂😂

    • @DigitalSreeni • 5 months ago

      Don't worry. There are a lot of service providers out there that allow you to train your own chatbots; it just costs some $$$.

  • @ronaldgourgeot2759 • 10 months ago

    Thanks!