How To Install Quivr 🏹 - Chat With ANY Document from ANYWHERE! (PDF, CSV, PPT, TXT)

  • Published 21 Oct 2024

COMMENTS • 317

  • @StanGirard
    @StanGirard Рік тому +71

    Thanks for the great video! It really helps the project!
    Since you recorded (probably 3 days ago ;)) we created a simple script to help people get started. There is only one command to run for the env variables and the migration 🔥

    • @matthew_berman
      @matthew_berman  Рік тому +14

      Amazing! I recorded this last week actually, just trying out an editor so it took a bit longer to publish.
      Great work on this, Stan. Maybe you can comment on using a local vector DB instead of Supabase? It seems people may want that.

    • @StanGirard
      @StanGirard Рік тому +3

      @@matthew_berman We will look into it, but we won't be moving completely away from Supabase.
      We will let people choose their own vector store in the near future (once the main features are out). We won't move away from Supabase altogether, however; it is still used for user management and the SQL database.

    • @Wandermonk844
      @Wandermonk844 Рік тому +3

      @@StanGirard Could this read my entire Obsidian vault? So I could say, write a character for my book and ask it to give him a backstory based on lore in my vault?

    • @crawfordscott3d
      @crawfordscott3d Рік тому

      This is really amazing and so useful. Thanks

    • @wvagner284
      @wvagner284 Рік тому +2

      It's really a beautiful project! Congrats! Unfortunately I'm facing some issues trying to interact with simple content (no blaming or criticizing). I'll keep following the project with great attention. It's so useful and, best of all, open source. You are doing a really great job! Regards from Brazil

  • @michaellavelle7354
    @michaellavelle7354 Рік тому +4

    This model jumps through a lot of tech hoops to simplify getting started. I'm so glad you found it. Thank you.

  • @Dijital.Akademi
    @Dijital.Akademi Рік тому +41

    If we could use it with a local database, that would be fantastic. I don't think any company would trust a database server hosted somewhere on the internet. But the implementation is great. As always, you explained very well how to install it. Thank you.

    • @chrissheridan9938
      @chrissheridan9938 Рік тому +6

      Exactly what I thought! But I looked it up: it uses Supabase, and looking at the Supabase GitHub, they support running local instances, so this can be truly all local!

    • @foxdog9332
      @foxdog9332 Рік тому +1

      Can you use it with chroma? Or is it only cloud?

    • @mjp152
      @mjp152 Рік тому +2

      My sentiments exactly - I don't think making an on-premise version would be that difficult, though.

    • @Pure_Science_and_Technology
      @Pure_Science_and_Technology Рік тому +3

      Yes. Use a local DB. Modify what you want.

    • @StanGirard
      @StanGirard Рік тому +4

      You can self host supabase locally ;)

  • @Tenly2009
    @Tenly2009 Рік тому +59

    This looks very cool - but it would have been much more useful if you had installed Supabase locally in Docker - and demonstrated it with one of the self-hosted language models - rather than using the cloud based Supabase and a cloud based LLM (Anthropic or openai). When it comes to chatting with our own documents - keeping your data local and private is a big concern. Also, if some of us wanted to turn this into a managed service for local companies - or even use it with our employers - most of them are not going to allow uploading proprietary or confidential documents to a cloud-based provider (no matter what their privacy policy says).
    Perhaps in a future video?

    • @clray123
      @clray123 Рік тому +2

      Catch 22 of all this cloud hosted crap.

    • @tsadigov1
      @tsadigov1 Рік тому

      Supabase is open source; I did set up Quivr with a local Supabase.

    • @Tenly2009
      @Tenly2009 Рік тому +9

      @@tsadigov1 Yep. It is. But he didn't self-host it here; he used their cloud hosting, which does nothing for the privacy concerns many of us have. This video would have been far more useful if he had demonstrated how to run the whole thing locally and privately.

    • @arthurnemeth3888
      @arthurnemeth3888 Рік тому +4

      This comment should be at the top. Until then it's not "local" and thus not private. Not something I would do with my files. But I'm looking forward to where all this is going.

    • @Jagosix
      @Jagosix Рік тому +2

      @@tsadigov1 - Is there any way you can help me out with setting up Quivr with Supabase locally? Thanks.

  • @Berghamote
    @Berghamote Рік тому +3

    Dear sir,
    I really love how you do your videos: straight to the point, high pace, and goddamn relevant.
    Thanks a bunch for the energy you are putting into this.
    Cheers

  • @PeterShannon_
    @PeterShannon_ Рік тому +111

    The biggest limitation I can see right now with these various "chat with your private files" projects is the small context window of most of the LLMs today. I want to be able to converse with a private LLM that is able to synthesize across thousands of my internal documents - an entire directory structure, etc. However, from what I understand so far, the LLM can only access this content via the very small slice that is fed to it via the context window along with the prompt. If, for example, Falcon 40b has a context window of 2048 tokens and if I've loaded private documents into a vector database at the suggested chunk size of 512 tokens, then along with my prompt only three chunks of my private documents can fit into the context window. The LLM is left blind to everything else. Am I on the right track here? If so, I don't see how this approach scales to meet the real demand. It's cute to do simple fact retrieval from a document, but we really want the LLM to be as facile with private documents as it is with the information it was originally trained on - to be able to summarize large bodies of information and reach across different stores of data to synthesize and find real insights. Will much larger context windows get us there, or is this a dead end and should we be looking at how to actually train an LLM on our private data?? I would love to learn what the community is thinking on this question.

    • @His-Soldier
      @His-Soldier Рік тому +8

      The billion-token mark will come at some point. In the meantime, you can use LLMs to create notes on your documents and then use those. This way a lot more can fit in the token window.

    • @OnigoroshiZero
      @OnigoroshiZero Рік тому +15

      @@His-Soldier Microsoft already showed a model with a 1B-token limit (LongNet).
      We just need these models to start being optimized to run locally on at least 10-12 GB of VRAM, or with a combination of VRAM and RAM if they require more.

    • @foxdog9332
      @foxdog9332 Рік тому +2

      Yeah, I would look into training the data into the model so that it has more context versus reading the files. But the problem is you still deal with the token limit, so if there is info missing it would be hard to tell, imo.

    • @mikairu2944
      @mikairu2944 Рік тому +9

      @atmaram64 I can't even imagine what it would be like chatting with the humongous amounts of notes I've been taking for years. Longnet and portable AIs can't come soon enough

    • @ryzikx
      @ryzikx Рік тому +7

      don't worry context windows will be obsolete within the next year
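
The back-of-the-envelope arithmetic in the thread above is easy to check. A minimal sketch using the commenter's own numbers (a 2048-token window and 512-token chunks); these are illustrative figures from the comment, not Quivr's actual defaults:

```python
# How many retrieved chunks fit in a context window, after the user's prompt
# and some reserved space for the answer? Numbers are from the comment above.

def chunks_that_fit(context_window: int, chunk_tokens: int,
                    prompt_tokens: int, answer_tokens: int) -> int:
    budget = context_window - prompt_tokens - answer_tokens
    return max(budget // chunk_tokens, 0)

print(chunks_that_fit(2048, 512, prompt_tokens=100, answer_tokens=256))  # -> 3
print(chunks_that_fit(8192, 512, prompt_tokens=100, answer_tokens=256))  # -> 15 with a larger window
```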

  • @PeterGiordano
    @PeterGiordano 11 місяців тому +1

    Phenomenal video, and one of many that I follow along with!!
    What would you recommend for hosting the entire project in the cloud so others could access it? RunPod? Other?

  • @michaelmarkoulides7068
    @michaelmarkoulides7068 Рік тому +1

    You are officially my favorite youtuber period

  • @dariuszkot7651
    @dariuszkot7651 Рік тому +24

    Hi. :) Your explanations are maybe the best on the internet, but please start listening to the comments about unintentionally misleading viewers. Do you agree that "fully private and local" means: everything is located and hosted only on my computer, can work even without an internet connection, and no external entity may gain access to my data? Can I suggest adding a setup scheme under the video, listing the LLM and other parts of the setup, and clearly separating what is FULLY private AND local (FPL) from externally hosted databases, IPs, and other tools introducing counterparty risks, etc.?

    • @spicer41282
      @spicer41282 Рік тому +4

      👍Agreed 1000%! He does a great job! But he needs an assistant or an AI agent (pun intended), because he is already busily working on the next episode.
      But yeah... he definitely needs to pay attention to what you have suggested!
      I second the motion!

    • @AlexGeo925
      @AlexGeo925 Рік тому +5

      Agreed, thank you. I also love the content here, but this delineation is critical for me, as someone starting to get into helping businesses set up LLMs for themselves, if they don’t trust ChatGPT and other cloud-based AIs

    • @dariuszkot7651
      @dariuszkot7651 Рік тому +3

      @spicer41282, @alexgeorgi8217 thank you for your support. I think we cannot make the same mistakes with "open source AI" and "local AI" that we made with Web 2.0: Google and FB. "Free" was not actually free; it was just a novel way to extract valuable data. Now we know there is a need for a critical question: do we allow them to extract data or do we not? This time the delineation must be clear, and we cannot be distracted again.

    • @tompayne36
      @tompayne36 Рік тому

      Are you saying you are advising businesses on how to adopt AI but do not understand, after watching Matt's very clear videos, which of these components are local and which are cloud? I am not trying to be obnoxious, but it sounds like you need to do a bit more research.

    • @dariuszkot7651
      @dariuszkot7651 Рік тому

      @@tompayne36 I'm learning, and not advising.

  • @marcfruchtman9473
    @marcfruchtman9473 Рік тому +3

    This is really amazing! It would have been great to see more examples such as PowerPoint and video.
    Great video, and thank you for stepping thru the instructions.

  • @KhaledFouad73
    @KhaledFouad73 Рік тому +3

    Great content, Matt, and thanks for keeping us updated with the latest. I am sure it's exhausting given the speed of development in the field. I am also on Mac and was wondering what you are using for your terminal to get those fancy color codes and autocompletion. I am currently using Fig for the command line, but it is not as neat as yours.

  • @valcoiyongoekala5924
    @valcoiyongoekala5924 Рік тому +1

    Thanks for your precious technical info

  • @pastoryoda2789
    @pastoryoda2789 Рік тому

    Let’s Goooo! I’ve been looking for something like this

  • @reikar2064
    @reikar2064 Рік тому +12

    If you can figure out how to do it 100% locally, I'd be very appreciative if you could post a new video or an update! Otherwise great video and thanks for keeping us up to date on this type of stuff!

    • @matthew_berman
      @matthew_berman  Рік тому +4

      I pinged the author of Quivr about it :)

    • @reikar2064
      @reikar2064 Рік тому +2

      @matthew_berman oh nice, thanks!

    • @matthew_berman
      @matthew_berman  Рік тому +5

      @@reikar2064 he said it’s coming soon!

    • @chrisyung655
      @chrisyung655 Рік тому +4

      @@matthew_berman Be great if you could post an update video on this!

    • @innocentiuslacrim2290
      @innocentiuslacrim2290 Рік тому +1

      As long as you are using openai models, it cannot be 100% local - even if the DB is local.

  • @jeremybristol4374
    @jeremybristol4374 Рік тому

    This is awesome! Thanks for the detailed installation walk-through!

  • @tunestyle
    @tunestyle Рік тому +1

    Outstanding!!! Working on this build right now.

  • @thelandoflofi
    @thelandoflofi Рік тому +15

    Awesome! Would you be able to make a video to show us how to fine-tune an open-source model with personal data?

    • @redbaron3555
      @redbaron3555 Рік тому +3

      🎯👏🏻👍🏻

    • @williamwong8424
      @williamwong8424 Рік тому +4

      Yes please - not using an OpenAI key but a local LLM. I think it's just placing the file in the folder, but please do that. And is there an alternative to Supabase?

    • @dukemagus
      @dukemagus Рік тому +4

      99% of the time you don't need to fine-tune an LLM (it's super expensive). Just make your embeddings and use them as context with your favorite LLM

    • @dariuszkot7651
      @dariuszkot7651 Рік тому +1

      @@dukemagus Any idea how to JUST (quickly and easily?) make "my embeddings"?

    • @thomasbattle969
      @thomasbattle969 Рік тому

      @@dariuszkot7651 Yeah I want to know how to do this as well.
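
On the "how do I just make my embeddings" question in this thread: the core of it is two steps - turn each text chunk into a vector, then score the question's vector against them and keep the best matches as context. A minimal sketch assuming the sentence-transformers library and the all-MiniLM-L6-v2 model (one common local choice; not necessarily what Quivr uses internally):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs on CPU

chunks = [
    "The hero was raised in the northern marshes.",
    "Chapter 3 covers the founding of the capital.",
    "Taxes in the old kingdom were paid in salt.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

question = "Where did the hero grow up?"
query_vec = model.encode([question], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity is just a dot product.
scores = chunk_vecs @ query_vec
best = int(np.argmax(scores))
print(chunks[best])  # this is the snippet you would paste into the LLM prompt as context
```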

  • @hoangnam6275
    @hoangnam6275 Рік тому

    Thanks bro, best wishes for u. U the man of the community

  • @bixbe_sglearn
    @bixbe_sglearn Рік тому +1

    Kudos to you. So easy even for non-developers

  • @kushis4ever
    @kushis4ever 6 місяців тому

    Hey there, thanks for the video - as usual, a great one. I have noticed that the installation instructions have changed on their documentation page.

  • @TihomirBozic69
    @TihomirBozic69 Рік тому

    Liked, subscribed and left a comment :) thanks for the video, this might be a missing link for my next project :)

  • @Zivafgin
    @Zivafgin Рік тому +1

    Hi Matthew,
    I love your channel, and I want to say big-time thanks. Thank you for providing us with fantastic projects and making them easily accessible. I really appreciate it, and if you don't mind, I have a couple of questions:
    1) I encountered difficulties running LocalGPT on my PC due to insufficient RAM. I'm curious about the system requirements for this, before jumping into the journey again...
    2) Besides privacy and running it locally, what distinguishes this from using the Code interpreter for interacting with documents?

    • @matthew_berman
      @matthew_berman  Рік тому +1

      Thank you! I’m not sure about system requirements for local gpt, it depends on the model you use. Code interpreter is better at running Python scripts against document vs just chatting with them

    • @Zivafgin
      @Zivafgin Рік тому

      @@matthew_berman Thanks, but I meant: do you know the requirements for running Quivr locally? (Because I already had a problem last time with LocalGPT.)

  • @hermysstory8333
    @hermysstory8333 Рік тому +1

    Many thanks Matthew once again!!!

  • @syrrjun
    @syrrjun Рік тому

    Best local LLM model since I installed it.
    Thank you for your explanation

    • @seanolivas9148
      @seanolivas9148 Рік тому

      5 people said this didn't work in the last few weeks but you got it working?

  • @jamesyoungerdds7901
    @jamesyoungerdds7901 Рік тому

    That is just amazing, thank you! I'm wondering, do you think it would be useful as a customer support chatbot? ie. upload all your knowledge base and customer support information and have customers chat with it to resolve their questions?

  • @mitchrosefelt8918
    @mitchrosefelt8918 Рік тому

    Amazing!
    Thanks for posting this.

  • @Alex-gc2vo
    @Alex-gc2vo Рік тому +20

    I'm afraid you may be unintentionally misleading your viewers. In many of your videos, including this one, you use words like "private" and "local", which is usually true when talking about the LLM, but then you go and pair it with an externally hosted vector DB like Pinecone or Supabase, which is just as much of a security risk / external dependency, if not more, than using a web-hosted LLM.

    • @MDNQ-ud1ty
      @MDNQ-ud1ty Рік тому +1

      Yes, these companies are mining the data by getting people to freely send in their data. You do all the work and they get all the power. Nothing is free, these companies are not doing this to help you, they are doing it to hurt you.

    • @matthew_berman
      @matthew_berman  Рік тому +7

      Changed the title. Although I think you can have a local DB, I haven't confirmed it yet.

    • @endlessvoid7952
      @endlessvoid7952 Рік тому +1

      I don’t think you can use a local database, last I saw it was paired with supabase only

    • @GeoffClark
      @GeoffClark Рік тому +1

      @@endlessvoid7952 Supabase has a local install option though

    • @careyatou
      @careyatou Рік тому +1

      So a local LLM is good, and Supabase can be local too? Does that mean it could all be local and private?
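
For everyone in this thread asking whether the database can stay on your own machine: self-hosted Supabase is ultimately Postgres with the pgvector extension, so once it is running locally the similarity lookup is an ordinary SQL query. A rough sketch assuming psycopg2 and a hypothetical documents(content, embedding) table - not Quivr's real schema, and the connection details are placeholders:

```python
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432,  # adjust to wherever your local Postgres listens
    dbname="postgres", user="postgres", password="postgres",
)

query_embedding = [0.1, 0.2, 0.3]  # in practice: the embedding of the user's question
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT content
        FROM documents                     -- hypothetical table
        ORDER BY embedding <-> %s::vector  -- pgvector distance operator
        LIMIT 3
        """,
        (vec_literal,),
    )
    for (content,) in cur.fetchall():
        print(content)
```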

  • @aysberg9403
    @aysberg9403 Рік тому +3

    A tutorial on Quivr private llm would be nice 👍

  • @gwynmorris7199
    @gwynmorris7199 Рік тому

    Awesome stuff.. Thank you for taking the time and sharing.

  • @TrevorMatthews
    @TrevorMatthews Рік тому +2

    Another great video. It did leave me with a couple of questions. Is it possible to search across all the documents in your library? It kind of looked like you were going into a document and then searching it. The other thing I want to know is what information is actually being sent to ChatGPT? Are my documents in their entirety being shared? I'd like to have a totally self-hosted solution to use in the organization.

    • @helix8847
      @helix8847 Рік тому +1

      No, as the LLMs are still limited by context size. For example, GPT-4 can only look at about 6000 words at a time; anything outside of that, it forgets.

    • @clray123
      @clray123 Рік тому +1

      It basically looks up a piece of your document based on your query and sends this piece to ChatGPT for summarization.
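
To make the reply above concrete: what actually leaves your machine is the retrieved snippet(s) plus your question, not the whole library. A minimal sketch using the pre-1.0 openai Python client; the prompt wording is made up for illustration and is not lifted from Quivr's source:

```python
import openai

openai.api_key = "sk-..."  # your API key

retrieved_chunk = "(text of the best-matching chunk from the vector store)"
question = "What does the document say about renewal terms?"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{retrieved_chunk}\n\nQuestion: {question}"},
    ],
    temperature=0,
    max_tokens=300,
)
print(response.choices[0].message.content)
```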

  • @The-Sound-Machine
    @The-Sound-Machine Рік тому

    Is there a video on this channel, or would someone recommend a video, so I can see how to properly put this on the internet for my company to use? Also, can you create permissions within this so only certain people can access certain files? Lastly, how do I quantify tokens relative to the data? Is a token a certain amount of data it can ingest? Thanks for any and all help, super excited to dig in!

  • @xorqwerty8276
    @xorqwerty8276 Рік тому +4

    Are you still limited by the GPT token limit? Or can you upload large documents here? (100+ pages)

  • @MeinDeutschkurs
    @MeinDeutschkurs Рік тому

    Is it also possible to have the database local? If not, I cannot use it. 1. What happens with the data in the cloud? 2. Who else gets access to it, apart from me? 3. EU data law.

  • @Robson.S.Carreiro
    @Robson.S.Carreiro Рік тому +1

    This was just what I was looking for

  • @medec021
    @medec021 Рік тому

    Very very good explanation. Thank you.

  • @mrala
    @mrala Рік тому +2

    Thank you for the effort, Matt, and I have a question:
    what local model would be best for documents in Middle Eastern languages (e.g. Persian, Arabic)?

    • @matthew_berman
      @matthew_berman  Рік тому +2

      I would guess falcon, since it was created in the UAE

  • @NicholasCarn
    @NicholasCarn Рік тому +1

    Hi, thanks for the tutorial :) If possible please do a new tutorial covering a complete local install with local supabase and local offline model as this is difficult :)

  • @amj2048
    @amj2048 Рік тому +1

    This is so cool. Thanks for sharing!

  • @antdx316
    @antdx316 Рік тому

    100% the best AI tools channel. Do people know of any others so Matthew Berman can go through those tools and share the clear cutting steps here?

  • @jasestu
    @jasestu Рік тому +1

    So, you can use a local model, but you have to use a cloud database? It would be good if you were more clear on the architecture at the start when giving an overview.

  • @micbab-vg2mu
    @micbab-vg2mu Рік тому +1

    Great, I have to try it. Thank you for the video.

  • @OnigoroshiZero
    @OnigoroshiZero Рік тому +4

    This will be great to use when the models that I run locally have a larger token limit (100k tokens or more) and are optimized for use with a consumer-grade GPU with 10-12 GB of VRAM.
    I have hundreds of documents with game design and stories/lore ideas, and many times I create multiple variations of the same mechanics or make changes to lore/characters that are worse than my originals because I am getting lost in the thousands of pages in these.

    • @mirek190
      @mirek190 Рік тому

      we already have 8K local models, and soon maybe 100K

  • @HaroldCrews
    @HaroldCrews Рік тому +3

    I can see this being useful for dropping in technical papers to request an aggregate summary, or placing manuals into it and asking for technical support. This could easily become the engine behind support chatbots. What is the largest size document it can accommodate?

    • @helix8847
      @helix8847 Рік тому +1

      The size of the document it can read is determined by the LLM, not this software.

    • @clray123
      @clray123 Рік тому +1

      The aggregate summary for any technical paper already exists and is called an abstract.

  • @yoyiyoyi7216
    @yoyiyoyi7216 Рік тому +5

    How is this better than private GPT?

  • @wajidalikhan28
    @wajidalikhan28 Рік тому +1

    That's amazing. Can we use it as a chatbot for customer support or training sessions or other services? If you could guide me, that would be awesome.

  • @lmd4881
    @lmd4881 Рік тому

    absolutely amazing :)

  • @simple-security
    @simple-security Рік тому

    I'd love to be able to feed this a list of news sites.

  • @mrtn5882
    @mrtn5882 Рік тому

    Very, very exciting! Can it answer questions and tell you the source (ie. which document/documents) were used to answer the question?

  • @ItsBozzy
    @ItsBozzy Рік тому

    What terminal application are you using? I like the layout/color coding.

  • @blocksystems202
    @blocksystems202 Рік тому +1

    Can you explain how you might deploy your own to Vercel? So, connect your cloned GitHub repo to Vercel and then walk through the build process for deploying this. I'm thinking a cloud-hosted version is better than a locally hosted one, but also to act as a redundancy in case the main one goes down or PRs cause unintended disruptions. Thanks a lot.

  • @timothymaggenti717
    @timothymaggenti717 7 місяців тому

    Well, not so local when you need to use an outside database? Thanks for the video

  • @cyc00000
    @cyc00000 Рік тому

    This looks very promising

  • @TheNewPhysics
    @TheNewPhysics Рік тому

    I set up Quivr and it is working. I realized that I don't have an OpenAI subscription (I have ChatGPT Plus, but that doesn't allow me to use the API). I have GPT4All on my Ubuntu system and Quivr is working in Docker containers.
    How do I direct Quivr to use the models I have locally?
    I followed the description but it seems I missed the details associated with that part.
    I would ask the people from Quivr but I couldn't find a forum or Discord.
    Thanks

  • @tompayne36
    @tompayne36 Рік тому

    I was excited to try this on a free Oracle Cloud VM I have, but I have not been able to get it working yet. I get to the docker compose step…but found out the Oracle machine is using “podman” (a docker clone?). The compose runs for quite some time but eventually exits with an error code. Does the Docker expect a Windows or MacOS architecture? Specific flavors of Linux?

  • @federicoloffredo1656
    @federicoloffredo1656 Рік тому

    Very interesting. I wonder if it can be fed with emails, and if not, whether you know of any application that can. Thanks

  • @Miksa-fx2yb
    @Miksa-fx2yb Рік тому +2

    Thanks for the tutorial. How should I use local LLM with Quivr?

    • @matthew_berman
      @matthew_berman  Рік тому

      In the video, I point out a place where you can put the name of your local model and location.

  • @eunomiac
    @eunomiac Рік тому

    What limitations are imposed on it by the token limits of LLMs? Sorry if I missed your explanation of this somewhere, but I couldn't help wondering how it could possibly maintain a context spanning even one moderately-sized document, let alone many?

  • @drgnmsr
    @drgnmsr Рік тому +1

    Wait, doesn’t GPT4ALL do this already with a beautiful GUI, NO coding required, and is actually fully locally stored in your own computer ???🤔

  • @tonyblack2141
    @tonyblack2141 Рік тому

    Have you tried comparing the quality of the response from this with Chatbase? Would be nice to know. Thanks for your work

  • @PrattyNanda
    @PrattyNanda Рік тому

    Hi Matthew! What are the other file formats that this could potentially work with? Could this work with AutoGPT? This could provide context and autogpt would execute. Replaces middle management far more effectively

  • @nuanceshow
    @nuanceshow Рік тому

    What I'm really waiting for is something like this that runs completely local due to privacy concerns over the documents. Also is there a size limit on the document that you upload?

  • @iitiim
    @iitiim 3 місяці тому

    Will a password-protected PDF work? And what about a PDF containing an image of a social ID or driving license - can it tell the driving license number?

  • @AkulSamartha
    @AkulSamartha Рік тому

    Awesome video.👌 Can you please make a video on how I can put it on the internet so that many people can use it by accessing the URL?

  • @ahogervorst6467
    @ahogervorst6467 11 місяців тому

    Hi Matt, Anton from the Netherlands here. Can you help me? I can't get it to work. I have done everything just like your video, buddy, but I always end up on the main page of Quivr. Can you help me with this?

  • @Novelltrade
    @Novelltrade Рік тому +1

    Thank you for your hard work. I would appreciate it if you could create a video on how to implement this on a cloud instance or a server.

    • @FreeFormDesigner
      @FreeFormDesigner Рік тому

      I tried to run it on a virtual server, but unfortunately it does not work. Registration is successful, but it does not create a new brain - it sits at "Fetching your data...". How can this problem be solved?

    • @ultraversegroup9056
      @ultraversegroup9056 Рік тому

      I have the same problem, and I think it has something to do with Supabase.
      Has anyone connected successfully to Quivr from outside the local area network? If so, please share

    • @FreeFormDesigner
      @FreeFormDesigner Рік тому

      @@ultraversegroup9056 I deployed it on a cloud server. But I installed the desktop version on the server and work via remote desktop in a browser on the cloud. If I connect to the web server on the cloud from my computer, it doesn't work - there is no connection to the database

  • @Gl0we22
    @Gl0we22 Рік тому +1

    Why does this project have the same name as Quivr, the VR archery game? I was so confused when I saw the title

  • @אריאלבןמשה-פ1ה
    @אריאלבןמשה-פ1ה Рік тому

    Hi, thank you very much for the tutorial. Can I receive answers in other languages as well? How do I add the option to ask in Hebrew? Thank you.

  • @tvbox6533
    @tvbox6533 Рік тому +1

    Can this work with ExLlama?
    Can you do a video on 100% free usage (including the LLM and embeddings)?

  • @kirtg1
    @kirtg1 Рік тому

    I have a 1000-page Microsoft document that I want to query for specific answers.
    Can this be done with Quivr the way it is set up in your video?

  • @bennoreuter4393
    @bennoreuter4393 Рік тому +1

    What's the difference from LocalGPT, PrivateGPT and others?

  • @DanielBrenes
    @DanielBrenes Рік тому

    Followed closely until I attempted to run the SQL query via the provided script and got this warning message: "Destructive operation
    We've detected a potentially destructive operation in the query. Please confirm that you would like to execute this query."

  • @nicholaswright8782
    @nicholaswright8782 Рік тому

    Could you use this with books? Or is there a better option?

  • @mjp152
    @mjp152 Рік тому

    If Supabase is just Postgres with extra makeup, it probably wouldn't be that difficult to build a version that runs entirely on-premise. The setup SQL script looks very vanilla.

  • @yumpone
    @yumpone Рік тому

    Interesting but what is it actually doing? It feels like it would be an infinite context but that's not possible, right? So, how is it actually using the documents provided? Can't find anything on it. I guess it has to add the file to context dynamically but that would be interesting to know.

  • @hondo190
    @hondo190 Рік тому +2

    If you have to store your data in an external database, it is not a local tool. Sadly

  • @janalgos
    @janalgos Рік тому +1

    I wonder how this compares to code interpreter? Unfortunately I don't have access to code Interpreter yet.

    • @matthew_berman
      @matthew_berman  Рік тому

      It's a pretty different product, solving different problems.

  • @rrrila8851
    @rrrila8851 Рік тому

    Can we connect it to Falcon 40B or something similar so we don't use OpenAI AT ALL?

  • @DoozyyTV
    @DoozyyTV Рік тому

    What are the tokens for? You have to pay for tokens?

  • @saisupriyapavarala4996
    @saisupriyapavarala4996 Рік тому

    @MatthewBerman I have followed the same procedure. But I am facing issues with creation of brain and some other errors. Can you please help me?

  • @sgramstrup
    @sgramstrup Рік тому +4

    How F* annoying that every other AI project requires an account at some ridiculous, unnecessary service provider. Most of these needed services already come in open-source form and should just be installed with the original software. NO need for creating accounts everywhere. I don't know if it is because devs are getting paid to use/mention these shit services, or they were too lazy to use open software, but fuck it's annoying..

    • @clray123
      @clray123 Рік тому

      That is too difficult for the cloud kiddies.

  • @waraiotoko374
    @waraiotoko374 Рік тому

    Awesome project. But is it 100% private? Is this DB in the cloud? Can I use another DB?

  • @punchbowldeals
    @punchbowldeals 11 місяців тому

    7:47 - it will tell you that you are about to run a destructive query; go ahead.

  • @surajthakkar3420
    @surajthakkar3420 Рік тому

    Instead of Supabase, can we use PostgreSQL?

  • @MedoHamdani
    @MedoHamdani 11 місяців тому

    What do the temperature & tokens settings do?
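
In short: temperature controls how much randomness the model uses when picking the next word-piece (0 keeps answers focused and repeatable; higher values make them more varied), and the token setting caps how long the generated answer can be - tokens are also the unit OpenAI bills by. A tiny illustration with the pre-1.0 openai client; the parameter values are arbitrary:

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize this clause: ..."}],
    temperature=0.2,  # low = focused, near-deterministic answers
    max_tokens=150,   # hard cap on the length of the reply
)
```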

  • @maxmerca
    @maxmerca Рік тому

    Why should I sign up to log in to an app running in Docker on MY machine?
    Why, in the case I wanted that, would I have to ask Google for permission to access an app running on my PC?
    It is bad enough having to log in to play a game that you bought and are running locally!!! (besides multiplayer)

  • @punchbowldeals
    @punchbowldeals 11 місяців тому

    Step 4 has changed, mate.
    Medo Hamdani

  • @SkizzieSpeedruns
    @SkizzieSpeedruns Рік тому +2

    What PC specs do you need to run this with GPT-3.5 Turbo?

    • @matthew_berman
      @matthew_berman  Рік тому +1

      Any. Since it won’t be running the model locally

  • @jjolla6391
    @jjolla6391 8 місяців тому

    Why do you have to drag and drop? Why can't you specify an area of your file system (or other knowledge sources) that it ingests, and then tell it "find the doc that talks about the constitution"?

  • @PapaBradKnows
    @PapaBradKnows Рік тому

    Yes, I'm at a stopping point; I need a refresher to make this work. It certainly looks cool, but I really want to make it work locally to parse my plethora of data on my external hard drive. Oh well, guess I'm showing my age.

  • @hiromaohi
    @hiromaohi Рік тому

    1) Under custom prompt, within the settings column of my brain, selecting a Quivr personality does not work, period. When I add my prompt title and content manually it never saves, ever. The other areas in settings do save.
    2) I have successfully uploaded two PDF articles and one Word document. It seems like sometimes, within the chat, the brain decides to acknowledge one at a time, sometimes all, but the majority of the time it will give me an answer indicating the opposite. Sometimes it might even say that it cannot view documents, period, "as an AI assistant".
    3) I noticed that I have already used half of my "brain size" - is this normal? From examples that I have seen on YouTube, people uploaded an enormous amount in comparison to what I have.
    4) At some point in time, the chats will just stop working; any inputs will be ignored, so I just start a new chat to at least try progressing.
    Conclusions - I had the help of a computer engineer friend with the install as it was troublesome, and I was OVER THE MOON to start using it for my research endeavors. Alas, I have not been able to leverage this tech as demonstrated on YouTube. Please help!

    • @hiromaohi
      @hiromaohi Рік тому

      I have made some progress since posting this, but I have run into additional challenges. My "brain" is full - it is around 9 MiB - so I tried creating a second brain, but this second brain is not accepting uploads

  • @edgarbernal-martinez4698
    @edgarbernal-martinez4698 Рік тому

    Awesome project, thank you for sharing.
    Unfortunately, I quickly ran into "error_code=rate_limit_exceeded error_message='Rate limit reached..."

  • @foxdog9332
    @foxdog9332 Рік тому +1

    Is it possible to use local storage like Chroma?
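
Per the replies higher up, Quivr itself is currently tied to Supabase, but for reference this is roughly what a standalone local Chroma store looks like - a minimal sketch using the chromadb Python client (0.4+ API), shown as a general illustration rather than a supported Quivr configuration:

```python
import chromadb

client = chromadb.PersistentClient(path="./my_local_db")  # stored on disk, fully local
collection = client.get_or_create_collection(name="docs")

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Quarterly revenue grew 12% year over year.",
        "The warranty covers parts but not labour.",
    ],
)

results = collection.query(query_texts=["what does the warranty cover?"], n_results=1)
print(results["documents"][0][0])
```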

  • @goss4444
    @goss4444 Рік тому

    It looks like you are exposed if you leave the defaults in place with Supabase. But I could be wrong... I locked it down without setting up any policies and Quivr no longer works. So yeah, help me out here? I am not asking for tech support, just a discussion of "exposure".

  • @toddgattfry5405
    @toddgattfry5405 Рік тому +1

    Do you need to have a paid account with ChatGPT to use the chat?

  • @joeigineversleepigottaeat8969

    This is golden

  • @ShaneInseine
    @ShaneInseine Рік тому

    Did ChatGPT's new file upload feature make Quivr redundant?

  • @punchbowldeals
    @punchbowldeals 11 місяців тому

    What about this error:
    "You have to remove (or rename) that container to be able to reuse that name."
    Using Ubuntu OS

  • @r34ct4
    @r34ct4 Рік тому

    Could I feed it a chat conversation history document and have it act as a chatbot based on that document?

  • @jagvindersingh4543
    @jagvindersingh4543 Рік тому

    If we are using the OpenAI API, how do we ensure that we don't give away our documents - specifically for business use cases where they don't feel safe exposing any documents to OpenAI just like that? Any thoughts?

    • @jayr7741
      @jayr7741 Рік тому

      Is the OpenAI API key paid or free? Please reply - can we use a free API key for this?

  • @selvamanysrinivasan6440
    @selvamanysrinivasan6440 Рік тому

    Please elaborate at time 3:25 for Windows 10 installation. There is no Base>Desktop. git command not recognized message. Thank you