PrivateGPT 4.0 Windows Install Guide (Chat to Docs) Ollama & Mistral LLM Support!

  • Published 16 Nov 2024

COMMENTS • 287

  • @Offsuit72
    @Offsuit72 7 місяців тому +14

    I cannot thank you enough. I've been struggling for several days on this, it turns out I was using outdated info and half installing the wrong versions of things. You made things so clear and I'm thrilled to be successful in this!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому +1

      You're welcome! I am glad to hear the video assisted! Thanks so much for reaching out.

  • @radudamian3473
    @radudamian3473 7 місяців тому +9

    Thank you. Liked and subscribed. I most appreciate your patience in giving step-by-step, easy to understand and follow instructions. Helped me, a total noob... so hats off.

  • @OmerAbdalla
    @OmerAbdalla 6 місяців тому +3

    This is a great installation guide. Precise and clear steps. I made one mistake when I tried to set up the environment variable in the Anaconda Command Prompt instead of the PowerShell prompt, and once I fixed my mistake I was able to complete the configuration successfully. Thank you very much.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      You're welcome! Thanks for reaching out. Glad the video helped.

  • @bananacomputer9351
    @bananacomputer9351 6 місяців тому +4

    After two hours of research, I started over with your tutorial and finished in 10 minutes. Thank you, thank you!!!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Glad it helped! Thanks for the feedback.

    • @RobertoDiaz-ry1pq
      @RobertoDiaz-ry1pq 4 місяці тому +1

      How did you do it? I've been stuck for 2 days now watching this same video.

  • @mellochord
    @mellochord Місяць тому +2

    I love it when things just work, and as you went through the process step by step, you made it feel simple. Thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  Місяць тому +1

      You're welcome! Glad the video assisted. Thanks for the feedback.

  • @likanella
    @likanella 5 місяців тому +2

    Thank you, thank you so much. There were no detailed instructions anywhere. Everything worked out! You're great!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      You're welcome! Glad to hear you are up and running. Thanks for the feedback!

  • @ScubaLife4Me
    @ScubaLife4Me 5 місяців тому +1

    Thank you for taking the time to make this video, it was just what I was looking for. 😎

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Glad it was helpful! Thanks for taking the time to reach out.

  • @shailmatrix
    @shailmatrix 4 місяці тому +1

    Thanks for creating a clear and concise video to understand the process of running Private GPT.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Glad it was helpful! Thanks for the feedback, much appreciated.

  • @curtisdevault6427
    @curtisdevault6427 5 місяців тому +1

    Thank you for this! I've been struggling with this for a few days now, you provided up to date and clear instructions that made it super simple!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Great to hear! Glad you are up and running. Thanks for the feedback.

  • @bamokinamoandadestin7888
    @bamokinamoandadestin7888 4 дні тому +1

    great job. It's working!!! Although my computer is slow for RAG applications.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 дні тому

      Pleasure, glad you are up and running. Thanks for the feedback..

  • @christopherpenny6216
    @christopherpenny6216 4 місяці тому +1

    Thank you sir. This is incredibly clear. Many others make assumptions about what I know already - you covered everything. Great guide!

  • @chahrah.5209
    @chahrah.5209 6 місяців тому +1

    Huge thank for the video, AND for taking the time to help solve problems in the comments, it was just as helpful. Definitely subscribing.

  • @maxxfussle
    @maxxfussle Місяць тому +1

    This was so useful and clearly explained - thank you!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  Місяць тому

      You're very welcome! Glad the video assisted. Thanks for reaching out. Much appreciated.

  • @makin1408
    @makin1408 4 місяці тому +1

    "Thank you so much! Finally got it working after trying a bunch of tutorials. Yours really did the trick- super helpful!

  • @Matthew-Peterson
    @Matthew-Peterson 7 місяців тому +1

    Brilliant Guide. Subscribed.

  • @hadit2964
    @hadit2964 2 місяці тому +1

    nice and thorough !
    tyvm worked perfectly 🙏

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Pleasure, glad the video assisted and thanks for the feedback.

  • @philippmueller5807
    @philippmueller5807 2 місяці тому +1

    Very helpful video! I installed it several months ago, and it works just fine. Can you please make an instructional video on how to update and maintain the chatbot? Private-GPT 0.6 is now available and requires a newer Python version. Ollama can also be updated. I just don't know how to update the chatbot properly. Or is it easier to simply delete everything currently installed and start from scratch?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому +1

      Hi, thanks for the feedback. Glad the video assisted. Thanks for the idea. I am going to start working on a new PrivateGPT video this week. I will look into upgrading a current install.

  • @DeTruthful
    @DeTruthful 6 місяців тому +1

    Thanks man, I did a few other tutorials and couldn't figure it out. This made it so simple. Subscribed!

  • @JiuJitsuTech
    @JiuJitsuTech 6 місяців тому +1

    Thank you for this vid! I watched several others and this was the most straight forward approach. Super helpful !!

  • @nunomlucio5789
    @nunomlucio5789 7 місяців тому +2

    In terms of speed, I feel that the previous version (using CUDA and so on) is way faster than this one using Ollama, both in terms of answering and even loading documents.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому +1

      Agreed, it just increases the build difficulty a bit. 👨‍💻 Thanks for reaching out.

  • @MyBrenden
    @MyBrenden День тому +1

    awesome mnr awesome thank you

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 годин тому

      Pleasure. glad the video got you up and running. Thanks for the feedback.

  • @Ana-kd4ff
    @Ana-kd4ff 28 днів тому +1

    Very nice tutorial! but how do I quickstart next time I want to use it again?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  27 днів тому

      Hi, check the bit in the video from 9:18 onwards. You need to open an Anaconda PowerShell prompt, activate your conda environment again, navigate back to your project folder, set the environment variable you want to use, and run the SW again. Let me know if you are up and running. Thanks for the feedback.
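      As a quick sketch of those restart steps (assuming the environment name and folder used in the video; substitute your own if they differ):
        conda activate privategpt        # re-activate the environment you created
        cd C:\pgpt\private-gpt           # back to the project folder
        $env:PGPT_PROFILES="ollama"      # the profile used in this guide
        make run                         # start PrivateGPT again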

  • @Quicksilver87878787
    @Quicksilver87878787 5 місяців тому

    Thanks! Is there any specific reason why you are using Conda as opposed to virtualenv?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, I use Anaconda for most of my AI environments when working on Windows. I find it easy to work with and install required SW etc. Thanks for reaching out.

  • @RyanHokie
    @RyanHokie 6 місяців тому +1

    Thank you for your detailed tutorial

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      You’re welcome 😊 Glad the video assisted. Thank you so much for the feedback.

  • @likanella
    @likanella 4 місяці тому +1

    Hey there! I was wondering if you could help me out with something. I'd love to create a tutorial on adding Nvidia GPU support, but I can't seem to find any clear, helpful guides on the topic. I've tried a few times, but I'm still a bit lost. Would you be able to help me out? Thanks so much!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Hi, with PrivateGPT now offloading the processing to 3rd parties like Ollama, you might want to check out the capabilities of your chosen backend. Maybe have a look at my Ollama video on the channel for some ideas. Thanks for reaching out and for the feedback. Let me know if you come right with this.

  • @aysberg9403
    @aysberg9403 7 місяців тому +1

    excellent explanation, thank you very much

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Pleasure! Glad the video assisted. Thanks for the feedback!

  • @ilieschamkar6767
    @ilieschamkar6767 6 місяців тому +1

    It worked like a charm, thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Great to hear! Thanks for the feedback much appreciated.

  • @bobsteave1236
    @bobsteave1236 4 місяці тому +1

    Wow, got it working with both your 2.0 and 4.0 guides. Thank you forever! But now I want to change the model that is in the UI. How do I do this? The whole reason I did this was to have other models with Ollama, and the guide doesn't show this.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Hi, glad you are up and running. You can change the Ollama model. Download and install the model you want to use and change your config files. Check the Ollama example shown on the below link.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @Abhiram00
    @Abhiram00 4 місяці тому +1

    it worked like a charm. thank you so much

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      You're welcome! Thanks for the feedback. Glad you are up and running.

  • @maxxxxam00
    @maxxxxam00 7 місяців тому +1

    Excellent video, very clear step guides. Do you have or could you make a docker compose file that does all the steps in a docker environment?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, thank you so much for the feedback, let me look into it and I will revert soon! Thanks.

  • @cinchstik
    @cinchstik 7 місяців тому

    Got it to run on VirtualBox. Works great! Thanks

  • @PAPAGEORGIOUKONSTANTINOS
    @PAPAGEORGIOUKONSTANTINOS 3 місяці тому +1

    REALLY THANK YOU. It's the first tutorial I've used and everything worked like a charm. Can you make another tutorial in a Windows environment where I could make the usage public? Like having an actual URL to use it on my WordPress webpage? (running from my PC that I have set up in my house)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Hi, Glad the video assisted and you got PrivateGPT up and running! Thanks for the feedback much appreciated. I will for sure look into your idea on making a video showcasing how to host the solution externally. Thanks for reaching out.

  • @erxvlog
    @erxvlog 5 місяців тому +1

    This was excellent. One issue that did come up was uploading pdfs....there was an error related to "nomic". I signed up for nomic and installed it. PDFs seem to be working now.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Thanks for reaching out. Glad to hear you are up and running.

  • @rummankhan5499
    @rummankhan5499 5 місяців тому +1

    awesome ! best tutorial ever... can you please make a video on web deploy/upload of local/privategpt... without openai (if thats doable)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, thank you for the feedback! Noted on the video idea. Glad you are up and running.

  • @msh3601
    @msh3601 3 місяці тому

    Thanks!! So if we close everything and want to run it again, do we open a PowerShell window in admin mode, activate the conda env, go to the directory, and then try "make run"?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 місяці тому

      Hi, yes that is correct. Just do it all in an admin mode Anaconda PowerShell terminal and it should be fine. Hope you are up and running. Thanks for reaching out!

  • @fishingbeard2124
    @fishingbeard2124 6 місяців тому

    Can I suggest that next time you make a video like this you enlarge the window with the commands. 75% of your window is blank and the important text is small, so I think it would be helpful to have less blank space and larger text. Thanks

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Thanks for the input, agreed I started to zoom in on the command prompts in newer vids. Thanks for reaching out and I hope the video helped.

  • @dauwswinnen2721
    @dauwswinnen2721 5 місяців тому +1

    I did everything but installed the wrong model. How can I change models after doing everything?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому +1

      Hi,
      If you are using Ollama you can just update the config file in your PrivateGPT folder to point to the model you downloaded in Ollama. Having multiple models in Ollama is fine; I use my Ollama instance to feed numerous AI frontends with multiple LLMs running.
      Check the link below for the defaults. The default is Mistral 7B: PrivateGPT's settings-ollama.yaml is configured to use the mistral 7b LLM (~4 GB) and nomic-embed-text embeddings (~275 MB).
      On your Ollama box, install the models to be used by running these commands in CMD:
      ollama pull mistral
      ollama pull nomic-embed-text
      ollama serve
      docs.privategpt.dev/installation/getting-started/installation
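      As a rough sketch, the model swap happens in the ollama section of settings-ollama.yaml; the field names below follow the PrivateGPT docs linked above, but versions differ, so verify against your own file:
        ollama:
          llm_model: mistral               # change to any model you have pulled in Ollama
          embedding_model: nomic-embed-text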

  • @chjpiu
    @chjpiu 7 місяців тому +1

    Thanks a lot. Please let me know how to change the LLM model in PrivateGPT. For me, the default model is Mistral.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, sorry for the late reply. You can check out this page to change your LLM. Let me know if you came right with this. Thanks for reaching out! 🔗docs.privategpt.dev/manual/advanced-setup/llm-backends🔗

  • @igor_scratcher
    @igor_scratcher Місяць тому +1

    Hey man, thanks for the tutorial, everything worked. But there is a problem: I can't use or chat with my documents because there is an error with the name "Collection make_this_parameterizable_per_api_call not found" and I don't know what this is. Could you help me?
    thx

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  Місяць тому

      Hi, have a look at the below link. This issue occurs when you submit a prompt but have no document selected. Let me know if this helps.
      github.com/zylon-ai/private-gpt/issues/1334

  • @Zbhullar
    @Zbhullar 2 місяці тому +1

    You sir are amazing! I have one issue: I have everything up and running, but uploading documents takes forever!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Hi, Glad to hear you are up and running and the video could assist. Regarding ingestion maybe have a look at the below link. Let me know if it helps. Thanks for reaching out.
      docs.privategpt.dev/manual/document-management/ingestion

  • @creamonmynutella2476
    @creamonmynutella2476 6 місяців тому +2

    is there a way to make this automatically start when the system is powered on?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Sure its possible with PowerShell scripts. Let me check it out and revert.

  • @anjeel08
    @anjeel08 6 місяців тому

    This is simply superb. I could install it and run it with your clear step-by-step instructions. Thank you so very much. However, I do notice that uploading the documents to be able to chat with my own set of data takes a very long time. Is there a way we can tweak this and make uploading documents faster? I am only using 1 Word doc of 30 pages with mainly text and one PDF document of 88 pages with text and images. The Word doc was uploaded in 10 min but the PDF runs endlessly. I would appreciate it if you could make a video on how to use OpenAI or one of the online providers to get speed (when confidentiality is not important). Thank you in advance for your tip.

    • @firatguven6592
      @firatguven6592 6 місяців тому

      I also wrote a comment complaining about the same issue. I also have the 2.0 version from him, and as if it wasn't uploading slowly enough already, in version 2.0 the upload was in any case considerably faster. During the upload I had 80% load on my 32-thread CPU, but now in 4.0 the CPU is just idling at 5%, which explains the slower upload. The parsing nodes are generating the embeddings much more slowly. Since I have more than 10,000 PDF files, it is unacceptable to wait endlessly during the upload. I have now been waiting 40 minutes for just 2 huge files with around 3,000 pages, which took only 20 minutes in total with the old version. I have no idea how long it will take to finish, and we are talking about only 2 files. The other 9,998 files will not even be uploaded in one year if the problem is not solved. I am disappointed to lose time with 4.0.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, thanks for reaching out. The new version allows you to use numerous LLM Backends. This video shows how to use Ollama just to make the install easier for most and its now the recommended option. The new version can still be built exactly like the previous, if you had better performance using local GPU and LlamaCPP you can still enable this as profile. If you really want high speed processing you can send it to Open AI or one of the Open AI like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right..
      docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @feliphefaleiros9540
    @feliphefaleiros9540 5 місяців тому +1

    Very well explained, thank you for the videos. In all the versions you explained, you showed everything step by step. You're awesome.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Thanks for the feedback. Glad you are up and running and the video assisted.

  • @LeapoldButtersStotch
    @LeapoldButtersStotch 4 місяці тому

    GREAT guide, it worked flawlessly the first time I ran through it! However when I powered my PC back on the second day and tried to start it back up I got lost. How do I start this up again?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому +1

      Hi,
      You need to activate the conda environment again. Then just follow the last steps in the video again to startup the system. Let me know if you are up and running. Thanks for reaching out.

  • @ParvezKhanPathan-ns3kw
    @ParvezKhanPathan-ns3kw 2 місяці тому

    Facing this "No module named 'build'" error while running poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-ollama"
    ... pls help

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Hi, just checking in, did you manage to resolve this issue? It sounds like required SW is missing. Can you confirm you installed the required SW as shown from 3:35 in the video onwards? Also make sure you are running a Python version within 3.11.xx. Take note of the make install and configuration from 5:20 onwards. Make sure you follow along in the correct prompts; you will note I change from command prompts to Anaconda or Anaconda PowerShell prompts. Let me know if you managed to resolve it. Thanks for reaching out.
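      A couple of quick checks worth running in the Anaconda prompt (assuming your environment is named privategpt as in the video):
        conda activate privategpt
        python --version     # should report 3.11.xx
        poetry --version     # confirms poetry is reachable from this prompt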

  • @senorperez
    @senorperez 7 місяців тому

    Thank you, but how can we make use of an NVIDIA GPU if we have one on our device? For example, I have an NVIDIA T600.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, if you build with Ollama it will offload to the GPU automatically (Nvidia or AMD). It does not hammer it to its full potential I have seen. This utilization will get better each evolution of the project. Let me know if you got the GPU to kick in when offloading.

  • @farfaouimohamedamine3288
    @farfaouimohamedamine3288 6 місяців тому +1

    Hi, thank you for your tutorial. I have followed the steps as you did, but I get this error when I try to install the dependencies of PrivateGPT:
    (privategpt) C:\pgpt\private-gpt>poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
    No Python at '"C:\Program Files\Python312\python.exe'
    NOTE: I did not create the virtual environment inside the system32 directory; I created it in the pgpt directory.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, did you get this resolved. Yes, I also created a pgpt folder in the root of the drive. Just to confirm are you running Python 3.11.xx? Let me know if you came right with this.

    • @facundolopez1792
      @facundolopez1792 2 місяці тому

      Seems you didn't check "add python to PATH" when you installed Python. Google how to add it manually to the system PATH.

  • @Reality_Check_1984
    @Reality_Check_1984 7 місяців тому +1

    Looks like they released a 0.5.0 today. If you install this now and look at the version it will be 0.5.0. All of your install instructions still work as it wasn't a fundamental change like the last big update. They added pipeline ingestion which I hope fixes the slow ollama ingestion speed but so far I still think llama is faster.

    • @Reality_Check_1984
      @Reality_Check_1984 7 місяців тому +1

      So I ran it overnight and Ollama is still not performing well with ingestion. It definitely under-utilizes the hardware for ingestion. Right now a lot of the local LLM tools don't seem to leverage the hardware as well when it comes to ingestion. That is an improvement I would like to see in general, not just for Ollama or PrivateGPT. The ability to ingest faster through better hardware utilization/improved processing, and to store ingested files long-term on the drive along with the ability to query the drive and load relevant chunks into the vRAM, would significantly expand the depth and breadth of what these tools can be used for. vRAM is never going to offer enough, and constantly training models won't work either.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi thanks for the update. Had a bit of a scare with the update available moments after publishing this vid 😊. Thanks for the confirmation, I also checked and the install instructions remain intact. Appreciate the feedback. PS. I totally agree with the performance comment made.

  • @drSchnegger
    @drSchnegger 7 місяців тому +1

    If I make a prompt, I get an error: Collection make_this_parameterizable_per_api_call not found
    When I do another prompt, I get the error:
    'NoneType' object has no attribute 'split'

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому +1

      Hi, from what I can gather you will get this error if you prompt documents but no documents are loaded. Can you ensure you uploaded documents into PrivateGPT and selected them prior to the prompt? Let me know if you come right with this. Thanks for reaching out! If the problem persists, check out these links and see if they help: github.com/zylon-ai/private-gpt/issues/1334 , github.com/zylon-ai/private-gpt/issues/1566

    • @JiuJitsuTech
      @JiuJitsuTech 6 місяців тому +2

      From the git issues page, this resolved the issue for me. "This error occurs when using the Query Docs feature with no documents ingested. After the error occurs the first time, switching back to LLM Chat does not resolve the error -- the model needs to be restarted." Enter Ctrl-C in Powershell Prompt to stop the server and of course 'make run' to re-start.

  • @NoName-bh3no
    @NoName-bh3no 24 дні тому

    Awesome.. It actually works. Just the basic text, though. For some reason it gives the following error when attempting to upload a file:
    llama_index.core.ingestion.pipeline.run_transformations() got multiple values for keyword argument 'show_progress'
    Any idea why?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  24 дні тому

      Hi, Glad the video assisted. It might be the file type you are using. Test with a .txt file and PDF. Do you then still get this issue? Otherwise you can check if the below maybe applies.
      github.com/zylon-ai/private-gpt/issues/2100

  • @abhiudaychandra
    @abhiudaychandra 6 місяців тому

    Hi. Thanks for the great video, but the uploading of even just one document and the answering is so slow that I just cannot use it any further. Could you please tell me how to uninstall PrivateGPT? The other applications I can of course uninstall, but is there some command I should enter to remove files?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, yes it's a bit slower against a local LLM, dependent on the GPU available in the machine. Did you try to use OpenAI or one of the online providers if you want to have it super fast? If confidentiality is not your main concern, maybe give it a go. If you want to remove it, just uninstall all the SW and delete the project folder you built PrivateGPT in and you should be fine. Thanks for reaching out.

  • @workmail6406
    @workmail6406 6 місяців тому

    Hello, I have managed to follow the instructions up until 9:50 for running the environment with make run. However, when I initiate the command in an administrator Anaconda PowerShell after navigating to my private-gpt folder, I encounter the error "The term 'make' is not recognized as the name of a cmdlet, function". I have no idea how I can get Anaconda PowerShell to recognize the command on my Windows PC. What can I do to finally start the PrivateGPT server?

    • @workmail6406
      @workmail6406 6 місяців тому

      Now that I installed gitbash from the makeforwindows website it works. However, I now run into this error when running make run:
      Traceback (most recent call last):
        File "<frozen runpy>", line 198, in _run_module_as_main
        File "<frozen runpy>", line 88, in _run_code
        File "C:\pgpt\private-gpt\private_gpt\__main__.py", line 5, in <module>
          from private_gpt.main import app
        File "C:\pgpt\private-gpt\private_gpt\main.py", line 4, in <module>
          from private_gpt.launcher import create_app
        File "C:\pgpt\private-gpt\private_gpt\launcher.py", line 12, in <module>
          from private_gpt.server.chat.chat_router import chat_router
        File "C:\pgpt\private-gpt\private_gpt\server\chat\chat_router.py", line 7, in <module>
          from private_gpt.open_ai.openai_models import (
        File "C:\pgpt\private-gpt\private_gpt\open_ai\openai_models.py", line 9, in <module>
          from private_gpt.server.chunks.chunks_service import Chunk
        File "C:\pgpt\private-gpt\private_gpt\server\chunks\chunks_service.py", line 10, in <module>
          from private_gpt.components.llm.llm_component import LLMComponent
        File "C:\pgpt\private-gpt\private_gpt\components\llm\llm_component.py", line 9, in <module>
          from transformers import AutoTokenizer  # type: ignore
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py", line 26, in <module>
          from . import dependency_versions_check
      ImportError: cannot import name 'dependency_versions_check' from partially initialized module 'transformers' (most likely due to a circular import) (C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py)
      make: *** [run] Error 1
      Any Idea how I can resolve this?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, can you confirm loading all the required SW including all the Make steps I perform from 3:35 into the video. Let me know if you were able to resolve this. Also confirm you are running everything in the same terminals and admin mode where needed. Make sure you use Python within 3.11.xx in your Anaconda Environment.

  • @pranavmalhotra7635
    @pranavmalhotra7635 6 місяців тому +1

    ERROR: Could not find a version that satisfies the requirement pipx (from versions: none)
    ERROR: No matching distribution found for pipx
    I am receiving this error and hence I am unable to proceed with the installation. Any tips?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, can you confirm you are installing pipx in a normal admin mode command prompt. Just to check if you followed the steps from 6:30 in the video onwards. If still not working can you confirm you have Python 3.11.xx installed with the pip package that ships with. Let me know if you came right with this. Thanks for reaching out.

  • @tarandalinux8323
    @tarandalinux8323 5 місяців тому

    Thank you for the great video. I'm at 9:48 and the command $env:PGPT_PROFILES="ollama" gives me an error: The filename, directory name, or volume label syntax is incorrect.
    (privategpt) C:\gpt\private-gpt>$env:PGPT_PROFILES="ollama" (I don't get the colors you get)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, can you confirm you are running this in your Anaconda PowerShell terminal, Check the steps I use from about 9:20 in the video. Let me know if you are up and running.
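      For reference, the two shells use different syntax, and that particular error usually means the PowerShell form was typed into a CMD-style prompt:
        # Anaconda PowerShell prompt (what the video uses)
        $env:PGPT_PROFILES="ollama"
        # plain CMD / Anaconda Prompt equivalent
        set PGPT_PROFILES=ollama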

  • @guille8237
    @guille8237 6 місяців тому +1

    I got it running, but I want to change the model to DeepSeek Coder. How do I do it? Never mind.

  • @innavoigd
    @innavoigd 8 днів тому

    Hello, unfortunately, around 9:48 during "make run", I got caught in an error loop of missing "gradio", then "-- extras llms-ollama", then "tensor", then "gradio"...
    I will update my comments if I break through

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 дні тому

      Hi, When you set the environment variables and execute make run, you must do that in an Anaconda PowerShell Prompt. Did you follow the way I do it from 9:15 in the video? Let me know if you come right with this..

  • @rchatterjee48
    @rchatterjee48 5 місяців тому +1

    Thank you very much it works

  • @jcpamart83
    @jcpamart83 4 місяці тому

    Yesterday I tried to install pgpt with the latest versions, but it went badly. Now I have installed everything with the right versions, but during the installation of poetry it answers me with this:
    No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe'
    No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe'
    'poetry' already seems to be installed. Not modifying existing installation in 'C:\Users\MYPATH\pipx\venvs\poetry'.
    Pass '--force' to force installation.
    I forced it, but anyway, miniconda is the old installation.
    What can I do now????
    Thanks for your help

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 місяці тому

      Hi, apologies, I missed this comment. Did you come right with this? Can you confirm the Python version is 3.11.xx? It has to be in the 3.11 branch. Did you follow all the steps in the video from 3:35 onwards? Let me know if you got past this issue. PS: remember to add Python to your path when the installer starts. Thanks for reaching out.

  • @SuffeteIfriqi
    @SuffeteIfriqi 5 місяців тому

    Such a great video, which in my case makes it even more frustrating, because I'm literally stuck at the last step.
    It says:
    make: *** Keine Regel, um "run" zu erstellen. Schluss.
    Which translates to:
    make: *** No rule to make "run". Stop.
    Any idea what this might be caused by? I've restarted the entire process twice, no luck...
    Thank you so much.

    • @SuffeteIfriqi
      @SuffeteIfriqi 5 місяців тому

      I suspect it might be caused by GNU Make's path, although I did include it in the env variables...

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, did you manage to resolve this issue? Please check from about 9:15 into the video. Are you completing these steps in an admin Anaconda PowerShell with the environment activated and from the correct folder? Let me know if you came right. Thanks.

  • @Blazerboyk9
    @Blazerboyk9 3 місяці тому

    Verified and ensured the path for poetry, but when running the Anaconda prompt I get 'poetry' is not recognized as an internal or external command,
    operable program or batch file.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Hi, just checking (6:26 in the video): can you confirm you installed poetry in an admin command prompt so it is available to the Windows OS? And that after that you carried on in the Anaconda prompt? Hope you are up and running by now. Thanks for reaching out.

    • @Blazerboyk9
      @Blazerboyk9 2 місяці тому

      @@stuffaboutstuff4045 still not running unfortunately, I verified poetry installed in CMD, then when attempting to use in anaconda I get nothing

  • @Isak_Isak
    @Isak_Isak 4 місяці тому

    Great tutorial, but I have a problem when I want to search or query the file I have imported. I have an error "Initial token count exceeds token limit". I already have increased the limit but nothing change, how can I solve the error?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Hi, Are you handing off to Open AI or Ollama? Have a look at the below link to check if same applies. Thanks for reaching out.
      github.com/zylon-ai/private-gpt/issues/1701

  • @vaibhavdivakar4653
    @vaibhavdivakar4653 7 місяців тому +1

    I followed the steps and for some reason when I do the make run command, it gives me "no module called uvicorn".
    I installed the module using the pip command and it still says the same error..
    :(

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому +2

      Hi, does it not launch at all and stop with this error? It seems it is the web server that needs to start. When you launch it, I know it can display a uvicorn.error message, but when you open the browser you will see the site up and everything works.
      If you get this: uvicorn.error - Uvicorn running on http://0.0.0.0:8001, then it works. But from the comment it sounds like you have the whole module missing. PrivateGPT is a complicated build, but the steps in the video are valid. I would suggest retracing the required SW and versions (like Python etc.) and the setup steps, just to make double sure no steps were missed. I also find more success running the terminals in admin mode to avoid issues. Let me know if you came right with this, and thanks for making contact.

    • @SiddharthShukla987
      @SiddharthShukla987 6 місяців тому +2

      I also faced the same issue because I forgot to start the env. Check yours too

  • @frankbradford2869
    @frankbradford2869 6 днів тому

    This was going good up to make run. The whole thing errors out. Any tips on how to fix this??

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 дні тому

      Hi, When you set the environment variables and execute make run, you must do that in an Anaconda PowerShell Prompt. Did you follow the way I do it from 9:15 in the video? Let me know if you come right with this..

  • @claudioalvesmoura
    @claudioalvesmoura 3 місяці тому

    First, I tried to install with Python 3.11.9, but it did not work. With 3.11.7, it went well...
    At minute 3:33 you check that Ollama is working, but my output is
    $ ollama serve
    Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
    Is that how it is supposed to be? Thanks in advance..

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 місяці тому

      Hi, just to confirm did you follow all the steps for Ollama using the video on the channel. Regarding the error either your Ollama web server is up and running already or something else is using port 11434. Did you open a browser and go to the address? Anything loading up on that port? Thanks for reaching out, let me know if you are up and running.
      ua-cam.com/video/rIw2WkPXFr4/v-deo.htmlsi=kDCCeTRbC2BoTfl-

  • @quandoomniflunkusmoritati9359
    @quandoomniflunkusmoritati9359 4 місяці тому

    Please address the issue of Hugging Face tokens and login for the install script. I have been all over the net and tried different solutions and script mods, including the Hugging Face CLI, but I have not been able to install a working copy yet (yes, I did accept access to the Mistral repo on Hugging Face too). The Python install script fails on Mistral and on the transformers and tokenizer. It shows a message about a gated repo, but I have authenticated on the CLI and tried passing the token in the scripts. Still failing.... HELP!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Hi, are you using Ollama as backend? If not follow the steps in the Ollama video on the channel and just hook up your install to that. Otherwise test this on a hosted LLM like OpenAI. You should not struggle if you follow the steps exactly in the 2 videos. Let me know if you are up and running.

  • @mohith-qm9vf
    @mohith-qm9vf 6 місяців тому

    Hi, will this installation work for ubuntu? if not what changes do I need to make??? thanks a lot

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Hi, just checking if you got this built on Ubuntu. If not you can follow the steps for Linux using the link below. Thanks for reaching out.
      docs.privategpt.dev/installation/getting-started/installation

    • @mohith-qm9vf
      @mohith-qm9vf 5 місяців тому +1

      @@stuffaboutstuff4045 thanks a lot!!

  • @lherediav
    @lherediav 7 місяців тому +2

    For some reason Anaconda doesn't recognize the CONDA command on my end and doesn't show (base) at the beginning of the Anaconda prompt. Any solutions? I am stuck at that 7:46 part.

    • @lherediav
      @lherediav 7 місяців тому

      When I open the Anaconda prompt it shows this: Failed to create temp directory "C:\Users\Neo Samurai\AppData\Local\Temp\conda-\"

    • @thehuskylovers1432
      @thehuskylovers1432 7 місяців тому

      Same issue here, I cannot get past this in either v2 or this version.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, just checking if you came right with this? When you open your Anaconda Prompt or Anaconda PowerShell prompt they must open and load and show (base). Is this not showing in both Anaconda Prompt or Anaconda PowerShell prompt? Did you try and open both in admin mode? It seems there is a problem with the anaconda install on the machine.

    • @lherediav
      @lherediav 2 місяці тому

      @@stuffaboutstuff4045 Found the solution: my username was Neo Samurai, with a space in the middle, and conda can't handle that. A new user without spaces worked just fine.

  • @YellowBrown-vm1sk
    @YellowBrown-vm1sk Місяць тому

    How can I use phi3.5 in private-gpt? The response is pretty slow for 4 GB of VRAM. I need help.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  Місяць тому

      Hi, you can change the model on Ollama and update the Private-GPT config file to use the model you want. Have a look at the below page and check the Using Ollama section. Let me know if you come right.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @Dresta-u6l
    @Dresta-u6l 6 місяців тому

    Will Python 3.12 do the job, or do I specifically need 3.11?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, You would need Python 3.11.xx. The code currently checks if the installed Python version is in that range. I got build errors with 3.12 installed in the environment. Let me know if you are up and running.

  • @EditorboyAmit-f4v
    @EditorboyAmit-f4v 6 місяців тому

    I have a question: how do I run it again if my system restarts? What steps or commands do I have to run again? Can we set it to autostart when my system starts?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Hi, you can just run Anaconda PowerShell prompt again, activate the environment you created. Make sure you are in the project folder. Set the env variable you want to use and execute make run. Check the steps performed in the Anaconda PowerShell from 9:24 in the video. Let me know if you are up and running. Thanks for reaching out.

  • @OscarPremium-ql5hh
    @OscarPremium-ql5hh 5 місяців тому

    How do I start it up again once I have finished all the steps in the video successfully? Just visit the browser domain again?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, you will have to activate your conda environment. Make sure you are in the project folder and launch Anaconda PowerShell again. Check the steps from 9:24 in the video. Let me know if you are up and running.

    • @OscarPremium-ql5hh
      @OscarPremium-ql5hh 5 місяців тому +1

      @@stuffaboutstuff4045 Wow, Thanks for your answer! Just amazing!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Pleasure, sorry missed this one. Thanks for the feedback.

  • @MeczupGezgin
    @MeczupGezgin 10 днів тому +1

    Very clear and straightforward; I don't know how to thank you. very very big thanks :)

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  8 днів тому

      Pleasure, Glad the video got you up and running. Thanks for the nice feedback. Much appreciated.

  • @ballmain5623
    @ballmain5623 4 місяці тому +1

    This is great. Can you please also do a mac version?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  4 місяці тому

      Hi, thanks will do. I am looking at making updated video on PrivateGPT soon. Will update when I can get it out.

  • @andybirder4970
    @andybirder4970 3 місяці тому

    Is this one working? I checked tons of privategpt video for my win 11, didn't work 😢

  • @vichondriasmaquilang4477
    @vichondriasmaquilang4477 5 місяців тому

    A bit confused: what is the purpose of installing MS Visual Studio? You didn't use it.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, VS Studio components are used in the background for compilation and build of the programs. Hope the video helped and your PrivateGPT is up and running.

  • @JanaFourie-cm5eh
    @JanaFourie-cm5eh 6 місяців тому

    Hi, when querying files only the sources appear after it stopped running (files ingestion seems to work fine). How can I fix this? Or is it still running but extremely slow...?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, did you come right with this. There are some good comments on this video on speeding up the install including working with large docs that slow down the system. Check the link below, maybe this can assist. Also check the terminal when this happens for any hints on what might be hanging up.
      docs.privategpt.dev/manual/document-management/ingestion#ingestion-speed

    • @JanaFourie-cm5eh
      @JanaFourie-cm5eh 5 місяців тому

      @@stuffaboutstuff4045 Thanks, how can I contact you? I noted you are South African through the accent!

  • @hasancifci1423
    @hasancifci1423 6 місяців тому +1

    Thanks! Do NOT start with the newest version of Python; it is not supported. If you did, uninstall it. If you have a problem with pipx install poetry, delete the pipx folder.

  • @Whoisthelearner
    @Whoisthelearner 6 місяців тому

    Great thanks for the awesome video. I wonder whether you would know of any similar setup for the new llama3 LLM? If yes, it would be great if you could make a new video about it!!!! Great thanks!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому +1

      Hi, sure you can. You can install llama3 on Ollama. You would need to change the config files. The link below should assist until I can update this video. Thanks for the feedback and the video idea.
      docs.privategpt.dev/manual/advanced-setup/llm-backends

    • @Whoisthelearner
      @Whoisthelearner 5 місяців тому

      @@stuffaboutstuff4045 Great thanks for the prompt reply and the link. Looking forward to your new video as well!! You make very easy for beginner like me! Really appreciate your work

    • @Whoisthelearner
      @Whoisthelearner 5 місяців тому

      @@stuffaboutstuff4045 If you don't mind, allow me to ask a question: I am planning to adopt the Ollama approach, but I don't know at what part of the video I should run the command PGPT_PROFILES=ollama make run. Great thanks!

  • @drmetroyt
    @drmetroyt 5 місяців тому

    Hoping this could be installed as a Docker container

  • @haseebmemon294
    @haseebmemon294 2 місяці тому

    How do I use ollama 70b? It keeps using 8b? I even deleted 8b and ran it but it keeps downloading and running 8b. I am struggling with documentation.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому +1

      Hi, you must update settings-ollama.yaml to the model you want to use. Just pull it first on your Ollama server. Have a look at the below link for how to update the file.
      docs.privategpt.dev/manual/advanced-setup/llm-backends
      Model names can be found here:
      ollama.com/library
      e.g. llama3:70b
      Let me know if you got it up and running.
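      Roughly, the change looks like this (llm_model is the field name used in the PrivateGPT docs above; double-check your own settings-ollama.yaml):
        ollama pull llama3:70b
      and then in settings-ollama.yaml:
        ollama:
          llm_model: llama3:70b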

    • @haseebmemon294
      @haseebmemon294 2 місяці тому

      @@stuffaboutstuff4045 It seems to be working, thanks a lot (I was using llama3.1:70b instead of llama3:70b). I am wondering if there is any way to change the default C drive download location to any other drive to run the model off of?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  24 дні тому

      Hi, I see there are threads on this but it seems to be a bit of a mission. Let me investigate and revert.

  • @patrickdarbeau1301
    @patrickdarbeau1301 7 місяців тому

    Hello, I got the following error message when running the command:
    " poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant" "
    " No module named 'build' "
    Can you help me ? Thanks

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, did you install all the required SW I install at the start.

    • @Matthew-Peterson
      @Matthew-Peterson 7 місяців тому

      Close both Anaconda Prompts and restart the process. Dont rebuild your project though. GPT4 says its a connection issue when creating and sometimes a computer restart sorts the issue. Worked for me.

    • @guille8237
      @guille8237 6 місяців тому

      Open your pyproject.toml file and update it with the correct build version, then update the lock file.

  • @Xoresproyect
    @Xoresproyect 3 місяці тому

    I have tried this tutorial three times now... I finally got to minute 9:50, to the command "make run", and I get the "The term 'make' is not recognized as the name of a cmdlet, function" error. I have all the variables added as far as I know... Any hints on how to solve this?

    • @Xoresproyect
      @Xoresproyect 3 місяці тому

      I finally got past this error by just skipping the "make run" command and going straight to "poetry run python -m private_gpt"; from there everything went fine!

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 місяці тому

      Hi, apologies for the late reply, I was AFK a bit. I am glad you got it up and running. My input on the original comment would have been: I take it you did follow the make portion from 5:28? Did you add make to your system variables? From 9:20, make sure to use an admin mode PowerShell to set the env and launch with make run. 😉
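      For anyone else hitting the missing 'make' error, a quick check plus the workaround from the comment above (run in an admin Anaconda PowerShell prompt, environment activated, inside the project folder):
        make --version                      # if this fails, make is not on your PATH
        $env:PGPT_PROFILES="ollama"
        poetry run python -m private_gpt    # starts the server without make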

  • @ΠαναγιώτηςΣκούρτης-λ6ρ

    I installed it and it works, but it is very, very slow to answer. Is it possible to speed it up?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, it is not the fastest with Ollama, the upside is its relatively easy to get working. Should confidentiality not be an issue using the Open AI profile will increase speed exponentially. You could also build this local if you have a proper GPU but expect a more complicated install to follow. Thanks for reaching out.

  • @The_Gamer_Boi_2000
    @The_Gamer_Boi_2000 6 місяців тому

    whenever i try to install poetry on pipx it gives me this error "returned non-zero exit status 1."

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, just checking in if you resolved this issue? Just want to confirm that you are following the steps I use in the video to install poetry, please check from 6:28 in the video. I use a command prompt in admin mode to complete all these steps. From 7:36 we back in Anaconda and Anaconda PowerShell prompts. Also confirm you are using Python 3.11.xx for the Anaconda environment otherwise you will get a bunch of build errors and failures. Let me know and thanks for reaching out.

    • @The_Gamer_Boi_2000
      @The_Gamer_Boi_2000 6 місяців тому

      @@stuffaboutstuff4045 im pretty sure i was doing those steps but im using webui now instead cuz its easier to do

  • @matteosalvatori2941
    @matteosalvatori2941 3 місяці тому

    Hi, I should say up front that I am not an expert. I followed all the steps and created my private page, but the box to type in does not appear. I tried to upload documents but it reports connection problems. I restarted the PC and now the page 127.0.0.1:8001 does not open at all; it gives me connection problems. Can you help me?

    • @matteosalvatori2941
      @matteosalvatori2941 3 місяці тому

      Hello, I then figured out how to get back into my personal page by opening Anaconda PowerShell and repeating the steps in the video. I am left with the problem that I cannot communicate with the AI even without uploading documents. I don't see the prompt box to type in. Help!

    • @matteosalvatori2941
      @matteosalvatori2941 3 місяці тому +1

      So, I fixed the problem of the dialog box. The problem is chrome, using Edge the dialog box appears. The problem remains that if I try to chat with AI the waiting time is very long and then I get a connection error.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 місяці тому

      Hi, glad to hear you got the system up and running. To offload to Ollama you need a reasonable GPU to handle the processing. If resources are an issue then maybe offload to OpenAI or Groq, but then money becomes a problem 😉. Glad to hear you can use the system. Thanks for reaching out.

  • @Stealthy_Sloth
    @Stealthy_Sloth 6 місяців тому +1

    Please do one for llama 3.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Thanks for the idea. If you want you can try and get it up and running. You can install the 8b model if you use Ollama. (ollama run llama3:8b) The link below has the example configs that would need to change. Thanks for reaching out and for the feedback, much appreciated. docs.privategpt.dev/manual/advanced-setup/llm-backends

  • @travisswiger9213
    @travisswiger9213 7 місяців тому

    How do I restart this? I've got it running a few times, but if I restart I have a hell of a time getting it working again. Can I make a bat file somehow?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому +1

      Hi, when you launch it in the Anaconda PowerShell Prompt, just go back to that terminal when done and press "Control + C". This will shut it down. You can save the starting profile as a PowerShell script and start it or as bat if you use cmd. Thanks for making contact, let me know if you came right with this.
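      As a sketch, a small start-privategpt.ps1 (hypothetical name) that you run from an Anaconda PowerShell prompt might look like this, assuming the folder and environment name from the video:
        conda activate privategpt
        Set-Location C:\pgpt\private-gpt
        $env:PGPT_PROFILES = "ollama"
        make run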

  • @BetterEveryDay947
    @BetterEveryDay947 7 місяців тому +1

    can you make a vs code version?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, thanks for the idea, they release new version so quickly I will check how I can incorporate in the next one. Thanks for reaching out.

  • @FunkyZangel
    @FunkyZangel 6 місяців тому

    Can I do this all completely offline? I have a computer that has no access to the internet. I want to see if i can download everything into a usb and then transfer it over to that computer. Can anyone help me please

    • @Whoisthelearner
      @Whoisthelearner 5 місяців тому

      I think you can once you have everything installed, at least that works for me

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, as noted below, that correct. Once installed you can disconnect the machine if you have the LLM local. Let me know if you come right.

    • @FunkyZangel
      @FunkyZangel 5 місяців тому

      @@stuffaboutstuff4045 Hi thanks for the reply. I am struggling a little understanding this. Do I have to download a portable version for everything or just a portable VSC? Meaning if I want the privategpt to work on another machine from the thumbdrive, do I just need to transfer the VSC files or must I transfer everything, such as git, anaconda, python etc?

  • @AstigsiPhilip
    @AstigsiPhilip 5 місяців тому

    Hi, can PrivateGPT handle 70,000 PDF files?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  5 місяців тому

      Hi, I personally have not worked with massive datasets. I know some in the comments have. You might want to check out the link for bulk and batch ingestion.
      docs.privategpt.dev/manual/document-management/ingestion#bulk-local-ingestion
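      If I remember the docs at that link correctly, bulk ingestion from a folder is driven by a make target along these lines (treat the exact command and flags as an assumption and confirm on the docs page):
        make ingest /path/to/your/pdfs -- --watch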

  • @mrxtreme005
    @mrxtreme005 7 місяців тому

    20gb space required?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, yes, if you load all the required SW. This ensures you don't get errors if you build the other non-Ollama options.

  • @SirajSherief
    @SirajSherief 6 місяців тому

    Can we do this for Ubuntu machine ?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Hi, yes you can, the packages and flow will be similar to the video but obviously following the Linux steps. You can check out what's involved in building on Linux by checking out the below link. Thanks for reaching out, let me know if you come right. 🔗docs.privategpt.dev/installation/getting-started/installation

    • @SirajSherief
      @SirajSherief 6 місяців тому

      Thanks for your kind response. But now I'm facing a new problem while trying to run the private_gpt module:
      "TypeError: BertModel.__init__() got an unexpected keyword argument 'safe_serialization'"
      Please tell me how to resolve this error.

  • @alicelik77
    @alicelik77 7 місяців тому

    At 9:21 you opened a new Anaconda PowerShell prompt. Why did you need a new PowerShell prompt when you were already working in a PowerShell prompt?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому +1

      Hi, look carefully, I am in a normal Anaconda prompt at that stage and the next commands need to go into Anaconda PowerShell. 👨‍💻 Thanks for reaching out, hope the video helped..

  • @firatguven6592
    @firatguven6592 6 місяців тому

    Thank you very much, it works, like your previous PrivateGPT 2.0 guide. But compared to 2.0 this one uploads files much more slowly, as if it wasn't slow enough already. With 2.0 my 32-thread CPU was working under 80% load during the upload process; you could see it was doing something important because of the load. But now the CPU load is only around 5%, which takes considerably more time, because I guess the parsing nodes are now generating the embeddings much more slowly. This is unfortunately a deal breaker for me, since I have lots of huge PDF files which need to be uploaded. I cannot wait a week or more just for the upload. In the end a 4.0 version should be an improvement, but I cannot see any improvements here. Can somebody list the real improvements please, other than Ollama, which for me is not a real improvement because version 2.0 also worked very well? I will switch back to 2.0 unless I can understand where the failure is.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Hi, thanks for reaching out. The new version allows you to use numerous LLM Backends. This video shows how to use Ollama just to make the install easier for most and its now the recommended option. The new version can still be built exactly like the previous, if you had better performance using local GPU and LlamaCPP you can still enable this as profile. If you really want high speed processing you can send it to Open AI or one of the Open AI like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right..
      docs.privategpt.dev/manual/advanced-setup/llm-backends

    • @firatguven6592
      @firatguven6592 6 місяців тому

      @@stuffaboutstuff4045 Thanks for the advice. If I change anything in the backend it errors out, despite following the official manual and your explanation. If I set up both for Ollama then it works, but as mentioned the file upload is extremely slow. Now I have found a solution by installing from scratch according to version 2.0 with LlamaCPP and Hugging Face embeddings, where I changed the ingest_mode from single to parallel; now it works much faster. There should be more options to increase the speed by increasing the batch size or worker counts. Since they did not work before, I will not change and corrupt the installation unless you can provide a manual on how to increase the embedding speed to the maximum, most probably with the help of the GPU like in chat. The GPU support in chat works well, but during embedding the GPU is not being used.

    • @firatguven6592
      @firatguven6592 6 місяців тому +1

      @@stuffaboutstuff4045 after changing to parallel the cpu utilization is at 100% and that explains the faster embedding. Since I have one of the fastest consumer cpus the result is now finally satisfying.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      @@firatguven6592 Awesome, glad you are running at acceptable speeds.

    • @firatguven6592
      @firatguven6592 6 місяців тому

      @stuffaboutstuff4045 In addition to that, I could change some parameters in settings.yaml with the help of an LLM. These are: batch size to 32 or 64, dimension from 384 to 512, device to cuda, and ingest_mode: parallel, which gave the most improvement. Now the embeddings are really fast. Thank you very much. I would also like to test the sagemaker mode at some point, since I could not get that mode working. I will try it again later.

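      A note for anyone tuning ingestion speed as described in the reply above: in recent PrivateGPT versions these knobs live under the embedding section of settings.yaml, but field names vary by release, so treat this as a sketch and check your own file:
        embedding:
          ingest_mode: parallel   # the change that gave the biggest speed-up here
          count_workers: 4        # raise towards your CPU core count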
  • @anishkushwaha9973
    @anishkushwaha9973 7 місяців тому

    Not working, it shows an error whatever prompt I'm giving.

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, what error do you get? Let me know and maybe I can help you out.. Thanks!

    • @anishkushwaha9973
      @anishkushwaha9973 7 місяців тому

      @@stuffaboutstuff4045 It's showing "Error Collection make_this_parameterizable_per_api_call not found"

  • @JeffreyMerilo
    @JeffreyMerilo 6 місяців тому +1

    Great video! Thank you so much! Got it to work with version 5. How can we increase the tokens? I get this error File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\chat_engine\context.py", line 204, in stream_chat
    all_messages = prefix_messages + self._memory.get(
    ^^^^^^^^^^^^^^^^^
    File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\memory\chat_memory_buffer.py", line 109, in get
    raise ValueError("Initial token count exceeds token limit")

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому

      Hi, can you have a look at this post and see if it helps. Let me know if you come right.
      github.com/zylon-ai/private-gpt/issues/1701

  • @Omnicypher001
    @Omnicypher001 7 місяців тому

    Using a Chrome browser to host a web app doesn't seem very private to me.

  • @RobertoDiaz-ry1pq
    @RobertoDiaz-ry1pq 4 місяці тому

    I am having so much trouble installing PrivateGPT. I've followed every step 10 times, gone back and started the video over and over and over; at this point I give up. I wish I was one of those people in the comments saying thanks. I mean, I'm still thankful that I even got as far as I did.
    Hopefully someone out there could lend a helping hand. I'm still new to this coding and AI world, but I'm so interested in this stuff.
    PS: anyone out there care to help? "comment"

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  3 місяці тому

      Hi, just checking in if you got PrivateGPT up and running. This is one of the more difficult builds and you need to follow the steps in the video carefully. Let me know if you came right with this. Thanks for reaching out.

  • @reaperking537
    @reaperking537 6 місяців тому

    private-gpt gives me blank answers. Any solution?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  6 місяців тому +1

      Hi, can you confirm what LLM you are sending it to? Local Ollama like in the video? Are you getting no responses both when you ingest docs and on the LLM Chat? Is anything happening in the terminal when it processes in the Web UI? Let me know and we can hopefully get you up and running.

    • @reaperking537
      @reaperking537 6 місяців тому

      @@stuffaboutstuff4045 I have difficulty with PROFILES="ollama" (LLM: ollama | Model: mistral). I followed the same steps indicated in the video. LLM Chat (no file context) doesn't work, it gives me blank responses; and Query files doesn't work either, it also gives me blank responses. The error I get in the terminal is the following: [WARNING ] llama_index.core.chat_engine.types - Encountered exception writing response to history: timed out

    • @reaperking537
      @reaperking537 6 місяців тому

      @@stuffaboutstuff4045 I have solved the problem by modifying the response time in the 'settings-ollama.yaml' file from 120s to 240s. Thanks for the well-explained tutorial, keep it up.
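      For reference, the timeout in question sits under the ollama section of settings-ollama.yaml (field name as in current PrivateGPT versions; confirm in your own file):
        ollama:
          request_timeout: 240.0   # default is 120.0; raise it if responses time out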

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  2 місяці тому

      Pleasure! Glad you are sorted. Thanks for the feedback. Fix noted.

  • @VaporFever
    @VaporFever 7 місяців тому

    How can I add llama3?

    • @stuffaboutstuff4045
      @stuffaboutstuff4045  7 місяців тому

      Hi, if you are using Ollama you can install it and test it out; I am currently downloading it to test.
      The 8B model can be installed on Ollama using ollama run llama3:8b, or you can install the 70B model with ollama run llama3:70b. Let me know if you get it working.