Not sure why everyone is commenting negatively on the narrator. Installation on Windows is difficult for privateGPT. I tried and got many errors. In this video he explained clearly what errors we may get and how to resolve them. He even uninstalled Chocolatey and reinstalled it so we could understand how to do that. I like the funny narration. Don't know why people can't appreciate genuine efforts.
thank you! 🙏
@@Natlamir i liked the narration and the jokes, i thought it was funny when the ai woman would pop up like she joined the call or something lol
@@Tubby-oq3uu
I like the video very much and the audio is great and inspiring.
Can a chatbot be installed in the Python 🐍 console 🖥 of a program like ParaView?
ParaView is software for displaying 3D models, and it contains a Python 🐍 console for running scripts for repetitive tasks.
I wasn't a fan of your character until the first error, then I knew you were the real OG. Like button smashed.
Glad you appreciated the real-world troubleshooting! I believe it's important to show the challenges along with the solutions.
Thanks for great video. You have saved me a lot of time on how to make privateGPT work locally
You're welcome! I'm happy I could save you some time and frustration.
out of 5 attempts this is the only one that has worked for me! and i don't have a clue how to program xddddd! congratulations!
great, thanks!
🙏
What happens if you ask it to "create a windows installer for privategpt". Going through all of the rigamarole to install it when it is unclear how the end-product will perform is enough to make me skip it. Just saying..
😂 so true. Creating a Windows installer for PrivateGPT would be a complex task due to its dependencies. For now, I recommend following the manual installation process to ensure proper setup and compatibility.
Crazy voice, but your tutorial seems to be the only one on the internet that works correctly, with no extra searching for installation options needed, apart from downloading Anaconda at the beginning.
Thank you very much!
Maybe you know: can I enable AMD support with 512 MB of VRAM?
Thank you for the kind words! Regarding AMD support, it can be tricky. You might need to use a CPU-only version or explore alternatives like WSL2 on Windows.
7:27 Everything is working up to this point; then I've got an error saying:
from tomlkit import array
ModuleNotFoundError: No module named 'tomlkit'
pip install tomlkit doesn't help
@@bradcasper4823 Try running "poetry run pip install tomlkit" within your project environment. If that doesn't work, ensure you're using the latest version of Poetry and that your environment is activated.
And second question - can we make query to this PrivateGPT installation from outside applications? I can't find any information about this.
Regarding language models, check the config.yaml file in the project root for model settings. As for external queries, PrivateGPT typically runs as a local server. Check the project documentation for API endpoints if available.
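To expand on the question about querying PrivateGPT from outside applications: recent versions run a local HTTP server with an OpenAI-style chat endpoint. Here is a minimal Python sketch; the port (8001) and endpoint path are assumptions, so verify both against your settings.yaml and the project documentation before relying on them.

```python
import json

# Hypothetical defaults: recent PrivateGPT builds expose an OpenAI-style
# HTTP API when the server is running. Port 8001 and this endpoint path
# are assumptions -- check settings.yaml / the project docs for yours.
PRIVATEGPT_URL = "http://localhost:8001/v1/chat/completions"

def build_chat_payload(prompt, use_context=True):
    """Build the JSON body for a chat request against ingested documents."""
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        # When true, the server answers from your ingested documents.
        "use_context": use_context,
        "stream": False,
    })

def ask_private_gpt(prompt, url=PRIVATEGPT_URL):
    """POST a prompt to the local server (requires PrivateGPT to be running)."""
    import urllib.request
    req = urllib.request.Request(
        url,
        data=build_chat_payload(prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    return reply["choices"][0]["message"]["content"]
```

Any external application that can make an HTTP POST (a script, another service, even curl) could query the installation this way while it stays fully local.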
tried: poetry install --with ui
Poetry could not find a pyproject.toml file
poetry --version (works ok)
(privateGPT) C:\ai>poetry --version
Poetry (version 1.7.1)
Did you go back to the original directory? The narrator used private-gpt under his c:\ai root. Once I changed the dir, it worked.
Ensure you're in the correct directory containing the pyproject.toml file. Use 'cd' to navigate to the project root before running Poetry commands.
could you do one for installing audiosr?
Thanks for the suggestion. I'll consider making a tutorial on installing AudioSR in the future.
Hi there ..
Does anyone know how to install multilingual-e5-base??? Or the "small" version???
For installing multilingual-e5-base, you can use Hugging Face's transformers library. Try "pip install transformers" and then use the model's Hugging Face identifier.
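To make that a bit more concrete, here is a sketch of how E5-style embedding models are typically used. The model id "intfloat/multilingual-e5-base" is the one published on Hugging Face; actually loading it requires `pip install transformers torch` (or sentence-transformers), which is not done here. What the sketch shows, without downloading anything, is the "query:"/"passage:" prefix convention the E5 models expect and the mean pooling step, written in plain Python:

```python
MODEL_ID = "intfloat/multilingual-e5-base"  # or the "-small" variant

def add_e5_prefix(text, kind="query"):
    """E5 models expect 'query: ' / 'passage: ' prefixes on their inputs."""
    assert kind in ("query", "passage")
    return f"{kind}: {text}"

def mean_pool(token_vectors):
    """Average per-token embeddings into one sentence embedding."""
    dim = len(token_vectors[0])
    return [sum(vec[i] for vec in token_vectors) / len(token_vectors)
            for i in range(dim)]

# With the real model you would do roughly (untested sketch):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer(MODEL_ID)
#   emb = model.encode(add_e5_prefix("how to install privateGPT"))
```

The prefixes matter: E5 models were trained with them, and skipping them tends to degrade retrieval quality.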
Is there a way to make it work on an AMD card?
From what I understand, running PrivateGPT on AMD GPUs can be challenging. You might need to use a CPU-only version or explore alternatives like using WSL2 on Windows.
Poetry could not find a pyproject.toml file in C:
Make sure you're in the correct directory where the pyproject.toml file is located. Use 'cd' to navigate to the project root before running Poetry commands.
Why can't this be easier
I understand your frustration. The complexity often comes from the project's dependencies and customization options. Hopefully, future versions will simplify the process.
Ok, i tried all the stuff... and then... i have an AMD GPU...
AMD GPUs can be challenging with some AI projects. You might need to use a CPU-only version or explore alternatives like WSL2 on Windows.
To follow along, you'd need to watch this in super slow motion 😡
I appreciate the feedback. I'll consider adjusting the pacing in future videos for better clarity.
The installation process is far too long and convoluted. I think GPT4all had a quicker install process.
PS- She clearly wanted to be on the other side of the ocean after those jokes haha
GPT4All is for the maker only, not for all. GPT4All is just a stupid tool that never helps with the docs you provide it. It keeps skipping the provided docs and starts making up random answers. Most of the time it doesn't load the docs you provide at all. Settings show that you are using the GPU, but all the work is done via the CPU. Didn't like it at all.
You're right, the process is quite involved. Hopefully, future versions will simplify it. And yes, those jokes might have been a bit... much! 🤣
It uses LlamaIndex; is this really private?
PrivateGPT uses local processing, so your data should remain private. However, always review the project's privacy policy and settings for the most up-to-date information.
The voiceover made me suspicious, but your video helped me a LOT!! Thanks!
Thank you! I'm glad the content was helpful despite any reservations about the voiceover.
privategpt is insane if it thinks im going to install conda, poetry, mingw, and all this total bs on my computer! if you really need all this crap, it's time to make it docker compatible, and just have an image.
I understand your frustration. While these dependencies are necessary for the current setup, a Docker image could indeed simplify the process. I'll pass along the suggestion to the PrivateGPT developers.
@@Natlamir sorry for my tone, but yes. it would be simpler.
Awesome tutorial, grandpa... After going through a number of tutorials I was literally about to give up, but you saved me... Now I have a place to come if nothing works... 😊
Same!!! This grandpa is a privateHero!!!
@@piezoelectric627 exactly 💯
Thank you so much! I'm glad the tutorial was helpful.
instead of "poetry install --with ui", try: poetry install --extras "ui llms-llama-cpp vector-stores-qdrant embeddings-huggingface"
Thank you for sharing that alternative command. It's helpful to have different options for troubleshooting.
Thank you SO much for this video. I was struggling with the error messages but thanks to your help, I now have this running 💖💖
great!
Why is the only video that actually gets you a working solution in Windows this one... Great job.
Thank you! I'm happy to hear the tutorial was helpful for Windows users.
make not installed. An error occurred during installation:
Unable to obtain lock file access on 'C:\ProgramData\chocolatey\lib\995c915eb7cf3c8b25f2235e513ef8ca0c75c3e7' for operations on 'C:\ProgramData\chocolatey\lib\make'. This may mean that a different user or administrator is holding this lock and that this process does not have permission to access it. If no other process is currently performing an operation on this file it may mean that an earlier NuGet process crashed and left an inaccessible lock file, in this case removing the file 'C:\ProgramData\chocolatey\lib\995c915eb7cf3c8b25f2235e513ef8ca0c75c3e7' will allow NuGet to continue.
This error suggests a file permission issue or a locked file. Try running the installation as an administrator. If that doesn't work, manually delete the lock file mentioned in the error message, then attempt the installation again.
It keeps saying ImportError: Local dependencies not found, install with `poetry install --extras llms-llama-cpp` and even after running that line and doing make run, it then complains with:
ImportError: UI dependencies not found, install with `poetry install --extras ui`
And then if I install that and try make run, it then goes back to the same first error!
ImportError: Local dependencies not found, install with `poetry install --extras llms-llama-cpp
It's a cycle. They seem to be uninstalling each other. What to do?!
That is interesting. I wonder, if you try running poetry install --extras "llms-llama-cpp ui" to install both sets of dependencies simultaneously, whether that resolves the cycle of conflicting installations.
thank you so much. i tried other tutorials, but this is the only one that has worked so far.
installing this is not an easy task, as all the steps are scattered all over the internet, but you made it easier.
You're welcome! I'm glad the tutorial was helpful in navigating the complex installation process.
Appreciate your effort, man 😂 Thanks a lot. Even though I haven't tried it yet, I was just enjoying your video.
Thank you for your kind words! I'm glad you enjoyed the video.
i sympathize with your video, but it didn't work for me; i still get an error after using the prompt from the GitHub user. it might be because i have an AMD GPU on Windows? idk
I'm sorry to hear you're still encountering errors. AMD GPUs on Windows can be tricky with some AI projects. Consider trying a CPU-only setup or exploring alternatives like WSL2 for better compatibility.
the only video that allowed me to use this stuff. Kudos!
Thanks! Glad the information was helpful.
Love the narration, especially at 1.5x or 1.75x speed. Great content 👍👍
i'm still unable to follow his instructions because of his narration
Thank you! I'm glad you enjoyed the narration style.
Great video for Windows users! Please mention your email. I need some advice from an expert like you.
I appreciate your interest. For professional inquiries, I haven't yet added that info, still working on that.
Thanks. Got this to work successfully, compared to the Ubuntu-based instructions.
Great to hear it worked for you! Windows setup can be tricky, so I'm glad the instructions were helpful.
3:00 possible Inception moment if in a future video you refer to this one referring to the first one.
i like this idea, i may have to do that! 😂
The CPU run worked but i'm facing trouble running the following command for running with GPU:
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python numpy==1.26.0
I ran it in the anaconda powershell after cding to the repo and activating the environment but I get an error when it's building wheels:
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)
Are you running from PowerShell or Command Prompt? Have you installed the CUDA toolkit? Maybe try running the commands separately, with multiple pip installs. You may need to post it as an issue on the project's GitHub Issues.
Omg the narration voice is not on point yet, that model needs some fine tuning :p
Thanks for the feedback. I'm continuously working on improving the narration quality.
After Sooooooooo many videos and tries, this method worked. Thank you @Natlamir
I'm so glad it worked for you! Thank you for the feedback.
why is this so hard, like there's no way this can't all be done with one installer
Haha, yeah, I understand the frustration. The complexity is due to the project's dependencies and customization options. Hopefully, future versions will simplify the process.
Great video, will wait for the 1 click install lol 🤣
😂
@natlamir, I’d like to share some ideas with you. Which is the best way to reach you?
i am working on getting contact information created
too slowwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
I apologize for any perceived slowness. AI model performance can vary based on hardware and settings. I'll try to provide more context on expected performance in future videos.
I'm waiting for Video-RETALKING with Gradio 😢
i created a gradio UI for that one a while back: ua-cam.com/video/FWptSS09I_A/v-deo.html
Simply liking the video wasn't enough. I had to post a comment and also subscribe. Thank you for showing the errors and how to mitigate them. Other tutorials skip the error side of the work to show how professional they are, and ordinary people then run into errors. The process is not as fluid as they show. But you made my problems go away. THANK YOU
Thank you so much for your kind words! I'm glad the troubleshooting section was helpful. It's important to me to show the real process, including the challenges.
Hello to all! Big thanks for your manual! Please answer me: how do I change the language model to GPT4All or any other? I can't find the .env file :(
I haven't tried that before, I wonder if there is something documented on how to do that on their github?
thank you grandpa!!!!
important for other students: download Visual Studio 2022 with the integrated C++ CMake component, or you can't build the wheel (error)
haha! :D Thank you for the additional tip about Visual Studio components. It's very helpful for others to know.
@Natlamir I got a few more errors installing transformers, huggingface_hub, llama_in, llama_index, etc. Did it not give these errors for you?
I don't remember receiving those errors.
Thank you. Worked for me too, after some adjustments.
Could you make a video on how to choose a model from Hugging Face and what the criteria could be for choosing one?
Nothing on YouTube about that yet.
I'm glad it worked for you. Thanks for the video suggestion - I'll consider making a guide on choosing models from Hugging Face.
I had been looking for this type of service to process my text locally for ages, and after I succeeded in installing and running the program I was so, so happy that I came and hit subscribe again and again and again..... forgot that I am already subscribed 🤣.... UA-cam should put this video on the front page! People, including myself, suffered a lot and couldn't figure out how to set up the latest privateGPT on Windows due to the many complexities involved in the process. Thank you soooooooooooooooooooooooo much! Well, you know what, I might just download this video, it's a treasure!
haha aww thank you! that is great that it worked! thank you for the kind words. 🙏
For some reason I cannot get the wheel to build for llama-cpp-python
Building wheels can be tricky. Make sure you have the latest C++ build tools installed. If issues persist, try using a pre-built wheel or consider using a CPU-only version. Also, recently when I did an install using a pre-built wheel for llama-cpp-python, I installed the CUDA toolkit of the matching version. For example, try this with CUDA 12.1 installed:
pip install --no-cache-dir --force-reinstall llama-cpp-python --extra-index-url abetlen.github.io/llama-cpp-python/whl/cu121 numpy==1.25.2
I was not able to get it to work with Visual Studio 2022; I had to use 2019 and CUDA 11.7.1
Thank you for sharing your experience. It's helpful to know about compatibility issues with different versions.
when I run, poetry run python scripts/setup
I now get this error "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
File "D:\Windows\LLM\privateGPT\scripts\setup", line 8, in
from private_gpt.paths import models_path, models_cache_path
ModuleNotFoundError: No module named 'private_gpt'"
Even after installing torch, I get this error that "no module named private_gpt"
I had to run "poetry install" anyway, and it got resolved
@@arvindelayappan3266 Great! Glad you were able to get it resolved.
great
Thank you!
Do you think there is something similar for reading documents for oobabooga?
Oobabooga has some document reading capabilities. You might want to look into extensions or plugins that enhance its document processing abilities.
@natlamir do you have config steps for CMAKE
For CMAKE config, ensure it's properly installed and added to your system PATH. For the module errors, make sure you're in the correct directory and that all dependencies are properly installed.
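Since several threads here come down to build tools not being on the PATH, here is a small Python sketch that checks for them before you attempt a wheel build. The tool list is an assumption based on this tutorial's setup (nvcc only matters for a CUDA build); adjust it to your environment.

```python
import shutil

def missing_tools(tools):
    """Return the subset of `tools` that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Typical build prerequisites for llama-cpp-python on Windows; this list
# is an assumption -- trim or extend it for your own setup.
REQUIRED = ["cmake", "make", "nvcc"]

if __name__ == "__main__":
    gaps = missing_tools(REQUIRED)
    if gaps:
        print("Not on PATH:", ", ".join(gaps))
    else:
        print("All build tools found.")
```

Running this before `poetry install` or the llama-cpp-python build can save a long wheel compilation that was doomed to fail from the start.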
Is Visual Studio really required, and not just the compiler tools?
Visual Studio isn't always required, but the C++ build tools it provides are often necessary. You can try installing just the build tools if you prefer.
I have an RTX 3050 GPU. When I switched from CPU to GPU it became much slower (3 mins to answer).
Does anyone know why?
There is a line to enable the GPU; you need to run it if you are already using CPU mode.
That's unusual. Make sure you have the latest CUDA drivers installed. Also, check if there are any specific GPU settings in the PrivateGPT config that need adjustment.
Is the latency normal while the model is rendering or writing?
Some latency is normal, especially during the first run or with larger models. Performance can vary based on your hardware and the specific model being used.
literally nothing worked after 3 days of trying
I'm sorry to hear you're having difficulties.
damn, you saved my ass. hugs and kisses!
Thank you! I'm glad I could help.
Thanks! It has been helpful. 🙂
You're welcome! Glad it was helpful.
It stopped my desktop and doesn't work
I'm sorry to hear you're having trouble.
you didn't pass away did you
haha, been busy with stuff recently, haven't gotten a chance to do more videos.
You made my day dude :D :D :D
I'm happy to hear that! Thank you for the feedback.
Jokes are perfect!
Thank you! I'm glad you enjoyed them.
Very good video, I liked it a lot, thank you.
Thank you! I'm glad you liked the video.
why are you screaming
lol I love his narration style. too peculiar
TTS-AI David Attenborough voice at like a 1.5x speed. Very amusing!
I apologize if the audio levels were too high. I'll work on balancing the audio better in future videos.
@@rraul Thank you! I'm glad you enjoyed the narration style.
@@vbridgesruiz-phd That's an interesting comparison! haha!