Just want to say that you're probably one of the easiest-to-follow and most intuitive people I've seen on YouTube doing guides for LLMs!
Thanks!
Thanks mate :) Appreciate it!
Really appreciate all your content and how much energy you put into learning all this and sharing it. Thanks buddy
So far, some of my key findings regarding different Ollama-supported models include:
1/5. Use OpenAI's "text-embedding-3-large" for high-quality embeddings, but it is somewhat expensive.
2/5. Use "text-embedding-3-small" for a balance between performance and cost.
3/5. In addition to "llama3.1:8b", the "mistral:latest" model performs well across various tasks.
4/5. For PDFs, use text extraction tools like PyPDF2 or pdfminer, but we must remove or skip encoding errors. Finding the ideal chunk size and overlap is difficult! (See the sketch after this list.)
5/5. We must set up good benchmark datasets of relevant PDFs to compare results.
ALSO: Unfortunately, "faiss-gpu" appears to be deprecated. An older conda build can still be run under MS Windows 10/11, but the latest version appears to run only under Linux and, perhaps, macOS.
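To make point 4/5 concrete, here is a minimal sketch of extracting PDF text with PyPDF2 while skipping encoding errors, then chunking with overlap. This is my own illustration (assuming a recent PyPDF2 with PdfReader), not code from the video, and the chunk_size/overlap defaults are placeholder guesses to tune against your own corpus.

# Sketch: PDF extraction with encoding-error skipping, plus overlapping chunks.
# chunk_size/overlap are illustrative defaults, not tuned values.
import PyPDF2

def extract_text(pdf_path):
    reader = PyPDF2.PdfReader(pdf_path)
    pages = []
    for page in reader.pages:
        text = page.extract_text() or ""
        # Drop characters that will not round-trip through UTF-8 cleanly.
        pages.append(text.encode("utf-8", errors="ignore").decode("utf-8"))
    return "\n".join(pages)

def chunk_text(text, chunk_size=1000, overlap=200):
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # each chunk re-reads `overlap` chars
    return chunks

chunks = chunk_text(extract_text("example.pdf"))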
The Discord link is still not working for me. It says it's expired. Did you update the link, or is this something on my end?
same
Again, thank you! As you suggest, this local RAG program works fairly well and is certainly 'good enough' for my personal use cases.
Can I add a folder containing multiple PDFs and TXT files?
Thank you so much for this tutorial! Could you make a video with some use cases for RAG in your daily life?
WTF? YOU PUT INTO THE AI SYSTEM DOCUMENTS THAT YOU WANT TO SEARCH ON THEN YOU ASK AI ABOUT YOUR DOCUMENTS? pretty simple use case.
Very interesting. Issues: 1. It uses tkinter! 2. It strips the text from the PDF without preserving page numbers, so you can't ask questions about where the text was found.
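One way to address issue 2, as a hedged sketch (assuming PyPDF2; the repo may do extraction differently): keep (page_number, text) pairs instead of one flat string, so each retrieved chunk can report where it came from.

# Sketch: preserve page numbers during extraction so answers can cite pages.
import PyPDF2

def extract_pages(pdf_path):
    reader = PyPDF2.PdfReader(pdf_path)
    pages = []
    for number, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        if text.strip():
            pages.append((number, text))
    return pages

# Chunks built from these pairs can carry {"page": n, "text": "..."} metadata,
# letting the prompt cite the page alongside each retrieved passage.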
It works after running 'ollama run mistral'. Thank you!
I am experimenting with different Ollama-supported models and embedding models to see what currently works 'best' for PDFs. Any recommendations? Thank you.
Any findings?
@@cjjb Please see the "key findings" that I posted for everyone's review.
I cannot find pdf.py in your Git repo. What does the PDF script do?
Thank you mate, now my friends think I'm an AI engineer 😎😎
You're amazing!
Thank you very much for your content. The file sizes I will be using are large and it runs quite slowly on my local machine. What can I do to speed it up?
Does anyone else's script get stuck on generating the embeddings when you run it?
Same here... anyone have a solution to this?
@@AbhishekKumar-vt1iu It is due to the limited computing resources of a local machine.
Cool beans! Thanks so very much, you have helped me enormously.
Thanks again
The phrase, "cool beans" gives me so much nostalgia.
So if I want to teach my AI my uni lessons, should I merge all my PPTX, PDF, and DOCX files into a single PDF? I am new to AI training and I'm struggling a bit.
Thanks for this video!
Hello. I must install Python first, OK? Do you have a full tutorial for this?
Please, how would you advise we run this on JSON files, not PDFs?
I have several Q&A pairs in .JSON format, not PDF.
I made a pull request to the repo and he accepted it. If you re-pull the repo, you should now have access to uploading PDF, TXT, and JSON files into the context.
@@ClipsofCoolStuff Markdown? 😅
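For the JSON question above, a minimal sketch of flattening Q&A records into embeddable text chunks. The "question"/"answer" field names are assumptions about the file's schema, not something from the repo, so adjust them to match your data.

# Sketch: flatten a JSON list of Q&A objects into plain-text chunks.
import json

def json_qa_to_chunks(path):
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)  # expects [{"question": ..., "answer": ...}, ...]
    return ["Q: {}\nA: {}".format(r["question"], r["answer"]) for r in records]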
Hey! Is there any way to make a good RAG like this using Ollama on Open WebUI?
This is cool! How can you run this on a web interface? Just curious.
Great! Thank you! I'd like to have the same for local code (Django). One-liners won't work, so how would I do this?
Thanks for sharing your knowledge with us! I have some questions, because I have similar code but it doesn't work as well when the context is very long. What can I do? Also, the chatbot gets lost when the chat history is large, and I don't know how to fix that. Do you have any ideas? Thanks.
Great! How do I remove a PDF?
Didn't work for me. Despite what the video said, the code is using 'from openai import OpenAI'.
Yeah, it also doesn't work for me. I have no OpenAI API key and am not getting one. FOSS or die.
Oh Man that worksssssssssss!
I get the following error:

python localrag.py
Traceback (most recent call last):
  File "/Users/eil-its/Documents/experiments/workspace-python/llama3rag/localrag.py", line 130, in <module>
    response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
  File "/Users/eil-its/Documents/experiments/workspace-python/llama3rag/llama/lib/python3.9/site-packages/ollama/_client.py", line 198, in embeddings
    return self._request(
  File "/Users/eil-its/Documents/experiments/workspace-python/llama3rag/llama/lib/python3.9/site-packages/ollama/_client.py", line 73, in _request
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: failed to generate embedding
Me too. Did you find any solution?
Same, I'm facing this error.
same error
Running 'ollama pull mxbai-embed-large' works for me.
Try another model. Delete the model he tells you to use in the video ('mistral') by running the command 'ollama rm mistral', and then install a model that is working for me and that he has listed in his GitHub, llama3: 'ollama pull llama3'. And do not forget 'ollama pull mxbai-embed-large'.
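For anyone still hitting "failed to generate embedding", here is a small hedged sketch (my own addition, not the repo's code) that wraps the embedding call and points at the usual cause, a model that was never pulled:

# Sketch: surface a readable hint when the embedding model is missing.
import ollama

def embed(content, model="mxbai-embed-large"):
    try:
        # Same call the repo makes; returns a dict with an "embedding" list.
        return ollama.embeddings(model=model, prompt=content)["embedding"]
    except ollama.ResponseError as e:
        raise SystemExit(
            "Embedding failed: {}. Did you run 'ollama pull {}'?".format(e, model)
        )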
How would you deploy this in AWS for production? Would you use Ollama and download the entire LLM onto an EC2 instance?
Use an inference endpoint for the LLM and a CPU-based EC2 instance. It may be cheaper.
Very good stuff.
Hi, thank you for the brilliant video and tutorial! I was wondering if you had any experience implementing this process with Llama 3 rather than Mistral, or if there is any difference in the implementation? Thank you :)
Will you be doing any more videos on the ChatGPT app version?
Hello sir, let's get this on!!!
What if I have plain text, 40 KB in size (not PDF)?
Hi, can I build all the same using Docker?
Getting this error:

import PyPDF2
  File "C:\Anaconda\lib\site-packages\PyPDF2\__init__.py", line 12, in <module>
    from ._encryption import PasswordType
  File "C:\Anaconda\lib\site-packages\PyPDF2\_encryption.py", line 34, in <module>
    from ._utils import logger_warning
  File "C:\Anaconda\lib\site-packages\PyPDF2\_utils.py", line 55, in <module>
    from typing_extensions import TypeAlias
ModuleNotFoundError: No module named 'typing_extensions'
I followed all the steps just like you, but when I try to upload the PDF I encounter the following issue:
pdf.py': [Errno 2] No such file or directory
Do you know what is going wrong?
What if my PDF has, say, a million words/tokens? Would this still work?
The script gets stuck on generating the embeddings!
Thank you very much, very useful; the code worked.
Really nice work! But is this really fully local? ... OpenAI?
... Huh?
Not working on Mac.
Same on Linux. Apparently it's calling OpenAI?
Which video card and how much memory?
This video is a misleading tutorial that oversimplifies the process of creating a local RAG (Retrieval Augmented Generation) system. While the presenter claims to create a "100% local RAG in around 70 lines of code," they fail to address the complexities and limitations of such a system. The tutorial relies heavily on pre-built libraries and models, such as Ollama and sentence-transformers, without providing a deep understanding of how these components work together. Moreover, the presenter does not discuss the potential drawbacks of using a local RAG system, such as the limited amount of data it can handle and the lack of real-time updates. The video may give viewers a false sense of ease in implementing a RAG system, without considering the necessary expertise and resources required for a robust and reliable solution.
Thanks, GPT-4.
FYI GPTZero states this comment is 100% AI generated.
Jeez, it's almost academic, if it weren't reeking of AI.
Hahaha 😂 @@userou-ig1ze
Well no shit, no one's gonna write a whole semantic search on parsed documents. That's like saying "I'm gonna program my own garbage collector in Java".
Awesome... I don't get it, though. Anything easier that does not require any code? Thank you :)
LOL
You should add at the very beginning that this does not expose any web UI! Not useful for the majority of people.
Free Palestine.
I found an error when I run localrag.py:

  File "D:\easy-local-rag-main\localrag.py", line 134, in <module>
    response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\HP\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama\_client.py", line 198, in embeddings
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\HP\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama\_client.py", line 73, in _request
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: failed to generate embedding
Please guide me.
@@kumarrohit557 Have you tried running 'ollama pull mxbai-embed-large' before doing that?
ollama pull mxbai-embed-large
Can't make Torch use the GPU. It always uses the CPU. The Torch settings seem hardcoded?
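If the device really is hardcoded, the usual remedy is to select it at runtime; here is a generic PyTorch sketch, not tied to this repo's actual variable names:

# Sketch: use the GPU when available instead of hardcoding a device.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# model = model.to(device)            # move the model...
# embeddings = embeddings.to(device)  # ...and tensors onto it
print("Using device:", device)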
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': "model 'mistral' not found, try pulling it first", 'type': 'api_error', 'param': None, 'code': None}}
You have to download the model first: ollama pull mistral