Excellent job. This is exactly what I am looking for. Will try it out.
This video is one of the most beautiful ones I've ever seen.❤
Thank you ❣ This is motivation to make more videos!
Exceptional tutorial, thank you very much, it works very well. The installation took me several attempts, but it worked in the end. Thanks a lot, Ashpreet!
Awesome. Thanks for a great video and for building such a great tool. Please keep it up and build a larger community by integrating with popular frameworks like LlamaIndex or DSPy.
Thanks San! I'm on it :)
@@phidata Yes, I have been searching for how to use DSPy to produce datasets from PDFs. Using Ollama (a local LLM) to facilitate this process would be fantastic.
Fantastic stuff and very practical. Thank you!
Hi @phidata, thanks for sharing such a nice project. I followed your instructions and everything went as in the video. However, when testing it, I found that some URLs load without problems whereas others do not; I just get a "Could not read website" message in the UI and nothing on the console. Are there any restrictions or configurations we should be aware of when reading URLs?
Another thing: how can I read TXT files as well, not only PDFs? Something like adding support for other file types to be ingested... where in the "phidata" repository is this done?
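On the TXT question, a hedged pointer: phidata appears to ship a knowledge-base class per format next to the PDF one, so ingesting text files may be as simple as the sketch below. The import paths follow the phidata docs, but the data path, collection name, and connection string are placeholder assumptions, not the repo's exact wiring.

```python
# Sketch: ingesting .txt files instead of PDFs with phidata.
# Assumes phi.knowledge.text.TextKnowledgeBase exists as documented;
# "data/txt" and "txt_documents" are hypothetical placeholders.
from phi.knowledge.text import TextKnowledgeBase
from phi.vectordb.pgvector import PgVector2

knowledge_base = TextKnowledgeBase(
    path="data/txt",  # directory containing the .txt files to ingest
    vector_db=PgVector2(
        collection="txt_documents",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",
    ),
)
knowledge_base.load(recreate=False)  # chunk, embed, and store the files
```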
Excellent work, this tool is fantastic. Do you know if there is a limit on the number of PDFs that I can use to feed the model? Do I have to upload all the PDFs every time I restart the tool, or are they stored persistently in the knowledge base, even if I restart Docker? Again, this tool is amazing, thank you.
Excellent video. Could you consider supporting other formats, such as DOC, Markdown, or CSV, in the future? Thanks a lot.
absolutely :)
This is great. Thanks.
🧡
Can we build this without using Ollama? If yes, can you show us the way?
What do you use to record your screen and zoom in while keeping such good quality?
thanks!
I use Screen Studio, it's very, very good :)
Very nice. Since I'm a French speaker, can I use it with a French version of Llama 3?
Yes you can, sir. Give it a try and let us know how it goes.
Thanks for the video. Have you tried AirLLM? It allows you to run Llama 3 70B uncompressed on a 4 GB GPU. Can you do a tutorial on that? Also, have you tested any of the quantized versions of Llama 3 70B?
Finally, a good channel.
How is it going? It's too quiet on this channel.
I've installed Docker Desktop, but the commands pasted into the terminal do not work. I don't see any instructions on how to link Docker to the app. Please help!
I want to use MySQL for both the knowledge base and storage, but I don't see an appropriate tool and don't want to hack together my own without understanding your underlying structure
... pretty please? :D
Which theme are you using, sir?
Hi there, awesome video, thanks for it. Question: what should I do if I want to deploy this to a hosting service such as PythonAnywhere, Azure, etc.? Any thoughts are appreciated. 👍
Is the response speed faster than the Groq local RAG from your previous video? Thanks for the video. :)
Groq is definitely faster, but here we're also running the 8B model, whereas on Groq we're running the 70B model. Locally it can also depend on how overloaded my system is: with 3-4 PyCharm windows and a couple of Chrome instances open, the models get slower.
Have you tested it locally vs Groq?
@@phidata Not successfully yet. I got the error "UnicodeEncodeError: 'ascii' codec can't encode character '\xa0' in position 7: ordinal not in range(128)" while using nomic-embed-text for the embeddings, as shown in your cookbook/llms/groq/rag/.
@@JTIAPBN Can you please try another document? The error seems to be a character-encoding issue in your file. Here are more details: stackoverflow.com/a/9942822
@@phidata I did try other documents, but no luck.
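If other documents also fail, one workaround worth trying (a sketch, not a confirmed fix) is normalizing the extracted text before it reaches the embedder, since `\xa0` is a non-breaking space that trips ASCII-only encoders:

```python
# Sketch: fold characters like the non-breaking space (\xa0) that caused
# the UnicodeEncodeError above into plain ASCII. Illustrative
# preprocessing only; this function is hypothetical, not cookbook code.
import unicodedata

def sanitize(text: str) -> str:
    # NFKC normalization turns \xa0 into a regular space, among other fixes
    normalized = unicodedata.normalize("NFKC", text)
    # Drop anything that still cannot be encoded as ASCII
    return normalized.encode("ascii", errors="ignore").decode("ascii")

print(sanitize("position\xa07"))  # -> "position 7"
```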
Great sharing. I tried following your steps, but got an error when I read the PDF file: 'StatementError: (builtins.ValueError) expected 768 dimensions, not 0'. What might be the reason for this? Thanks.
I got a similar error, have you found a solution for this?
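"Expected 768 dimensions, not 0" suggests the embedder returned an empty vector, which can happen if the embedding model was never pulled into Ollama (e.g. via `ollama pull nomic-embed-text`). A quick sanity check, assuming the `OllamaEmbedder` settings used in the cookbook:

```python
# Sketch: verify the embedder produces 768-dimensional vectors before
# ingesting PDFs. Assumes phidata's OllamaEmbedder as configured in
# cookbook/llms/groq/rag; treat the exact parameters as assumptions.
from phi.embedder.ollama import OllamaEmbedder

embedder = OllamaEmbedder(model="nomic-embed-text", dimensions=768)
embedding = embedder.get_embedding("sanity check")
print(len(embedding))  # 768 expected; 0 means the model is not responding
```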
I followed all the steps, but when I start the app it shows only llama3 (no user input option is shown); just the one llama3 option.
If Llama is installed on a different server, how do I point the app at its URL?
You can set the `host` value on the LLM: `llm=Ollama(host="your url")`. Read more: docs.phidata.com/llms/ollama#params
@@phidata A separate video on calling a remote URL and getting the result would help everyone. By the way, I loved your videos very much.
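To make the remote-server answer concrete, a minimal sketch (the address is a placeholder; `host` is the parameter named in the docs linked above):

```python
# Sketch: pointing phidata's Ollama wrapper at an Ollama instance running
# on another machine. 11434 is Ollama's default port; the IP is made up.
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

assistant = Assistant(
    llm=Ollama(model="llama3", host="http://192.168.1.50:11434"),
)
assistant.print_response("Hello from a remote Ollama server")
```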
How do I host this app on the internet?
I've been trying to edit the llama3 temperature because it tends to hallucinate a lot. Where should I go to edit it?
You can send it to the LLM like: `llm=Ollama(model="llama3", options={"temperature": 0.1}),`
Docs: docs.phidata.com/llms/ollama#:~:text=Timeout%20for%20requests-,options,-Dict%5Bstr%2C%20Any
@@phidata I'm using the Groq API with export GROQ_API_KEY=***; where can I edit the temperature?
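A minimal sketch of the temperature wiring, assuming phidata's wrappers as documented; the Groq part is an unverified assumption, so double-check against the Groq docs page:

```python
# Sketch: lowering the temperature to curb hallucinations. `options` is
# forwarded to Ollama's API (see docs.phidata.com/llms/ollama).
from phi.llm.ollama import Ollama

llm = Ollama(model="llama3", options={"temperature": 0.1})

# For Groq, phidata's wrapper may expose a similar knob; this is an
# unverified assumption, not confirmed by the video:
# from phi.llm.groq import Groq
# llm = Groq(model="llama3-70b-8192", temperature=0.1)
```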
Does this work on Windows 10?
Yes it does :)
The problem is the 200 MB file limit… otherwise it is nice.
You can increase the limit :) This is just set on the front-end (Streamlit), but you can use as big a file as you like.
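For reference, Streamlit's uploader limit is controlled by the `server.maxUploadSize` setting (in MB). A sketch of raising it, assuming the standard Streamlit config mechanism:

```toml
# .streamlit/config.toml - raise the file_uploader limit from the 200 MB default
[server]
maxUploadSize = 1024
```

Alternatively, pass it on the command line: `streamlit run app.py --server.maxUploadSize 1024`.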
Llama 3 is similar to ChatGPT.
Hey! Thanks for the video. I'm hitting the following error:
"ERROR (psycopg.OperationalError) connection failed: FATAL: no pg_hba.conf entry for host "<IP>", user "ai", database "ai", no encryption"
Any idea how to solve it? Was anything missing in the readme file? I don't see where the file is being configured in the Docker container.
I was able to solve the issue by going to `/var/lib/postgresql/data/pgdata/pg_hba.conf` in the Docker container files and adding the following line:
`host ai ai <IP>/32 trust`
Make sure to replace `<IP>` with the IP shown in the error.
Then restart Docker and refresh the page. It should be usable now.
Great that you were able to solve this!! Such an odd error :/