Thank you. This is great content.
Glad you enjoyed it
Hey, I am really curious how you were able to get Hermes in GPT4ALL to view online photos. I've tried using LocalDocs to connect my models to web content, but no dice....
What version of gpt4all are you using? It was reported earlier that certain versions after 3.4.2 had issues with some of the models. In my case, no special configuration was required.
@@AISoftwareDevelopers I updated to their latest version (1.6.1). It's on Windows. I can get my visual models to hallucinate an answer (based on whatever the address says), but other than that, no dice.
Noob here. Very interesting, thanks. You have several models downloaded. Are they on an external drive, and what is the configuration of your machine?
The models are stored on my local hard drive, a 14'' MacBook Pro (late 2023) with an M3 and 1TB of storage. You can store them on an external drive if space is an issue. The operations will be slower, but it will work. Thanks for the comment!
I would love a video on how to build custom, high-quality datasets with Nomic.
You really can't with their PC models ... they are very stupid.
Can you elaborate on the use case and the tools? Is this with Nomic Atlas or GPT4ALL?
Hey, thanks for sharing! Does it have a limit on PDF file size? I've got some files that are almost 5GB. Will it work?
I am not aware of any limits, but parsing a PDF of that size will be a challenge for any application, not just gpt4all. A powerful CPU, tons of RAM, and a GPU may help. Otherwise, you may want to parse the PDFs into Markdown first, using something like LlamaParse (paid), and then process the MD files in gpt4all. The embeddings will still take time, though.
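To make the parse-to-Markdown idea above a bit more concrete: once a huge PDF has been converted to Markdown (by LlamaParse or any other tool), splitting the result into smaller files before pointing LocalDocs at the folder can make the embedding pass easier to manage. This is a generic sketch of my own, not a GPT4All feature; the paragraph-based splitting and the 2000-character budget are arbitrary choices.

```python
# Sketch: split one large Markdown export into smaller .md files so a
# LocalDocs-style indexer can work through them in manageable pieces.
# The chunking strategy (greedy, paragraph-aligned) is an assumption,
# not anything GPT4All requires.
from pathlib import Path


def split_markdown(text: str, max_chars: int = 2000) -> list[str]:
    """Greedily pack paragraphs into chunks of roughly max_chars.

    A single paragraph longer than max_chars is kept whole rather
    than split mid-paragraph.
    """
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def write_chunks(md_path: str, out_dir: str) -> int:
    """Write each chunk of md_path as part_NNNN.md under out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    chunks = split_markdown(Path(md_path).read_text(encoding="utf-8"))
    for i, chunk in enumerate(chunks):
        (out / f"part_{i:04d}.md").write_text(chunk, encoding="utf-8")
    return len(chunks)
```

You would then add the output folder as a LocalDocs collection and let the embedder churn through the parts.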
As it's an installed application, is there any way to use the local models inside an editor such as Visual Studio or Windsurf?
Yes, in the options, activate the OpenAI endpoint.
I don’t see a reason why not. The models are downloaded to a folder you can configure and therefore load and use from anywhere else you need to. Great question!
Can I use this as a local server and use its API, hosted locally, for my other projects? If it does, that will be awesome, and if not, I think that's a good feature for the next iteration to implement ❤
Already done, see my other post.
Yes, as @themax2go pointed out, you can configure and expose an API endpoint and have other apps use the models.
@@AISoftwareDevelopers Thanks, I will surely try.
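For anyone wanting to try the endpoint discussed above: once the local API server is enabled in GPT4All's settings, it speaks the OpenAI chat-completions protocol. A minimal Python sketch, assuming the documented default port 4891 (check your own settings) and a placeholder model name that must match a model installed in your copy:

```python
import json
import urllib.request

# GPT4All's local API server is OpenAI-compatible. 4891 is the
# documented default port; adjust BASE_URL if yours differs.
BASE_URL = "http://localhost:4891/v1"


def build_request(prompt: str, model: str = "Llama 3 8B Instruct") -> dict:
    """Build an OpenAI-style chat-completions payload.

    The model name is a placeholder; it must match a model shown
    in your GPT4All UI.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def chat(prompt: str, model: str = "Llama 3 8B Instruct") -> str:
    """Send one prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_request(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `chat("Summarize LocalDocs in one sentence.")` with the server running should return the model's reply; any OpenAI-compatible client library pointed at `BASE_URL` should work the same way.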
If you use their 3.5.0 and above, you won't be able to sideload models ... Downgrade to 3.4.2, which rocks.
I wasn't aware of this, but after checking they have already released 3 minor updates since the video was recorded. A fast-paced team, for sure 😃
@@AISoftwareDevelopers Those minor updates still don't load most HF models out of the box. Your mileage may vary. I use 3.4.2.
You can use sideloaded models just fine, but it might require tweaking the chat template. The latest version - 3.6.0 - which was just released does have replacements and examples for several well known sideloaded models.
@@adamtreat7582 thanks for chiming in. What is a good link to learn more about this? If there's enough interest, maybe I can throw together a quick tutorial on how to side-load models?
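For readers wondering what "tweaking the chat template" means in practice: from 3.5.0 onward, GPT4All uses Jinja-style chat templates, and a sideloaded model generally needs a template matching the format it was trained on. As an illustration only (many models use ChatML, but your model's card is the authority), a ChatML-style template looks roughly like:

```jinja
{#- ChatML-style template: only valid for models trained with the
    <|im_start|> / <|im_end|> tokens; check your model's card. -#}
{%- for message in messages %}
{{- '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
```

Pasting the wrong template is a common reason a sideloaded model produces garbage or refuses to respond, which matches the 3.5.0 breakage described above.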