Thanks for the video. I was looking for exactly this (running AI on a handheld with Linux). I thought, well... if someone has tried it, they're probably a GPD device user, so I just typed "Local AI on GPD" and your reddit post was the first result on startpage xD
Nice :)
I expect the results to be way better now since my video is quite old.
Love it, we need more offline AI
A lot of progress has been made over the last few months.
They are even working on GPU support.
Once that is done and supported by my AMD GPU, I might make another video and test the speed.
I got a new machine with 32 GB of RAM, so that might help, too.
@williamether
There is still no update regarding GPU support.
However they are working on it :)
A while ago, a friend of mine asked if I could find the one Facebook created, since it leaked online. There were 4 models: 7b, 13b, 33b, and 65b. The "b" is for billions of neurons, I believe. The 65b needs at least 58 GB of RAM, but with 32 you could run the 33b. Having no idea how to compile the code, it was still interesting as hell to hear about.
Thanks. This seems like a good alternative I could run if I wanted to 😁
Yeah, the "b" is the number of connections.
The Vicuna and snoozy models are based on that Facebook data but improved, so go ahead and try them. Basically every model with "vicuna" in the name is based on it.
Going beyond 13b requires a very high amount of VRAM.
The gpt4all models are quantized (compressed) and made CPU-friendly, so they run on the CPU with lower RAM requirements. It's basically the same model.
If you follow the link in the video description to the online demo, select 'vicuna' and you can try it online if you want.
I think it wouldn't hurt to do an educational video about all the different terms related to these AI models. No promises, though, and my next 1-2 videos will be GPD/handheld related.
Hope that helps :)
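For anyone curious what "quantized" actually means here, a toy sketch in pure Python (this is a simplified 8-bit illustration, not the actual 4-bit scheme gpt4all/llama.cpp use, but the idea of trading a little precision for much less RAM is the same):

```python
# Toy illustration of weight quantization: store 32-bit floats as
# 8-bit integers plus one shared scale factor. 1 byte per weight
# instead of 4 -> roughly 4x less RAM for the model weights.

def quantize(weights):
    """Map floats to ints in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Approximately recover the original floats."""
    return [x * scale for x in q]

weights = [0.12, -0.98, 0.45, 0.03]
q, scale = quantize(weights)
restored = dequantize(q, scale)

print(q)         # [16, -127, 58, 4] -- small integers, 1 byte each
print(restored)  # close to the original weights, tiny rounding error
```

The restored weights differ from the originals by at most the scale factor, which is why a quantized model answers almost the same as the full-precision one.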
Have you noticed on your Max 2 that the battery drains even when it's powered down? Powered down at 98%, powered back up 8 hrs later, and it's at 73%. Thanks for the video.
No, not for me.
If your battery drops that much in just 8 hrs, then something must be wrong.
Are you sure you did not enter sleep mode or hibernation?
You might have a bad battery. I don't know of any such case, but there is always a small chance of failure.
If the device discharges normally while turned on, the battery is probably fine.
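To put a number on the drain described above (back-of-the-envelope only, assuming the reported percentages are accurate):

```python
# 98% -> 73% over 8 hours while supposedly powered off.
start, end, hours = 98, 73, 8
rate = (start - end) / hours

print(f"{rate:.1f}% per hour")       # 3.1% per hour
print(f"~{rate * 24:.0f}% per day")  # ~75% per day -> flat in well under two days

# A machine that is truly shut down should lose only a few percent
# per week, so drain at this rate suggests a fault, or that the
# device was actually in sleep/hibernate rather than fully off.
```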
I reached out to them, and they asked me to let it run all the way down to zero, then recharge back to 98%, a couple of times. I noticed that if it's left on, it doesn't run down as fast as when I have it completely off. So it might just be that the battery needs to be conditioned by letting it completely drain. Appreciate the videos and the responses. Great channel.
@skandashiva1805 Thanks :)
Hope you can fix your battery.
Can it run on an old device? What are the minimum requirements, and how do we check our device's RAM? Thanks for the great content, this subject is really interesting.
It's hard to answer those questions. People have run this on a Raspberry Pi and on phones, AFAIK, so in theory at least the smaller 7b models should run on any hardware that is not too ancient.
I'd say 8 GB of RAM is required, at least if you are running Windows, which uses 2 GB all by itself.
Also, the video is basically outdated already, as there is a new and much better commercially usable model out there, and the UI has been updated.
I am confident that the performance of these models will improve in the near future and that they will run on many more devices.
Maybe, if I'm not too lazy, I'll do a performance demo on my Win Max 2, so people can see the real-time speed of these models running on laptops.
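A rough way to estimate how much RAM a model needs is parameters times bytes per weight (ballpark only; real loaders add overhead on top, and the bytes-per-weight figures here are typical assumptions, not exact):

```python
# Rough RAM estimate: billions of parameters x bytes per weight
# gives gigabytes (1e9 params * N bytes ~= N GB per billion).
# float16 = 2 bytes/weight; 4-bit quantized ~ 0.5 bytes/weight.

def model_ram_gb(params_billion, bytes_per_weight):
    return params_billion * bytes_per_weight

for params in (7, 13, 65):
    fp16 = model_ram_gb(params, 2.0)
    q4 = model_ram_gb(params, 0.5)
    print(f"{params}b: ~{fp16:.0f} GB in fp16, ~{q4:.1f} GB at 4-bit")
```

This is why a quantized 7b model fits comfortably on an 8 GB machine while the unquantized 65b is out of reach for consumer hardware.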
@mystechry Thank you so much for answering :D
Nice video bro
Thanks :)
I might do an easy-to-make follow-up video where I run a few prompts, so people can see how the different models perform on my machine.