This was very interesting... thank you!
My only issue is the lack of time for making the videos; if it were me, I would be doing them all the time 🙂
BTW, did you catch the LLM hallucinating? It says that FreeBSD was first released in the year 2000.... yeah... sure... hahahaha
Thank you, good video!
Thank you for your comment and positive feedback :-)
Very cool.
Thank you :-)
How much does it cost to use ports on FreeBSD in terms of disk space? Great video!
Hi. The ports tree is itself relatively large - for me it takes about 1 or 2 GB, but this depends on how you pull it (e.g. the full history, or only enough for the last commit, i.e. a shallow clone). I would definitely recommend using pkg instead of compiling everything, because not only can compiling take considerable space, it can also take a huge amount of time. As ollama is already available in the ports tree, just "pkg install" it :-)
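In case it helps, a minimal sketch of both routes (the target directory is the conventional /usr/ports; adjust to your setup):
  # shallow clone: only the latest commit, which keeps the tree small
  git clone --depth 1 https://git.FreeBSD.org/ports.git /usr/ports
  # or skip compiling entirely and install the prebuilt package
  pkg install ollama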
You might also find some interesting discussion about the size of the ports tree in here: forums.freebsd.org/threads/how-much-space-do-i-need-to-run-portsnap-fetch-extract.64586/
The FreeBSD forums are an awesome source of information :-)
Cheers!
Thank you for the great video.
Does this mean we can run our own AI on a home server?
Hi. Yes, you can run your own AI model on your home server with ollama. I heard it can even run on a RPi, but I have not tried this. The problem, though, is that it might require a beefy video card if you want the model to run at any usable speed...
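If you want to try, it is basically just this (a sketch - the model name is only an example; pick one that fits your hardware):
  # start the ollama server in the background
  ollama serve &
  # pull a model and chat with it; smaller models run better on modest hardware
  ollama run llama3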
Cheers :-)
Ollama isn't by Meta. Llama is.
Thanks for clarification. I will update the cover ASAP… Cheers :-)
13:30 Yeah, I'm sure FreeBSD was first released in 2000 and not 1993... 😬
AI has hallucinations FTW... 🤣
From Wiki: 1 November 1993; 31 years ago
Is this script available for use in Windows? I cannot seem to find it anywhere.
Hi. Thank you for the comment.
No, I do not think you can find it in Windows... This script was developed by Yuri Victorovich for the FreeBSD project; however, you can get it from here: github.com/freebsd/freebsd-ports/blob/main/misc/ollama/files/ollama-limit-gpu-layers and adapt it to your own use case...
Basically, you can see that the script runs the ollama command to get the Modelfile, adapts it by adding the num_gpu parameter, and then recreates the model with ollama... So you can also easily reproduce the steps in Windows CMD.
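A minimal sketch of those steps (the model name "llama3" and the layer count are just examples; the # lines are annotations, the commands themselves work the same way in Windows CMD):
  # dump the model's current Modelfile
  ollama show --modelfile llama3 > Modelfile
  # append the num_gpu parameter, e.g. offload at most 20 layers to the GPU
  echo PARAMETER num_gpu 20 >> Modelfile
  # recreate the model under a new name from the edited Modelfile
  ollama create llama3-limited -f Modelfile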
Cheers!