Ollama - Run LLMs Locally - Gemma, LLAMA 3 | Getting Started | Local LLMs
- Published 3 Oct 2024
- This video is about getting started with Ollama to run LLMs locally (a minimal getting-started sketch follows the links below).
Join membership for exclusive perks: / @siddhardhan
Deep Learning Projects Playlist: • Deep Learning Projects
Machine Learning Projects Playlist: • Machine Learning Projects
Download the Course Curriculum File from here: drive.google.c...
LinkedIn: / siddhardhan-s-741652207
Telegram Group: t.me/siddhardhan
Facebook group: www.facebook.c...
Getting an error in any of the code I explained? Mail the error details to: datascience2323@gmail.com
Instagram: / siddhardhan23
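
For readers who just want to try it, here is a minimal getting-started sketch, assuming the official `ollama` Python package is installed (`pip install ollama`), the Ollama server is running, and a model has already been pulled; the `gemma:2b` tag is illustrative:

```python
# Minimal sketch: chat with a locally running model via the ollama package.
# Assumes Ollama is installed from https://ollama.com and the model was
# pulled beforehand with `ollama pull gemma:2b` (tag is illustrative).
import ollama

response = ollama.chat(
    model="gemma:2b",  # any locally pulled model tag works here
    messages=[{"role": "user", "content": "Explain what Ollama does in one sentence."}],
)
print(response["message"]["content"])
```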
Bro, please complete the deep learning course ASAP! It will be very helpful.
Excellent video, brother! ❤ How can I create a bot that takes PDFs as input and answers our questions from those PDFs, but LOCALLY, as you did in this video?
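
A hedged sketch of one way to do this fully locally, not the method from the video: extract the PDF text, chunk it, embed the chunks, and pass the most relevant ones to the model as context. It assumes the `ollama` and `pypdf` packages and two pulled models, `nomic-embed-text` for embeddings and `llama3` for answers; the file name `manual.pdf` is hypothetical.

```python
# Hypothetical sketch of a fully local PDF question-answering bot.
# Assumes `pip install ollama pypdf` and models pulled via
# `ollama pull nomic-embed-text` and `ollama pull llama3`.
import math
import ollama
from pypdf import PdfReader

def pdf_chunks(path, size=1000):
    """Read a PDF and split its text into fixed-size character chunks."""
    text = "".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Embed a string locally with the nomic-embed-text model."""
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def ask(path, question, top_k=3):
    chunks = pdf_chunks(path)
    q_vec = embed(question)
    # Rank chunks by similarity to the question and keep the best few.
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    response = ollama.chat(
        model="llama3",
        messages=[
            {"role": "system", "content": "Answer using only the given context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

print(ask("manual.pdf", "What does chapter 2 cover?"))  # hypothetical file
```

For anything beyond a few pages, a proper vector store would replace the brute-force embedding and cosine ranking done per question here.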
Bro, how many years have you been learning in this field?
Thanks for all your great training videos. However, I don't understand the purpose of the first three lines in the messages list, the ones with the system, user, and assistant roles. Are these lines required for this Gemma LLM? Thanks.
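
Those lines are a common prompting pattern rather than a hard requirement of the model. A minimal sketch of the idea, assuming the `ollama` Python package (the `gemma:2b` tag is illustrative): the system message steers overall behavior, and one user/assistant pair acts as a few-shot example of the desired answer style.

```python
# Minimal sketch of the role-based message format, assuming the `ollama`
# package and a pulled `gemma:2b` model (the model tag is illustrative).
import ollama

messages = [
    # "system": optional instructions that steer the model's overall behavior.
    {"role": "system", "content": "You are a concise assistant."},
    # One "user"/"assistant" pair can serve as a few-shot example of the
    # desired answer style; it is optional, not required by the model.
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
    # The actual question goes in the final "user" message; a messages list
    # containing only this entry would also work.
    {"role": "user", "content": "What is the capital of France?"},
]

response = ollama.chat(model="gemma:2b", messages=messages)
print(response["message"]["content"])
```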
I am getting slow responses from Ollama with the model llama3:instruct. How do I improve response speed? It takes around 2-3 minutes to return a response.
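
A hedged suggestion, not from the video: if the model is simply too large for the hardware, switching to a smaller or more heavily quantized model helps most; short of that, streaming the response at least shows tokens as they are generated instead of after the full wait. A minimal sketch with the `ollama` Python package:

```python
# Minimal streaming sketch, assuming the `ollama` package and a pulled
# `llama3:instruct` model. Streaming does not speed up generation, but it
# surfaces tokens immediately instead of after the full 2-3 minute wait.
import ollama

stream = ollama.chat(
    model="llama3:instruct",
    messages=[{"role": "user", "content": "Summarize what Ollama does."}],
    stream=True,  # yields partial chunks as they are generated
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```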
Does Ollama use my PC's CPU or GPU?
Why did you change to the Linux OS?
It's much better for working with Python; my PC runs more efficiently now. Also, in companies most deployment servers will be Linux, so it's better to get used to it.
I need your help. Where can I contact you?