How to Download Llama 3 Models (8 Easy Ways to Access Llama 3)
- Published 17 Apr 2024
- 🔗 Links 🔗
This tutorial shows how to download the newly released Meta AI Llama 3 models.
You'll learn to download and use the Llama 3 models locally, and also on free websites!
llama.meta.com/docs/getting_t...
llama.meta.com/llama-downloads/
huggingface.co/meta-llama/Met...
www.kaggle.com/models/metares...
huggingface.co/NousResearch/M...
huggingface.co/mlx-community/...
www.meta.ai/
huggingface.co/chat/
labs.perplexity.ai/
❤️ If you want to support the channel ❤️
Support here:
Patreon - / 1littlecoder
Ko-Fi - ko-fi.com/1littlecoder
🧭 Follow me on 🧭
Twitter - / 1littlecoder
LinkedIn - / amrrs
How to use Llama 3 Locally (Full Tutorial) - ua-cam.com/video/ZrqCm5jE_nQ/v-deo.html
great work getting these videos up in such short time! really helpful!
Thank you for providing so many different ways to access Llama 3. I didn't even know half of them before watching the video.
Thanks for the appreciation!
For reference: I have 12 GB VRAM and 32 GB RAM, and I can run the Llama 3 70B 4-bit quant (barely, by splitting it between RAM and VRAM so that 11 GB of VRAM and 31 GB of RAM are used). It takes me about a minute per word, but it works. I recommend trying a 3-bit quant or sticking to Llama 3 8B unless you have patience or better hardware :)
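As a rough sanity check on the memory figures above, you can estimate a quantized model's weight footprint from its parameter count and bits per weight. This is a sketch that ignores the KV cache, activations, and runtime overhead, so the real footprint is somewhat higher:

```python
def quantized_model_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Weight-only memory estimate for a quantized model, in decimal GB.

    Ignores KV cache, activations, and runtime overhead.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Llama 3 70B at 4-bit: ~35 GB of weights alone, which is why it has to be
# split across 12 GB VRAM + 32 GB RAM on the setup described above.
print(quantized_model_size_gb(70, 4))  # 35.0
# Llama 3 8B at 4-bit: ~4 GB, comfortable on most consumer machines.
print(quantized_model_size_gb(8, 4))   # 4.0
```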
When you run Llama 2 locally using Ollama, which GPU is advisable?
Please make videos on how we can use these models, test them in different scenarios, or maybe use them in web apps. There are no videos on this on YouTube.
I have been trying the image generation on this, and it is substantially better and faster.
In the future, if possible, can you make a tutorial on Llama 3 with images?
When I use Llama 3 8B on Ollama or LM Studio, it is much dumber than on OpenRouter, even after resetting all parameters to factory defaults and loading the Llama 3 preset, and even with the full non-quantized 8-bit version in LM Studio.
I had a very bad experience downloading it. I have a MacBook Air M2 with 8 GB, and it lagged so hard it was like using a cheap laptop. Also, the Llama 3 I installed produced incorrect code when I asked for a prime-numbers program, and when I mentioned Ollama, it told me to "keep things respectful and not use any vulgar language".
Bro you are the GOAT
I was so confused when reading the README
Thanks bro!
Also, it's now so overused that Hugging Face is down (4-22-24) in my region. Can't access it.
Agreed, great video for understanding our options to get ANY model up and running. I've opted for Ollama, as everything else seems too complicated or too expensive.
To be very honest, this is my first time trying something like this. I lowkey need a step-by-step tutorial, especially for Windows 😪
I only have 8 GB of RAM. Are those 2- or 3-bit quantized versions any good? Because those are the only ones I can run.
8 GB is for office use, not for running models.
Hey big thanks!
You're welcome! In case this helps: ua-cam.com/video/ZrqCm5jE_nQ/v-deo.htmlsi=3cVqxFerw-I2CRni
Where is the Google Colab code?
You're right, Perplexity Labs has Llama 3 running, and it's fast.
Install Ollama, open cmd, type `ollama run llama3` .. done.
`ollama run llama3` will only have a 2k context window?
I guess you can change that with a Modelfile.
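For reference, Ollama lets you override the default context window with a `num_ctx` parameter in a Modelfile. A minimal sketch (the model name `llama3-8k` is just an example; 8192 is Llama 3's trained context length):

```
# Modelfile — raise the context window from the default 2048
FROM llama3
PARAMETER num_ctx 8192
```

Build and run it with `ollama create llama3-8k -f Modelfile` followed by `ollama run llama3-8k`. Note that a larger context window uses more memory.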
Great video. Yes, can you please create a Colab example for Llama 3?
To run or to fine-tune?
Not to forget the RAGNA Desktop app, even though it's only available for Mac yet ;)
Bro, what specs are needed to run these models?
What if my laptop doesn't have a GPU?
This tutorial assumes you don't have a GPU; it works on a consumer-grade, CPU-only laptop, so you can run this.
@@1littlecoder Thank you for your reply. I downloaded the Ollama Windows preview. It works well.
Thank you for your video. Can we connect on LinkedIn, if possible?
Sure my LinkedIn is in the video description
I'm so impatient for Groq to host the model; soon we will see blazing-fast, high-quality agents working together.
Even the Perplexity one is quite fast. Not sure if they're offering an API as well.
NousResearch must have removed those LLMs; they're no longer accessible or visible.
Bro, how did you make the video so fast lol
Try using it through Jan
Does it work well?
@@1littlecoder Without any problems
Colab notebook
What about GGUF?