How to Install Llama3 On Any Computer
- Published Jun 2, 2024
- In this video, I walk you through the process of installing Llama 3 on your computer.
This enables you to use the power of AI with no limits, even if your favorite chatbot's servers go offline. Plus, it means that none of your potentially private data is being sent who-knows-where; it all stays on your local machine.
You can copy the links below into LM Studio's search function to directly access these models.
Links:
lmstudio.ai/
huggingface.co/meta-llama/Met...
huggingface.co/cognitivecompu...
---------------------------------------------------------------------------------
Chapters:
0:00 Intro
0:23 LM Studio
0:54 Download a Model
1:41 Chat Offline
2:09 More Options
#ai #chatgpt #llama3
---------------------------------------------------------------------------------
🔑 Get My Free ChatGPT Templates: myaiadvantage.com/newsletter
🎓 Join the AI Advantage Course + Community: myaiadvantage.com/community
🤯 Unlock ChatGPT's true potential: shop.myaiadvantage.com/produc...
🐦 Twitter: / theaiadvantage
📸 Instagram: / ai.advantage
🛒 AI Advantage Shop: shop.myaiadvantage.com/ - Science & Technology
Thank you! I've been waiting for this with dolphin on a basic Mac.
I love the way you talk!❤
Yes yes, me too. Personally, it's the only reason I watch the channel. My attorney teaches me the rest.
Thanks, very useful.
Dark mode and zoom in. Perfect. Thanks.
Are you able to upload documents (PDFs for example) in this local application of the LLM? If not, is there any way to do that? Great video as usual!
Hah, this is what I just asked too. Didn't see your question. I'm in the same boat as you, as I upload stuff to GPT-4 and that is one of the main functions I require.
Not inside LM Studio (that is a closed-source application), but yes, by combining other open-source applications to set up what is commonly known as RAG (Retrieval-Augmented Generation). There are tutorials on how to set up RAG locally on YouTube (not on this channel, obviously), but if you want a really simple way, you can do it with an RTX card (not on Mac hardware, since Macs don't let you plug in GPUs) and Nvidia's "Chat with RTX" application, which comes with RAG integrated.
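The core RAG idea mentioned above can be sketched in a few lines: retrieve the document chunk most relevant to a question, then prepend it to the prompt you send to the model. This toy sketch uses bag-of-words cosine similarity as a stand-in for a real embedding model; the `embed` function and the sample chunks are hypothetical, and a production setup would use an embedding model plus a vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a real embedding model: word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str]) -> str:
    # Return the chunk most similar to the question.
    q = embed(question)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

chunks = [
    "The invoice total for March was 4200 dollars.",
    "Our office is located in Berlin.",
]
question = "What was the invoice total in March?"
context = retrieve(question, chunks)

# Augment the prompt with the retrieved context before sending it
# to the local model (e.g. via LM Studio's or Ollama's local API).
prompt = f"Context: {context}\n\nQuestion: {question}"
print(context)
```

The "upload a PDF" workflows people ask about above are this loop plus a PDF-to-text step and a real embedding model.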
Not this simply, no. As the other comment by @paelnever suggests, this would be done with RAG, which makes things considerably more complicated. I'll research whether there are any simpler ways besides Chat with RTX, which requires an Nvidia card.
@@aiadvantage Probably the easiest way would be to combine an open-source LLM framework like Ollama with Open Interpreter. That is actually much more than RAG, but hey, it's still open source and free.
@@paelnever Yes, true. It should just be noted that even Ollama requires a basic understanding of a command-line interface. For most users, using GPT-4 with Code Interpreter is the way to go for now.
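For anyone curious what "basic command-line understanding" means here, the Ollama workflow is only a few commands. This is a sketch of the typical setup as of mid-2024; check ollama.com for the current install instructions for your platform before running anything.

```shell
# Install Ollama (macOS/Linux one-liner from the official site)
curl -fsSL https://ollama.com/install.sh | sh

# Download Llama 3 8B and open an interactive chat in the terminal
ollama run llama3

# Ollama also exposes a local HTTP API that tools like
# Open Interpreter can point at (default port 11434)
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'
```

Everything stays on your machine; the API calls above go to localhost, not a remote server.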
OK, here is what I am looking for. I use ChatGPT-4 due to its ability to look at certain files. I can upload a file and say "curate the data" or similar; it easily allows me to upload a file. Is this possible in LM Studio? Thanks for your dedication to the videos you make... long-time subscriber here.
Can you run this on an AWS Lightsail instance? I noticed a brief flash of Microsoft, Google and Amazon on the screen.
Awesome, thank you for this! Would it be possible to run Llama 3 70B locally on an M1 Mac Studio? Could I do it with 32GB RAM? (If not, what would I need hardware-wise with a Mac to run it locally?) Thanks!
No, but for half the price you can do it on a PC with Linux.
Hmm, it will be close. Just download LM Studio, search for Llama-3-70B, and it will show you right away. Let me know how it went.
Mobile phone when?
Admin privileges required??
Is there anything even close to an equivalent that could be run on an M2 iPad Pro?
nice... 👏🍎🍎🍏🍏 So much for my excuse for buying an H100.
1:40
What does that laptop hat lady have to do with anything?
🤣
Don't underestimate her. She is running Llama 70B in a vineyard
free GPT4 or GPT3?
Llama 3 8B is better than GPT-3.5 but worse than GPT-4. Hope that answers your question
The 8B-parameter version that he's running in the video is actually better than GPT-3.5 Turbo. If you want something equivalent to GPT-4, you need to run the 70B version, but for acceptable inference speed you'd better use a graphics card with 40GB of VRAM or more, and then obviously we're not talking about Apple hardware but PC hardware, preferably with Linux.
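The VRAM numbers being thrown around in this thread follow from simple arithmetic: weight memory is roughly parameter count times bytes per parameter. This back-of-the-envelope sketch ignores the KV cache and runtime overhead, which add several more GB in practice.

```python
def weight_gb(params_billions: float, bits_per_param: int) -> float:
    # Weights-only memory estimate: params * bits / 8 bits-per-byte,
    # expressed in GB (using 1 GB = 1e9 bytes for simplicity).
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# fp16 (16-bit) vs a 4-bit quantization (e.g. a Q4 GGUF file)
for model in (8, 70):
    for bits in (16, 4):
        print(f"Llama 3 {model}B @ {bits}-bit: ~{weight_gb(model, bits):.0f} GB")
```

So 70B at fp16 needs ~140GB, while a 4-bit quantization brings it to ~35GB, which is why a 32GB Mac is "close" but tight, and why a 40GB+ card (or a 24GB card with a more aggressive quant) is the usual recommendation.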
Misleading title, not any computer
Title is misleading - won’t run on “any” computer. LM Studio won’t run on Intel-based MacBooks, like mine. 😢
Try jan dot ai. Same thing, fully open source, works on Intel Macs, just not updated as frequently.
What about for mobiles, like Phi-3?
Anyway, trying to run even a small model on such a weak computer would be a pain; get ready to wait 10 seconds for every single word. You can get a mid-range PC with a second-hand Nvidia card with 24GB VRAM for less than 500 bucks, and with that you'd get inference speeds around 15 tokens/second.