Mistral NeMo - Easiest Local Installation - Thorough Testing with Function Calling
- Published Sep 12, 2024
- This video installs Mistral NeMo locally and tests it on multi-lingual, math, coding, and function calling.
🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
bit.ly/fahd-mirza
Coupon code: FahdMirza
▶ Become a Patron 🔥 - / fahdmirza
#mistralnemo
PLEASE FOLLOW ME:
▶ LinkedIn: / fahdmirza
▶ YouTube: / @fahdmirza
▶ Blog: www.fahdmirza.com
RELATED VIDEOS:
▶ Code www.fahdmirza....
▶ Resource mistral.ai/new...
All rights reserved © 2021 Fahd Mirza
💛Following is the thorough coverage of 3 newly released models from Mistral including local installation and code:
🔥Codestral Mamba - ua-cam.com/video/pgk7Si9qlQg/v-deo.htmlsi=fIxnHY319Obqtt24
🔥Mistral NeMo - ua-cam.com/video/wTZ5M73ehfw/v-deo.htmlsi=YgzNp4fKRMo_JiTG
🔥Mathstral 7B - ua-cam.com/video/E5WWK9IeYpc/v-deo.htmlsi=iye0ALFXVw7Fjs-F
What's the main difference between mamba and NeMo
The main difference is that Codestral Mamba uses the Mamba architecture, which is a state space model, while Mistral NeMo is a transformer-based model.
@@fahdmirza Which one is faster?
Dude, you’ve become one of my trusted go-to-guys for any new model
Any chance this model comes to Ollama? D:
Again, boss like coverage. Thank you!
My pleasure!
Hi bro, thanks. What are the minimum requirements to run the model locally?
Thanks for the nice video. Could you execute code that the model generates? This is the essence of code testing, proven functionality.
What is your machine config? Sorry if answered before.
All good, it's already mentioned in the video, cheers.
@@fahdmirza Ah! A6000! Not for everyone :)
Is 16gb of ram enough to run these models in the long term?
Why doesn't the GGUF work with llama.cpp?
It does
Honestly, 20 shirts would take the same time as 4 shirts if you put them in the sun. That's not good reasoning 😂
cheers
I played with cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b-gguf (8-bit quantized version) on a 64 GB RAM, i9-13900K, RTX 4090 machine with LM Studio. Very pathetic summarizing capability. Its coding capability in C# was very subpar...
Thanks for the feedback
How do I run a GGUF on llama.cpp?
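For readers with the same question, here is a minimal sketch, not taken from the video: the model filename and the sampling flags are assumptions, and it assumes you have already built llama.cpp (a recent version whose CLI binary is named `llama-cli`) and downloaded a quantized GGUF file.

```shell
# Hypothetical quantized model file; substitute whatever GGUF you downloaded.
MODEL="Mistral-Nemo-Instruct-2407-Q4_K_M.gguf"

# Compose the llama.cpp command; run it directly once llama.cpp is built.
#   -m  model file   -p  prompt   -n  max tokens to generate   -c  context size
CMD="./llama-cli -m $MODEL -p 'Hello, NeMo' -n 128 -c 4096"
echo "$CMD"
```

If the GGUF fails to load, a common cause is a llama.cpp build that predates support for the model's architecture, so pulling and rebuilding the latest llama.cpp is usually the first thing to try.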