Mistral NeMo - Easiest Local Installation - Thorough Testing with Function Calling

  • Published 12 Sep 2024
  • This video installs Mistral NeMo locally and tests it on multi-lingual, math, coding, and function calling.
    🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahd...
    🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
    bit.ly/fahd-mirza
    Coupon code: FahdMirza
    ▶ Become a Patron 🔥 - / fahdmirza
    #mistralnemo
    PLEASE FOLLOW ME:
    ▶ LinkedIn: / fahdmirza
    ▶ UA-cam: / @fahdmirza
    ▶ Blog: www.fahdmirza.com
    RELATED VIDEOS:
    ▶ Code www.fahdmirza....
    ▶ Resource mistral.ai/new...
    All rights reserved © 2021 Fahd Mirza

COMMENTS • 21

  • @fahdmirza
    @fahdmirza  1 month ago

    💛Here is thorough coverage of 3 newly released models from Mistral, including local installation and code:
    🔥Codestral Mamba - ua-cam.com/video/pgk7Si9qlQg/v-deo.htmlsi=fIxnHY319Obqtt24
    🔥Mistral NeMo - ua-cam.com/video/wTZ5M73ehfw/v-deo.htmlsi=YgzNp4fKRMo_JiTG
    🔥Mathstral 7B - ua-cam.com/video/E5WWK9IeYpc/v-deo.htmlsi=iye0ALFXVw7Fjs-F

    • @kishoreembeti2613
      @kishoreembeti2613 1 month ago

      What's the main difference between Mamba and NeMo?

    • @fahdmirza
      @fahdmirza  1 month ago

      The main difference is that Mamba is built on the Mamba architecture, which is a state-space model, while NeMo is a transformer-based model.

    • @kishoreembeti4924
      @kishoreembeti4924 1 month ago

      @@fahdmirza Which one is faster?

  • @proterotype
    @proterotype 25 days ago

    Dude, you've become one of my trusted go-to guys for any new model

  • @tytanyo007
    @tytanyo007 1 month ago +1

    Any chance this model comes to Ollama? D:

  • @MrMoonsilver
    @MrMoonsilver 1 month ago

    Again, boss-like coverage. Thank you!

  • @CarlosGomez-fj8bz
    @CarlosGomez-fj8bz 1 month ago

    Hi bro, thanks. What are the minimum requirements to run the model locally?

  • @samyio4256
    @samyio4256 1 month ago

    Thanks for the nice video. Could you execute the code that the model generates? That's the essence of code testing: proven functionality.

  • @PratikBodkhe
    @PratikBodkhe 1 month ago +1

    What is your machine config? Sorry if answered before.

    • @fahdmirza
      @fahdmirza  1 month ago

      All good, it's already mentioned in the video, cheers.

    • @PratikBodkhe
      @PratikBodkhe 1 month ago +2

      @@fahdmirza Ah! A6000! Not for everyone :)

  • @user_t9732
    @user_t9732 1 month ago

    Is 16 GB of RAM enough to run these models in the long term?

  • @user-cb7yl4nr6h
    @user-cb7yl4nr6h 24 days ago

    Why doesn't the GGUF work with llama.cpp?

  • @AshWickramasinghe
    @AshWickramasinghe 10 days ago

    Honestly, 20 shirts would take the same time as 4 shirts if you put them in the sun. That's not good reasoning 😂

  • @thegooddoctor6719
    @thegooddoctor6719 1 month ago

    I played with cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b-gguf, an 8-bit quantized version, on a 64 GB RAM, i9-13900K, RTX 4090 machine with LM Studio. Very pathetic summarizing capability. Its coding capability in C# was very subpar.....