HUGE - Run Models Directly from Hugging Face with Ollama Locally

  • Published 18 Oct 2024

COMMENTS • 30

  • @siddhubhai2508
    @siddhubhai2508 2 days ago +2

    Really, this is the thing everyone has been waiting for, including me 😢

  • @emmanuelkoupoh7979
    @emmanuelkoupoh7979 2 days ago

    That's great. Just yesterday I was wondering when I would get the chance to run Ministral models via Ollama.
    Thank you for presenting this.

    • @emmanuelkoupoh7979
      @emmanuelkoupoh7979 2 days ago

      I got this error, I think because there is a form to fill out for Ministral and some other models:
      Error: pull model manifest: Get "Authentication%20required?nonce=uJTLPEnU0-Br15UXm5zbPg&scope=&service=&ts=1729173119": unsupported protocol scheme ""
      Any idea how to get around it, please?
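
      A plausible cause and a hedged sketch of a fix: gated repositories such as Ministral require accepting the license form on the model page, and Ollama authenticates to Hugging Face with its local SSH key, so registering that key with your Hugging Face account should let the pull through (the repository path below is a placeholder):

          # Print Ollama's public key (default path on Linux/macOS;
          # on Windows it lives under C:\Users\%username%\.ollama)
          cat ~/.ollama/id_ed25519.pub

          # Add the key at https://huggingface.co/settings/keys, accept the
          # license form on the model page, then retry the pull:
          ollama pull hf.co/{username}/{repository}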

    • @fahdmirza
      @fahdmirza 2 days ago

      Glad I could be of assistance!

  • @takimdigital3421
    @takimdigital3421 2 days ago

    Amazing, bro! Thank you so much for this update and effort, you are the best ❤

  • @ImmanuelThePhenomenaKant
    @ImmanuelThePhenomenaKant 2 days ago

    I've been waiting for this forever!

  • @mryanmarkryan
    @mryanmarkryan 2 days ago

    Thanks for the heads up!

  • @Jegatheesh07
    @Jegatheesh07 2 days ago

    Thanks, Fahd. Good 👍

  • @geraldhewes
    @geraldhewes 2 days ago

    This is awesome

  • @QorQar
    @QorQar 2 days ago

    Thank you. My question is: how do I run a model that comes in several parts in Ollama, whether it is one I have already downloaded and want to add to Ollama with a Modelfile, or one I download from Hugging Face?
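
    A hedged sketch for the local, multi-part case, assuming the shards follow llama.cpp's split naming and that its llama-gguf-split tool is available (file and model names below are illustrative): merge the shards into one GGUF, then point a Modelfile at the result.

        # Merge llama.cpp-style shards into a single GGUF file
        llama-gguf-split --merge model-00001-of-00003.gguf model-merged.gguf

        # Register the merged file with Ollama via a Modelfile
        cat > Modelfile <<'EOF'
        FROM ./model-merged.gguf
        EOF
        ollama create my-model -f Modelfile
        ollama run my-model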

  • @BinaryDataEntertainment
    @BinaryDataEntertainment 1 day ago

    Can you only run GGUF models?

  • @bilaljamal-e1t
    @bilaljamal-e1t 1 day ago

    OK, nice. How about running models that are not in GGUF format directly from Hugging Face with Ollama locally?
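
    The direct hf.co/ pull path is for GGUF repositories; for safetensors weights, a hedged sketch using Ollama's import flow, which covers a set of supported architectures (the repository name and paths are placeholders, and huggingface-cli is assumed to be installed):

        # Fetch the safetensors weights locally
        huggingface-cli download {username}/{repository} --local-dir ./model-dir

        # Build an Ollama model straight from the safetensors directory
        cat > Modelfile <<'EOF'
        FROM ./model-dir
        EOF
        ollama create my-model -f Modelfile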

  • @s3m3sta
    @s3m3sta 2 days ago

    YESS!!!

  • @HassanAllaham
    @HassanAllaham 2 days ago

    Thanks for the good content.
    That's a very useful feature, but:
    As I understand it, Ollama can now download the GGUF file and then turn it into the blobs it uses to run the model, and these blobs are stored by default in the .ollama folder.
    For example, on Windows they end up in:
    C:\Users\{user_name}\.ollama\models\blobs
    The question is:
    Where does Ollama store the downloaded GGUF file? And most importantly, does it keep it or delete it after downloading?
    If Ollama keeps the downloaded GGUF file, then this feature is a truly wonderful one.

    • @fahdmirza
      @fahdmirza 2 days ago

      It does keep the GGUF file, though it renames it.
      Linux: /usr/share/ollama/.ollama/models
      macOS: ~/.ollama/models
      Windows: C:\Users\%username%\.ollama\models

    • @HassanAllaham
      @HassanAllaham 2 days ago

      @@fahdmirza Thanks for the info. If it keeps the GGUF file, then it can be used in other LLM inference engines, for example LM Studio or vLLM 🌹🌹🌹
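
      A hedged way to locate that renamed GGUF for reuse elsewhere (the model name and digest below are placeholders): ollama show can print the generated Modelfile, whose FROM line points at the blob on disk.

          # Print the Modelfile; the FROM line holds the blob path
          ollama show --modelfile hf.co/{username}/{repository}

          # The weights blob sits in models/blobs as sha256-<digest>;
          # a symlink with a .gguf extension makes it loadable by other engines
          ln -s ~/.ollama/models/blobs/sha256-<digest> ./model.gguf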

    • @chouawarasteven
      @chouawarasteven 2 days ago +1

      @@HassanAllaham Truly revolutionary. The models can even be used in VS Code too.
      I use Phi-3.5 Mini as a substitute for Cursor.
      This is really great news.

  • @AliAlias
    @AliAlias 2 days ago

    Nice ❤, but what about the template and stop tokens? 🤔

    • @fahdmirza
      @fahdmirza 2 days ago

      Those features remain part of Ollama.

    • @AliAlias
      @AliAlias 2 days ago

      @@fahdmirza I mean, with this method, does Ollama import them automatically from the repo?
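
      A sketch of how to check what was imported and override it if needed (the model name, template, and stop string below are illustrative): on a Hugging Face pull, the chat template is typically picked up from the GGUF metadata, and both it and the stop parameters can be inspected or replaced.

          # Inspect the template and parameters Ollama is using
          ollama show --template hf.co/{username}/{repository}
          ollama show --parameters hf.co/{username}/{repository}

          # Override them with a Modelfile layered on top of the pulled model
          cat > Modelfile <<'EOF'
          FROM hf.co/{username}/{repository}
          TEMPLATE """<|im_start|>user
          {{ .Prompt }}<|im_end|>
          <|im_start|>assistant
          """
          PARAMETER stop "<|im_end|>"
          EOF
          ollama create my-model -f Modelfile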