Unlock Ollama's Modelfile | How to Upgrade your Model's Brain using the Modelfile

  • Published Aug 7, 2024
  • In this video, we are going to analyse Ollama's Modelfile and see how we can change the brain of the models in Ollama.
    A Modelfile is the blueprint for creating and sharing models with Ollama.
    A Modelfile can include the following instructions: FROM, PARAMETER, TEMPLATE, SYSTEM, ADAPTER, LICENSE and MESSAGE.
    Link: github.com/ollama/ollama/blob...
    Let’s do this!
    Join the AI Revolution!
    CHANNEL LINKS:
    🕵️‍♀️ Join my Patreon: / promptengineer975
    ☕ Buy me a coffee: ko-fi.com/promptengineer
    📞 Get on a Call with me - at $125 Calendly: calendly.com/prompt-engineer4...
    ❤️ Subscribe: / @promptengineer48
    💀 GitHub Profile: github.com/PromptEngineer48
    🔖 Twitter Profile: / prompt48
    TIME STAMPS:
    0:00 Intro
    0:30 Download Ollama
    1:15 Startup Ollama
    4:10 Introducing the Modelfile
    5:15 Modelfile in Depth
    7:46 System in Modelfile
    8:20 Construct Custom Model from Modelfile
    9:18 Test the new Custom Model
    10:43 Messages in Modelfile
    12:57 Next Video Conclusion
    🎁Subscribe to my channel: / @promptengineer48
    If you have any questions, comments or suggestions, feel free to comment below.
    🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!
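As a quick illustration of the instructions the video covers (FROM, PARAMETER, SYSTEM, MESSAGE), here is a minimal sketch of a Modelfile and the commands that build and run a custom model from it. The model name `mario-llama` and all parameter values are hypothetical examples, not from the video:

```shell
# Write a minimal Modelfile (hypothetical example values).
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
SYSTEM You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
MESSAGE user Who are you?
MESSAGE assistant It's-a me, Mario!
EOF

# Build a new custom model from the Modelfile, then chat with it.
ollama create mario-llama -f Modelfile
ollama run mario-llama
```

The MESSAGE lines seed the conversation history, so the model starts out already "in character" before the first real user prompt.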
  • Science & Technology

COMMENTS • 40

  • @drmetroyt 5 months ago +2

    Thanks for taking up the request ... 😊

  • @TokyoNeko8 5 months ago +5

    I use the web UI, and I feel it's much easier to manage the Modelfiles, with the obvious history tracking of the chat, etc.

  • @renierdelacruz4652 5 months ago +2

    Great video, thanks very much.

  • @fkxfkx 5 months ago +1

    Great 👍

  • @user-ms2ss4kg3m 3 months ago +1

    Great, thanks!

  • @saramirabi1485 1 month ago +1

    I have a question: is it possible to fine-tune llama-3 in Ollama?

  • @enesnesnese 2 months ago +1

    Thanks for the clear explanation. But can we also do this for the llama3 model built on the ollama image in Docker? I assume that containers do not have access to our local files.

    • @PromptEngineer48 2 months ago

      Yes, you can.

    • @enesnesnese 2 months ago +1

      @@PromptEngineer48 How? Should I create a file named Modelfile in the container, or locally? I am confused.

    • @PromptEngineer48 2 months ago

      @@enesnesnese You should create the Modelfile locally; you can then run the model created from it in the container.
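      A minimal sketch of that workflow, assuming the container was started from the official ollama/ollama image and is named `ollama` (container and model names here are assumptions):

```shell
# Copy the locally written Modelfile into the running container,
# then build and run the custom model inside it.
docker cp ./Modelfile ollama:/tmp/Modelfile
docker exec ollama ollama create mymodel -f /tmp/Modelfile
docker exec -it ollama ollama run mymodel
```

      A bind mount (`-v $(pwd):/work`) at container start would work just as well as `docker cp`.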

    • @enesnesnese 2 months ago +1

      @@PromptEngineer48 Got it. Thanks.

  • @khalidkifayat 5 months ago +1

    Nice one. My question was: how do you use mistral_prompt for production purposes, or send it to a client?

    • @PromptEngineer48 5 months ago

      Yes. You can push this to your Ollama account under your models. Then anyone will be able to pull the model with something like: ollama pull promptengineer48/mistral_prompt. I will show the process in the next video on Ollama for sure.
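      Sketched as commands, assuming the model exists locally and your ollama.com account keys are set up (the model name comes from the comment above):

```shell
# Namespace the local model under your ollama.com username, then push it.
ollama cp mistral_prompt promptengineer48/mistral_prompt
ollama push promptengineer48/mistral_prompt

# Anyone can then pull it by its namespaced name:
ollama pull promptengineer48/mistral_prompt
```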

    • @khalidkifayat 5 months ago +1

      @@PromptEngineer48 Appreciated, mate.

  • @michaelroberts1120 4 months ago +1

    What exactly does this do that koboldcpp or sillytavern does not already do in a much simpler way?

    • @PromptEngineer48 4 months ago

      Basically, if I can get the models running on Ollama, we open another door of integration.

  • @user-wr4yl7tx3w 4 months ago +1

    Do you have a video showing how to use CrewAI and Ollama together?

  • @autoboto 5 months ago +2

    This is great info. One thing I have wanted to do is migrate all my local models to another drive. On Win11 I was using WSL2 with Linux Ollama; then I installed Windows Ollama and lost the reference to the local models. I'd rather not download the models again. In addition, it would be nice to be able to migrate models to another SSD and have Ollama reference the alternate model path.
    OLLAMA_MODELS on Windows works, but only for downloading new models. When I copied models from the original WSL2 location to the new location, Ollama would not recognize the models in the list command.
    Curious whether anyone has needed to relocate a large number of models to a new location and gotten Ollama to reference the new path.
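    For what it's worth, a sketch of such a relocation. The likely catch is that Ollama's model store needs both the blobs/ and manifests/ subdirectories; copying only the blob files leaves `ollama list` empty because the manifests that name the models are missing. Paths below are hypothetical, and the Ollama service should be stopped before moving anything:

```shell
# Point Ollama at a new model directory (hypothetical path).
# On Windows, set OLLAMA_MODELS as a user environment variable instead.
export OLLAMA_MODELS=/mnt/d/ollama/models
mkdir -p "$OLLAMA_MODELS"

# Move the whole store: manifests/ names the models, blobs/ holds the weights.
mv ~/.ollama/models/manifests "$OLLAMA_MODELS"/
mv ~/.ollama/models/blobs "$OLLAMA_MODELS"/

# Restart Ollama with OLLAMA_MODELS in its environment; the models
# should then show up again:
ollama list
```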

  • @UTubeGuyJK 4 months ago +1

    How does Modelfile not have a file extension? This keeps me up at night not understanding how that works :)

    • @PromptEngineer48 4 months ago +1

      I will find the reason and give you a good night's sleep.

    • @robertranjan 4 months ago +1

      ❯ ollama run mistral
      >>> does a computer filename must have a extension?
      A computer file name does not strictly have to have an extension, but it is a common convention in many computing systems, including
      popular operating systems like Windows and macOS. An extension provides additional information about the type or format of the data
      contained within the file. For instance, a file named "example.txt" with no extension would still be considered a valid file, but the
      system might not recognize it as a text file and may not open it with the default text editor. In contrast, if the same file is saved
      with the ".txt" extension, the system is more likely to open it using the appropriate text editor.
      One popular file like `Modelfile` without an extension is `Dockerfile`. I think the developers named it after that one...

  • @JavierCamacho 3 months ago +1

    Stupid question: does this create a new model file, or just an instruction file that the base model follows?

    • @PromptEngineer48 3 months ago

      A new model file.

    • @JavierCamacho 3 months ago +1

      @PromptEngineer48 So the size on drive gets duplicated...? I mean, 4 GB of llama3 plus an extra 4 GB for whatever copy we make?

    • @PromptEngineer48 3 months ago +1

      @@JavierCamacho No, the old one is not used, just the new one.

    • @JavierCamacho 3 months ago

      @@PromptEngineer48 Thanks.

  • @EngineerAAJ 5 months ago +1

    Is it possible to prepare a model with RAG and then save it as a new model?

    • @PromptEngineer48 5 months ago +1

      To prepare a model for RAG, we would need to fine-tune the model separately using other tools, get the .bin or GGUF file, and then convert it for Ollama integration.

    • @EngineerAAJ 5 months ago +1

      @@PromptEngineer48 Thanks, I will try to take a deeper look into that, but something tells me I won't have enough memory for that :(

    • @PromptEngineer48 5 months ago

      Try it on RunPod.

  • @romanmed9035 3 months ago +1

    How do I find out when a model was actually updated? When was it filled with data, and how outdated is that data?

    • @PromptEngineer48 3 months ago

      You will have to put a different name for the model...

    • @romanmed9035 3 months ago +1

      @@PromptEngineer48 Thank you, but I asked how to find out how current the data is when I download someone else's model rather than making my own.

    • @PromptEngineer48 3 months ago

      If you run the ollama list command in cmd, you will see the full list of models on your system.
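      For reference, the relevant commands. Note that the MODIFIED column in `ollama list` shows when the model was pulled or created locally, not its training-data cutoff; for the cutoff you would have to check the model's page or ask the model itself:

```shell
# List local models with their size and local modified time.
ollama list

# Inspect one model's details, including the Modelfile it was built from
# (the model name "llama3" is just an example):
ollama show llama3 --modelfile
```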