Ollama - Loading Custom Models

COMMENTS • 47

  • @5Komma5
    @5Komma5 8 months ago +5

    That worked. Thanks.
    If the model page lacks information and a similar model is available, you can get its modelfile by loading that model and using
    ollama show --modelfile
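
    For example, assuming a similar model like mistral-openorca is already pulled, something like this prints its modelfile, which can be redirected into a file as a starting point:

    ollama show mistral-openorca --modelfile > Modelfile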

    • @samwitteveenai
      @samwitteveenai  8 months ago +1

      Yes, they have added some nice commands since I made this video. You can also make changes now while the model is running and export those to the setup/modelfile. I will try to make a new updated video.

    • @MasonJames
      @MasonJames 8 months ago +1

      Would also love to see a new version - I've referenced this one several times. Thank you, @@samwitteveenai!

  • @carlosparica8131
    @carlosparica8131 9 months ago

    Hello Mr Witteveen. Thanks for the informative video! May I request a more in-depth explanation of what a model file is and how it works, more specifically the TEMPLATE?
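
    (For reference, TEMPLATE tells Ollama how to wrap the system and user messages in the chat format the model was trained on; Ollama fills in {{ .System }} and {{ .Prompt }} at request time. A minimal sketch for a ChatML-style model such as jackalope, assuming <|im_start|>/<|im_end|> are its chat markers:

    TEMPLATE """{{ if .System }}<|im_start|>system
    {{ .System }}<|im_end|>
    {{ end }}<|im_start|>user
    {{ .Prompt }}<|im_end|>
    <|im_start|>assistant
    """)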

  • @brunapupoo4809
    @brunapupoo4809 28 days ago

    ollama create modelfile -f ./modelfile
    transferring model data 100%
    converting model
    Error: open config.json: file does not exist
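
    (That error usually means the FROM line is not pointing at a GGUF file, so ollama falls back to importing a raw Hugging Face checkpoint and looks for its config.json. A minimal sketch with a hypothetical file name; it also helps to give the created model a name distinct from the file:

    FROM ./jackalope-7b.Q4_K_M.gguf

    ollama create mymodel -f ./modelfile)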

  • @gammingtoch259
    @gammingtoch259 1 month ago

    How can I import these files like ln -s, a symbolic link, but have ollama do it automatically? Is that possible?
    The problem is that ollama keeps models in a folder with hashed names, and I need these GGUF models for another program too and don't want to duplicate them.
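
    (There is no built-in symlink option that I know of, but the blob ollama writes should be a byte-for-byte copy of the GGUF, so one hedged workaround is to point the other program, or a symlink, at the blob itself. The hash below is a placeholder; take the real path from the FROM line that show prints:

    ollama show mymodel --modelfile
    ln -s /usr/share/ollama/.ollama/models/blobs/sha256-<hash> ~/models/shared.gguf)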

  • @guanjwcn
    @guanjwcn 11 months ago +2

    Can't wait for the Windows version to try it out.

  • @dib9900
    @dib9900 4 months ago

    Where can I get the expected Parameters & Template values for a given model if the Modelfile is not included with the model, so I can convert it to ollama format?
    I'm specifically interested in embeddings models, not LLMs.
    For example, this model: SFR-Embedding-Mistral-GGUF
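
    (One hedged approach: pull the closest model that does exist in the Ollama library and inspect what it uses, e.g.:

    ollama show mistral --parameters
    ollama show mistral --template

    For a pure embeddings model the TEMPLATE matters little, since the embeddings endpoint takes raw text rather than a chat prompt.)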

  • @jasperyou645
    @jasperyou645 3 months ago

    Thank you for sharing! I just want to know: could I run Jackalope unquantized with Ollama? It seems the GGUF file is used to store the quantized model.
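
    (GGUF is a container format rather than a quantization, so an unquantized f16 GGUF is possible. A sketch, assuming llama.cpp's convert script and a local copy of the original Jackalope weights:

    python convert.py ./jackalope-7b --outtype f16 --outfile jackalope-7b-f16.gguf

    The result loads via FROM like any other GGUF, just much larger and slower than a quantized build.)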

  • @the_real_cookiez
    @the_real_cookiez 7 months ago +1

    This is so awesome. With the new gemma LLM, I wanted to load that model in. Thank you!

  • @t-dsai
    @t-dsai 11 months ago +1

    Thank you Mr. Witteveen for this helpful video. One question: is it possible to move the ollama settings directory to a custom place instead of the default "~/.ollama"?

    • @StevenSeiller
      @StevenSeiller 10 months ago

      +1 🤏 My system drive is small compared to my TBs data drives.
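
      (The model directory can be relocated with the OLLAMA_MODELS environment variable, set for the process that runs the server. A sketch for a Linux systemd install; the /mnt/data path is only an example:

      sudo systemctl edit ollama.service
      # add under [Service]:
      # Environment="OLLAMA_MODELS=/mnt/data/ollama/models"
      sudo systemctl restart ollama

      On a Mac, export OLLAMA_MODELS before launching ollama serve.)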

  • @HunterZolomon
    @HunterZolomon 7 months ago

    Appreciate this a lot, thanks! The stop parameters in your example don't seem necessary as a default though (even detrimental for some models, they stop halfway through the response), and could be explained a bit more thoroughly. You could do a clip going through the parameters, starting with PARAMETER num_ctx ;)
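
    (For anyone experimenting, PARAMETER lines go one per line in the modelfile. The values below are illustrative, not recommendations, and stop strings should match what the model's template actually emits, for exactly the cut-off reason described above:

    PARAMETER num_ctx 4096
    PARAMETER temperature 0.7
    PARAMETER stop "<|im_end|>")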

  • @BikinManga
    @BikinManga 4 months ago

    Thank you, your example modelfile template saved me from the headache of loading a custom Yi model. It's perfect!

  • @nicolashuve3558
    @nicolashuve3558 6 months ago

    Hey, thanks for that. Where are models located on a mac? I can't seem to find them anywhere.
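
    (On macOS they should be under ~/.ollama/models, stored as content-addressed blobs rather than named GGUF files:

    ls ~/.ollama/models/blobs)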

  • @yunomi26
    @yunomi26 9 months ago

    Hey, so I wanted to build a RAG architecture. Can I take one of the embedding models from MTEB, create it as a model through ollama, and then use ollama's embedding API to generate embeddings? But then the API can only be used for the model that is running, and I wanted Mistral to generate completions and GTE for embeddings. How do you think I can solve this?
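
    (The model is chosen per request rather than per server, so both endpoints can be hit with different models as long as each has been pulled or created. A sketch against a local server; "gte-embed" is a hypothetical model created from a GGUF:

    curl http://localhost:11434/api/embeddings -d '{"model": "gte-embed", "prompt": "text to embed"}'
    curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Answer using this context: ..."}'

    Ollama swaps models in and out of memory as needed, which costs latency but works.)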

  • @noob-ep4lx
    @noob-ep4lx 6 months ago

    Hello! Thank you so much for this video, but I ran into a problem: my storage filled up halfway through the installation and the progress bar paused (stuck at 53.48%). Whenever I close and re-run the command, it checks 5 files, skips 5 files, and pauses there. Is there any way to fix this?

    • @samwitteveenai
      @samwitteveenai  6 months ago

      I suggest just going in, deleting the model files, and starting again.
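
      (A minimal sketch; the model name is a placeholder, and the blobs path may differ per install:

      ollama rm mymodel                       # remove the model, if it was registered
      rm ~/.ollama/models/blobs/*-partial     # partially downloaded blobs from an interrupted pull)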

  • @jasonp3484
    @jasonp3484 5 months ago

    Outstanding my friend! I learned a new skill today! Thank you very much for the lesson

  • @julian-fricker
    @julian-fricker 11 months ago

    Thanks for the great video. You should give LM Studio a try: it makes finding and downloading models easier, can make use of the GPU, and lets you run these models behind a ChatGPT-compatible API.

  • @Annachrome
    @Annachrome 11 months ago

    Thanks for introducing me to Ollama! I am running open-source models on LangChain, but having trouble with models calling (or not using) custom tools appropriately. Would you mind making a tutorial for initializing agents without openai models? Perhaps with prefix/format_instructions/suffix kwargs. 🙏 All the docs, tutorials, deeplearning courses use openai models.... 😢

  • @Canna_Science_and_Technology
    @Canna_Science_and_Technology 11 months ago

    Do you have a video about setting up a PC for running LLMs? What GPU, how much memory, what software is needed, and so on?

  • @AKSTEVE1111
    @AKSTEVE1111 5 months ago

    It worked like a charm, thank you! Just need to look at the model with my web browser.

  • @nitingoswami1959
    @nitingoswami1959 11 months ago

    It doesn't support multi threading 😭😭

  • @nitingoswami1959
    @nitingoswami1959 11 months ago

    I have 16 GB of RAM and a Tesla graphics card, but ollama still takes a long time to generate an answer. It seems like it only uses the CPU to do the work; how can I utilise both the CPU and GPU simultaneously? 🤔🤔

    • @LeftThumbBreak
      @LeftThumbBreak 11 months ago

      If you're running a Tesla graphics card I'm assuming you're on a Linux machine and not a Mac. If so, are you sure you're running the Linux distro? I run ollama all the time on GPU-equipped servers and it runs on the GPU.

    • @nitingoswami1959
      @nitingoswami1959 11 months ago

      @@LeftThumbBreak Running on Ubuntu, but when I send a first request using curl and then a second request at the same time, it waits for the first request before processing the second one. Why is that happening? Is it due to the CPU, or to not having multi-threading?
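
      (At the time, the ollama server answered generate requests one at a time from a queue, so the second curl waiting on the first is expected behaviour rather than a CPU limitation. Newer builds expose OLLAMA_NUM_PARALLEL for concurrent requests against a loaded model; a hedged sketch:

      OLLAMA_NUM_PARALLEL=2 ollama serve

      To confirm the GPU is being used at all, watch nvidia-smi while a generation runs.)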

  • @KarlHeinzBrockhausen
    @KarlHeinzBrockhausen 9 months ago

    I can't find any folder with models inside on Ubuntu, only temp files.

    • @KratomSyndicate
      @KratomSyndicate 8 months ago

      Models are located in /usr/share/ollama/.ollama/models/, or in WSL2 at \\wsl.localhost\Ubuntu\usr\share\ollama\.ollama\models\

  • @thaithuyw4f
    @thaithuyw4f 10 months ago

    Which folder do you put the model file in?
    I can't even find ollama's primary model folder, even using realpath and which, ...

    • @thaithuyw4f
      @thaithuyw4f 10 months ago

      Oh sorry, now I've found ollama's model folder; it only contains files starting with sha256..., so I think your download folder can be anywhere.
      But when I run create I get this error, even when using sudo:
      ⠦ transferring context Error: rename /tmp/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c16463723071254256853 /usr/share/ollama/.ollama/models/blobs/sha256:08c6abdff588bf35db696057c1cd7861caf722e7e2c25b2ab7c18c1646372307: invalid cross-device link
      Do you know why?

    • @samwitteveenai
      @samwitteveenai  10 months ago

      Not sure why you have a blob like that; normally it will be a named file, and then ollama makes the blob etc. and copies it to the right location. Your model (text) file should specify the path to the llama.cpp (GGUF) file.
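
      (The "invalid cross-device link" part suggests the cause: create stages data in /tmp and then rename()s it into /usr/share/ollama/.ollama/models, and rename cannot cross filesystems, so it fails when /tmp is a separate tmpfs or partition. A hedged workaround, assuming the server honours TMPDIR, is to point its temp directory at the same filesystem as the models via the systemd unit:

      [Service]
      Environment="TMPDIR=/usr/share/ollama/tmp"

      Create that directory and make it writable by the ollama user first.)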

  • @wiltedblackrose
    @wiltedblackrose 11 months ago

    I've had a lot of issues running ANY model with ollama. It keeps crashing on me. Did you have that too? (Btw, there is an issue open right now...)

    • @samwitteveenai
      @samwitteveenai  11 months ago

      So far it has been pretty rock solid on the 2 Macs I have been running it on.

    • @wiltedblackrose
      @wiltedblackrose 11 months ago

      @@samwitteveenai So I assume in CPU-only mode... That explains it. The issue I was facing was with CUDA.

  • @savelist1
    @savelist1 11 months ago

    Hi Sam, wondering why you have not done any LlamaIndex videos?

    • @samwitteveenai
      @samwitteveenai  11 months ago

      I have done a few but didn't release them for a variety of reasons (the changing API, etc.). I will make some new ones. I do use LlamaIndex for certain work projects and it has some really nice features.

  • @MichealAngeloArts
    @MichealAngeloArts 11 months ago

    Thanks for sharing this. Is there a link to that model file you show in the video (on github etc)?

    • @samwitteveenai
      @samwitteveenai  11 months ago

      I updated it in the description but here it is - huggingface.co/TheBloke/jackalope-7B-GGUF/tree/main

    • @MichealAngeloArts
      @MichealAngeloArts 11 months ago

      @@samwitteveenai Sorry, I didn't mean to ask about the HF model files (the GGUF) but about the model 'configuration' file used by Ollama to load the model. Obviously plenty of 'model file' terminology in the loop 😀

    • @responsible-adult
      @responsible-adult 11 months ago

      Jackalope running wild (template problem?)
      Really liking the ollama series, but having a Jackalope problem.
      Using the jackalope configuration text file I tried to copy from the video, when I run the resulting model the "creature" goes into a loop and starts generating questions for itself and answering them. I think it's related to the template.
      Please post the exact known-to-work configuration file for jackalope. Thanks!

    • @samwitteveenai
      @samwitteveenai  11 months ago +1

      @@responsible-adult You can take the template from something like mistral-openorca by loading that model and using "/show template". Sounds like you have an error in the template, or possibly you are using a lower-quantized version?
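
      (Concretely, assuming mistral-openorca is pulled:

      ollama run mistral-openorca
      >>> /show template

      then paste the printed template into the jackalope modelfile's TEMPLATE block.)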

    • @MasonJames
      @MasonJames 11 months ago

      @@MichealAngeloArts I'm also stuck on this step. My modelfile works, but the resulting model doesn't seem to "converse" well. Not sure how to troubleshoot the modelfile specifically.

  • @RobotechII
    @RobotechII 11 months ago +1

    Really cool, very Docker-esque!

  • @bimbotsolutions6665
    @bimbotsolutions6665 10 months ago

    AWESOME WORK, THANKS A LOT...