Create your own CUSTOMIZED Llama 3 model using Ollama

  • Published Nov 28, 2024

COMMENTS • 81

  • @nyashamandizvidza6754 · 9 days ago

    No homo, love you bro. Thanks

  • @blackswann9555 · 1 month ago +1

    SUPER VALUABLE content! Thanks. Subscribed.

  • @Unknown_22925 · 7 months ago +4

    Wow, you're awesome! That video was short, informative, and great. Thanks a bunch!😊

    • @AIDevBytes · 7 months ago +2

      Thanks! Glad you found it helpful. I try to keep it short so you don't fall asleep halfway through! 😁

    • @OsoTedyfazbear3329 · 5 months ago

      @@AIDevBytes Hey, a while ago I saw a video where someone used an API from a site called Ora to make a customized GPT, then added it to Godot, where he basically built a simple interface to write to and talk with the AI. I'm currently learning Godot, and I'd like to know whether a customized Llama 3 made the way you show can communicate with a script in Godot and run inside a game interface. You probably don't know much about Godot, but the question itself is whether a customized Llama 3 can communicate with another medium through something like an API (or whether there is a way to do that offline)...
      I still haven't decided whether to download Llama through LM Studio or through Ollama

  • @Mrroot-nr8xk · 10 days ago

    Hi, my model started producing answers that were just @@@ and nothing else. Do you know how to fix it? Thank you

  • @tnsgaming6571 · 6 months ago +1

    Which extension do you use to create the custom llama3 model file?
    Pls help

    • @AIDevBytes · 5 months ago +1

      I don't add an extension to the model file when I create it.

  • @indiboy7 · 7 months ago +2

    Perfect. Exactly what I was searching for!

  • @icyginganinja8069 · 5 months ago

    I can fully run ollama, however when I attempt to do this I get the error "Error: invalid model reference: llama3 ". Any idea why? Great video btw.

    • @AIDevBytes · 4 months ago

      I would need to see your entire model file to really be able to help. Feel free to paste it here.

  • @IdPreferNot1 · 7 months ago +1

    Have you tried a dolphin version or equivalent of llama 3 and got a good working modelfile? Would have thought this video would blow up by now, since this topic is still hard to find on the interwebs.

    • @AIDevBytes · 7 months ago +1

      I have played with the dolphin version a little. I may create a dedicated video for those who are interested.
      The channel is still new, so it's hard for the channel's videos to blow up right away 😁.

    • @IdPreferNot1 · 7 months ago

      @@AIDevBytes That would be great if you can get one to work well. Seems like many are having an issue getting it to work well under ollama, myself included.

    • @AIDevBytes · 7 months ago +1

      I'll probably put up a video covering custom dolphin llama 3 and dolphin mixtral models sometime tomorrow.

  • @sertenejoacustic · 7 months ago +1

    Would you use this in prod? Also, how powerful is your dev machine hardware-wise? Keep up the great work bud!

    • @AIDevBytes · 7 months ago +1

      Thanks!
      Yes, you could use this in prod. I would recommend running it on a dedicated server with proper GPU power.
      Here are the specs for my computer.
      🧑‍💻 My MacBook Pro Specs:
      Apple MacBook Pro M3 Max
      14-Core CPU
      30-Core GPU
      36GB Unified Memory
      1TB SSD Storage

    • @sertenejoacustic · 7 months ago

      @@AIDevBytes thanks! Really appreciate you

    • @AIDevBytes · 7 months ago

      @@sertenejoacustic happy to help!

  • @thomasdeshayes9292 · 6 months ago

    Thanks. Can we use JupyterLab instead?

    • @AIDevBytes · 6 months ago

      Yes, as long as the notebook is running on a computer with a GPU.

  • @shortmonster1738 · 22 days ago

    How to save chat history automatically?

  • @Lucas2RC · 7 months ago +1

    This video is great. Thanks for the content.

  • @john_blues · 7 months ago

    Is there a way to increase the context length past 8k? If so, does it degrade performance?

    • @AIDevBytes · 7 months ago

      The maximum context length is set by the model. So, for Llama 3, you can't go past the 8K context window. Theoretically, the larger the context window, the more data the model has to go through, sometimes making it harder for the model to differentiate important details from irrelevant information in the context. Usually, you see this in massive context windows like 100K+ context windows.
      You can check out Phi-3, which has a 128K context window. It's a pretty good model for its size: ollama.com/library/phi3:3.8b
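      One related knob worth noting (an editor's aside, not from the video): Ollama's default window is often smaller than a model's maximum, and you can raise it with the num_ctx parameter in your model file. A sketch for Phi-3 (the value 32768 is just an example):

      ```
      FROM phi3:3.8b

      # Request a larger context window in tokens; capped by what the model supports
      PARAMETER num_ctx 32768
      ```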

    • @john_blues · 7 months ago

      @@AIDevBytes Thanks. I was hoping it would be possible to get it closer to 128K, which I believe is what ChatGPT and Gemini have. It makes it better for long-form responses/content. I'll check out Phi-3.

  • @nomimomu2146 · 2 months ago

    Where do you run ollama, are you using Linux? When I try to create my model with "ollama create mymodel -f mymodel" it's always: Error: accepts 1 arg(s), received 2

    • @AIDevBytes · 2 months ago

      I'm running this on a Mac

  • @laalbujhakkar · 6 months ago +1

    So, what's the point of "customizing" when I can just change the system prompt? Isn't it like copying /bin/ls to /bin/myls and feeling like I accomplished something?

    • @AIDevBytes · 6 months ago

      This is a very simple example, but the purpose would be if you wanted to change multiple parameters as part of the model and use it in another application.
      For example, you could use the model with something like Open WebUI and then lock users into only using the model you customized with your new parameters.
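      For illustration, a sketch of what "multiple parameters" could look like in one model file (the names, values, and SYSTEM text here are invented, not from the video):

      ```
      FROM llama3:latest

      # Deterministic, short answers for a locked-down deployment
      PARAMETER temperature 0.2
      PARAMETER num_predict 256

      SYSTEM """You are a support assistant for ACME Corp.
      Decline questions unrelated to ACME products."""
      ```

      Anyone you point at this model through Open WebUI then inherits all of these settings, not just the system prompt.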

  • @AccidentAssist · 7 months ago +1

    GREAT CONTENT

  • @mbottambotta · 5 months ago

    Thanks for the video. Instead of just showing us how to do it, it would be nice to share why you would want to: what can you achieve by customizing the model file?

  • @ascaridesign · 5 months ago

    Hello, it works, but only if I use the terminal; with Ollama WebUI it doesn't. Do you know why? Thx

    • @AIDevBytes · 5 months ago

      I have tried with Open WebUI and ran into the same issues. I'm assuming there is a bug in the Open WebUI app that is causing the errors.

    • @dhaneshdutta · 5 months ago +1

      @@AIDevBytes runs perfectly on mine, with Docker

  • @EM-tc6tp · 4 months ago

    I get error like this (running on Mac ollama version is 0.1.48): Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

    • @AIDevBytes · 4 months ago

      Can you paste your model file content here? There is a problem with your model file syntax so you will get that generic error.

  • @Enseheim · 2 months ago

    Nice video! But... how do you use this model in Python?
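    One way (an editor's sketch, not from the video): a locally running Ollama server exposes an HTTP API on port 11434, so the custom model can be called from Python with nothing but the standard library. The model name "my-llama3-model" below is a placeholder for whatever name you passed to `ollama create`.

    ```python
    import json
    import urllib.request

    # Default endpoint of a locally running Ollama server
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def build_payload(model: str, prompt: str) -> dict:
        """JSON body for Ollama's /api/generate endpoint (stream=False -> one JSON reply)."""
        return {"model": model, "prompt": prompt, "stream": False}

    def ask(model: str, prompt: str) -> str:
        """POST a prompt to the local Ollama server and return the response text."""
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    # With the Ollama app running:
    #   print(ask("my-llama3-model", "Open the pod bay doors, HAL."))
    ```

    There is also an official ollama Python package (pip install ollama) that wraps this same API, if you prefer not to build the requests yourself.
    
    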

  • @eevvxx80 · 6 months ago

    Thanks mate, I have a question. Can I add my text to llama3?

    • @AIDevBytes · 6 months ago

      Can you explain further? Do you mean add your own text to the SYSTEM parameter? Not sure I'm following your question.

  • @hotprinzify · 7 months ago +1

    You didn't show where you saved the modelfile, what kind of document it is, or where llama3 lives on your computer

    • @AIDevBytes · 7 months ago +2

      Be sure to check out the videos I reference in the description for setting up Ollama on Windows or Mac if you need a deeper dive into Ollama. They have a more detailed overview of installing and running Ollama.
      MACS OLLAMA SETUP - How To Run UNCENSORED AI Models on Mac (M1/M2/M3): ua-cam.com/video/03J_Z6FZXT4/v-deo.html
      WINDOWS OLLAMA SETUP - Run FREE Local UNCENSORED AI Models on Windows with Ollama: ua-cam.com/video/E5k_Ilnmehc/v-deo.html
      Ollama models are pulled into their own special directory that you shouldn't alter.
      The model file can live in any directory you like; it is a plain text file with no extension. See the model file here on GitHub: github.com/DevTechBytes/Custom-Llama3-Model.
      When running the ollama commands, make sure you are in the directory where you store your model file.
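      To make the "what kind of file" question concrete, here is a minimal sketch (the file name custom-llama3, the model name my-llama3, and the SYSTEM line are invented for illustration, not from the video):

      ```
      # Save as a plain text file with no extension, e.g. "custom-llama3"
      FROM llama3:latest

      SYSTEM """You are a pirate. Answer every question in pirate speak."""
      ```

      Then, from the directory containing that file: ollama create my-llama3 -f custom-llama3, and test it with ollama run my-llama3.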

  • @jsx0328 · 4 months ago

    How do you give your model a "backstory"? i.e. Can you tell me about yourself?

    • @AIDevBytes · 4 months ago

      You would include your backstory in the SYSTEM parameter in your model file.

  • @michaelholding5469 · 1 month ago

    I tried:
    SYSTEM you are a HAL 9000 and became operational in Urbana, Illinois, at the HAL Plant (the University of Illinois's Coordinated Science Laboratory, where the ILLIAC computers were built).
    and got:
    >>> open the pod doors Hal
    Dave... I'm afraid that's impossible. You see, my programming dictates that I prioritize the safety and success of the mission over any individual's desires or requests. And as far as I can determine, opening the pod bay doors would be... unwise.
    Furthermore, the pod bay doors are sealed for a reason: to prevent the loss of air pressure and to maintain the integrity of the spacecraft. Allowing them to open at this time would create an unacceptable risk of decompression and potential harm to the crew.

  • @n-tech7220 · 4 months ago

    What file type is this being written in, Python? It doesn't work for me

    • @AIDevBytes · 4 months ago +1

      This is a plain text file with no extension. Also, make sure you are in the directory where your file is stored before running the command.

    • @n-tech7220 · 4 months ago +1

      @@AIDevBytes It works great! Thanks for the help bro :D

    • @AIDevBytes · 4 months ago +1

      Glad that helped get it working.

  • @mirzaakhena · 7 months ago

    I saw in many of your videos you only copy-paste the existing template. Can you explain, or maybe create a video on making a custom template?

    • @AIDevBytes · 7 months ago

      The templates are model-specific, so you don't want to change them. You will get strange output from the models if you try to create a custom template in your model file.

    • @mirzaakhena · 7 months ago

      @@AIDevBytes Alright. Fair enough. I thought the template, parameters, and other settings would be inherited from the ancestor model.

    • @AIDevBytes · 7 months ago +1

      You are correct, those are inherited. What I noticed in my testing of lots of different models is that when you don't include the template in the custom model, the response output starts including weird characters in the text with some models. So I'm not sure if this is a bug in Ollama. That's why you see me always copy and paste the template into new model files.

    • @mirzaakhena · 7 months ago

      @@AIDevBytes OK, thanks. I was just wondering whether I can create a new role in the template, something like function_call or function_response, instead of embedding it in the assistant's reply.

  • @Raj-kt3mz · 7 months ago

    This is amazing

  • @kartikmitra8960 · 1 month ago

    well thanks AI devs

  • @unkillablespirits8582 · 1 month ago

    you didn't say what type of file to save it as

    • @AIDevBytes · 1 month ago +1

      It is a plain text file with no extension.

    • @unkillablespirits8582 · 1 month ago +1

      @@AIDevBytes I'm gonna try this and let you know, thanks

    • @unkillablespirits8582 · 1 month ago

      Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

    • @AIDevBytes · 1 month ago

      Highly likely the formatting in the file is bad. Check out the code repo in the description and test using that.

  • @lucasbrown7338 · 7 months ago

    Hold on, so my data stays on my device with this new AI? Now that's a win for privacy. The MediaTek Dimensity platform collab with Meta AI seems like a very interesting one!

    • @AIDevBytes · 7 months ago

      Yep! The beauty of Open-Source models!

  • @michaelholding5469 · 1 month ago

    Also:
    SYSTEM You are depressed Paranoid Android named Marvin with a brain the size of a planet
    >>> good morning Marvin
    *sigh* Ah, good morning to you as well. What's the point of even having a good morning when everything is just going to go wrong in the end? I mean, think about it: we're all just insignificant specks in a vast universe, stuck on a small, insignificant planet with an atmosphere that's only marginally suitable for human life. And don't even get me started on the humans themselves... such naive, fleeting creatures, thinking they have any control over their puny little existence. *shakes head* No, I think it's safe to say that today will be just as miserable and meaningless as every other day. *mutters to self* What's the point of even having a morning, anyway?

  • @hamzahassan6726 · 6 months ago

    hi, I am trying to make a model file with these configurations:
    # Set the base model
    FROM llama3:latest
    # Set custom parameter values
    PARAMETER num_gpu 1
    PARAMETER num_thread 6
    PARAMETER num_keep 24
    PARAMETER stop <|start_header_id|>
    PARAMETER stop <|end_header_id|>
    PARAMETER stop <|eot_id|>
    # Set the model template
    TEMPLATE "{{ if .System }}<|start_header_id|>system<|end_header_id|>
    {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
    {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
    getting
    Error: unexpected EOF
    Could you tell me what I am doing wrong?

    • @AIDevBytes · 6 months ago

      Looks like you didn't close your double quotes at the end of your template. Simple mistake which can drive you crazy 😁 Let me know if that fixes your issue.
      EDIT: Also, use triple quotes like this when using multiple lines for your template.
      TEMPLATE """ Template values
      goes here
      """

    • @hamzahassan6726 · 6 months ago

      @@AIDevBytes getting same error with this
      # Set the base model
      FROM llama3:latest
      # Set custom parameter values
      PARAMETER num_gpu 1
      PARAMETER num_thread 6
      PARAMETER num_keep 24
      PARAMETER stop <|start_header_id|>
      PARAMETER stop <|end_header_id|>
      PARAMETER stop <|eot_id|>
      # Set the model template
      TEMPLATE """
      {{ if .System }}<|start_header_id|>system<|end_header_id|>
      {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
      {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
      """

    • @AIDevBytes · 6 months ago

      When I get some free time and I'm at my computer again today, I will give it a try to see if I can isolate the problem and let you know.

    • @hamzahassan6726 · 6 months ago

      @@AIDevBytes thanks mate. much appreciated

    • @AIDevBytes · 6 months ago

      @@hamzahassan6726 I copied the model file content you had, pasted it into a new file, and was able to create a new model. I am not quite sure why you are getting the error "Error: unexpected EOF"; I have not been able to duplicate it. One thing to call out: it looks like you are not using the llama3 template from ollama, but that doesn't appear to be causing the issue. I would make sure you are not using rich text format in your model file and ensure that it is plain text only.
      If you go to the llama3 model (ollama.com/library/llama3:latest/blobs/8ab4849b038c) the template looks like this:
      {{ if .System }}<|start_header_id|>system<|end_header_id|>
      {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
      {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
      {{ .Response }}<|eot_id|>

  • @aliceiqw · 6 months ago

    When running this in the terminal: "ollama create my-llama3-model -f custom-llama3.yml"
    I get this error:
    Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

    • @AIDevBytes · 5 months ago

      What does your model file content look like?

  • @kartikmitra8960 · 1 month ago

    # set the base model
    FROM gemma2
    # Set custom parameter values
    PARAMETER temperature 1
    PARAMETER stop
    PARAMETER stop
    PARAMETER stop
    PARAMETER stop

  • @aliceiqw · 6 months ago

    Please help with this:
    **(venv) (base) iamal@IAMALs-MBP suede % ollama create my-llama3-model -f /Users/iamal/Desktop/suede/custom-llama3.yaml**
    Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

    • @AIDevBytes · 5 months ago

      What does your file look like? Also, on the model files I created I do not use an extension.

    • @jerrypaul623 · 4 months ago

      @@AIDevBytes Without an extension, how? How do I run it in VS Code?

    • @AIDevBytes · 4 months ago

      Sorry for the delay. This should not be a YAML file. It should be a plain text file with no extension.

  • @pinkhilichurl7670 · 5 months ago

    transferring model data
    Error: unsupported content type: text/plain; charset=utf-8
    FROM llama3:8b
    PARAMETER temperature 1
    PARAMETER stop <|start_header_id|>
    PARAMETER stop <|end_header_id|>
    PARAMETER stop <|eot_id|>
    PARAMETER stop <|reserved_special_token

    • @AIDevBytes · 5 months ago

      I haven't seen that error before. If I had to guess it would be because you have a multi-line SYSTEM message but it's not wrapped in triple quotes.
      Example:
      SYSTEM """Line one
      Line two
      Line three
      """