Perfect Prompts Automatically

  • Published 14 Mar 2024
  • #ollama #textgen #prompt #comfyui #sdnext #a1111 #forge #StableDiffusion #Proteus #Stable-Cascade #Cascade #LLaVA #IFAI
    Perfect Prompts Automatically
    with IF AI tools for ComfyUI
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my (*・‿・)ノ⌒*:・゚✧

    character available at ko-fi.com/impactframes/shop
    SD related civitai.com/user/impactframes
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    VIDEO LINKS📄🖍️o(≧o≦)o🔥
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Watch tutorials:
    Join the Impact Frames fam! Subscribe now: youtube.com/@impactframes?si=...
    ko-fi.com/impactframes
    patreon.com/ImpactFrames
    / @impactframes
    ------------------------------------------------------------
    Please star the repos below.
    Custom node
    github.com/if-ai/ComfyUI-IF_A...
    My extension for SD WebUI
    github.com/if-ai/IF_prompt_MKR
    Ollama
    Make sure you use Ollama version 0.1.25 for image-to-prompt, or wait for them to fix the bug: github.com/ollama/ollama/rele...
    github.com/ollama/ollama
    Ollama WebUI
    github.com/ollama-webui/ollam...
    Ollama models
    huggingface.co/dataautogpt3/P...
    ollama.com/impactframes/mistr...
    ollama.com/impactframes/stabl...
    ollama.com/brxce/stable-diffu...
    ProteusV0.3
    huggingface.co/dataautogpt3/P...
    Enjoy
    ImpactFrames.
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    🔥NOTES
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Ollama commands
    ollama -h                     # show help
    ollama -v                     # print the installed version
    ollama list                   # list locally installed models
    ollama run name_of_the_model  # download (if needed) and run a model
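
    For scripted use, Ollama also serves a local REST API on port 11434; a minimal sketch in Python (the model tag below is only an example, use anything from "ollama list"):

    import json
    import urllib.request

    # Ollama listens on localhost:11434 by default; /api/generate
    # returns a single JSON object when "stream" is false.
    payload = {
        "model": "mistral",  # example tag; any model from "ollama list" works
        "prompt": "Write a detailed Stable Diffusion prompt for a foggy harbor at dawn.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])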
  • Entertainment

COMMENTS • 52

  • @Allan2302
    @Allan2302 14 days ago +1

    Thanks for making ComfyUI better, really game-changing nodes

    • @impactframes
      @impactframes  13 days ago

      Thank you so much, this is the type of comment I love to see

  • @gimperita3035
    @gimperita3035 3 months ago +1

    Having a lot of fun with your nodes. Thank you!!

    • @impactframes
      @impactframes  3 months ago +1

      Thank you so much! Also, I made a new update: you can now optionally use the Anthropic and OpenAI APIs. There is also a new display-text node 😊

  • @MarceloPlaza
    @MarceloPlaza 12 days ago

    Thanks for this integration, it works great.

  • @Douchebagus
    @Douchebagus 2 months ago +1

    This is amazing, exactly what I needed! Cheers man.

    • @impactframes
      @impactframes  2 months ago

      Thank you for leaving a comment 🙂

  • @NotThatOlivia
    @NotThatOlivia 3 months ago +3

    very nice - going to add this to my workflow ASAP!!! GJ

    • @impactframes
      @impactframes  3 months ago

      Thank you 🙂 glad you like it

  • @pseudoAkk
    @pseudoAkk 3 months ago +3

    an incredible job. don't worry about the likes, keep working wonders) there are few smart people in the world who can perceive your context... but we are all with you))

    • @impactframes
      @impactframes  3 months ago +1

      Thank you for your words of encouragement, I will keep improving it, thanks.

  • @sarpsomer
    @sarpsomer 3 months ago +1

    Neat Tutorial!

  • @SeanietheSpaceman
    @SeanietheSpaceman 1 month ago +1

    This is very good.

    • @impactframes
      @impactframes  1 month ago

      Thank you so much, I am striving to make it even better 🙂

  • @xdevx9623
    @xdevx9623 3 months ago +2

    You don't know how much this helped me, THANKS A LOT!!
    And can you also make a video on AI video generation please (text to video)?

    • @impactframes
      @impactframes  3 months ago +1

      Thank you, yes, I am working on that. I wish I could dedicate my time exclusively to this to get there faster, but it is coming eventually.

  • @sushicommander
    @sushicommander 3 months ago +1

    I'm building a similar tool but with the diffusers and transformers libraries. I've been testing Ollama as well. I'm curious what your system prompt is in the modelfile (Ollama)? Do you use one-shot? Two-shot? Good job on the release, it's genuinely cool.

    • @impactframes
      @impactframes  3 months ago

      Thank you, there is no pre-prompt in the modelfile. I am passing a system prompt to the model as a system message; that way you can use general models, and depending on their reasoning capabilities you get different results. You can read the system message in the code; there is another one for the LLaVA models, since that function is a little bit different. Thanks.
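
      (For anyone wanting to reproduce that pattern outside the node: Ollama's /api/chat endpoint accepts role-based messages, so the system prompt can live in the request rather than in the modelfile. A rough sketch; the system text here is a made-up placeholder, not the node's actual prompt:)

      import json
      import urllib.request

      # The system role keeps the instruction out of the modelfile,
      # so any general model can be swapped in.
      payload = {
          "model": "mistral",  # placeholder; use any local model
          "messages": [
              {"role": "system", "content": "You turn ideas into Stable Diffusion prompts."},  # placeholder
              {"role": "user", "content": "a rainy neon street at night"},
          ],
          "stream": False,
      }
      req = urllib.request.Request(
          "http://localhost:11434/api/chat",
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["message"]["content"])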

  • @alm7traf
    @alm7traf 3 months ago +1

    Hello, when I choose the workflow file from the upload feature in the program, a message appears saying: When loading the graph, the following node types were not found:
    Batch Load Images
    When you click on Queue Prompt, another message comes:
    SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
    How is the problem solved? Thank you.

    • @impactframes
      @impactframes  3 months ago

      Hi, if you have ComfyUI Manager, install the missing nodes. Those extra nodes are github.com/Kosinkadink/ComfyUI-VideoHelperSuite and github.com/bash-j/mikey_nodes; you can get either of them or both, they are for loading batch images from a folder. The JSON error you are having I haven't seen, but you can try saving in CSV or TXT. I haven't used any JSON, so I don't know why you are getting that error.
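
      (If the JSON error persists, one way to narrow it down is to validate the workflow file directly; Python's json module reports the same character position the browser error shows. The path below is just an example:)

      import json

      path = "workflows/example_workflow.json"  # hypothetical path; point at your own file
      try:
          with open(path, encoding="utf-8") as f:
              json.load(f)
          print("valid JSON")
      except json.JSONDecodeError as e:
          # e.pos matches the "position N" in the SyntaxError message
          print(f"broken at line {e.lineno}, column {e.colno} (position {e.pos})")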

  • @EH21UTB
    @EH21UTB 2 months ago +1

    Super cool, thank you for these nodes. I got it working in ComfyUI with my OpenAI key, but it can't find my Ollama and models. Most of my models are with LM Studio - I guess they are all in different locations on my computer (Windows 11). I went to the Ollama GitHub page, which suggested environment variables - but don't they mean paths? Can I set extra paths in ComfyUI somewhere for this?

    • @impactframes
      @impactframes  2 months ago +1

      Thanks. I haven't got around to installing LM Studio yet, but another user told me they just needed to change the Ollama port on the node to make it work. I guess LM Studio runs Ollama in the background, so it will find all the models automatically. I think the port is 1234.

    • @EH21UTB
      @EH21UTB 2 months ago

      @@impactframes Thank you, I'll try that!

    • @EH21UTB
      @EH21UTB 2 months ago

      @@impactframes I have been working on that. Starting the LM Studio server with a model and pointing the IF nodes to the right server number doesn't work. The docs for LM Studio aren't fleshed out, but from what I've read so far they use the same protocol as OpenAI, just with a different address. I don't remember for sure, but I think it's not possible to set the server address when you have your node set to OpenAI; perhaps it might be easy to make that change so that one could?
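
      (That matches how LM Studio's local server is usually described: it speaks the OpenAI chat-completions protocol on port 1234, so a sketch like the following should reach it; only the base URL differs from OpenAI's, and the model field is largely ignored in favor of whatever model is loaded:)

      import json
      import urllib.request

      # LM Studio's default local server; the path mirrors OpenAI's API.
      payload = {
          "model": "local-model",  # LM Studio uses the currently loaded model
          "messages": [{"role": "user", "content": "Write a one-line image prompt."}],
      }
      req = urllib.request.Request(
          "http://localhost:1234/v1/chat/completions",
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["choices"][0]["message"]["content"])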

  • @Kingphotosonline
    @Kingphotosonline 3 months ago +1

    Very interested in this, however, at around the 1:30 mark, I was distracted by the avatar's... motions.

    • @impactframes
      @impactframes  3 months ago

      Sorry, I make the video as I work on the computer, and the hands get occluded, so the tracking is lost and they glitch. I am going to make the videos without body tracking from now on, thanks.

    • @Kingphotosonline
      @Kingphotosonline 3 months ago +1

      @@impactframes Oh, it's no big deal. I just thought it was hilarious

  • @alm7traf
    @alm7traf 3 months ago +1

    Hello, the problem has been solved, thank you, but I faced another problem: when I asked it to create an image, this appears on the command screen: Error: ANTHROPIC_API_KEY is required
    Error: OPENAI_API_KEY is required
    Where do I get the API key?
    How is it entered once it is obtained?
    Can you explain it to me? Thanks again.

    • @impactframes
      @impactframes  3 months ago

      I think I fixed that in the latest update, but if you want to use OpenAI you will have to enter the key

  • @Ziov1
    @Ziov1 3 months ago

    Can it be used to analyse images and create a prompt, say for redoing images in batch, instead of having to write a prompt for each one that's generated?

    • @impactframes
      @impactframes  3 months ago +1

      Not yet; for now the image input is individual. I will add batch support tomorrow or Sunday.

  • @97BuckeyeGuy
    @97BuckeyeGuy 3 months ago +1

    How much VRAM do these LLMs require? How do you run them at the same time as running ComfyUI? Do you need a 24GB GPU in order to run them both at the same time?

    • @impactframes
      @impactframes  3 months ago +2

      It depends on the models you run. Ollama also uses the CPU and loads part of the model into RAM. Around the 8-minute mark I talk about the model sizes; if you select quantized models like 2-bit, they are less accurate but produce faster outputs and take less VRAM and RAM.
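
      (A quick way to compare footprints before committing VRAM: Ollama's /api/tags endpoint lists every local model with its size on disk, a rough proxy for how much memory it will want. A minimal sketch:)

      import json
      import urllib.request

      # /api/tags lists locally installed models; "size" is bytes on disk.
      with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
          for m in json.loads(resp.read())["models"]:
              print(f'{m["name"]}: {m["size"] / 1e9:.1f} GB')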

  • @1ASinyagin
    @1ASinyagin 3 months ago +1

    Great work!!!
    Where should the model files be stored?

    • @impactframes
      @impactframes  3 months ago +1

      Around the 7:30 mark I show how to get the models. They get installed as SHA-256 blobs at /usr/share/ollama/.ollama/models on Linux and at C:\Users\username\.ollama\models\blobs on Windows.

    • @1ASinyagin
      @1ASinyagin 3 months ago

      @@impactframes Thank you, I copied it along the path you specified, but the node is still not detected

    • @impactframes
      @impactframes  3 months ago +2

      @@1ASinyagin 1. Install Ollama.
      2. Go into the terminal and type: ollama run adrienbrault/nous-hermes2pro:Q5_K_S
      That will install the model, and then you can ask it any question.
      3. Go to your ComfyUI custom_nodes folder, type CMD in the address bar, and it will open a command-prompt terminal.
      Type: git clone github.com/if-ai/ComfyUI-IF_AI_tools.git
      That will install the custom node.
      Now you can start ComfyUI and load the custom workflow that is in the custom_nodes\ComfyUI-IF_AI_tools\workflows folder; run the queue to generate an image.
      The folder I gave you before is just where Ollama stores your LLM models.

  • @aliyilmaz852
    @aliyilmaz852 3 months ago +1

    Great work!
    Just curious, are you coding all this stuff alone, like an indie developer?

    • @impactframes
      @impactframes  3 months ago +1

      Yes, after my full-time job, but AI helps a lot. If I get stuck on something, it usually takes less time to find the solution; it is not as hard as it used to be.

    • @aliyilmaz852
      @aliyilmaz852 3 months ago

      Thanks for the reply.
      If it is not a burden, can you suggest where to start to get into diffusion? I mean, I want to be capable of coding something useful as an extension @@impactframes

    • @impactframes
      @impactframes  3 months ago +1

      @@aliyilmaz852 The best start would be learning about stable diffusion with course.fast.ai's Practical Deep Learning for Coders, then doing some small Python projects once you know the basics. Get ChatGPT, Claude, or the free Mistral Le Chat to help you along the way.

    • @aliyilmaz852
      @aliyilmaz852 3 months ago

      Thanks a lot! Hope I will be able to understand what you are doing, at least a little :)
      @@impactframes

  • @1videolar
    @1videolar 1 month ago

    I get this error continuously and the workflow doesn't open:
    "Preset text 'N:background' not found. Please fix this and queue again."

    • @impactframes
      @impactframes  1 month ago

      Were you editing the presets? It seems like a preset is missing in one of your files; maybe re-download them.

  • @Hakim3ii
    @Hakim3ii 2 months ago

    Under Windows I could not make it work; I get a CUDA error and the dev didn't fix it (issue 3683).
    Is it possible to use another local LLM front end?

    • @impactframes
      @impactframes  2 months ago

      Kobold.cpp works on the IFchat node
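
      (For anyone trying that route: KoboldCpp exposes a local generate endpoint, by default on port 5001. A rough sketch, assuming the default address and only the basic payload fields:)

      import json
      import urllib.request

      # KoboldCpp's native API; adjust the port if you changed it at launch.
      payload = {
          "prompt": "Describe a fantasy landscape as an image prompt.",
          "max_length": 120,  # number of tokens to generate
      }
      req = urllib.request.Request(
          "http://localhost:5001/api/v1/generate",
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["results"][0]["text"])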

    • @Hakim3ii
      @Hakim3ii 2 months ago +1

      @@impactframes I went to Linux and it's working

    • @impactframes
      @impactframes  2 months ago

      @@Hakim3ii nice