Unlocking the Potential of Large Language Models with ComfyUI | Advanced Tutorial

  • Published 4 Feb 2025
  • In today's video, we'll learn how to harness the power of Large Language Models using ComfyUI. We'll explore loading models, generating stories, extracting tags, and creating related images. We'll utilize the ComfyUI-N-Nodes and ComfyUI-Custom-Scripts libraries to enhance ComfyUI's capabilities. The process involves installing dependencies, configuring GPT nodes, utilizing custom prompts, and much more
    ** Links from the Video Tutorial **
    ComfyUI-N-Nodes for GPT support: github.com/Nuk...
    ComfyUI-Custom-Scripts by pythongosssss : github.com/pyt...
    CUDA Download: developer.nvid...
    W64devkit : github.com/ske...
    Bin Models: huggingface.co...
    Command to install llama-cpp-python (CPU only): ..\..\python_embeded\python.exe -s -m pip install llama-cpp-python (a CUDA-build variant is sketched at the end of this description)
    Workflow**: www.patreon.co...
    ** Let me be EXTREMELY clear: I don't want you to feel obligated to join my Patreon just to access this workflow. My Patreon is there for those who genuinely want to support my work. If you're interested in the workflow, feel free to watch the video - it's not that long, I promise! 🙏
    ❤️❤️❤️Support Links❤️❤️❤️
    Patreon: / dreamingaichannel
    Buy Me a Coffee ☕: ko-fi.com/C0C0...
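
    For illustration only (not from the video): a sketch of the same install pointed at the portable build's embedded Python, but with the CUDA build flags that llama-cpp-python has documented (CMAKE_ARGS / FORCE_CMAKE). These flags have changed across versions, so check the current llama-cpp-python README first.

```python
# Hypothetical sketch: install a CUDA-enabled build of llama-cpp-python into
# ComfyUI's embedded (portable) Python. CMAKE_ARGS / FORCE_CMAKE follow older
# llama-cpp-python docs; verify them against the current README before use.
import os
import subprocess

env = dict(os.environ, CMAKE_ARGS="-DLLAMA_CUBLAS=on", FORCE_CMAKE="1")
subprocess.run(
    [r"..\..\python_embeded\python.exe", "-s", "-m", "pip", "install", "llama-cpp-python"],
    env=env,
    check=True,  # raise if the wheel build fails (e.g. missing compiler or CUDA toolkit)
)
```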

COMMENTS • 81

  • @DreamingAIChannel
    @DreamingAIChannel 1 year ago +4

    IMPORTANT UPDATE: The llama-cpp-python installation is now done automatically by the script. If you have an NVIDIA GPU, NO COMPILER INSTALLATION OR EXTERNAL BATCH LAUNCH IS NECESSARY any more, thanks to the "jllllll" repo!!!
    Also, support for .bin models has been dropped (as deprecated) and only GGUF models are supported now.
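
    For illustration only (not from the video): a minimal sketch of loading a GGUF model directly with llama-cpp-python, which is roughly what the GPT loader/sampler nodes wrap. The model path and prompt are placeholders.

```python
# Hypothetical sketch: load a GGUF model with llama-cpp-python and run one completion.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path/filename
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU on CUDA builds (0 = CPU only)
)

result = llm(
    "Write a short fantasy story about a lighthouse keeper, then list 10 image tags.",
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```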

  • @titobandito6060
    @titobandito6060 1 year ago +1

    Insane! Great content. I'm learning so much! Thank you endlessly!

  • @Archalternative
    @Archalternative 1 year ago +2

    I've been following you for a while and I congratulate you, great videos! This method could potentially lend a hand in the creative process; a really nice approach.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago +1

      Thank you! Yes, I think if we can mix this with other customised nodes the possibilities become endless.

  • @swannschilling474
    @swannschilling474 1 year ago +3

    Thanks a lot for this one! This is the point where ComfyUI starts spreading its wings, since it is not only a GUI but can run anything...
    I had a hard time switching since Auto1111 is just a great user experience, but creating workflows like this is unbeatable!!
    Still loving my Auto1111 though!! 😊

  • @ysy69
    @ysy69 1 year ago +1

    very useful. will definitely try this. thank you

  • @bogdahn689
    @bogdahn689 1 year ago +1

    Very interesting video, amazing new feature, thank you!

  • @yotraxx
    @yotraxx 1 year ago +1

    Just an amazing treasure found through your video! Oo
    Thanks a lot for sharing!

  • @robmacl7
    @robmacl7 1 year ago +2

    Cool new node!

  • @Chriseeverything
    @Chriseeverything 6 months ago

    Great content thank you.

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 9 months ago

    Really enjoyed your video. I do model training and need captioning. I recently started captioning datasets in ComfyUI with GPT-4V and Claude Opus, but I'm still struggling to find a decent local model that reaches their accuracy at a reasonable speed. I tried LLaVA 1.6 Mistral but it's not on par with GPT-4V and Opus. What would you suggest? The needed output is basically tag-format captions for portraits of realistic photographic people, with a detailed description of the subject, outfit, pose, accessories, mood, lighting, photo style etc…

    • @DreamingAIChannel
      @DreamingAIChannel 9 months ago

      Hi! Thanks! Well, I don't think there's anything better than Opus and GPT-4V at the moment. I don't know if you've tried Moondream, but I think beating those two is really hard!

  • @hempsack
    @hempsack 1 year ago +2

    Dear DreamingAI,
    I am thoroughly impressed with the creative approach you have taken in your recent content. Your setup has genuinely captivated my interest. I am curious to explore an idea with you: would it be feasible to develop a chapter-based storyline concept within your framework?
    The idea is to structure the narrative in distinct chapters, each culminating with a unique image prompt that encapsulates the essence of that particular segment. This approach would not only add a visual dimension to each chapter but also enhance the overall storytelling experience.
    Furthermore, it would be intriguing to see if your system can maintain a cohesive memory of the elements created in preceding chapters. Such a capability would allow for a seamless narrative transition, weaving each chapter into a unified storyline. This continuity is crucial for achieving a blended effect, ensuring that the story unfolds coherently from start to finish, accompanied by compelling visuals.
    I am eager to see if such a concept could be integrated into your current setup, enriching the storytelling experience and offering viewers a more immersive and visually engaging narrative.
    Thank you for considering this suggestion. I look forward to any thoughts you might have on this concept.
    Best regards,
    Daniel W.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Hi! Thanks for the comment! So, yes, I think it's possible to develop something like that in ComfyUI with my nodes, but it would be either too limited or too messy IMHO. In fact, I think an idea like this is better implemented with Python and the ComfyUI API, so that you can build a kind of loop that, for each chapter, calls the workflow to generate the text and the image; I don't know if I made myself clear.
      I need to make a video on how to use the ComfyUI API but I haven't had the time yet; to follow it you will surely need some programming basics in Python 😋
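
      For illustration only (not code from the channel): a rough sketch of that chapter loop against the ComfyUI HTTP API, assuming a workflow exported with "Save (API Format)" and a known text-input node. The node id, file name and prompts are placeholders.

```python
# Hypothetical sketch: queue a ComfyUI workflow once per chapter via the HTTP API.
# Assumes ComfyUI is running locally and "story_workflow_api.json" was exported in
# API format; node id "6" and its "text" input are placeholders for your prompt node.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"

with open("story_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

chapters = [
    "Chapter 1: the heroine discovers the abandoned lighthouse.",
    "Chapter 2: a storm reveals a hidden door beneath the stairs.",
]

story_so_far = ""
for chapter_prompt in chapters:
    # Carry the previous chapters forward so each new one "remembers" the story.
    full_prompt = (story_so_far + "\n" + chapter_prompt).strip()
    workflow["6"]["inputs"]["text"] = full_prompt  # "6" is a placeholder node id

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        COMFY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # queue confirmation with a prompt_id

    story_so_far = full_prompt  # naive memory; a real setup would summarize instead
```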

    • @hempsack
      @hempsack 1 year ago

      Thanks for the information on the complications of what I was doing. I was messing with it all day yesterday, and it did kind of fail miserably. It would have been a good idea if it had worked; anyway, thanks for the information. A video on how to use that API would be really great and very much appreciated if you get the time.

  • @kollonkuri
    @kollonkuri 1 year ago

    Cool video, exactly what I wanted to mess around with today. By any chance, are you willing to share the flow? I could go through the video and recreate it, but I hoped to save time by just loading the JSON 😇 (always trying to cut corners)

  • @arcangeel4828
    @arcangeel4828 1 year ago +1

    Great channel!!
    Could you make a video with a more artistic example of GPT use? :)

  • @oraocean
    @oraocean 1 year ago +1

    1:05 Thanks for your creative video. Do I need Visual Studio 2022, w64devkit, and the CUDA toolkit all installed before running install_dependency_ggml_models? Once I get the GPT node, where should I place llama.cpp?

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Nowhere! You just need one compiler (like Visual Studio 2022) so that the batch file you launch can compile and install everything for you! The only thing you need to place in a folder is the models 😊

  • @mohacs
    @mohacs 1 year ago +1

    Hi there. Can you please share a link to the model you are using in the video? As far as I can see, GPT Simple Loader only loads *.gguf files, but in the video you are using a *.bin file, which is a bit confusing. Thank you for the great video and idea.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Hi! I've updated the pinned comment, since I've dropped support for .bin models as deprecated.

  • @INVICTUSSOLIS
    @INVICTUSSOLIS 1 year ago +1

    The part where I got stuck is where you mentioned CUDA, as it's not available on macOS, but this is a brilliant video. If you can do this on a Mac somehow, it would be uber-awesome for me.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago +1

      Oh yes, sorry, you're right! But I know for sure that llama-cpp-python can use the M2 on a Mac, so if you have a device with an M2 you should have no problem at all. Otherwise, I think CUDA used to exist on OS X but it's not supported anymore; you can try with older versions (obviously you need an NVIDIA card in it!) but I don't know if it's too much of a pain in the ass. Unfortunately I don't own a Mac so I can't test it, otherwise I would have done it!

  • @alecubudulecu
    @alecubudulecu 1 year ago

    Awesome video and thank you for the work you do with these nodes!
    Question - any ideas how to use these N nodes with a GPT LLM to read an input image and describe it? Like BLIP but more advanced, using an LLM…

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      For now you can't. There are only a handful of models that are able to do that, and they are really expensive in terms of VRAM, so I don't think I will implement something like that anytime soon, but it will certainly be possible in the future!

  • @seancondev3321
    @seancondev3321 9 months ago

    What Text to Audio AI did you use for the voiceover?

  • @JamesTrue
    @JamesTrue 1 year ago

    Hello. Can anyone educate me on how to get a .bin file? The video references "vicuna-13b-GPTQ-4bit-128g\ggml-Hermes-2-step2559-q4_K_M.bin", but the link in the video description has hundreds of choices and none of them offer a .bin file as a download option. They all seem to be called "model.tensor"

    • @JamesTrue
      @JamesTrue 1 year ago

      You have to download a GGUF file now instead of a .bin, since the official change in August.

    • @harshitpruthi4022
      @harshitpruthi4022 11 months ago

      @@JamesTrue Can I just choose any one of these GGUF files at random?
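
      For anyone who wants a concrete way to grab a single GGUF file, an illustrative sketch using the huggingface_hub package; the repo, file name and target folder below are only examples, not recommendations from the video.

```python
# Hypothetical example: download one GGUF quantization from a Hugging Face repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # example repo
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # pick one quantization
    local_dir="path/to/your/gpt_models",               # placeholder: the folder your GPT loader reads
)
print("Saved to:", local_path)
```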

  • @Archalternative
    @Archalternative 1 year ago +1

    P.S. For connecting nodes with straight lines instead of curves, which parameter did you change? Thanks again.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago +3

      So, I explained it in a past video! 😊 Here's the link to the exact moment where I change the setting: ua-cam.com/video/AjwfswzLmxU/v-deo.html It should be part of ComfyUI-Custom-Scripts (the link to the suite is also in the description of that video)

    • @Archalternative
      @Archalternative 1 year ago +1

      @@DreamingAIChannel Thanks, I missed that video 👍🏻

  • @1982manga
    @1982manga 1 year ago

    Sorry, but I really don't know how to install models from Tom Jobbins' Hugging Face page for the GPT Loader; in fact it appears as "undefined" when I load the node while building my workflow. Generally on Hugging Face pages I simply download a safetensors file and put it into the model folder... can you help me please? I really wanna try this! THX SO MUCH!

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago +1

      Hi, only .gguf models are supported; you should look for the GGUF version of the model that you want to use!

    • @1982manga
      @1982manga 1 year ago +1

      thx!@@DreamingAIChannel 🤩

  • @svenhinrichs4072
    @svenhinrichs4072 1 year ago

    Can someone tell me where I can find the .bin models? I can only see the safetensors ones...

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Hi, you need to search for GGUF models, as .bin has been deprecated!

    • @svenhinrichs4072
      @svenhinrichs4072 1 year ago

      Cool, finally got it working! THX for your instant help! Keep up the great work! @@DreamingAIChannel

  • @SheRoMan
    @SheRoMan 1 year ago

    What is CUDA Toolkit 12.2, and do I need to download it?

  • @adi792G
    @adi792G 1 year ago

    I'm getting an assertion error every time I try to run the workflow. I guess it has something to do with ggml?! I tried different models, still no progress.

  • @AccTeam
    @AccTeam 1 year ago +1

    Great video! Thanks a lot! I tried to run it on Linux Mint 21.2; while loading the model it gives an error: AttributeError: 'Logger' object has no attribute 'fileno'... Can you help please?

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Hi! Yes, it's a conflict between llama-cpp-python and ComfyUI-Manager, but it should be patched now, so just update ComfyUI-Manager and it should work.

    • @AccTeam
      @AccTeam 1 year ago +2

      Super! Everything is working!!! Thank you very much for your work!!!@@DreamingAIChannel

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      👍

  • @raphaellfms
    @raphaellfms 1 year ago

    Is there a way to properly load .safetensors models?

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Nope, you need to convert it into a format that llama.cpp can understand (GGML, or the newer GGUF).
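
      For illustration, one hedged way to do that conversion, assuming a local clone of the llama.cpp repository; its converter script name has changed over time, so check the repo, and all paths here are placeholders.

```python
# Hypothetical sketch: convert a Hugging Face model folder to GGUF with the converter
# script shipped in llama.cpp, then optionally quantize it. All paths are placeholders.
import subprocess

# 1) Hugging Face safetensors folder -> f16 GGUF (script name varies by llama.cpp
#    version, e.g. convert.py or convert-hf-to-gguf.py).
subprocess.run(
    ["python", "llama.cpp/convert-hf-to-gguf.py", "models/my-hf-model",
     "--outfile", "models/my-model-f16.gguf"],
    check=True,
)

# 2) Optional: shrink the f16 GGUF to Q4_K_M with llama.cpp's quantize tool
#    (must be built first; binary name/location depends on your llama.cpp build).
subprocess.run(
    ["llama.cpp/quantize", "models/my-model-f16.gguf",
     "models/my-model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```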

  • @soultakerspirit3121
    @soultakerspirit3121 1 year ago

    Hello. I'm getting an error when I try to use both of the GPU install .bat files:
    ERROR: Failed building wheel for llama-cpp-python
    Failed to build llama-cpp-python
    ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
    I have Visual Studio installed.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Hi, without the full error I cannot try to help you!

    • @soultakerspirit3121
      @soultakerspirit3121 1 year ago

      @@DreamingAIChannel That was the error itself. But as soon as I went to the llama-cpp-python GitHub and used the pip command, it had to force a reinstall. It said the wheels were built successfully. So I don't know what was going on.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Uhm no, the log for that error is huge, something like 100-150 rows before that text. However, I need to check because it's weird; maybe they changed something. Did you build the CUDA version of llama-cpp-python?

    • @soultakerspirit3121
      @soultakerspirit3121 1 year ago +1

      @@DreamingAIChannel I have the updated CUDA for my GPU installed. Also, I'm glad you did this video. Your video is the first I've seen where you can use text-gen models in ComfyUI. If this works properly, this could help those who want to create a visual novel game. Thanks for this. EDIT: Now this is weird. When I start ComfyUI, I get this error even though your node is properly installed:
      Traceback (most recent call last):
      File "E:\StableDiffusionAI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\__init__.py", line 90, in
      spec.loader.exec_module(module)
      File "", line 940, in exec_module
      File "", line 241, in _call_with_frames_removed
      File "E:\StableDiffusionAI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 3, in
      from llama_cpp import Llama
      ModuleNotFoundError: No module named 'llama_cpp'

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      @@soultakerspirit3121 I think that's because you are using the portable installation of ComfyUI (like me), and with the instructions on the llama-cpp-python GitHub you installed it into your local system Python and not into the portable environment. Tomorrow I will give it a look. I'm starting to think it might be easy to provide an already compiled CUDA version of llama-cpp-python, but I need to run some tests.

  • @soultakerspirit3121
    @soultakerspirit3121 1 year ago +1

    Sorry to bother you again. But I was trying to follow your video, and at one point it looks like it speeds up. I have really bad eyesight, so I didn't see everything you did even when I kept rewinding. I have only one eye. Do you happen to have a text tutorial as well?

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Sorry, I don't have a text tutorial; you could try slowing down the video. Otherwise tell me where you got lost and I'll try to help you!

    • @soultakerspirit3121
      @soultakerspirit3121 1 year ago

      @@DreamingAIChannel Before connecting to the positive and negative prompt boxes. Maybe there was something before it.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      Well it's almost at the end of the video so there is a lot before that xD

  • @akkitty22
    @akkitty22 1 year ago

    I don't think this works anymore because of the nodes you are using. String doesn't seem to work the same way as you show.

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago

      "String function" you say? Well only more fields have been added (not mandatory) but it works exactly the same, you only need to use just one.
      "String" on the other hand is my node and works exactly the same

  • @CrashCaustic
    @CrashCaustic 1 year ago

    Thank you, I appreciate the tutorial, but could you use AI to remove all the times you say "uhh" in your videos lol

  • @raphaellfms
    @raphaellfms 1 year ago

    When I try to run install_dependency_new_models I get:
    ERROR: Exception:
    Traceback (most recent call last):
    \ComfyUI_windows_portable\python_embeded\lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
    ...\cli\req_command.py", line 248, in wrapper
    return func(self, options, args)
    \commands\install.py", line 377, in run
    requirement_set = resolver.resolve(
    a bunch of others like this and in the end
    File "importlib\__init__.py", line 126, in import_module
    File "", line 1050, in _gcd_import
    File "", line 1027, in _find_and_load
    File "", line 992, in _find_and_load_unlocked
    File "", line 241, in _call_with_frames_removed
    File "", line 1050, in _gcd_import
    File "", line 1027, in _find_and_load
    File "", line 1004, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'scikit_build_core'
    Can you please help?

    • @raphaellfms
      @raphaellfms 1 year ago

      Fixed it:
      Open install_dependency_new_models with Notepad++
      Remove "--force-reinstall --upgrade --no-cache-dir" from line 15
      Now it works!

    • @DreamingAIChannel
      @DreamingAIChannel 1 year ago +1

      Yeah, thanks for the report! Probably you had an older version cached on your system; that's why it worked after taking that part out! The newer version is giving the problem you pasted, so I've decided to pin the llama-cpp-python version to 0.1.84 in install_dependency_new_models.bat so it will work every time.