ComfyUI Tutorial Series: Ep10 - Flux GGUF and Custom Nodes

  • Published 29 Jan 2025

COMMENTS • 112

  • @pixaroma
    @pixaroma  5 months ago +1

    You can now support the channel and unlock exclusive perks by becoming a member:
    ua-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
    Join the conversation on Discord discord.gg/gggpkVgBf3 or in our Facebook group facebook.com/groups/pixaromacommunity.

    • @atahanacik365
      @atahanacik365 4 months ago

      @@pixaroma exactly 😇 I would love to have a tutorial for renting GPU power (Comfy runs locally, GPU from the cloud), 96 GB VRAM for like 0.8 USD per hour. Thank you for the content mate, not a generic video that people watch from others and copy 👍👍👏👏

    • @pixaroma
      @pixaroma  4 months ago

      @@atahanacik365 I just run locally since my video card can handle it, so for me there is no use in renting online. I saw MimicPC has it for 0.49 per hour

  • @XinZhang-lb7lj
    @XinZhang-lb7lj 21 days ago +8

    I thought this was your original voice. It's clear and beautiful. Perfect for non-native English learners to follow along, especially for me. It turns out you've put in extra effort on this. Respect!

    • @pixaroma
      @pixaroma  21 days ago

      Thank you! 😃

    • @WCRMcG
      @WCRMcG 14 days ago

      @@pixaroma Whoa, this isn't your real voice!? How did you do it?

  • @jameskooiker3311
    @jameskooiker3311 14 days ago +1

    I just wanna say THANK YOU for all your hard work on these videos. I made a huge mess of things when I first started using ComfyUI. After finding your videos I wiped everything and started at episode 1 with a fresh install, and now I have a much, much better understanding of ComfyUI. Thank you for breaking it all down for us. Much appreciated! Best series out there!

    • @pixaroma
      @pixaroma  14 days ago

      Great to hear 🙂 glad it's helpful

  • @TheWayneFang
    @TheWayneFang 17 hours ago

    Thank you for your teaching, it is really comprehensive. It really includes lots of info and is worth watching till the end of the tutorial.

  • @ling6701
    @ling6701 4 days ago

    Thanks for sharing and explaining in detail all those workflows and the install process.

  • @letitbeai
    @letitbeai 1 month ago +2

    10:00 It needs to be noted that the badges are now under Settings at the bottom left... (took me 20 minutes to figure that out lol, I HATE UI changes with the passion of a thousand warriors)
    Great video as always!

  • @joshualaferriere4530
    @joshualaferriere4530 2 months ago

    I love the voice of these demos, perfect enunciation.

  • @TriangoloScaleno.SCALENIUM
    @TriangoloScaleno.SCALENIUM 2 months ago

    YOU. ARE. THE. BEST.

  • @VieiraVFX
    @VieiraVFX 3 months ago +2

    My best channel on youtube... thank you!

  • @karenweiss1313
    @karenweiss1313 3 months ago +2

    you are amazing , thank you for all your time and effort placed into this channel. :)

  • @inkchoi
    @inkchoi 1 month ago

    Thanks!

    • @pixaroma
      @pixaroma  1 month ago +1

      thank you for support 🙂

  • @59Marcel
    @59Marcel 5 months ago

    Perfect tutorial; thanks to this one I rejiggled some of my older workflows with excellent results. And congrats on the 10.2K subscribers.

  • @Ozstudiosio
    @Ozstudiosio 2 months ago +1

    I really enjoy every video of yours :)

    • @pixaroma
      @pixaroma  2 months ago

      Thank you ☺️, and thanks for joining the membership

  • @flyinnleaf
    @flyinnleaf 5 months ago +1

    Amazing tutorial!

  • @DezorianGuy
    @DezorianGuy 4 months ago

    Excellent tutorial as always. Hope you stick around for a while as more models are on the way (I assume). :)

  • @IamAshishChauhan
    @IamAshishChauhan 1 month ago

    Cool videos. Keep it up

  • @jennifertsang6572
    @jennifertsang6572 5 months ago

    Great tutorial and very knowledgeable!

  • @eslamafifi1020
    @eslamafifi1020 4 months ago

    Thank you man,
    as always a great tutorial

  • @majdmmb3839
    @majdmmb3839 1 month ago

    Great tutorial!

  • @konnstantinc
    @konnstantinc 5 months ago

    niiice! Ep010 already

  • @sania3631
    @sania3631 5 months ago

    Thank you! Great video. Much appreciated.

  • @reniferZiolo
    @reniferZiolo 10 days ago

    Seems like my WAS node cannot access the styles? They work fine in the CSV loader?

    • @pixaroma
      @pixaroma  10 days ago

      If you used the steps from episode 07 it should work. Go to ...ComfyUI\custom_nodes\was-node-suite-ComfyUI,
      look for was_suite_config.json and open it with a text editor like Notepad.
      Where it says "webui_styles": null, put the path to where you saved the CSV; just make sure you double the backslashes in the path, so it looks like this: "webui_styles": "D:\\ComfyUI\\styles.csv",
      Restart ComfyUI.
      If it still doesn't work, redownload the CSV file and try again; sometimes when you open or edit the CSV it gets corrupted.
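      For reference, this is the single line to change, before and after the edit (the D:\ path is only the example from above; use your own path, with doubled backslashes):
      before:  "webui_styles": null,
      after:   "webui_styles": "D:\\ComfyUI\\styles.csv",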

  • @TDKakaTrapD
    @TDKakaTrapD 2 months ago

    Thank you for the great tutorials!! Quick question: I can't see the styles in the Multiple Styles Selector node, did I miss something? Looks like a .json file is needed. So that means I can't use your cool 300-styles .csv sheet?

    • @pixaroma
      @pixaroma  2 months ago

      You can, ep 07 and ep15 show different methods

  • @longsyee
    @longsyee 4 months ago

    Is there any styles CSV for Flux just like for SDXL? Looking forward to it.

    • @pixaroma
      @pixaroma  4 months ago +1

      You can use it with Flux also; it's just that Flux doesn't know how to do as many art styles as SDXL.

  • @farey1
    @farey1 2 months ago

    Can I use the regular FLUX T5 encoders, or do I need to use the GGUF ones?

    • @pixaroma
      @pixaroma  2 months ago

      Yes you can; I use Q8 because it is smaller in size and almost the same quality as fp16.

    • @farey1
      @farey1 2 months ago

      @@pixaroma Thank you. Another thing that is confusing is that I came across 2 different VAE models. One is "ae.safetensor", which is about 327 MB, and the other "vae.safetensor", which is about 163 MB... and they both work. Which one should I use? Also, do Schnell and dev use the same VAE? I found separate ones to download on the Flux Hugging Face... and they seem exactly the same?

    • @pixaroma
      @pixaroma  2 months ago

      @@farey1 I downloaded the first one that appeared when Flux was released, the ae.safetensor, and kept using that one; I didn't try the other one. And for Schnell it is the same VAE.

    • @farey1
      @farey1 2 months ago +1

      @@pixaroma Thank you a lot. I appreciate it.

  • @mohammedzaidkhan5687
    @mohammedzaidkhan5687 1 month ago

    Thanks a lot bro, what GPU do you use? What is the processing power?
    I got a 4 GB graphics card, is it enough for this?

    • @pixaroma
      @pixaroma  1 month ago +1

      It is not enough; you may try to run SD v1.5 models, but anything else will be slow or will not work. I have an RTX 4090, so 24 GB of VRAM; to run Flux okayish you need like 16 GB of VRAM.

    • @mohammedzaidkhan5687
      @mohammedzaidkhan5687 1 month ago

      @@pixaroma Oh, I got 4 GB on my laptop, wasn't able to stack ControlNets.

  • @SumoBundle
    @SumoBundle 5 months ago +2

    Thank you. 🙏

  • @Dunc4n1d4h0
    @Dunc4n1d4h0 2 months ago

    Thanks for the idea. I just tried the GGUF version (Q8), results below (latest Comfy, 0.5 Mpix latent image resolution, 20 steps):
    GGUF - 2.44s/it, 48 sec
    dev-fp8 - 1.13s/it, 22 sec
    dev-fp8 with --fast in Comfy: 1.44it/s, 13 sec

    • @pixaroma
      @pixaroma  2 months ago

      It depends on your system, how much VRAM and RAM you have; for me, for example, Q8 is faster than Q4, which is half the size. And the quality is better in Q8 than in fp8.

  • @Jinjinyajin
    @Jinjinyajin 5 months ago

    Love your videos. What is the best inpainting on ComfyUI (not for Flux)? Are you going to make a video on it?

    • @pixaroma
      @pixaroma  5 months ago +1

      I am going to make a video about that also, since I need to test different models; same for upscaling.

  • @systeresmeraldaobene7507
    @systeresmeraldaobene7507 1 month ago

    Great video as always. Thank you. What do you normally use for voiceover generation (not ElevenLabs)? It's so good.

    • @pixaroma
      @pixaroma  1 month ago

      I use ElevenLabs; the voice I use costs 2x credits, probably that's why. I also use my own text so it sounds more natural.

    • @systeresmeraldaobene7507
      @systeresmeraldaobene7507 1 month ago

      @pixaroma Oh I see. I thought there was a different one. With your text, is there a way you edit it to help the AI account for pauses and pace that imitate the natural flow of human speech? If it wouldn't be too much of a hassle and you wouldn't mind, could you do a short video on how you achieve this natural flow? It's the best natural non-human voice I've come across recently. Thank you.

    • @pixaroma
      @pixaroma  1 month ago

      @systeresmeraldaobene7507 Maybe it's the voice, search for Burt us. The pauses I leave in CapCut; you just cut the audio and leave more space.

    • @systeresmeraldaobene7507
      @systeresmeraldaobene7507 1 month ago +1

      @@pixaroma Ok. Thank you. Always looking forward to your uploads. Really great work 👏🏽.

  • @Deathshot_Official
    @Deathshot_Official 5 months ago

    I get the same error, still trying to find a solution. Great work bro

    • @pixaroma
      @pixaroma  5 months ago

      Let me know on Discord what problem you have

  • @NicolasLuthy
    @NicolasLuthy 3 months ago

    Hi, I hope you can help me. I currently run ComfyUI through Colab. However, after installing everything as in your video, I only have SDXL or SD* in my DualCLIPLoader... There is no FLUX 😪 Any idea why?

    • @pixaroma
      @pixaroma  3 months ago

      I don't know how it works on Colab, but make sure the models are in the right folder; the GGUF models don't go into the checkpoints folder but into the unet folder instead. And you need the GGUF custom node to load them.
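      As a rough sketch of the expected layout in a default install (the file names below are just examples, use whichever quant and encoder files you actually downloaded):
      ComfyUI/models/unet/flux1-dev-Q8_0.gguf              <- GGUF diffusion model, loaded by the GGUF unet loader node
      ComfyUI/models/clip/t5-v1_1-xxl-encoder-Q8_0.gguf    <- GGUF T5 text encoder, loaded by DualCLIPLoader (GGUF)
      ComfyUI/models/clip/clip_l.safetensors
      ComfyUI/models/vae/ae.safetensors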

  • @OverWheelsRJ
    @OverWheelsRJ 3 months ago

    Thank you. thank you. thank you!!!!!

  • @miaomiaochen4285
    @miaomiaochen4285 1 month ago

    Why does my "Apply ControlNet" not look like yours? It requires two more inputs: vae and positive. I checked, the node is also from ComfyUI.

    • @pixaroma
      @pixaroma  1 month ago

      They updated the ComfyUI nodes; when I did the episode it looked like that, now they are different. In some of the newer episodes I used the new node.

  • @SOLOLEVELING_666
    @SOLOLEVELING_666 5 months ago +1

    AMAZING!!!!! NICE NICE!!!!

  • @dannykirk2917
    @dannykirk2917 1 month ago

    Hi, love the videos. I'm new to all of this and it's been a massive help with terminology etc. I am trying to produce a batch of 5 images of the same female in slightly different poses. That works great; however, the female's faces are all slightly different and I need them to be the same. Any advice very gratefully received.
    Thanks again.
    Danny

    • @pixaroma
      @pixaroma  1 month ago +1

      There isn't an easy way to do it, that's why we train a LoRA, but you need multiple images for that. You can do one photo of the same character, like a character sheet, but I saw some new technology that was just released that might do a character from different angles, so probably in a few days that will work ok; search for MV Adapter ComfyUI. Once I figure it out I will make a video about that.

  • @atahanacik365
    @atahanacik365 4 months ago

    The blur thing is about the number of steps; for illustrations the image is kind of complete around 15 steps, and at 20 it is iterating toward a different visual. You can either try 15 or 25. After 25 it may again go blurry, and you again get a completed image around 40 steps. FYI =)

    • @pixaroma
      @pixaroma  4 months ago

      Thanks, I saw later that different step counts work differently; it's annoying that I have to keep adjusting steps 😂

  • @eukaryote-prime
    @eukaryote-prime 5 months ago

    Are you familiar with the Hyper-SD LoRAs for Flux that were recently released?
    I tried to set it up but obviously don't know enough, because I just got noise.

    • @eukaryote-prime
      @eukaryote-prime 5 months ago

      With a 4099 isn’t there a local text to speech you could use that is nearly as good was what you’ve been paying for? I don’t find it terribly natural anyways.
      Thanks for all the tuts!

    • @pixaroma
      @pixaroma  5 months ago +1

      I saw it yesterday but didn't test it yet

    • @pixaroma
      @pixaroma  5 months ago +1

      There are; I tried some but didn't find a method that sounds as good. I am sure in a year we will have something as good for free, but I didn't like what I tested so far.

  • @heydrong.
    @heydrong. 3 months ago

    Thank you for your videos, they always enhance my learning.
    The Prompt Multiple Styles Selector's style1, 2, 3, 4 all show only None and can't be enabled.

    • @pixaroma
      @pixaroma  3 months ago

      In episode 7 I show how to do the settings; you need to edit that custom file in a certain way. Or you can try ep 15, it has an easier way.

    • @heydrong.
      @heydrong. 3 months ago

      @@pixaroma thank you

  • @thibaudherbert3144
    @thibaudherbert3144 5 months ago

    Nice! What about the ModelSamplingFlux? You don't use the max shift / base shift parameters?

    • @pixaroma
      @pixaroma  5 months ago +1

      I like to keep the workflow simple, unless you need something specific that requires that function. As you saw, I compared the full dev with that complex workflow against the Q8 dev that doesn't have it, and the results were pretty much the same.

  • @iman-e3z
    @iman-e3z 4 months ago

    Hey, thanks for the great tutorial. For the blurry images, you guys can set the sampler to euler and the scheduler to beta.

    • @pixaroma
      @pixaroma  4 months ago

      Did it fix all the blurred images? For me only sometimes; doing 30 steps also helps, and sometimes a FluxGuidance of like 2.

  • @liquidmind
    @liquidmind 4 months ago

    Hello friend!! How can I add a Flux LoRA loader, so I can use my custom Flux LoRA models?

    • @pixaroma
      @pixaroma  4 months ago

      Hi, I just added one workflow for that on the Discord server in the pixaroma-workflows channel: ControlNet + an example LoRA. You just load your LoRA there, hope it helps.

  • @FacsHitDifferent
    @FacsHitDifferent 2 months ago

    Hi, amazing video. In theory it should work, but I get this error message every time: DualCLIPLoaderGGUF: `newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead. Maybe someone knows how to fix that, I'd really appreciate your help.

    • @pixaroma
      @pixaroma  2 months ago

      You can try this:
      Downgrade NumPy to a version under 2. For example, go to the ComfyUI_windows_portable folder, type cmd in the address bar and press Enter, then run this command and restart ComfyUI:
      .\python_embeded\python.exe -m pip install numpy==1.26.3
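      To confirm the downgrade took effect, you can check the version with the same embedded Python; it should print 1.26.3:
      .\python_embeded\python.exe -c "import numpy; print(numpy.__version__)"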

    • @FacsHitDifferent
      @FacsHitDifferent 2 months ago

      @@pixaroma Thank you so much, it worked out. You do the best tutorials.

  • @TDKakaTrapD
    @TDKakaTrapD 2 months ago +1

    found the issue/solution!! 5:45

    • @pixaroma
      @pixaroma  2 months ago

      Glad you could make it work ☺️

  • @marshallodom1388
    @marshallodom1388 3 months ago

    Where can I get a copy of the workflow without going to Discord?

    • @pixaroma
      @pixaroma  3 months ago

      You can recreate it like I do in the video. I am not at my PC to be able to create a link, so only if you go to Discord; it's free, it's in the pixaroma-workflows channel, and it has all the workflows from all the episodes there.

    • @marshallodom1388
      @marshallodom1388 3 months ago

      @@pixaroma Thanks, I am doing that now, building it from what I see on screen in the video. It's pretty easy and nothing to be intimidated by, thanks to your clear and concise descriptions every step of the way. : )
      I don't subscribe to anything, but I have memorized your channel's name. I'll be lurking in the back of the classroom from now on!

  • @andreizdetovetchi
    @andreizdetovetchi 5 months ago

    What video card are you using to get those speeds? :)

    • @pixaroma
      @pixaroma  5 months ago +2

      RTX 4090; I do speed up the video a bit, but Flux takes about 15 seconds to generate.

    • @andreizdetovetchi
      @andreizdetovetchi 5 months ago

      @@pixaroma And how much video memory? :) Fine, everyone speeds up their videos, that's normal, but I was blown away when I saw 1.23s/it :))) I haven't managed to go faster than 4.92s/it with GGUF, but I only have a 3060 with 12 GB :/

    • @pixaroma
      @pixaroma  5 months ago +1

      @@andreizdetovetchi 24 GB VRAM, and 128 GB system RAM, i9

  • @MarekCezaryWojtaszek
    @MarekCezaryWojtaszek 1 month ago

    Does anybody have a clue why I do not have the 'Badge' item on my ComfyUI Manager menu?

    • @pixaroma
      @pixaroma  1 month ago +1

      I think it moved into the settings; search in the settings for badge.

    • @MarekCezaryWojtaszek
      @MarekCezaryWojtaszek 1 month ago +1

      @@pixaroma Yeah, I found it. Thanks!!

  • @SoySauceFor3
    @SoySauceFor3 2 months ago

    Oh wait, this is a generated voice? Wow, it sounds so real.

    • @pixaroma
      @pixaroma  2 months ago

      Yeah, AI voices are getting better ☺️

  • @thepoonhound3003
    @thepoonhound3003 4 months ago

    What's the difference between the models flux1-dev-Q5_K_S.gguf, flux1-dev-Q5_1.gguf, and flux1-dev-Q5_0.gguf? Like, what do the K_S and the 1 and 0 mean?

    • @pixaroma
      @pixaroma  4 months ago +1

      I am not an expert on this; I use Q8, for example. The letters in model names like Q5_K_S refer to different aspects of the quantization:
      - Q5: a 5-bit quantization level, balancing size and accuracy. Higher bits like Q8 keep more accuracy, while lower bits like Q4 focus on speed and smaller files.
      - K_S: the K marks the newer "k-quant" scheme (weights quantized in grouped blocks with per-block scales), and the S/M/L suffix is the small/medium/large variant, trading a little accuracy for less memory.
      - 1 and 0: these are the older legacy quant formats; "_1" stores an extra offset per block so it generally keeps a bit more accuracy, while "_0" is slightly smaller and faster.
      For example, Q4_K_S would be a 4-bit k-quant in its small variant, faster and smaller but slightly less accurate than Q5 or Q8.

  • @mesandz2623
    @mesandz2623 5 months ago +2

    Got here out of curiosity… could be speaking another language not on earth to me… but interested in learning.

  • @Eddy_Stylez
    @Eddy_Stylez 5 months ago

    I tried Flux GGUF last night and it took 10 minutes to render a 1024x1024 image compared to the 10 seconds it takes in SDXL. My RTX 3070 days are numbered lol

    • @pixaroma
      @pixaroma  5 months ago +1

      Try to update ComfyUI. I just tested the Q4 version now on an RTX 2060 with 6 GB VRAM; it took 30 sec for Schnell Q4 and around 200 seconds for dev Q4.

    • @pixaroma
      @pixaroma  5 months ago +1

      I also tested with this model that is a mix of dev and Schnell; it only needs 4 steps, just like Schnell, and again it takes 30 seconds on the 6 GB RTX 2060: civitai.com/models/657607?modelVersionId=745392 You put it in the unet folder, just extract it first since it comes in an archive. I got the Q4_0 version v2.

    • @SOLOLEVELING_666
      @SOLOLEVELING_666 5 months ago

      Bad Comfy setup bro. I have a 3060 8 GB and the GGUF unet model is so fast, 2 min max for a 720x1280 image. Also update CUDA. If not, just reinstall bro. GGUF is so fast, I just can't believe it.

    • @sania3631
      @sania3631 5 months ago

      Something is wrong with your ComfyUI. Maybe update it.

    • @Eddy_Stylez
      @Eddy_Stylez 5 months ago

      @@sania3631 There are so many different versions of Flux; I probably was using the wrong one? Idk, I haven't been able to find anything saying how quickly a 3070 can render a Flux image. I switched back to Forge and see that it can also run Flux. I'll try the NF4 version instead of GGUF, I hear it's faster.

  • @imaginegamer8189
    @imaginegamer8189 5 months ago

    Can you make a video on how to download CogVideoX-5B with ComfyUI? Btw your channel is very useful, keep up the good work.

    • @pixaroma
      @pixaroma  5 months ago +2

      I saw people using it but the quality is not so great. I was hoping for a better release before we do video with ComfyUI, and there is still more to cover on the image side before we switch to video. Once something works ok I will do videos for it.

    • @imaginegamer8189
      @imaginegamer8189 5 months ago +1

      @@pixaroma I understand, thank you

  • @Deathshot_Official
    @Deathshot_Official 5 months ago

    1st 🫡

  • @ComputerTechnology073124930
    @ComputerTechnology073124930 5 months ago

    Thank you for this valuable list of tutorials. What are the best settings and models to generate images using ComfyUI? I have a GTX 1060 6 GB and 16 GB RAM.

    • @pixaroma
      @pixaroma  5 months ago +2

      Flux is a little much for your system; I got some crashes with the GGUF versions bigger than Q4 on an RTX 2060 6 GB, but that is an RTX not a GTX, and I have 64 GB of system RAM. So what works fast for you are the older SD 1.5 versions; then SDXL, like Juggernaut X Hyper, also works ok for me on that PC. For Flux, only if you test it to see whether it can handle it or not; if it doesn't crash, it can take some time to generate, especially the first time. I used the Schnell fp8 on that PC, just like in episode 8, and it ran ok for me.

    • @SebAnt
      @SebAnt 5 months ago

      I saw you were using ChatGPT to copy/paste text prompts?
      The styles dropdown in episodes 7/8 was amazing.
      Recently I saw a couple of YouTube videos where people are using LLMs like Ollama or Sarge right within the ComfyUI workflow to transform a simple sentence into a descriptive text prompt.
      Are you familiar with this, and do you plan to explore it in a future video?

    • @pixaroma
      @pixaroma  5 months ago +2

      @@SebAnt Actually that is the subject of episode 11 :) I am still working on it and testing, but it should be ready next week.

    • @SebAnt
      @SebAnt 5 months ago

      @@pixaroma