@pixaroma Any chance you could create a video teaching low VRAM users how to install it? I would love to become your student and be able to follow your tutorials, but unfortunately I am limited at the moment by my current hardware.
@ It depends on the computer; some models work faster on one setup and slower on another. Try the Flux GGUF Q4 model with a T5 Q5 or Q8 version for the clip, maybe that is able to run, and maybe replace the VAE Decode node with VAE Decode (Tiled). Not sure what else you can try, it depends on your system; what works for me might not work for you. Some extra system RAM helps, an SSD helps load the models faster, and so on.
Did you try one of the ready-made workflows from Discord for this episode? Maybe you used a different loader, or didn't put the model in the right folder. From the error I see it is looking for it in the checkpoints folder, but the flux_dev model should be in the unet folder.
Thanks for the video! I'm not sure what I'm doing wrong. I tried the NF4 version and got an out-of-memory error on the first run; when I re-ran it, it was really slow, 82 s/it (about 20 minutes per generation), while the fp8 version with the separate UNet takes 2 minutes per image. I'm using a potato laptop with 6 GB VRAM and 32 GB RAM. Do you know why this is happening?
I don't know, maybe it is some setting or something wrong with the version; try updating ComfyUI again. I have the same problem with an RTX 2060 with 6 GB of VRAM and 64 GB of RAM: it took me 6 minutes on the second generation and 9 minutes on the first. I think with the UNet loader it loads all the clip models separately and maybe that helps. Maybe wait a little until things get more stable, since it is all new and I see new versions coming out each day. I am still using SDXL Hyper on my old PC just because it is fast and gets OK results :)) and on the new PC I have only half a second of difference between dev NF4 and dev fp8, so for me it didn't help much there.
I just tried the Forge UI, updated a few seconds ago, and from 6 minutes with Schnell it now takes only 27 seconds, so it is fast. You can give it a try until ComfyUI is fixed, maybe it helps: ua-cam.com/video/BFSDsMz_uE0/v-deo.html
After I run run_nvidia_gpu.bat nothing happens, it just sits on the terminal/cmd screen and I can't even type anything. I don't know how to open ComfyUI. Please help.
@ But can't we just use Replicate to train? I want to apply someone else's model from Civitai to my own images… I trained my model on the Flux dev trainer but the pics look too AI, I want them realistic.
@@goblando I just clicked on the link and it works fine; try mobile, try a different browser, the link works. The server is not public, so you cannot find it with search.
It is about the VRAM, not system RAM; you don't have enough VRAM and Flux needs a lot of it. Maybe try the GGUF models from episode 10; there you have Q4, which is smaller, plus Q5, Q8, etc., all kinds depending on the VRAM of your video card.
I saw a post on Reddit saying it was running on it, but a 512 px image took 6 minutes on dev NF4, so it is better to just use SD 1.5 models, or maybe SDXL Hyper, until you get a better video card.
Download all the workflows from the pixaroma-workflows channel on Discord. The links are also organized there.
discord.com/invite/gggpkVgBf3
If you have any questions you can post them on discord or Pixaroma Community Group facebook.com/groups/pixaromacommunity
invite old?
@@jonahoskow7476 try this discord.com/invite/gggpkVgBf3
@@pixaroma Thank you!
First time I ever sent a tip on UA-cam. Thanks for the amazing work; the way you share your knowledge with such great pedagogy is truly appreciated. I hope you continue to create this kind of video, which is very valuable and allows beginners like me to approach these topics without any friction.
Thank you so much for your support, more tutorials are coming each week ☺️
bro you are crazyyyyy!!!!
Your content in this series is a money-worthy course for free, you are a very decent teacher. I just binged the whole series and discovered you uploaded this right when I was looking for Flux content to get started!!!
Thanks a lot, I wish I could support you more.
thanks, glad I could help :)
Brilliant tutorial series !!! The best comfyui tutorial series on youtube by far ..
Since Flux launched, I kept running away from it due to how complicated it looked. You explained pretty much everything needed to get started. Thank you for this video!
It's a great adventure to follow your episodes. Thanks. This episode felt like Christmas
Glad to hear it ☺️
The way you explained this Flux model is clear and crisp 👍🏼👍🏼
Excellent, no really excellent step by step instructions 👍🏼👍🏼
I cannot explain how much this video means to me. Fantastic!!
Oh, and you have a very pleasant voice.
Thanks again!
My dude, you are the single best software trainer I have seen on YT
I have to say that I really appreciate your help learning all of this. Your videos are very well made, detailed and informative. I have NEVER wanted to use ComfyUI and now I LOVE IT! I watched several videos and they always made my head hurt!! Yours made sense!! Great job! I'm looking forward to watching more of them... I know I am barely scratching the surface! 🤓
I appreciate that, thank you 😊
Incredible work! This playlist of yours is a service to the world. Thank you very much! Subscribed and liked.
thank you 🙂
Man you are a giant in the AI community! Your work will truly leave legacy! Stay blessed! 💯
thank you so much 🙂
Thanks for the tutorial. Really well explained and great editing.
Thanks for sharing all the workflows, you are a legend.
It's simply amazing how thoroughly you explain every step and you are so eloquent - I'm devouring every video - 2 days and already at episode 7 from being total noob at this ..
TYSM ❤ (too poor now to donate but I hope soon I will be able to. Won't forget this favour of yours - I promise !)
Thank you ☺️
Thanks for your great contribution. Your videos and other materials provide a comprehensive and easy-to-follow resource for using ComfyUI and Flux together.
very clear explanation for the CK, Good tutorial😁
I will continue to comment and like every video you make cuz your content is fantastic! Thank you!
I appreciate that, thanks 🙂
Thank you so much!
This must have been a lot of work.
And it helped me so much to understand.
I'm glad I found your channel.
Cheers 😊
thank you 🙂
this is the information I was looking for presented so efficiently and neatly.. Thanks
Great course! I have one contribution. In this chapter, you could explain that by downloading the image and loading it into ComfyUI, it grabs the metadata along with the entire workflow. I was initially confused about why we needed to download the image, but then I figured it out.
Thanks, I think I explained it in one of the episodes, but it is good to have it in the comments if someone looks for that info.
This is very comprehensive information. Thanks very much for your sharing!
Great tutorial series, very easy to understand. Thank you :)
This was great. I did a side by side test using Euler/Normal and your advice... huge difference. Thanks!
Glad it helped ☺️
Thanks a ton for the amazing effort you put into your tutorials. They truly stand out!
Excellent tutorial, very detailed without being too complex. Thank you!
Amazing Quality of this video. Thanks so much for sharing
Great tutorial. I always learn so much from your videos!! 😊
As a photographer I find it pretty amazing that a company that used datasets created without consent or rights of usage now, after spending time, money, and effort, has the guts to gatekeep the results. I made my peace with AI a long time ago and learned to use it as the amazing tool it is, but my mindset will never change on the fact that if you are going to use other people's work to improve something, you can't gatekeep it and need to share the results.
Still, what they do is amazing; also, you are the best I've seen at explaining this.
Yes, I am not going into that; they did approach the training the wrong way, but now it is too late to do something about it.
This is the best flux beginner tutorial I have found on UA-cam and it has everything I need to get started. Thank you so much! So it looks like FP8 might scratch my itch under most circumstances. What I am wondering about is, with most LoRAs available on the internet being generated against SD1.5 or SDXL, what am I gonna combine with Flux? Also, where could I find additional styles like you have in your CSV, which are recognised by Flux?
Check episode 7 for styles and how to use them. Only LoRAs made for Flux will work with Flux, you cannot combine them with SD1.5 or SDXL ones. Also, I saw ComfyUI needs converted LoRAs; the default ones didn't work.
Thanks!
thank you so much 😊
You said leave the comment or like if find smth useful... your video is full of usefull!
Impressive work and great video! Thank you very much!
amazing vid as always! thank you!
Best guide ever!
Thanks for the vid, super informative thank you
Very good! Thanks!
Great video! Thank you!
yeahh finally got the proper updates! thanks very much!
Great Vid!
Great video !😊
Sir, your tutorial is great. will you teach about Control Net?
Yes, I will do SDXL ControlNet first, and in later videos, when more ControlNets appear for Flux, I will cover that as well.
@@pixaroma Thank you very much, and I wish you a happy life.
Excellent tutorial series I watched so far for ComfyUI and FLUX. Great work and thank you! By the way, can you share how to make the animation at the end this episode? D-ID or Heygen?
I used image to video online on the Kling Ai website to get that animation
@@pixaroma Many thanks!
great video, I am going to install and try this today. I am currently using SD Forge to do some transparent bg images.
Thank you for this video. Very interesting for me
Thank you so much!
really useful . thank you so much
amazing tutorial
thank you very much
it's really helpful, thx
Unfortunately the Flux Pro version is not available any longer. By the way, I think Stable Diffusion is obsolete; Flux (and Mystic) is erasing it. Thank you for this lesson, again.
Flux Pro was only available as an API, never for download. Flux is really good for a first version; I hope they will release more.
Is Flux no longer available before I watch this?
He was talking about the Pro version, which was never released for download; I use dev and schnell, which are available and always were.
best tut on youtube
Thank you so much for your great work!! You really make the process easy to understand! I have one question: since I am a Mac M1 Max user, the fp8 models don't really work well with my GPU. Do you know which are the equivalent fp16 files and where to find them? Thanks!!
try maybe gguf models like q8 or q4
alwayse the best
great effort here
E-boy! Still on the road, will watch the video in a couple hours.
thanks you so much!!!
Thanks!
Is it possible to use ComfyUI for changing my own videos? Like to change the style/colors of a video?
Not sure, I didn't do research on video yet, I am still on image workflows.
My dear friend, you are brilliant!) Thank you so much!) I wish you success in all spheres in your life!)
Thank you for your work!)
Maybe somebody knows how we can remove artifacts(like noise, some random pixels or "3d" shade on the bottom while generating the logos)?
If the image is not too big, like 1024x1024 px, it usually comes out OK; I got artifacts when I tried to make images that were too big, more than 2 megapixels.
16:20 But the Flux text encoders are encoders, so wouldn't they go in that folder?
Probably when it is separate you have more control and can combine different clip models with different diffusion models depending on your PC configuration, just like ComfyUI can do more because it is modular. There are fp8 checkpoints that include those if you want to have the clip inside.
Hey, thanks for sharing this! With the Flux Schnell model I'm getting this "CheckpointLoaderSimple
Error while deserializing header: HeaderTooLarge" when queueing. I'm running ComfyUI on Runpod. Can you please advise?
Someone on Forge had a similar problem; for him, re-downloading the model fixed it. Something similar happened with another UI: github.com/AUTOMATIC1111/stable-diffusion-webui/issues/8123
Thanks a lot!
more, more more :! :))
thx alot!
Hello, can you make a tutorial on how to add upscaler and lora to this workflow, That would be very much appreciated
I have other episodes, like episode 12 for the upscaler, and a later episode for Flux LoRA.
I did all the steps as instructed, but when I double-clicked run_nvidia_gpu.bat, ComfyUI couldn't launch successfully. So I double-clicked the ComfyUI icon to launch it and tried to generate images using NF4. It reported: "All input tensors need to be on the same GPU, but found some tensors to not be on a GPU." Then I tried another workflow to see if it could run; the same error was reported. So I came here to get your help.
Can you also try episode 10, maybe that works. Did you use the workflow I shared on Discord? Those should work unless some nodes messed up your ComfyUI.
@@pixaroma I‘ll watch episode 10 and download your workflow. Thank you, appreciate it!
How would you simplify their workflow?
I also find it needlessly complex.
I mostly use the dev fp8 now; the workflow looks similar to SDXL and the quality is good. I have more compact versions of those workflows on Discord.
Thanks very much. I have a problem: when I generate a picture it gets stuck right after loading completes, at KSampler 0% (0/4).
Do you have enough VRAM? Maybe it is too big for your video card. Do other models work, like Schnell?
ok so in minute 7:18 you mention that on a 6Gb GPU it takes 5 minutes to generate 1 image?
I am at that old PC now, an RTX 2060 with 6 GB of VRAM and 64 GB of system RAM. I updated ComfyUI and am redoing the test. I will get back to you with results.
I had some problems with ComfyUI and had to reinstall it in a different folder because it took forever to load the model.
Here are the results on the RTX 2060 with 6 GB VRAM. The first generation takes longer because it loads the model, but after that it is faster.
flux1-dev-fp8
First generation time: 155 sec
After that only: 113 sec
flux1-schnell-fp8
First generation time: 63 sec
After that only: 26 sec
@@pixaroma nice!! lot better... is this using your tutorial for comfy ui?
@@liquidmind yes
@@pixaroma no luck for me on ComfyUI, it takes 6 minutes compared to 1:30 minutes on Forge!! Same dev fp8 model!!!! How did you manage to get so fast on Comfy with 6GB VRAM?
where did you get this prompt styles selector and styles 😬
Check episode 7, I explained everything there; I created the styles file myself, and I use some custom nodes to load it.
Could you Please make a tutorial about Prompting for image generation.
I cover that in this video: ua-cam.com/video/0Pmfpoi3-A0/v-deo.htmlsi=MA21lIwQKIJCoyom — for Flux just use normal language, or ChatGPT; it is quite good at understanding.
Is it possible to have a separate directory for my flux models on a different drive? I really don't want to move my entire automatic 1111 folder to a different drive
check this video ua-cam.com/video/nkFr81sOehU/v-deo.html
thank you
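(For reference, ComfyUI can also be pointed at model folders on another drive through the extra_model_paths.yaml file in the ComfyUI root, which is what the linked video covers. A sketch only: the section layout follows the bundled extra_model_paths.yaml.example, and the drive and folder paths below are placeholders, not anyone's real setup.)

```yaml
# extra_model_paths.yaml — reuse an existing Automatic 1111
# install on another drive without moving it (example paths).
a111:
    base_path: D:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
```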
Hello, that was very helpful, but can I ask why I can't queue the prompt with a weight_dtype other than fp8_e4m3fn? It just gets stuck on Load Diffusion Model, and then ComfyUI just says "Reconnecting" while I don't receive any specific message in the console.
Usually when it does that it has run out of memory, so it is like it is crashing. Maybe it is too much for your PC with the other types. You can also try adding arguments in the run .bat file, like --lowvram, to see if it helps.
@@pixaroma yeah, I was thinking of that; I just opened the performance screen and see that all 32 GB of RAM is gone. Thank you for the answer!
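(For anyone looking for where the --lowvram flag goes on the portable Windows build: it is appended to the launch line inside run_nvidia_gpu.bat. A sketch; the line below matches the default portable build's bat file, with only the flag added.)

```bat
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
pause
```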
How do you use the LoRA and the styles together? Because the Easy Positive node doesn't have a clip input, so I don't know how to do it.
You don't connect it to Easy Positive, you connect it to the CLIP Text Encode node. If you right-click on the text encoder you can convert the widget to text, so you have 2 inputs, clip and text: the LoRA goes to clip, and the positive goes to text. I have examples on Discord in pixaroma-workflows.
Hey, I have updated ComfyUI using the Manager/Update All option, but after that, the UI is not loading, I get a blank white screen only. Any ideas how to resolve this problem?
Is there any indication in the cmd window of a node that didn't install or something? Maybe you can remove the node that causes problems.
@@pixaroma there is only one warning regarding the WAS node suite: WAS Node Suite Warning: `ffmpeg_bin_path` is not set in
That should not influence it, I think; in the worst case you reinstall ComfyUI, a fresh installation in a new folder.
@@pixaroma yes, I already did that once, as I am following your tutorials, and based on your videos I need to update ComfyUI... but when I do, I end up with the white blank screen. Also, without the update I cannot use Flux... 😞
Check the comments on this Reddit post, someone said one of the suggestions there worked: www.reddit.com/r/comfyui/s/w3d0yIanXg
holy shit that's quality
Hello, I did everything you said in the tutorial but I cannot find the NF4 checkpoint in the search, what should I do? I have a 4060.
Check episode 10, everyone has switched to GGUF models.
@@pixaroma thank you for everything man
When I try to use the prompt multiple styles selector node, the message "none" appears instead of styles. What should I do? I couldn't find a solution.
Can you post on Discord? Probably you missed a step; post some screenshots there in the ComfyUI channel. The link to Discord is in the YouTube header where the links are located.
👍👍👍
Can't wait to go professional with Flux - with your tutorials by my side.
Same, just discovered all this Stable Diffusion/Civitai/Comfy/Black Forest Labs mess yesterday and I am still blown away.
ep 8 completed
I tried fp8, using a 4070 12GB card, with 32GB RAM, and this is the second time it crashed. Not as violently as the 16-bit version. I'm still not sure what I'm doing wrong. It also took forever to load. I bought more RAM, I'm hoping that will help out here. I can only assume I'm still maxing out the RAM. Maybe a bigger swap?
It shouldn't crash; I can run it on 6 GB of VRAM, it just takes a long time. Try looking up the GGUF models and their nodes; there are Flux dev Q4, Q6, Q8 and so on. Start with the Q4 version and see if that one works; the workflow is a little different, so search for a tutorial online first.
ANYONE knows how to use control net for FLUX in FORGE? I get a model error if i use CANNY or DEPTH from XLabs-AI/flux-controlnet-collections
2024-08-17 21:38:29,461 - ControlNet - ERROR - Recognizing Control Model failed: C:\TEMP\FORGEFLUX\webui\models\ControlNet\flux-canny-controlnet_v2.safetensors
I saw someone using them in ComfyUI; I don't think they work yet with Forge.
@@pixaroma yes, but on Comfy I see tutorials where people actually need to wait 5 minutes for a Schnell generation on a 6GB VRAM card like I have, compared to Forge where I can do a Schnell generation in 20 seconds!!! So unless you have a ComfyUI tutorial that makes a 6GB VRAM card work as fast as it does in Forge, I might have to wait....
Is this comfy tutorial good for a 6gb Vram GPU?
I did everything as instructed, but neither the dev nor the schnell fp8 files let me generate images; SDXL models do work though, and yes, I put them in the correct file path. I just get this message from the CMD:
got prompt
G:\ComfyUI_windows_portable>pause
Press any key to continue . . .
any ideas ?
Flux needs a lot of VRAM; maybe you don't have enough to run it. Usually that happens when it is out of VRAM and you get that pause error. I was able to run Flux Schnell Q4 on 6 GB of VRAM, the one from episode 10, maybe try that.
@ I have an RTX 3080, would that explain it?
Not sure, try smaller models and see if works
@@pixaroma my VRAM is 12GB 🤔 I’ll try smaller models anyway but it’s confusing me a little, you gained a sub btw, great content!
Question - what resolutions can be used other than 1024*1024? Will the picture be deformed in the same way if I use 1280*720/1920*1080?
You can use up to a 2 MP resolution. It can work quite well with different ratios, compared with SDXL, which starts messing up the composition if you go too high. Just don't go too high, and try to have width and height divisible by 16 or 64; I saw it gets better results. So for example I use 1368x768, then I upscale it to 16:9 for YouTube, but it can also do 1920x1080, though sometimes it can look a little blurry. Just keep in mind the bigger the size, the more time it takes to generate, and you need more VRAM to handle really big sizes.
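(The rule of thumb above — dimensions divisible by 16, staying under roughly 2 MP — can be sketched as a tiny helper. This is just an illustration, not something from the video; the function name and the 2 MP default are my own.)

```python
def snap_resolution(width, height, multiple=16, max_megapixels=2.0):
    """Round a requested size down to the nearest multiple of `multiple`
    and report whether it stays within the suggested megapixel budget."""
    w = (width // multiple) * multiple
    h = (height // multiple) * multiple
    megapixels = (w * h) / 1_000_000
    return w, h, megapixels <= max_megapixels

# 1368x768 is not divisible by 16, so it snaps down to 1360x768:
print(snap_resolution(1368, 768))   # (1360, 768, True)
```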
@@pixaroma What can you recommend for improving photos (upscaling)? So that nothing changes, or changes minimally? Some upscalers change the details in the picture.
When I'm in a hurry, I just use Topaz Gigapixel AI with the sliders at a minimum :) In ComfyUI I use Ultimate SD Upscale with low denoise. I plan to do a video on upscaling, maybe next month, with different methods, so I will do more research then.
@@pixaroma Good! I've watched different videos, and what looks normal to the authors looks weird to me. I even watched uses of the likes of the 4x_NMKD-Siax_200k upscaler.👍
I don't have Topaz Gigapixel built in, so I'll wait for this topic to be covered as well.🤔
@@Fayrus_Fuma Topaz Gigapixel AI is paid software specialized in upscaling; it is not in ComfyUI. But I will do my research on different methods and present my findings in a video.
I have a 3090, and 20-step generation time with fp8 is like 35-50 seconds. I don't use the xformers Python lib because I had problems installing it, but ComfyUI runs even without it. For you, is it the lack of xformers that determines the long execution time? I was shocked when I saw that a 4090 can do 15 seconds.
I have xformers and yes, I can do a 1024px image in about 14 seconds, so more VRAM helps speed up generation. Not sure how it is without xformers.
@@pixaroma Thanks, I will try to install xformers and see how it goes. VRAM is 24GB for both the 3090 and the 4090.
I installed ComfyUI with the standalone build for Windows, but I don't have the Manager at the bottom. Any idea why?
Check episode 1; it's a custom node that you must install, since it is quite useful. After that it will appear.
@@pixaroma Thank you, I will check all your episodes !
There is no Manager now in ComfyUI, where is it?
Did you install it? On the new interface it should be blue with a puzzle icon.
@@pixaroma I reverted back to the old menu by disabling the new menu in settings. I have all those save/load/refresh settings, but I still can't find the Manager, and I'm running into an error: size mismatch :(
You could maybe uninstall the Manager and try to install it again.
@@pixaroma ok I will try thanks!
Is it possible to install it successfully with 6GB of VRAM?
For me it was slow; I used Flux Schnell on 6GB, but Dev was slow.
@@pixaroma Thank you for the reply 💜💜💜
@pixaroma Any chance you could create a video teaching low VRAM users how to install it? I would love to become your student and be able to follow your tutorials, but unfortunately I am limited at the moment by my current hardware.
@ It depends on the computer; for some there are models that work faster and others that work slower. Try Flux GGUF Q4 with a T5 Q5 or Q8 version for CLIP, maybe that will run, and replace the VAE Decode with VAE Decode (Tiled). Not sure what other things you can try; it depends on your system, and what works for me might not work for you. Some extra system RAM helps, an SSD helps load the models faster, and so on.
@pixaroma Thanks for the answer. If possible, could you make a tutorial for low VRAM users?
The Flux model doesn't load: ERROR: Could not detect model type of: F:\ComfyUi\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux_dev.safetensors
Did you try one of the ready-made workflows from Discord for this episode? Maybe you used a different loader, or didn't put it in the right folder. From the error, I see it is looking for it in checkpoints, but the flux_dev model should be in the unet folder.
Thanks for the video! I'm not sure what I'm doing wrong. I tried the NF4 version and got an out-of-memory error on the first run; then I re-ran it and it's really slow, 82s/it (20 minutes per generation), while using the fp8 version with the unets takes 2 minutes per pic. I'm using a potato laptop with 6GB VRAM and 32GB RAM. Do you know why this is happening?
I don't know; maybe it's some setting or something wrong with the version. Try updating ComfyUI again. I have the same problem with an RTX 2060 with 6GB of VRAM and 64GB of RAM; it took me 6 minutes on the second generation and 9 minutes on the first. I think with unet it loads all the CLIP models separately, and maybe that helps. Maybe wait a little until things get more stable, since it's all new and I see new versions coming out every day. I am still using SDXL Hyper on my old PC just because it's fast and gets OK results :)) On my new PC I only have half a second of difference between Dev NF4 and Dev fp8, so it didn't help much there.
I just tried Forge UI, updated a few seconds ago, and from 6 minutes with Schnell it now takes only 27 seconds, so it's fast. You could give it a try until ComfyUI is fixed, to see if it helps: ua-cam.com/video/BFSDsMz_uE0/v-deo.html
After I run run_nvidia_gpu nothing happens; it just sits on the terminal/CMD screen and I can't even type anything. I don't know how to open ComfyUI. Please help.
Nvm, I just needed to wait about 2 minutes on the CMD screen lol
Yes, the first time takes more time.
I wish I could figure out how to download it and make this work.
If you have watched episode one and this one, and you have Windows and a good Nvidia card with enough VRAM, it should work.
can you train your OWN images with this?
You need something like FluxGym to train on your own images locally, but I haven't tried it yet since it was a little tricky to install.
@ But can't we just use Replicate to train? I want to apply someone else's model from Civitai to my own images… I trained my model on the Dev Flux trainer, but the pics are too AI; I want them realistic.
@@raz0rstr sorry, I don't have much experience with training
My fav model so far; too bad it takes so long on my 3060, I am about to lay an egg.
Great videos!
The Discord link is expired though, can you update it please?
The one from the YouTube header should always work: discord.com/invite/gggpkVgBf3
@@pixaroma For some reason it always says "invalid link". Can you just give me the server name, so I can manually search for it and join? Thanks
@@goblando I just clicked the link and it works fine; try mobile, try a different browser, the link works. The server is not public, so you cannot find it with search.
✨👌💪😎🤗😎👍✨
How come I don't have the Manager button in my side menu?
Did you install it? It's in episode 1.
Yes I installed it following episode 1
When I use Flux, it uses up to 99% of my RAM and won't load, and I have no idea how to fix this. I have 32 gigs of RAM and an RTX 3050.
It's about the VRAM, not system RAM; you don't have enough VRAM, and Flux needs a lot of it. Maybe try the GGUF models from ep 10; there you have Q4, which are smaller, Q5, Q8, etc., all kinds depending on the VRAM of your video card.
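The "pick a quant level for your VRAM" idea above can be sketched as a rough rule of thumb. The thresholds and quant names below are my own illustrative guesses drawn from these comments, not official recommendations for any Flux GGUF release:

```python
def suggest_quant(vram_gb: float) -> str:
    """Illustrative mapping of GPU VRAM (GB) to a Flux GGUF quant level.
    Thresholds are assumptions for demonstration, not tested guidance."""
    if vram_gb >= 16:
        return "Q8_0"   # plenty of headroom: highest-quality quant
    if vram_gb >= 10:
        return "Q5_K"   # mid-range cards
    return "Q4_K"       # small cards, e.g. the 6GB setups in this thread

print(suggest_quant(6))  # → Q4_K
```

In practice you would still verify each model actually fits, since the CLIP/T5 text encoders and the VAE take memory too.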
@@pixaroma yeah, well, the strange thing is that when I check the Task Manager my RAM goes straight to 99%
@@pixaroma it is my system RAM, for an unknown reason. I thought it would be my video RAM too, but it isn't.
Maybe it takes too much time to load from where the model is now. I have it on an SSD drive, so it loads faster than an HDD.
@@pixaroma Good to know, thanks.
Hi, after the update I got a white screen...
Try to update again, maybe. What does it say in the command window? What error?
Can it run on a GTX 1070 with 32GB RAM?
I saw a post on Reddit saying it was running on it, but a 512px image took 6 minutes for Dev NF4, so it's better to just use SD 1.5 models, or maybe SDXL Hyper, until you get a better video card.
@@pixaroma thanks bro