Just wanted to say thanks for the video. Straightforward and easy to understand. Managed to finally get up and running by following this having previously been struggling
You're very welcome! I appreciate you sharing your feedback, and I'm glad the video helped you get up and running with FLUX.
Right! After 20 seconds I felt this is a non bu||s#|+ video that makes me hate nodes and the word comfortable. Thank Christ! I think i will finally understand this crap! So used to A1111
@ctrlartdel The shift from A1111 to ComfyUI can feel like a huge leap, but once you get past the initial hurdles, it opens up a whole new world of possibilities.
You're on the right path, and soon enough, those nodes will feel like second nature.
Although I have it running already, a "here's the basics, start from scratch" video is SO hard to find. Thanks for doing it!
Thank you for the awesome feedback!
Just an amazingly simple video with fantastic results! Thanks for this short & precise to the point tutorial. Very much appreciate it.
It's great to hear you found it meaningful. Thank you for the kind words!
Great tutorial, my man. The speed is a great middle-ground approach: not too fast, not too slow. One love.
Thanks for the awesome feedback! I'm glad you found the pacing just right. Happy creations, and much love to you too.
I don't usually comment on videos, but as our friend said, thank you for explaining the process from scratch ❤.
I'm really glad you made an exception to comment; your feedback means a lot! ♥
This workflow runs much faster on the initial runthrough than the flow included on the HF page. Thanks for posting.
Thanks for the feedback, and you're welcome! I'm glad to know the workflow is speeding things up for you!
superbly well explained one by one, well done 👍
Thank you for the compliments!
thanks man
Thanks for the Luv! You are welcome.
great video!
I am glad you enjoyed it!
thank you! liked and subscribed :D
Welcome aboard and thank you as well. :)
Thank you!
You are welcome, and thank you for the love.
Thanks for this.. major help.
Glad to hear it! You are most welcome.
Hey, what was the prompt with the half-tree, half-human face from the start of the video? I really liked that one :)
Hello there, thank you, and the prompt is right here.
" A hyper-realistic human face, left side erupting with a gnarled oak tree, bark fusing seamlessly with flesh. Right side crumbling granite, veins of quartz glinting. Roots intertwine with stone fragments. Dappled sunlight, moss-covered ground. Renaissance-inspired composition, evoking nature's reclamation of humanity "
@goshniiAI Thank you very much for the prompt! I will play around with it once I'm back from vacation :)
What are you using that shows the hardware performance? :)
Great tutorial!
Thank you! It is the resource monitor.
To install it, you can watch the video right here: ua-cam.com/video/tIfr_duWyZQ/v-deo.htmlsi=Pwz-i9EXQwVOhLQw
@goshniiAI Thank you very much 😊
Edit: When I used 'Load Diffusion Model', it detected the model type, so I used that instead, but I'm still confused as to why 'Load Checkpoint' won't detect it. *shrugs*
---
ComfyUI can't seem to detect the type of model for FLUX.
Error occurred when executing CheckpointLoaderSimple:
ERROR: Could not detect model type of: E:\ComfyUI\ComfyUI\models\checkpoints\flux1Dev_v10.safetensors
If I swap it with any one of my SD or SDXL models, it renders fine.
Any way to tell ComfyUI what type of model I'm using?
Thanks.
Hello there! If you intend to use 'Load Checkpoint', ensure that all model files are in the correct format and directory. That node expects an all-in-one checkpoint (model plus CLIP and VAE), while the standalone FLUX Dev file contains only the diffusion model, which is why 'Load Diffusion Model' detects it and 'Load Checkpoint' does not. You should also make sure that your ComfyUI is up to date, as upgrades can sometimes fix integration problems.
I run FLUX as the checkpoint version. It's been working great for me. Hands, though, haven't lived up to the hype. They still end up looking like they just got pulled out of a garbage disposal. I'm trying to see if I get better results with different samplers and schedulers.
Have you tried the realism lora? Ive had incredible hands with it
Experimenting with different samplers and schedulers is a smart move. I'd suggest really focusing on refining your prompts too; it can sometimes make a world of difference.
Exactly what I was looking for! Any idea to put all this into a docker container for easy deployment?
I'm glad you found exactly what you were looking for. While I haven't put together a Docker container for this workflow myself, it's definitely an interesting idea to look into. Thank you for the great suggestion!
@goshniiAI I will try to do so and let you know. It would allow people to deploy it on online services without a hassle. Most of us do not have adequate GPUs to run it locally.
@RafaelKluender Please do, and I will be glad to look into any information you provide. What online service are you currently using?
When I use the FLUX 1 Dev standard flow, I can't use a KSampler. It seems this only works with checkpoints?
I've been trying for a while.
Hello there. You are correct. The FP8 model works in conjunction with the checkpoint node and the KSampler; however, in order to use FLUX 1 Dev, you must also have the Load Diffusion Model and the SamplerCustomAdvanced nodes.
I have trained my own face as a LoRA model, and I can use it on a site called "Replicate", but I have limitations in creating photos there. Is it possible to generate images using my own face LoRA model in FLUX with ComfyUI?
YES! You can definitely use your custom LoRA model with FLUX in ComfyUI. However, make sure your LoRA was trained to be compatible with FLUX.
HOW did you load that FLUX model with a Load Checkpoint? It doesn't appear in my Load Checkpoint, so already this doesn't work...
The model might not be placed in the correct folder, or there could be a naming issue.
Make sure the FLUX model file is in the checkpoints directory within your ComfyUI setup and that it's named correctly.
Also, double-check that the file has the ".safetensors" extension that ComfyUI recognises.
If it still doesn't show up, try restarting ComfyUI to refresh the list. Sometimes a simple reboot can resolve these issues.
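If you want to sanity-check this outside the UI, here is a minimal Python sketch (the folder path is an assumption, so point it at your own install) that lists exactly the files a checkpoint loader dropdown would scan:

```python
from pathlib import Path

def list_checkpoints(folder: Path) -> list[str]:
    """Return the .safetensors filenames found in a checkpoints folder."""
    return sorted(p.name for p in folder.glob("*.safetensors"))

# Hypothetical path: adjust to your own ComfyUI install location.
checkpoints = Path("ComfyUI/models/checkpoints")
if checkpoints.is_dir():
    # If your FLUX file is missing from this list, it is in the wrong
    # folder or has the wrong extension.
    print(list_checkpoints(checkpoints))
```

If the file shows up in this listing but still not in the node, restarting ComfyUI refreshes its cached list.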
Is there a way to use FLUX with AnimateDiff for creating videos?
FLUX with AnimateDiff to create videos would be super exciting, and I'm hoping the nodes to make that happen will be available soon.
Can you make a video or reply in a comment on how to get ControlNet settings like "prompt is more important", "balanced", or "ControlNet is more important" in ComfyUI? These settings are available in Automatic1111 or Stable Forge, but I don't see this kind of setting for ComfyUI anywhere I search on the internet. Thank you! I want these settings because they are important for my workflow.
That's a good point! ComfyUI does not have a direct setting for this like Automatic1111 or Stable Forge, but you can get a similar effect by tweaking the strength value on the Apply ControlNet node in your workflow.
- Lower strength gives the checkpoint model more flexibility (prompt is more important).
- Higher strength directs more attention to the ControlNet processor (ControlNet is more important).
Try playing with the values; you might find the perfect balance for you!
You can't select a VAE with this workflow?
You are right. The VAE for this workflow is provided by the same checkpoint you select in the checkpoint node.
First of all, thanks for this video. How can I add the color indicator that shows CPU, RAM, etc.?
You are most welcome. You can install the resource monitor by watching this video: ua-cam.com/video/tIfr_duWyZQ/v-deo.htmlsi=C9HuqHnAx3IC2XLU
RTX 3050 with 16GB of VRAM, am I boned?? I keep trying but get errors each time.
I have a 3060 running with 12GB of VRAM, so it is doable.
Save the workflow and restart your PC to ensure that no intensive programs are running.
Run ComfyUI first thing after restarting,
load the saved workflow, and then queue your prompt again.
The first generation will take longer, but remain patient until it is completed.
Hi, how do you show VRAM use? I use Crystools, but it only shows CPU and RAM usage. :)
Update all of your nodes and run a ComfyUI update. I am sure that will take care of it.
@@goshniiAI Nope. Maybe it's because I run ComfyUI via Zluda (because I have an AMD GPU)...
24GB GPU, but for some reason the FLUX image queue is stuck processing. Is this a known issue?
I also got stuck in my process. Save the workflow and restart your PC to ensure that no intensive applications are running. Run ComfyUI first after resuming, then queue your prompt again. Remain patient until the first generation is completed.
What about using LoRAs with this?
Hello there! Yes, that is possible; however, the LoRA base models used must be trained for FLUX.
How do you get the [CPU RAM GPU VRAM Temp HDD] readout just below the Queue Prompt menu? Thanks for the video, easy to understand.
You are very welcome, and I am glad to hear that. You can find the link to install the CPU and GPU monitors here: ua-cam.com/video/tIfr_duWyZQ/v-deo.htmlsi=VHNZvuuaBuI6v23D
ControlNet, IPAdapter possible, sir? 😊
Yes, for ControlNet. At the time of writing, we only have Canny detection for FLUX: tinyurl.com/yefcavna
However, I feel it will only be a matter of time until we receive the other versions.
I'm not sure about the IPAdapter at the time of writing, but possibly it will also be available soon.
@@goshniiAI sound great, sir🙏🏻🥹❤️
I get an error that says "failed to fetch".
"Failed to fetch" is sometimes a browser-related glitch. Try clearing your browser cache and refreshing ComfyUI, or updating any dependencies.
❤🙏thx
💜♥💚
thank you!
You're welcome! Thank you for your response.
KSampler: "Allocation on device". This error happens to me.
The GPU is running out of memory while sampling. Begin with a lower resolution and gradually increase it as you fine-tune the settings. Alternatively, close any background apps that are using GPU resources.
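To put rough numbers on the "lower resolution" tip: the sampler's activation memory grows with the latent area, so even a modest resolution drop frees a meaningful amount of VRAM. A small sketch (the divide-by-8 latent scale follows the SD-style VAE convention; exact constants vary by model):

```python
def latent_cells(width: int, height: int, scale: int = 8) -> int:
    """Number of latent cells the sampler works on for a given image size."""
    return (width // scale) * (height // scale)

full = latent_cells(1024, 1024)   # 16384 cells
small = latent_cells(768, 768)    # 9216 cells
# Dropping from 1024x1024 to 768x768 cuts the latent area to ~56%.
print(small / full)
```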
cheers... 👍
Thank you, mate!
it crashes with a "reconnecting..." dialog
Hello there, you can follow these steps.
- Save the workflow and restart your computer to make sure no intensive programs are running.
- When you resume, run ComfyUI first, and then queue your prompt once more.
- Wait patiently for the first generation to finish.
🙏
💜💜
5:55 you forgot to change the seed to 50 for comparison ;)
Yes, you are correct. I forgot to change the seed for the comparison, so I set it to fixed when I noticed. I appreciate your observation.
Why is everyone using the FP8? Why not the FP16? I get amazing results with the FP16.
Awesome! I believe FP8 is gaining traction because of its efficiency and decreased VRAM usage.
However, FP16 can also produce excellent results, especially if you desire higher accuracy in your outputs.
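For a rough sense of why FP8 is popular, here is a back-of-envelope estimate of the weight memory alone; the 12B parameter count for FLUX.1-dev is an assumption, and the text encoders, VAE, and activations add more on top:

```python
PARAMS = 12e9  # assumed parameter count for FLUX.1-dev

def weight_gib(bytes_per_param: float, params: float = PARAMS) -> float:
    """Memory (GiB) needed just to hold the model weights."""
    return params * bytes_per_param / 1024**3

fp16 = weight_gib(2.0)  # ~22.4 GiB: tight even on a 24GB card
fp8 = weight_gib(1.0)   # ~11.2 GiB: fits 12-16GB cards more comfortably
print(f"fp16: {fp16:.1f} GiB, fp8: {fp8:.1f} GiB")
```

Halving the bytes per parameter halves the weight footprint, which is the main reason FP8 runs on mid-range cards at a small cost in precision.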
I followed your workflow exactly and I still get an AttributeError: module 'torch' has no attribute 'float8_e4m3fn'. I have an RTX 3070 w/ 8GB?
I saw on some other videos that 12 GB VRAM is the minimum needed right now for the model. Not sure how true that is or if there is a work around.
@@EmeranceLN13 Well I guess i'll just wait till they optimize it
The "float8_e4m3fn" error could be due to a problem with the PyTorch or CUDA version installed. I suggest you double-check that you have the most recent versions of PyTorch and the NVIDIA drivers. Sometimes a quick update can resolve compatibility issues.
@goshniiAI OK, will do.
@goshniiAI Well, I have the latest NVIDIA drivers and downloaded and installed the latest version of PyTorch, and here is the curious thing: when I run the "pip show torch" command in either the ComfyUI cmd or my Windows cmd, it still shows version 2.0.1, but the latest is 2.4.0. Any thoughts?
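A common cause of that mismatch is that `pip show torch` is asking a different Python than the one ComfyUI actually runs; the portable build ships its own interpreter (usually under `python_embeded`, an assumption about your setup). Running a sketch like this with ComfyUI's own interpreter shows which environment you are really in:

```python
import sys

def torch_info():
    """Return (interpreter path, torch version or None, float8 dtype present)."""
    try:
        import torch  # may be absent from the interpreter you test with
        return sys.executable, torch.__version__, hasattr(torch, "float8_e4m3fn")
    except ImportError:
        return sys.executable, None, False

exe, version, has_fp8 = torch_info()
print(exe)       # which python is actually answering
print(version)   # the torch that interpreter sees (None if not installed)
print(has_fp8)   # False here explains the float8_e4m3fn AttributeError
```

If this reports 2.0.1 under ComfyUI's interpreter while your system Python shows 2.4.0, upgrade with that same interpreter, e.g. `<path-to-comfy-python> -m pip install --upgrade torch` (the path is hypothetical; adjust to your install).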
Why make it difficult? There are already simple workflows out there to download, lol, Hugging Face etc.
I totally get where you're coming from! But for those who want to understand the nuts and bolts of how it all works (and maybe even customise things to their liking), mastering the process gives you full control.
size mismatch for double_blocks.0.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1])
Hello there. Your node settings or inputs may be incompatible. Double-check your node connections; or, if you are using older nodes, updating to the latest version may solve the size mismatch problem.
Thank you!
You are most welcome.