#### Links from my Video ####
Get my Shirt with Code "Olivio" here: www.qwertee.com/
blackforestlabs.ai/flux-1-tools/?ref=blog.comfy.org
huggingface.co/black-forest-labs/FLUX.1-Canny-dev-lora
huggingface.co/black-forest-labs/FLUX.1-Depth-dev
huggingface.co/black-forest-labs/FLUX.1-Redux-dev
huggingface.co/black-forest-labs/FLUX.1-Fill-dev
comfyanonymous.github.io/ComfyUI_examples/flux/
👋 hi
I hate the spaghetti program.
Where do we put the folders? + maybe do a Forge version?
I have been testing the Depth LoRA, but the output is very far from the input image. It does not seem to work the way ControlNet depth does. Even in your video, the two cars have a similar position, but they are not sharing the same "depth": the input red car is seen from a higher position than the output one. In my test (a bedroom) the output image is sometimes "reversed". Is this expected? Does it mean that Canny and Depth work very differently from how ControlNet works?
I would recommend lowering your Flux guidance and trying DEIS/SGM_uniform or Heun/Beta to reduce the plastic skin appearance. The default guidance for Flux in sample workflows is *way* too high. For example, 3.5 is the default, but 1.6-2.7 yields superior results.
Yeah, but just clarifying that this is usually better for REALISTIC prompts. With vector, anime, and flatter styles, keep guidance higher (like 3.5) to avoid unwanted noise. Just in case someone reading this gets confused.
@@jorolesvaldo7216 Your point is well taken! And the opposite is true for painting styles. Flux is weird in these ways :P
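If anyone wants to try the lower-guidance tip above in ComfyUI's API prompt format, here is a rough sketch of just the two relevant nodes. The node IDs and upstream connections ("4", "5", "6", "7") are placeholders, not taken from the video's workflow.

```python
# Rough sketch of the relevant settings in ComfyUI's API prompt format.
# Node IDs and the upstream connections are placeholders.
prompt_fragment = {
    "10": {
        "class_type": "FluxGuidance",
        "inputs": {
            "guidance": 2.0,            # try 1.6-2.7 for realistic prompts instead of the default 3.5
            "conditioning": ["6", 0],   # positive conditioning from your CLIP Text Encode node
        },
    },
    "11": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],
            "seed": 0,
            "steps": 20,
            "cfg": 1.0,                 # Flux dev is usually sampled at CFG 1.0; strength comes from FluxGuidance
            "sampler_name": "deis",     # or "heun"
            "scheduler": "sgm_uniform", # or "beta"
            "positive": ["10", 0],
            "negative": ["7", 0],
            "latent_image": ["5", 0],
            "denoise": 1.0,
        },
    },
}
```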
Do you think there will be a GGUF version of the regular Flux inpainting model in the future for low-VRAM users?
Are there going to be GGUF versions of these models?
Finally playing with this a bit. I wish the depth map nodes would keep the same resolution as the input image. I'm sure I could just use some math nodes to do that, but it seems like it should be automatic, or a checkbox on the node. This matters in these setups because the input ControlNet image (depth/canny) drives the size of the latent image, and thus the size of your final image.
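Until that's a checkbox, one workaround is to resize the preprocessed depth map back to the source resolution before it drives the latent. A minimal sketch outside the graph with Pillow (filenames are made up); inside the graph, an Image Scale node fed with the input's width/height should do the same thing.

```python
# Minimal sketch: resize a depth map to match the original input image,
# so the latent (and the final image) inherit the input's resolution.
from PIL import Image

input_img = Image.open("input.png")   # hypothetical source image
depth_map = Image.open("depth.png")   # hypothetical preprocessor output

depth_resized = depth_map.resize(input_img.size)  # default bicubic resample
depth_resized.save("depth_resized.png")
```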
I'm getting the "shapes cannot be multiplied" error for some reason, and I don't know why; I have everything set up properly.
Is there a way to adjust the depth map so Comfy doesn't take it so literally? And how do you set up a batch of images so you don't have to do one at a time?
I am seeing very grainy results with the Flux Fill model for inpainting; I wonder if it's my settings or the model.
What am I missing? The output image doesn't match the input at all when I do it.
Same here... Depth and Canny seem not to work like a controlnet. I am confused.
@@stefanoangeliph I updated Comfy and it's working now.
Can you do OpenPose yet for Flux-Forge?
To make the Redux model work, you have to add a node to control the amount of strength.
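In case it helps, one way to control that without custom nodes is to blend the Redux conditioning back toward the plain text conditioning with a ConditioningAverage node. A rough API-format sketch, assuming the Redux/style node's output sits on node "20" and the text conditioning on "6" (both IDs are placeholders, and this may not be the exact node the comment above means):

```python
# Rough sketch: tone down Redux by averaging its conditioning with the
# original text conditioning. Lower conditioning_to_strength = weaker Redux.
redux_strength_fragment = {
    "21": {
        "class_type": "ConditioningAverage",
        "inputs": {
            "conditioning_to": ["20", 0],     # conditioning after the Redux/style model
            "conditioning_from": ["6", 0],    # plain text conditioning
            "conditioning_to_strength": 0.5,  # 1.0 = full Redux, 0.0 = ignore Redux
        },
    },
}
```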
I'm using SD 3.5 L for the Ultimate Upscaler - with a detailer Lora - and it works fantastic!
Great video! Redux, Depth, and Canny (I have not tried Fill yet) work with the Pixelwave model too.
Thank you for the video. The inpaint looks promising. Do you think the 24GB inpainting model will work with a 4060 Ti (16GB of VRAM)?
Mine threw up an error when running through Canny Edge but not with Depth Anything. If I disconnect it, run the process once, and then reconnect and run, it works. It says I'm trying to run conflicting models the first time, but everything exactly matches what you're running. Just letting others who might have the same issue know what to do.
Got this too, and your tip helped, thanks.
FIX (output not like the input at all): If it's not working for you, add a small ControlNet chain (Load ControlNet + Apply ControlNet) between the Pix2Pix node and the KSampler, and feed the depth or canny image to both the Pix2Pix node AND the ControlNet as usual. That way it will work 100%, just play around with the settings. You're welcome.
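If the wiring above is unclear, here is roughly what that extra chain looks like in API format. The Pix2Pix node here is ComfyUI's InstructPixToPixConditioning, and all node IDs, the ControlNet filename, and the upstream connections ("6", "7", "12", "30") are placeholders, not the exact workflow from the video.

```python
# Rough sketch of the ControlNet chain described above.
# "30" = the depth/canny image, "6"/"7" = positive/negative conditioning, "12" = VAE.
cn_chain_fragment = {
    "42": {
        "class_type": "InstructPixToPixConditioning",
        "inputs": {
            "positive": ["6", 0],
            "negative": ["7", 0],
            "vae": ["12", 0],
            "pixels": ["30", 0],       # the depth/canny image
        },
    },
    "40": {
        "class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "flux_depth_controlnet.safetensors"},  # hypothetical filename
    },
    "41": {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["42", 0],     # conditioning coming out of the Pix2Pix node
            "negative": ["42", 1],
            "control_net": ["40", 0],
            "image": ["30", 0],        # same depth/canny image, fed to both nodes as described
            "strength": 0.6,
            "start_percent": 0.0,
            "end_percent": 1.0,
        },
    },
    # The KSampler's positive/negative then come from ["41", 0] and ["41", 1].
}
```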
I just noticed Olivio has a mouse callus. It is a true badge of honor.
that's a gooner blister
Those tabs and that mask shape was wild. Thanks for the info :)
Thanks for sharing! That's great news! Let's see if native ControlNets work better... As usually happens with FLUX, some things just don't seem to make a lot of sense... Like, what on Earth is with Flux Guidance 10? Or 30?! Also, why do we need a whole 23GB separate model just for inpainting (which we can already do with masking and differential diffusion anyway)? Why? So many questions, Black Forest Labs, so many questions...
I edited my reply because I realized there's also a LoRA for depth, so my bad. But the rest is still valid: why does Flux have to be so wild?? :)))
Great video and I want you to know that I really like your shirt!
thank you :) i put a link to it in my info :)
@@OlivioSarikas Seen that, gonna get me one too!
Hello Olivio, what is the minimum GPU VRAM that can run Flux in ComfyUI?
How did you know you need a CLIP Vision model?
I have problems installing many nodes (Depth Anything). What version of Python do you use? I have 3.12 included with Comfy, and I often have this exact problem.
Comfy is self-contained, meaning it comes with the correct Python it needs. However, if you have run it for a long time, I would rename the Comfy folder and download it fresh. You need to reinstall all custom node packs and move the models over, but it is worth it.
@@OlivioSarikas Hmm... it's a new installation and it gives me an "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'" error. GPT says it's because I should use Python 3.10.
@@mikrobixmikrobix best ask in my discord. i'm not good at tech support and ask there often myself
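For anyone else hitting that ImpImporter error: pkgutil.ImpImporter was removed in Python 3.12, so older pip/setuptools builds break on it during node installs. A quick check you can run with ComfyUI's embedded Python is below; the "python_embeded" path is from the portable build, so adjust it for your install.

```python
# Quick check: this attribute is gone on Python 3.12+, which is what
# trips up older setuptools/pip when installing custom nodes.
import sys
import pkgutil

print(sys.version)
print(hasattr(pkgutil, "ImpImporter"))  # False on 3.12+

# The usual fix is upgrading pip/setuptools inside the embedded Python, e.g.:
#   python_embeded\python.exe -m pip install --upgrade pip setuptools wheel
```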
GREAT t shirt.... and episode, as always.
Using that same workflow for inpainting, I'm getting an error that it's missing the noise input.
Is it working with GGUF Flux models?
Can we use the inpainting model together with a LoRA trained on the regular dev model? This would be a game changer, because then two consistent, unique characters in one image would be possible 🥳
I don't know but it's definitely worth a try. Just a pity it requires the full model.
I haven't tried it, but I don't see why this shouldn't work.
@@Elwaves2925 It doesn't. You can convert it to fp8 yourself or grab it off of civitai.
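If you do want to try the fp8 conversion yourself, a minimal sketch looks something like this. The filenames are made up, and it assumes a recent PyTorch and safetensors that support float8_e4m3fn; this is the generic cast, not a specific tool's method.

```python
# Minimal sketch: cast a full-precision Flux checkpoint to fp8_e4m3fn.
# Filenames are placeholders; requires torch >= 2.1 and a recent safetensors.
import torch
from safetensors.torch import load_file, save_file

src = "flux1-fill-dev.safetensors"       # hypothetical input
dst = "flux1-fill-dev-fp8.safetensors"   # hypothetical output

state = load_file(src)
converted = {}
for name, tensor in state.items():
    if tensor.is_floating_point():
        converted[name] = tensor.to(torch.float8_e4m3fn)
    else:
        converted[name] = tensor  # keep non-float tensors untouched
save_file(converted, dst)
```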
Does Redux work with the GGUF Q4 version? I only have 8GB of VRAM.
My dawg, that shirt. Love it.
Where can I find all the workflows that you're using in this video?
--> 01:25
Thank you Again, OV
6:11 I'm not sure what's wrong, but the Redux output image comes out blurry.
Can you run this with 12GB VRAM with GGUF Q4 Flux?
I need that shirt :O (edit: oh hello link! Thanks!!!)
2:10 What is "fp8_e4m3fn_fast" and where can i download?
Did you update your ComfyUI? For me it was just there.
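To be clear, fp8_e4m3fn_fast isn't a file you download; it's one of the weight_dtype choices on the "Load Diffusion Model" (UNETLoader) node in a current ComfyUI. In API format the setting is just this (the model filename is a placeholder):

```python
# fp8_e4m3fn_fast is a weight_dtype option on the diffusion model loader,
# not a separate download. The model filename below is a placeholder.
unet_loader_fragment = {
    "4": {
        "class_type": "UNETLoader",
        "inputs": {
            "unet_name": "flux1-dev.safetensors",
            "weight_dtype": "fp8_e4m3fn_fast",  # other options: default, fp8_e4m3fn, fp8_e5m2
        },
    },
}
```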
On time!!
I just want video generation in forge FLUX
I get this error "CLIPVisionLoader
Error(s) in loading state_dict for CLIPVisionModelProjection:" while loading clip vision, even though I downloaded this fild (siglip-so400m-patch14-384.safetensors) 3.4 GB and this file (sigclip_vision_patch14_384.safetensors) 836 MB and placed them in my ComfyUI\models\clip_vision directory, anyone know what I should do?
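One thing you can do is check which of the two files actually contains the tensor names the loader expects; only one of them is the vision-only export. A quick sketch to peek at the keys (run it from your clip_vision folder), then point CLIPVisionLoader at whichever file loads cleanly; the smaller repack is usually the vision-only one.

```python
# Quick check: print a few tensor names from each file to see which one
# looks like the vision-only export that CLIPVisionLoader expects.
from safetensors import safe_open

for path in ("siglip-so400m-patch14-384.safetensors",
             "sigclip_vision_patch14_384.safetensors"):
    with safe_open(path, framework="pt") as f:
        keys = sorted(f.keys())
    print(path, len(keys), keys[:3])
```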
Great video.
easiest way to run flux on mac in comfy?
What about Forge integration?
not supported yet
Flux is so all over the place :/ guidance 30 :D
Would my 3070 8gb be able to run flux?
I was told yes. You might need a GGUF model though, which has to go into the unet folder and needs the UNET loader. But better ask in my Discord.
@@OlivioSarikas what about 3080ti 12gig
comfyui AGAIN
Why is everything ComfyUI?
It's the best: it gets everything first and is the best UI for trying new things.