What has been your experience so far with the Fluxfill in-outpaint model?
SDXL better 😂
@@ApexArtistX It's better for people with lower-end GPUs, and its ControlNets are better trained, but it's nice to have options for Flux. The Flux ControlNets are still new; they will get better.
Inpainting works great. Thanks for your workflows. However, for some reason masking a subject and changing backgrounds with FLUX is weird. I think FLUX isn't yet ready for changing backgrounds.
For me, just like with the other workflows, it paints random noise when outpainting.
Anyone know why this might happen?
@@user-cz9bl6jp8b It does paint my prompt, but the result is very unreal and fuzzy.
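For anyone reproducing this outside ComfyUI, here is a minimal outpainting sketch using diffusers' FluxFillPipeline; the file names, prompt, and pad amount are placeholders. One plausible cause of noise-only outpaints is the mask convention: Flux Fill repaints the white regions of the mask, so the new (padded) area must be white and the original image black.

```python
import torch
from PIL import Image
from diffusers import FluxFillPipeline

# Load the Fill model in bf16 (lower-VRAM setups would need offloading
# or a quantized checkpoint).
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

src = Image.open("input.png").convert("RGB")
pad = 256  # how far to extend the right edge; keep dimensions multiples of 16

# Padded canvas plus a mask that is white over the new area only.
canvas = Image.new("RGB", (src.width + pad, src.height), "white")
canvas.paste(src, (0, 0))
mask = Image.new("L", canvas.size, 255)          # white = repaint
mask.paste(Image.new("L", src.size, 0), (0, 0))  # black = keep original

result = pipe(
    prompt="a rustic wooden table with wine and cheese",  # placeholder
    image=canvas,
    mask_image=mask,
    height=canvas.height,
    width=canvas.width,
    guidance_scale=30.0,  # the Fill model is tuned for high guidance
    num_inference_steps=50,
).images[0]
result.save("outpainted.png")
```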
The context mask is gold. So many times I just need to inpaint something small and the inpaint fails to understand what to do. The context mask is something I didn't know was available. Thank you so much!
You’re welcome! Glad it was helpful 👍🏼
@@MonzonMedia Yes Thank you, helped me a lot!
Great to hear! I appreciate the support!
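For readers wondering what the context mask buys you: the idea behind the crop-and-stitch approach is to crop a window of surrounding context around the masked area, inpaint only that crop, then stitch the patch back, so the model sees enough of the scene to know what to paint. A rough, model-agnostic sketch, where `inpaint_fn` is a placeholder for whatever inpainting pipeline you use:

```python
from PIL import Image

def inpaint_with_context(image, mask, inpaint_fn, margin=128):
    """Crop a context window around the mask, inpaint it, stitch it back.

    `inpaint_fn(crop, crop_mask)` stands in for any inpainting call and is
    assumed to return an image the same size as the crop; `margin` is how
    much surrounding context the model gets to see.
    """
    bbox = mask.getbbox()  # bounding box of the white (masked) pixels
    if bbox is None:
        return image       # nothing masked, nothing to do
    left = max(bbox[0] - margin, 0)
    top = max(bbox[1] - margin, 0)
    right = min(bbox[2] + margin, image.width)
    bottom = min(bbox[3] + margin, image.height)

    crop = image.crop((left, top, right, bottom))
    crop_mask = mask.crop((left, top, right, bottom))

    patched = inpaint_fn(crop, crop_mask).resize(crop.size)

    # Paste only the masked pixels back, so untouched areas stay identical.
    out = image.copy()
    out.paste(patched, (left, top), crop_mask)
    return out
```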
Great video buddy, thanks for sharing all the information! Also congratulations on 30K, that's fantastic!! 🙌🙌🥳🥳
Appreciate it bro! Thanks for being part of the journey! 👊🏼🙌🏼
Thank you for this awesome tutorial and sharing your workflow. You rock!
You’re welcome! Enjoy and have fun!
Thank you very much!!!!
@@baheth3elmy16 you’re welcome very much! 😊
Thanks!
You’re welcome! Let me know if or how it goes 👍🏼
Thanks. What's the name of the node that shows CPU/GPU etc in Comfy's top bar?
@@contrarian8870 it’s called crystools. Very handy to have. 👍🏼
Again very useful, thank you! One question: why is the mouse a graphic and not photo-realistic, while the image around it is a photo? Is there any way to make the result more consistent, or is that just where Flux is at the moment?
Well, it's a very short and simple prompt and I was just demonstrating how inpainting works.
@MonzonMedia Thanks!
It works much better with the second approach; however, the results in the inpainted area are very blurry, any idea? I paint very small areas, like a finger wrapping around a can that is already in the image.
Play around with the mask blur pixel values and maybe the rescale algorithm and image size; however, for smaller details there is only so much you can do.
@@MonzonMedia Thank you :)
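To make the mask-blur advice concrete, a small sketch with PIL: feathering the mask softens the seam, and upscaling the crop before inpainting gives the model more pixels to work with on tiny details like fingers. The radius and scale factor are just starting points, and the file names are placeholders.

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")
crop = Image.open("crop.png").convert("RGB")

# Feather the mask edge so the patch blends instead of leaving a hard,
# blurry seam; try radii around 4-16 px for small regions.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=8))

# Upscale the working crop (and mask) 2x before inpainting, then
# downscale the result back to the original size afterwards.
big_crop = crop.resize((crop.width * 2, crop.height * 2), Image.LANCZOS)
big_mask = feathered.resize(big_crop.size, Image.LANCZOS)
```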
Is there a way Flux Fill can work with ControlNet, e.g. when you're inpainting a model or a mouse with an expected pose?
Sure, I have a workflow that does that, but it's with SDXL. I'll try to convert it to a Flux workflow and share it with you all when I do. 👍🏼
@@MonzonMedia looking forward to it
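Until the Flux version materializes, here is roughly what ControlNet-conditioned inpainting looks like in diffusers with SDXL, which the reply above says is doable; the repo ids, file names, and scales are placeholders. In ComfyUI the equivalent is applying the ControlNet to the conditioning before it reaches the inpaint sampler.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline
from diffusers.utils import load_image

# An OpenPose ControlNet for SDXL; swap in whichever control type you need.
controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("photo.png")  # original image
mask = load_image("mask.png")    # white = region to repaint
pose = load_image("pose.png")    # e.g. a DWPose/OpenPose skeleton render

result = pipe(
    prompt="a person in the expected pose",  # placeholder prompt
    image=image,
    mask_image=mask,
    control_image=pose,
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]
result.save("inpainted_with_pose.png")
```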
Is it possible to inpaint with a LoRA?
It doesn't work that way. For example, with SDXL there would sometimes be fine-tuned models trained just for inpainting, but not LoRAs. Maybe eventually.
I could not figure out how to combine Flux Fill with Flux Depth or Canny. Do you think this is possible?
What are you trying to do? For ControlNet you just use the regular Flux dev or schnell model.
@@MonzonMedia Imagine I want to inpaint something but with the influence or conditioning of a Depth map.
@@pablo.montero Correct. I would like Flux Fill to be conditioned with ControlNet. Canny, Depth and OpenPose...all would be good.
I see what you mean. I'll see if I can figure it out. It's definitely doable with SDXL; I haven't tried with Flux, but I'd imagine it's a similar process. Btw, if you have a decent GPU it's much easier to do this in Invoke AI.
@@MonzonMedia When I use Flux Fill on humans, I see that the hands, feet, and sometimes the entire body get messed up. As others have suggested, it would be good to be able to condition Flux Fill with a ControlNet, specifically DWPose. I also added a Power LoRA Loader to your workflow, and it enhances the workflow, especially for N**W stuff; you could do the same. 😀
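On the LoRA stacking point: diffusers' Flux pipelines expose the standard LoRA loader, so the diffusers equivalent of a Power LoRA Loader on top of Flux Fill would look roughly like the sketch below. The path and scale are placeholders, and how well a LoRA trained on the base model transfers to the Fill checkpoint varies case by case.

```python
import torch
from diffusers import FluxFillPipeline

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Stack a character/style LoRA onto the Fill model; treat the result as
# an experiment, since most Flux LoRAs are trained on the base model.
pipe.load_lora_weights("path/to/your_flux_lora.safetensors")
pipe.fuse_lora(lora_scale=0.9)  # placeholder strength; tune to taste
```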
Nice homage to that other guy who uses rodents in his vids.
Nerdy Rodent? hehehehe....didn't think of that actually 😊 🐭
@@MonzonMedia I’m pretty sure Nerdy would be psyched to see “his family” slowly navigating the underground YouTube tunnels. Now, the real question is: who’s next? You know, rodents are pretty clever. 😎🐭
Hi, I'm trying to follow along with the wine and cheese example; however, the included workflow is completely different from yours. I'm totally confused.
The workflow is the same; the one in the video is just an exploded version. The one linked in the Google Drive is just cleaned up and organized.
@@MonzonMedia Ah, I see. Well, it made me learn on my own that you can set the link render mode to straight and then use reroute nodes, because that was what confused me. I was trying to duplicate your workflow as opposed to using the link, and I was like, what is that?
Anyway, thank you. All your material is 100% unadulterated quality. I always check to see what you have for us because you have your finger on the pulse.
@@noNumber2Sherlock No worries at all! In case you want the exploded version here it is. drive.google.com/file/d/1AcSWxggnm97mc7bzRdPnkVpok8KyNVhR/view?usp=sharing There are many ways to route nodes, everyone has their own way that makes sense to them. Appreciate the kind words and support! 🙌
@@MonzonMedia Dude you are Gold Standard! Also, that was unexpected and very kind of you. Thank you! I look forward to what you have next!