Thanks! Have to say, these are refreshingly different AI art videos!
Brilliant. Thanks for the time and effort posting this.
Always pleased to see when you've uploaded a new video, Rob
This is great! So glad you are sharing your insights along with the workflow. I appreciate it, thank you.
Lovely workflow. Thanks sir!!!
Hey Rob, what was the node you used for exporting the entire workflow as a .png screen grab? It's really helpful but I can't for the life of me remember. Thanks
I think pythongosssss...
@robadams2451 ah perfect, thanks!
Great wf! One idea: instead of rewriting the prompt yourself, could you use Florence to rewrite the prompt for that specific refining area? I usually do that with an extra text concatenation if necessary. Thanks!
At present you will always write a better prompt with your eyes and brain. I can see the use for Florence when many images need doing, but not really otherwise. I suspect that some of the issues Flux has are due to poor LLM captioning.
Actually it understands prompting better than any model before, so it's easier. The problematic part is that the dev model is so huge that no consumer-class GPUs can handle it until the 5090 arrives next Christmas.
Also need to check out the new checkpoint (Flux.1 Compact), thx :)!
No JSON?
Workflow is in the PNG
@robadams2451 awesome content, thank you
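In case it helps anyone: ComfyUI writes the full node graph into the PNG's text metadata when it saves an image, so dragging the PNG onto the canvas restores the workflow. Below is a minimal sketch of pulling that JSON back out with Pillow, assuming the default "workflow"/"prompt" metadata keys; the file names are placeholders.

```python
# Minimal sketch, assuming ComfyUI's default behaviour of storing the node
# graph in the PNG's text chunks under "workflow" (UI graph) and "prompt"
# (API-format graph). File names here are hypothetical.
import json
from PIL import Image

img = Image.open("flux_refiner_workflow.png")    # hypothetical file name
meta = getattr(img, "text", None) or img.info    # PNG text chunks

workflow = meta.get("workflow")
if workflow:
    with open("flux_refiner_workflow.json", "w") as f:
        f.write(workflow)                        # re-importable in ComfyUI
    print("nodes:", len(json.loads(workflow).get("nodes", [])))
else:
    print("No embedded workflow - the PNG may have been re-saved or stripped.")
```

If a site re-compresses or strips metadata on upload, the embedded graph is lost, which is why people sometimes ask for a separate JSON.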
I've noticed that Flux can be stubborn with image to image too. It doesn't like to change things a lot, which might be good for photographic resampling/upscaling where you want details preserved as much as possible, but it's bad for creative resampling/upscaling. Has anyone been able, for example, to transform a cartoon image/painting into a photo or vice versa? I've not been able to do this with Flux so far. It's a lot easier with SDXL and SD 1.5.
I have the exact same problem. In Stable Diffusion the denoising is easy to manage and you can slowly increase the changes on an existing image. Here on Flux I can't find a way to change the style of an image using img2img, it's weird.
Yes, Flux is a different beast; it doesn't hallucinate as freely as SDXL, which helps prompt adherence but of course reduces its ability to remake an existing image.
You can always do style transfer in SDXL or SD and then push the generated image through img2img in Flux. It's not a problem, is it?
@taucalm It's because Flux needs a high denoise to do img2img, so it will often water down or remove the style. Img2img at a low denoise is often poor quality in Flux.
@taucalm Possibly. I haven't tried it yet.
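To illustrate the denoise point above with a rough, non-Flux-specific sketch: in a typical img2img sampler the denoise value only decides how far into the noise schedule sampling starts, so a low denoise leaves most of the source (and its style) intact, while the high denoise Flux tends to need repaints nearly everything.

```python
# Conceptual sketch of img2img denoise (not tied to any particular sampler):
# denoise decides how many of the scheduled steps actually run, i.e. how much
# of the original image gets repainted.
def img2img_steps(total_steps: int, denoise: float) -> range:
    """Steps that run for a given denoise in a simple skip-ahead scheme."""
    start = int(round(total_steps * (1.0 - denoise)))
    return range(start, total_steps)

for d in (0.3, 0.6, 0.9):
    s = img2img_steps(20, d)
    print(f"denoise={d:.1f}: steps {s.start}-{s.stop - 1} run ({len(s)} of 20)")
```

With 20 steps, denoise 0.3 only runs the last 6 steps, while 0.9 runs 18 of them, which matches the "water down or remove the style" behaviour described above.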