#### Links from the Video ####
GET MY Batch WORKFLOW: www.patreon.com/posts/want-better-flux-114717473
Get my Basic Workflow: drive.google.com/file/d/13OOp880CidmWXfbQbBg0K64AyylL5BYB/view
How to Use Turbo Flux Lora: ua-cam.com/video/Ymt6_dhkqfo/v-deo.html
Hi 👋
Please show us animation in Flux in Forge
Reminds me of how FreeU works
It does something. I don't know what. I don't know why. There are numbers. I don't know what they do. Let's call it... better control!
😅
Love this kind of stuff where you do some testing and share the results! Hope you do more!
I believe the higher the Max Shift, the more vibrant and error-free the image, but it also makes it less photorealistic. Try 0.5 max with 0.3 base shift for photorealism vs. the default 1.15 / 0.5.
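For anyone wondering what those two numbers actually do: as far as I can tell from the reference Flux sampling code, base shift and max shift are just the endpoints of a linear map from the image-token count to a single shift value (mu), which then warps the timestep schedule toward the high-noise end. A rough sketch in plain Python, based on my reading of that code, so treat the constants and formulas as illustrative rather than authoritative:

```python
import math

# Illustrative sketch: how base_shift / max_shift become a single schedule shift.
# Based on my reading of the reference Flux sampling code; treat as approximate.

def get_lin_function(x1=256.0, y1=0.5, x2=4096.0, y2=1.15):
    """Linear map from image-token count to mu. y1 = base_shift, y2 = max_shift."""
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return lambda x: m * x + b

def time_shift(mu, sigma, t):
    """Warp a timestep t in (0, 1]; larger mu keeps more steps at high noise."""
    return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)

def schedule(num_steps, width, height, base_shift, max_shift):
    tokens = (width // 16) * (height // 16)             # latent patches the model sees
    mu = get_lin_function(y1=base_shift, y2=max_shift)(tokens)
    ts = [1 - i / num_steps for i in range(num_steps)]  # linear 1 -> ~0
    return [round(time_shift(mu, 1.0, t), 3) for t in ts]

# Default 0.5 / 1.15 vs. the suggested 0.3 / 0.5 at 1024x1024:
print(schedule(8, 1024, 1024, base_shift=0.5, max_shift=1.15))
print(schedule(8, 1024, 1024, base_shift=0.3, max_shift=0.5))
```

Note that a 1024x1024 image gives 64 x 64 = 4096 tokens, so under this formula the default settings land exactly at max shift; lowering both values keeps the schedule closer to linear, which would fit the "more photoreal, less clean" tradeoff described above.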
I want control, not only over pose and face, but also consistency of clothes and background.
@@lighteningnewspodcast make 3D models and use them as a base, then use tiled + canny + the style model. Done
@@lighteningnewspodcast mind telling me what you're trying to create?
@@NoPhilospher tried it, but it doesn't change the image enough. It kind of maintains that 3D look. I tried Pony + a style LoRA with ControlNet, and the image tends to shift more toward the 3D style rather than the LoRA style.
@@sandeepm809 use the 3D one and do image-to-image?
Then it's time to create your own AI model 😂
Do you know about split-sigma? A higher guidance number, say 3.5, creates a more vibrant image and a more interesting composition, but it's less photorealistic. A lower guidance number, say 2, is more photorealistic, but then the colour is duller, the composition can be simpler, and the image can fall apart sometimes. If you use split-sigma, you can set the first few steps at 3.5 guidance and the remaining steps at guidance 2. That way you kind of get the best of both worlds. This is my understanding, I could be wrong.
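If it helps, here is a tiny sketch of that idea. The `run_sampler` function is a hypothetical stand-in for the real sampler pass (in ComfyUI you would wire something like two sampler nodes fed by a split sigma schedule); only the splitting logic is meant literally:

```python
# Minimal sketch of the split-sigma idea described above: one schedule, cut at a
# chosen step; early (noisy) steps get high guidance, late steps get low guidance.
# run_sampler() is a hypothetical placeholder for the real sampler call.

def split_sigmas(sigmas, step):
    # Overlap the boundary sigma so the second pass resumes where the first ended.
    return sigmas[: step + 1], sigmas[step:]

def run_sampler(label, sigmas, guidance):
    print(f"{label}: {len(sigmas) - 1} steps at guidance {guidance}, sigmas {sigmas}")

# Example schedule (highest noise first, ending at 0), shortened for readability.
schedule = [1.0, 0.78, 0.6, 0.45, 0.32, 0.22, 0.14, 0.08, 0.03, 0.0]
high, low = split_sigmas(schedule, step=3)

run_sampler("pass 1 (vibrant composition)", high, guidance=3.5)
run_sampler("pass 2 (photoreal detail)", low, guidance=2.0)
```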
Somewhat weird that you didn't use the normal Flux model, but a turbo one. This node may work differently with it; for example, it may depend strongly on the number of iterations.
Is there a way to run Flux in Stable Diffusion? I don't like ComfyUI, too complicated for my taste...
It works well on Forge.
Cool stuff.
I see that OmniGen has been released publicly. You covered it a while back when it was just a paper, so I was wondering if you'd look at it again now that it's out?
Interesting. Does this increase render time by any noticeable amount?
Have been following your videos and tutorials for a while. Thanks a lot for making ComfyUI easy to understand.
He's the whole reason I started using it over a year ago :D
I can't find the "BaseShift" node with ComfyLiterals, is that normal?
Did you try using it the way you showed us with superflux, using multiple KSamplers for the steps?
For a moment I thought you had been using the Catrina LoRA I just released today.
Is it possible to run Flux with an AMD 8GB card?
Hi, love your videos!!! I have a question: is it possible to run Deforum with Flux??? If yes, please do a how-to-install video, it would be awesome.
1:00 maybe it could be the resolution that the model was trained with? I mean, the training images are actually still 1024x1024, so that is also usually the best resolution for the AI to work at.
Which is really strange, right? Why would they train on such a low res?
@@OlivioSarikas Well, I did a little research, and it looks like the base core resolution of Flux and SD3.5 is still 1024x1024 pixels, and this is often the resolution at which they work best. SD1.5 was even 512x512 pixels, but for finetunes many used 768x768 (like myself). But I'm not up to date, I haven't made a LoRA since early 2024, and at that time I still preferred 1.5.
I searched for direct information from the Flux devs but couldn't find any, and after asking ChatGPT about it too, it seems there is no direct source. But most guides for finetuning refer to 1024x1024 as a minimum.
Wait until you see what happens if you run it holding the shift key until the end
Honestly, while I appreciate your videos Olivio (your content is always great), I just don't like Flux, haven't liked it since the first time I saw it. It's super hard to fine-tune and very inflexible. I very quickly went back to SDXL and only use Flux for upscaling, some inpainting, or scene composition if it's complex.
Great explanation of how the Model Sampling Flux node works! Thanks for making it so clear for people to understand!
1.5/1.5 shifts til i die brehhh
Woo.. more Flux Wizardry.
This information has been around for months...
Question: I'm using Flux in Forge, and only the Euler samplers with the simple or normal scheduler seem to give me images rather than static or blanks. Does anyone know why this is?
Flux has issues with other sampler types. You can use them, but often you have to up the step count, and it can't use any of the legacy samplers. It's one of the worst things about Flux tbh.
Some samplers might not be supported in Forge. I would look up whether anyone has put out a list of which ones are supported, but some samplers just may not work, period.