Pixelwave is great. Been my go-to model since forever. 10/10
It's very good, I love the balance it has for colors and styles. Base Flux always leans towards cinematic.
I've been using this model for a while now, and I absolutely love it. And yes, it handles NSFW images well too.
It is really good, cool to get the better art styles back.
Excellent model! Thanks
Can we use our face LoRAs that were trained with Flux dev?
Great comparison, we indeed needed that.
Would love to hear a bit more about what it does worse than regular Flux (if you found anything).
Do you have the workflow? I came from MJ recently, so I still struggle to build them from scratch. Either way, thanks!!
It’s just a standard Flux workflow like you get with Comfy, but you can grab the exact one used in the video from www.patreon.com/posts/pixelwave-flux-114819050
As NerdyRodent says, it's the bog-standard Flux workflow; the only difference, apart from the layout, is the inclusion of the split sampling shown at the 1:40 mark - not something I've seen before, but I'll give it a go and see what it produces. Nice video as always.
Force CLIP to CPU 😮 and force VAE to cuda:0... interesting.
Does this split things so the checkpoint and VAE go to the GPU and CLIP goes to CPU/RAM? Because I've been looking for something like that to take some load off my poor 12 GB of VRAM.
Yup. Love saving me a bit of VRAM 😁
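For anyone wondering why that split is cheap: the text encoder only runs once per prompt, so parking it in system RAM costs little time while freeing GPU memory for the diffusion model and VAE, which run every step. A toy PyTorch sketch of the principle - the modules and sizes below are made-up stand-ins, not the actual ComfyUI nodes from the video:
```python
import torch
import torch.nn as nn

# Toy stand-ins for the three parts of a Flux checkpoint (hypothetical sizes).
clip = nn.Linear(768, 4096).to("cpu")       # text encoder pinned to CPU/RAM
unet = nn.Linear(4096, 4096).to("cuda:0")   # diffusion model stays on the GPU
vae  = nn.Linear(4096, 3).to("cuda:0")      # VAE decode also on the GPU

tokens = torch.randn(1, 768)                # prompt input stays on the CPU
cond = clip(tokens).to("cuda:0")            # only the small conditioning tensor
                                            # crosses over to the GPU
latent = unet(cond)                         # denoising runs entirely on the GPU
image = vae(latent)                         # decode on the GPU too
```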
Kijai's wrapper for Mochi next? 👍🐁
@@MilesBellas how did you insert the hyperlink?
Where can I download the workflows from this video?
Make it yourself.
Oh really? You don't say, XD. The video doesn't explain which nodes he's using, nor is it clear what connections between them are needed to create it yourself. However, I've already made a similar one.
@@glendaion-vk6pf Where is the download for this workflow that you just made then?
It’s just a model, so use any Flux workflow you like. For the exact one in the video, see www.patreon.com/posts/pixelwave-flux-114819050
Thank you👍👍
Can you still use regular Flux ControlNets with it?
Nope! It’s OmniGen 😉
Great video, mate! Quick question: have you figured out how to use Pixelwave with LoRAs, especially character LoRAs? I tried the trick the author suggested with the merged model, but the results were disappointing - it completely ruined all the amazing features of Pixelwave. Thanks for any tips!
As it’s a different model, the easiest way is to use Pixelwave as the base and train your LoRAs on that. Makes it a bit tricky to use things like Hyper though 🫤
@@NerdyRodent Thank you very much for the advice)
Is there a video on the double sampler / split sigma setup? Really liked the detail in those generations.
Yup, it’s what I’ve been using for months here on the channel! Think of it like a refiner, where one sampler does part of the image before passing it on to the next. In the original video from months ago, I also showed an image-to-image upscale / hires fix pass - giving essentially 3+ samplers per image. Check the Flux playlist for all the fluxy videos 😉
@@NerdyRodent will look for the vid in a bit.
Been using the 10/20/30 method I saw a while back.
Send it to do 10 of 10 steps, pass the latent on to do steps 10 to 20 (of 20 steps), then send that on to do steps 20 to 30 (though I found doing steps 20 to 40 was key to maintaining text quality), making for 30 (or in my case 40) steps per image, with a different seed per stage. I'm guessing it's a similar principle, but since you called it split sigma as well, it sounds like it may be different lol
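A rough sketch of that staged idea in Python. Here `denoise_slice` is a hypothetical stand-in for a real sampler call (e.g. an advanced sampler with start/end step inputs); its body just nudges the latent so the sketch actually runs:
```python
import torch

# Hypothetical stand-in for a real sampler call (e.g. ComfyUI's
# "KSampler (Advanced)" with start_at_step / end_at_step). The body is a
# placeholder, not real diffusion code.
def denoise_slice(latent, start, end, seed):
    torch.manual_seed(seed)                  # fresh noise seed per stage
    for _ in range(start, end):
        latent = latent - 0.01 * torch.randn_like(latent)
    return latent

latent = torch.randn(1, 4, 128, 128)         # start from pure noise
latent = denoise_slice(latent, 0, 10, seed=1)    # stage 1: steps 0-10
latent = denoise_slice(latent, 10, 20, seed=2)   # stage 2: steps 10-20
latent = denoise_slice(latent, 20, 40, seed=3)   # stage 3: steps 20-40 (text quality)
# the final latent would then go to the VAE for decoding
```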
I was going to look at the workflow, but alas, like many YouTubers of late, it's locked behind a paywall :( Less of an issue if there's a guide for it though.
@@DaveTheAIMad I’ve got free stuff on both Patreon and Hugging Face too 😉 Nothing is actually locked behind a paywall, but paying supporters do get extras!
@@NerdyRodent The workflow link in another comment states pay £3 to unlock.
I looked through your other videos on Flux and couldn't find the one on the dual sampling. TBH I'd rather see a video about it and how it works than just have a workflow that includes it; I'm curious what it's doing. Having a workflow would be nice, but learning why it does it and getting ideas from the methodology is way better. Do you have a video describing what it is and how it works? Or is it mixed into some other video? I've run out of free time for today, so I can't look further until after work (or during, if it's quiet).
I also found that despite watching your videos and having them pop up frequently... I wasn't subbed, so I fixed that.
@@DaveTheAIMad If you’ve a hankering for the extras, or just want to say thanks, then you can indeed buy me a coffee via an individual post! Another option is to add a small biscuit to go with that, and in return you’ll unlock all the course materials there (currently over 70 posts), gain early access, become cool, etc… I know which option I’d pick 😎
For the full Nerdy Rodent ComfyUI Course focusing on the multi-sampler aspect alone, I’d go back to where it all began around a year ago with the SDXL + refiner workflows (links in the video description). As an optional extra, it’s also worth looking at the workflow basics video. After that, move on to the Pixart Sigma ones (Sigma also has a special double-model version; I went the most nuts using Sigma, as some of those switch models and use over 5 samplers). Next up would be the video with SD3 as a refiner, and then move on to the Flux videos. My recent Flux ones cover loads of options for extra samplers, schedulers, using latent multiply, and also various noise types. If you finish with the scheduler toolbox video, you should then be able to gain full control over each individual step - likely also gaining total enlightenment by the end (*enlightenment and coolness may go down as well as up, terms and conditions apply, for entertainment purposes only, etc)
Used it in Forge but it doesn't work as expected. If I only add an image style like 'cubist' or 'psychedelic' to the prompt, with CFG = 1 it doesn't do much and always gives a more or less impressionist image output. If I up the CFG scale, the style creeps in - but soon becomes overcooked. Does this only work in ComfyUI at the moment? Or what is the trick?
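One likely culprit (an assumption, not something covered in the video): Flux-dev-based models are guidance-distilled, so classic CFG is normally left at 1 and the style strength is steered with the distilled guidance value instead - in Forge that should be the separate distilled CFG slider rather than the regular CFG scale. A minimal diffusers sketch of the same knob, using stock FLUX.1-dev as a stand-in for a Pixelwave checkpoint:
```python
import torch
from diffusers import FluxPipeline

# Assumed setup: stock FLUX.1-dev; a Pixelwave checkpoint would load the same way.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a cubist painting of a harbour at dusk",
    guidance_scale=3.5,        # Flux's distilled guidance, not classic CFG
    num_inference_steps=28,
).images[0]
image.save("cubist.png")
```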
Great video again
The single-sampler versions are generally better IMO; composition-wise they are just less generic.
Yes, thanks a lot
Wow😮
Greek to me, but here to show support
Get yourself an Nvidia graphics card and join the fun! 😉
None of these fine-tunes will ever be usable for commercial use, right?
They need to use Schnell as the starting model
Rodent! 👋
👋
I only get terrible results out of this model. I tried the fp8 and bf16 with the recommended sampler and they are equally bad. :/
🌊🌊🌊
Oh, Nerdy Rodent! 🐭🎵
He really makes my day! ☀
Showing us AI, 🤖
in a really British way! ☕🎶
Why not just use SDXL or even SD1.5 for this? You can get similarly styled results in a fraction of the time and with much less fuss.
You can get the styles, but you don't get the same prompt adherence, text, details, higher resolutions and so on that Flux gives. It all depends on what you want and how you feel about the result; they all have pros and cons.
@@Elwaves2925 Not true; if you know what you're doing you can get good results. Don't get me wrong, Flux is great and all, I just fear people are charging ahead and using Flux everywhere and forgetting about even SD1.5, which is still a very powerful and fast model if used right. But you're right about pros and cons.
@@kyle-bensnyders3147 I didn't say you couldn't get good results, but in no way does SD1.5 match Flux for the things I mention, not out of the box. So what I said is true, and text, as just one example, is nowhere near as good in SD1.5. Sure, you can get there with external editing or whatever, but with Flux none of that is needed.
However, I kind of get your point, but it's not so much about forgetting; it's that Flux (and SD3.5) are the new kids on the block. SD1.5 and SDXL aren't new, we all know what they can achieve, and that's why Flux and SD3.5 are getting all the attention right now.
Personally, as much as I'm loving Flux (especially with the new Pixelwave model), SDXL (RealVis checkpoint) is still my main model and I don't see that changing. That's partly because of keeping consistency with projects on the go, but also because I like what I can get out of it and it's a hell of a lot quicker right now. 🙂
@@kyle-bensnyders3147 I didn't say you couldn't get good results from SD1.5. You certainly can, but Flux is objectively better at certain things out of the box, like those I mentioned. So what I say is true.
However, I kind of get what you're saying, but it's not people forgetting. It's that SD1.5 and SDXL are relatively old and aren't offering anything new, while Flux is the shiny new toy on the block, and that's why it's getting all the attention at the moment. 🙂
It's a great model, but I think the sampler you're using for the original model is what's causing all the bad results.