PixelWave Model = Artistic Flux in 8+GB VRAM

  • Published 18 Dec 2024

COMMENTS • 53

  • @wiz-white
    1 month ago +6

    Pixelwave is great. Been my go-to model since forever. 10/10

  • @USBEN.
    1 month ago +2

    It's very good, I love the balance it has for colors and styles. Base Flux always leans towards cinematic.

  • @97BuckeyeGuy
    1 month ago +3

    I've been using this model for a while now, and I absolutely love it. And yes, it also handles NSFW images well.

  • @jibcot8541
    1 month ago +3

    It is really good, cool to get the better art styles back.

  • @pn4960
    1 month ago +4

    Excellent model! Thanks

  • @erans
    1 month ago +2

    Can we use our face LoRAs that were trained with Flux Dev?

  • @OmriSadeh
    1 month ago

    Great comparison, we did indeed need that.
    Would love a bit more about what it does worse than regular Flux (if you found anything).

  • @RichardSekmistrz
    1 month ago +1

    Do you have the workflow? I came from MJ recently, so I still struggle to build them from scratch. Either way, thanks!!

    • @NerdyRodent
      1 month ago +2

      It’s just a standard Flux workflow like you get with Comfy, but you can grab the exact one used in the video from www.patreon.com/posts/pixelwave-flux-114819050

    • @MrCai01
      1 month ago

      As NerdyRodent says, it's the bog-standard Flux workflow; the only difference, apart from the layout, is the split sampling shown at the 1:40 mark. Not something I've seen before, but I'll give it a go and see what it produces. Nice video as always.
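
For anyone curious what that split sampling is doing conceptually: the noise schedule is cut in two, and a second sampler finishes what the first started (ComfyUI exposes this via its SplitSigmas node). Below is a minimal, hypothetical Python sketch of the idea; `make_sigmas`, `denoise_step` and the stand-in `model` are illustrative assumptions, not ComfyUI's actual API.

```python
import math
import torch

def make_sigmas(n_steps, sigma_max=14.6, sigma_min=0.03):
    # Log-linear noise schedule: n_steps + 1 values, high noise -> low noise.
    return torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), n_steps + 1))

def denoise_step(x, sigma_from, sigma_to, model):
    # One Euler step: 'model' predicts the fully denoised image at this noise level.
    d = (x - model(x, sigma_from)) / sigma_from
    return x + d * (sigma_to - sigma_from)

def sample(x, sigmas, model):
    for i in range(len(sigmas) - 1):
        x = denoise_step(x, sigmas[i], sigmas[i + 1], model)
    return x

model = lambda x, sigma: torch.zeros_like(x)      # stand-in denoiser for the sketch
sigmas = make_sigmas(20)                          # one 20-step schedule...
high, low = sigmas[:9], sigmas[8:]                # ...split at step 8 (boundary sigma in both halves)

latent = torch.randn(1, 16, 64, 64) * sigmas[0]   # start from pure noise
latent = sample(latent, high, model)              # sampler 1: composition (steps 0-8)
image_latent = sample(latent, low, model)         # sampler 2: detail (steps 8-20)
```

The second sampler can use different settings (or even a different model), which is what makes the split behave like a built-in refiner.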

  • @Larimuss
    1 month ago +1

    Force CLIP to CPU 😮 and force VAE to CUDA 0... interesting.
    Does this put the checkpoint and VAE on the GPU and CLIP on the CPU and RAM? I've been looking for something like that to take some load off my poor 12 GB of VRAM.

    • @NerdyRodent
      1 month ago

      Yup. Love saving me a bit of VRAM 😁
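
The principle here is plain PyTorch device placement: the text encoder only runs once per prompt, so it can live in system RAM and run on the CPU, while the diffusion model and VAE keep the VRAM. A tiny sketch, using toy Linear layers as stand-ins for the real CLIP/T5, diffusion model, and VAE (assumes a CUDA device is present):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real components.
text_encoder = nn.Linear(768, 768).to("cpu")     # "Force CLIP: CPU" -> sits in system RAM
diffusion = nn.Linear(4096, 4096).to("cuda:0")   # main model keeps the precious VRAM
vae = nn.Linear(4096, 4096).to("cuda:0")         # "Force VAE: CUDA 0"

with torch.no_grad():
    cond = text_encoder(torch.randn(1, 768))     # prompt encoding runs on the CPU
    cond = cond.to("cuda:0")                     # only the small embedding moves over
    latent = diffusion(torch.randn(1, 4096, device="cuda:0"))  # denoising on the GPU
    image = vae(latent)                                        # decoding on the GPU
```

Since the text encoders are idle during the denoising loop anyway, parking them on the CPU costs almost nothing per image.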

  • @MilesBellas
    1 month ago +6

    Kijai's wrapper for Mochi next? 👍🐁

  • @glendaion-vk6pf
    1 month ago +2

    Where can I download the workflows from this video?

    • @bushwentto711
      1 month ago +2

      Make it yourself.

    • @glendaion-vk6pf
      1 month ago +2

      Oh really? You don't say, XD. The video doesn't explain which nodes he is using, nor is it clear how they need to be connected to build it yourself. However, I have already made a similar one.

    • @bushwentto711
      1 month ago +1

      @@glendaion-vk6pf Where is the download for this workflow that you just made then?

    • @NerdyRodent
      1 month ago +2

      It’s just a model, so use any Flux workflow you like. For the exact one in the video, see www.patreon.com/posts/pixelwave-flux-114819050

  • @blackvx
    1 month ago +2

    Thank you👍👍

  • @researchandbuild1751
    1 month ago

    Can you still use regular Flux ControlNets with it?

  • @thewaife
    18 days ago

    Great video, mate! Quick question: have you figured out how to use Pixelwave with LoRAs, especially character LoRAs? I tried the trick suggested by the author with the merge model, but the results were disappointing: it completely ruined all the amazing features of Pixelwave. Thanks for any tips!

    • @NerdyRodent
      18 days ago +1

      As it’s a different model, the easiest way is to use Pixelwave as the base and train your LoRAs on that. Makes it a bit tricky to use things like Hyper though 🫤

    • @thewaife
      18 days ago

      @@NerdyRodent Thank you very much for the advice :)
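
For anyone who wants to test an existing Flux Dev LoRA on PixelWave before committing to a retrain, one option outside ComfyUI is Hugging Face diffusers, which can load a fine-tuned checkpoint and apply a LoRA on top. A sketch under assumptions: the repo id and LoRA path below are placeholders, and, as noted above, a LoRA trained on base Dev may not transfer cleanly to a heavily fine-tuned model.

```python
import torch
from diffusers import FluxPipeline

# Repo id is an assumption -- substitute wherever you got the PixelWave weights.
pipe = FluxPipeline.from_pretrained(
    "mikeyandfriends/PixelWave_FLUX.1-dev_03",   # hypothetical repo id
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()                  # helps on 8-12 GB cards

# Apply a LoRA trained on base Flux Dev; quality may drift on a fine-tune.
pipe.load_lora_weights("path/to/my_face_lora.safetensors")  # hypothetical path

image = pipe(
    "portrait photo in a psychedelic art style",
    num_inference_steps=20,
    guidance_scale=3.5,
).images[0]
image.save("pixelwave_lora_test.png")
```

If results degrade the way described above, retraining the LoRA with PixelWave as the base model remains the more reliable route.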

  • @DaveTheAIMad
    1 month ago

    Is there a video on the double sampler / split sigma setup? Really liked the detail in those generations.

    • @NerdyRodent
      1 month ago

      Yup, it’s what I’ve been using for months here on the channel! Think of it like a refiner, where one sampler does part of the image before passing it on to the next. In the original video from months ago, I also showed an image-to-image upscale / hires fix, giving essentially 3+ samplers per image. Check the Flux playlist for all the fluxy videos 😉

    • @DaveTheAIMad
      1 month ago

      @@NerdyRodent Will look for the vid in a bit.
      Been using the 10 20 30 method I saw a while back.
      Send it to do steps 0 to 10 (of 10 steps), pass the latent on to do steps 10 to 20 (of 20 steps), then send that on to do steps 20 to 30 (though I found doing steps 20 to 40 was key to maintaining text quality), making for 30 (or in my case 40) steps per image, with a different seed per stage [sketched in code after this thread]. I'm guessing it's a similar principle, but since you called it split sigma as well, it sounds like it may be different lol.
      I was going to look at the workflow, but alas, like many YouTubers of late, it's locked behind a paywall :( Less of an issue if there's a guide for it though.

    • @NerdyRodent
      1 month ago

      @@DaveTheAIMad I’ve got free stuff on both Patreon and Hugging Face too 😉 Nothing is actually locked behind a paywall, but paying supporters do get extras!

    • @DaveTheAIMad
      1 month ago

      @@NerdyRodent The workflow link in another comment states pay £3 to unlock.
      I looked through your other videos on Flux and could not find the one on the dual sampling. TBH, I would rather see a video about it and how it works than just have a workflow that has it; I am curious what it is doing. Having a workflow would be nice, but learning why it does what it does and getting ideas from the methodology is way better. Do you have a video describing what it is and how it works? Or is it mixed into some other video? Ran out of free time for today, so I can't look further until after work (or during, if it's quiet).
      I also found that despite watching your videos and having them pop up frequently... I wasn't subbed, so I fixed that.

    • @NerdyRodent
      1 month ago

      @@DaveTheAIMad If you’ve a hankering for the extras, or just want to say thanks, then you can indeed buy me a coffee via an individual post! Another option is to add a small biscuit to go with that, and in return you’ll unlock all the course materials there (currently over 70 posts), gain early access, become cool, etc… I know which option I’d pick 😎
      For the full Nerdy Rodent ComfyUI Course focusing on the multi-sampler aspect alone, I’d go back to where it all began around a year ago with the SDXL + refiner workflows (links in the video description). As an optional extra, it’s also worth looking at the workflow basics video. After that, move on to the Pixart Sigma ones (Sigma also has a special double-model version; I went the most nuts with Sigma, as some of those workflows switch models and use over 5 samplers). Next up would be the video with SD3 as a refiner, and then move on to the Flux videos. My recent Flux ones cover loads of options for extra samplers, schedulers, using latent multiply, and also various noise types. If you finish with the scheduler toolbox video, you should then be able to gain full control over each individual step, likely also gaining total enlightenment by the end (*enlightenment and coolness may go down as well as up, terms and conditions apply, for entertainment purposes only, etc.)
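
The "10 20 30" staging described a few comments up maps directly onto the start/end-step settings of ComfyUI's KSamplerAdvanced node: each stage denoises only its slice of the schedule, then hands the latent to the next. A rough, self-contained sketch of the bookkeeping, where `run_stage` is a hypothetical stand-in for a sampler node, not real ComfyUI code:

```python
import torch

def run_stage(latent, total_steps, start_step, end_step, seed):
    """Hypothetical stand-in for an advanced sampler: denoises only the
    slice [start_step, end_step) of a total_steps schedule."""
    torch.manual_seed(seed)                          # a different seed per stage, as described
    for step in range(start_step, end_step):
        noise_level = 1.0 - step / total_steps       # toy linear schedule
        latent = latent - 0.1 * noise_level * latent # toy denoise update
    return latent

latent = torch.randn(1, 16, 64, 64)                  # initial noise
latent = run_stage(latent, total_steps=10, start_step=0,  end_step=10, seed=1)
latent = run_stage(latent, total_steps=20, start_step=10, end_step=20, seed=2)
# Steps 20-40 instead of 20-30 reportedly helps preserve text quality:
latent = run_stage(latent, total_steps=40, start_step=20, end_step=40, seed=3)
```

The split-sigma setup from the video is the same family of trick, just expressed by cutting the sigma schedule rather than by step indices.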

  • @joechip4822
    1 month ago

    Used it in Forge but it doesn't work as expected. If I only add an image style like 'cubist' or 'psychedelic' to the prompt, with CFG = 1 it doesn't do much and always gives a more or less impressionist image output. If I up the CFG scale, the style creeps in, but it soon becomes overcooked. Does this only work in ComfyUI at the moment? Or what is the trick?

  • @ShubzGhuman
    1 month ago +1

    Great video again

  • @jeffbull8781
    1 month ago

    The single-sampler versions are generally better IMO; composition-wise they are just less generic.

  • @olivierniclausse1791
    1 month ago +1

    Yes, thanks a lot!

  • @magimyster
    1 month ago +2

    Wow😮

  • @BroJo420Cafe
    1 month ago +1

    Greek to me, but here to show support

    • @NerdyRodent
      1 month ago +1

      Get yourself an Nvidia graphics card and join the fun! 😉

  • @joecarioti629
    1 month ago +2

    None of these fine-tunes will ever be usable for commercial use, right?

    • @sherpya
      1 month ago

      They'd need to use Schnell as the starting model, since Flux Dev's license is non-commercial while Schnell is Apache 2.0.

  • @SeaScienceFilmLabs
    1 month ago +1

    Rodent! 👋

  • @DrMacabre
    1 month ago

    I only get terrible results out of this model. I tried the fp8 and bf16 versions with the recommended sampler and they are equally bad. :/

  • @SimosFunk
    1 month ago +2

    🌊🌊🌊

  • @juanjesusligero391
    1 month ago +1

    Oh, Nerdy Rodent! 🐭🎵
    He really makes my day! ☀
    Showing us AI, 🤖
    in a really British way! ☕🎶

  • @kyle-bensnyders3147
    1 month ago +1

    Why not just use SDXL or even SD1.5 for this? You can get similarly styled results in a fraction of the time and with much less fuss.

    • @Elwaves2925
      1 month ago +2

      You can get the styles, but you don't get the same prompt adherence, text, details, higher resolutions and so on that Flux gives. It all depends on what you want and how you feel about the result; they all have pros and cons.

    • @kyle-bensnyders3147
      1 month ago

      @@Elwaves2925 Not true; if you know what you're doing you can get good results. Don't get me wrong, Flux is great and all, I just fear people are charging ahead and using Flux everywhere and forgetting about even SD1.5, which is still a very powerful and fast model if used right. But you're right about pros and cons.

    • @Elwaves2925
      1 month ago

      @@kyle-bensnyders3147 I didn't say you couldn't get good results, but in no way does SD1.5 match Flux for the things I mention, not out of the box. So what I said is true, and text, as just one example, is nowhere near as good in SD1.5. Sure, you can get there with external editing or whatever, but with Flux none of that is needed.
      However, I kind of get your point, but it's not so much about forgetting; it's that Flux (and SD3.5) are the new kids on the block. SD1.5 and SDXL aren't new, we all know what they can achieve, and that's why Flux and SD3.5 are getting all the attention right now.
      Personally, as much as I'm loving Flux (especially with the new Pixelwave model), SDXL (RealVis checkpoint) is still my main model and I don't see that changing. That's partly because of keeping consistency with projects on the go, but also because I like what I can get out of it and it's a hell of a lot quicker right now. 🙂

  • @cosmicrdt
    1 month ago +1

    It's a great model but I think the sampler you're using for the original model is what's causing all the bad results.