New Flux IMG2IMG Trick, More Upscaling & Prompt Ideas In ComfyUI

  • Published 20 Sep 2024

COMMENTS • 18

  • @equilibrium964 · 1 day ago · +1

    From my experience with Flux and SDUpscale, I think a denoising strength of 0.3–0.35 is the best choice. It still adds some details, but in 95% of cases no funny stuff happens to the image.
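The denoise range above maps directly onto how much of the sampler's step schedule actually runs during an img2img or upscale pass. A minimal sketch of that relationship (the rounding rule is an assumption; samplers differ in exactly how they truncate the schedule):

```python
# Sketch: img2img denoising strength as a fraction of the step schedule.
# At strength 0.3, roughly the last 30% of steps run, so the image keeps
# its structure while still gaining fine detail.
def steps_to_run(total_steps: int, denoise: float) -> int:
    """Approximate number of sampling steps executed for a given denoise."""
    return max(1, round(total_steps * denoise))

for d in (0.30, 0.35, 0.70):
    print(f"denoise {d}: ~{steps_to_run(24, d)} of 24 steps")
```

At 24 steps, 0.3 denoise runs only about 7 of them, which is why it refines detail without restructuring the image.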

  • @PaoloCaracciolo · 19 hours ago

    Great list of upscaling methods. I have also tried tiled diffusion and interpolating the upscaled latent with the Unsampler; these two were the best for me. Tiled diffusion is like Ultimate SD Upscaler but without any seam problems even at high denoise (0.7), while interpolation is complex and I don't really get it, but it's the process that has given me all the best generations with Flux yet.
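The seams mentioned above come from how tile-based upscalers cut the image into patches and blend them back; overlap between neighbouring tiles is what hides the joins. A toy sketch of overlapping tile coordinates along one axis (tile size and overlap values are illustrative only, not taken from any particular node):

```python
def tile_coords(width: int, tile: int, overlap: int) -> list[tuple[int, int]]:
    """Start/end x-coordinates of overlapping tiles covering [0, width)."""
    stride = tile - overlap
    coords = []
    x = 0
    while True:
        if x + tile >= width:
            # Last tile is pinned to the right edge so nothing is missed.
            coords.append((max(0, width - tile), width))
            break
        coords.append((x, x + tile))
        x += stride
    return coords

# A 2048px edge split into 1024px tiles with 256px of overlap:
print(tile_coords(2048, 1024, 256))
```

Each adjacent pair of tiles shares a band of pixels that can be cross-faded after sampling; with zero overlap, any per-tile difference in denoising shows up as a visible seam.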

  • @electrolab2624 · 1 day ago

    Enjoyed the video! 🐁🐭
    Actually, I use Flux img2img like this: denoise always stays between 0.1 and 0.18, base_shift always at 0.5, and max_shift can vary between 2 and even 5 = amount of change.
    This way the output gets the color influence and the LoRA can add itself to the original, since max_shift effectively acts like denoise without being denoise. Makes sense?
    Thought that was the trick.. Cheers!
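For anyone wondering why max_shift behaves like a second denoise knob: Flux applies a time shift to the sigma schedule, and in ComfyUI's ModelSamplingFlux node base_shift/max_shift set that shift via linear interpolation over the image token count. A rough sketch of the math as I understand it (the 256/4096 token anchors and the exact formula are assumptions based on common Flux sampling code, not something stated in the video):

```python
import math

def flux_mu(seq_len: int, base_shift: float = 0.5, max_shift: float = 1.15,
            x1: int = 256, x2: int = 4096) -> float:
    """Linearly interpolate the log-shift mu from the image token count."""
    m = (max_shift - base_shift) / (x2 - x1)
    return m * seq_len + base_shift - m * x1

def time_shift(mu: float, sigma: float) -> float:
    """Shift a schedule value sigma in (0, 1]; larger mu pushes sigmas higher."""
    return math.exp(mu) / (math.exp(mu) + (1 / sigma - 1))

# A higher max_shift raises every sigma in the schedule, so each step
# removes more noise -- which is why it "acts like denoise" on the output.
for ms in (1.15, 3.0, 5.0):
    print(ms, round(time_shift(flux_mu(4096, max_shift=ms), 0.5), 3))
```

With mu = 0 the schedule is unshifted (sigma 0.5 stays 0.5); raising max_shift pushes the mid-schedule sigma toward 1, so the sampler deviates further from the input image even at a fixed denoise.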

  • @weirdscix · 18 hours ago

    img2img is pretty easy with Flux. I prefer fluxunchained with the Flux sampler parameters from Essentials, paired with Florence and a promptgen model. Drop denoise to 0.80 and you will get an image with the same basic composition; drop it to 0.40 and you're getting something very, very similar. 24 steps with a Q4 model, around 11 GB VRAM for a 1024x1024, takes around 45 seconds on a 3090. There are also Q5 and Q8 variants of the model.

  • @ctrlartdel · 1 hour ago

    I love how your voice flow is starting to be more real and not so much news anchorish!

  • @wakegary · 1 day ago

    I can only afford ControlNet with 2 limbs, but I use the "mirror" option in MS Paint to make a fully formed character. Appreciate you helping us solve this maze

  • @impactframes · 11 hours ago · +1

    Hey Nerdy great work on the video :)

  • @LIMBICNATIONARTIST · 22 hours ago

    Amazing content! Keep up the great work!

  • @ToddDouglas1 · 7 hours ago

    Thanks so much! Question: the Denoise node you have up there, the one that ends in the Float output, which custom node group is that from? I can't seem to find it.

    • @NerdyRodent · 6 hours ago

      It’s just a primitive

    • @ToddDouglas1 · 6 hours ago

      @NerdyRodent Ha! I'm so dumb. Thanks!!

  • @aeit999 · 1 day ago

    Custom sampler, I see, I see. The XLabs one is kinda shitty, ngl.
    Also, their IPAdapter is either underdeveloped or heavily censored compared to SDXL.
    I will try your method now with i2i.
    Also, how did you get prompts everywhere working? For me it snaps to negative, and positive is missing.

  • @stereotyp9991 · 1 day ago · +2

    I miss the speaking avatars. Great video again, though!

  • @LouisGedo · 1 day ago

    👋 hi

  • @JNET_Reloaded · 1 day ago

    Can't stand flows, I'd rather do the same with code!

  • @quercus3290 · 10 hours ago

    It would be interesting to test vision encoder-decoder models like ifmain/vit-gpt2-image2prompt-SD (trained on the Ar4ikov/civitai-sd-337k dataset).
    Although civitai prompts may make things worse lol.