Consistency in Stable Diffusion | ControlNet Tutorial

  • Published Aug 29, 2024
  • #stablediffusion #aiart #animation
    In this tutorial, we are going to learn how to create consistent characters using the ControlNet extension for Automatic1111.
    How to Install the Automatic 1111 Web UI:
    • How to Install Automat...
    Stable Diffusion Complete Beginners Tutorial:
    • AI Generated Art - Com...
    🎼🎼🎼🎼🎼🎼
    Song: ÉWN - The Light [NCS Release]
    Music provided by NoCopyrightSounds
    Free Download/Stream: ncs.io/Reloaded
    Watch: • ÉWN - The Light | Tra...
    🎼🎼🎼🎼🎼🎼
    💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾
    Between my full-time job and my family life,
    I try to find free time to create content for this channel.
    You can support me and help this channel keep growing:
    paypal.me/Mike...
    💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾💾
    💻 This Is My Development Setup (Affiliate): 💻
    ============
    Main Monitor:
    amzn.to/3M64qCJ
    Secondary Monitor:
    amzn.to/41Iu06A
    Graphics Card:
    amzn.to/3MpnXzd
    CPU:
    amzn.to/3I8nvCW
    RAM:
    amzn.to/42zqM6u
    Keyboard:
    amzn.to/3W5RFN4
    Mouse:
    amzn.to/3nTPcZs
    Headphones:
    amzn.to/3pz0By5
    Microphone:
    amzn.to/3OecJz3
    =============
    Tags for the Algorithm:
    ControlNet
    Consistent Characters
    Install Automatic 1111
    Deforum Extension
    Stable Diffusion
    AI Animation Video

COMMENTS • 19

  • @BabylonBaller • 1 year ago • +1

    Reference Only is an absolute game changer when coupled with Roop. Wow, thank you bro 💲💲💲

  • @rwarren58 • 4 months ago

    This was a good tutorial. Thank you. Hope you make more.

  • @59aml • 10 months ago • +3

    Unfortunately it did not work for me. A completely different person was produced. I even followed your prompts. Other than that, your tutorial was really good. Thanks.

  • @willafboo488 • 1 year ago • +2

    Please do more videos of your Unity 3D | Open-World Survival Game Tutorial Series!

  • @marko_z_bogdanca • 7 months ago • +3

    Do not waste time on this method. Install the ReActor plugin immediately and start loving your work. I really mean it. You only have to give it one reference image, and it can generate other images in various angles and styles. It's insane what it can do compared to ControlNet.

  • @SantoValentino • 1 year ago • +1

    Thanks

  • @revivedsoul1099 • 1 year ago • +1

    Nice

  • @322ss • 1 year ago • +3

    IMO, ControlNet's reference_only doesn't work very well. It gives a somewhat OK likeness or similar features, but usually a more generic and duller look. It's better to train a LoRA or textual inversion for your character's face and body.

    • @revivedsoul1099 • 1 year ago

      It did, slightly. A LoRA is for one piece, right, while a character you created yourself can be used for a LoRA? Do you know a good short video on textual inversion?
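
    As a minimal sketch of how the LoRA / textual-inversion route mentioned above is used once something is trained (not from the video): the example below calls the Automatic1111 API, and the LoRA name "my_character", the embedding trigger token "mychar", and the local URL are all hypothetical placeholders; the webui must be started with --api.

    import requests

    # "<lora:my_character:0.8>" is the webui's prompt syntax for applying a trained
    # LoRA at weight 0.8; "mychar" stands in for a textual-inversion embedding's
    # trigger token. Both assume the files already sit in the webui's model folders.
    payload = {
        "prompt": "photo of mychar walking in a park, <lora:my_character:0.8>",
        "negative_prompt": "blurry, deformed",
        "steps": 20,
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    r.raise_for_status()
    images = r.json()["images"]  # list of base64-encoded PNGs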

  • @Rajamahi501 • 1 year ago • +1

    Does this process work for our personal images also? Please help me, sir.

  • @itanrandel4552 • 10 months ago • +1

    Does it only work with faces or can it be used with complete characters?

  • @pureluck8882 • 1 year ago

    Would be nice to see how to create an intro/cinematic cutscene with Stable Diffusion-generated images.

  • @Rajamahi501 • 1 year ago • +1

    This process did not work for my personal images. Please help me, sir. I gave one of my reference images and used the same prompts and the same values as you, but my results are not of my face; the character changed completely. Please help me figure out how to get the same character with different results, like you do. Please help me, sir.

    • @Dante02d12 • 1 year ago • +4

      Reference-only doesn't seem to work for images created outside Stable Diffusion. There is a workaround that might work:
      1) Put your image in the img2img Inpaint tab. We won't paint anything in, so we get the exact same picture as a result.
      2) Set the resolution to the same as your image. The fastest way is in the resize mode section: click "Resize by" and leave the resize value at 1, which keeps the same resolution as the image.
      3) Generate a picture; you will get the exact same image, except now it has been built by Stable Diffusion.
      4) Set that "new" image in reference-only, and boom! You'll notice that it influences your results this time!
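
    A rough sketch of the same workaround done through the Automatic1111 API rather than the UI, assuming the webui runs locally with --api and the ControlNet extension installed. The file names and prompts are hypothetical, the no-mask Inpaint pass is approximated with a near-zero denoising strength in plain img2img, and the exact ControlNet argument names vary between extension versions, so treat this as an illustration rather than a drop-in script.

    import base64
    import requests

    URL = "http://127.0.0.1:7860"  # local Automatic1111 instance started with --api

    def encode(path: str) -> str:
        # Read an image file and return it as a base64 string for the API.
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    # Steps 1-3: pass the external reference through img2img almost unchanged,
    # so the "same" picture now comes out of Stable Diffusion itself.
    img2img = {
        "init_images": [encode("my_character_reference.png")],  # hypothetical file
        "prompt": "portrait of a woman, detailed face",          # hypothetical prompt
        "denoising_strength": 0.05,  # keep the picture essentially identical
        "width": 512,                # match your reference image's resolution
        "height": 512,
        "steps": 20,
    }
    sd_copy = requests.post(f"{URL}/sdapi/v1/img2img", json=img2img).json()["images"][0]

    # Step 4: use the SD-built copy as the reference_only image in a new generation.
    txt2img = {
        "prompt": "the same woman walking on a beach, full body",  # hypothetical prompt
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "reference_only",  # preprocessor only; no model needed
                    "image": sd_copy,            # may be "input_image" in older versions
                    "weight": 1.0,
                }]
            }
        },
    }
    result = requests.post(f"{URL}/sdapi/v1/txt2img", json=txt2img).json()
    with open("consistent_character.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))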

  • @fibuild3763 • 8 months ago • +1

    It's not working at all.

  • @TheGhost-kk7jj • 1 year ago

    Is this possible in ComfyUI as well?

  • @dowhigawoco • 8 months ago

    I do the same thing, and I have a really good portrait of my character (like your reference picture), but FFS, why does the picture that uses the reference get a head like a balloon? The head is always huge, and yes, I use the same resolution as my reference picture. On top of that, it blocks the whole thing from changing clothes; the face reference doesn't work.

  • @ribeiro_rr • 5 months ago

    👎

  • @AniCho-go-Obzorov-Net • 1 year ago • +1

    bla bla bla blaa