New IP Adapter Model for Image Composition in Stable Diffusion!

  • Published 21 Mar 2024
  • The new IP Composition Adapter model is a great companion to any Stable Diffusion workflow. Just provide a single image, and the power of artificial intelligence will analyse the composition itself - ready for your use! (A rough diffusers sketch for anyone working outside a UI follows this description.)
    Check out some of the things you can do with it :)
    Want to support the channel?
    / nerdyrodent
    Links:
    huggingface.co/ostris/ip-comp...
    == More Stable Diffusion Stuff! ==
    * Faster Stable Diffusions with LCM LoRA - • LCM LoRA = Speedy Stab...
    * SD Generated Avatar Animation - • Create your own animat...
    * Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    * ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
    * Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    * One image = A Consistent Character in ANY pose - • Reposer = Consistent S...
  • Science & Technology
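For anyone who prefers scripting over ComfyUI or A1111, the adapter linked above can also be driven from Python with diffusers. The sketch below is not from the video: the weight file name, the use of the h94/IP-Adapter ViT-H image encoder, and the image_encoder_folder argument (needs a recent diffusers version) are assumptions based on the Hugging Face page, so check the repo before running.

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
from transformers import CLIPVisionModelWithProjection

# ViT-H image encoder normally paired with SD 1.5 IP-Adapters
# (assumption: the composition adapter expects the same encoder).
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder", torch_dtype=torch.float16
)

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
).to("cuda")

# Weight file name is an assumption taken from the linked Hugging Face repo.
pipe.load_ip_adapter(
    "ostris/ip-composition-adapter",
    subfolder="",
    weight_name="ip_plus_composition_sd15.safetensors",
    image_encoder_folder=None,  # encoder already supplied above
)
pipe.set_ip_adapter_scale(0.8)  # how strongly the composition reference is followed

composition_ref = load_image("composition_reference.png")  # any single image

image = pipe(
    prompt="a watercolour painting of a fox in a forest",
    negative_prompt="low quality, blurry",
    ip_adapter_image=composition_ref,
    num_inference_steps=30,
).images[0]
image.save("composition_result.png")
```

Lowering the scale leans more on the prompt; raising it sticks more closely to the reference layout.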

COMMENTS • 43

  • @ClownCar666 3 months ago +4

    Thanks for sharing! I've been messing with IP Adapter all week; it's so much fun!

  • @LIMBICNATIONARTIST 3 months ago +2

    Impressive!

  • @Niffelheim 3 months ago

    Hey Nerdy Rodent, thanks for the tutorial. Do you know if this can be applied together with a pose ControlNet? I want to design a character from different views (front, back, profile) and maybe transfer a style or a LoRA character for consistency. Any tips?
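There is no reply in the thread, but IP-Adapters and ControlNets can be stacked, at least in diffusers. The rough sketch below uses the standard SD 1.5 OpenPose ControlNet and the generic SD 1.5 IP-Adapter as stand-ins; the composition adapter from the video would be loaded in their place, as in the sketch under the video description. Model IDs and file names here are common defaults, not confirmed by the video.

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# OpenPose ControlNet for SD 1.5, to pin the pose per view (front/back/profile)
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Any SD 1.5 IP-Adapter loads on top of the ControlNet pipeline; swap the repo
# and weight name for the composition adapter if that is the look you want to carry over.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.7)

pose = load_image("front_view_openpose.png")       # pose skeleton for this view
reference = load_image("character_reference.png")  # style / character reference

image = pipe(
    prompt="character turnaround sheet, front view",
    image=pose,                  # ControlNet conditioning
    ip_adapter_image=reference,  # IP-Adapter reference
    num_inference_steps=30,
).images[0]
image.save("front_view.png")
```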

  • @DemShion 3 months ago +1

    Has anyone managed to get this working with a Pony checkpoint? It works with other models derived from SDXL, like Animagine and Jugg/RealVis, but not Pony for some reason. Curious if it's just me.

  • @godpunisher 3 months ago +2

    Nerdy's content is amazing. Are you a mind reader? 😁

  • @farsi_vibes_edit 3 months ago +3

    I wish I had found your channel earlier😢🤯❤❤🔥

  • @Remianr 3 months ago +4

    6:54 Meme material haha!

  • @BabylonBaller 3 months ago +2

    Negative Prompt: "Bad Stuff Such as Evil Kittens" ROFL!

  • @Jcs187-rr7yt 3 months ago +1

    Are there 1.5 models that this doesn't work with? I keep getting a 'header too large' error, and that usually happens with a model mismatch, but I'm using the 1.5 adapter.

    • @NerdyRodent 3 months ago +2

      Inpainting models may not work, but just your standard ones should all be fine

  • @holysabre8499 3 months ago +1

    What I really want to see is a working update of Lucid Sonic Dreams, or something similar that's user friendly. Any idea of anything similar in the works, or how to even achieve a similar effect using something else?

    • @NerdyRodent 3 months ago

      Lucid Dreams is slightly difficult on diffusion models 😞

  • @kariannecrysler640 3 months ago +2

    I saw the rodent in the sky!!!! I have the witnesses!
    🤘😉

  • @dudufridak1145 3 months ago

    I like the thumbnail for this video.
    I wonder if you can create an AI for generating similar images, compositing text (with effects) like that.

  • @KDawg5000 3 months ago

    What preprocessor do you use when using this with Automatic1111?

    • @NerdyRodent 3 months ago +1

      It’s just the same as usual, like when using ip-adapter-plus or light

    • @KDawg5000 3 months ago +1

      @NerdyRodent Hmm. It was giving me error messages no matter what I tried. Note: regular IPAdapter and the Face ID versions work. I'm not at home currently, but I can share the messages later (in case anyone cares or is having the same problem).

    • @reallifecheatcodeaudiobooks 3 months ago

      @KDawg5000 Please let me know if you find a solution for this. I am also struggling to make it work.

  • @ramn_ 3 months ago

    I installed it in Forge and it ruined my installation. Now it generates only deformed and random images. I tried everything and couldn't fix it; I will have to reinstall.

  • @sandy66555 3 months ago +3

    No hugging cats? *giggles*

    • @NerdyRodent 3 months ago +6

      Cat should never be used in a prompt! 😱

  • @wakegary 3 months ago

    that tiger needs help and I think we should act on it.

  • @MarcSpctr 3 months ago +2

    Can you make a video on all your favorite AI tools and ComfyUI workflows?
    Like Google's Film Interpolation, StableDiffusion, RVC Webui, MusicGen, etc.

    • @comfyui 3 months ago +1

      Complete Menu

  • @peoplez129 3 months ago

    Images come out all garbled on A1111

  • @Hooooodad 3 months ago +1

    Mate, can you show how it's done in Automatic1111 / Forge, please?

    • @NerdyRodent 3 months ago +1

      Select the model and your composition image (like with ComfyUI). Win!

    • @Hooooodad 3 months ago

      @NerdyRodent I tried and failed miserably; it doesn't work for me on Forge. Do you use a preprocessor?

  • @kallamamran 3 months ago

    Just feels like img2img

    • @NerdyRodent 3 months ago +2

      Or perhaps how you'd LIKE img2img to work, but it doesn't? :)

    • @MyAmazingUsername 3 months ago

      This absolutely isn't like img2img whatsoever.
      Img2img keeps the exact pixels, colors and layout.
      This new technique is extremely flexible and can do anything; results are more "inspired by" the input than "exactly the same as" it.

    • @kallamamran 3 months ago

      @MyAmazingUsername Img2img definitely doesn't keep the exact pixels! If it did, img2img would be useless!

  • @ForeverNot-wv4sz 3 months ago

    I can't seem to get it to work in Auto1111. It runs, but the image comes out very painted/pastel/distorted. The same thing happened to me in ComfyUI until I downloaded the two encoders - CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors - and added them to the \ComfyUI\models\clip_vision folder; then it worked (a rough download sketch follows this thread). So I thought maybe that's the issue with the Auto1111 version? However, I can't find where to put these two encoder files for Auto1111. I tried the extensions\sd-webui-controlnet\annotator\downloads\clip_vision folder, but that didn't work.
    I've also had issues just getting the IP composition model to appear in the GUI dropdown: when I click on IP-Adapter in the ControlNet dropdown, it lists ip-adapter-plus etc. but no composition one, unless I click the refresh button next to the model dropdown - then I can select ALL the models (even the ones not for IP-Adapter) and THEN I'm able to load it. But like I said, it's all foggy/blurry when I make the image. I have ControlNet v1.1.441, and my Auto1111 is version v1.6.0. I'm not sure what else to do.
    EDIT: I just updated my Auto1111 to version v1.8.0 and am still having issues.

    • @NerdyRodent 3 months ago

      Yes, you do unfortunately need to click refresh to get the full model list if you select the ipadapter filter. As for blurry images, I can’t find any way to replicate that in either Comfy or Forge 🫤

    • @ForeverNot-wv4sz 3 months ago

      @NerdyRodent Ah, I see.. well, at least it's good to know the refresh feature is meant to work that way. Perhaps I need to upgrade Auto to Forge; maybe that's the issue here.

    • @KDawg5000 3 months ago

      Are you using a preprocessor? I put the two "composition" models in my ControlNet folder and can get them to show up with a refresh, but I don't know which preprocessor to use. Any of the ip-adapter ones I try never do anything - meaning Automatic1111 just skips using ControlNet (like it does when your ControlNet settings don't make sense).

    • @reallifecheatcodeaudiobooks 3 months ago

      @NerdyRodent Can you please let us know what preprocessor you use in Forge? I can't get this to work without the proper preprocessor, and if I choose ip-adapter_clip_sdxl or ip-adapter_clip_sdxl_plus_vith it gives errors and doesn't work :/

    • @NerdyRodent 3 months ago

      It’s just the same SD 1.5 CLIP vision model as normal - like you’d use with ip adapter plus, ip adapter light, ip adapter full face, etc.
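As a footnote to the encoder fix described in this thread: the two CLIP vision models named above are published by LAION on Hugging Face, so they can be fetched with huggingface_hub. This is only a sketch - the repo IDs and the open_clip_pytorch_model.safetensors file name are assumptions to verify on the Hugging Face pages, and the target folder matches the ComfyUI layout mentioned in the thread.

```python
import os
import shutil
from huggingface_hub import hf_hub_download

# Destination used by ComfyUI for CLIP vision encoders (path from the thread above)
CLIP_VISION_DIR = os.path.join("ComfyUI", "models", "clip_vision")
os.makedirs(CLIP_VISION_DIR, exist_ok=True)

# Repo IDs and source file name are assumptions - check the Hugging Face pages first.
encoders = {
    "laion/CLIP-ViT-H-14-laion2B-s32B-b79K": "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
    "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k": "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
}

for repo_id, target_name in encoders.items():
    cached = hf_hub_download(repo_id=repo_id, filename="open_clip_pytorch_model.safetensors")
    shutil.copy(cached, os.path.join(CLIP_VISION_DIR, target_name))
    print(f"Copied {repo_id} -> {target_name}")
```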