We are almost there. The ultimate version of Stable Diffusion is almost here.
It will be a Blender addon that combines the recently released Blender skeleton for multi-ControlNet.
Combined with the next version of this, it will allow us to assign a prompt, hypernetworks, and multiple ControlNets to each skeleton and/or "control mesh", and to the background.
And once text-to-3D, AI animation, and image-to-3D are also inevitably implemented as Blender addons, the fusion of the 2D and 3D workflows will be complete.
And with it, the full democratization of animation.
It will be glorious, and at the rate we are going, it will be here sooner than we realize.
The official Stable Diffusion Blender add-on is out, in case you haven't seen it 😉
Thanks for sharing and explaining. SD (with the help of AUTOMATIC1111) is such a wondrous toolbox. Hard to keep up with all the new stuff popping up!
Thank you for taking the time to share your knowledge, another excellent video.👍
This is perfect for parents who have young kids who refuse to stand still for portraits.
Multiple characters in an image was the one thing that pushed me to put my fist through the monitor. 😄
I wonder if it is possible to make 2 subjects interact with each other, like a handshake or a hug?
Next to cover: the LoRA weight extension? Thank you for dissecting those extensions and their mechanics to create videos explaining what, where, and how.
You can lower the LoRA weight at the end of the LoRA tag. Normally it is :1, but you can change that to :0.55 for 55%.
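As a concrete sketch of that syntax (the LoRA name here is just a placeholder — use whatever your file is called):

```
<lora:my_style:1>     full strength (100%)
<lora:my_style:0.55>  reduced to 55%
```

The last number in the tag is the weight, so you can dial a LoRA's influence up or down per prompt.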
The interesting part is, the language model used in SD is in theory easily capable of allowing more control. But the models are not trained for that purpose yet.
@00:39 So we're just gonna ignore a random bearded head hiding in the fur between the two Washington-Lincolns?
Hold on to your Pap😄
Tortoise TTS got a bunch of updates in a recent month - faster inference, finetuning. Hope you will consider making an update video on it
What a time to be alive!
You know why he is always holding onto his papers? Because he craps his pants in surprise too often :)
Fantastic vid mate. I was JUST asking about how to do this technique on reddit. Any chance on getting more info or the text doc for the different ratios for the regions. Love the AI video in the corner too.
I can't get this to work at all. It's only generating a single character, and I don't understand what I'm missing. I'm using the same settings as the video, I've restarted and updated the webui, updated all my extensions, tried different models and different samplers, and made sure Latent Couple is enabled. Out of 40 generations, not a single one has worked; I get one character or some weird merge of the two.
There are two steps to it:
1. Define your areas in the Latent Couple box and enable it
2. Define your area sub-prompts, separated by AND
If you only have one character, like I show at 4:38, then you might have forgotten step 2.
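A minimal sketch of step 2, with one sub-prompt per area joined by AND (the subjects here are just placeholders — the first line covers the whole scene, the rest map to your defined regions):

```
two people in a park
AND a man in a red jacket, standing on the left
AND a woman in a blue dress, standing on the right
```

Without the AND-separated sub-prompts, everything collapses into a single combined subject.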
@@NerdyRodent Check and check. Done all that. Not working for me. I don't get it.
@@IntiArtDesigns The only other thing I can think of is having too few steps, so probably worth raising an issue in the GitHub with screenshots to show.
@@NerdyRodent Ok, thanks. I've seen others raising the same issue i have, so hopefully, whatever it is, will get fixed soon. This looks like so much fun, i can't wait to use it.
So, do we need ControlNet to run Latent Couple and the other extensions? I see all the videos just using it.
HOLDING ON TO MY PAPERS!! SAY IT MAN JUST SAY IT!!!
Do you have a video on how to do the exact same thing on ComfyUI?
Hehehe 🤭
Tickled by whiskers 🐭
Using chair and sofa images, I have finetuned a stable diffusion model. However, the model does not generate a living room using the chair and sofa.
Please help me !!!!
I installed it, copied your settings and enabled it, and it just did the same as normal and merged the prompts into one. I set it for a bird in the top and a cat in the bottom, and I got a cat-bird.
How do you stop it from mashing two subjects together? I don't want the half faces (one half one person, the other half another person); I want two entirely separate characters.
Oohh! Would you be into sharing your parameters txt file cut-and-pastes for the areas?
I'm using Regional Prompter to do roughly the same thing as Latent Couple. My question now is how do you generate two particular people for the image. Like, I have created a LoRA for my face and a LoRA for my wife's face. If I put each LoRA in a separate prompting area, with a prompt for a man in one and a prompt for a woman in the other, it just gives me a man and a woman whose faces are a blend of the two LoRAs. Any suggestion on how to resolve this?
Great! Very useful! Unfortunately, I speak bad English; maybe that's why I missed it: do I always have to enter the numbers manually in the "Extra generation params"? Isn't there a formula, or do you just have to guess? Or should I just copy out the values shown in the video?
You can use the defaults for where the areas are going to be, or you can choose your own 😉
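For reference, the Latent Couple region fields look something like the following (these are the defaults as I recall them — a full-frame background plus a left/right split — so double-check against your own install):

```
Divisions: 1:1,1:2,1:2
Positions: 0:0,0:0,0:1
Weights:   0.2,0.8,0.8
```

Each comma-separated entry corresponds to one AND-separated sub-prompt, so the first covers the background and the other two cover the left and right halves.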
What is this software? Is it possible to run it online and then access it with my Windows 8 PC? Are there any setup guides or videos on how to do this? How much does this all cost? I'm an old-school professional graphic designer, working since 1998, meaning I'm excited to start becoming a master of this amazing AI toolset. Thank you in advance for any help!
This is Stable Diffusion (links in the description). It can run on Microsoft Windows, but it's best to start out with something like the Google Colab as, unless you're a nerd, you probably won't have a computer capable of running it. There you get a free Linux system with a GPU that you can use via your browser - github.com/camenduru/controlnet-colab
Cool tech!
It seems the user interfaces for both Composable LoRA and Latent Couple have changed completely since this video was uploaded. Can anybody provide guides, or possibly links to guides, for the current (as of Sept. 23, '24) versions of those two plugins for Stable Diffusion (Forge)?
I am at a complete loss for similar solutions.
Great video, thx a ton! One question:
Instead of different LoRA models, can we compose using different Dreambooth models?
I tried this again today and still just get Nightmare Fuel™. I see there is Regional Prompter now, but I get equally bad results. I even tried the dog/cat you demoed, but I get a dog/cat asking to be put down :(
Great video! But I have a problem: the Latent Couple section does not appear for me 🤧
Hi. I had the same issue. I updated the webUI to the latest version and that fixed it for me.
This extension has been updated and it works with different words now. New video?
It hasn’t been updated for over a year. It will still work exactly the same with old versions of automatic, though may have issues with newer ones
Very nice
I get an error when clicking "Visualize" under the Rectangular tab. Is it working for anybody else?
Interesting, looks like 1.5 handles "photo of a dog and a cat" better than 2.1 ...
Why do your LoRAs have icons on them?
I just like seeing previews. You don’t have to have them, it’s simply a personal choice 😀
This extension seems broken. Are there any alternatives you are aware of to compose regionally?
@@NerdyRodent Sadly, it does not work with the latest Gradio. Could you suggest an alternative? Thanks
@@NerdyRodent Also, I use the webui auto-launcher, which doesn't allow patching the unofficial SAG extension. Stuck here.
I think it would be easier just to use Blender or an equivalent to generate guide images for the algorithm rather than using ControlNet or extensions like this. A slightly steeper learning curve, but much better potential results.
Is there a way to use this when generating images from text? The way it's been shown here I don't see much advantage over using img2img and inpainting over one of the subjects with a high denoise value to get it to draw a different subject.
Man, I installed Latent Couple and even so, the box to use it does not appear. What can it be?
The cmd log says: cannot import name 'CFGDenoisedParams' from 'modules.script_callbacks'
Good video! But I guess this can only be used for getting the AI to generate two generic subjects? What if we want to train two specific people using their photos? How could we make that happen? Thank you very much!
@@NerdyRodent So does it mean I need to train the two subjects with SD + Dreambooth first, and then generate a new image using the Latent Couple extension? I have tried to train two men at once, but SD + Dreambooth ended up mixing them and generated two identical people instead of two different persons. Any ideas, maybe? Thank you very much!!!
@@SweetieNerdygirl no, no training required. You can just use it straight away on any model exactly as shown 👍
How do I install the "fork with masks" on Windows 10 - SD AUTOMATIC1111???
How do you put an image preview on the LoRA files ?
The Civitai Helper extension will download images and prompts, and it even allows you to set your own thumbnail.
I don't have the control model showing up in my thing
I really gotta update my 1060 6gb soon....
Soooo, why doesn't it show up after I installed it???? This is literally driving me bonkers...
@@AlexGNewMediaJournalism Just update the webui; if that doesn't work, install a new copy of it and just copy over the parts you need.
Bonus points for Vox Machina, even if you did mispronounce it :D
Anyone able to use with additional-network-extension?
I can't get this to work
Unfortunately A1111 is pretty much unused at this point. If you're lucky, the update may work - sd-webui-regional-prompter - but most stuff is done in ComfyUI now
Two months later, and it still can't get hands right.
It's over
Latent Couple is not listed in Available. Would you know why?
This should help with porn scenes for sure
Oh wow 😯 we are getting to the point where we need a better interface that can take advantage of all these tools and make them easier to use. It's so cool to see what is happening, though!
Yah, someone must be cooking up a Poser-type interface with 3D posable hands, etc.
Seriously. Outside of Invoke and all these Photoshop clones, there hasn't been a simple GUI to quickly integrate all these tools. Will it be Adobe or Blender, due to the ControlNet functions? Who knows.
@@audiogus2651 Funny timing! Aitrepreneur just posted a video showing the new special Blender model which lets you pose the skeleton, hands, and feet, and you can save out the OpenPose colored skeleton, but also a depth map and a canny map of the hands and feet.