Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
Can't find the free download button. Do I need to donate on Patreon to download?
@@sebastiankamph find it. Thank you :)
I recommend you cascadeur to make model animation, it uses AI that makes more fluent animation.
Thank you very much.
@sebastiankamph bro help me, every time I try to make images using ControlNet it says out of CUDA memory, even when I set everything to low quality!!!
I get easily frustrated with these complicated steps... but I want to THANK YOU for the clear way you give the explanations, and how calmly you talk!
Happy to help, glad you're enjoying the content!
Man you have been THE most helpful person towards getting me started here. Thanks a lot for the very easy to understand tutorials. This is all a lot harder than I thought it was going to be.
Happy to help, and glad that you find my guides useful! I'm sure you'll get the hang of it 🌟
I'm 1 month in and I've been getting jiggy, how are you 3 months in, brother?
Great content as always. Very useful information. By the way, two antennas met on a roof, fell in love and got married. The ceremony wasn't much, but the reception was excellent.
Boom, nailed it! 💫
Wow, I'm just starting with Stable Diffusion and this is just mind blowing. We had all these tools without knowing it; for me it's like discovering fire, and all that knowledge is thanks to your tutorial!
Happy to hear that. Welcome aboard! 😊
I don’t usually subscribe to a channel, but I saw your prompt styles are free, so I subscribed instantly. This is the only way I can show my respect as a cheapskate.
Thanks! Surprised no one is talking about ControlNet 1.1. The additional Lineart ControlNet is a massive game changer for ALL line artists everywhere. This news should be everywhere, but I barely hear anyone talking about it. Never in history did we have an AI able to take a line art image and perfectly fill it in with any style whatsoever, until now, yet I hear no one talking about it.
Also, ControlNet has just released a TEST version of their lighting control, so you can take any image and control the complete lighting composition for that piece. You can specify where the light source is located as well as what in the picture is being illuminated. The tool is freaking insane. I'm honestly surprised barely anyone is mentioning it.
Whoa, the lighting control sounds crazy. Do you have a link to some kind of video? Is it available yet?
Hi, when I saw the video title, I hoped you would cover the new preprocessors and what works best and especially with what model to achieve XYZ.
I felt the video would be too long, so I'm mainly covering the basics and then doing follow-up videos that are more in-depth. That was the idea at least 😅
Good video, but I'm surprised you didn't cover the new stuff in ControlNet, like OpenPose Hands + Face. Those seem like by far the biggest points of interest now, as you can easily get near-perfect hands and faces posed exactly how you want.
Would love a video focusing on it, since I've been tinkering with it and it works, but I don't know if I am actually using everything as well as I could be haha
I haven't updated yet and didn't even realise that was a thing in ControlNet 1.1. I agree that it would be great to cover that as it seems like a great feature. Maybe he didn't want the video to be too long, so decided to cover install and the basics?
It's mind-boggling that basically no one is talking about ControlNet 1.1 with all these new additions. The Lineart one as well is mind blowing: I ran ALL my old line art through it, and it can generate the exact line art image filled in with any style I wish. This should be top news. Also, they just released their new TEST version where you can control lighting, and it's insane how good it is. I can specify exactly where the light source is located, the intensity and what color, as well as what in the picture should be illuminated to what degree by the light. For example, making a sword glow from all the light bouncing off it. This stuff is a massive game changer for anyone independent.
@Kern Sanders I'll check this out, thank you!
@@kernsanders3973 Thank you for speaking about this in more detail. I am very excited about this. I can't wait to make my images look right!! I have just honestly been settling for whatever pops out and just using whichever images suck the least, but this makes me very happy. This changes the game completely!!
@@kernsanders3973 I've never given ControlNet a try since my version seems unable to find the ControlNet models properly for some reason. But by the sounds of this, it's worth the effort to do some tweaking and get it working properly.
I believe what Sebastian forgot to mention is that OpenPose is for SD v1.5. If you try to use it with v2.0 or 2.1 models, it will not work, so make sure to use models trained for 1.5!
ControlNet 1.0 was already working great for me, but I had to adjust some settings to avoid some issues. 1.1 eliminated those issues and everything works so much better.
you are the best! i spent lots of time trying to fix controlnet and your video saved my time
Thank you so much my friend
beginner friendly as always, good job on this video Sebastian!
I was really happy to find that an Openpose option had been added to PoseMyArt. I hope that they keep at it and expand on the feature to match new controlnet options.
Needs IK in my opinion. But I'm not sure how much it matters as long as ControlNet can't use foreshortened poses it will continue to fall apart in any but the most basic of cases. You still get a better result not using OpenPose and instead using sketch input.
@@JohnVanderbeck Hi John!
I've added IK to PMA, please check it out and let me know what you think! (:
Also, you can use the normal export in PMA to get better results with foreshortened poses! (:
@@JohnVanderbeck in my opinion I would ask you to say: funny cat
@@PoseMyArt Amazing!
@@PoseMyArt So I see the IK option in the settings but don't see how it actually WORKS. Normally you would have hand and feet targets you could move and rotate which would in turn drive the IK chain, but I don't see that here. I'll keep playing with it though because if I can get it to work it will be a game changer.
Thanks for the updates, always looking forward to when you do them.
Great video. The new ControlNet looks very promising.
No mention of the model improvements in 1.1? Most of them are just "we improved the data set and model training for better quality", but there are a few standouts: (1) The segmentation model now supports the COCO protocol, which means an extra 182 colors/categories. (2) OpenPose can now track face orientation, and can also track hands better with more detailed OpenPose skeletons. (3) The Lineart model and preprocessor are entirely new, and can convert lineart into images, including hand-drawn lineart (like a more structured "scribble"). (4) The Shuffle model and preprocessor are entirely new; they basically distort the input image enough to keep the style but lose the details, and the model was trained on those, so it's basically ControlNet-based style transfer. (5) Instruct-Pix2Pix is brand new as well, and is trained with a mix of normal prompts and instruction prompts, so it can do instruction-based edits entirely in ControlNet, applied to any base model.
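Point (4) is easy to picture with a toy numpy sketch: scrambling fixed-size blocks of an image preserves its global colour statistics (the "style") while destroying the spatial layout (the "details"). This is only an illustration of the idea, not the real Shuffle preprocessor, which uses random flow warps rather than block permutation:

```python
import numpy as np

def shuffle_preprocess(img: np.ndarray, block: int = 8, seed: int = 0) -> np.ndarray:
    """Toy stand-in for the Shuffle idea: permute fixed-size tiles so the
    pixel population (colour/style statistics) survives but detail is lost."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    # Cut the image into (h//block * w//block) square tiles.
    tiles = (img.reshape(h // block, block, w // block, block, c)
                .swapaxes(1, 2)
                .reshape(-1, block, block, c))
    rng = np.random.default_rng(seed)
    tiles = tiles[rng.permutation(len(tiles))]
    # Reassemble the shuffled tiles into an image of the same shape.
    return (tiles.reshape(h // block, w // block, block, block, c)
                 .swapaxes(1, 2)
                 .reshape(h, w, c))

img = np.arange(32 * 32 * 3, dtype=np.uint8).reshape(32, 32, 3)
shuffled = shuffle_preprocess(img)
# Same multiset of pixels, different arrangement.
print(np.array_equal(np.sort(img, axis=None), np.sort(shuffled, axis=None)))  # True
```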
ip2p is really interesting, I've been playing with it and it kind of amazes me every time. Have to check the new improvement to poses with face and hands as well.
very very cool Sebastian!
Thank you very much! 😊😘
Brilliant video, the styles are great, your sponsor is a keeper (those pose models will come in handy for storyboards for sure!). Thanks!
Glad you enjoyed it!
Mine wasn't a 1:1 result of the OpenPose input. Maybe because I didn't have any models for ControlNet. But I absolutely love this tutorial. It opened up new doors for me to explore. Thanks a lot.
I don't have any models, do you know how to get them?
Great video as always Sebastian!
Thank you!
Gugugaga i need protection
Roy from PoseMyArt also added a nifty hands option to the OpenPose export. It works great!
As an SD noob I think there's a need for making tutorials customized to specific types of SD such as A1111 tutorials, ComfyUI tutorials, Forge, etc. The biggest confusing factor (for me) is trying to figure out which additional files are required for certain things to work. For example, I tried to install the SSD-1B "thingy" (Lora?) but noticed if I selected it in the "Add network to prompt" drop down it made my renders look horrible. If choosing "none" the renders looked great. Just a random example but an opportunity for content ideas for the experts lol
Thanks for the tutorial!
Thanks really nice and helpful video. I did not realize control net got so much better.
earlier you showed how to add VAE to the UI at the top. Is there a way to add 'clip' at the top of the UI as well?
For another video idea, I think it would be cool if you went over each controlnet model and how to use them. Openpose is fairly straightforward, but the other ones are a lot harder to use. I don't even know what half of them are supposed to do. (I'm looking at you, Hed, Seg, and Mlsd!)
This video is very helpful! Thanks Seb...Cya
You're very welcome! 🌟
Thanks for a great guide, you're helping me out so much! I've got a question: is Canny best used for buildings and inanimate things, and OpenPose for humans? I'm new to this and I tried both and got pretty cool results with Canny + human poses, but maybe it's not the best way..
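For intuition on that question: the Canny preprocessor reduces the input to a binary outline map, so it locks the whole composition (great for buildings and rigid objects, but also usable for people when you want the exact silhouette), while OpenPose reduces a person to a sparse skeleton and leaves everything else free. A rough numpy stand-in for the edge-map step; the real preprocessor uses `cv2.Canny`, which adds smoothing and hysteresis on top of a gradient like this:

```python
import numpy as np

def edge_map(gray: np.ndarray, thresh: float = 0.25) -> np.ndarray:
    """Crude edge detector: finite-difference gradient magnitude,
    thresholded to a binary outline map."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(np.uint8)

# A white square on black: only its outline survives, the interior is dropped.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = edge_map(img)
print(edges[8, 8])  # 0  (flat interior: no edge)
print(edges[8, 4])  # 1  (boundary: edge)
```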
2:30 There are some options for smaller files either from Civitai or Huggingface.
Man, glad i found your channel, great videos and very helpful, thank you!
Glad they're helping! 😘
Oogabooga little booger
Big plus, ok best wishes from your neighbour country
great video on the update. thanks!
thanks sebastian youre amazing
Hey Sebastian, Quick question. In your interface I see buttons 16:9, 4:3, 1:1. How do you get those? It would be amazing to be able to switch between those modes with buttons.
Aspect ratio, search in extensions tab
Come for the dad jokes, stay for good AI content.
We all love a good dad joke to brighten the day
Very good tutorial. I created a pose similar to a "handstand" position. However, I tried lots of different models and prompts, but all the results are more than bad. Why does it seem almost impossible to get an image of a person doing a handstand, although ControlNet has a very exact pose for it?
You know why I say mucho to my spanish friends?
It means a lot to them.
Excellent! 🌟🌟🌟
not convinced with controlnet. thanks for the video anyway
Hi, I have tried to figure this out but can't. Under ControlNet, there is no "ControlNet Unit 0", I only have the "Single Image" option... any help would be much appreciated.
Yeah but as soon as the scene is more complex, like two people hugging, it's really hard to not get multiple arms and legs. I've yet to figure out how to do it properly.
Its now been 6 months, have you made any progress?
Thanks a lot for your videos mate.
Happy to help!
I love your tutorial. I just beefed up my computer and installed Stable Diffusion a few days ago, so I guess I'm a newb. I wasn't able to install ControlNet, however :( I installed Stable Diffusion Auto1111 and about 12 extensions, but... I'm starting to believe the ControlNet extension was removed from the GitHub repository, maybe recently. Am I going nuts? Are you seeing the extension on GitHub?
Nice guide and all, but i gotta ask man, are you swedish by any chance ?
Sure am. Did my lovely accent and stunning Nordic looks give it away?
Really awesome tutorial, thanks a lot, it helped me!
Glad it helped!
It's hilarious how some people say AI art doesn't take any effort. Boy how little they know! This makes conventional 3d modelling programs and advanced photoshop seem like a walk in the park...
I noticed that A1111 automatically downloads preprocessors. I wonder if it will download the OpenPose models as well if I select them from the dropdown menu.
Can you please do a video on removing backgrounds and explaining the settings. Thanks!
Very nice! I'll try :)
Thanks
I love the puns, never stop!
I got you! 😊
Ok, it... doesn't work. I set everything the same, except maybe the checkpoint, because I don't know where to find it, but I tried multiple checkpoints anyway, so I guess the problem doesn't come from there. I haven't entered a style yet, but as I understand it, that's just an automatic prompt, so nothing to do with ControlNet... I am lost. It generates a picture, but the pose of the model is completely random and doesn't follow the picture in ControlNet at all, whatever the settings... :(
great video
Why isn't the tile model working as described? Whenever I try to use it, I simply get an enlarged blurry version; it never attempts to redraw tiles, no matter the settings. I updated everything, followed multiple guides to a T, and double and triple checked everything.
What was the keyboard shortcut for the weights in the woman waving?
Why do you prefer Mikubill over Automatic??
Does LoRA not work when using ControlNet? I get the same face with the LoRA and without it, when I only want her face from the LoRA and her pose from OpenPose.
Posemy is amazing!! Thanks
I have to say, the dad jokes made me subscribe
Glad to have you aboard!
Should we use the .pth files or the .safetensors files (smaller size), as shown on other YouTube channels like Albert Bozesan's?
Thanks for the vid! I managed to install ControlNet and a few extensions, BUT Stable Diffusion's window continues to show the default interface plus the ControlNet interface at the bottom... And ControlNet doesn't work..
I noticed here you rendered in 1024x1024. Is it better to stick to 512 or 768, as the models are trained for those resolutions? This is what I've heard.
Is it then ok to go higher without suffering a quality loss?
Thanks for the great content btw, subbed.
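On the resolution question: SD 1.5 models are trained around 512px (SD 2.x around 768px), and rendering far above the native size in one pass tends to produce doubled subjects and limbs; the usual workflow is to render near the native size and then upscale with hires fix or img2img. As an illustration only (this helper is not part of A1111), one common rule of thumb is to keep the short side at the native size and snap both sides to a multiple of 64:

```python
def sd_dims(aspect_w: int, aspect_h: int, base: int = 512, snap: int = 64):
    """Pick render dimensions for a given aspect ratio: short side at the
    model's native training size (512 for SD 1.5, 768 for SD 2.x), both
    sides rounded to a multiple-of-64 grid."""
    if aspect_w >= aspect_h:
        h = base
        w = round(base * aspect_w / aspect_h / snap) * snap
    else:
        w = base
        h = round(base * aspect_h / aspect_w / snap) * snap
    return w, h

print(sd_dims(16, 9))  # (896, 512) -- then upscale with hires fix
print(sd_dims(1, 1))   # (512, 512)
```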
Actually, the yaml files are downloaded with controlnet. They were there before I added the pth files
You get a like solely based on the flag joke, loved it!
Hah, thank you 😊
OpenPose and MediaPipe should fuse together and make more advanced preprocessors/models. MediaPipe is way better at keeping facial expressions, and we still need something for holding objects like weapons, purses, umbrellas, etc.
💯
dude, Thanks heaps for your tutorials, I have a question I can't find an answer to: How can I batch generate pose images from a bunch of frames? I have done it accidentally in stable diffusion, but I can't replicate it and it's driving me nuts.
Thanks Sebastian for the info. I updated the Automatic1111 ControlNet extension to 1.1, but I can still use my old fp16 safetensor models, right? Or do I need to update to the 1.1 versions?
I'm using Colab, how do I install the models? I have the OpenPose editor, but the only options are listed in the preprocessor tab. If I select OpenPose, I don't have any options under the models tab.
9:25 How did you update the weight that easily? Is there a shortcut or a macro?
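The shortcut is Ctrl+Up/Down with the prompt text selected: A1111 wraps the selection in attention syntax as (text:weight) and nudges the weight in steps of 0.1. A tiny sketch of what that editing action does to a prompt fragment (simplified; the real editor also handles nested emphasis and other bracket forms):

```python
import re

STEP = 0.1  # A1111's default Ctrl+Up/Down increment

def bump_weight(fragment: str, delta: float = STEP) -> str:
    """Mimic the Ctrl+Up/Down shortcut on a selected prompt fragment:
    wrap it as (text:weight) and adjust the weight. Handles only the
    simple (text:1.2) form."""
    m = re.fullmatch(r"\((.+):([\d.]+)\)", fragment)
    if m:
        text, w = m.group(1), float(m.group(2))
    else:
        text, w = fragment, 1.0
    return f"({text}:{round(w + delta, 2)})"

print(bump_weight("waving"))        # (waving:1.1)
print(bump_weight("(waving:1.1)"))  # (waving:1.2)
```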
You are the best here.
Thank you! 😊
@Sebastian Kamph I get an error message when running with an SDXL model. I updated to the latest version, but it only works with regular SD models.
I get "mat1 and mat2 shapes cannot be multiplied"
Any idea why?
Thank you for sharing!! It seems the T2IA color and style models don't work on this ControlNet version.
I have a question: is Stable Diffusion the same as Automatic1111? What's the difference between the two? Someone explain it for me, please.
Is there a way to use the CLI instead of the Gradio app?
But is there no longer a preview of the annotator? Especially for the Canny?
openpose models showing "none " plz help
At 9:29, my webui doesn't have styles, how do I get them? Edit: okay, I found the styles and installed them, but the output is not as good as yours.
Cool, but how come I cannot see any list of models after checking Enable? Any idea?
hey I have the same problem, no list of models
Do we have to download the files every time? When we restart it in cmd, it's downloading them again.
I'm only here for the jokes.
The jokes are a big plus.
@@brainwithani5693 and multiplication
I'm here for the amazing Swedish accent! And the jokes 😁
Is it me or are the PTH files not there anymore? Only the YAML files?
Do you know what's up with ControlNet and SDXL? The models for SDXL that I got don't even work.
Thanks for your great videos! I hoped you could have unlocked the content
Before installing ControlNet, how do you install Stable Diffusion? There is a chance to use it on stablediffusionweb, but it's not the screen that I see here, and it always gives "error". How do I get this? Can anybody help me?
Hello, I have been using ControlNet OpenPose (full). The general structure seems to be accurate, but the position of the fingers gets totally ruined for some reason. Also, at times I get a different pose as well. I am using Realistic Vision, at a 16:9 ratio resolution. What is it that I am getting wrong?
I can't download the styles. You say in the videos they are free, but when I click on it, it asks for a (paid) Patreon membership.
Does this work on 2.1 checkpoints? (I noticed there's a 10-gig file for the 2.1s.) And can you speak to those in terms of being... better?
Hi, I did everything like in the video, but no ControlNet is showing up in Scripts. It says installed and updated, the folders are there, but there's no ControlNet in Scripts; can you help me please?
Try updating your A1111: git pull in the root folder.
@@sebastiankamph Nope, already up to date. I wanted to use the OpenPose editor but there's no "send to ControlNet" button, yet it still says installed and up to date.
I've done everything exactly as described, but the image doesn't follow the pose... please help.
Triple checked every button.
Same for me
@alexnoghera657 Check if each of your ControlNet models has its corresponding yaml file in the models folder, and also pay attention to whether the Stable Diffusion model is compatible with the ControlNet model. I managed to fix it that way.
@@omerbendoly7231 what's the directory's name? I can't see the files
Hey mate, I am thinking of making a manga / comic book! So is it possible to make my characters pose like that with ControlNet?
Yep. Toughest job will be character consistency.
Hey Im following your guide, but I dont have a list of models after clicking 'enable', it is simply not there and says 'none' and no other options available
Did you download the models and put them in the folder specified?
@@sebastiankamph oh thats right! I was skipping that part because I had already used your excellent installation video, so I thought that was unneeded, thanks!
This is the main reason I work with AI; ControlNet is the future.
For sure! ControlNet is undoubtedly the king of art generative AI 🌟
Does this work in Stable Diffusion Easy 3.0?
It's on localhost, right? Is it heavy? What GPU are you using?
RTX 3080. It works with any GPU with 4+ GB VRAM.
I would like to know: based on the thumbnail of the video, I thought that this feature could extract poses from simple hand-drawn black and white line drawings. This is exactly the issue I want to confirm at the moment. However, after watching the video, I realized that the pose skeleton is not derived from simple line drawings.
I have tried it myself but it was not successful. So I want to know, can OpenPose 1.1 actually do this? Thank you!
Yes, it can
Nice video. Subscribedo.
Happy you liked it. Welcome aboard!
I followed all the steps correctly, but ControlNet doesn't show up in "settings". HELP PLEASE
When I try to install ControlNet using the URL, it doesn't work and tells me: "AssertionError: Extension directory already exists: D:\Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet", but it's just an empty folder. When I delete the folder to redownload it, it tells me: "GitCommandError: Cmd('git') failed due to: exit code(128) cmdline: git fetch -v -- origin stderr: 'fatal: detected dubious ownership in repository at 'D:/Stable-Diffusion/stable-diffusion-webui/tmp/sd-webui-controlnet''". Is there any way to fix this?
Nice tutorial, but I still can't get Stable Diffusion to create the image I want, even when using ControlNet. Why is this? All I want is to create an image from the point of view of a person being pinned to a wall, similar to how Vi pins Caitlyn to a wall in Arcane, in that one scene.
Not Working with SDXL Model :(
Hey everyone, I hope all of you are doing well. Can anyone help me with this error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'? I recently downloaded and installed Stable Diffusion, and every time I click the generate button, the above error is displayed.