Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
Please support me on Patreon for early access videos. It will also help me keep creating these guides: www.patreon.com/sebastiankamph
Your videos are the best guides for generating images using AI models. So clear, well spoken, and the dry humor gets me every damn time. Thanks for taking the time to make these videos for us and to all of the people who make these tools at no charge to us too. I know it's A LOT of work and when I think there aren't good people in the world I remember all of the devs and creators who make content for us at no charge just to share their creations.
So thank you so much!
Glad to hear you say that and I'm very happy you're enjoying the content :) Hope you'll keep enjoying the videos.
As an artist, I have sooooooo many reference images that I want to play around with. This is like 100x productivity for artists. Absolutely insane.
It increased productivity so much that I no longer have the necessity to hire artists, I can create art myself!
@@androidgameplays4every13 lmfao
I guess that's the art of making money
pun intended
Unbelievable process, you should do more workflow demonstrations like this, it's relaxing asf
Thanks, will do! Got a new weekly thing on it
I liked this. More workflow videos please.
I know your kids are sleeping when you are recording, but I also fall asleep when watching your videos. Thanks for the videos.
You have such an old version of auto 1111, so many of those options you've talked about have been renamed or not needed.
Ah, is that why I don't have the "in-paint at full resolution" checkbox? Was that deprecated for some reason?
@@Smiithrz On the Inpaint Tab it's now called "Inpaint area -- Only masked"
@@jessedart9103 Thanks :)
@@Smiithrz yah I am curious where that went. (Ps. Jesse Dart deleted their comment?)
@@audiogus2651 oh, weird. He just said it was an old feature, and was replaced with “in-paint only masked” radio button now.
Instant subscribe. I've been watching numerous videos about how to use Stable Diffusion, but this one was the best explanation, the best technique, and just flowed so well. Really well done, and super cool image in the end! Thank you for your work creating this.
Thank you for taking the time to write such a thoughtful and positive comment. Glad to have you aboard! 😊
Thank you so much! I finally understood how I can improve the quality of my images in stable diffusion. Thank you again!
Dude... I was playing around with SD and MJ for half a year now, with more or less efficient workflows and halfway usable results... but you just opened up my eyes!!
Happy it helped you 😊
Finally, I have time to watch your clip again after saving it in my playlist for awhile. This tutorial helps me a lot. ❤
Glad it was helpful!
I have learned so much from your channel. I write fiction and now I can bring the characters in my stories to life for my fans.
Happy to hear it! 😊🌟
Thanks so much for sharing this. I learned a ton on how to get hi res results.
Glad it was helpful! 🌟🌟🌟
@@sebastiankamph So there's no need to use an inpainting model?
This is absolutely mindblowing. I love it. Thank you for sharing these videos. You have a great way and tempo of explaining. Very calming and clear.
Thank you for the kind words. So happy you enjoyed it 😊
FYI, you can manually type in the resolution if the slider bar doesn't give you the number you want.
Oh whoa! I've been using Stable Diffusion almost daily for months and I didn't know such a robust workflow existed! Thank you for sharing!
Glad it helped! 🌟🤩
I had absolutely no idea you could have so much control over the generated images.
You are like the Bob Ross of Stable Diffusion, thank you for showing the workflow!
LOL i was literally thinking the same thing ..... And here i will generate a happy little tree :)
Nice video, very helpful - again! And the opening "Hey hey hey" got me, as well as the dad joke.
Glad you liked it! Thank you for the encouragement 😊
I've learned so much from you Sebastian. Also I have a friend from England who is non stop with the dad jokes. You guys might get along
You're an absolute legend , thanks especially for the free to use base prompts in the description !
Great stuff. I do most of this already, but it's great for people just starting out. When I do the face, I set the inpainting canvas size to 1024x1024, a square, since that better matches the actual area it's working with at that time, and it's usually much higher res than the actual image since it's just doing the face. Good job.
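For anyone wondering why inpainting just the face at 1024x1024 works so well: with "Inpaint area: Only masked", the UI crops a padded region around the mask, renders that crop at the chosen resolution, and pastes it back. Here is a rough sketch of the cropping idea (a hypothetical helper for illustration, not Automatic1111's actual code):

```python
def masked_crop_region(mask_bbox, padding, image_size):
    """Expand a mask bounding box by `padding` pixels and clamp it
    to the image borders.
    mask_bbox: (x1, y1, x2, y2); image_size: (width, height)."""
    x1, y1, x2, y2 = mask_bbox
    w, h = image_size
    return (max(0, x1 - padding), max(0, y1 - padding),
            min(w, x2 + padding), min(h, y2 + padding))

# A face mask of roughly 300x300 px inside a 2048x2048 image:
region = masked_crop_region((900, 400, 1200, 700), 32, (2048, 2048))
# The cropped region is then rendered at e.g. 1024x1024 before being
# pasted back, which is why the face comes out much sharper than the
# surrounding image.
```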
As usual, very useful tips and tricks! Thanks!
Amazing to see people who have mastered these great tools lml
Glad you liked it! 😊
Thank you.. This is so Educational.
Some additions:
ControlNet also has an inpainting model which can potentially improve results especially with higher denoising.
ControlNet + Ultimate SD Upscale can be used together to get some amazing upscaling results. It is somewhat similar to getting detail by inpainting, but it is automated, and ControlNet steers the process to more closely resemble the original image.
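For the curious: tiled upscalers like Ultimate SD Upscale work by covering the enlarged image with overlapping tiles and running img2img on each one, while ControlNet keeps every tile close to the original. A minimal sketch of just the tiling geometry (an illustration of the technique, not the extension's actual code):

```python
def tile_grid(width, height, tile, overlap):
    """Top-left coordinates for overlapping tiles covering a
    width x height image. Tiles advance by (tile - overlap); an extra
    row/column is added at the far edge so every tile fits inside
    the image."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# A 1024x1024 image processed in 512px tiles with 64px overlap:
coords = tile_grid(1024, 1024, 512, 64)
# Each tile would then be run through img2img (with the ControlNet
# tile model) and blended back across the overlap regions.
```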
Thanks for showing your workflow. Great video, very informative!
Glad you enjoyed it! 🌟
Excellent video, I am going to incorporate your workflow into mine. I especially appreciated the side comments on the upscalers. thank you
Thank you, glad it was helpful! 🌟
This is a great video, lots of detail, we even get some bonus information about some of the other settings in Automatic 1111. Thanks for making this Sebastian.
Been obsessed with SD for about a month now and if anyone out there is new and has a potato PC, you'll be able to learn faster by using a cloud version vs. trying to run it locally. Otherwise you have to wait for each render to complete before you can learn from your results
Thanks for sharing your process! 👍👍👍
Complete newbie. Started today. I subbed and hope to learn a lot from you, outputting quality, amazing images! Thank you so much for this!
Welcome aboard! You joined at a great time. It's easier than ever to create amazing images now. If you need help, hop on the Discord (in channel description).
Amazing run-through in this tutorial, so excited to use it.
You don't need to set the height and width on ControlNet. That's for the 'create canvas' button.
Personally I like 0.8 for the weight to allow some flexibility to the prompt. But honestly even at 0.05 it's great to give the AI a starting point.
Didn't know the canvas thing! I feel like a moron now, constantly changing the height and width haha
Good input! I corrected that in my latest tutorial.
@@Repossessionn trust me I used to too until I saw a reply to someone else about it on a discord server.
We will end up with cartoon versions of famous movies or with movies that have totally different actors, or comedy versions of dramas, etc. 😅
Awesome, much appreciation
Very useful. 👍 I hadn't thought about increasing the resolution progressively with each img2img and/or inpainting step. There is always something to learn. 😸
Thanks 😺👍👍👍
Great to hear! 😊💯💫
I have to ask Hili directly for the template prompt. I like it! 😁👍
He's a man of many talents! 🤩 Thank you 🥰😘
Nice one dude. Great information there.
Thank you! 😊
Thanks for sharing your process! I've been missing out on the upscaling step you do and have stuck with just hires fix and img2img, but this will add another dimension if I can find the right upscaler for photorealistic humans.
Thank you for making everything so clear and easy to understand (mostly understood, I'm a newbie). Liked and subscribed. 🖖🏻from👽
Just tried this technique. Works pretty decently and gives good results overall. Thank you so very much for sharing!
I'm glad it helped you. I hope you have lots of fun with it 😊
Thanks for the video, subbed!
WOW. This is crazy impressive!
Thank you 😊💫
Your dad joke got me to chuckle, definitely deserves a like. 😂
Hah, glad you liked it 😊💫
Excellent, excellent, excellent. This is gonna be great to add to my current workflow. ControlNet seems like such a powerful tool, I think my outputs are gonna be a lot better 😃
Glad it was helpful! Have fun and create amazing art 😊
thank you sir for the amazing workflow walkthrough, it helps a bunch!!
This was extremely informative and helpful, thank you
I've been a huge fan of your channel for a while now and truly appreciate the high-quality content you create. It's clear how much effort and passion you put into each video, and that's why I keep coming back!
I wanted to share some thoughts on the Patreon model. While I understand the need to support your work, labeling content as "free" while it's locked behind a Patreon subscription feels a bit misleading. I think a lot of us would be more than happy to support you directly if the subscription details were more transparent, showing clearly what's free and what's exclusive to subscribers.
Your content is absolutely worth supporting, and with a bit more clarity, I believe many more would be willing to join in on a straightforward subscription basis. Thanks for considering this feedback, and I'm looking forward to your future projects!
Thank you for the feedback. The content was initially free, but then I left my job to pursue this. And it's been a real mess to get rid of all the "free".
More workflow videos of complex put together, high quality images please!
I'm absolutely captivated by the sound! It's been such a pleasure to hear you play.💌🤟
Very interesting and to the point.
Your stuff is excellent. Thanks for all you do!
I appreciate that! 😊
Very helpful video right here! It's incredibly helpful to see the whole process in something like this. Thanks =)
Glad it was helpful! Thanks for the positive vibes! 😊
Thanks for sharing your workflow! Very helpful!!
You're very welcome! 😊😊
All this was super helpful. Thanks for sharing :)
You're so welcome! Thank you for the encouragement 😊🌟
exactly what I have been doing. Great video
Awesome! Thank you!
Thanks that was great!
Glad you liked it and thank you for the support, you superstar you! 😊💫💫
Seb, thank you for this video. I'm just starting out with SD on my Mac Studio M1. So, not as fast as PCs with hot Nvidia cards but still a lot of fun. Keep dropping these videos. I'll keep watching.
Thanks, this has really helped me!
Glad to hear it! 🌟
Kids take forever to get to bed it's not just you. Some nights you burn them out and it just makes them crankier, some nights one of them causes problems while the other wants to go to bed. Other nights they pass right out it's really one of the trickiest parts.
Very helpful! It would be great if you went over the extensions you have / recommend too. Like what gives you two styles to pick from instead of one. I also don't have the "Inpaint at full resolution" button, etc.
Now it's called "Only masked" in the "Inpaint area" group.
WooOooW gooOOOood workflowwww!
Thanks my man! 🌟
absolute game changer
Wow, it is excellent!!!!
Inpainting will sometimes not use the configured mask if you use Firefox, and will instead repaint the whole picture no matter what settings you use; I don't know why. Opera works though.
thank you so much :)
You're welcome!
They are perfect images
This specific video you did gave me very Bob Ross vibes.
They call me Seb Ross 😅
Already liked when I saw The Last of Us input.
At 5:35 you can click into the width value and type 1248 to set it.
You are the Bob Ross of AI arts
I do my best! 😘
Why did you add the prompt at the beginning of the sentence inside brackets? ()
That gives it more weight. I'm telling it "This is more important than the rest".
Thanks for the clear and concise tutorial.
Is it possible to feed ControlNet with completely different topics/prompts? Say I want the woman to look like an island from above. How can I achieve something like that?
Literally just watched and commented on your other video.
Happy to hear it. Hope you enjoyed both videos 😊
@sebastiankamph Yeah, I did. This one is more helpful to me as I can watch and follow the steps a bit better.
NNY is New New York from Futurama :D
Honestly, I always go for low resolutions like 512x512 because it personally gives me better drawing results, and I can always upscale the image without loss of quality.
🎯 Key Takeaways for quick navigation:
00:00 🖼️ Sebastian Kamph shares his advanced workflow for creating high-quality images using Stable Diffusion and ControlNet.
01:36 📷 Experiment with different settings, including weight, depth, and model selection, when using ControlNet for image generation.
03:05 🌈 Explore variations in generated images and select the one with the desired composition and color palette.
08:05 🎨 Use image inpainting techniques to enhance specific parts of the generated image, such as faces or details, for better resolution and quality.
10:49 🤖 Experiment with prompts to transform elements of the image, like turning a human into a robot, to create unique and detailed artwork.
12:53 🚀 Combining ControlNet and image inpainting can help you generate high-quality, large images efficiently, improving your AI art workflow.
I don't know why, but I don't have that "inpaint at full res" option in my Stable Diffusion.
Renamed now, inpaint "masked area only"
Really great and straightforward tutorial - those style prompt presets you mentioned were created by some friends... is there anywhere you'd recommend finding CSV files for these? Thanks
Yes, come join us on Discord and ask Hili.
How do we regenerate our current character to be in a different pose without changing the character?
Yes
I love you so much. My wife is jealous right now...
I notice that after using img2img with ControlNet, even if you disable ControlNet to keep working on something else, it doesn't matter what the prompt says: SD keeps generating images from the original img2img with ControlNet. Is that a bug, or am I doing something wrong? (I bet it's the second option.)
Thanks a lot for your videos!
Haha, thank you! Remember you have both the ControlNet input and the img2img input. If either of them is still there, it will influence the output.
You have two drop-downs -- Style 1 and Style 2 -- in the UI. What is this? I have only one.
How do I get styles like the one you use ?
Link in video description
Your negative prompt uses the same terms repeatedly, for example, "blurry" appears 4 times. Why are you using the same terms repeatedly instead of just strengthening them with brackets?
If you already have other photos as jpgs, how do you change the skirts in one photo, and then apply those skirts to the models in the other jpgs?
So if you had to use ControlNet on the img2img section, what would you have done?
Awesome video! Can I just ask what's the logic of the prompt (man face:1.4)? What do the brackets and the 1.4 value do? Put more weighting on the man's face?
Yes, more weight!
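For reference, Automatic1111's attention syntax works like this: each pair of round brackets multiplies a fragment's weight by 1.1, square brackets divide it by 1.1, and an explicit `(text:1.4)` sets the weight directly. A minimal sketch of how the multipliers combine (a simplified illustration, not Automatic1111's actual prompt parser; it assumes a single, well-formed fragment):

```python
def attention_weight(token):
    """Effective attention multiplier for one A1111-style prompt
    fragment: '(x)' multiplies by 1.1 per paren pair, '[x]' divides
    by 1.1, '(x:w)' sets the weight to w explicitly."""
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        if ":" in token and not token.startswith("("):
            # an explicit weight overrides the 1.1-per-pair rule
            return float(token.split(":")[1])
        weight *= 1.1
    while token.startswith("[") and token.endswith("]"):
        token = token[1:-1]
        weight /= 1.1
    return round(weight, 4)

print(attention_weight("(man face:1.4)"))  # explicit weight: 1.4
print(attention_weight("((detailed))"))    # two pairs: 1.1 * 1.1 = 1.21
print(attention_weight("[blurry]"))        # one pair: 1 / 1.1 ≈ 0.9091
```

This is also why selecting a word and pressing Ctrl + arrow up/down in the UI nudges the number in a `(word:1.1)` wrapper.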
Thank you. Not sure why it gets stuck on "Loading preprocessor: openpose, model". Running the Automatic1111 notebook. Everything else works as it should.
What's the benefit of inpainting at higher resolutions vs. doing everything at lower resolution and then upscaling the final image?
It is interesting that you can drag and drop from the output to the input in img2img. If I try that it just blows up the output before I even start dragging.
Hi, thank you for the awesome video. Can you give me a hint? When you put a mask on the image, how does SD understand which part of the prompt to use for this mask, when you have a very large prompt?
It's tallied together. You can change the weighting by selecting words and pressing Ctrl + arrow up/down.
But you were supposed to just click a button and easily get the final product! /s
😂
Fantastic work! Thanks! Can you share your styles?
Available in Discord
@@sebastiankamph thx man :)
@@sebastiankamph Not as easy as you said ;) Can't find any after 30 minutes there :(
Great video, do you think it is possible to create a motion in cascadeur and then input it into controlnet to output a video that matches the motion?
Is this process out of date now with the introduction of ControlNet REFERENCE? How does that change your own process here?
Question: How did you get 150 tokens? Is it model specific? I only have 75.
@sebastiankamph, where do I find the free styles? I can only find ones I have to pay for...
could you share your preset style prompts? they look so good!
Available on Discord
Why does your interface looks so much different than mine? I have tons of drop down menus instead of button clusters.