Download Prompt styles: www.patreon.com/posts/sebs-hilis-79649068
👋
How can i install it?
@@neprhes
Here you go, you don't even need to pay.
prompt : RAW candid cinema, 16mm, color graded Portra 400 film, remarkable color, ultra realistic, textured skin, remarkable detailed pupils, realistic dull skin noise, visible skin detail, skin fuzz, dry skin, shot with cinematic camera
Negative prompt: NSFW, Cleavage, Pubic Hair, Nudity, Naked, Au naturel, Watermark, Text, censored, deformed, bad anatomy, disfigured, poorly drawn face, mutated, extra limb, ugly, poorly drawn hands, missing limb, floating limbs, disconnected limbs, disconnected head, malformed hands, long neck, mutated hands and fingers, bad hands, missing fingers, cropped, worst quality, low quality, mutation, poorly drawn, huge calf, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, missing fingers, fused fingers, abnormal eye proportion, Abnormal hands, abnormal legs, abnormal feet, abnormal fingers.
They are not free
@@thegames6391 I noticed that as well. Maybe they were free when this was released? Paywall now :(
I've watched some other channels focused on stable Diffusion, and just wanted to say I appreciate how you don't fill your videos with "funny" gifs and clips, and don't waste time explaining Windows basics like how to use file explorer, unlike those other channels. Keep it up
Believe it or not, the ones you mentioned were AI works. 😂
ayo nothing wrong with a couple of memes here and there!
lol I hate those channels that put dumb meme video clips in.. dad jokes are better
@@CoconutPete you dissing on channels like fireship and bycloud? bro them channels r lit yo
I always look forward to your videos your like the Bob Ross of AI art.
Wow, thanks! Glad you like the videos 😊🌟
great comparison!!
Wonderful tutorial! Who would have expected that denoising has nothing to do with removing noise. Now I understand a lot more. Thanks!
I think they need to rename some of the settings to make them more intuitive
I always think of the noise level as the level of squinting my eyes -.-
Hah, that's clever
Thanks for all your splendid tutorials. Although I am considering myself as very advanced with working in SD. Every time I watch your videos I get some new ideas. Also your default negative works extremely good with my checkpoint Colossus 2.03.
I'm happy to hear that! We probably all have some little neat tricks that we could learn from each other 🌟
Beautiful demonstration of the img2img tab!
I've been using the sketch tab to draw new fingers to fix hands. Like Seb shows here the denoise value needs to be tweaked every time but once it clicks you can actually add all kinds of things and even fix hands :)
Seb is an absolute rockstar. I love his way of explaining, but the most important thing for me are the dad jokes. I hope you can do something on a few advanced methods with higher resolutions too. My minimum output is around 4K, and I feel the attainable detail gets way better with increasing resolution. Also, waiting times: I'm used to waiting 3-5 min. on an image with 2 or 3 ControlNet instances and Roop on my 4090. My fav workflow atm is previz with PS beta/Firefly, then feeding that into A1111 and start bouncing. Firefly is amazing for cleaning up little things and finishing an image, while it's terrible at finding a creative solution and closing in on an idea.
ahhhhh the production quality has come so far my guy!
Pretty sweet eh? Did you see the little text intro too? I even made the beat for it 😅
@@sebastiankamph Yeah super clean!
Wow! I think the quality of your new cam is amazing.
A nice camera for a nice guy :))
Thanks for the valuable tips
Glad you think so! I wasn't sure if I should go for it, but it seems most people think it's an improvement 😊🌟
Great content in the video. One of the few youtubers I listen to in 1.75 speed.
The new setup is very cool. However, it is nice to see your video anyway it is really helpful
Thank you kindly! Glad it helped you. What feature in img2img do you use the most? 😊🌟
Beautiful demonstration of the img2img tab! Well done 👍👍👍
Now I know why I wasn't getting certain results I wanted. Thank you!
0:21 -- The "joke" you’re referring to appears to involve wordplay or a pun, but its meaning isn’t immediately clear. The statement _“a sock takes five tails”_ is particularly confusing, as there’s no widely recognized link between socks and tails. It’s possible that we’re missing some specific context or cultural reference that would help clarify the joke’s meaning. Anyway! Thank you for this informative and useful video tutorial.
Sebastian you're amazing bro, thank you so much, I really appreciate all your hard work making these videos.❤
Thanks Sebastian for this amazing tutorial! I finally got some clear understanding of img2img. Thanks for all the content.
You are my most favourite YTer to learn from about SD
That's very kind of you, thank you very much! 🤗🌟
Fantastic Video, Thank you for your efforts to make this wonderful tool more accessible to newcomers!
this was very helpful, thank you!
This is absolutely one of the best tutorials I've seen for SD! Thank you Herr Kamph! Sry for being a "shart" before! :) Subbed!
Thank you kindly! Hope you'll enjoy 😊🌟
really good tutorial, thanks!
this is really great, thank you!
thank you very much sir....i really learnt a lot and enjoyed your tutorials....keep bringing more content 😇
You got a pretty nice gear man!! I'm in this channel since day one and I am so proud of you my man! Keep doing this amazing content for the community!
By the way, those bad jokes, always got me hahahaha
Wow, thank you! You guys and gals are the real mvps out there! You're a real rockstar for hanging in since the early days. I'm surprised you stayed when the mic and video quality was so low back then 😅😘🌟
Production quality looks so good my dude
Oh why thank you! Very kind! 😊🌟
god bless you bro, you are going straight to heaven
Excellent tutorial. Thank you!
I opened this one on my TV and I was like WAAAT, is this a new setup by Panavision or is it just a new Hollywood DOP? Yes, it's visible right away. I love it.
I'm very happy you feel that way. I spent a lot of time researching to be able to get something like that! Anamorphic lens on a Lumix S5IIX at 6K open gate.
Very well explained. Thank you very much! 😀
this channel is great!
Thank you for another great tutorial. I really enjoy these
You are so welcome! What would you like to see more of?
Opening shot looks very good, very professional looking. May it ever drive the metrics. 😀🙏
Thank you kindly! I hope so too 😊
@@sebastiankamph :) :)
Very nice new setting and background. Why did the minimalist YouTuber's background get jealous? Because it felt like it was being "framed" out of the picture!
Thanks! Oh, on topic, very nice!
Great Tutorial. Thanks!
Awesome video, Thank you Seb.
Glad you liked it! 🌟
@@sebastiankamph ❤
The problem I found with img2img was that the resource requirements increased dramatically, and the model starts getting upset about my aging 1080 Ti's 11 GB of VRAM lol
Incredible tutorials, congratulations!
A tutorial would be great teaching how to transform a drawing into a realistic photo with the same environment as the drawing!
Thanks for sharing
Great suggestion!
Thanks for your help! Learned a lot from you.
Glad to hear it, Konrad! 😊🌟
Thank you so much for the tutorial, help me a lot :D
Glad you enjoyed it, tell a friend! 😊🌟
New setup looks good!
Thank you! Appreciate it 🥰
helped a lot thanks
Brilliant job
Thank you so much, you're a great teacher, super super super.
🌟🌟
Great tutorial. Thank you.
Thanks master, very useful and didactic.
Your tutorials are great! 👍
I appreciate that!
Thanks Seb. Excellent tutorial. K 🙂
My pleasure!
You are amazing, I subscribed
I’d love to see a video about reinstalling A1111 and changing to other versions and how to do git pulls and things like that. Every tut on these assumes a really high level of experience with these things. In particular, none of my inpainting / img2img etc work and I’ve seen that this may be a version issue.
you can keep the original image and go to sketch to draw the glasses, and after that go back to inpaint -> paint over the glasses with the prompt "yellow glass" and wait for the result ^^
thank you!
You're welcome!
Great video! I think it would have been better if it included a bit of ControlNet stuff to show how to give even more accurate control for generated shapes.
Great instruction!
Good video. Thanks
I understand image to image a little better now
Glad to hear it!
This was helpful, thanks for that. But with roop and eyemask extensions/scripts, you can do much of what you were doing much more quickly and accurately. Still a good review on image to image.
very helpful thank you
Wow the new camera you are using really pops. Quality is On Point!
Ooh, thank you! I'll make sure to keep using it then 😊🌟
wideangle baby!
You know it! Anamorphic even!
Really enjoy your tutorials! Commenting for the algo ;)
Thank you kindly. Real mvp for helping the algo! 😊🌟
22:20 I've found that if you want to increase resolution, prompt engineering with "highly detailed", "perfect focus" and "closeup details" gets the models to output finer details even if you're generating non-closeup images.
prompt engineering Lol
@@scrung Yeah, adding new keywords or tweaking existing words to push AI to generate improved image quality is called "prompt engineering". I think it started as a joke but it's commonly used seriously nowadays.
I've always had major performance issues with inpaint sketch. Sketch and normal inpaint work fine, but as soon as I stick an image in inpaint sketch the ui just gets extremely laggy to the point of locking up. No idea why.
Same here. It always happens when I click to send the image from the UI. If I drag and drop the image from a file, it works fine.
New to the channel. Absolutely love how detailed your guides are. Been binge watching them recently on repeat, lol.
I know it might be asking a lot, but have you ever considered maybe including the base images that you play around with for people to try and follow along? I know ultimately it’s going to come out different. But as I listen to some of these videos while doing housework, I find myself thinking “okay when I get to my computer, I’m going to try and find a soulmate image and replicate your steps exactly to teach myself, so that I can then apply it elsewhere in my open projects”
Just a thought! You already provide an abundance of free resources and that’s more than enough.
Anywho, like and subscribed (didn’t realize I hadn’t done that yet)~ looking forward to additional content on your channel once I get these fundamentals down :D
It’s a common thing that when following a tutorial, you do something and then look at the guide you’re following to confirm that what you made looks similar to what you were instructed to create.
I understand that the concepts are universal. The point of the message was for when learning/following his guides step by step for the first time. Not for when applying it to my own unguided work.
denoising explained well
Thanks! I learned about sketch today 🎉
(Non asmr fans should watch at 1.75 speed)
On img2img Tutorial part. If you want to change and keep high denoising, you can use () to emphasize. For example: ((man)) with blue hair. Best results with Inpainting.
You can also use a scale directly, e.g. (man:1.21). It cuts down on the parentheses. And you can select the word or phrase you want to emphasize and press ctrl+up to increase the emphasis and ctrl+down to lower it.
@@phizc yup, this too :)
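For anyone wondering what the weights actually do, here's a rough sketch of how the emphasis syntax could be interpreted. This is just an illustration of the rule (each `()` layer multiplies attention by 1.1, and `(word:1.21)` sets the weight directly), not the real webui parser:

```python
import re

def emphasis_weight(token: str) -> tuple[str, float]:
    """Rough sketch of A1111-style emphasis: not the actual webui code.

    (word:1.21) sets the attention weight explicitly;
    otherwise each surrounding () multiplies the weight by 1.1.
    """
    # Explicit weight form: (word:1.21)
    m = re.fullmatch(r"\((.+):([\d.]+)\)", token)
    if m:
        return m.group(1), float(m.group(2))
    # Nested parentheses: each layer multiplies by 1.1
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        weight *= 1.1
    return token, weight

print(emphasis_weight("(man:1.21)"))  # ('man', 1.21)
print(emphasis_weight("((man))"))     # 'man' with weight ~1.21 (1.1 * 1.1)
```

Which is why `((man))` and `(man:1.21)` end up roughly equivalent, as noted above.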
Just so I'm not misunderstanding when you say your image prompts are free, is it just because it's an old video and now they are not actually free anymore? It's asking me to subscribe. I don't mind subscribing I just want to make sure. Maybe you have some free and some are locked behind a paywall or are they all locked behind a paywall now? I notice this on a lot of your older videos. Thanks
You are correct, they used to be free and they are not anymore. Sorry for the confusion.
Thanks a lot! Appreciate all your work and knowledge you are sharing with us.
But I am constantly struggling to keep those very useful workflows, effects of parameters, etc. in my brain when I am only working with automatic1111 in my spare time once or twice a week for an hour 😅 Would you think about writing guides too?
Is there maybe a knowledge base tool as an extension for automatic1111 which could help me keep tools and best practices closer together?
I cannot get sketch to work to add glasses or a different lip color... The whole image is changing... I'm using the same model you're showing and the settings all look the same... Any idea what I'm doing wrong?
The fact that people need a tutorial for moving sliders around is hilarious! Welcome to the age of the hack "artists". 🤣👏
is it just my imagination or is a prompt in text2image with controlnet image enabled more powerful than an img2img with text prompt?
I'm here for the dad jokes
And I will not fail you! What did you think of today's dad joke?
@@sebastiankamph loved it. and thank you also for the tutorial, helps my workflow
Are you hiding from a serial killer or something while recording this? Should I sneak around my workplace while listening to this??
Thanks for the video 👍
I wish there was a tool to pick a color from the image for sketch mode, as it's often difficult to pick the right colors. It often ends up with flashy items like your glasses or eye here.
What if your art isn't humans or sceneries, but more like a bunch of lines in varying 3D space creating an image? Would the AI understand how to create a new one? If not, is there any software that can take in the art and create something new while still using that style?
Dude you said the styles were free to download from your description but they're not?
How often do you toggle original/fill? If I want to introduce a new item to the foreground or background…how do I approach this? Inpaint sketch, fill, .7+ denoise?
I do not see any styles. I downloaded some (I think the one mentioned in your comments), restarted and refreshed, but if I push the down arrow it does not open and I do not see styles.
thanks man! very clear and neat tutorial
Glad you liked it!
Naise new intro :)
Got to step up my game! You liked it?
@sebastiankamph O yea, it's clean and simple, and gives the channel a professional feel
I've been following all of your Stable Diffusion tutorials and I found them to be really helpful, but when switching from img2img to sketch to inpaint to inpaint sketch, I often get an error, the mouse makes color inputs on its own, or the image doesn't load at all. Do you have any idea what may be causing this?
Hey Sebastian, I have a quick question here. How do I ensure a crystal-clean transition when inpainting half a section of the face? For example, I am creating a half human, half cartoon face, so the transition is not very clean, I mean the line where the new prompt begins. Sometimes the lips are distorted and so on. Any advice please.
Problems with a model standing in water, help needed: I tried to modify a photo of a person standing in a dungeon-like room. I wanted to add some 30 cm of water covering the floor so that she is standing in this water, with the rest of the photo unmodified. I masked the lower area with the inpaint tool and used the prompt "dirty water" or "woman standing in dirty water". It almost works, with a good result, but it creates an ugly artifact, deforming the legs in an area above the masked zone. How can I get rid of this artifact? I was using epiCRealism as the model. Yours, Uli
I only have black color when using inpaint sketch.... how do I get the color changer?
hey, how do I convert a cartoon character into a realistic one? I mean it should not just make slight changes; can it understand the character's features and generate a complicated real-life character?
Thanks for the great video. I am using sd_xl_base_1.0 and you seem to be using deliberate_v2. Is the one you are using open source? I cannot find it on Hugging Face. Which one do you find more powerful? Thanks a lot.
It's available for free on civitai. Custom 1.5 models are probably more finetuned at this point. SDXL is in its infancy and will probably be better over time with custom models being trained on it.
what model did you use on this?
Very good video, love the slow pace you talk at.
It was probably the Deliberate v2 when I did this.
For some reason I can't get it to work with an abstract image I'm testing. I'm trying to use light beams that travel around the image, but when I try this workflow it just results in a smudged semi-transparent line and never an actual laser beam image. Is there something I need to do here? I can't figure this out.
Hi Sebastian, I'm having a problem: after generating, the result image becomes darker. Any way to solve this? I don't see this problem in your video.
How about batch mode? I was going to use img2img upscaling on some images, then I realized I can't use the original prompts for each file since there is only one prompt input.
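One way around the single prompt input is to drive img2img through A1111's local API from a small script (this assumes the webui was launched with the `--api` flag; the sidecar convention where each image has a matching `.txt` file holding its prompt is just a suggestion, not a webui feature). A sketch, not tested against any particular webui version:

```python
import base64
import json
import urllib.request
from pathlib import Path

# Default local A1111 API endpoint (requires launching webui with --api).
API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_payload(image_path: Path, prompt: str, denoise: float = 0.3) -> dict:
    """Build an img2img request for one image with its own prompt."""
    img_b64 = base64.b64encode(image_path.read_bytes()).decode()
    return {
        "init_images": [img_b64],
        "prompt": prompt,
        "denoising_strength": denoise,
    }

def run_batch(folder: Path) -> None:
    # Assumed convention: each image has a sidecar .txt with its prompt,
    # e.g. cat.png alongside cat.txt.
    for img in sorted(folder.glob("*.png")):
        prompt = img.with_suffix(".txt").read_text().strip()
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(build_payload(img, prompt)).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        # The API returns generated images as base64 strings.
        out = img.with_name(img.stem + "_out.png")
        out.write_bytes(base64.b64decode(result["images"][0]))
```

If your original prompts are embedded in the PNGs' generation metadata instead, you'd need to extract them first, but the per-file loop stays the same.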
What service do you use for SD? From one of your videos you said that you don't run it locally any more.
I wonder what video card do you have which can generate a full set of four 768x768 images in just 11 seconds.
RTX 3080
Why, when I use img2img and set it to 0.5, does the image look almost completely different, with even the background colors changed? But yours maintains a similar image at 0.6?
Great guide! What GPU are you using?
Rtx 3080
U need to insert pain in prompt to get perfect sharingan
Anyone getting this error with ControlNet recently? "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)". It used to work fine, but recently none of the models work.
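Those dimensions suggest a likely cause: mixing model families. SD 1.5 ControlNets expect a 768-dim text context, while SDXL checkpoints produce a 2048-dim one (768 + 1280 from its two text encoders), so a projection weight no longer lines up. A pure-Python sketch of the matmul shape rule reproduces the exact message (the interpretation of the dims is my reading, not pulled from the webui code):

```python
def matmul_shape(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """PyTorch-style shape check for mat1 @ mat2: inner dims must match."""
    (m, k1), (k2, n) = a, b
    if k1 != k2:
        raise RuntimeError(
            f"mat1 and mat2 shapes cannot be multiplied ({m}x{k1} and {k2}x{n})"
        )
    return (m, n)

# An SD 1.5 ControlNet projection weight expects a 768-dim text context:
print(matmul_shape((154, 768), (768, 320)))  # (154, 320)

# Feed it SDXL's 2048-dim context and you get exactly the reported error:
try:
    matmul_shape((154, 2048), (768, 320))
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
```

So if this started after switching to an SDXL checkpoint, try ControlNet models trained for the same base as your checkpoint.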
Is there a way to take a drawn image and convert it to a real life image with img2img and vice versa?
What GPU are you using for that fast processing?