Damn. SD is getting more powerful every single day, I think. I'm so glad I found your channel a few months back. Thank you for sharing all the knowledge 🙏🏽👍🏻❤🔥
I have been inspired! I'm an IT professional, but my main desktop OS has always been Microsoft's Gaming OS. I just, finally, made the switch to Linux Mint (Cinnamon), mostly due to following your tutorials and taking the time with the Grown-Ups' OS. I am a new convert! Thanks, as always, for sharing your knowledge, experience, workflows, etc., etc. Always appreciated!
Glad you’re having fun! Learning new stuff gets the old brain matter ticking over and strange, exciting, new ideas can form 😉
There is a script for Automatic1111 that lets us draw more precise masks, in a bigger window too. It's listed in Automatic's repo. Very useful!
Oo, what is it called?
Even better, there's one for automatic prompt-based masking.
Where?
Mask drawing UI
Listed in the "custom scripts" on AUTOMATIC1111's repo page
Your avatar is becoming sentient.
Eeep!
This is one of my favorite animations of a human/non-human face in the corner. Only sometimes, when you look down too much, does it get a bit flat; other than that, I feel like I'm watching a TV show from the 70s or so. Great. And thanks for the level one noise tip; I had problems with inpainting as well, so that helped a lot.
Give it a few months, it'll get so good that he'll give a secret face reveal and no one will notice 😂
😉 Glad it helped! Make amazing things!
@@NerdyRodent Hey Nerdy Rodent! I managed to animate my images with the thin plate model. It looks incredibly real, but I am wondering how you managed to make it that clean in this video? Do you record the driving video with a camera on a stand? Or what is your setup while making these videos? Is your source image upscaled?
@@lucienmontandon8003 I just have a webcam on my monitor 😀
@@lucienmontandon8003 bro what program do you use to do that?
This is great, I just started watching your videos. What are you using for your webcam overlay?
I use the thin plate spline motion model, video link in the description!
The corner PiP got me 😂 Bro, you look like an anchorman from the 70s. Nice vid!
Your videos are great! How do you make the AI with your face over there in the corner of the video? Can you tell me which video you teach this in?
How did you create that animated realtime face at the bottom right?
I used the thin plate spline motion model for image animation (more information in the video description)
Ah! You've answered my mystery as to why none of my inpainting attempts ever seem to make any change whatsoever to the output. Guess I need to change models...
Always smashing the tutorials
Hope you make some things! 😉
fantastic as always.
Thank you for all your amazing videos!
Glad you like them!
really good video! thanks!
Glad you liked it!
How do you get to that interface? Is that an app or web interface? So slick. Great work.
Thanks! It’s the stable diffusion automatic1111 webui!
@@NerdyRodent Thanks so much for replying. Your channel is incredibly helpful. Are you on PC? I'm on a Mac, but not M1 or M2. I think I've found out how to install it on an Intel i9 machine, but it's all really confusing.
@@jamiekingham5854 Yup, I use a Linux PC + Nvidia as that’s best for all things AI 😀 Not used a Mac in years, I’m afraid!
Did they pull support for A1111? When I try to open it, I just get a blue refresh arrow and nothing else. I have A1111 installed locally.
Could you make a video simply explaining each element of the webui? Some of them are really unclear as to what they do
Take a look at my earlier video for more of a feature overview Vs this workflow - ua-cam.com/video/XI5kYmfgu14/v-deo.html
Can you please explain or do a video on how to download and install the RunwayML inpainting model? I have tried, but even downloading it seems impossible; I can't find any download links.
Thank you for the amazing tutorials 🙏
Done one already, just for you! Links are in the description, just download to your models directory exactly as shown! Stable Diffusion InPainting - RunwayML Model
ua-cam.com/video/rYCIDGBYYnU/v-deo.html
Your stuff has been super helpful; I would really appreciate a video on massaging perspective, as it's something I'm having trouble making work. I mainly use SD for making backgrounds for green-screen content, and getting the perspective right on the background has been difficult. If I wanted a straight-on picture of a "sci-fi factory interior" that would work well in the background of a side-scrolling kind of shot, how would I go about making SD construct the image that way? I assume img2img would help, and sometimes it does, but sometimes it takes an image where I got the right perspective by luck and turns it into something with a totally different perspective, and adjusting denoising doesn't seem to affect it.
Would love your input on this!
How do you make that bottom right corner video
Excellent
I know this is an older video, but for people just now catching on: one thing I've noticed in the video that I feel has caused less-than-desirable results is mixing things that are opposites, such as 3D render with painting. You took out 3d and render (negative prompt) but left in blender, octane, unreal... and those are 3D rendering engines. You can use whatever you want, and sometimes get cool stuff with things that don't correlate or make sense together... but if you're having issues getting a look you want, examine your prompts for things that might be contradicting one another.
For example, you might use something like photorealism, photograph, photography, and modifiers like high quality, high definition.
Also check for artists that aren't suited to the style, subject matter, or medium you're trying to emulate. For example, you wouldn't use Kim Jung Gi with photorealistic painting or 3D render.
I found that, for me personally, doing things like that in effect blocks Stable Diffusion from heading down the path I'm trying to go. If anyone wants to see what I've been doing over the last year or so, starting in Disco and moving to Stable, you can find my socials easily. GG GL and HF~
Edit: I don't use artist names in my prompts anymore, not since the middle of 2022 or so. I feel I personally get better results for what I'm doing without them.
I found exactly the opposite. Mixing styles gives me great results, and having things like “unreal engine” as a positive with “3d render” also produces awesome outcomes 😉
Do you find that you get better results when using the inpainting model for txt2img, as opposed to the regular or pruned SD v1.5?
Roughly similar for me. Have you been getting better results?
@@NerdyRodent It really depends on what I'm looking for. I just saw you using inpainting model for txt2img at the beginning of the video, so I asked. Thanks for your great content!!
I wonder if we'll ever be able to merge the inpainting model with other models, so that we could inpaint in arcane style, or the various anime models.
You can now. There's a guide on Reddit, posted a few days ago, that lets you turn any model into an inpainting one.
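If it's the method I'm thinking of, it's the "add difference" merge: inpainting + (custom - base) at multiplier 1, which Automatic1111's Checkpoint Merger tab can do with no code at all. For the curious, here's a rough Python sketch of the same idea; the filenames are placeholders, and a real merger handles more edge cases than this:

```python
# Rough sketch of the "add difference" merge used to make a custom model
# inpainting-capable: result = inpainting + (custom - base).
# Filenames are placeholders; the Checkpoint Merger tab does this for you.
import torch

def load_sd(path):
    ckpt = torch.load(path, map_location="cpu")
    return ckpt.get("state_dict", ckpt)  # some checkpoints are saved without the wrapper

custom = load_sd("custom-model.ckpt")         # placeholder: your fine-tuned model
inpaint = load_sd("sd-v1-5-inpainting.ckpt")  # RunwayML inpainting checkpoint
base = load_sd("v1-5-pruned-emaonly.ckpt")    # plain SD 1.5 base

merged = {}
for key, w_inpaint in inpaint.items():
    if key in custom and key in base and custom[key].shape == w_inpaint.shape:
        # add the custom model's learned difference onto the inpainting weights
        merged[key] = w_inpaint + (custom[key] - base[key])
    else:
        # keep the inpainting weights where shapes differ
        # (e.g. the UNet's extra mask input channels)
        merged[key] = w_inpaint

torch.save({"state_dict": merged}, "custom-inpainting.ckpt")
```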
Hey really like your videos. What software are you using for the Webcam to show the animated image? Thanks
Thanks! I’m using the thin plate spline motion model for image animation, as linked in the description 😀
@@NerdyRodent But it is not real time (i.e. the webcam is not using that motion model to alter your face in real time), correct?
@@florinflorin249 The motion is webcam driven, but not in real time. You can use Avatarify for real-time though
@@NerdyRodent Is Avatarify the best option for something like this? Like for Twitch streaming, for example?
@@houdinimasters There are loads of live cams, but Avatarify is the only live version of the first order motion model I’m aware of
Thanks for your videos ♥
Hadn't paid any mind to the scripts section... Outpainting - brilliant rodent!
Do you think there is a problem with the img2img alternative script? It's not working for me and others; I get only very poor, unusable results whatever I try in the settings.
Try this workflow exactly ;)
Great tutorial and thank you. One question, how can we fix a black (basically blank) area that appears instead of the additional outpainting?
Either change model or use strength 1 usually works 😃 You can also play with the mask size. Depends what you’re wanting to achieve!
@@NerdyRodent Thanks for responding! I tried changing the model and the strength. I must be missing something, but I have the outpainting script installed and the inpainting model downloaded and selected. I'll play with the mask size, but it definitely feels like I'm missing a piece! I'll keep digging, and thanks for the help!
Is there any reason in particular that you started your first prompt with the inpainting ckpt rather than the full one? You said the latter should be better at generating more diverse outputs, so why not begin with that?
Start anywhere you like! It’s totally free form 😀
@Nerdy Rodent, how come you don't inpaint at full resolution? Is there a special reason?
Yup - the outputs can vary. Try both for the best result!
How did you do the face in the corner? Please tell me you have a tutorial on that.
How to Animate faces from Stable Diffusion!
ua-cam.com/video/Z7TLukqckR0/v-deo.html
Recent PRs added the ability to make deeper HyperNetworks and, more critically, non-linear activation functions, which should greatly improve results. I think it may be time to revisit your comparisons.
what are your specs??
Thanks for your channel; it's an incredible resource. Have you done any tutorials on the workflow for your talking head in the corner? I always figured it was Snap Camera or something, but this is next level…
Yup - that’s in the description 😉
Very interesting, thank you
Where does that inpainting model come from?
You can get it from huggingface.co/runwayml/stable-diffusion-inpainting
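If you'd rather grab it from a terminal than the browser, a minimal sketch using the huggingface_hub package should do it; the local_dir below is just an example, so point it at your own models/Stable-diffusion folder:

```python
# Minimal sketch: download the RunwayML inpainting checkpoint with huggingface_hub.
# The local_dir is an example path; use your own models/Stable-diffusion folder.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="runwayml/stable-diffusion-inpainting",
    filename="sd-v1-5-inpainting.ckpt",
    local_dir="stable-diffusion-webui/models/Stable-diffusion",
)
print("Saved to:", path)
```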
OK Nerdy Rodent, I have a really dumb question. I see all these tools on Git, but I have ZERO clue how to use them or where to find the web interface. Is there any way I can get it onto my local machine and use it? You mention you are using the web interface, but when I click through, it's just a bunch of code; there's nowhere to use it like an app. Can you direct me to a 101 tutorial on how to use all these apps to begin with? Thanks.
The zero-install, super-easy website info is right here - ua-cam.com/video/wHFDrkvsP5U/v-deo.html :)
@Nerdy Rodent
👋
In the SD Automatic1111 interface, outpainting left and right works fairly well.
But outpainting upwards always looks like it's starting a brand-new image, with a sharply defined line right where the new pixels are generated, rather than outpainting a smooth continuation of the original image.
BTW, the images I'm generating and trying to outpaint are scenics with castles. In many of them, the tops of the castle towers are cut off.
What am I doing wrong??
P.S. I'm using the 1.5 pruned emaonly model. I don't know how to load multiple models into SD.
Use the inpainting model for outpainting :)
@@NerdyRodent
Thank you for your response.
That's the 7.17 GB model?
In order to get it to open in SD, do I just drag the .ckpt model into the same models folder the current model is in? Do I need to do anything else to get it to function in the SD local window?
Video on the new inpainting model - Stable Diffusion InPainting - RunwayML Model
ua-cam.com/video/rYCIDGBYYnU/v-deo.html 😀
@@NerdyRodent 👋 Thank you so much for responding. I really like your content, but as I'm nearly PC-programming illiterate, some of the most important parts of your videos (dealing with installation and programming stuff) are virtually impossible for me to fully grasp. That's my problem, I know.
I watched the video you linked several days ago, but I got stuck at the 1:09 time stamp, where you're adding various text into what looks like a CMD prompt window. That's where I get lost, because whatever you're doing is second nature to you but not very familiar to me. Your cursor disappears from the screen and it's not clear how you accessed that dialog box.
Anyway, my problem, not yours. Thank you for responding and trying to help. I probably just have to hope real hard that the programmers at Automatic1111 add it in an update.
I was able to locally install SD because of an easy-to-follow video put out about a week ago by Scott Detweiler (titled "Stable Diffusion 1.5 Windows Installation Guide").
My poor knowledge of computer languages and programming leaves me few options besides watching easy-to-follow demonstrations like that.
Got a video right here for new computer users! Install Anaconda and Nvidia GPU drivers with CUDA on Microsoft Windows - Beginner Mode ON!
ua-cam.com/video/OjOn0Q_U8cY/v-deo.html
Does anybody know a way to output the exact same image you put in, but in a different style?
Lower the denoising strength, and for the inpainting model you can also alter the mask power (in settings, because that’s a good place for a slider)
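If you want the same knob outside the WebUI, here's a rough diffusers img2img sketch (assuming a reasonably recent diffusers install; the model ID, prompt and strength are just example values). Lower strength keeps the output closer to the input image:

```python
# Rough diffusers img2img sketch: low strength keeps the composition of the
# input image while nudging it towards the prompted style.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("my_image.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="the same scene as an oil painting",
    image=init_image,
    strength=0.35,       # lower = stays closer to the original image
    guidance_scale=7.5,
).images[0]
result.save("restyled.png")
```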
Have I gone mad or did you never turn the denoising down from one after changing it?
Maximum denoise ftw! 😆
I have noticed that there is a lack of rundowns on how to use Krita in place of the poor-quality little drawing panel in the Web UI.
Krita is packed with an enormous selection of features, which makes it the no-brainer choice for maximum control of the inpainting process, IMO.
Got an example of that at ua-cam.com/video/M2R-tsZglaY/v-deo.html - fairly early on though now! Time really flies in the AI world, it seems :)
@@NerdyRodent That's a different plug-in than the one I'm running, from what I can see. Specifically, I'm referring to a breakdown of how to accomplish different goals, like inpainting to create a very specific multi-step composition, or outpainting things into the canvas smoothly, particularly focusing on which settings to use for things like denoising, and why. Honestly, the more I think about this, the more inclined I'm becoming to make a video myself.
@@BlueCollarDev Go for it! 😉
This is a channel I always enjoy; thank you for the very useful information.
Can you give me a hint on how you speak with an actor's face at the bottom of the video? (Deepfakes?)
I'm very curious, so I had to ask. Please continue to provide good content in the future.
You are the best!
Personally, I use the thin plate spline motion model for image animation
@@NerdyRodent Wow!
What a quick answer!~~
Thank you so much
Can you make a workflow to change the background of a real car? Maybe going from a shot of a racecar on track to the same racecar, but with another landscape behind it? I have big trouble with that and can't get good results. Which model is best for this? The thing is that the car wrap from the photo must never be changed, only the environment around the car.
Yes, inpainting is perfect for changing just a small part of an image such as the landscape behind a car.
The question is what the best model is for that case. I only get horrible results when I do it.
@@kaypetri3862 An inpainting model is best for inpainting, such as huggingface.co/runwayml/stable-diffusion-inpainting/blob/main/sd-v1-5-inpainting.ckpt
There are now inpainting controlnets too which are awesome because then you can use any model 😀
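For anyone going the diffusers route instead of the WebUI, here's a rough sketch of that idea; the mask should be white over the background you want replaced and black over the car, so the car pixels are left untouched. The file names and prompt are placeholders:

```python
# Rough inpainting sketch with diffusers: white mask pixels get regenerated,
# black mask pixels (the car) are kept as-is.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("racecar.png").convert("RGB").resize((512, 512))
mask = Image.open("background_mask.png").convert("RGB").resize((512, 512))  # white = background

result = pipe(
    prompt="a racecar on a mountain pass at sunset, photograph",
    image=image,
    mask_image=mask,
).images[0]
result.save("racecar_new_background.png")
```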
@@NerdyRodent Hmm, I tested it out, but I cannot get a good result. The car loses details. Is there a way to contact you directly to share some of my designs and the problem?
From what I've noticed, the "Blender" keyword tends to make it focus more on amateur or older works, and it doesn't give much weight to the more realistic stuff that isn't obviously CGI at first glance, or to the more professional artwork that is less likely to list which tools were used...
Don’t forget to try other rendering engines 😉
A quick and dirty way to zoom in on the image you're masking with inpainting is Ctrl and + (multiple times) to scale up the browser CSS, then Ctrl and - (multiple times) to zoom back out.
PS: Thank you for sharing your wisdom, Mr. Rodent!
Do you have a discord?
how did you do that with your face? is that deepfake? or some sort of ai animation?
It's animating a still image. Link is in the description!
Could you show us how to install SD on AMD machines?
Unfortunately I don't have an AMD GPU, but if you're on Linux, just select the "ROCm" install for PyTorch instead of the CUDA (Nvidia) one.
@@NerdyRodent Thanks a lot...
Is this still the best way to do it?
For now, yup!
@@NerdyRodent Ty! Is the process slow? Generating an image on my PC at 512 takes seconds; doing it with outpainting takes longer.
@@Tcgtrainer 512x512 is 1s for me, so it’s not too bad
“Who needs to learn Blender? You can just type rendered in Blender and then you get a Blender render” 👏😂👍
Ikr! 😆
No "Denoising" slider on my Webui. 🤷♂
I’m using the Automatic1111 one
I was thinking that maybe outpainting is a metaphor for faith and that's why no one can explain it.
lol, you merged the 1.5 with the inpaint version... :D
Merging is fun 😉
The guy in the corner is not you, right? He looks like a James Bond kind of character.
Unfortunately, due to national security, I am unable to confirm or deny whether or not I am a James Bond character 😉
How much RAM?
I’ve got just 32GB RAM
@@NerdyRodent OKAY..... ! thank you
For inpainting, you will get the best results by using the inpainting model, but changing 'masked content' to 'latent' and increasing the denoising strength to 1.0.
2:01 "Unreal 5, Octane, Blender" - I've not seen any evidence that adding those as terms in your prompt actually gets you anything like what it says on the tin. YES, it _changes_ the output, but that's expected, since the input tokens changed. It did get the 'harsh light' of Blender, and also the plasticky look...but...
Evidence: take your output images (or ANY images, really...) and send them to img2img, then run CLIP and DeepBooru interrogation. You'll find that _NONE_ of those sorts of terms are _ever_ returned by the interrogator, and that using them _in the absence of anything else_ renders random junk - compared to putting in a single 'known' term like Alfred Hitchcock or David Hasselhoff, which hands you back that specific thing as output, confirming that it's a known term in the model. Putting unknown terms in your prompt is, effectively, fluff that gives you worse results (even if you LIKE the results you get, if you generate 100 or 1000 more, they'll be less 'focused' than if you hadn't included those terms).
When I send a 'blender-looking' photo through CLIP and DeepBooru, I get things like 'harsh lighting' back as terms.
It's the same thing with using "HD, 8K" and all that other rot people put in their prompts because they're practicing Cargo Cult Programming and saw it in some video or on Reddit.
The output _can't_ be an "HD" or "8k" image, because it's literally _512x512_ ! 512x512 is 0.262MP, the default output of Stable Diffusion is _not even_ "SD" resolution (640x480, or 0.307MP)
Working with other AI/ML solutions (Large Language Models, mostly) has taught me that every Token in your Prompt should have meaning for the model you're working with.
Fortunately, Automatic1111 has given us the tools _right in the web interface_ for us to learn to speak the same language as the model...
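If you want to run that interrogation check in bulk rather than clicking through img2img, here's a rough sketch against a locally running Automatic1111 started with --api; the port and payload fields are what my install's /docs page shows, so treat them as assumptions and verify against your build:

```python
# Rough sketch: ask a locally running Automatic1111 WebUI (launched with --api)
# to CLIP-interrogate an image, so you can see which terms the interrogator
# actually returns for a "blender-looking" render.
import base64
import requests

with open("blender_looking_render.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/interrogate",
    json={"image": image_b64, "model": "clip"},  # "deepdanbooru" for the booru tagger
)
print(resp.json())
```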
8k and HD (along with many others) are not about the resolution of the generated images; they're there to target things that were tagged as HD or 8k in the training dataset. Good, sharp images are more likely to be associated with 8k or HD tags.
A viking helmet at 8k, or one at 1000x1000 resized to 512x512 (and used for training), will look pretty much the same... but the 8k ones are mostly the high-res, good-quality images that were tagged properly.
At the end of the day, if a sequence of words works well for someone, it doesn't matter. It's better to stick with it for a while and maybe add to it sometimes.
I won't be surprised if one fine day a video here is commented on by Trump or Biden, seriously😉
I tested everything; inpainting is not working... only outpainting is.
For inpainting make sure to use a mask as shown or it will indeed not work!
I'm just going to wait a year or two until they streamline this process, because this shit is unbelievably frustrating to set up and follow.
Then in tutorials, there is always something missing, or someone will say "OK, now we do this" and on my screen it's a different version that's no longer available, or the buttons have been moved around.
The problem is it's very good at drawing attractive women and cats, because that's what everyone else is doing and what it seems like 90% of the model is trained on, and not much good at anything else. I've recently tried to do multiple variations of futuristic/alien spaceships, space stations and mechanical objects as concept art for my game, and the results were absolutely horrible. Even just renderings of space/planets are very subpar.
You can definitely get there with the right prompts, keep trying. Using init images helps too. Also, training your own checkpoint via Dreambooth opens up massive new potential, more than anything else.
Very true! It’s certainly harder to get decent space ships, but it can be done! Even just using img2img
@@audiogus2651 Thanks, I am actually considering training on a few of my 3D models to see if it might be possible to create variations.
I just tested The Superb Workflow and it does indeed work for spaceships too. If you have a “pose” in mind, definitely better to start with your own doodle. Made some very nice space stations and stuff! Tbh, haven’t found anything it doesn’t work on 😉
I still have problems with characters that have no feet and/or no head, and with trying to outpaint the head and feet. So is there a trick to getting a complete character in view? My best solution was to make a silhouette or something like that first and use img2img, but I am also limited to small resolutions with my RTX 2070 Super 😞
Assuming you start with a head, just keep outpainting down! It’s bad at hands though…
I've NEVER seen good hands/feet, even in the best generations others post on the SD subreddit. They are always deformed. At this point, I wouldn't waste my time trying to get SD to fix them; just learn to manually fix defects in Photoshop.
anyone got a link for that inpainting model?
Links are in the video description 😉