Your videos are consistently informative, with information which is not easy to come by. Please keep making them!
I can't overstate how much I love your videos. As an almost-engineer, this type of content is pure joy condensed into video form.
Thanks for your work! If you ever need help, I would be happy to help you, since I have a fairly powerful rig.
I'm glad you like them. Funny you should mention the engineering part. IRL, I'm actually an engineer who does quality testing.
@@siliconthaumaturgy7593 it shows 😀. 👍
Bro, I liked and subscribed. This channel is great, probably some of the best Stable Diffusion videos out there!
Gonna check part 2 now
UWU
Your videos really are so much more informative than the vast majority of people posting about Stable Diffusion and AI-related content. Your skills for teaching and keeping things clear and concise are off the charts! Thank you so much; you're quickly becoming one of my favorite content creators (I just wish you had an easier-to-remember and easier-to-spell channel name lol).
Thank you so much for bringing the 3rd part! Subbed!
Superb analysis with legendary scientific basis. Thank you, thank you, thank you!!!
Amazing as always. ty and have a nice day.
Amazing and insightful video! Thank you for your dedication to the science! By far the greatest SD videos on YouTube.
You're the best! I've been using negative hand for a while so good to know I should toss that one out.
A-Zovya Photoreal V2 would have been a great addition to the roundup.
Also, about the 48/50 OCD thing, it only matters for base-10 OCD. 48 is perfectly fine in base 16: 0x30.
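(For anyone who wants to sanity-check that base conversion, here's a tiny Python sketch; it's just the arithmetic from the comment above, nothing from the video itself.)
```python
n = 48
print(hex(n))               # -> '0x30', so 48 is a "round" number in base 16
assert int("30", 16) == 48  # 3*16 + 0 = 48
```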
Interesting experiment. Thanks for your hard work!
Great content! Salute! This kind of content is what the community needs! Very informative, great format, precise!
P.S. I'm an engineer. The way you set up the testing criteria and summarized the insights from the statistics was very satisfying to me, since there are very few channels doing this! I'm sure you're in data science or some kind of mathematical profession.
Nah, I'm an engineer. I have a background in setting specs and doing trending, which is why I have some stats knowledge.
Next, can we benchmark negative embeddings? It seems like fun to compare how significantly they can change the style and composition of the base image generation.
I usually use this: "style of bad-artist-anime, style of bad-artist, BadNegAnatomyV1-neg, bad-hands-5, Unspeakable-Horrors-24v,, {prompt}"
Never really tested it out, though (I mean a real benchmark like you did). xd
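For anyone who wants to try a small-scale version of that benchmark themselves, here is a minimal sketch using the diffusers library: generate the same seed and prompt with and without the embedding token in the negative prompt, then compare. The checkpoint, embedding file, and prompt below are placeholders, not the setup used in the video.
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; swap in whatever model you want to test
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a negative embedding (textual inversion); path and token are example values
pipe.load_textual_inversion("./embeddings/bad-hands-5.pt", token="bad-hands-5")

prompt = "photo of a woman waving at the camera, detailed hands"

def generate(negative_prompt, seed=1234):
    # Fixed seed so the negative prompt is the only variable between runs
    generator = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, negative_prompt=negative_prompt,
                num_inference_steps=30, generator=generator).images[0]

generate(negative_prompt="").save("baseline.png")               # no embedding
generate(negative_prompt="bad-hands-5").save("with_embed.png")  # embedding active
```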
The neg embed situation just seems like too much of a "throw everything at it, hope for the best!" approach. I have had a situation where deleting the contents of the neg box improved the image! Here is my set: "easynegative ng_deepnegative_v1_75t Unspeakable-Horrors-Composition-4v, bad-hands-5, bad_prompt_version2 badhandv4 By bad artist -neg, pasties, double_navel". Besides bad hands, 2 navels show up at random, especially when using "plump" and "chubby" as positives.
@@luislozano2896 So true actually, these days I use negative embeddings less often as well, unless it's for inpainting hands or something, but it might be placebo at this point. xd
I'll try your neg. ty
I'll add those to my growing list. The way it's looking, I might need to make a part 2 just for models then a part 3 for negative embeddings
@@siliconthaumaturgy7593 That's what I thought, and negative embeddings made for specific models may work for other models as well, since some models share the core model the embedding was trained on. You could experiment with something like this: find the specific models that have negative embeddings, merge them all, and then use all the negative embeddings together for generation.
@Cutieplus Yes, it can break the "style" on some models; there are some negative embeddings that lean towards a realistic rendition even with no realistic prompt in it. idk which one, I forgot.
As mentioned by others, I would also like to see comparisons of these embeddings to see what they change and if they are good at all. I noticed it myself, especially with hands, and stopped using them. I also noticed that many checkpoints don't know some prompts. Currently my biggest struggle is getting niche prompts working (like a china dress with modifications) without using LoRAs.
Truly admirable work, sir! Faraday would be proud. 👏👏👏
I'd be curious to know how AbsoluteReality holds up. It's one of my favorite checkpoints. I'd love to see you do the same process but testing faces/eyes.
Agreed, it's the same creator who did Dreamshaper
If you get fully black images it might be because of a bugged VAE (either your chosen one or the one built into the model). Try adding the --no-half-vae parameter to the A1111 launch script.
I have no-half-vae in there already. It's not too often, though, maybe 1-2 out of 100 images. Rev seems to have the same issue but a bit less frequently.
@@siliconthaumaturgy7593 Huh. That's weird. I've never had black images after adding no-half-vae to the options. Are there any error messages in the console window?
@@Sheevlord Nope. Just comes out black
Lol, called it! Knew RevAnimated would be number 1. My only issue is its tendency for big booty, slim waist.
Also thanks for confirming my suspicion regarding negative embeddings: they either do not work as intended or they make things worse.
More please, this was a great video
Great content, Thanks ☮️
Do you have a Discord to discuss geeky nerdy Stable Diffusion stuff?
Trying to use After Detailer (ADetailer) to get hands right. Do you have any other recommendations?
Simple prompts!!! ReV Animated with simple prompts; try with and without the GoodHands LyCORIS. But the #1 way to get good hands I've found is to keep prompts simple. Don't use rare words like artist or place names. Say "museum" in your prompt, not "Louvre", or you're gonna have a bad time.
You come up with the smartest ideas for SD videos; I was interested in this one. It would also be interesting to see which model outputs the highest-resolution full-size body (not portrait) without hires fix, just using txt2img at resolutions above 1024 until it starts cloning body parts.
That's a good idea and shouldn't be too hard to test. Though alas, I think after this series, I'll be occupied with SDXL for quite some time. So we'll see
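A rough sketch of how that test could be automated with diffusers: fix the seed and prompt, sweep the output height above the native training resolution, and inspect where duplicated body parts start. The checkpoint, prompt, and resolutions below are just example values, not the channel's actual test setup.
```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; substitute the model being tested
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "full body photo of a woman standing in a park"
seed = 42

# Sweep heights past the 512/768 training resolution; cloned limbs/torsos
# typically appear once you go well beyond what the model was trained on.
for height in (768, 896, 1024, 1152, 1280):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, width=640, height=height,
                 num_inference_steps=30, generator=generator).images[0]
    image.save(f"fullbody_{height}.png")  # review manually for duplicated body parts
```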
love your videos❤
You should also test whether adding "hands" to the negative prompt improves the result. Even if you add hands to the negative, the model still generates hands. I've been adding different body parts to the negative and sometimes it fixes the image.
Great video. Could you tell me how you scored the hands generated by the AI?
Can you show a table or graph of just the realistic models and their ranks, instead of all of them together?
I'll try to put together a spreadsheet later this week. I had some graphs I couldn't squeeze into the video, so I'll put them there instead
Liking how you censor big bouncy distractions for the sole purpose of demonstrating informative results. kudos. Time to distract myself after the review 👍🏻🙏🏻
The older models tend to do better; when they get recombined or more censored, they seem to do worse. That is my problem with SDXL right now.
You have an oopsie @ 7:04 that needs to be fixed.
"...also has high levels of inappropriate content ..once again, in this community, that seems to be a feature rather than a bug." That's definitely not a feature in my book. Wading through the inappropriate stuff to find good solid models to work with is a challenge. I definitely don't appreciate the inappropriate stuff.
Hi there, your videos are really informative and well thought out. I'm very interested in what you have been doing; do let me know if I could be of any help.
It is sad that realistic models still trail behind anime/cartoon models. It's like making something manga or waifu is more accepted in society than tasteful nudity, and of course simple hands and feet.
Big bouncy distractions😂😂😂