Just announced that the 2B model for SD3 will be released June 12. 👍🏼
Hey folks! I want to test the prompt adherence for SD3 in the next video, so I'd appreciate any prompts you want me to try out! Or if there's anything else you want me to focus on, let me know in the comments!
12:57 Considering your status in the industry (if they're wise, they're keeping an eye on you), beta is actually the perfect time for critique and feedback.
Great video! It was interesting and easy to follow. I wonder how well it will handle consistent characters and also the level of prompt adherence when there is more than one character in an image. I am looking forward to a deeper dive. Thanks!
It's still SD under the hood, meaning it won't be able to create the same character twice without a trained LoRA. Look at the img2img (sketch) example in the video: SD has no idea what it's looking at without a prompt, and txt2img alone is pure diffusion without conditioning controls.
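For anyone curious what the LoRA approach mentioned above looks like in practice, here is a minimal sketch using the Hugging Face diffusers library with an SDXL base model. The LoRA repo id, weight file name, and trigger word ("mychar") are hypothetical placeholders for a character LoRA you would train or download yourself:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the base SDXL model.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load a trained character LoRA so the same subject can be
# reproduced across different seeds and prompts.
# (Repo id and weight_name below are hypothetical placeholders.)
pipe.load_lora_weights(
    "your-username/my-character-lora",
    weight_name="character.safetensors",
)

# "mychar" is the hypothetical trigger word baked in during LoRA training.
image = pipe(
    "photo of mychar, a woman with red hair, city street background",
    num_inference_steps=30,
).images[0]
image.save("consistent_character.png")
```

Without the LoRA, repeating the same prompt with a new seed gives you a different face every time, which is the consistency problem being described.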
Something about 1.5 always struck the right balance between worlds.
I hate those subscription plans, and they still have to work on their stability problems and image prompting.
Well, if they don't make money, there won't be a Stability AI in the future. Once the model is released, most people don't use SD for commercial purposes anyway, so nothing will change for them. However, using it commercially will cost some $. I'm still experimenting with the model to see what improvements have been made.
@@MonzonMedia Well, you know, they could just charge a one-time fee like ZBrush does, about $800, and then charge less for the upgrades. That way you don't have big brother looking over your shoulder.
Thanks for the video. I can't stand this; it's the worst part of Midjourney. Where are the models? Where are the seeds? How do you get consistency? It looks like a cash grab following the AI subscription platform formula. I won't be wanting to use this at all; such a step away from A1111 and Comfy. Shame they went this way 😢
The model will be released soon, and if you don't use it commercially then it won't be any different for you. But it's a new architecture, so it will be a bit of a wait before things like ControlNet and other functions work with it.
SD3 still sux with hands and fingers ...
It's better than SDXL, but I still need to test it more. It does work better with characters holding weapons, but not always. Fine-tuned models will be much better though.
@@MonzonMedia Better, but not enough to consider it 100% fixed.
The biggest problem for us Turks is Turkish character support in artificial intelligence. Can it render the letters "ŞşİÖöÜüÇç"?
That has a lot to do with the training data. I haven't tested text in other languages, but I can try.
@@MonzonMedia I would be glad if you tried the Turkish language for me. Good luck.
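A quick sketch of how one might run this Turkish text-rendering test once the SD3 weights are downloadable. This assumes the announced June 12 Hugging Face release; the model id below is an assumption and should be adjusted to whatever Stability AI actually publishes:

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Assumed repo id for the announced SD3 Medium (2B) release.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# Prompts that force the model to draw Turkish-specific letters.
test_prompts = [
    'a street sign that says "ŞİŞLİ"',
    'a neon sign with the word "GÜNAYDIN"',
    'a chalkboard with the letters "ŞşİÖöÜüÇç" written on it',
]

for i, prompt in enumerate(test_prompts):
    image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
    image.save(f"turkish_text_test_{i}.png")
```

Since text rendering depends heavily on what glyphs appeared in the training data, letters like "Ş" and "İ" may fail even if plain ASCII text renders cleanly.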
Features are terrible.
I suspect that when the model is released it will take a bit longer for things like ControlNet to work on the usual platforms. New architecture and all.
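For context, this is what the current ControlNet workflow looks like in diffusers with SD 1.5, which is the part that will need to be retrained for SD3's new architecture. A minimal Canny-edge sketch; "input.png" is a hypothetical local source image:

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Load a Canny-edge ControlNet trained against SD 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build the edge map that conditions the diffusion process.
source = np.array(load_image("input.png"))  # hypothetical source image
edges = cv2.Canny(source, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    "a knight in ornate armor, dramatic lighting",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("controlnet_output.png")
```

Because ControlNet models are trained against a specific backbone, none of the existing SD 1.5/SDXL ControlNets will plug into SD3; new ones have to be trained from scratch, which is why the wait.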