wow that viking prompt turned out hella shit
He typed "nordic wooden fjord". What can you expect? The "wooden" part was too much. :)
lol they put a woman laying on the grass as the cover image for the page introducing the model, they got traumatized
We all got traumatized. I have seen imagery that I will never get out of my mind!
@@bxl2012 Dude, I'm *still* traumatized...😂
Still not a full body shot either...
What? lol
@@bxl2012 What was it? Seriously, I have no idea what you guys are talking about.
1.5 Forever!
Same, idk why people use anything else
Absolutely. 1.5 still looks great: tons of community models/LoRAs, super fast and easy to train on. SDXL is too slow and humans look plasticky. Flux is way too slow. SD3.5 appears to be the same speed as Flux and looks worse, so yeah, no thanks...
@@Jeremy-sk9qh Because 1.5 aged like milk compared to Flux imo.
@@ZwaetschgeRaeuber Then why can't it do NSFW?
@@Jeremy-sk9qh wdym? Flux unchained can do that perfectly fine.
Fantastic! Finally a good non-distilled model that supports negative prompts. I can't wait for the ControlNets! This might be better than Flux since it's not distilled and should be way easier to fine-tune.
Your videos are so helpful! Thank you for keeping us informed!
You are so welcome! 🌟
free for commercial use? So did they remove the dumb license thing that the original SD3 had?
The new CEO did that a few days after SD3 dropped.
No, they just made it finer print...
I want to thank the two above me for not clearing that up. 🥲
At minute 1:04, where do I click to get to the download page? Thanks.
OK, this is a base model, but honestly I prefer Flux, it's much more accurate. SD3.5 is still immature; if someone from the community does some fine-tuning, it could get to Flux's level.
Maybe, but it seems to me like 99% of community-generated content gets about 1/1000th of the training that it needs and the results are janky or limit the image diversity.
Thanks for the heads up!
I haven't done a whole lot of testing with hands, but I'm seeing others report it struggles w/them. So that's one in the column for Flux. However, maybe w/finetunes & loras we'll get hands sorted.
Holy shit, your speed is insane!
Does it automatically report me to the authorities yet?
If you use it to produce underage material, it should. Just saying that because on the SD subreddit you get downvoted for being a normal person and being against such things.
@@GCT_777 I just wonder how you would do that. There are debates about it, but current technology isn't up to the task. In the end it would have to be a lawyer specific to your country in the form of a piece of software; anything done just at the prompt level can easily be circumvented.
@@GCT_777 Thought police 🤓
@@GCT_777 I'd be interested in consuming some proper debate over this. My first thoughts are that I totally disagree.
@@European-Man-88 Ppl trying to banalize pdfilia. What a time to be alive, right? This will end soon, don't worry.
SD3.5 still does not compare to Flux, but I'm excited about SD3.5 Medium. Medium has incredible training/fine-tuning potential, while SD3.5 Large and Flux are far too heavy/expensive to fine-tune. SDXL is in an incredibly great state right now with the fine-tunes; in my workflows it's just as good as Flux, even better in some aspects like NSFW. I think SD3.5 Medium will be the best model for the community to mod/fine-tune: it's like a better version of SDXL and just as easy to train.
Really? And I was so excited about fine-tunes with Large 😮‍💨
Presumably this is the SD3 8B with extra training to fix the anatomy problems and other issues that the SD3 Medium model had at release?
We would have been so stoked if they had released this one right away!!
Thank you 😍
What kind of temp monitoring extension for the browser is that?
Crystools
Is Flux free for commercial use if we generate on our PC? What's the benefit of using SD3.5 over Flux? Sorry I'm a noob at this stuff.
The only benefit over Flux is that it uses fewer resources. Be aware Nvidia is cooking something similar too. But if you have the hardware, go with Flux at the moment. Running locally, it's free except for the power used. I'm able to generate images within 1-3 minutes, depending on which Flux model I use with my RTX 3060.
(cries in 8GB vram).
crying in 6
@@prinnycupcakes4992 Crying in 4 (GTX 1650)
Wow, that was basically a zero-hour news video. Very quick! So far, though, I have been quite underwhelmed by 3.5 Turbo from what I can see on Hugging Face. Output quality seems worse than even DALL-E 3 or Grok.
Where can I get the safetensors? I'm new to this stuff and it's saying I don't have access to it...
Is Fooocus continually updated also? SD* is just too much hassle. Even ComfyUI is preferable.
*Automatic1111 and Forge
It's seeing much slower updates sadly
I love how Sebastian sounds so unenthusiastic right from the beginning :D
It was an honest reaction because I was prepared to do a video on something else. So it literally took me by surprise. I even have a whole clip talking about something else that I cut away 😅
@@sebastiankamph Thanks for replying. I hope SD3 still has a future, even if its gens are only good as a source for img2img with more natural-looking models.
It would have been interesting to try a prompt with text in it, like with the potions.
Owww yeah!
Sooo... all the LoRAs trained on SDXL won't be compatible with 3.5? 😅
I'm using Stable Diffusion with Pinokio. Can I just download that 16GB file and put it in the SD folder located in Pinokio?
Hopefully James Cameron is going to whip Stability into shape. This is a small improvement, but still not that usable compared to Flux.
Yeah I bet they have to deliver something worthy now if they want to be involved in Hollywood production.
Can't wait for ai Hollywood movies.
Maybe I missed it... did you mention VRam requirements, or how much VRam it is using on your end?
I didn't actually. I didn't see anything about that. Maybe I missed that too.
@@sebastiankamph Thanks for the quick reply, and for the quick video showing how to run this thing so quickly after it was released - could you check task manager and see how much it's using?
FP16 is too large to run in 12GB of VRAM. FP8 is around 8GB, so it should fit in 12GB of VRAM.
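For anyone trying to squeeze it into limited VRAM, here's a rough diffusers sketch of the low-memory route (the repo id and the bf16-plus-CPU-offload approach are my assumptions, not something shown in the video, and you need to have accepted the model license on Hugging Face):

```python
# Minimal sketch: SD3.5 Large in bf16 with CPU offload to fit a ~12GB card.
# Assumes the "stabilityai/stable-diffusion-3.5-large" Hugging Face repo id
# and a prior `huggingface-cli login` with access to the gated weights.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps only the active submodule on the GPU

image = pipe(
    "a viking standing by a wooden cabin at the edge of a nordic fjord",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("sd35_test.png")
```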
Much better.
just started the video, will you do "woman laying in grass" test?
edit: OH!!!! it does work 😁
SD 1.5 was the last great flexible model from Stability AI. Since then they have shown they have no clue how to make them.
It was never intended to be released to the masses. Runway leaked it. That's why I support Runway.
@@omegaidol Stable Diffusion XL was the one that leaked, not the others. Also, I think you're confusing it with something else. Runway didn't leak SDXL; a researcher at Stability AI did. Runway had no access to SDXL. They're totally different companies running totally different AI models: Stability does images and Runway does video.
@@omegaidol You mean the company that has only released closed-source stuff since then?
@@TPCDAZ SDXL was leaked before its intended release, but SD1.5 was not intended for release.
@@ZwaetschgeRaeuber Yes. RunwayML leaked Stability AI's weights.
Yeah, but looking down on someone lying down from a bird's-eye perspective hasn't been as much of a problem as a frontal view. Try a frontal view, a side view, or a slight angle, and they end up looking like someone hit them with a car first.
If we still have to prompt "raw photo, beautiful, intricate, professional, realistic", etc. to get realism, they have failed and Flux is still it.
That's called customization, and that's what I like about SD1.5 and SDXL. They only give me what I want when I prompt for what I need.
Looks cool. I accidentally deleted all my Flux models, over 100 gigs, so I need to redownload them, and now I don't know what to get, this or Flux.
Why not both?
@@ZwaetschgeRaeuber True, now it's just deciding which models to get lol
I think we all wanted to see a woman lying in the grass, but the viking said enough for me.
Awesome. Now how do I make my cat dance? I just want to make funny dancing videos.
Hi 👋
It would be interesting to check the different samplers and art styles (SD3.5 seems quite good at those). On my side, I wonder: OK, Flux is good for photorealism, great. Is SD3.5 good at all the rest? And that's a broad field.
SD3.5 is a base model meant to be fine-tuned, unlike Flux.
You can fine-tune Flux just fine. It's just more costly.
I started creating my own model; these companies are just fucking everything up.
You mean finetune?
I guess it is "safe": no NSFW generation. So I prefer SD 1.5.
It's about as safe as SDXL or Flux.
Man, I hate ComfyUI.
Is this gonna be a bigger disappointment than SD3?
Bad and good at the same time... hands and fingers never fixed... that's a shame...
Hands still god damn awful unfortunately
Are we waiting for version 4 then?
Flux.
The same plasticine look as Flux.
Still looks like crap compared to Ideogram and Flux 1.1 lmao
Even Flux.1 Schnell
Flux is better
Flux Turbo requires only 8 steps and fewer computational resources, yet still gives good results: quality and speed.
It didn't even get the Nordic background, just some lame wooden cabin wall. The skin was bad, like uber bad. Sure, you could work on the prompt more, but let's be honest, that was not impressive at all. Stability are really far behind.
SD3.5 is disappointing. Bad hands and fingers as usual. Flux is doing great already, so why do we need SD3.5?
Non-distilled base model that is by nature much more malleable and easier to train than Flux.
@@BecauseBinge As you can see, the SD3.5 base models for Large (16GB) and Small (8GB) have similar file sizes to Flux Dev FP8 (8GB) or Flux Dev FP16, but Flux already does quite well with model files of that size.
@@peterpui7219 Flux's floor is much higher (much better out of the box), and in theory SD3.5's ceiling is much higher (real base model, easier to train, and free). Not here to disagree or argue, just sharing the reason why I am a little excited for 3.5. 😄
@@BecauseBinge Couldn't agree more. I have found that community resources for Flux are... interesting, but not amazing, considering the base model is already good.
For SD3.5 fine-tuned models, the question is: will it be like 1.5 and XL fine-tuned models?
This comment, "why do we need SD3.5?" Who ever questions a new AI model? Options are great! And SD3.5, at least based on my tests, is insanely fast.
Still not as good as Flux
I don't think it's meant to be, but that makes me wonder what the point of this is. The only possible advantage I see is for those who don't have the specs to run Flux, but given the file sizes, I'm not even sure about that.
@@Elwaves2925 One could always run Flux NF4 though, so Flux being resource-heavy isn't exactly true anymore.
Got a workflow for Flux NF4? Wanna try it for a project with 6GB VRAM.
@@eufrosniad994 True, but there is a reduction in quality/coherence with NF4, and this could be less heavy than that; I don't know, haven't tried it myself yet. Whatever the case, I'm curious to test it, but I don't see myself keeping it. 🙂
@@vicentepallamare2608 I am using the Forge UI, but I believe there is ComfyUI support for NF4 as well if you want finer control over the workflow. For Forge, I can get good results without even having to use any hires fix settings. Rendering a 1366x768 image takes around 1 minute 30 seconds on 6GB VRAM, and around 2-3 minutes if I add two or three LoRAs. Oh, and it will take a little longer the first time it runs, but it generates faster after the LoRA weights have been applied and all that. If I feel that some detail is missing, like skin textures, I just load it up in img2img and use an SD or SDXL checkpoint to add it in post while upscaling, using the typical SD or SDXL upscaling workflows. For me personally, Flux just seems to offer more control via the prompt, though I was skeptical about how it would be without negative prompts.
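If anyone wants to script that img2img detail pass instead of clicking through Forge, here's a rough diffusers sketch of the same idea (the SDXL checkpoint, the LoRA path, and the numbers are placeholders I made up, not the exact setup described above):

```python
# Rough sketch: SDXL img2img "detail pass" over an image generated with Flux.
# Checkpoint, LoRA path, strength and step count are placeholder assumptions.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.load_lora_weights("skin_detail_lora.safetensors")  # hypothetical detail LoRA

init = load_image("flux_render.png")  # the Flux NF4 output you want to touch up
image = pipe(
    prompt="raw photo, detailed skin texture, natural lighting",
    image=init,
    strength=0.3,            # low strength so the Flux composition is preserved
    num_inference_steps=30,
).images[0]
image.save("flux_render_detailed.png")
```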
Nothing wow about this. Good for art and cartoon pictures.