Thank you. Just tried it in Comfy, works really well and waiting for A1111 patch update to support 3.0.
ComfyUI is the best :)
Same, love having everything available on one page!
@@OlivioSarikas I know you love Comfy, but it would be appreciated if you could give a heads up when other UIs add SD3 support. I'm a fan of Forge, but official development of that has stopped, so I'm on the lookout for an SD3-only alternative, and not everyone cares for Comfy.
ComfyUI is awful
comfyui is weird
Olivio,
Thanks for all the past vids.
SD3 is GREAT …for text interpretation. I use AI to build graphics/posters for my screenplays.
I need to describe more than one person in my prompts. In SDXL, clothing A would 'bleed' into clothing B. Characters were always wearing the other's clothing.
SD3 fixes that. Last night I started resubmitting all my graphics for my show bible.
I had 100% results with multiple characters in a prompt.
Olivio comes in with a new video at the exact time of my need! Appreciate ya.
awesome :)
So, I can't use it for commercial use unlike the previous versions?
You can't use SD2 Turbo either; the newest model you can use commercially is SD2.
Update: SDXL is free for commercial use as well
@@NHCH SDXL is free for commercial use, as far as I know.
@@hrcd21 yes, you are right!
Thanks for leaving out all the fluff and just getting to it facts. Much appreciated.
Including how to literally download the thing 🤣
Just tested it; this base model is SD 2.0 level bad. It cannot get any hands right, and if you try to generate a person lying on grass, it will generate body horror exclusively.
Yeah, it's really not worth using for now.
It's nerfed, that's why.
There's a new model that includes the CLIPs + t5xxl fp16; it's around 15 GB and was released an hour ago.
If you want to render people, don't bother with this base model. It mostly creates abominations. Purposely censored to the max.
It's ok no one ever used SD for people
/s
@@spacekitt.n All humanity needs is more cat images. 😄
@@madrooky1398 exactly! We are creating AI for one purpose only: to give cat images to poor people who can't have a cat :)
Then what model would you suggest lol
Can we use SD3 in normal A1111 mode? ComfyUI is not really for me...
Thanks, Olivio! Will we be able to use SD3 with Automatic 1111?
A1111 is kind of not updated much anymore, from what I have heard, but it will probably also come to A1111.
Of course it will; just got to wait for the Forge update.
@@PretendBreadBoy I thought the dev had stopped updating Forge? He's moved on to other projects and is leaving Forge as it is.
@@PretendBreadBoy Forge has been discontinued. Check on its download page.
@@OlivioSarikas the dev branch is updated almost daily
Can you run this within Automatic 1111? If I place the safetensors in my models folder, my SD stops working and gives me an error. It goes back to normal once I remove the model.
Me too.
What do they mean by commercial use: that you're not allowed to use the images, or that you can't build SD3 into your iPhone AI image app?
When in doubt, it's best to ask them.
I understand you are not allowed to use the SD3 models for commercial use in apps you create. BUT what is not clear to me is commercial use of the output images, i.e. putting a generated image on wall art, t-shirts, book covers, etc. and selling that as the final product. Is that commercially allowed with the free version?
@@akcreativeagency8280 How would anyone know? Just do it
StabilityAI's commercial license agreement for the SD3 models means you cannot sell any SD3 images in any way, shape or form unless you at least get their 20 USD per month "Creator" subscription plan. This does not depend on whether you have a company or not. As soon as you sell images or use them in a product that you intend to sell, you need to subscribe with them. This also applies if you use the images in a company environment, like a PowerPoint presentation, a company brochure and so on. It applies to big and small companies, freelancers, private individuals, schools, educators and government agencies alike. You need to pay them if you intend to sell any image created in SD3. You also need to pay them if you are using SD3 images in any medium owned by a company. Image in a company profile: you need to pay. Image used in a government agency PDF: you don't need to pay. Government or school wants to sell the image for profit: you most likely need to pay.
Other examples: You are a private individual and want to sell an image: you need to pay. You are using an image in a PowerPoint presentation for your sister's wedding: you don't need to pay. You are selling the PowerPoint presentation to your sister after the wedding is over: you need to pay. You want to use the image in a company PowerPoint presentation: you need to pay. You are a wedding planner and created a PowerPoint presentation for a wedding: you need to pay.
OpenAI has taken a different approach: you can sell any images created with ChatGPT or DALL-E, no matter whether you created the image within their subscription plan or within their free plan. Any image you create with them, you can sell.
Stability AI has quietly transformed from an open-source research company into a full-blown for-profit tech enterprise. Many AI companies start out as open-source research companies, as this status legally allows them to scrape the web for images without paying license fees to copyright owners. The fact that they silently transform into for-profit companies after the scraping phase is a really questionable practice. OpenAI tried doing that, then Elon Musk sued them. In the meantime they also offer a free plan to fend off any opposition, which seems to work. StabilityAI, on the other hand, will face a lot of legal battles as a result, at least that is my guess. I am actually surprised that Stability is being so shameless. Their new management team is either very bold or inexperienced.
@moxxs Stability AI's commercial license agreement says you may not sell SD3 images in any form unless you at least take out the 20 USD per month "Creator" subscription. This applies regardless of whether you have a company or not. As soon as you sell images or use them in a product you intend to sell, you need a subscription. This also applies if you use the images in a company environment, e.g. in a PowerPoint presentation or a company brochure. It affects large and small companies, freelancers, private individuals, schools, educators and government agencies alike. You have to pay the monthly license subscription if you want to sell an image created with SD3. You also have to pay if you use SD3 images in a medium owned by a company, for example an image used in a company profile or in a company app (even if the app itself is not resold). If you use an image in a government agency PDF, no fee applies. But if the government or a school wants to sell the image for profit, you most likely have to pay as well.
Another example: You are a private individual and want to sell an image: license fee required. You use an image in a PowerPoint presentation shown at your sister's wedding: no license fee. You sell the PowerPoint presentation to your sister after the wedding: you have to pay a license fee. You want to use the image in a PowerPoint presentation for your company: the license subscription must be paid.
OpenAI has taken a different approach: you can sell any images, regardless of whether you created them under the subscription plan or the free plan.
Thanks Olivio! Can't wait to dig in. I've been sitting on discord allday waiting for it to drop.
same :)
Pretty straightforward to run it, thanks again!
Not so happy with first results, but I hope it's just experience and prompt adjustment.
It's not; it's nerfed.
@@MarcoCholo-iz9js after a few days, agreed. Completely. Even SDXL is better. Back to Pony/1.5
@@jcvijr Just have to wait for SD3 Juggernaut to come out
Information in this video is gold.
more like silver. lol
Great information as usual, but I'll be sticking with SDXL until they get a few fine tunings out there.
Thanks for the update! The generations with the RAW version of SD3 are absolute trash, let's hope the fine-tuned models will be actually an improvement from previous versions.
I just ran the SD3 Medium tests on ComfyUI. The results are fabulous, and the generated image quality is better than SD1.5 & SDXL by 3x based on base model testing. Looking forward to having community-trained models on SD3 soon!
yes, looks really nice so far and fast too :)
In reality it's hit or miss. Some pictures turn out very awesome for a base model, though. Finetuning is definitely needed, but we'll see how and when we get decent models, given SAI's new licensing. Personally I like it so far; a bit underwhelming in some parts but very nice in others. Def. the best base model we've got.
What about speed? While giving your answer, please mention your VRAM.
@@HOLOGRAMUA-cam 12GB VRAM (cries in RTX 4070): 1 image takes roughly 12 to 20 seconds at 1024x1024, 28 steps, CFG 5, SD3 sampler, clip+t5
@ChAzR89 So it's very likely to run in Colab, interesting.
Thanks for the vid. I'll just wait for the community trained models. Human anatomy is quite underwhelming with SD3 base model.
Yeah, was pretty disappointed myself.
It seems SD3 is so censored, so bad, that I'm not sure it's worth the community investing in training it, tbh.
@@eimantasbutkus5324 Yes, looks like you're right, unfortunately.
Thank you for your content!! I hope you're feeling better. ❤
thank you :) on my way up again :)
Can you make a detailed video tutorial? I downloaded it and tried it, but the image is very bad. I don't know what's wrong, and I don't know how to code. (If possible, please give me a link to the pre-set templates. Thanks)
I'm having an error when trying to load the SD3 checkpoint. Any idea for: Error while deserializing header: InvalidHeaderDeserialization. At the end it says "safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization"
edit: easy fix, do the same thing as in the video for the problem he had.
Same, but now I got 'module 'torch' has no attribute 'float8_e4m3fn'' when trying to use the T5.
@@jond532 download sd3_medium_incl_clips_t5xxlfp16.safetensors
@@jond532 That means your PyTorch is outdated and/or your GPU does not support fp8 precision
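For anyone hitting the InvalidHeaderDeserialization error above: it usually means the download was truncated or corrupted. A .safetensors file starts with an 8-byte little-endian length followed by that many bytes of JSON metadata, so a quick local sanity check is possible. This is a minimal sketch of that check, not how ComfyUI itself validates files:

```python
import json
import struct
from pathlib import Path

def safetensors_header_ok(path):
    """Check that a .safetensors file begins with a parseable header.

    The format starts with an 8-byte little-endian integer N, followed
    by N bytes of JSON metadata. An interrupted download usually
    truncates this, which safetensors reports as
    'Error while deserializing header: InvalidHeaderDeserialization'.
    """
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False  # too short to even hold the length prefix
    (n,) = struct.unpack("<Q", data[:8])
    if len(data) < 8 + n:
        return False  # file truncated mid-header
    try:
        json.loads(data[8:8 + n].decode("utf-8"))
        return True
    except (ValueError, UnicodeDecodeError):
        return False
```

If this returns False for your file, re-downloading the model is the likely fix.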
Thank you so much for the simple tutorial! Unfortunately the model seems to be terrible. I hope they will fix it with the bigger one.
Hi, I have the following issue: I am using comfy_example_workflows_sd3_medium_example_workflow_basic:
Prompt outputs failed validation
TripleCLIPLoader:
- Value not in list: clip_name1: 'clip_g.safetensors' not in []
- Value not in list: clip_name2: 'clip_l.safetensors' not in []
- Value not in list: clip_name3: 't5xxl_fp8_e4m3fn.safetensors' not in []
CheckpointLoaderSimple:
- Value not in list: ckpt_name: 'sdv3/2b_1024/sd3_medium.safetensors' not in ['sd3_medium(1).safetensors', 'sd3_medium_incl_clips.safetensors', 'sd3_medium_incl_clips_t5xxlfp16(1).safetensors']
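Those "Value not in list" errors mean ComfyUI simply doesn't see the files where the workflow expects them: the three text encoders belong in ComfyUI/models/clip and the checkpoint in ComfyUI/models/checkpoints. A small sketch (the folder layout is the default ComfyUI one; adjust the root path to your install) to list what's still missing:

```python
from pathlib import Path

# Default ComfyUI layout; pass your own install path to the function.
EXPECTED = {
    "models/clip": [
        "clip_g.safetensors",
        "clip_l.safetensors",
        "t5xxl_fp8_e4m3fn.safetensors",
    ],
    "models/checkpoints": ["sd3_medium.safetensors"],
}

def missing_sd3_files(comfy_root):
    """Return the expected SD3 files that are not present on disk."""
    root = Path(comfy_root)
    return [
        f"{folder}/{name}"
        for folder, names in EXPECTED.items()
        for name in names
        if not (root / folder / name).is_file()
    ]
```

After moving the files into place, also re-select them in the TripleCLIPLoader and CheckpointLoaderSimple dropdowns, since the workflow stores paths from someone else's machine.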
NSFW capable or censored like 2.0?
What should I do if I keep getting "TypeError: Failed to fetch" while executing tasks?
Does it work on automatic 1111?
Not yet. A1111 needs an update, so probably very soon.
@@BoyanOrion thanks !
Can you talk about anti-AI people putting malware on GitHub, please?
I will check it out. First time I've heard of it, other than the images that destroy model training.
@@OlivioSarikas Some anti-AI person included a keylogger in an extension that sent information back to a Discord server. A good reminder not to install any software that hasn't been tested and vetted.
Can it be used without ComfyUI?
Which one do I download for a 4090?
Thanks!
Thank you very much for your support ❤️
Does it only work on Comfy UI? or Forge is ok too
Just in case you don't want to give your data away: Civitai has the models, too.
edit: Also just in case you don't want to give your data away: Just don't download it as it is absolute hot garbage (unless you are into creating Cthulhu abomination type body horror)
Why, when I generate a picture in 16:9, do I get artifacts on the sides?
Thanks Sir! You make my day great!!!
I can't figure out where to place the TripleCLIPLoader files (OpenCLIP-ViT/G, CLIP-ViT/L and T5-xxl). Any ideas?
ComfyUI\models\clip
Is there any lightning or LCM model for it?
whats the difference between previous models?
So how do I install it??
Is it possible to run SD without an internet connection?
Has anyone gotten img2img working yet? The VAE nodes in ComfyUI don’t appear to be compatible at this time.
Use the model; the VAE is embedded.
please also a video for automatic1111!
Thanks for video, Olivio!
Can you run it via stable the old way or must you use comfyui to run SD3?
cool. ! thanks. hope you are doing well!
Can someone share the workflow? I got the file, but it doesn't work in ComfyUI.
After updating all, I got this error: ModelSamplingSD3 / TripleCLIPLoader missing. How can I solve it?
Did you download the model that includes the CLIP, like I showed in the video? Feel free to ask in my Discord: discord.gg/gUengqcN
@@OlivioSarikas yes the 11Gb model!
Update comfyui. Don't use the manager.
I wonder if old LoRAs would work with SD3. Any success so far?
good question... i doubt it though
i think no
If it's older, like SD1.5, then absolutely not; one is trained on 512x512 and the other on 1024x1024.
Please....Automatic1111 setup!
Thank you, my friend!
Welcome!
Thanks, Olivio, great share!
thanks for the quick update 🖐🖐
you are welcome :)
I did the same, but I just got a fully noisy image, colored noise.
Are you sure you updated and used the correct model I show in the video?
@@OlivioSarikas the same, the clip included model from the workflow
This serves as an empirical clue of why Stability AI might be failing.
5:59 Sound in background 🤔
An absolute disaster for anything pose/anatomy related, like eldritch-horror-level disaster. It's also really bad at hands. On the other hand, it does well in prompt adherence and text generation, but if I want to do text, I will just use Photoshop or ControlNet with the text. The SD3 Medium launch feels like 2.0 all over again... dead on arrival.
Do we not need to download the clips?
Can I use it for Stable Video Diffusion?
No reason why it should not work. You will have to fuse the video workflow with the new sampling settings, though.
Hey, we don't see you enough lately. Try to expand your scope if image-generation news is slow. Take care.
Had some private problems to deal with. But I'm back now :)
@OlivioSarikas or just live your life😂 dude we appreciate all the free content you put out and help you provide. It's insane to think people want more for nothing out of someone that already provides so much.
@amorgan5844 Oh shush, they aren't expecting anything. It's just a show of support, because people like his content.
About the agreement:
"You own your derivative works, but Stability AI owns the original software and any derivative works they create". ?
Where you at Juggernaut, SD3 is here.
Most likely at the office, brewing their new model... 😅
no training support yet, but hopefully soon
@@OlivioSarikas Now I'm sad, thx for the info.
@@OlivioSarikas Which is why I'm waiting. I never use official models. Hopefully Automatic will get support soon, but I've heard ComfyUI is better overall for image generation.
Could you explain more about ComfyUI?
I have. Search my channel.
Why use SD3 if SD1 not only works really well but has thousands, if not tens of thousands, of LoRAs already created?
Why use SD1 when SDXL is currently peak, taking into account the quality of the base model and the amount of LoRAs?
@@Vestu An excellent question. What's the difference between SD1 and SDXL? More importantly, why isn't SDXL called SD2? When you enter a prompt, the machine takes your human language (English, Spanish, French, Mandarin, etc.) and converts it into, let's call it, "SD Speak". SD1, SD2, SDXL, and SD3 all use the exact same architecture for prompts. The only programming/math difference between them is the Human-to-SD-Speak converter.
The other part of the equation is the dataset. Obviously each generation adds more data. So 1.5 knows "kitten" but not "Calico" and SDXL/SD3 have "Calico" inside of their massive banks of data.
So that's it: more data and a better language translator. But...
We have LoRAs. Thousands of them. There's even a LoRA for "Calico" cats. And even if there wasn't, you can just make one of your own on Civit with as few as one photo. Now imagine you want your Calico cat on SD3. SD3 has to search BILLIONS of datapoints looking for "Calico" instead of just handing it the "Calico" LoRA. This is hugely inefficient. That's why SD3 can't run without a GPU. With 8th grade PhotoPea knowledge and thirty seconds of elbow grease to import the right LoRA, you can make art literally 10x faster in SD1.5 without the need for GPU support and the extra electricity.
*IF* I have proof Jagex game studios uses this model without a commercial license, is there a reward for telling on them??
Excellent video, what great news! Now I'm going to wait for you to release a video on how to make a LORA with SD3 so I can create my LinkedIn profile picture 😅
I get an error pressing prompt. Prompt outputs failed validation
CheckpointLoaderSimple:
- Value not in list: ckpt_name: 'sdv3/2b_1024/sd3_medium.safetensors' not in ['sdv3\\2b_1024\\sd3_medium.safetensors']
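Note the error above: the workflow asks for 'sdv3/2b_1024/sd3_medium.safetensors' with forward slashes, while the local list contains the same path with backslashes, so the literal string comparison fails even though the file exists; re-selecting the checkpoint in the node's dropdown fixes it. A sketch of the mismatch (illustrative only, not ComfyUI's actual validation code):

```python
from pathlib import PureWindowsPath

def same_model_path(a, b):
    """Compare model paths while ignoring / vs \\ separator differences.

    PureWindowsPath accepts both separators, so normalizing both sides
    to POSIX form makes the comparison separator-agnostic.
    """
    return PureWindowsPath(a).as_posix() == PureWindowsPath(b).as_posix()

# The raw strings differ, but they name the same file on disk:
workflow_name = "sdv3/2b_1024/sd3_medium.safetensors"
local_name = "sdv3\\2b_1024\\sd3_medium.safetensors"
```

Here `workflow_name == local_name` is False as plain strings, while `same_model_path(workflow_name, local_name)` is True.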
This model is simply not capable of creating even the simplest human anatomy; even the 1.5 model is significantly better for human anatomy, and this is not an exaggeration. I don't understand why they released this in the first place; this model only makes sense for creating images that don't contain people. How this was not immediately apparent to those who test these models long before the official release, such as Olivio Sarikas and other "SD streamers", is beyond me. They are just as credible in their "presentations" as Stability AI.
Love the commercial license idea, I hope that the owners of the original training data get residuals
My Stable Cascade images are better than SD3's (for the same prompt).
Olivio U R my man :)
thank you :)
beautiful bro
Sorry, what/where is the "ComfyUI Windows Portable folder"? I can't find what you're talking about.
It's a real pain to update ComfyUI, and the missing modules NEVER appear. I really don't understand what they do when they "update", because they manage to ensure that everything that was useful no longer works.
As much as I don't like ComfyUI, you literally only need to click one button to update everything.
@@h.m.chuang0224 (clicking 20 buttons later) No, that's really not how it works. I'm having the same issue, in that I have to track down a forked repository of a useful node pack that is no longer supported. Comfy won't do it for me but tells me that I need to do it. So, more than one click. I suppose that one click does say what I need to do to fix it, but still. It says "uninstall current unsupported node pack, find forked repository and install that instead".
I downloaded the files, they are sitting there with the other models. I've updated ComfyUI. It cannot see the new models.
You never explain how to install :D The file you wanted us to download doesn't work.
At least I have better prompt control to render my abominations now.
We're going to need the 2nd Amendment for AI. It's not the tool that hurts people, it's the people who use the tool wrong that hurts people.
And now my friends, let's patiently wait for Pony! 🎠🤣
They won't release a Pony SD3, just confirmed.
pony be like "let's ride this hell horse"
@@j5545 Purple Smart AI said they would like to train a new Pony model on SD3.
There won't be a Pony model for SD3; the creator of it got mocked by their employees on their Discord, and they're not responding to his request for an enterprise license.
BUT we'll get Pony v6.9 (nice) for SDXL, or maybe another model.
Thank you
you are welcome :)
the lady in the thumbnail looks suspiciously similar to Senta Berger?
hm....
Get cooking guys ❤
GPU: on fire
Keep in mind the prompting needs to be different. I've seen much better results from people using full sentences. So no SD1.5- or SDXL-style prompting.
SD3 claims better hands, yet none of the sample images show human hands.
I am just wondering how they are going to enforce their non-commercial licensing when it is all done on your PC. Are they spying on you? 🤷‍♂️
Bro is milking it... wow!
Am I the only one that doesn't like ComfyUI? 😥
no
Like they say, spaghetti should be for eating and not generating AI images.
Nope, I hate it, too. Whenever I force myself to use it again, I spend more time trying to get the cable salad all neat and tidy than actual image generation. 😋
I hate it
IT USE COMFYUI AND IT WILL LIKE COMFYUI OR IT GETS THE HOSE AGAIN!
It is so bad lol. Back to SDXL and Pony.
Stability AI is so back!
Let's go!!
You are a fricken hero mate.
Thank you :)
Hey oliviooooo brooooo
hey-oooo!!!! what up!
Is it working in 1111? I tried, but "TypeError: 'NoneType' object is not iterable"... sighs!!! u,u
not yet
Body horror model
Not worth the time; easily one of the worst SD launches. Licensing it the way they did, with how poorly it was trained, they really shot themselves in the foot.
comment for the algo
thank you :)
SD3 sux...
I get "Unable to find workflow in demo" after the update; it complains about all the pictures.
Yes!!!
So the blue guy at the end is gone...
I appreciate the video, but as a newbie, some basic things are not exactly intuitive. Like when he says to download the models and so on... okay, where? It would be nice if you actually showed this step. Also, how do you actually run ComfyUI so it generates an image? There seems to be no clear button to launch it.