Probably the Best Model of 2023 So Far.

  • Published Jan 11, 2025

COMMENTS • 140

  • @sebastiankamph · 1 year ago +1

    Get early access to videos and help me out by supporting me on Patreon: www.patreon.com/sebastiankamph

  • @cyril1111 · 1 year ago +15

    I played with it all weekend and agree with you - very good one. I even did a bunch of fine-tuning on it (over 3K images) and the results are pretty amazing!

    • @sebastiankamph · 1 year ago +3

      Wow, what did you train on?

    • @mirek190 · 1 year ago

      probably questionable shit ;) .. like it @@sebastiankamph

    • @cyril1111 · 1 year ago

      I trained it for a client of ours (Norma Kamali) on her swimwear archive - over 40 years of archive! The results are so versatile and precise - I'm in awe! @sebastiankamph

  • @techviking23 · 1 year ago +11

    Cool! Love how there's so much free stuff in the open-source Stable Diffusion community.

  • @RuneW0lf · 1 year ago +3

    Awwww yeah, I just downloaded this model the other day and made it my default, and thank you for showcasing RuinedFooocus.

  • @matthallett4126 · 1 year ago +2

    Love trying new models, thanks for the recommendation

  • @jibcot8541 · 1 year ago +3

    The other 2 models that are up there with this one are _Mohawk_ and RelVisXL.

    • @sebastiankamph · 1 year ago

      Oh, I haven't tried Mohawk, thanks for the tip!

  • @blackvx · 1 year ago +1

    I tried it, and the results are impressive! Thanks!

  • @techviking23 · 1 year ago +18

    It's surprising how the simple prompts you showed created such high-quality images. Stable Diffusion has come a long way since just last year.

    • @DerXavia · 1 year ago

      Yeah, simple prompts work especially well with SDXL; sadly, more specific prompts seem to be a little tricky to get remotely right.

    • @sebastiankamph · 1 year ago +3

      Yeah, it's much closer to the simple prompting of Midjourney. Much easier to get good looking images quickly.

    • @wolfai_ · 1 year ago

      From my experience, simple prompts work better because the AI can "think more freely". Just like our brain: when we're told too many things to do, it gets clouded and the output is worse.

  • @Cu-gp4fy · 1 year ago +1

    Niiiiiiice my new go to! Thanks for sharing

  • @LikquidDutch · 1 year ago +1

    In this video you said the prompt styles are free and to check out the link below. When I click on it, it wants me to sign up for a monthly plan????

  • @michaelleue7594 · 1 year ago +12

    Realism is nice, but I want better composition controls and better tools for changing specific details.

    • @jibcot8541 · 1 year ago

      Bing Image Creator (DALL-E 3) is so much better at that. Hope it comes to SD soon.

    • @techviking23 · 1 year ago

      Curious, what UI do you use?
      ControlNet and InpaintAnything extensions on A1111 are my favorites right now for comp control...

    • @sebastiankamph · 1 year ago +3

      For Stable Diffusion in general, that's already available in ControlNet and Inpainting. But everything can be improved I guess!

  • @MuslimFriend_2023 · 1 year ago +2

    We missed you man :)

  • @techeman369 · 1 year ago

    Great bro, the model was lit!!!

  • @Mowgi · 1 year ago +4

    I've been using xeno a lot for a certain style and Juggernaut for more realism; keen to try this out.

    • @sebastiankamph · 1 year ago +1

      Let me know what you think! Would love to hear some more thoughts

    • @spiritpowertx · 1 year ago +2

      What is the xeno model? What is its full name, please?

  • @princessdeathray · 1 year ago +2

    Really fantastic video, but I had to pause it over the umbrella comment 😂😂😂

  • @PrinceWesterburg · 1 year ago +1

    Civitai - a civet is a small cat that lives in the rainforest and eats coffee berries that ferment in its gut, and its poo is the world's most expensive coffee.

  • @RogerBetin · 1 year ago +1

    Great model; too bad it crashes with Deforum after generating around 200 images. See the following error message:
    User friendly error message:
    Error: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.. Please, check your schedules/ init values.
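
    For reference, the commandline arguments mentioned in that error go on the COMMANDLINE_ARGS line of webui-user.bat. A minimal sketch of a stock A1111 launcher with the suggested flag (untested against this Deforum crash specifically; --no-half costs VRAM and speed):

        @echo off
        set PYTHON=
        set GIT=
        set VENV_DIR=
        rem Force full precision so the UNet does not produce NaNs on cards with weak fp16 support.
        rem --disable-nan-check would only silence the check rather than fix it.
        set COMMANDLINE_ARGS=--no-half
        call webui.bat

    The alternative is the "Upcast cross attention layer to float32" option under Settings > Stable Diffusion, as the error text says.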

  • @Deep_field · 1 year ago +1

    For some reason this model runs super slow. I'm getting 6.4 s/it, whereas with most other SDXL models I get 10-15 it/s.

  • @ZYF34R_ART · 1 year ago

    Do you know why mine doesn't generate correctly? I get this weird highlighter red and yellow output... thanks.

  • @SuccessNowBlueprints · 8 months ago

    I am super perplexed by SDXL; its benefits seem to be trade-offs, not upgrades. I basically learned 1.5 from your wisdom - huge thanks and praise - and I'd like to think I've gotten quite good at it, even on a 10-year-old potato. I notice the results of inpainting are weaker; you can't just force more pixel density into an area like in 1.5. There seems to be this very common theme where blur and distortion are just accepted around background elements, with massive loss of detail for, dare I say, a more artistic look instead of something polished and sharp? I guess we are prioritizing foreground elements to go large and saying forget the rest... Between the refiner, the VAE, high-res fix, and low support for Ultimate Upscale in A1111, is it forcing you to jump to ComfyUI? I feel like this is a backwards progression. I have renders on 1.5 that are not even just as good; they are exponentially better.

    The one thing I do love about SDXL is the natural and very literal interpretation of prompting. Body positions and scenes do seem more dynamic and exciting, which I love, but dare I say I am tempted to start in SDXL and use it as an OpenPose reference to go back to 1.5 for my quality. You will likely get what you put in even without the major support of LoRAs, so it feels like it has more scope. I just got a brand new PC with a GeForce RTX 4070 Ti, and I thought I'd take the plunge and try the more intensive interface, as SDXL was a 2-hour render and is now 14 seconds. Am I about to throw in the towel because it feels like way more trouble than it's worth? Underwhelmed and kinda disappointed. Am I crazy, or does anyone else feel like this?! Perhaps SDXL is a good idea on paper, but a different flavor of ice cream may not be the next big thing - idk, maybe I am missing some important piece. I would be very curious what people's opinions are.

    With some great prompts and a couple of good LoRAs, you can drop a 3000x3000 image onto a 1.5 set of dimensions at 0.3-0.4 with a CFG of 7.5-10 and churn out high-res masterpieces starting at 768x768, which may only need a little inpainting and minor upscaling. Running 768x1344 in SDXL, regardless of the inputs, I feel like I want to throw up in my mouth, getting results like your first day of Stable Diffusion without embeddings or LoRAs, using base models. It concerns me that the only refiner I seem to find for SDXL is the base one, which is universally garbage; running images through it seems to make things even worse, even at 80% or 70% shift. I am just wondering if anyone else feels the same? Just like 1.5, when you start too big it overwhelms the AI, and by that standard I think SDXL and even the upcoming Cascade are just flawed: start small, get something amazing, scale appropriately.

    When I look at top renders on Civitai, a lot of the SDXL ones don't really impress me; the hyper-realism is better-ish, but all other styles are just as good or worse, and often in far less quantity. Can 1.5 outperform SDXL as the more veteran and polished system? I am curious to try for another week or two, and then I might just hold off for a year until it gets dialed in more. Thoughts and feedback welcome; really not sure what to think.

  • @KitsuneVoss · 1 year ago

    I tried this model and it is amazing. The 6 GB RTX 3060 on my gaming laptop takes a long time to render, however. If I can get Stable Diffusion to see my desktop's RX 6800, it might be different.

  • @artificialbeat · 1 year ago

    Please make an A-Z workflow video for these settings and everything, for A1111 please.

  • @donvarsmak5688 · 1 year ago

    I'm having an error when I try to use this model in Fooocus: AttributeError: 'NoneType' object has no attribute 'model'

  • @Benwager12 · 1 year ago +1

    Love the videos, Sebastian. Is there any chance of a longer-form video where we see an image being initially made in Fooocus and then touched up in Automatic1111? Even so, this is definitely a model I'm going to try out ASAP.

  • @ItsBrody · 1 year ago +1

    Doesn't let me use it for some reason. I downloaded it but it just switches back to the model I was using previously.

    • @Elwaves2925 · 1 year ago +1

      I sometimes have that issue but it usually sticks if I wait about 10 seconds before trying again. I assume you've restarted the UI (including the console window) and so on? That also works for me. I know I'm stating the obvious but you never know. 🙂

    • @ItsBrody · 1 year ago

      Well, it only does it for this model. Is that the case with you too? Maybe I don't have the right version of Stable Diffusion? @Elwaves2925

    • @ItsBrody · 1 year ago

      Looking at the console/CMD, it does say "failed to load checkpoint, restoring previous". How do I see what version of SD this checkpoint uses? @Elwaves2925

  • @KiwiHawk-downunder-nz · 1 year ago +1

    It would be nice to see how this works for creating a head/face with no hair, for use with tools like Face Transfer in Daz 3D and Headshot 2 in Character Creator.

  • @bitedight · 1 year ago

    Guys, does anyone have access to the RuinedFooocus download file? After downloading, it shows multiple critical errors, so it can't even be extracted properly from the zip folder.

  • @AlbertRossvids · 1 year ago

    Is RuinedFooocus faster at generating than AUTOMATIC1111?
    I am a beginner and am wondering what my expectations should be for generation times.
    I have just started with Juggernaut SDXL.

    • @sebastiankamph · 1 year ago

      RuinedFooocus will be a little faster for SDXL due to the fact that it's built upon Comfy and handles XL better. It's not a massive change, but a little bit.

  • @Elwaves2925 · 1 year ago +1

    Nice and thanks. Always willing to try out new models.
    I say Civit-A-I because it's A.I. that we use.

    • @serenaquaranta6673 · 1 year ago +1

      Sometimes we are in a hurry and call it the other way! XD

  • @tomaseriksson5430 · 1 year ago

    Hi, when I try to load this in A1111 it takes quite long, then it doesn't load and simply goes back to the previous model. Is it because my computer is too slow, and how can I fix that?

    • @sebastiankamph · 1 year ago +1

      Try using a more lightweight UI like RuinedFooocus if your GPU is not up to the task. If you want to use SDXL with A1111, make sure you have the latest version.

  • @MrNorBro · 1 year ago +3

    Wow, this model creates incredibly realistic skin! I love how you've crafted such amazing images with it, I'm gonna try this out as well! Thx Seb!

  • @jonmichaelgalindo · 1 year ago +1

    I was trying this over the weekend, and I couldn't get anything good out of it. I'll try it again.

    • @sebastiankamph · 1 year ago +1

      Try it with simpler prompts, or if you're using A1111, my SDXL styles. It worked very well for me. The keyword "cinematic" helped a ton.

    • @jonmichaelgalindo · 1 year ago

      @@sebastiankamph Okay, it definitely doesn't work. Trying to get a mermaid for a painting reference. (I always use simple prompts. No artist names, no loras, no styles.) It can't do hands, or noses, or eyes; the compositions are terrible; there are scales in the water and in the sky; it always ignores at least one part of my prompt. The skin texture is very realistic, but that's not helpful here at all. :-(

    • @sebastiankamph · 1 year ago

      I see, what XL models usually work for your use case? @jonmichaelgalindo

    • @jonmichaelgalindo · 1 year ago

      @@sebastiankamph CrystalClear is the only one that gives me consistent real-world accuracy across all prompt subjects - everything from figurines to futuristic cities to pools in desert palaces to trading cards and figurines, on and on. And I've tried close to fifty models. Most models work for certain subjects like: cars, faces, landscapes, and food. A few models work for fantasy like elves, dragons, and intricate armor. But all models except CrystalClear fail to follow prompts accurately across the subjects I need to paint, and I have no idea why!

    • @shv9029 · 1 year ago

      Same here, very difficult to get good stuff out of it. The only thing I could get really good was the Joker's face makeup, almost nothing else. I also tried the exact prompts with the exact seed from the examples on Civitai; surprisingly, it still gives me awful stuff. @sebastiankamph

  • @blasteekmag · 1 year ago

    Hi, thank you so much for this. I have a question: how can we repose a character in an existing image?

  • @TomiTom1234 · 1 year ago

    Indeed an awesome model.
    But a question: under the "Performance" tab, how can I get a preset like yours in this video? All I have to choose from is "Speed", "Quality" and "Custom", which opens other options. But what is this "Seb_cfg4.." and how do I get it?

    • @sebastiankamph · 1 year ago +1

      If you go into custom and set your own settings, you can then save those custom settings.

  • @IndiaUntold · 1 year ago

    I tried plenty of models; this one gives a pretty cinematic look, but nothing can beat RealisticVision for speed and accuracy. Bulk image creation is too easy.

  • @rodrigosouza8471 · 1 year ago

    Which model would you recommend for fantasy/DnD images?

  • @andyrevo8081 · 1 year ago +4

    You should really fold your umbrella jokes.

  • @gameswownow7213 · 1 year ago

    They are free for your patrons...

  • @binyaminbass · 1 year ago

    Basic question: I noticed that this model is using SDXL 1.0. Isn't 1.5 better?

    • @sebastiankamph · 1 year ago

      Version numbers go in this order: 1.4, 1.5, 2.0, 2.1, XL.

    • @binyaminbass · 1 year ago

      @sebastiankamph But what I'm asking is, isn't ThinkDiffusion XL using 1.0, which would make it not as good as a newer model? Why are people still training models on the older version?

    • @sebastiankamph · 1 year ago +1

      There is only one official SDXL base model. 1.5 is not SDXL, and SD 1.5 is earlier than SDXL 1.0. @binyaminbass

    • @binyaminbass · 1 year ago

      Got it! Thank you! I've been downloading old models! @@sebastiankamph

  • @NikolaiWilding · 1 year ago

    Great images, but I've never yet seen SDXL models show great human hands. Any chance you can make a point of demonstrating how to do this?

    • @sebastiankamph · 1 year ago

      Hands are still terrible unless super close up I'm afraid.

  • @BrinaGoddess-c1u · 1 year ago

    The prompt styles are a Patreon thing now... seems like I can't download them for free.

  • @scienceandmatter8739 · 1 year ago

    Sorry for being an unknowledgeable European, but can you tell me if there is a local install for Windows 11 with an AMD GPU to use this model on my PC?!
    I tried to get into AI art a few months ago but there wasn't any way to get it to work because I have an AMD GPU...

    • @Elwaves2925 · 1 year ago

      I can't speak for using an AMD GPU, it may cause issues, but the official Automatic1111 installer has been available as a standalone install file. No need to download any dependencies and so on; it does it all in one go. Maybe you last tried before that became a feature? Might be worth trying again.
      YT doesn't like links, but it should be easy to find, or there's a link in the description for the one in the video, which is a good starter UI. 🙂

  • @opensourceradionics · 1 year ago

    Can we say that a model is a holographic database of all images used for the training?

    • @sebastiankamph · 1 year ago +2

      Not quite. Saying that it's a database of the images used for training implies that the images are stored in the model, but they aren't stored, the model merely learned from the images.

  • @SirKarugan · 1 year ago

    Unfortunately for me it doesn't work. I used the same prompts but all I get are deformed faces or not the same details

  • @20xd6 · 1 year ago +2

    Any tips for us 8 GB VRAM bros trying to run XL models?

    • @temetski91 · 1 year ago

      I modified webui-user.bat and added the following arguments after set COMMANDLINE_ARGS=: --xformers --medvram --no-half-vae. It helped my image generation speed a lot (on an RTX 2080 Ti, 11 GB VRAM).
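
      Concretely, that means the arguments line in webui-user.bat ends up looking something like this (just a sketch; which flags actually help depends on your GPU and A1111 version):

          set COMMANDLINE_ARGS=--xformers --medvram --no-half-vae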

    • @sebastiankamph · 1 year ago +2

      Comfy and Fooocus will run smoother for you then.

    • @20xd6 · 1 year ago

      OK, so I updated A1111 to 1.6 and am using --medvram-sdxl 👍 @sebastiankamph

  • @coenielandman9941 · 1 year ago

    is there a 1.5 version of this?

    • @sebastiankamph · 1 year ago +1

      My personal 1.5 favourite is epicrealism

    • @coenielandman9941 · 1 year ago

      @sebastiankamph Yeah, me too, I get the most consistently beautiful pics with that model.

  • @ZYF34R_ART · 1 year ago

    mine just doesn't work at all :(

  • @kaso6164 · 1 year ago

    Is it just me, or does everything take ages with this model? Or is this always the case for XL checkpoints? I've never used one before.

  • @OldToby53 · 1 year ago

    So RuinedFooocus is just like Stable Diffusion Automatic1111? We don't need it, right?

  • @Steamrick · 1 year ago

    Is it just me, or does the model have a propensity to stretch the neck for tall images?
    (With tall, I mean something like 1216x832, which is only 1.46:1, nothing extreme)

    • @jonathaningram8157 · 1 year ago

      Many SDXL models do that. I use 1024x1024 if I want good proportions.

  • @bunnystrasse · 1 year ago

    What is RuinedFooocus?

    • @sebastiankamph · 1 year ago

      It's a fork of popular UI Fooocus. See video description for link.

  • @kurtlindner · 1 year ago

    I need a better GPU and to start running SDXL.
    Usually Civit-eye.

  • @luizhenriquereis9228 · 1 year ago

    i fucking love your channel

  • @MonDiabolique · 1 year ago +3

    For me it's the opposite. I've been a photographer for 15 years, and the majority of models I've used give amazingly realistic results for portraits, with the exception of the occasional common body disfigurations with AI.
    I find the challenge is finding a model that can do creative artwork specific to your needs.
    With a photograph I can say it is what it is. But for, say, concept art, logo designs, character designs, etc., you need precise details to match your vision. Otherwise it's pointless for your project.

  • @Stick3x · 1 year ago

    What does checkpoint mean? Can you use these commercially?

    • @sebastiankamph · 1 year ago +1

      A checkpoint in this case is a Stable Diffusion model. You can use it commercially, yes.

    • @Stick3x · 1 year ago

      Thank you for your response.@@sebastiankamph

  • @johnquertinmont9215 · 9 months ago

    Had to delete it. With it as my checkpoint, all my embeddings and LoRAs would disappear and I did not have access to them; when I went back to another checkpoint they came back. Weird, huh? Anyway, my favorite checkpoint is Lyriel, second is DreamShaper.

  • @Dunc4n1d4h0 · 1 year ago +2

    So much bleeding, green eyes and almost everything goes green 🙂

  • @midjourneyman · 1 year ago

    Per the CEO, it is Civi-Thai. :)

  • @xXx-lfg · 1 year ago +1

    Make the page bigger.

  • @LouisLeXVIII · 1 year ago +4

    Hi Seb, 1st comment? Love your vids.

  • @schtroumpfyoda8469 · 1 year ago

    Juggernaut 6 sdxl !

  • @ANGEL-fg4hv · 1 year ago

    Aghhh xl 😢

  • @jonathaningram8157 · 1 year ago

    What I want is a more amateur-looking model, like someone taking a picture of his friends, etc. Most models look way too professional.

  • @justanothernobody7142 · 1 year ago +1

    Do we really believe they hand-captioned 10,000 images? Wouldn't that be like a full-time job for a small team over several weeks?
    Combined with the fact that they were also touting this as some kind of next-level model on Reddit a few weeks back and then sponsoring YouTubers, it makes me think they are planning on trying to convince people they have something special and then move future models to some kind of paid service.
    So far I'm yet to see anything from it that I couldn't get from some of the other XL models.

    • @sebastiankamph · 1 year ago

      I've spoken to the people that did the work and they indeed say it was hand-captioned. I see no reason to make that up, but I also can't verify that it's true. It's a cool feature that improves the model, but not revolutionary. As far as I know, the models are not planned to go paid; from a business perspective they're more of a marketing tool.

    • @justanothernobody7142 · 1 year ago

      @sebastiankamph OK, that's good to hear. I'm still skeptical though, as hand-captioning 10,000 images is a huge and extremely tedious task and not something you would expect done for nothing in return. Even at one caption a minute, which is generous, that's over 160 hours of mind-numbing work.

    • @sebastiankamph · 1 year ago

      Oh, I don't think anyone is implying that it was done for nothing in return :) 💰@@justanothernobody7142

  • @Yownas73 · 1 year ago +2

    Woooo! RuinedFooocus!

    • @sebastiankamph · 1 year ago +1

      The best! Just waiting for those gif features

  • @daliborzivotic7371 · 1 year ago +1

    Those millions of steps don't really show. Not much different than Juggernaut - actually worse in some cases for realistic results. The prompts shown here as examples come out detailed in every model, aliens in particular.

  • @Solodevgamesin · 1 year ago +1

    Not the best 😂😂😂 Nlight is far better.

  • @romansaakov7081 · 1 year ago

    I thought you meant 3D models made by humans, but no, it's the same hype of the lazy, stupid AI thing. AI will evolve, humans will degrade - not all, but many. Tools should help, not replace the human being, at least in art.

  • @HyperUpscale · 1 year ago

    TRY RuinedFooocus instead ;)

    • @sebastiankamph · 1 year ago +1

      That's what I use 😂

    • @HyperUpscale · 1 year ago

      @@sebastiankamph 🤒😵‍💫 Apparently .... I double checked now 😶‍🌫