Use Your Face in AI Images - Self-Hosted Stable Diffusion Tutorial

  • Published Sep 5, 2024
  • Thanks to Ekster Wallets for sponsoring today's video. Head over to shop.ekster.co..., or use Promo Code: Craft, and get 35% Off your next order!
    How do you make your own AI Generated Images? And how do you train Stable Diffusion to use your face? Today, I'm going to show you how to install and run your own AI Image Generation Server, and teach it who you are.
    But first... What am I drinking???
    From Barsidious Brewing, it's the BLACK Stout Ale. For an 8% stout, this hits WAY above its class in body and flavor. Highly recommended.
    Link to written documentation: drive.google.c...
    Grab yourself a Pint Glass or Hoodie at craftcomputing...
    Follow me on Mastodon @Craftcomputing@hostux.social
    Support me on Patreon and get access to my exclusive Discord server. Chat with me and the other hosts of Talking Heads all week long.
    / craftcomputing
    Music:
    No Good Layabout by Kevin MacLeod
    Link: incompetech.fi...
    License: filmmusic.io/s...

COMMENTS • 231

  • @janliberda9493
    @janliberda9493 1 year ago +66

    To view GPU utilization during AI computation under Windows, you need to switch the graph from 3D to CUDA. Otherwise it may look like the GPU is doing nothing :)

    • @thafex2061
      @thafex2061 1 year ago +3

      You have no CUDA if you run it with an AMD graphics card

    • @gmfPimp
      @gmfPimp 8 months ago +1

      @@thafex2061 Also, recent changes to Windows 10 removed the CUDA option from the GPU charts. If you have NVIDIA and don't see CUDA, you have to make registry changes to re-enable it.

  • @jdl3408
    @jdl3408 1 year ago +49

    Please train the model on Charlie. We need AI generated cat pictures… “Charlie and Rambo fighting a dragon in the forest”

    • @haxboi5492
      @haxboi5492 1 year ago

      And a story-making AI

    • @fatrobin72
      @fatrobin72 1 year ago +3

      And start working towards replacing YouTube with AI cat videos?

    • @ProliantLife
      @ProliantLife 1 year ago

      @@haxboi5492 ChatGPT can create a rough storyline for you lol

  • @VanHonkerton
    @VanHonkerton 1 year ago +10

    Pumping out tons of images isn't usually the best route. 25-30 sampling steps with a CFG Scale of 8-9 is generally better, and Restore Faces usually makes things look a lot better too. Adding negative prompts is also very useful, and you can click the little recycle icon next to Seed to lock it and see how certain tokens affect the outcome, to further fine-tune the result.
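
    For anyone scripting this instead of using the web UI, the same knobs (sampling steps, CFG scale, negative prompt, fixed seed) map directly onto parameters of Hugging Face's diffusers library. A minimal sketch, assuming diffusers, torch, and a CUDA GPU are available; the model ID is a stock SD 1.5 checkpoint, not necessarily the one from the video:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion 1.5 checkpoint in half precision to save VRAM.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A fixed seed reproduces the same image, so you can see how individual
    # prompt tokens change the result (the "recycle icon" trick above).
    generator = torch.Generator("cuda").manual_seed(1234)

    image = pipe(
        prompt="portrait photo of a man in a leather jacket, studio lighting",
        negative_prompt="bad anatomy, low quality, cropped, extra fingers",
        num_inference_steps=28,   # the 25-30 range suggested above
        guidance_scale=8.5,       # CFG scale in the 8-9 range
        generator=generator,
    ).images[0]
    image.save("portrait.png")
    ```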

  • @ywueeee
    @ywueeue 1 year ago +6

    You have to play around with CFG when it's your own face. More steps (30+) with Euler also help!

  • @jonahrothenberger1782
    @jonahrothenberger1782 1 year ago +3

    I spent over 2 hours on other videos, so confused. This video was simple, to the point, and got me started on my AI goals. Thanks!

  • @interlace84
    @interlace84 1 year ago +11

    Thanks for the detailed deep dive! Would you be interested in explaining how to host or train your own (Chat)GPT as well?

    • @yobdrzl
      @yobdrzl 1 year ago +2

      This would be good to see

    • @felixjohnson140
      @felixjohnson140 1 year ago +1

      Yeah, only if you have more than $100 million to spare. The Andromeda CS-2 supercomputer alone cost $30 million. Good luck training ChatGPT on your RTX 3060.

    • @interlace84
      @interlace84 1 year ago

      @@felixjohnson140 that depends on the size of the dataset you're training on 😁

  • @alpenfoxvideo7255
    @alpenfoxvideo7255 1 year ago +3

    You can bulk-resize images in Windows using the official PowerToys Image Resizer
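
    If you're not on Windows (or just prefer a script), the same bulk resize to the 512x512 training size used in the video is a few lines of Pillow. A minimal sketch, assuming Pillow is installed; the folder names are placeholders:

    ```python
    from pathlib import Path
    from PIL import Image, ImageOps

    SRC, DST, SIZE = Path("raw_photos"), Path("training_images"), (512, 512)
    DST.mkdir(exist_ok=True)

    for path in SRC.glob("*.jpg"):
        img = Image.open(path).convert("RGB")
        # Center-crop to a square and resize, rather than squashing the aspect ratio.
        ImageOps.fit(img, SIZE, Image.LANCZOS).save(DST / path.name)
    ```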

  • @wrmusic8736
    @wrmusic8736 1 year ago +15

    Default Stable Diffusion checkpoints are not very good at specific things; they are too general. You would get better results using focus-trained checkpoints from CivitAI, like Realistic Vision for example, which was specifically trained to produce more realistic humans (while, naturally, getting worse at everything else). Vanilla SD is basically a jack of all trades and master of none, and it's up to you (or other people) to focus it on a specific task or style.
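
    In the web UI, a downloaded checkpoint just goes into models/Stable-diffusion and gets picked from the checkpoint dropdown. For the script-inclined, recent versions of the diffusers library can also load such single-file .safetensors checkpoints directly; a sketch, with a placeholder file name:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a community checkpoint downloaded from CivitAI (placeholder file name).
    pipe = StableDiffusionPipeline.from_single_file(
        "realisticVision.safetensors", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("photo of a man reading in a cafe, 85mm, natural light").images[0]
    image.save("realistic.png")
    ```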

  • @L337f33t
    @L337f33t 1 year ago +9

    I set mine up months ago and still haven't used all the features yet! It's a neat thing and I can't wait to see where it goes. Edit: How did you get it to use both GPUs? From everything I read while setting mine up, you couldn't use two at the same time.

    • @L337f33t
      @L337f33t 1 year ago +1

      Has anyone figured out how he got both of them to work at the same time?

  • @neon_Nomad
    @neon_Nomad 1 year ago +3

    Uses stable diffusion, adopts cat,
    life is good

  • @swyftty2
    @swyftty2 1 year ago +2

    You probably need libraries of each concept you wish to merge yourself with, i.e. a Star Trek library plus the specific characters you want to look like, or the background you want it to link with. Also more face pics from additional angles; yours were all a little flat-on. The more libraries, the more you can mesh... but I haven't done it before. Let me know if this turns out to be true.

  • @dionelr
    @dionelr 1 year ago +3

    Is “financial advisor” referring to “Mrs Craft Computing”?

    • @fierce134
      @fierce134 1 year ago

      Lol, exactly what I was thinking

  • @fuzzbawls6698
    @fuzzbawls6698 1 year ago +1

    With 24GB of VRAM, you can greatly speed up your image generation by increasing the "Batch Size" rather than "Batch Count". Batch Count is how many times you want to loop through the prompt in series; Batch Size is how many images you want to generate in parallel. The higher the Batch Size, the more VRAM is required, but it is generally more efficient than only using ~1% of your VRAM 30 times over in a loop one by one.
    Also, use Xformers to increase efficiency even further!

    • @tylerwatt12
      @tylerwatt12 1 year ago +1

      VRAM mainly helps with resolution. My 3080 couldn't handle resolutions higher than 1024, but then again you run into issues where faces repeat if you go above 512px

    • @fuzzbawls6698
      @fuzzbawls6698 1 year ago +2

      @@tylerwatt12 Hires Fix will sort out most of the repeating/tiling issues when targeting an image resolution larger than what a model was trained at
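
    The Batch Size vs. Batch Count distinction above applies outside the web UI too: generating N images in one parallel batch uses more VRAM but less wall-clock time than N sequential calls. A rough diffusers sketch, assuming a large-VRAM card and the optional xformers package:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.enable_xformers_memory_efficient_attention()  # the xformers tip above

    prompt = "oil painting of a lighthouse in a storm"

    # "Batch Size": 8 images generated in parallel in a single call (more VRAM).
    parallel = pipe(prompt, num_images_per_prompt=8).images

    # "Batch Count": 8 images generated one by one in a loop (less VRAM, slower).
    serial = [pipe(prompt).images[0] for _ in range(8)]
    ```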

  • @WillFuI
    @WillFuI 1 year ago +1

    Thanks for the reminder, I'm going to go change my oil. Hope it won't take all day tomorrow so I can watch Talking Heads.

  • @xero110
    @xero110 1 year ago +4

    This is an awesome demo and guide. Thanks very much! I will be playing around with this over the weekend. Hopefully I can figure out how to merge different training models to fine tune the results I'm looking for.

  • @KissesLoveKawaii
    @KissesLoveKawaii 1 year ago +6

    Normal models are not cutting it anymore; custom models and merges have been the meta for a few months now. And if you're training only one person, a simple embedding/LoRA will suffice and is FAR more malleable, since it can be applied to any custom model.

    • @CraftComputing
      @CraftComputing  1 year ago +7

      I'm 100% new to the space, and said as such at the beginning of the video. This was meant as an introductory tutorial to get started. The sky is definitely the limit when it comes to configuration, models, etc.

    • @L337f33t
      @L337f33t 1 year ago

      @@CraftComputing How did you get the GPUs to work in parallel? I have two 1070s and can only use one at a time.

    • @adrianli7757
      @adrianli7757 1 year ago

      Got any more tips for creating your own embedding/lora?
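
      For the question above about creating your own embedding/LoRA: whichever way you train one, the output is a small file that loads on top of any base checkpoint, which is exactly why it's so malleable. A rough diffusers sketch; the file names and the <my-face> token are placeholders for whatever you trained:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # Any checkpoint, vanilla or custom, can serve as the base...
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # ...and the same trained LoRA / textual-inversion embedding layers on top.
      pipe.load_lora_weights("loras", weight_name="my_face_lora.safetensors")
      pipe.load_textual_inversion("my_face_embedding.pt", token="<my-face>")

      image = pipe("portrait of <my-face> as a starship captain").images[0]
      image.save("captain.png")
      ```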

  • @tylerwatt12
    @tylerwatt12 1 year ago +1

    So the more oddly specific the prompt is, the better the output image. I tried things like "tyler driving a racecar" and it was always just a photo of me. But when I "dilute" the power of my name with a bunch of extra words, it puts less emphasis on getting the "tyler" part of the picture right. So instead try "tyler driving a yellow racecar on a nascar track at night".
    It also helps to tell it to make an image "in the style of watercolor". Stable Diffusion seems to have better luck creating good art than good photorealistic images. The eyes and face are the worst part: always lazy eyes, giant foreheads, etc.

    • @Trainguyrom
      @Trainguyrom 1 year ago

      I learned something similar when I started using a prompt-generator plugin. The prompts looked like garbage but the output was way better. Pretty soon I'll just automate myself out of the image generation process, because the AI is way better at it than me 🙃
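
    An easy way to see the "dilution" effect described above is to hold the seed constant and vary only the prompt, so any composition change comes purely from the extra words. A quick diffusers sketch; "tyler" stands in for whatever subject token you trained:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompts = [
        "tyler driving a racecar",                                    # sparse: the face dominates
        "tyler driving a yellow racecar on a nascar track at night",  # diluted: more scene
    ]
    for i, prompt in enumerate(prompts):
        gen = torch.Generator("cuda").manual_seed(42)  # same seed for a fair comparison
        pipe(prompt, generator=gen).images[0].save(f"compare_{i}.png")
    ```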

  • @brandonheath6713
    @brandonheath6713 1 year ago +2

    Why does my UI look so different from yours? For instance, when I install Dreambooth I don't get a Dreambooth tab; I get a "Train" tab that doesn't have the options you have.

  • @hevenzgaming
    @hevenzgaming 1 year ago +4

    Ugh, I saw those GPUs and had to reassure my little 1050 Ti that we're still good.

    • @delsings
      @delsings 1 year ago +2

      Hahaha, I have one of those too. Lil engine that could!

    • @hevenzgaming
      @hevenzgaming 1 year ago +2

      @@delsings 🤣

  • @dustinphillips605
    @dustinphillips605 1 year ago +3

    Thanks for the video. This was much easier than I assumed it would be. This will be a lot of fun to play with in conjunction with an online DnD session I recently started with friends.

  • @geeklukeg
    @geeklukeg 1 year ago +1

    Disappointed that we did not get Klingon Jeff, crazy awesome video.

  • @Prophes0r
    @Prophes0r 1 year ago +2

    One of those original "Jeff from Craft Computing" images is VERY close to your facial features and bone structure.
    Close enough that it is a totally believable "This is me from my early college days" picture.

    • @CraftComputing
      @CraftComputing  1 year ago +4

      So does Stable Diffusion know who I am, or do I look way more like the average Tech Guy than I'd like to admit?

    • @Prophes0r
      @Prophes0r 1 year ago +3

      @@CraftComputing I feel like that is a question for a statistician to answer.
      I honestly don't know what ratio of light skin + red/brown hair + glasses + beard should be expected from those keywords, but the results you got feel rather on the high side.
      To be clear, I'm talking about the one with the pink background at 12:26.
      It isn't perfect, but structurally it is eerily close. Maybe a "this is me as a chunky high schooler, before I could grow a mustache" pic.

  • @TechnoTim
    @TechnoTim 1 year ago +2

    Awesome video Jeff!

  • @gamingthunder6305
    @gamingthunder6305 1 year ago +5

    For better results you want to take at least 100 images of your face, and also of your entire body in different poses. Selfies will only produce selfies.

    • @tylerwatt12
      @tylerwatt12 1 year ago

      Yep. I had a bunch of mirror selfies of me holding a phone, and a bunch taken with my ex. So the AI very easily made me holding something, like a chocolate bar, and any time there was a second person in the photo, it was always my ex.

  • @codigoBinario01
    @codigoBinario01 11 months ago +1

    Thanks for the video! I'm looking for cheap GPUs with lots of VRAM (>16 GB even for a small model) to fine-tune Llama 2 and other LLMs, and that's how I arrived at your channel, looking for an M40.
    My main concern is compute power: I have seen tests with the 3090 and 4090. Have you tested any large model to see if the cores can handle these new NN models?
    Thanks in advance ;-)
    (Great channel by the way; fun stuff that I have added to my watch list)

  • @_shadow_1
    @_shadow_1 1 year ago +1

    So if I wanted to use an image or a basic drawing directly as an input, together with a prompt, just like the "diffuse the rest" variation, how would I go about this? I have found that being able to pick options and re-input them, with a text prompt I can actively change as needed, is a pretty effective way to vastly increase the quality of the images. It also means I don't need to pre-train the model nearly as much, if at all in some cases. And being a human filter is kind of cool, because you get to learn how those algorithms work and can engineer some amazing stuff using that knowledge.
    Maybe someday someone can build an algorithm that uses dynamic prompt switching in an intelligent way to make amazing, original pieces of art that are unique and beautiful.
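
    What this comment describes is img2img: the A1111 web UI has a dedicated tab for it, and in the diffusers library it's a separate pipeline that uses your drawing as the starting point. A minimal sketch; "sketch.png" is a placeholder input, and strength controls how far the model may wander from it:

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("sketch.png").convert("RGB").resize((512, 512))

    # strength near 0.0 barely changes the input; near 1.0 the prompt takes over.
    image = pipe(
        prompt="fantasy castle on a cliff, detailed digital painting",
        image=init,
        strength=0.6,
        guidance_scale=7.5,
    ).images[0]
    image.save("painted.png")
    ```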

  • @sergeykudryashov9201
    @sergeykudryashov9201 1 year ago

    I get an error when I click Train to start processing images: Exception training model: ''NoneType' object is not subscriptable'.
    What should I do?

  • @dadogwitdabignose
    @dadogwitdabignose 1 year ago +2

    16:25 Am I having a stroke, or did he do the same thing twice?

    • @thevaldis1167
      @thevaldis1167 1 year ago

      That's what I was thinking, and I was having second thoughts about whether I'd missed something. Is that the directory where the output will be stored?

  • @marcusmeins1839
    @marcusmeins1839 7 months ago

    The Dreambooth extension of my Stable Diffusion doesn't have Input, nor the Scheduler. Instead of Scheduler, it says Resources.

  • @RobertJene
    @RobertJene 1 year ago

    15:46 Oh, you're using Stable Diffusion in Firefox.
    I instinctively did the same when I set mine up.
    We must like stability and lower memory overhead, or something like that.

  • @Dannicus117
    @Dannicus117 1 year ago +1

    Upvoting for algo, awesome video

  • @gghosty1326
    @gghosty1326 4 months ago

    Help, it keeps saying "create or select a model first" when I press the Train button 😅

  • @timcunkle4508
    @timcunkle4508 1 year ago +3

    GPT-NEOX 20B next, right? Right???

  • @stopspyingonme9210
    @stopspyingonme9210 1 year ago +1

    I honestly have a genius use for this. Maybe not genius, but what if I set up a shop with one of these servers attached to a poster printer? Imagine a booth in the mall where you could walk out with a one-of-a-kind poster. I quite love that idea, and mall rent might be cheap ATM.

  • @FelixTheAnimator
    @FelixTheAnimator 1 year ago +1

    Is there a *blank* untrained Stable Diffusion that *doesn't* include other people's art & IP? IMHO it should *not* know who Yoda or Mario are.

  • @nochan6248
    @nochan6248 1 year ago

    PROBLEM HERE
    Managed to train a face.
    Tried to train a second face, but after I press Create Model in the Dreambooth tab it gives this error: Missing model directory, removing model: F:\ai\stable-diffusion-webui\models\dreambooth\miha\working\vae

    • @nochan6248
      @nochan6248 1 year ago

      Seems deleting the initial face model helped create the new face model. Kind of annoying.

  • @bamnjphoto
    @bamnjphoto 1 year ago +1

    Thanks, this was right on time. I'd just installed it and needed this tutorial.

  • @AS-bm3sk
    @AS-bm3sk 9 months ago

    I followed your steps, but when I get to the Dreambooth section I get Settings (which contains Models, Concepts, and Parameters) and then Output, with no Input tab. I was wondering if anyone else has experienced this, or if I've made some sort of mistake, or maybe I'm just not using the same thing.

  • @UnwantedSelf
    @UnwantedSelf 1 year ago +3

    Your CUDA graph would have been pinned at 100%

  • @jonathaningram8157
    @jonathaningram8157 1 year ago

    I don't get it. I do exactly that, but in the end when I enter my prompt, literally nothing happens. It's as if the model didn't change at all. I'm wondering if I enabled an option that doesn't work.

  • @TheBinklemNetwork
    @TheBinklemNetwork 1 year ago +1

    Perhaps if you trained with pictures of yourself from afar as well as at "medium" distance, there might be more variety in how you get rendered. That felt weird to type...

  • @heclanet
    @heclanet 1 year ago

    You are thaaa best!
    Let's be honest, a couple of those pictures were of Jeff on drugs!
    Greetings from Paraguay

  • @Dorff_Meister
    @Dorff_Meister 1 year ago

    I used Chocolatey to install git and python; this is now my preferred way of installing/updating lots of Windows software.

  • @Rewe4life
    @Rewe4life 5 months ago

    Heyho,
    I have built myself a server for AI.
    Well… at least I tried. 😅
    When I push the power button, nothing happens. No CPU LED, no RAM LED, no beep, no fan spinning, nothing. 😢
    I have tried without RAM and GPU: same thing.
    I have tried a different power supply: same behavior.
    I have swapped the BIOS battery for a new one and disconnected everything except the CPU and its fan. Nothing helps.
    Do you have any idea what I might be missing and what might help?
    My motherboard: Asus P9X79 WS
    CPU: Xeon E5-2695 v2

  • @gustersongusterson4120
    @gustersongusterson4120 1 year ago

    Mario with a gun made my day

  • @8bitbrainz
    @8bitbrainz 1 year ago

    Hello, you forgot to put the links in the video description .-.

  • @Isaac-X113
    @Isaac-X113 1 year ago

    Trying to run it on Linux; it says it's running, but I can't get to the web UI.

  • @xxexexex
    @xxexexex 1 year ago

    If you wear glasses only sometimes, should all of your images be of you wearing glasses, or can you have some without them?

  • @fractanimal2527
    @fractanimal2527 6 months ago

    Hey man, thanks for the vid. I love it. I have a quick question, please.
    I followed your steps, and the model just creates images identical to the ones I uploaded. I've managed to get some good results with img2img, but txt2img just replicates the images I used to create the person model.
    I tried merging it with some other checkpoints, but then the face loses its character, and finding a balance between an accurate face and another model, at all the different ratios, hasn't proven successful.
    Do you have any advice/tips?

  • @JimmytheCow2000
    @JimmytheCow2000 1 year ago

    Hi Charlie!!! I love you, buddy! Welcome to the channel!

  • @michaelrichardson8467
    @michaelrichardson8467 1 year ago +1

    CCJeff is more like Crafty Meth Tips 😆

  • @OBERHighCommand
    @OBERHighCommand 1 year ago

    Now, would this work for achieving a particular style? Such as a photographer training on portraits of various people to model their color grade and lighting style?

  • @TheTrulyInsane
    @TheTrulyInsane 1 year ago +1

    Honestly, I was thinking about setting this up. After seeing the results, I'll wait a few years.

    • @CraftComputing
      @CraftComputing  1 year ago +5

      Remember, I was generating less than 5 at a time, and didn't try refining my prompts. I've gotten some FANTASTIC images playing around with it this week.

    • @AlexTheStampede
      @AlexTheStampede 1 year ago +8

      Jeff has just started, and as such he's terrible at writing prompts. See the blank text field below the prompt? That's the negative prompt, or in other words what NOT to do. Stuff like "bad anatomy, low quality, cropped, out of frame, extra limbs, extra fingers, missing limbs, missing fingers, bad face, bad mouth, perfect skin" and so on. His prompts suffered; a better result would've come from something along the lines of "portrait of CCJeff in a Star Trek uniform on the Enterprise bridge, highly detailed, detailed background, intricate detail".
      And then he didn't touch any of the settings! 20 steps is fast but not great, and Euler a works fine but is meh. I would've tried this: SDE 2M Karras, 20 steps; then, after finding one that looks good, I'd reuse the seed (the green recycle icon), toggle face restoration, and run it again. Is it better? Possibly. Does Euler a give a better one? There's also the sampler right after 2M Karras that is worth checking. Once I have the best one, I turn on the high-resolution fix, pick the R-ESRGAN 4x+ upscaler, set my steps up to, let's say, 60, and let it churn through the data. I end up with a 1024x1024 picture with a ton of extra detail. Lovely!
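
      For anyone scripting this workflow instead of clicking through the UI: swapping samplers and re-running a fixed seed looks roughly like this in diffusers. The Karras-sigma DPM-Solver++ scheduler is the library's closest analogue of the web UI's 2M Karras sampler; treat the exact mapping as an assumption:

      ```python
      import torch
      from diffusers import (StableDiffusionPipeline,
                             EulerAncestralDiscreteScheduler,
                             DPMSolverMultistepScheduler)

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      prompt = ("portrait of a man in a Star Trek uniform on the Enterprise "
                "bridge, highly detailed, detailed background, intricate detail")
      negative = "bad anatomy, low quality, cropped, extra limbs, bad face"

      samplers = {
          "euler_a": EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config),
          "dpmpp_2m_karras": DPMSolverMultistepScheduler.from_config(
              pipe.scheduler.config, use_karras_sigmas=True),
      }
      for name, sched in samplers.items():
          pipe.scheduler = sched
          gen = torch.Generator("cuda").manual_seed(7)  # same seed, fair comparison
          pipe(prompt, negative_prompt=negative, num_inference_steps=20,
               generator=gen).images[0].save(f"{name}.png")
      ```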

  • @ewookiis
    @ewookiis 1 year ago +1

    I can vouch for the P8 and M40 doing a good job :)

  • @WilfEsme
    @WilfEsme 1 year ago

    I'm using BlueWillow with references as well, but I think I'll stick with my privacy and use public photos instead.

  • @dragodin
    @dragodin 1 year ago

    Thanks Jeff, this is exactly what I needed! Is there a way to get better-looking results, though? I've been spoiled by Midjourney, and anything less looks plain bad. OpenJourney maybe?

  • @dadogwitdabignose
    @dadogwitdabignose 1 year ago

    20:53 That is utterly terrifying

  • @gamingwithsparton
    @gamingwithsparton 1 year ago

    I'd love to see a video on using the stable diffusion video/animation extensions. I've been trying to figure that out myself but it's been a bit difficult.

  • @Miyconst
    @Miyconst 1 year ago +1

    Like for the cat!

  • @DIYDaveOK
    @DIYDaveOK 1 year ago +1

    The model's preoccupation with bizarre teeth is...curious.

  • @danney777
    @danney777 1 year ago

    Could this be set up with multiple AMD GPUs? I happen to have several spare RX 580s, and this would be a fun test-lab experiment to try.
    EDIT: Additionally, I'm totally aware that AMD will be less efficient. Spare AMD cards are all I have on hand, I don't feel like buying more hardware at the moment, and it's for fun and learning, so efficiency isn't my primary concern ATM.

  • @mpxz999
    @mpxz999 1 year ago

    I'M CRYING LOL
    This is incredible!
    Omg this program makes some of the funniest stuff AHhhhhhhHHHHHH

  • @grasshopper1g
    @grasshopper1g 1 year ago +1

    Let's do it!

  • @michealwood3663
    @michealwood3663 1 year ago

    Got all the way to hitting the Train button and it tells me to install xformers... did I do something wrong?

  • @novantha1
    @novantha1 1 year ago

    You know, I wonder if at a certain point AI art just becomes... art? You can throw a basic prompt in, but you won't get a high-quality work out of it. You have to adjust the specific model you're using, potentially use stylistic or concept-focused LoRAs, Dreambooth fine-tuning, and framing techniques such as ControlNet, plus some genuinely artistic solutions like combining a ControlNet wireframe with specific 3D models to govern the more precise elements of the final img2img output (reminiscent of how Monty Oum managed the cel shading in early RWBY). You need to manage color palettes, and prompt creation isn't a "one and done" thing; it's a range of options that requires some understanding of how elements interact with one another.
    But that's today. I suspect the generators of maybe not tomorrow, but of the next decade, will be remarkably easier to use, and much more influential.

  • @jpconstantineau
    @jpconstantineau 1 year ago

    lol Star Trek Jeff. A new definition for Red Shirt Jeff. Watch out!

  • @MadsonOnTheWeb
    @MadsonOnTheWeb 1 year ago +1

    Great! Not that hard. Would it be the same for AMD?

  • @whosscruffylookin95
    @whosscruffylookin95 1 year ago +1

    Bald Sisko-Jeff is the stuff of nightmares

    • @CraftComputing
      @CraftComputing  1 year ago +2

      There is so much nightmare fuel in this video. And you only got to see the ones that made the video.

  • @TerminalWorld
    @TerminalWorld 1 year ago +1

    Nice guide, but the generated images are disappointing.
    Is there any way of getting better ones? (Not talking about the self-face-generation case.)

    • @DodaGarcia
      @DodaGarcia 1 year ago

      Yes: play with the CFG scale, and definitely use more than 20 sampling steps.

  • @UrSoMeanBoss
    @UrSoMeanBoss 1 year ago

    Still waiting on support for AMD + Windows... :(
    I used it a ton before upgrading my GPU, and I miss it.

  • @Reggieincontrol
    @Reggieincontrol 1 year ago

    How do we get the dark SD UI? When I installed mine it was white.

  • @metalmanexetreme
    @metalmanexetreme 1 year ago

    Can you do a video on setting up Pygmalion (a self-hosted LLM)?

  • @willfancher9775
    @willfancher9775 1 year ago

    For some reason I really enjoy the fact that it briefly became obsessed with giving you absolutely nightmarish teeth.

  • @ianvszo
    @ianvszo 1 year ago

    Could you make a video about Colossal-AI? I want to see how to use it and stuff.

  • @joediliberto6244
    @joediliberto6244 7 months ago +1

    Welp. It's been a year now and the GUI for Dreambooth has completely changed, so I can't follow this video past ~14 minutes in, after the Dreambooth installation.

  • @bdhaliwal24
    @bdhaliwal24 1 year ago

    Jeff, thanks for another informative and entertaining video!!

  • @gamingthunder6305
    @gamingthunder6305 1 year ago

    Don't use an M40. I've had nothing but problems with that card, and even the P40 I have has issues with some extensions, particularly LoRA training.

  • @arthuralford
    @arthuralford 1 year ago

    Cat created delays are always understandable. Charlie and Rambo will work things out. Someday.

  • @matthewmaca6675
    @matthewmaca6675 1 year ago

    Very cool, got it working in like 10 minutes

  • @delsings
    @delsings 1 year ago +1

    I'm an artist who had to pause my business in mid-2018 due to physical issues, and I've been trying to wrap my brain around making an AI learn my styles. Is that possible with this software, or is it only for learning faces? Really cool video!

    • @AlexTheStampede
      @AlexTheStampede 1 year ago +1

      I don't know how, but training on a style is very much possible. I downloaded a few models that do wonders: one is in the style of the Star Wars CGI cartoon and can be scary good, one is based on the Simpsons and is more miss than hit, and another I really like is based on a specific artist and usually gives me very good-looking anime pictures clearly inspired by that style. I've seen various ways to do it: a checkpoint, a textual inversion, a hypernetwork, and another one I forgot. The checkpoint has a downside: it's trained on a specific set and that's it. The others, however, are applied on top of whatever checkpoint you are using, so they can be more flexible, but results may vary wildly.

    • @delsings
      @delsings 1 year ago

      @@AlexTheStampede Yeah, I create in a few different styles myself; I'm just trying to figure out what to use so I can configure each of them. I've been drawing since I was a kid, and professionally since 2010 (I'm an '80s baby). The way you are describing the different style sets is my goal, yes.
      Edited a typo

    • @pablito5927
      @pablito5927 1 year ago +2

      @@delsings I recommend training it with as many of your drawings as possible, and maybe adding some from the internet in the same style. If you really want to make something good, try training it on tons of art and then giving your style a really high weight, so everything you generate can fill in gaps it didn't learn from your drawings but still look like yours, if that makes sense.

    • @delsings
      @delsings 1 year ago

      @@pablito5927 Ty for the advice, I appreciate it. For this particular project I intend to use only my art, but I do work in many different styles and plan to upload them in categorized batches to differentiate, even including unfinished sketches and doodles of different categories. However, I don't intend to use anyone else's art, since this is meant to be a personal helper for my art business, and I want to avoid plagiarizing other artists as much as possible from my input end (I understand it isn't a perfectly moral system, since it was likely developed from swaths of online content, but I don't intend to add to that on my end). From before my business alone (pre-2010) I'd easily have at least a thousand samples to input, as I will also be categorizing my early years into it, aka children's drawings/teen/young adult, etc. I actually started my business in my upper 20s. :) Since my business started, I've built a huge portfolio of work, so I'm really hopeful about it. Just gotta find the right software for what I need. This is something I've spent way too much time ruminating on, tbh.

    • @FelixTheAnimator
      @FelixTheAnimator 1 year ago +1

      I'm in the same boat, really. And I don't want to use anyone else's art, at least no one living.

  • @eyturkuyann
    @eyturkuyann 1 year ago

    Can we add models and use them with DreamShaper?

  • @csharpner
    @csharpner 1 year ago

    You probably know by now, but you need to crank up the sampling steps to get better results.

  • @SUB1KOIRA
    @SUB1KOIRA 9 months ago

    This is probably outdated or something, since almost every step was different in my web UI, making it impossible to finish.

  • @shodan2002
    @shodan2002 1 year ago

    How big is Dreambooth? What's the download size?

  • @reLi70
    @reLi70 1 year ago

    Sorry for the probably stupid question, but I'm really new to enterprise GPUs. Do you need licenses from NVIDIA in order to use them?

    • @Arachnoid_of_the_underverse
      @Arachnoid_of_the_underverse 1 year ago +1

      No, but Jeff doesn't like servers that require licenses for home use anyway.

    • @CraftComputing
      @CraftComputing  1 year ago +1

      No licenses are needed for GPUs to run on bare metal like this. Virtualizing CUDA for use in cloud gaming / rendering is another story.

  • @gmfPimp
    @gmfPimp 8 months ago

    As the author doesn't seem to respond to posts (which is one reason I haven't subscribed), has anyone else gotten this to work? I don't have the same options as he shows in the video, and while I have managed to translate the steps, my model NEVER looks right. One thing I can point out is that the images look almost good while the prompt is running, but right at the end something changes the output to look like a transgender woman with orange skin. I have worked hard to get images that basically match the poses in the video, but I still can't get it to work. I'm seriously wondering if something else is missing from the installation. As I'm growing tired of out-of-date or presumptive tutorials, when I start making my own, they will be linked to GitHub so those interested can use the same version I used when creating them.

  • @SP-ny1fk
    @SP-ny1fk 1 year ago

    22:37 is an OK one

  • @jordondavidson3405
    @jordondavidson3405 1 year ago +1

    Cool stuff, Jeff! For the record, Euler (14:23) is pronounced "Oiler" (/ˈɔɪlər/ OY-lər). It's named after Leonhard Euler, the Swiss mathematician.

    • @CraftComputing
      @CraftComputing  1 year ago

      Well then Leonhard should have pronounced it right :-P

    • @jordondavidson3405
      @jordondavidson3405 1 year ago

      @@CraftComputing For sure! Back in university we nearly named the math society's hockey team the "Eulers", knowing full well nobody else would get the joke.

  • @Thewickedjon
    @Thewickedjon 1 year ago

    Nintendo's lawyers are coming for you

  • @Catge
    @Catge 1 year ago

    Unexpected cats are my favorite

  • @elvisbeststuff
    @elvisbeststuff 1 year ago

    I laughed so hard looking at the AI photos of your face, Jeff! The Super Mario ones were my favorite though!

  • @YamiFrankc
    @YamiFrankc 1 year ago

    Can this be used without a GPU? I have a spare R710 with no GPU and I'm wondering if it can be used for this.

    • @playlist5455
      @playlist5455 1 year ago +1

      You can run it on the CPU, but it is many times slower (minutes per image compared to ten-ish seconds per image)

    • @YamiFrankc
      @YamiFrankc 1 year ago

      @@playlist5455 thx
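
      On the CPU-vs-GPU point above: the same pipeline runs on CPU just by changing the device, at the cost of float32 weights and far longer generation times. A sketch only; timings vary enormously with hardware:

      ```python
      from diffusers import StableDiffusionPipeline

      # On CPU, stay in float32; half precision is poorly supported there.
      pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
      pipe.to("cpu")  # swap in "cuda" here if a GPU is available

      # Expect minutes per image on CPU vs. seconds on a modern GPU.
      image = pipe("watercolor of a cat on a server rack",
                   num_inference_steps=20).images[0]
      image.save("cpu_cat.png")
      ```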

  • @pragavirtual
    @pragavirtual 1 year ago

    The plan is working. Not that it gives good results, but oh well, it's working...

  • @zepesh
    @zepesh 1 year ago +1

    This stuff is a rabbit hole and evolves so fast that it's hard to keep up

    • @neon_Nomad
      @neon_Nomad 1 year ago

      Time for the neuralink upgrade amiright

    • @Prophes0r
      @Prophes0r 1 year ago +1

      I've been out of the ML game for almost a year and a half at this point.
      (Illness, combined with me giving away the 4x 1660s from my server as Christmas gifts back when you literally couldn't buy GPUs.)
      In less than two years, I literally don't even recognize the stuff people are doing anymore. It is wild.
      The craziest thing I've seen so far, more so than even ChatGPT and Stable Diffusion, is the temporally stable "style filter" for REAL-TIME video.
      The demo I saw was able to take low-resolution footage from GTA 5 and convert it into what looked like live footage from an alternate-universe LA.
      And it was fast enough to do it in real time.

  • @dewvey1
    @dewvey1 1 year ago

    How do you do this on a Mac?

  • @Theinnersearcher
    @Theinnersearcher 7 months ago +1

    My Stable Diffusion doesn't look anything like this... GOD DAMN IT!

  • @thafex2061
    @thafex2061 1 year ago

    You have to increase the number of steps for much better results

  • @AG-ur1lj
    @AG-ur1lj 1 month ago

    Bro, if you've got extra GPUs cluttering up your space, feel free to send 'em my way. I've been dying to get my hands on enough power to test out my math & coding.

  • @Pheatrix
    @Pheatrix 1 year ago

    Why aren't you using k8s to run Stable Diffusion?
    And I'm not only asking this because I want a tutorial on setting up k8s ;)

    • @Trainguyrom
      @Trainguyrom 1 year ago

      I too would appreciate a tutorial on K8s disguised as a stable diffusion tutorial!

  • @hacked2123
    @hacked2123 1 year ago

    ROFL. ET in Jeff's hair there