Train a Stable Diffusion Model Based on Your Own Art Style

  • Published 27 Jan 2025

COMMENTS • 116

  • @RektCryptoNews
    @RektCryptoNews 1 year ago +15

    Honestly, this was perhaps the most direct and easy-to-follow tutorial on creating an SD style... thanks and great vid!

  • @latent-broadcasting
    @latent-broadcasting 1 year ago +8

    Thanks so much! I made it, I really made it thanks to you. Trained a model with 39 images, took more than an hour but the results are amazing. I'm so happy!

    • @wizzitygen
      @wizzitygen 1 year ago

      Congrats! So happy you found the video useful.

  • @coulterjb22
    @coulterjb22 1 year ago +3

    Making your own Lora really looks like the way to go for targeted results. Thank you for this.

  • @INVICTUSSOLIS
    @INVICTUSSOLIS 1 year ago +1

    For the first time with youtube tutorials, I understood everything, thank you.

    • @wizzitygen
      @wizzitygen 1 year ago

      Glad you found the video helpful.

  • @RogalevE
    @RogalevE 1 year ago +2

    This is a very straightforward and easy tutorial. Thanks!

  • @lorenzmeier754
    @lorenzmeier754 1 year ago +5

    Straightforward, easy to follow, worked smoothly. Thanks for the tut!

    • @wizzitygen
      @wizzitygen 1 year ago

      Thanks for the feedback. Glad you enjoyed!

  • @MacShrike
    @MacShrike 1 year ago +2

    Thank you, good video.
    As for your art: I really like that guy/person with the phones showing him/her/it. Really good.

  • @techarchsefa
    @techarchsefa 7 months ago

    The clearest demo I've ever seen.

  • @ekansago
    @ekansago 11 months ago +2

    Thank you for the video. I have a question: I cannot find the file "model.ckpt" in my Google Drive. I've checked my entire Google Drive several times. Where could it be?

  • @DanielSchweinert
    @DanielSchweinert 1 year ago +2

    Thanks, straight to the point.

  • @eli-shulga
    @eli-shulga 1 year ago +1

    Man I was looking exactly for this thank you thank you thank you!!

  • @calebweintraub1
    @calebweintraub1 1 year ago +1

    Thank you! This is well done and helpful.

  • @willemcramer8951
    @willemcramer8951 1 year ago +2

    beautiful background music :-)

    • @wizzitygen
      @wizzitygen 1 year ago +1

      The music is from audiio.com :)

  • @VulcanDoodie
    @VulcanDoodie 1 year ago +1

    Very clear and easy to follow. Google stopped giving permission to use SD on Colab though; I don't know what that means. They warned me twice today.

    • @wizzitygen
      @wizzitygen 1 year ago

      Thanks for the kind words. I think Google blocked only the free tier.

  • @minigabiworld
    @minigabiworld 1 year ago +1

    Thank you so much for this video, will give it a try 🙏

    • @wizzitygen
      @wizzitygen 1 year ago

      Thanks for your message! Best of luck!

  • @almoniruzzaman584
    @almoniruzzaman584 7 months ago

    best video ever!
    Easy to follow

  • @jvangeene
    @jvangeene 5 months ago

    This is a great tutorial. As I am new to this, I was wondering if the trained model could also be used to transfer style. So, for instance, can I train the model on my own art style, then take a picture (e.g. a photo of a face) and prompt the model to create a new image of the face in my art style?
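For the style-transfer question above, the usual route is img2img: once the style model is trained, you load it and pass the photo in as the starting image. Below is a minimal sketch with the diffusers library, not the exact workflow from the video; the model path and the "mystyle" token are placeholders, and older diffusers versions name the image argument `init_image` instead of `image`.

```python
# Hypothetical img2img sketch: restyle an existing photo with a model trained on your own art.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/your-trained-model",   # local diffusers folder or a Hub repo id (placeholder)
    torch_dtype=torch.float16,
).to("cuda")

photo = Image.open("face_photo.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="portrait of a person in mystyle",  # "mystyle" stands in for your trained style token
    image=photo,
    strength=0.6,        # 0 = keep the photo as-is, 1 = ignore it; tune for how stylized you want it
    guidance_scale=7.5,
).images[0]
result.save("face_in_my_style.png")
```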

  • @sameer01bera
    @sameer01bera 11 months ago +1

    Does this model work as image-to-image? I hope to get your reply, or a link to an example if possible.

  • @ConorBroderick-p2m
    @ConorBroderick-p2m 11 months ago

    This was a great video and the Colab workflow was very easy to follow. I wanted to create a model based on my own art style. This notebook worked perfectly up to the very end: I was able to generate the sample images and also generate images using my own prompt, but after checking my Drive folder the checkpoint was never saved to the AI_PICS/models folder. All permissions were given to Colab regarding access to my Drive account. So I used my other Google account, which has 14 GB of available space, built another model, and again the safetensors file did not appear in the Drive folder. Has anyone else experienced this problem?

  • @madrooky1398
    @madrooky1398 1 year ago +3

    Actually, it is better to train a model with the largest-sized pictures your GPU can handle. Of course, if you take the base model that was purely trained on 512x you will have issues in the beginning, but just take a custom model that has been trained with larger pictures.
    The obvious advantage is the level of detail. It might be that a certain art style does not require much detail, but some do, and 512x pics simply can't carry much detail, and being limited to 512x output is another strain.
    I was actually surprised myself not so long ago how well a 512x model can handle larger sizes, and it basically rendered all my prepared sets useless because it is such a big difference in quality only going up to 768x. But I intentionally do not use a specific AR, because I figured if I do that I will limit the model to this AR, so I use the number of pixels my GPU can handle and try to input as much variety as possible, and I have seen the duplication issues decrease since doing so. It's basically not happening any more. And what I mean by number of pixels is this: I figured my GPU can handle around 800,000 pixels very well. This could be, for example, an 800x1000 picture or a 2000x400 one. You see, it does not really matter what format; the maximum is the total number of pixels. A model just needs to learn a few examples of different formats for the subjects so it will not start duplicating things on the image grid.
    I am not certain, however, how large the dataset must be to expand a model's capability in that regard, since I start my own models from merges I make out of other custom models in an attempt to get the best base for my own ideas. And the base model is actually not trained very well; if you see some of the dataset and how the images have been described, it is no wonder that things are very often deformed, because there was simply no real focus on proper image descriptions. And that is also no surprise: if you have worked on that, you know how much effort it takes even for smaller datasets.

    • @wizzitygen
      @wizzitygen 1 year ago

      I haven't experimented with resolutions other than 512 yet; perhaps once there are more models that do so, I will give it a try.

    • @madrooky1398
      @madrooky1398 1 year ago +1

      @@wizzitygen There are many models; I would actually assume most of the high-quality models on Civitai were trained with larger sizes. And many of them are also based on merges. I would rather say that by now it is hard to find a model that was not at some point trained with larger sizes.
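The pixel-budget idea in the thread above (pick the largest size a given GPU can handle rather than one fixed aspect ratio) can be written as a small helper. This is a rough sketch with a hypothetical function name; the 800,000-pixel budget is just the figure mentioned above, and rounding to multiples of 64 is a common Stable Diffusion dimension constraint.

```python
import math

def fit_to_pixel_budget(aspect_w: int, aspect_h: int,
                        max_pixels: int = 800_000, multiple: int = 64):
    """Largest (width, height) close to aspect_w:aspect_h with width*height <= max_pixels,
    both rounded down to a multiple of `multiple`."""
    ratio = aspect_w / aspect_h
    height = math.sqrt(max_pixels / ratio)
    width = height * ratio
    # Snap both sides down so the product stays under the budget.
    return (int(width // multiple) * multiple,
            int(height // multiple) * multiple)

print(fit_to_pixel_budget(1, 1))   # square, e.g. (832, 832)
print(fit_to_pixel_budget(4, 5))   # portrait, e.g. (768, 960)
print(fit_to_pixel_budget(5, 1))   # very wide, e.g. (1984, 384)
```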

  • @cuanbutfun3282
    @cuanbutfun3282 1 year ago +1

    These are great. May I know how I can change the base model that I want to merge with the new one?

    • @wizzitygen
      @wizzitygen 1 year ago

      You can find different base models on Hugging Face.

  • @juanmanuelcabrera1062
    @juanmanuelcabrera1062 7 months ago

    Have you tried training with several styles? I've been trying to train with one style, upload the model to Hugging Face, train this model again, and so on. The problem is that by the 5th or 6th training run the first style starts to fail.

  • @yutupedia7351
    @yutupedia7351 1 year ago +1

    pretty cool! ✌

  • @joshuadavis6574
    @joshuadavis6574 1 year ago +1

    I keep getting an error that says "ModuleNotFoundError Traceback (most recent call last)
    in ()
    ModuleNotFoundError: No module named 'diffusers'"
    Can you help me, or can someone explain what this means?
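This ModuleNotFoundError generally means the notebook cell that installs the libraries was skipped, or the Colab runtime was restarted after it ran, so `diffusers` is not present in the current session. A typical fix (the exact versions pinned by the notebook may differ) is to rerun an install cell and then re-import:

```python
# Run in a Colab cell; reinstalls the libraries into the current runtime.
!pip install diffusers transformers accelerate

import diffusers
print(diffusers.__version__)   # confirms the package is importable again
```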

  • @mackjeer4349
    @mackjeer4349 4 months ago

    I tried to upload 30 images, but after some processing it shows: "Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable."
    Please help me 🥺😭

  • @erlouwer
    @erlouwer 6 months ago

    I'm getting the error:
    MessageError: Error: credential propagation was unsuccessful

  • @PranshuJain-m5t
    @PranshuJain-m5t 1 year ago

    Super!

  • @solomani-42
    @solomani-42 1 year ago +1

    Curious: must you use a specific square size for the images? Most of my pictures are different sizes and mostly rectangular.

    • @silveralcid
      @silveralcid 1 year ago +2

      512x512 pixels is the best size to use as Stable Diffusion itself is trained on that size.
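If the source images are rectangles of various sizes, a quick way to prepare them is to center-crop and resize everything to 512x512 before uploading. A small sketch using Pillow; the folder names are placeholders.

```python
from pathlib import Path
from PIL import Image, ImageOps

src, dst = Path("raw_images"), Path("training_512")
dst.mkdir(exist_ok=True)

for path in src.iterdir():
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    img = Image.open(path).convert("RGB")
    # ImageOps.fit center-crops to the target aspect ratio, then resizes.
    img = ImageOps.fit(img, (512, 512), method=Image.LANCZOS)
    img.save(dst / f"{path.stem}.png")
```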

  • @chloesun1873
    @chloesun1873 1 year ago

    Thanks for the tutorial! I wonder how many images to use for the best result; is more better? I first used about 30 images, and later 200 images, but the latter doesn't give a better outcome.

    • @wizzitygen
      @wizzitygen 1 year ago +1

      My understanding is 20-30 images. More than that, you could risk overtraining the model and not getting any benefit or perhaps seeing poorer outcomes.

  • @kamw8860
    @kamw8860 1 year ago +1

    thank you ❤

  • @BirBen-mt9eq
    @BirBen-mt9eq 1 year ago +1

    When making a model, does it become private? No one can use it aside from me, right?

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there, to be completely honest I am not 100% sure. The model is forked from Hugging Face, so I am not sure if your trained model goes elsewhere. The person who created the Colab notebook can be found here: github.com/ShivamShrirao It would be best to ask him.

    • @wizzitygen
      @wizzitygen 1 year ago +5

      Hi there, I decided to write Shivam Shrirao and ask him your question. This was his response (Shivam Shrirao, Sun, Jul 2, 11:17 PM): "It's only available to you."

  • @Queenbeez786
    @Queenbeez786 1 year ago

    Currently in the process of training. I was actually looking for the Styles panel on the right in your locally installed SD. How do you make a style like that?

  • @colinelkington1360
    @colinelkington1360 1 year ago +1

    Would increasing the number of 'input' images into the model improve the accuracy with which the model is able to replicate the art style? Or will you achieve the same results as long as you input roughly 20-30 as mentioned?
    Great tutorial, thank you for your time.

    • @wizzitygen
      @wizzitygen 1 year ago

      My understanding is that 20-30 is the sweet spot. Training with more could result in overtraining and could affect your results. But I would experiment and see what works best for you; if you have more, give it a try and compare.

    • @dadabranding3537
      @dadabranding3537 1 year ago

      Absolutely not. Fewer is better, make sure to choose the best.

  • @Queenbeez786
    @Queenbeez786 1 year ago

    Please upload a tutorial for training a LoRA as well.

  • @YourBrandDead
    @YourBrandDead 6 months ago

    May today's weakness be tomorrow's strength, my boi 🙏🏼

  • @bcraigcraig4796
    @bcraigcraig4796 1 year ago

    Do I need to code? I do not know anything about code, but I do want to use my own artistic style.

  • @lukaszgrub9367
    @lukaszgrub9367 1 year ago

    Hey, the faces and details are not really good in my model. Is it possible to train it further to improve the details, etc.? Or maybe I should use better prompts?

    • @lukaszgrub9367
      @lukaszgrub9367 1 year ago

      My models are "cartoonish drawings" but realistic, although the faces still seem bad after adding negative prompts and using a realistic LoRA. Do you know how to fix this?

  • @timothywestover
    @timothywestover 1 year ago +1

    Did you find that the resolution of each photo mattered? I had heard from others that they needed to be 512x512, but I'm seeing some of yours are varying resolutions. Thanks!

    • @wizzitygen
      @wizzitygen 1 year ago +1

      Hi Tim. Yes, my originals had varying resolutions, but I made them all 512 x 512 before training on them.

    • @timothywestover
      @timothywestover 1 year ago

      @@wizzitygen Got it thanks!

    • @user-jk9zr3sc5h
      @user-jk9zr3sc5h 1 year ago +2

      If it's varying resolutions, you'll have trained "buckets" of resolutions, which should result in higher quality.

  • @Biips
    @Biips 1 year ago

    I do mostly abstract illustration. I'm wondering how a model could be trained on my art style if objects aren't recognizable.

    • @wizzitygen
      @wizzitygen 1 year ago +1

      I'm not sure exactly, but I believe it will recognize patterns, shapes, color, etc. I'm unsure how you would prompt it, though. I would suggest trying it and experimenting; that would be the only way to know.

  • @Fitzcarraldo92
    @Fitzcarraldo92 1 year ago

    How does the Google Colab handle regularization images? When training directly with DreamBooth you have to provide a large dataset of images to contrast against the object or style you are creating.

    • @wizzitygen
      @wizzitygen 1 year ago +1

      Hi Jasper. Not sure I understand your question, but you are not creating an original model; you are actually including your data as part of the larger dataset/model. So when you ask for an image that was not part of your training images, it will reference the larger model for that and then apply your style. Not sure if this answers your question.
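For context on the regularization question above: in the diffusers DreamBooth training script that this kind of Colab is built on, regularization corresponds to the "prior preservation" option, where class images (e.g. generic "artwork" images) are generated or supplied and trained against alongside your own. A rough sketch of the relevant flags as they appear in the official diffusers `train_dreambooth.py` example; the Colab's fork may name things slightly differently, and all paths, prompts, and the base-model id here are placeholders.

```python
# Run in a Colab cell; flag names follow the diffusers `train_dreambooth.py` example script.
!accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="/content/my_art" \
  --instance_prompt="artwork in mystyle style" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --class_data_dir="/content/class_images" \
  --class_prompt="artwork" \
  --num_class_images=200 \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=800 \
  --output_dir="/content/drive/MyDrive/AI_PICS/models"
```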

  • @remain_
    @remain_ 1 year ago

    I'm curious about the img2img functionality. Rather than typing a prompt of a cat, I'd love to see if it could translate an image of a specific cat into vrcty_02.

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there. Not sure I understand what you mean. Like a Siamese cat or the like?

  • @nickross6245
    @nickross6245 1 year ago

    I want to make sure this doesn't sample other artists. I'm fine with it using pictures of objects for reference but is it 100% only sampling my artwork for style?

    • @wizzitygen
      @wizzitygen 1 year ago

      It samples your art for style, but if you refer to other things in your prompts (i.e. tree, house, etc.) it uses the larger model to generate those ideas, in the style you trained it on.

  • @mortenlegarth1047
    @mortenlegarth1047 1 year ago

    Do you not need to add text to the training images to let SD know what they depict?

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there. That is what the "Class Prompt" is useful for. It helps classify the image.

  • @bcraigcraig4796
    @bcraigcraig4796 1 year ago

    How did you download Stable Diffusion WebUI on your Mac?

  • @bcraigcraig4796
    @bcraigcraig4796 1 year ago

    Where do I get Stable Diffusion to download on my Mac?

    • @wizzitygen
      @wizzitygen 1 year ago +1

      Try googling "Stable Diffusion WebUI for Mac, GitHub". If you have an M1 chip, add that to your search.

  • @dronematic6959
    @dronematic6959 1 year ago

    Do you put any regularization images?

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there. I'm not exactly sure what you mean by regularization images. Do you mean training images?

  • @paulrobion
    @paulrobion 1 year ago

    Damn, I followed all the steps and the model seemed to work correctly in DreamBooth but not in AUTOMATIC1111: the *.ckpt shows in the dropdown menu but I can't select it for some reason. Safetensors files work though, what did I do wrong?

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there, I'm not sure why that would be. Sometimes the latest commit can be buggy, but I am by no means an expert in these matters. I'm sorry I can't be of more help.
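If a .ckpt refuses to load in AUTOMATIC1111 while .safetensors files work, as in the thread above, one workaround is converting the checkpoint. A rough sketch using PyTorch and the safetensors library; the file names are placeholders, and checkpoints containing shared or non-tensor entries may need extra handling.

```python
import torch
from safetensors.torch import save_file

ckpt = torch.load("my_style_model.ckpt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)   # SD checkpoints usually nest the weights under "state_dict"
# safetensors only stores tensors, so drop anything else (step counters, config blobs, etc.).
tensors = {k: v.contiguous() for k, v in state.items() if isinstance(v, torch.Tensor)}
save_file(tensors, "my_style_model.safetensors")
```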

  • @МайаРудольфовна

    How can I train one model multiple times? For example: I trained a model to recognize a new art style, and now I want the same model to be able to draw a specific bunny plush in this new art style. In the Colab it says to change model_name to a new path, but a path from where? Google Drive or the Colab folder?

    • @wizzitygen
      @wizzitygen 1 year ago +1

      I believe you would have to add your model to Hugging Face and link to it.

    • @МайаРудольфовна
      @МайаРудольфовна 1 year ago +1

      @@wizzitygen Looks like you're right. Thanks for the consult.

  • @bcraigcraig4796
    @bcraigcraig4796 1 year ago

    I know you need to export at 512 x 512, but does it need to be PNG?

  • @rwuns
    @rwuns 1 year ago +1

    I don't see a file called model.ckpt, and I did everything correctly!

    • @themaayte
      @themaayte 1 year ago

      I have the same issue, @euyoss did you find a solution?
      @wizzitygen

    • @rwuns
      @rwuns 1 year ago

      Nope ;C @@themaayte

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there, not sure what the problem might be. Have you searched the entire drive? Search (.ckpt).

    • @themaayte
      @themaayte 1 year ago +2

      @@wizzitygen Hi, so I've found the solution: the code doesn't give you a .ckpt file, it gives you a .safetensors file, which is the same thing.

    • @rwuns
      @rwuns 1 year ago +1

      thanks !!
      @@themaayte
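Several comments here report that model.ckpt never shows up in Drive; as the thread above notes, newer runs of the notebook save a .safetensors file instead. A quick way to locate whatever was saved is to search the mounted Drive from a Colab cell. The AI_PICS/models path is the one used in the video; adjust if yours differs.

```python
from pathlib import Path
from google.colab import drive

drive.mount("/content/drive")

root = Path("/content/drive/MyDrive")
for pattern in ("*.ckpt", "*.safetensors"):
    for f in root.rglob(pattern):
        print(f, f"{f.stat().st_size / 1e9:.2f} GB")
```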

  • @anuragbhandari3776
    @anuragbhandari3776 1 year ago

    Is having image names with the same prefix mandatory?

    • @wizzitygen
      @wizzitygen 1 year ago

      Yes, you will get better and more consistent results.

  • @cezarybaryka737
    @cezarybaryka737 1 year ago

    Can I save the file with a .safetensors extension instead of .ckpt?

    • @wizzitygen
      @wizzitygen 1 year ago +1

      In this instance it saves as a ckpt.

  • @pressrender_
    @pressrender_ 1 year ago

    Thanks for the video, it's really great and helpful. Quick question: do I have to redo all the model calculations every time I re-enter, or is there a way to skip that?

    • @wizzitygen
      @wizzitygen 1 year ago +1

      Hi Renato. I'm not sure I understand your question. But once the model is trained you can use it freely in Stable Diffusion Automatic 1111 Interface. No need to retrain every time.

  • @Queenbeez786
    @Queenbeez786 1 year ago

    omg it worked aaaaaaaaaaaaaaaaaaaaaaaaaaaa. i've been stuck on this issue for months, im a noob with this so little issues would last weeks. thank so so much. can't believe i did this on my own lol. do you have a discord or community?

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there, I'm so happy it worked for you. I do have a Discord Channel #wizzitygen but I am not very active on it. I made this video a while back to give artists a leg up on working with their own images and haven't posted many other videos since. My business takes up much of my time as well as the poetry I write. Thank you for your comment, it is nice to know this video is helping people.

  • @CRIMELAB357
    @CRIMELAB357 1 year ago

    Can someone show me the process of uploading the .ckpt to Hugging Face and using the model online? Please... anyone?

    • @wizzitygen
      @wizzitygen 1 year ago

      This might help you. huggingface.co/docs/hub/models-uploading
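Following the docs link above, here is a minimal sketch of uploading the trained weights with the huggingface_hub library; the repo name and file name are placeholders, and you need a write token from your Hugging Face account settings.

```python
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")   # or run `huggingface-cli login` once instead of passing a token
api.create_repo("your-username/my-art-style", repo_type="model", exist_ok=True)
api.upload_file(
    path_or_fileobj="my_style_model.safetensors",
    path_in_repo="my_style_model.safetensors",
    repo_id="your-username/my-art-style",
)
```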

  • @omuupied6760
    @omuupied6760 1 year ago

    Why can't I find model.ckpt in my Drive?

    • @wizzitygen
      @wizzitygen 1 year ago

      Not sure. Have you tried searching the entire drive?

  • @breeezywitit1714
    @breeezywitit1714 11 hours ago

    Is there any way we could do a consultation? You could walk me through a few steps via a Zoom or phone call. I'm willing to pay.

  • @gerardcremin
    @gerardcremin 1 year ago +1

    👍 "Promosm"

  • @clyphx
    @clyphx 1 year ago

    18min to go

    • @wizzitygen
      @wizzitygen 1 year ago

      Hope it turned out to your liking!

  • @shin-ishikiri-no
    @shin-ishikiri-no 6 months ago

    Please drink water.

  • @goatfang123
    @goatfang123 1 year ago

    Waste of time; it stops working the next day.

    • @wizzitygen
      @wizzitygen 1 year ago +1

      Hmm. That is unusual. The model I created in this video is still working fine. One thing to check is to make sure you have the correct model loaded when generating the image, i.e. select the right .ckpt (Stable Diffusion checkpoint) file.

  • @user-jk9zr3sc5h
    @user-jk9zr3sc5h 1 year ago +1

    You didn't have to annotate each image?

  • @judge_li9947
    @judge_li9947 1 year ago

    Hi, thank you very much. I used this before and it is for sure the easiest and best video out there. Today, though, I got the following error; can you help? ValueError: torch.cuda.is_available() should be True but is False. xformers'
    memory efficient attention is only available for GPU
    Not sure what to do.
    Much appreciated, thanks.

    • @wizzitygen
      @wizzitygen 1 year ago

      Hi there, thanks for the kind words. Your best bet with errors is to paste the error in Google. It is usually a bit of a hunt but the solutions are usually out there if you Google the error. Sorry I can’t be of more help.
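The torch.cuda.is_available() error in the thread above usually means the Colab session has no GPU attached (the runtime type is set to CPU, or free-tier GPU access was unavailable). Switching the runtime to a GPU under Runtime > Change runtime type and rerunning the notebook normally clears it; a quick check cell:

```python
import torch

print(torch.cuda.is_available())   # should print True on a GPU runtime
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```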