DREAMBOOTH: Train Your Own Style Like Midjourney On Stable Diffusion

  • Published 24 Aug 2024
  • Dreambooth is Google's new AI fine-tuning technique, and it lets you train a Stable Diffusion model on your own pictures with better results than textual inversion. Dreambooth was originally built on the Imagen text-to-image model, and this technology makes it possible to insert any character (yourself, your friends, your family), object or animal you want into a Stable Diffusion model, all with just a few images and in less than an hour. In this video, I will show you how you can train your own style similar to Midjourney using Dreambooth and GPU renting sites like Runpod or Vast.ai, all for a few cents. I will also show how you can download a trained Dreambooth model from the Hugging Face Stable Diffusion Dreambooth Concepts Library and then convert it into a CKPT file using the convert_diffusers_to_sd.py script that just came out a few days ago!
    Did you manage to train a new style? Let me know in the comments!
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my work on Patreon: / aitrepreneur
    ⚔️ Join the Discord server: bit.ly/aitdiscord
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Runpod: bit.ly/runpodAi
    HUGGING FACE:
    huggingface.co...
    Midjourney art style: mega.nz/folder...
    ddfusion style: mega.nz/folder...
    Converter script: gist.github.co...
    python .\convert_diffusers_to_sd.py --model_path .\disco-diffusion-style --checkpoint_path .\disco-diffusion-style\discodiffusion.ckpt
    Download my repo by replacing the cell with:
    dataset="style_ddim" #@param ["man_euler", "man_unsplash", "person_ddim", "woman_ddim", "blonde_woman"]
    !git clone github.com/ait...
    !mkdir -p regularization_images/{dataset}
    !mv -v SD-Regularization-Images-Style-Dreambooth/{dataset}/*.* regularization_images/{dataset}
    ./gdrive upload ./trained_models --recursive
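    If you want to test a concept from the Dreambooth Concepts Library before (or instead of) converting it to a CKPT, here is a minimal diffusers sketch. The repo id "sd-dreambooth-library/disco-diffusion-style" is only an assumed example; swap in the concept you actually downloaded:
    import torch
    from diffusers import StableDiffusionPipeline

    # assumed example repo id from the Concepts Library; replace with your own concept
    model_id = "sd-dreambooth-library/disco-diffusion-style"
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

    # the trained token/class words go straight into the prompt
    image = pipe("a castle in the mountains, ddfusion style").images[0]
    image.save("ddfusion_test.png")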
    #stablediffusion #dreambooth #stablediffusiontutorial
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    WATCH MY MOST POPULAR VIDEOS:
    RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
    ►► bit.ly/stabled...
    RECOMMENDED WATCHING - My "Tutorial" Playlist:
    ►► bit.ly/TuTPlay...
    Disclosure: Some of the links in this post are affiliate links, and if you go through them to make a purchase I will earn a commission. I link these companies and their products because of their quality, not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

COMMENTS • 301

  • @SabyMp · 1 year ago · +109

    Do you know that your channel is the only one that helps people learn all these difficult tasks so we can do them ourselves? You are amazing for choosing this way of helping people learn and experiment. I appreciate you so much and love this channel for its educational purpose.

    • @Aitrepreneur · 1 year ago · +7

      Glad to help

    • @cliveagate · 1 year ago · +4

      I totally agree. I'm a complete beginner in the AI field, but the Ai Overlord has shown a bright light at the end of a very long and changing tunnel 🤜🤛

    • @nemonomen3340 · 1 year ago · +7

      I agree he’s awesome, but technically I know there are at least a couple others who produce similar content.

    • @iamYork_ · 1 year ago

      I might be able to help you friend...

  • @zvit · 1 year ago · +4

    The golden nugget for me was to learn that you can type 'cmd' in the path bar to open a command prompt at that location!

  • @done.8373 · 1 year ago · +4

    I seriously don't know how you figure this out this quickly and then get a vid up within days of release. Really amazing.

  • @chelfyn · 1 year ago · +12

    Thank you for this awesome tutorial and all the other great work you've done recently. I am getting so much joy and satisfaction out of mastering these amazing bleeding edge tools.

  • @roboldx9171 · 1 year ago · +3

    Thank you. I would never have gotten this together on my own. This is the best channel for understanding the Ai experience. You are the best. Keep up the good work.

  • @AscendantStoic · 1 year ago · +11

    First of all, thanks for sharing the models. Also, I think the "Checkpoint Merger" in the AUTOMATIC1111 GUI (which can combine two model files into one) is now more important than ever, but it isn't really straightforward; I looked around and it's not clear what each option or choice does. Hopefully you can make a tutorial on how to use it properly, now that people might want to combine their own trained models of their likeness with the Waifu Diffusion models or with your Midjourney or Disco Diffusion style models.

    • @Aitrepreneur · 1 year ago · +9

      I will yes

    • @AscendantStoic · 1 year ago

      @@Aitrepreneur Ty!

    • @StillNight77 · 1 year ago · +4

      Just to give you some quick help:
      The .ckpt merger doesn't just combine two models together; it mashes them with a loss rate close to the slider percentage you input in the WebUI. This means that you're diluting the data of both models heavily, and the resulting model won't be as good as either of the initial models by themselves. THAT SAID, it's very useful; I do use it a lot.
      - Click on the tab > See two dropdowns and a slider bar.
      - Choose two models to combine using the dropdowns.
      - (Optional) Choose a name for the new file.
      - Move the slider bar along the track to adjust what percent (roughly, it's not exactly how it works internally) of each .ckpt model the new model will have information from.
      - - Example: SD 1.4 + WD 1.3 with the slider bar at 0.3 will be 30% SD 1.4, and 70% WD 1.3
      - Click merge and wait for the message box on the right to say "Success"
      You have to go to Settings and click Reload at the base of the page, or restart the WebUI.bat launcher to use the new model. It automatically saves to the models folder.
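      (For the curious, here is a rough sketch of what that weighted merge amounts to: a linear interpolation of the two checkpoints' weight tensors. The file names are hypothetical, this is not AUTOMATIC1111's actual code, and the exact slider semantics may differ from the 30/70 example above.)
      import torch

      alpha = 0.3  # slider value; per the example above, the share kept from model A

      # hypothetical checkpoint names; adjust to the files you actually have
      a = torch.load("sd-v1-4.ckpt", map_location="cpu")["state_dict"]
      b = torch.load("wd-v1-3.ckpt", map_location="cpu")["state_dict"]

      merged = {}
      for key, tensor in a.items():
          if key in b and b[key].shape == tensor.shape:
              # simple linear interpolation of the two weight tensors
              merged[key] = alpha * tensor + (1.0 - alpha) * b[key]
          else:
              # keys missing from model B are carried over unchanged
              merged[key] = tensor

      torch.save({"state_dict": merged}, "merged.ckpt")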

    • @AscendantStoic · 1 year ago

      @@StillNight77 Thanks a lot, but what about the options below: the "interpolation method" setting, which has three different ways to mix the files, and the "Save as float 16" option?

  • @Neurodivergent · 1 year ago · +5

    Great info but especially mad props for having concise easy to understand videos. Not enough ppl do that anymore.

  • @zonas7915 · 1 year ago · +3

    There is an error when I try to login;
    Failed to load model class 'VBoxModel' from module '@jupyter-widgets/controls'
    Error: Module @jupyter-widgets/controls, version ^1.5.0 is not registered, however, 2.0.0 is

    • @Aitrepreneur · 1 year ago · +1

      No idea where this error comes from; where does this happen?

    • @iisilas · 1 year ago · +1

      I am also having this problem it happens where the huggingface logo should be on the login step

  • @mashonoid · 1 year ago · +2

    What is regularization, and what does it do? And most importantly how do I get my own regularization for training styles?
    Also what I have concluded in some experiments I did, is that you don't need to specify the token and class in the prompt, you just need the model loaded.
    Also you don't need to restart Automatic1111's WEBUI after changing the model, just wait for it to be loaded (see console) and you are good to go.

  • @thanksfernuthin · 1 year ago

    Good job picking samples for the Midjourney model. It's a great style. Very rich. Very beautiful. Situationally valuable.

    • @Aitrepreneur · 1 year ago · +1

      I like it too, the sky is the limit with the style training

  • @wk1247 · 1 year ago · +1

    In your window you don't need to type cmd; you can just do `git clone repo` directly in the future.
    I would also recommend that others run the pip install torch in a venv (virtual environment) for Python modules, so you're not breaking anything in the future.

  • @amj2048 · 1 year ago · +5

    You make these videos so quickly, very impressive!

    • @Aitrepreneur · 1 year ago

      Glad you like them!

    • @EightBitRG · 1 year ago

      Seriously though, good job on staying up to date!

  • @ProducingItOfficial · 1 year ago · +3

    Aitrepreneur, if you could please explain what regularization images are and how they affect the final model, it would be greatly helpful!

  • @muhammadshahzaib9122 · 1 year ago

    Best video till now, regarding Stable Diffusion Models... Keep it up 👍👍

  • @TheWizardBattle · 1 year ago

    Thanks for the video, I was already experimenting with this it's really nice to know how someone else more knowledgeable than I does it.

  • @theappointed · 1 year ago · +3

    Another great video, thanks! Being able to upload to google is a great addition as well. I was trying to figure it out myself but couldn't get it to work 👍👍

  • @PawFromTheBroons · 1 year ago · +1

    This was very gracious of you to provide the trained CKPT files.
    Thanks a *LOT*, really.

  • @offchan · 1 year ago

    What I learned from the video:
    1. It seems the `gdrive` CLI can both upload and download files to/from Google Drive; it doesn't only download files. And it's also fast. I used to run `runpodctl send` to upload files to Colab because I thought I couldn't upload files to Google Drive directly. This is a game changer.
    2. TheLastBen's repo doesn't have a way to specify the token class, which is different from this video, so I'm not sure why.

  • @oakman8512 · 1 year ago · +6

    Very useful videos thanks a lot! I was wondering maybe you could do a video about "Stable Diffusion Infinity"? It allows infinite outpainting. I think many people would be interested in that.

  • @nayandhabarde · 1 year ago · +1

    Any tips on the dataset used for training a style? Like how many landscapes, characters, and objects should there be? What type?

  • @blakewbillmaier999 · 1 year ago

    You are an absolute gentle-robot. Thanks for the videos!

  • @Jukaorena · 1 year ago

    10k Bro, congrats

  • @pablopietropinto5907 · 1 year ago · +3

    Sorry, I can't find where to enter the GitHub username. When I try to download your regularization images, the files never download; I just see: "Github username".
    Can you help me please?
    Great job!
    Thanks for sharing.
    Pablo.

  • @madebyrasa · 1 year ago

    Thanks for sharing! Stunning, top-row share right there. It's really nice that you're making these videos.

  • @mingranchen6938 · 1 year ago · +1

    Thanks for the wonderful tutorial. I want to know if the pre-generated regularization images are important. What I want to do is train the style of a specific anime (for example Attack on Titan, JoJo or Ghibli). So is it better if I prepare some pictures from those anime as pre-generated regularization images before I train my style?

  • @Efotix · 1 year ago

    Disco diffusion works! Thank you.

  • @chaks2432 · 1 year ago · +1

    Can you do a Google Colab version? Or is it the same as the previous video?

  • @nehoray200 · 1 year ago · +3

    Can you make a video that explains how to combine 2 ckpt files together because I have 2 characters that I want to put together?

    • @Aitrepreneur · 1 year ago · +2

      Yes that's a good idea

    • @lekistra1166 · 1 year ago

      @@Aitrepreneur Hey, can you paste what the inference cell should look like when loading a ckpt model from Drive? I keep getting a syntax error

    • @Aitrepreneur · 1 year ago · +1

      Check my previous video, I show that I think

    • @lekistra1166 · 1 year ago

      @@Aitrepreneur it shows running it on Runpod, not Colab

  • @phiavir5594 · 1 year ago · +1

    For newcomers, I'd advise not using the Community Cloud on Runpod. It downloads at such incredibly slow speeds that you will probably end up wasting time and money compared to Secure Cloud. It also just seems to get stuck when running certain cells, and you can't tell if it's doing anything or not. It sucks that the availability is so bad, but it is what it is.

    • @Aitrepreneur · 1 year ago

      Better this than nothing

    • @bladechild2449 · 1 year ago

      Dear god absolutely this. I just spent an hour training the images to then find out Runpod wants to upload the file to google drive at 200 kbps. DO NOT USE THE COMMUNITY SERVERS PEOPLE

  • @beonoc · 1 year ago · +4

    Hey, I'm stuck at the part around 9:15; it's asking me to log into GitHub in the output, but I can't type anything

    • @CampfireCrucifix · 1 year ago · +3

      I am also having the exact same issue. It says "Username for: github"

  • @Scorpiove · 1 year ago

    Thank you, I was hoping someone would train the Midjourney style into SD. It works great btw. :)

  • @MarkWilder68 · 1 year ago · +1

    I find myself just sitting here waiting on your next video to see what it is, very nice, I will definitely try this.
    Thank you.

  • @rice.flakes · 1 year ago

    This is a fantastic tutorial. Thank you thousand times!

  • @westingtyler1 · 1 year ago · +2

    3:30 TIP for "python not found" command error thing: if the command window says python not found even though you've installed python, you may need to add your python installation path to the windows environment variables path. it's pretty simple, and there are quick tutorials online for adding python to the windows path. once I did this, both the pip command, and the python command, worked just fine.
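    (A quick way to sanity-check this tip from Python itself once any interpreter launches: a small sketch that prints which interpreter is running and what the shell would resolve "python" and "pip" to; a None result means that entry is still missing from PATH.)
    import shutil
    import sys

    # which interpreter is running this script
    print("interpreter:", sys.executable)

    # what the shell would resolve "python" and "pip" to (None means not on PATH)
    print("python on PATH:", shutil.which("python"))
    print("pip on PATH:", shutil.which("pip"))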

  • @isekai_beauty4389 · 1 year ago

    I joined your discord community!! You are awesome!!

  • @gigginogigetto7620 · 1 year ago

    Absolutely fantastic! Thank u so much for this tutorial and the shared folder! :D
    Subscribed for the incredible help u gave me!

  • @salvadorrobles7014 · 1 year ago

    Yes, thanks for your work man... I can't follow every day because of family, work... but I tried the Colab for training a person model and IT WORKED, and you are right, not too much quality, although it would be interesting to see someone play around with samplers to get good stuff from Colab-trained models... I will try training on Runpod for comparison as you did... THANKS again... I see your number of subscribers rising every day... I really appreciate you sharing your trained models on another site aside from Mega... thanks anyway

  • @lithium534 · 1 year ago · +3

    Do you know a way to join a style and a trained "object" into one, so that you can have yourself or whatever in a style you trained?
    Great videos and great tutorials, well explained.

    • @crimsoncuttlefish8842 · 1 year ago

      I would bet you can train one dataset (yourself) into the original model.ckpt to make yourself.ckpt and then train the second dataset (your style) into yourself.ckpt, so you end up with yourself+style.ckpt

    • @Aitrepreneur · 1 year ago

      Checkpoint merger?

    • @lithium534 · 1 year ago

      @@crimsoncuttlefish8842 That is a good point. That should work.
      As I'm not skilled with Python, how would you load yourself into the program instead of the model that is downloaded directly from Hugging Face? Doing it locally is a no-go with only 11 GB.

    • @lithium534 · 1 year ago

      @@Aitrepreneur OK. Idk what that is but will google it tomorrow.
      Thanks.

    • @RhysAndSuns · 1 year ago

      @@lithium534 I think if you reload trained ckpts into dreambooth then the level of corruption of understanding gets too high. You could use one model and then image2img the style on with a different model, or you could train the 2 objects in as 1

  • @lenguyenphuoc · 1 year ago

    Very useful video, thank you man!

  • @mohammeda-lk1kw · 1 year ago · +1

    Has anyone managed to do this locally without Runpod? Runpod seems to be using very obscure versions of the requirements that make it almost impossible to run elsewhere.

  • @zonas7915 · 1 year ago · +3

    Would be cool to have a video showing how to add yourself to Stable Diffusion + apply a style to yourself, because for now, with this video, I can't add myself + a style I want

  • @sasufreqchann · 1 year ago · +2

    bro its saying FileNotFoundError: No such file or directory: '.\\discodefstyle\\unet\\diffusion_pytorch_model.bin'

  • @suduvanofficial2270 · 1 year ago

    thank you so much for the files bro!

  • @andreabigiarini · 1 year ago

    Thank you for your work. You're the best!

  • @HavoJavo · 1 year ago · +1

    No need to convert embeddings to ckpt. As of the latest 1111 version you can use .pt and .bin files directly by placing them into the /embeddings folder and using the filename in the prompt. Even multiple embeddings at once works.

    • @Aitrepreneur · 1 year ago

      This is not textual inversion, this is dreambooth

    • @alex.nolasco · 1 year ago

      That’s what I had understood as well , tried it and it seems to work.

  • @kavellion · 1 year ago

    Thanks man. If you make anymore ckpt files. Those are awesome.

  • @MA-ck4wu · 1 year ago

    Thanks for the awesome simple-to-follow tutorial. I used Google Colab instead, since TheLastBen's dreambooth notebook doesn't require as much VRAM (it used 7 GB) to train a model

  • @Perfectblue55 · 1 year ago

    Thank you... really enjoy and learn so much with yours videos..!!! 😀👍

  • @generalawareness101 · 1 year ago

    OMG, this is what I have been waiting for, as I am not wanting to do models for now. I have many artists not in SD that I wanted to do DB on. Thank you for this.

  • @Patrick2017- · 1 year ago · +1

    Where is the GitHub link?

  • @joes3635 · 1 year ago

    Two things...
    1. The mega zips are corrupted and won't extract in winrar or 7zip
    2. After copying the ckpt to the /models/stable-diffusion folder (or subfolders) errors are being thrown after trying to swap to that ckpt in the GUI.
    Too bad, this was looking like a cool tool to try.

  • @spearcy · 1 year ago

    I didn't see why it's necessary to change any of those names you mentioned at 4:07, because those names already show up on your python link below the way you say they should look.

  • @corujameireles · 1 year ago · +1

    Can someone please upload the ckpt files to Google Drive? I don't have the bandwidth to download from Mega

  • @catrocks · 1 year ago

    Cheers for the video ♥

  • @leeblackharry · 1 year ago

    Is there a video guide for doing this all on ones PC, no services on the internet?

  • @verticallucas · 1 year ago · +1

    Awesome tutorial. I see lots of potential here. I'm wondering if it's possible to train a specific situation/pose, and then on top of that use this model with another model I've trained. For example, Mario slapping Luigi (character models) in the face (situation model). Would that be checkpoint merger?

  • @schmidbeda9866 · 1 year ago

    The notebook "download normalisation images" asks for GitHub Username and password, which cannot be passed to the git clone url since the authentication with password is discontinued. Also, it should not be required at all anyway.

  • @TheAndzhik · 1 year ago

    Could you please post your prompts in the description? (for this and for future videos).
    Thanks!

  • @binyu335 · 1 year ago

    Thank you for your video. Since there are so many models, is there any method to merge all the different models (object, style, etc.)? That would be more convenient in the future.😀

    • @Aitrepreneur · 1 year ago

      No, unfortunately that's not how this works...

  • @hatuey6326 · 1 year ago

    Just awesome thanks so much !!

  • @gdizzzl · 1 year ago · +1

    will it work if my images aren't 512X512?

  • @kallamamran · 1 year ago · +1

    Nice tempo!

  • @crimsoncuttlefish8842 · 1 year ago

    Instead of doing "Photo of ddfusion style," try "Prompt in the style of ddfusion style," and that way you can customize what you get!

  • @flonixcorn · 1 year ago

    Great video like always, I'm still trying to run dreambooth locally

  • @daniel.skarzynski · 1 year ago · +1

    Hi, does anyone know how to add your own regularization images from your own GitHub repo? I added my own link but it wants my username and password. Should I change some settings in GitHub to clone the repo without this?

  • @mikemenders · 1 year ago

    I really like your videos, and I am studying DreamBooth. What I didn't see in the video is what Learning Rate you used for the training. 5e-6 or 1e-6?

  • @ThELUzZs · 1 year ago · +1

    Hi great tutorial and work, keep it up

    • @Aitrepreneur · 1 year ago · +1

      The checkpoint selector has been moved to the top left of the screen

  • @TheGameLecturer · 1 year ago

    On my Vast.ai machine, the Google Drive trick didn't work; my access was denied... But surprisingly, the "normal" download only took a minute or so (with "download as a zip"; the simple download returned an error).

  • @Uratz · 1 year ago

    Need to know how to train Midjourney on a style

    • @Aitrepreneur · 1 year ago · +1

      Already done a video about that check out my how to train a style video, I did it with Midjourney images ;)

  • @tentacle_sama3822 · 1 year ago

    Need an update on this with the new dreambooth

  • @MatheusTassoo · 1 year ago

    I did everything that you said, but when I try to open webui-user.bat there is an error: "The file may be malicious, so the program is not going to read it.
    You can skip this check with --disable-safe-unpickle commandline argument." How can I fix this issue? I'm trying to install disco diffusion btw

  • @GarethOwenFilmGOwen · 1 year ago

    This may sound silly, but which is better for producing images: using a trained style in img2img, or doing all of the training and then creating images in the style with text prompts from the original images?

  • @NeonXXP · 1 year ago · +1

    This is what I've been waiting for. Time to train Cutesexyrobutts' style! The next big leap would be the ability to update and grow my CKPT so I can add other people's trained libraries to my existing file without losing what I already have. Is that a thing yet?

    • @Aitrepreneur · 1 year ago · +1

      No, not yet

    • @StillNight77 · 1 year ago · +1

      When you download (or upload) the SD 1.4 model as a base, that's basically what you can do moving forward if you've always used SD 1.4's model. Just take the last trained model and use that as the new base. At some point it'll get super diluted, and you'd have to use new tokens for every single new style/person/etc, but they'll all be in there.

  • @SnoMan1818 · 1 year ago

    what is the difference between hypernetwork, google ai, and dreambooth when it comes to training?

  • @AfterAlter.x · 1 year ago

    If I wanted to train both a style and a person to run in the same ckpt, is that possible? Then I would be able to prompt for both a person and a style without as many characters needed

  • @vocally13 · 1 year ago

    2:50 stuck on git clone here: unpacking objects reaches 100% but there's nothing like "filtering", and the process keeps going without any new output in cmd

    • @vocally13 · 1 year ago

      my cmd is stuck when it should be updating the command line

    • @vocally13 · 1 year ago

      could it be that the style repo I'm trying to clone is not responding?

  • @MaximeMalters · 9 months ago

    If you feed it a style like all the Super Mario World sprites, can one hope to have it produce new sprites in this style?

  • @Shykar0 · 1 year ago

    how does style and model training work out exactly?

  • @TheJefferson · 1 year ago

    I'm getting an out of memory error, even with cards on Runpod with 24 GB:
    RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 23.69 GiB total capacity; 18.33 GiB already allocated; 42.69 MiB free; 18.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    It worked the first time about a week ago, but I can't get it to work again.
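
    (As the error message itself suggests, you can also try the allocator hint before reaching for a bigger card; a minimal sketch, assuming you can set the variable in the notebook before the first CUDA allocation, and treating the 128 value as just an example:)
    import os

    # must be set before the first CUDA allocation happens
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch  # imported after the env var so the CUDA allocator picks it up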

    • @Aitrepreneur · 1 year ago · +1

      use the pytorch template instead of stable diffusion template

    • @TheJefferson · 1 year ago

      @@Aitrepreneur awesome, Thanks!

  • @wapzter · 1 year ago

    You are really absolutely amazing!!! Thank you very much

  • @an1kii309 · 1 year ago · +1

    Hello guys, can someone help me? I did all the parts, and at the end, when launching the training, I got this error:
    FileNotFoundError: [Errno 2] No such file or directory: '/workspace/Dreambooth-Stable-Diffusion/training_images'
    but I really have a training_images folder with all my pictures from Imgur.
    Please help lol
    Thanks in advance!

    • @bladechild2449 · 1 year ago · +1

      I had this and it was because I didn't use caps on the Dreambooth-Stable-Diffusion bit. The folder name has to match exactly, caps and all, to the code you run at the training bit

  • @BrogramFilmss · 1 year ago

    I'm sad I haven't subscribed yet; that changes!

  • @eddiej.l.christian6754 · 1 year ago

    Why not just install these as Styles??

  • @fallency4 · 1 year ago

    Great video !

  • @DrMacabre · 1 year ago

    Has anyone been doing this locally in VOC? I know how to train a portrait with dreambooth in VOC, but this sounds a little different

  • @metanulski · 1 year ago

    Can you make a video on fusing two dreambooth models?

  • @jean-christophepaulau9040 · 1 year ago

    Nice vid, but a question: how can you use both a trained model for a person done with Dreambooth and apply a trained style à la Disco Diffusion or Midjourney to this newly trained person? It involves using two ckpt files at once... because in the settings of SD you can only choose one model at a time... Would merging the two ckpt files be a solution to have the trained person and the trained style available to prompt simultaneously? Thanks

  • @greendsnow · 1 year ago

    When will Stable Diffusion 1.5 come public? We're all training on this old technology...

    • @Aitrepreneur · 1 year ago · +2

      No one knows, but the 1.5 isn't that great tbh just a tiny bit better than 1.4

    • @greendsnow · 1 year ago

      @@Aitrepreneur that's good to know.

  • @SpaceRitual · 1 year ago

    i got this error: FileNotFoundError: [Errno 2] No such file or directory: '.\\disco-diffusion-style\\unet\\diffusion_pytorch_model.bin'
    Any solution?

  • @gaminghawk4794 · 1 year ago

    Hey, I tried this part and pip install torch won't work; it says ('pip' is not recognized as an internal or external command,
    operable program or batch file.) so it's not an environment variable issue; did I type it wrong?

  • @werewolfpreyan · 1 year ago · +1

    I am wondering if it is possible to use multiple models (for classes, styles, etc.) and use them in a single prompt. That would open up whole new worlds, as then we could truly create something unique from our own inspirations and styles, and mix and combine things together. Good work nonetheless, I love how quick you are in updating things, and I like the quality of your tutorials and the hard work you do. :D

    • @pabloescaparo6511 · 1 year ago · +1

      Yes. Just merge weights of trained checkpoints.

    • @werewolfpreyan · 1 year ago · +1

      @@pabloescaparo6511 What do you mean? What I meant is something like this as a Prompt- Me(name) Person(class) holding a Ice Katana(sword's name) Prop/Object(class) within Zebra(building name) Building(class) in the My(style name) Style(class). In this, I am using 4 self trained classes, Person, Object, Building and Style all within one Prompt. Possible?

    • @joachim595 · 1 year ago

      @@werewolfpreyan You can merge models easily in Automatic1111, but that will also mean that the styles will compromise each other. But try it, you might get some new exciting results mashed together :)

  • @goatnamese · 1 year ago

    Damn you are amazing!!!

  • @CRIMELAB357 · 11 months ago

    Don't you think just maybe you're overcomplicating this?

  • @LinhLe-xs1bd · 1 year ago

    Could you please give me a download link for the Ghibli style?

  • @lekistra1166 · 1 year ago

    Hey, can you paste what the inference cell should look like when loading a ckpt model from Drive? I keep getting a syntax error;
    I pasted the path in line 6 instead of 'OUTPUT DIR'

  • @talismanjorgensen1710 · 1 year ago

    sorry for the basic q, but could anyone point me to something that talks about how to add negative prompts in this situation? i know what they are, and don't need suggestions, but i'm not sure how to include them (do i just say "negative" or use the "(( ))" syntax..?) Thanks!

  • @FazerGS · 1 year ago

    Switching between checkpoints in the webui settings doesn't work for me. It still generates using the one that it uses when it first loads. For example, it'll default to one ckpt and only use that, even when I select others from the list. I have to remove the other models from the models folder in order to use a specific one.

    • @Aitrepreneur · 1 year ago

      Maybe update to the latest version or relaunch SD

  • @LinkPellow · 1 year ago

    does this work for IMG2IMG as well or only text to image

  • @mingranchen6938 · 1 year ago

    Can I train it on RTX A4000? The RTX A5000 is always unavailable.

    • @Aitrepreneur · 1 year ago

      you can use a 3090 instead, it has 24 GB of VRAM

  • @RonnieMirands · 1 year ago · +1

    Damn, that's completely complicated. Pope will pass :D