I've started saving every 50 steps, since it lets you see more precisely when things start to go bad. Also, if you don't want a precise style, you can use a more general prompt or leave it out entirely. The most important thing is to train until things go bad, then take the best image and train on that at a lower learning rate. Presumably (I haven't tried it yet) you can take it even further with a third iteration.
Not sure if anyone has mentioned it, but if you put a checkmark in the setting "Move VAE and CLIP to RAM when training if possible" under training settings, it prevents the issue you are having with overtraining.
Why? Does that mean you can achieve better training, in the sense of pushing back the point at which it becomes overtrained?
Exactly the video I needed after updating to the hypernetwork update last night, thanks!
Help! Now that training is done and I have the .pt file, how do I use it?
@@MegaGasek Go to Settings, scroll to Hypernetwork, and choose your .pt file in the dropdown. Apply settings; then it should work when you prompt the name
@@kernsanders3973 How do I "apply" it? I choose it, I click on the blue button, and try some prompts. I tried with and without the selected hypernetwork, with and without clicking the blue button. The prompts with the same seed give EXACTLY the same result, so it is not using the .pt file.
What am I doing wrong? It isn't explained in the video.
@@NapalmCandy The blue button just refreshes the list. In the Hypernetworks dropdown in Settings, choose the hypernetwork, then click Apply Settings at the top of the Settings page. The prompts depend on the subject or style you used in the filewords template during training. If it's a subject, you would prompt with something like "a photo of hypernetworkname", hypernetworkname being the name of your hypernetwork. If it's a style you trained, it would be "a photo of a man by hypernetworkname", hypernetworkname again being the name of the hypernetwork.
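To make both cases concrete, a quick sketch (the names johnface and inksketch are just placeholders, substitute your own hypernetwork's filename):

```
a photo of johnface, portrait, highly detailed        <- subject hypernetwork
a photo of a man by inksketch, highly detailed        <- style hypernetwork
```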
@@kernsanders3973 So basically, the hypernetwork works the same as embeddings in the prompt input? The only drawback I see is that you can only have one hypernetwork at a time...
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
Surely...
But a question, please: why do I get a black image when I use Stable Diffusion? I am using a Lenovo Legion 17 with:
CPU: AMD Ryzen 5 5600H
GPU: Nvidia GeForce GTX 1650
Memory: 8 GB
Is it possible to run it on this?
Also, when I go to Settings > Stable Diffusion finetune hypernetwork, it's set to (None) and there's no model to choose.
Check my settings troubleshooting video; you need to add --precision full --no-half to the webui-user.bat file
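If you're not sure what that looks like, a minimal sketch of the relevant line in webui-user.bat (assuming an otherwise stock file; these flags are the usual fix for black images on GTX 16xx cards):

```bat
set COMMANDLINE_ARGS=--precision full --no-half
```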
@@Aitrepreneur I'm getting a few errors, I'll show you what I'm getting. It won't let me add a URL so it's basically the Pastebin website followed by '/H0kYdbHQ'. This is after putting "Git pull" into the file you pointed to in Notepad++. Edit: I'll just manually do it from Github for now.
What is the website you recommend for characters?
I just tried your Midjourney and Disco Diffusion checkpoints, and the Disco one I never managed to tame. It was ugly, then looked much better at double the steps of any other checkpoint I own, but still not very good. What could be wrong with the Disco one?
Honestly I think you get pretty good results if you're just wanting realistic faces. Don't even need that much refining either. I normally first train to about 1500 steps at 0.00005, then a few more thousand steps at 0.000005, and it seems pretty good for most cases. Also far more useful to have embeddings instead of dozens of different ckpts
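For what it's worth, the webui's learning rate field also accepts a stepped schedule, so both stages can run as one job; a sketch in its rate:step syntax (the 5000 endpoint just stands in for "a few more thousand"):

```
0.00005:1500, 0.000005:5000
```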
How do you get those .ckpts to use as models from the .pt hypernetworks produced? Am I missing something?
It is true that it takes longer, but it is free. And I had planned to create many models of different characters. Thanks for the info! 🔥🔥
Would absolutely love to see you make some Stable Diffusion news videos about the cool things happening and stuff going on. It's hard to get the info because it seems so spread out, with so many things being developed with this tech. If you have time, that is.
Thanks for your hard work!
Could be a good idea
@@Aitrepreneur I have 4 GB of VRAM. How long do you think it will take for the requirement to drop from 8 GB to 4 GB?
Wow this is big. Thank you so much for making this so clear.
The sculpting is the best analogy ever!
You are so helpful! Thank you for taking the time to make these. You are the only person I've found on YT who is walking through like this.
And that sphere analogy. Amazing 😄
It seems that a 3090 Ti can train 2000 steps at 0.00005 in 10 minutes, which is nice.
Definitely going to use this way of creating embeddings instead of the standalone repo now!
How many images are you using for training? I have a 3060/12G and it takes 20 min to train 20 images at 2000 steps and a 0.00001 rate
How do you invoke the character after training? Do you have to select the right hypernetwork in settings, and that's when you can use it in txt2img? Do you invoke it with the full name with suffix (like agentX-900), or is the name enough?
Answer is: you don't. It always uses the hypernetwork.
Thank you very much for tutorials. Here's a quick tip for you: In Photoshop for Windows, hold down ALT + Right click + Drag Horizontally. If you drag left, the brush size decreases while dragging right increases the size.
Thank you so much for your lessons. I have learned a lot.
A quick tip for people who, like me, work on one monitor with 2 windows open side by side (the tutorial and the webui): the webui will hide some tabs if it isn't fully expanded. It took me a few minutes to realize that. My train tab below the main Train tab disappeared, and I thought I had an older version of the webui or something. Keep an eye out if you have windows side by side.
Then I got out-of-memory errors on my RTX 2080 8GB VRAM card and the training couldn't proceed. So I added this line to the webui-user.bat file: set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
I don't know if this will have an effect on the quality of my renderings, but it let me go through with it. After training was done I removed the line from my bat file.
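For anyone unsure where that line goes, here's a minimal webui-user.bat sketch (the allocator values are the ones I used; the rest is the stock template):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

rem Workaround for out-of-memory errors during training;
rem remove it again once training is done if you prefer.
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

call webui.bat
```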
thanks for this hint
After you are happy with the trained .pt file... how can you start generating images with it? It is not explained in the video
@@NapalmCandy I have posted the same question too. I have the trained file and so far, after watching a lot of docs and videos online, I still don't know what to do with it... I thought it was just a matter of going into the settings tab and choosing the hypernetwork, but that didn't do anything for me.
@@NapalmCandy So, update: going into settings and adding the trained hypernetwork file did it for me. For some reason it didn't work the first time around; it was late and I was tired... The next day I did the same and it worked. I created this (for now) profile picture with it.
god what a great time to be alive, this stuff is so cool, thank you for the video
After you finish the training, how do you start generating images with it? I type in a prompt and it doesn't seem to generate that character. I'm assuming I have to load it in somehow, but it doesn't seem to be explained in this video. Edit: I'm selecting the checkpoint in settings but it still isn't generating the person when I enter the prompt. Edit 2: OHHHH I think I figured it out... After you load the checkpoint in the settings you've got to enter a generic prompt like girl or guy, and it fills in with your trained data. I thought I had to use the checkpoint name as a prompt...
Same question
Same question
@@NapalmCandy same question
same question
@@kinggunil same question
Tried it this morning, but wasn't satisfied with the results...
That's because I didn't think of the second part of the process.
I'm trying this right now, I'm sure this will be better.
Thank you 👍
Hey, great informative videos, and I've been watching you for a month or so now, but why don't you ever credit Automatic1111 for his work on the WebUI? I'm fairly sure that's what you're using when you say "Super Stable Diffusion". Apologies in advance if I'm missing something.
You're right, I added it in the description
@@Aitrepreneur Yeah man, no need to make it confusing lol :)
Man the algorithm is scary. I was just looking for this
Praise the algorithm!
Thanks for the vid. How do you then integrate the hypernetwork in the prompt to generate txt2img with the trained model? Is it done by selecting the hypernetwork on the settings page? Does the prompt text include a reference to the name of the hypernetwork?
I had the same question in mind
Thank you for detailing about hypernetworks. I've been tinkering with it lately - and your warning at the end of the video about overtraining was a godsend. I'm still very new to this so am not sure about a lot of things.
Firstly, your example of making a hypernetwork uses 20 images, but when making a hypernetwork, does it benefit from having a larger database of images to learn from, or does it start to become information overload and corrupt the training process?
Also, we're aware of embeddings and hypernetworks. They both seem to exist to train SD on our own images, but I'm not sure how they differ, and what the strengths and weaknesses of each are. Do hypernetworks render embeddings obsolete, or are there specific cases when you should choose one or the other? It seems like you can have multiple embeddings but only one hypernetwork running at a given time, so is there a way to fuse hypernetwork data together, or should it be trained fresh if we want to make changes?
yes
Instead of training it to put someone's face into images, is a hypernetwork better for training an overall style? I've been training one today based on my own drawings, and the result is becoming pretty uncanny. It's a weird feeling having a machine draw just like you do. Would Dreambooth be a better way for me to train on my own drawings for style rather than specific faces?
I've been waiting for this. Thanks for all your tut vids :)
I can't find where to put the prompt while training my hypernetwork. It should be down the bottom with "Save images with embedding in PNG chunks", but it's not there?
Tried it with an Nvidia P104 mining GPU with 8GB of VRAM; it took at least an hour to finish. Hypernetworks can bring customized styles to your image generation by applying different art styles, adding prompts which were not present in existing models, etc.
Note that I'm using an asynchronous multi-GPU setup for this, and since a mining GPU doesn't have video ports (and of course has limited functionality for CUDA applications), it's pretty easy to offload applications to it and set its power setting to performance.
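For anyone curious about the offload part: one way to pin the webui to a secondary card is the standard CUDA device mask in webui-user.bat; the device index 1 below is just an assumption about where the mining GPU sits in your system:

```bat
set CUDA_VISIBLE_DEVICES=1
```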
Right on! Thanks for the vid and I do agree. The Runpod method for me has created some great results.
Very nice! Great Content as always! 💯👍
The UI on the training tab looks totally different now, and I can't seem to find any newer tutorial. Since I cannot get Dreambooth to work (yet), would it be possible to do an updated tutorial on this at some point? I realize it's a tall ask, but it would be greatly appreciated.
It seems harder to train, but after you find the perfect hypernetwork file for your training, you can use it with every ckpt model as you wish.
I wonder which one is more successful: merging your trained ckpt files with other ones (like Midjourney or Disco Diffusion), or using your trained hypernetwork with the other ckpt models.
Merging is a bad idea; you often don't get the results you expect. Someone said that hypernetworks are better for finetuning rather than a complete training, but the end results still feel very subjective.
Hi, is there any updated version of this? The options in the webui version have changed, and for example, I don't see the option for "Preview prompt".
Noticed this too.
There is no place in my settings that says "finetune hypernetwork", and when I click Create Hypernetwork nothing happens; it says "Created: None".
Oooh, good stuff! Stable Diffusion UI V2.21 has new features, so I think I can try it (It has new models to upload and more, negative prompts, and more). And if I can't, I'll install Automatic 1111. Thank you!
Thank you for making these videos, it helps so much.
So do you use the name of the checkpoint in the prompt? Example sksyoung-12000 to get our results? Or can it be renamed when placed into the models/hypernetworks folder to something easier?
I am also confused by this
11:44 I think a lot of us tinkerers just like the challenge. We realize that there are established ways of doing things. Then again if no one tinkered, nothing would improve ;)
You said that you prefer Dreambooth, but I can't find any good Dreambooth tutorials. Do you have one??
Thank you for this! Probably a silly question... once we get the training to where we want it, how do we use it in normal ai image rendering?
In settings scroll down and select it in the hypernetwork option
I think the question also refers to the prompt: how do you call on the specific character from a trained hypernetwork? Suppose you train the Rhaenyra hypernetwork to the end, what would be the prompt? For Dreambooth it would clearly be 'rhaenyra person', but I'm not sure how to do it with hypernetworks. There's no field for either one, correct? BTW, thank you so much for your work. Having tons of fun!
@@Skydam33hoezee can someone answer this?
very interesting stuff, this should be something that gets better over time, so maybe too much effort now, but worth watching for the future
Great video and great analogy.
OK, I have the perfect PT File. But how do I finally use it? Do I have to convert it into a ckpt File?
Same question here. As I have read in the comments, it is not necessary to create a ckpt file; it should be enough to go to settings and, under hypernetwork, select the .pt file that you have trained, but at least for me it does not work.
@@mutenclon Yes I have tried this too, but didn't work.
You select it in settings in the hypernetwork section
Thank you! Very well explained!
Oh... another thing about the deepdanbooru feature: you must have CUDA version 11.0 installed. After installing, try to run the deepdanbooru feature to obtain tags via Interrogate DeepBooru. It may seem to throw some errors, but ignore them until you see keyword tags appear in the cmd window. You can try to add some other Nvidia libraries to fix them, but it may not be needed, as fixing the TensorFlow error messages also requires obtaining more proprietary CUDA toolkit/framework stuff that is locked behind the Nvidia Developer partner program.
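One more hedged note: if the Interrogate DeepBooru button doesn't appear at all, older webui builds only enabled it when launched with this flag in webui-user.bat (newer builds bundle it by default):

```bat
set COMMANDLINE_ARGS=--deepdanbooru
```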
It was worth a try! For those of you with a 6GB card (GeForce 1060)... it won't work. I ran out of memory. Bad news.
Fortunately for me I pick up my RTX 3060 12GB card tomorrow! I hope everyone that wants a bigger better card gets one soon!
lol, same boat here
did it work? mine crashes with a 3070 8gb
@@kevinm4x Oh yeah! 12GB handles it well and does larger renders. Faster. I'd been wanting to upgrade for over a year but you couldn't get a card. Now you can.
Omg this is so exciting and it looks so much fun! I hope I can manage to do it as well 🥰
As usual GREAT VIDEO! 🥳
Dude you are awesome I appreciate your videos so much!
I appreciate that!
Thanks for another great video :)) Finally, I can train something on my 3060 Ti :____ Not the best option, as you say, but at least it's possible! And it could get better over time, since it just landed. Thanks again for your hard work!
The 3060 Ti works very well for me. I processed the first step of 28 photos in just 20-30 minutes.
But I can use the hypernetwork with any (some) models, unlike Dreambooth. If I train through Dreambooth, can I apply it to all models?
Looks like I'm missing a step... I finished the training, restarted SD, typed the prompt [portrait of a man, elegant, smooth, highly detailed, sharp] that I trained, and the name [gabrielortiz]... I get nothing like my face. During training the images came out fine.
Can you help me out?
Do I need to create a new hypernetwork every time I train a new character, or can I do it all in one .pt file?
You should use only 3-5 pics at 0.000005... 3 of the face, one full body.
It seems like you don't like textual inversion or hypernetworks, but you're in love with Dreambooth... Kind of strange ;)
By the way, thank you for your effort, great vids!!! Keep up the great work!!!
Not strange, I personally just find it easier to use and the results are pretty much always perfect. I just let it run and 50 minutes later I have a perfect ckpt file I can reuse that gives me beautiful quality images, contrary to TI or hypernetworks.
3 of the face on a hypernet?
I'm thankful. I have a question: after finishing the training, to use the model you just have to go to txt2img and then, in Stable Diffusion checkpoint, select the trained model, right?
It seems to me that hypernetworks, textual inversion, and LoRA do the same thing: train a small model to apply to a bigger model to get an output in a fine-tuned style or using your own face.
What is the difference between all of them? Why use one vs. the other?
Hi all, very nice, simple and helpful video!
Unfortunately the field "Preview prompt" at 5:32 isn't shown in my web UI.
Has anyone an idea how to fix this?
Yours, Steve
I can't find "SD finetune hypernetwork" in my stable diffusion tab, how can i find it? thanks you
I had a question regarding training: if you have SDXL on your computer, is the process of teaching it what something is the same? For example, if after a while you realized that SDXL didn't actually know what something was, let's say it didn't know what an apple was, and you wanted to train it to essentially show it what an apple was, is the process the same as with people?
Stupid question, but how do you then use your model in image creation?
Thank you for another great video! 💙
I was excited at first, but then I realized... Bummer, I only have 4GB on my GPU...
Back to training in Colab unless some magical improvement allows 4GB to train on our own machines... so close! but yet, not so close 😅
Well tbh you're not missing much imo
@@Aitrepreneur You have a point, I believe the dreambooth way is a better way, I hope that this will become locally possible to train (with less GPU RAM) in the future, instead of using Colab... that will be really nice!
@@MrDanINSANE You can use Kaggle instead of Colab. It's free for 36 hours a week and resets every week, of course.
@@arielfikru9865 I am using Stable Diffusion locally on my computer; if I use the Colab version of SD, will it affect my local install?
Seems like even if you don't plan to use hypernetworks, this is a good way to learn the process.
But Overlord, the question is: how do I use Dreambooth with the Stable Diffusion webui? I managed to install Super Stable Diffusion 2 from your video installation guide and it works great! (Thank you.) But I couldn't follow your Dreambooth tutorial, because you used some sort of web GPU service to launch Jupyter and there I got lost. I have a 3090, so I am trying to do it all locally...
I don't have a powerful GPU so I can't really show it to you...
@@Aitrepreneur that's ok man, you help enough already, appreciate all your effort, thank you!
Where do I find the Prompt Template File? Do I have to download it from somewhere?
Can this be used to train a style instead of a person? Also, how do you use the model you trained in your Stable Diffusion prompts?
I have the same question, nobody knows?
Yes, check my style training video
I think I missed a step somewhere. When the training is complete, how do I make it use the trained checkpoint?
thank you for sharing this with us!!!
Amazing! Thanks for the tips. One question though: I see you made that robot avatar that "talks" during the video, together with your voice. Which technology did you use for it? I'd like to have a Midjourney-generated image and allow it to speak with the prompts I provide, much like what you did. Thanks!
How do I use the new model after I've trained it? I was expecting to see a new .ckpt file. What do you do with the .pt output file to generate new images?
You select it in the Settings tab, under Hypernetwork; then you use a basic phrase like "portrait of a woman" if you trained a woman.
Hey! What other use cases are people exploring beyond replicating faces? Is it good to train on particular styles? I thought, for instance, you could use it to create particular embroidery techniques, or ceramic glazes, etc.
How do I use my trained character? I write my hypernetwork name, but the program doesn't recognize the name and puts in a random person.
Is 8GB a hard requirement for vram? Or can you do it with 6?
Run the webui with --medvram, perhaps also with --opt-split-attention. Seems to be working for most of it.
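A sketch of how that looks in webui-user.bat (assuming an otherwise default file):

```bat
set COMMANDLINE_ARGS=--medvram --opt-split-attention
```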
Hello, after preprocessing images, when I go to Train, my cmd shows "RuntimeError: CUDA out of memory". Do you know how I can solve it?
That's your GPU VRAM issue. Better to get a GPU with at least 8GB of VRAM.
Awesome my friend
Thanks for the great tutorial, oh great AI master! :) Just to clarify: are you not recommending hypernetworks over Dreambooth mainly because they take longer, rather than because you get better quality results from Dreambooth?
basically both
Hello Air, may I ask where you got model.ckpt? I already have the latest SD, but the option for it does not appear.
How do hypernetworks compare to Dreambooth and textual inversion? Like, what are the differences and pros and cons of each?
What is the URL of the image crop website you mentioned in the video?
birme.net
So how can I use my models now???
The "finetune hypernetwork" setting is not showing up for me, what am I missing?
Thanks for your work on this!~ Do you plan to make a video on checkpoint merger as well?
Yes
8GB of VRAM, but still getting "out of VRAM" after about 450 steps or so. What is the recommended number of images for the dataset after they've been processed?
You can try --medvram or --lowvram, I think at least 20 is a good number
what gpu?
So, I don't have a normal Stable Diffusion model. Is there a special way I have to load it in, or did I just delete it, and where do I get it back?
Hi! Thanks for the vid. I have a question: the UI now has 2 learning rates for hypernetworks, do you know why? Thanks in advance!
Oh nice, I was actually looking into learning that right now haha.
Tried my hand at this last night with some results but I'd rather have someone hold my hand through it instead. Thanks!
My version which should be the latest doesn't have the Preview Prompt box.
If you recommend Dreambooth, how can you do it? I want to use my own GPU in Stable Diffusion and train images with Dreambooth. Do you have a video that explains how to do it? Thanks.
Check out my other dreambooth videos
URGENT! How do you generate art after you've trained the hypernetwork?
Another great video. Are you going to make another training video, since they changed some things from your last video? I noticed it stopped working for me in this new version of training for Dreambooth.
What changed?
@@Aitrepreneur That one file where you name what project it is, you don't touch that anymore. Since then, none of my models work.
When I click "Train Hypernetwork", I get a: failed assertion Only Float32 convolution supported. Then Stable Diffusion crashes/stops in the terminal. Any tips?
Interesting video, but the whole zooming in and out of the AI character gets annoying really quickly.
What do I do when I have the .pt file fully trained? How do I use it in txt2img? I think you missed this part.
Ohh! I see! I selected the trained hypernetwork under Settings -> Stable Diffusion -> Hypernetwork; then at the top of the Settings page there is a big button called "Apply settings". Click it and it will load!
What do I do once I am done training - what are the next steps to make use of the hypernetwork outside of training?
Here you go, everyone: "After placing the .pt in models/hypernetworks, you have to go to Settings, and at the bottom you will find Hypernetworks. There you have to select the hypernetwork that you want to use. Don't forget to push Apply Settings at the top of the menu before leaving."
Kind of a crucial step that I didn't see mentioned. Wow, the results are amazing.
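For reference, a sketch of where the file should sit (the filename is just a placeholder; note the folder is models/hypernetworks, plural, in recent webui versions):

```
stable-diffusion-webui/
  models/
    hypernetworks/
      mycharacter-3000.pt
```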
I'm using model.ckpt [cc6cb27103].
When I go to Settings > Stable Diffusion, it doesn't show me the option "Stable Diffusion finetune hypernetwork". It has a slightly different interface, and this option is not available.
Also, when I try to Create Hypernetwork, it doesn't create any and shows "Created: None".
It doesn't even preprocess images......
With the new version of Stable Diffusion I didn't find "finetune hypernetwork" in the settings. Does someone know where it was moved? I'm trying to adapt this video to the new version; it takes 10-20 hours, is that normal?
This is really helpful. Thanks! My concern is: if I analyze proprietary images, how secure are they from being copied?
Good to know. I had no idea it was even possible to overtrain it. I thought that surely it would do better if it was let go as long as possible, with as many reference images as possible to pull from... Is it only this kind of training that's like this, or do all of them do this?
I know Dreambooth does it as well. Even though the original recommendation was 20 images at 2000 steps, I tried one with 34 images at 3400 steps (100 steps/image is allegedly the sweet spot). Images I generated with that character were gritty and oversaturated. I did another round of training with the recommended 20 images/2000 steps and got really great results. Also, one of the bigger checkpoints I downloaded was overtrained in the latest version and its creators said it lost a lot of variety and clarity from the previous iteration. So, yeah, overtraining seems to be a thing across the board.
"Training finished at 0 steps" with no images created.
Did this and it resulted in a .pt file instead of a .ckpt file, which I can't select to use as a model for txt2img. Am I missing a step somewhere, or do I need to convert it somehow?
You are not the only person to ask this; I'm very frustrated nobody is answering it.
Once again you widened my horizons on what Stable Diffusion can do, thanks! Do you have a Patreon? It's a shame that the AItrepreneur is so restrained on his local machine! I would donate towards an RTX 4090 for you ;) I'm highly interested in doing stuff on my local machine, so it would be great if you could cover those things too, and then actually use it yourself of course ;)! Got mine yesterday, and from what I can see in your video it's 6 times as fast as your current setup. Training the hypernetwork in 10 minutes instead of an hour makes it much more attractive to use ;)
Yeah I do have a Patreon although I'm very far from being able to afford a new GPU
So what are hypernetworks good for? Maybe bad for trying to add new characters, since Dreambooth handles that pretty well. Is there anything you can only do with a hypernetwork that you cannot do with textual inversion or Dreambooth?
I can train to 10,000 steps in half an hour, should I increase the maximum learning rate?