"Basically you trying to look at a cloud, and figuring out what the shape actually is" I love this metaphor! Great shorthand for interpreting noise with preexisting training, notions and biases
Just wanna say, man... your style is great. These how-to videos are usually a snooze-fest, but you make it entertaining. You'll definitely be my go-to guy from now on.
bro I was trying to make a model/lora in SD for a week, I spent hours learning the basics of programming, I almost gave up, I tried again and there was always an error somewhere, this video saved me and finally I managed to generate the images that I wanted, thank you very much bro, success with your videos, you gained another subscriber
Everytime I get to the training program i always get "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory" and it never shows that file when i open it through your link
Honestly Speaking I was trying to train my model back from 2022 December But I failed many times This is one of the most Practical and quick tutorial Thank you Man
I have a labeled dataset means I have a folder consisting of subfolders named according to the type of pattern they consist and another folder for the background so how to train with a dataset which have multiple sub-datasets and I want realistic images like texture of cloth should be generated so which model is best suited
@@ThatArtsGuySiddhant-tk4jb Thanks my friend... i fixed it.. However the next question is how do i start it again when it times out or when i close my system down ? Do i need to follow the steps as per the initial install each and every time i want to run ???
Afer I created a cpk file with your tutorial, I tried adding it to my personal stable diffusion and using it to make promts, but it is not working. How to I transfer that model created by your guidef over to my own stable diffusion?
So as per the info you provided . I think eather you didn’t downloaded the ckpt file properly or the file was too long and it didn't downloaded fully from that colab notebook
Thak you for the video , i am stuck at 9:32 the play button returns zero seconds and dosent train the image set . do note my image are 1024X1024 and do not contain a face , i am traying to replacate a searten style of the environment in my data set. any advice ?
Well the colab notebook that I used back then is now having some issues, I would recommend you to try some other colab note book for Stable diffusion there are many if you search for it, they all have similar steps like this one
@@ThatArtsGuySiddhant-tk4jb btw sir can this ckpt file be converted into tflite format and can it be used as a backend to send text input directly to the model and get an image output? If yes, can you please help me how should i pass the text and how to get the output
Does this still work? It won't work for me. I'm confused in rhe part where you input pictures. I put them in rhe data folder and then it just fails. How do you do it? You can't press run all?
@ThatArtsGuySiddhant-tk4jb When i started the training, it generated few samples in for the class category as well and when i manually observed these generated images, these were images of person (as expected) however the face was deformed and not well formed and the the sample quality for the person class was bad. Why is that ? will it affect the our trained model output quality ? For better result, should we add the person's images ourselves ? Also, if we no better class keywords which represents the instance object, then should we rather use that for better results ? Say rather than person (in the context of the example you have given), should we use "male person" or "indian male person" for better results ?
There is no need to specify that its an indian person(its mostly going to make you in some random rajistani as it happened to me when I accidentally trained my model with me wearing turben ) rather then that do as i say in the persons folder where all those deformed human type pics are change them with actual human. To be more precise if you are training your own model then change it with pics of people who look like you. (If your from south the pic of south indian man and women, if your from north the pic of north indian man and wormen) just always remember this one thing that your are teaching the ai how a person looks like to be more presise how you look...😊
hi lad, i have this issue : ImportError: cannot import name 'cached_download' from 'huggingface_hub' and cant fiw it, i tried many things, but still get that issue, im running on facehub 26 i think. and forums say its uncompatible with manythings, yet ive tried downgrading it to 24.1, 23.5 and 25.1 still the same issue, if not jax or compatibility issues. idk if you can help but if yes please do so. ive been on my pc for around 4 hours trying to figure this out.
hlo sir i have a problem, when i open stable diffusion in google colab, then after 5 min i see some problem like "runtime disconnected your runtime has been disconnected due to executing code" pls solve my problem pls
Btw, it is not a good idea to only have simple backgrounds. The simple background will be associated with your character and will negatively affect backgrounds. Some of your images needs have a background, but you need to roughly describe the background in your caption for those pictures. Likewise, you should add a caption for simple background pictures with 'simple background'. This helps the training to associate the background with other tags rather than with your character tag. This also helps because it can now identify that simple flat backgrounds are not associated with your character.
Not really brother sd already is trained on millions of images therefore it already has references for backgrounds and stuff. But whats new to it is you face. Thats why in the training images if its the face thats constantly changing the ai will focus on learning that itself. Your explination is correct when you're training a model in a particular art style. "Anime, Disney etc" I especially in this video only trained my own model. 😊 And btw sorry for replying so late😅.
@@ThatArtsGuySiddhant-tk4jb All good, and good insight. From my tests so far though, with enough regularisation (such as regularisation images, weight decay, max norm, network dropout) the AI will mostly learn what is consistent between images instead of everything about your images. If the background is consistently gray, then your generations will include more grey backgrounds.
Brother my model is training on me not a particular style. Its not an model trained on indian mens but a Siddhant model. Even if you use this model to generate images thay will have a very close resemblance to me. If thats what you want you can try but if you want a model specifically trained on indian then you have to find it on google brother😅.
@@ThatArtsGuySiddhant-tk4jb 🤣 thank you for the reply! What I meant was I have a modeled that I trained on a specific photo style. I would like to be able to upload a picture and have the model implement that style onto the uploaded picture, not sure if that’s possible. Thanks again!
@@ClaudetteFiguera Brother if you have trained a model then you should get output in that particular style that the very reason we train it. Isn't it😅. 1 Eather you are not prompting it properly. 2 or you have just trained it on a very small data set. Well I'll give you a short cut rather then training a whole new model for a particular style. Use midjourney upload the image which style you want and then use a bot called insight face and with that you might get you desired outcome. Just google it midjourney insight face.😊👍
Brother this colab note book is strictly for straining you own SD model for any other activity like video to animation watch my other video where I turned black widow into a disney character
Getting this error, however I have uninstalled torch 2.2.2 and installed 2.2.1 using both pip and conda. Conda list only shows 2.2.1 installed, however i keep getting this error.... Any suggestions? torchaudio 2.2.1+cu121 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible. torchtext 0.17.1 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible. torchvision 0.17.1+cu121 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
Hi! Great video Sid, very informative thank you :) Would you know work with this stable diffusion model via API? I'd like to use an automation software to send it prompts and get back the result. Thanks!
I tried the shared method 10 times but failed and getting the following error everytime I try: python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory
i am facing error on python code : Traceback (most recent call last): File "/content/train_dreambooth.py", line 18, in from accelerate import Accelerator ModuleNotFoundError: No module named 'accelerate'
Thanks for this great tutorial. However I'm stuck, after adding my images to the folder and adjusting the max train steps prompt, trying to run it, I get an error message saying "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory" Any advice? Thanks
Wow Awesome Video - love the edits, cadence and clarity! Really helped me understand diffusion as well! Thank you! Question: If I wanted to be crazy and try to locally install it on my home pc, would I clone it from Git, or download the files? I would love to see this being done!
First and foremost, I recommend not attempting to run Stable Diffusion on your PC. However, if you're interested, I've created a tutorial video. Simply clone the 'Automatic 1111' repository onto your computer and follow the instructions provided in the tutorial. This is advisable only if your PC is highly powerful; otherwise, the experience could be quite frustrating.
Bro for dreambooth only required 3 to 5 images. But I have only 1 image😅(of other person) now how can gather other images of that person he is not famous and he is a small school teacher
@@ThatArtsGuySiddhant-tk4jb bro how to i train a different persons in one model in different GPU because after first training model GPU runtime out after some or using different GPU how to i second person and the photo required is of my principle thanks for your reply please also reply me for this 🙏🙏
You talk a lot but I really enjoyed your chat, also, your editing style is very fun. ALSO, you're amazing with your results! Thanks! What city are you from?
No 😅. It's not about which laptop to choose, but how powerful your GPU is. To run SD, you need quite a powerful GPU, which itself could cost between $1,000 and $2,000. Therefore, it might make more sense for you to just get a Google Colab subscription. For a small cost of $10 a month, you can use a world-class GPU. Then, even a $200 PC would work. I myself have been running this whole thing on Google Colab.
Someone who makes a video about training your own AI and then tells people that installing Stable diffusion on a local PC is a bad decision, is clearly a SCAM. Stop making videos, please.
I don't know if someone write you this already but I think when you change the words in # commented lines around 4:56 this will not affect. This lines are there to show you if you want to add more concepts to the training :)
This is one of the very best tutorials I have watched on this topic.
"Basically you trying to look at a cloud, and figuring out what the shape actually is"
I love this metaphor! Great shorthand for interpreting noise with preexisting training, notions and biases
yes, very good analogy
Glad you enjoyed it, brother!😊
Just wanna say, man... your style is great. These how-to videos are usually a snooze-fest, but you make it entertaining.
You'll definitely be my go-to guy from now on.
Thanks brother, your kind words mean a lot.☺
Bro, I was trying to make a model/LoRA in SD for a week. I spent hours learning the basics of programming and almost gave up; every time I tried again there was always an error somewhere. This video saved me and I finally managed to generate the images I wanted. Thank you very much bro, success with your videos, you gained another subscriber.
Thanks brother😊
Every time I get to the training step I get "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory", and that file never shows up when I open the notebook through your link.
Hey did you get any solution??
same issue here
same issue
Honestly speaking, I had been trying to train my model since December 2022 but failed many times. This is one of the most practical and quick tutorials. Thank you, man.
Thanks brother😊
I followed every step but I'm running into an error (python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory)
Hey did you get any solution?
It's a path error somewhere; the code should show you the line where it occurs.
"python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory" what do I do if this happens?
Get Python 3.10.6 only, nothing above that.
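If the file genuinely isn't in /content, one workaround (just a sketch, assuming the notebook is based on ShivamShrirao's diffusers DreamBooth example, which most of these Colabs are; yours may fetch the script from somewhere else) is to re-download it in a fresh cell before the training cell:

```
# Assumption: the training cell expects ShivamShrirao's DreamBooth script.
# Re-fetch it into /content and confirm it's actually there.
!wget -q https://github.com/ShivamShrirao/diffusers/raw/main/examples/dreambooth/train_dreambooth.py -P /content
!ls -l /content/train_dreambooth.py
```

If the ls still shows nothing, the download URL for your particular notebook has changed and you'll need whatever script that notebook normally clones.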
This is one of the very best tutorials I have watched on this topic. Have not tried it out yet. But I will. Thx from Germany!
Thanks Brothers 😊
Great video. I can't imagine the work you have put into this one video, especially the editing.
I have a labeled dataset, meaning a folder with subfolders named after the type of pattern they contain, plus another folder for the background. How do I train with a dataset that has multiple sub-datasets? I also want realistic output, like the texture of cloth, so which model is best suited?
Sorry bro, I don't know about that. I'm currently learning about chatbots and no-code.
Had me chuckling the whole way through, nice and informative vid. Subbed
Thanks brother☺
Tell me the settings for an RTX 3060 12GB card in webui-user.bat. Thanks bro.
Great tutorial. I thought Andrew Kramer's After Effects tutorials were good, but you outdid him.
Can we use SD's weights and create an API to incorporate into an application? Please make a video about this.
Great idea, but I haven't tried that yet brother, so I don't think I'm the right person for this.
I'm learning about chatbots currently; I will put up a tutorial for that in a few days.
Thanks for your reply. I will try it by myself @@ThatArtsGuySiddhant-tk4jb
I keep getting this error, help please: "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory"
Try reinstalling that whole Colab file and then follow this tutorial word for word; do not miss any step.
@@ThatArtsGuySiddhant-tk4jb Thanks my friend... I fixed it. However, the next question is how do I start it again when it times out or when I shut my system down? Do I need to follow the initial install steps every single time I want to run it?
@@Content-Calling Yes you do, plus just keep doing something or other on your screen and it will not shut down.
@@ThatArtsGuySiddhant-tk4jb Does the Hugging Face token need to change every time?
@@ThatArtsGuySiddhant-tk4jb Having real difficulties restarting, any advice? Should I delete everything and run it again?
After I created a ckpt file with your tutorial, I tried adding it to my personal Stable Diffusion and using it to make prompts, but it is not working. How do I transfer the model created by your guide over to my own Stable Diffusion?
So, as per the info you provided, I think either you didn't download the ckpt file properly, or the file was too large and didn't download fully from the Colab notebook.
Try redoing it and do pay heed to all the key points / small details that I mentioned in this video.
Hi!
tensorflow-probability 0.22.0 requires typing-extensions
Best AI training vid on the tube, god bless.
Appreciate that brother😊
The "Specify the weights directory to use (leave blank for latest)" cell gives an error.
Thank you for the video. I am stuck at 9:32: the play button returns zero seconds and doesn't train the image set. Do note my images are 1024x1024 and do not contain a face; I am trying to replicate a certain style of environment in my dataset. Any advice?
Well, the Colab notebook that I used back then is now having some issues. I would recommend you try some other Colab notebook for Stable Diffusion; there are many if you search, and they all have steps similar to this one.
how did you do this video at 5:56?
You can watch another video I made on how I turned a video into animation. It's all about exactly that.😊 ua-cam.com/video/J3EuLW7phLo/v-deo.html
Brother it is showing "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory"
any help?
@@nahathblah2242 Can you elaborate a bit
@@ThatArtsGuySiddhant-tk4jb There was an error because my antivirus was blocking something. But it's fixed, thank you!
@@ThatArtsGuySiddhant-tk4jb Btw sir, can this ckpt file be converted into TFLite format and used as a backend, so that text input is sent directly to the model and an image comes back? If yes, can you please help me with how to pass the text and how to get the output?
@@nahathblah2242 Sorry buddy I haven't tried that yet. 😅 So I'm not the best person to ask about it.
Does this still work? It won't work for me. I'm confused about the part where you input pictures. I put them in the data folder and then it just fails. How do you do it? You can't just press run all?
@ThatArtsGuySiddhant-tk4jb When I started the training, it generated a few samples for the class category as well, and when I manually looked at those generated images they were images of a person (as expected), but the faces were deformed and poorly formed, and the sample quality for the person class was bad. Why is that? Will it affect our trained model's output quality? For better results, should we add the class person images ourselves? Also, if we know a better class keyword that represents the instance object, should we use that instead for better results? Say, rather than "person" (in the context of the example you have given), should we use "male person" or "indian male person"?
There is no need to specify that it's an Indian person (it will mostly turn you into some random Rajasthani, as happened to me when I accidentally trained my model with pictures of me wearing a turban). Instead, do as I say: in the person folder, where all those deformed human-type pics are, replace them with actual human photos. To be more precise, if you are training your own model, replace them with pics of people who look like you (if you're from the south, pics of South Indian men and women; if you're from the north, pics of North Indian men and women). Just always remember this one thing: you are teaching the AI what a person looks like, or more precisely, how you look...😊
How do I reinstall the entire Colab file? I've tried and tried but it's so difficult, please help.
nice effort brother, good work here
Thanks man😊
Hi lad, I have this issue: ImportError: cannot import name 'cached_download' from 'huggingface_hub', and I can't fix it. I've tried many things but still get that error. I'm running huggingface_hub 26, I think, and the forums say it's incompatible with many things, yet I've tried downgrading it to 24.1, 23.5 and 25.1 and I still get the same issue, if not JAX or other compatibility errors. Idk if you can help, but if yes, please do; I've been at my PC for around 4 hours trying to figure this out.
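For anyone else hitting this one: cached_download was removed from recent huggingface_hub releases, so the usual workaround (a sketch, not a guaranteed fix for every script) is to pin an older huggingface_hub and then fully restart Python so the old import is actually reloaded:

```
# Assumption: the failing code only needs the old cached_download helper.
# 0.25.x should still ship it; restart the runtime/interpreter afterwards.
pip install -q "huggingface_hub==0.25.2"
```

In Colab, prefix the line with ! and use Runtime > Restart runtime before re-running; locally, close and reopen Python, otherwise the already-imported module stays loaded. If it still fails after a clean restart, the package that actually needs pinning may be diffusers rather than huggingface_hub.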
Hello sir, I have a problem: when I open Stable Diffusion in Google Colab, after 5 minutes I see an error like "runtime disconnected: your runtime has been disconnected due to executing code". Please help me solve this.
Btw, it is not a good idea to only have simple backgrounds. The simple background will be associated with your character and will negatively affect backgrounds.
Some of your images need to have a background, and you should roughly describe the background in the caption for those pictures. Likewise, you should add 'simple background' to the caption of the simple-background pictures.
This helps the training to associate the background with other tags rather than with your character tag. This also helps because it can now identify that simple flat backgrounds are not associated with your character.
Not really, brother. SD is already trained on millions of images, so it already has references for backgrounds and such; what's new to it is your face. That's why, if the face is the thing that keeps changing across the training images, the AI will focus on learning exactly that. Your explanation is correct when you're training a model on a particular art style (anime, Disney, etc.); in this video I specifically trained a model only on myself. 😊
And btw, sorry for replying so late😅.
@@ThatArtsGuySiddhant-tk4jb All good, and good insight. From my tests so far though, with enough regularisation (such as regularisation images, weight decay, max norm, network dropout) the AI will mostly learn what is consistent between images instead of everything about your images. If the background is consistently gray, then your generations will include more grey backgrounds.
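For anyone who wants to try the captioning approach described above: it applies to caption-based trainers (LoRA-style scripts), not the DreamBooth notebook in this video, and the filenames and the "sks" token below are invented for illustration. The idea is one .txt per image, with the background described in words so it doesn't get glued to the character tag:

```
# Illustrative sketch only -- invented filenames and token, not from the video.
captions = {
    "photo_01.txt": "sks person standing in a park, trees in the background",
    "photo_02.txt": "sks person, simple background, grey background",
}
for name, text in captions.items():
    with open(name, "w") as f:  # one caption file next to each training image
        f.write(text)
```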
Hey, do you take commissions?
The algo picked this up and showed me this great video, keep up the good work.
Thanks brother😊
AWESOME TUTORIAL! This video is underrated 😅
Thanks Brother😊
I'm getting an index error "in ()"; how do I fix this type of error? Please respond.
python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory
well done pajeeet!
Thanks Brother😊
Is there a way to use your trained model to carry over the image style to other photos?
Brother, my model is trained on me, not on a particular style. It's not a model trained on Indian men but a Siddhant model. Even if you use this model to generate images, they will have a very close resemblance to me. If that's what you want, you can try it, but if you want a model specifically trained on Indians then you'll have to find one on Google, brother😅.
@@ThatArtsGuySiddhant-tk4jb 🤣 thank you for the reply! What I meant was that I have a model that I trained on a specific photo style. I would like to be able to upload a picture and have the model apply that style to the uploaded picture, not sure if that's possible.
Thanks again!
@@ClaudetteFiguera Brother, if you have trained a model then you should get output in that particular style; that's the very reason we train it, isn't it?😅
1. Either you are not prompting it properly,
2. or you have trained it on a very small dataset.
Well, I'll give you a shortcut rather than training a whole new model for a particular style: use Midjourney, upload the image whose style you want, then use a bot called InsightFace, and with that you might get your desired outcome. Just google "Midjourney InsightFace".😊👍
How do we fine-tune in this Web UI version of SD? It doesn't have any added features such as extensions, image-to-image, etc.
Brother, this Colab notebook is strictly for training your own SD model. For any other activity, like video-to-animation, watch my other video where I turned Black Widow into a Disney character.
Please make one for training SDXL now
Great idea bro, but currently I'm focusing more on my AI chatbots and automations. A video on that will be coming soon; can't say the same about SD.
I'm getting this error even though I have uninstalled torch 2.2.2 and installed 2.2.1 using both pip and conda. conda list only shows 2.2.1 installed, yet I keep getting this error... Any suggestions?
torchaudio 2.2.1+cu121 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
torchtext 0.17.1 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
torchvision 0.17.1+cu121 requires torch==2.2.1, but you have torch 2.2.2 which is incompatible.
same problem
Same problem
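For anyone stuck on the same mismatch: those messages mean pip still sees torch 2.2.2 somewhere, usually because a later pip install pulled it back in or because the shell is pointing at a different environment than the one conda fixed. One way to force the whole family back in sync (a sketch only; the cu121 index is an assumption based on the +cu121 builds in the error) is:

```
# Remove every copy pip knows about, then reinstall matching versions together.
pip uninstall -y torch torchvision torchaudio torchtext
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 torchtext==0.17.1 \
    --extra-index-url https://download.pytorch.org/whl/cu121
python -c "import torch; print(torch.__version__)"
```

If that last line still prints 2.2.2, the interpreter you're running isn't the environment you just fixed.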
For the Settings and Run part, could I train using any model I want? Including Flux?
The model name used in the video is now deprecated.
Hi! Great video Sid, very informative, thank you :) Would you know how to work with this Stable Diffusion model via an API? I'd like to use automation software to send it prompts and get back the results. Thanks!
Will it crash if I try this on mobile?
I tried the shared method 10 times and failed, getting the following error every time: python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory
How do I do that for multiple classes?
Super in-depth and great tips. Made sure I wasn't wearing a turban, for sure.
😂😂😂
Is there a reason you used different colors for the background on some of the images? Is there a reason I can't use white or green or something?
The sigma ending made me watch it again and again... keep the sigma.
Thanks brother😅
Very nice video dude
This Colab is not working, it's giving a dependency error.
Bro, how do I use this model via a REST API call? Should I push this model to Hugging Face?
Very nice video, I get totally without Speaker, bravo
I am facing an error in the Python code: Traceback (most recent call last):
File "/content/train_dreambooth.py", line 18, in
from accelerate import Accelerator
ModuleNotFoundError: No module named 'accelerate'
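If anyone else sees this: it just means the dependency cell didn't finish (or the runtime restarted after it ran), so the package isn't present in the runtime that executes the script. A minimal check, assuming a standard Colab runtime:

```
# train_dreambooth.py imports Accelerator, so accelerate must be installed
# in the same runtime before the training cell runs.
!pip install -q accelerate
!python3 -c "from accelerate import Accelerator; print('accelerate OK')"
```

Then re-run the training cell without restarting the runtime in between.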
Really cool!
Thanks Brother😊
What a great video, man... keep it up, bro.
Thanks brother😊
great job
Thanks mom😊
Thanks for this great tutorial. However, I'm stuck: after adding my images to the folder and adjusting the max train steps value, when I try to run it I get an error message saying "python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory"
Any advice? Thanks
same, anyone find the fix yet?
you are a life saver!!!!
Thanks brother😊
Awesome video and enjoyed your editing!
Thanks brother😊
01:15-01:25 Simple and best explanation :X
Nice tutorial, I hope it gets well known!
Thanks Brother 😊
Wow Awesome Video - love the edits, cadence and clarity! Really helped me understand diffusion as well! Thank you! Question: If I wanted to be crazy and try to locally install it on my home pc, would I clone it from Git, or download the files? I would love to see this being done!
First and foremost, I recommend not attempting to run Stable Diffusion on your PC. However, if you're interested, I've created a tutorial video. Simply clone the 'Automatic 1111' repository onto your computer and follow the instructions provided in the tutorial. This is advisable only if your PC is highly powerful; otherwise, the experience could be quite frustrating.
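For the "clone it from Git" part of the question, the short version looks roughly like this (an outline only; check the repository's README for your OS and GPU, since the launcher differs between Windows and Linux):

```
# Grab the AUTOMATIC1111 web UI and start it. On Windows run webui-user.bat
# instead of webui.sh. Needs Python 3.10.x and a recent NVIDIA driver.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
./webui.sh
```

The first launch downloads several gigabytes of dependencies plus a base model, which is exactly why a weak GPU (or slow connection) makes the local route painful.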
Proud of you bhai! Top-quality vid!
thanks brother
The 4070 Ti Super is now available with 16GB of VRAM for around 800 USD. Still not cheap, but not thousands either. Go nuts with a local install.
Great tutorial buddy! Thanks, learned some new stuff :)
Thanks brother
nice video. Thank you!
2:36 *than
Bro, DreamBooth only requires 3 to 5 images, but I have only 1 image😅 (of another person). Now how can I gather other images of that person? He is not famous, he is just a school teacher.
Ask him bro😅
@@ThatArtsGuySiddhant-tk4jb Bro, how do I train a different person in the same model on a different GPU session? After training the first model the GPU runtime ran out after a while, so using a different GPU session, how do I train the second person? The photo required is of my principal. Thanks for your reply, please also reply to me on this 🙏🙏
You talk a lot but I really enjoyed your chat, also, your editing style is very fun. ALSO, you're amazing with your results! Thanks! What city are you from?
I'm from India, brother😊
Good stuff. 👌
Thanks brother😊
Amazing
Thanks brother😊
You can use the Roop module without training your own model.
This Colab notebook is broken, DON'T USE IT
Waste of time..
can you share your model?
If you don't move the page while it's loading, you have to start again; I tried five times and then gave up! You explain it in a way that's much easier to follow though! You made me laugh 💪💪💪
Yes, this is the issue with Colab files: if you leave them dormant for a while, the system shuts itself down😅
Oi. I have £1000-£1300 to spend on a laptop / Mac for Stable Diffusion... what should I choose?
Bring me the horizon ❤❤..
No 😅. It's not about which laptop to choose, but how powerful your GPU is. To run SD, you need quite a powerful GPU, which itself could cost between $1,000 and $2,000. Therefore, it might make more sense for you to just get a Google Colab subscription. For a small cost of $10 a month, you can use a world-class GPU. Then, even a $200 PC would work. I myself have been running this whole thing on Google Colab.
The tutorial is great, but unfortunately it does not work anymore. Breaking changes in the dependencies :/
noooooo
Colab banned all usage of SD, sadly.
Too bad🥲
I think it does not work anymore.
No, this is not correct. I am using images of every dimension and the results are very good 😊☺️
7:20
Wait, hold up, how the fart did you get it to run on Google like that... that's cool...
10:30
this is hilarious as fuck!!! ahhahah
thanks mr poo in loo
May 2024: it doesn't work.
Don't use it please, I just wasted my time... lol
Why?
Someone who makes a video about training your own AI and then tells people that installing Stable Diffusion on a local PC is a bad decision is clearly a SCAM.
Stop making videos, please.
I don't know if someone has written this to you already, but I think that when you change the words in the # commented lines around 4:56 it has no effect. Those lines are there to show you how to add more concepts to the training :)
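For context, that cell defines a Python list of concept dictionaries, roughly like the sketch below (an approximation of how ShivamShrirao-style DreamBooth notebooks lay it out, not copied from the video). Only the entries that are actually uncommented get read, which is why editing the # commented block changes nothing unless you also uncomment it:

```
# Only uncommented entries are used. The second block stays inert until you
# remove the leading # signs and fill in real prompts and folders.
concepts_list = [
    {
        "instance_prompt":   "photo of zwx person",
        "class_prompt":      "photo of a person",
        "instance_data_dir": "/content/data/zwx",
        "class_data_dir":    "/content/data/person",
    },
    # {
    #     "instance_prompt":   "photo of a second concept",
    #     "class_prompt":      "photo of that concept's class",
    #     "instance_data_dir": "/content/data/second",
    #     "class_data_dir":    "/content/data/second_class",
    # },
]
```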
Hey bro, nice video. I'm also learning AI, and we could form a community to research and generate AI art and explore more. If you are interested... contact me...