The reason "l4ura" works better then "laura" ist that the instance prompt should be a unique string. It is my understanding that the model already has an "idea" of "laura" since it has propably been trained with 100.000s of pictures of women (or other "things") called laura or associated with the name laura and mixes those in. So to achieve a highest possible likeness try using a prompt which is unlikely to have been used somewhere else in the dataset. Even "l4ura" might not be quite obscure enough, though i am sure it already works much better. Maybe also add your modified last name in or something more uncommon. This could be special enough to not get mixed up with things the model already "knows". Maybe it does not make a huge difference anymore. But it cant hurt, I guess. :) Anyway: This is the best Lora-Tutorial I have come across so far and I am only halfway through. Thanks for explaining so thorougly. It helps a lot. :)
With my limited knowledge this is also my understanding. I usually try to add not just numbers but special characters so the AI can completely differentiate from anything it may have been trained on.
What if I want to make use of the idea of what to model already knows? Like further fine tune the likeness of a celerity. Does this also need a unique identifier?
It depends on what you want to do. If the model has a lot of data for "laura", it will take a lot of time to train over that, but at the same time, you could benefit from the training as "laura", since the data may contain a lot that can mix with your data. You might end up with a lot of different poses, styles and what ever the latent space have learned from the world "laura". Training "l4ura" will just take less time since that word is not represented so much in the latent space, but you may get less variety and ultimately if you train long enough, the "l4ura" word will mostly produce your input pictures.
Just want to thank you for the simplest, clear and easy to understand explanation on how to create a LoRa model "locally" using Kohya! I was finally, after previous attempts, to create a great LoRa Model. It took 3 days on my limited video RAM laptop and I don't know why it created 30 epochs when I set the settings at 15, but it was successful.
Little Summary of différents Steps. And Thank you very much, i try a lot of vidéo and your are reallygreat 05:00 Step 1 Blip Captioning 06:27 Step 2 Rename Caption for better Results 09:11 Step 3 Folder Préparation / Dataset Preparation 13:10 Step 4 Source Model 16:09 Step 5 Tool -> DataSet Preparation 20:30 Step 6 Check Info in Training Folder Tabs, change model output name 27:50 Step 7 Training Parameters Tabs
Wow, this video is absolutely amazing! I am so impressed with how thoroughly it explains all the topics related to Lora training in stable diffusion. The content is incredibly informative and has definitely made me more aware of the subject. However, since it's quite a long video, I would suggest breaking it down into chapters. This way, viewers can easily navigate through different sections and find specific information they're interested in. Overall, fantastic job on creating such an informative and comprehensive video!
Definitely one of the best videos I’ve seen so far in regards to explaining neural networks and how to train models for stable diffusion. Thank you so much!
I trained a few times for fun already and this tutorial is seriously great! A lot of things that I were not sure about are explained in more detail so I can better wrap my head around it. I also like the presentation and clear explanations. While I like me some memes I am really glad that you don't put silly stuff in your video and keep it very focused.
Thank you for the tutorial, and especially for doing it in English! Your voice is easy on the ears. I’ve been watching a lot of Lora tutorials for the past few weeks, and I feel like your video has been most effective. Subbed! Oh, and thank you for the illustrations!
Laura, as i could see, the captions that you created didnt get used in the training. The console showed the error message when you started training. This is because you must set the file extension to ".txt". This setting is in kohya, Lora, Training, Parameters, Basic, "Caption Extension". Set it to ".txt"
Well done! I'm only 2 weeks in from knowing nothing and have learned a thing or two from this channel. Subbed. Thank you Laura. Very good pace and clear explanations.
Thanks Laura and L4ura for your Lora explanations. I have tried a few times and got mid results. Thanks to your efforts I understand a lot more now. Best Wishes and much love. I'm off to play with my newly attained knowledge. Keep up the good work!
I was exited to learn the right button function to generate indefinitely and than investigate in a statistical (“inverse AI”) way the differences in output for a category prompt like “modern design” between the different models. HIGHLY POLITICAL !!
Your videos are amazing! You explain everything so clearly and have been so helpful to me learning how to use StableDiffusion. I plan to make a LoRA for my wife and surprise her with some (what I hope will be) awesome images!
🎯 Key Takeaways for quick navigation: 00:00 🧠 Fine-tuning means generating a new model from an already trained model to improve performance or adapt it for a specific task. 01:09 📊 Understanding the main parameters in a neural network, such as in Stable Diffusion, helps in making informed decisions for model training. 03:00 🖼️ For training a Laura model, you need training images and regularization images. Training images represent the subject, and regularization images represent the class. 05:04 📝 Captioning images is important for training; you can use Koya to create image descriptions. 08:57 🧞 Generating regularization images, even in large numbers, significantly improves model performance and diversity. 15:09 ⚙️ Setting up Koya's parameters, including the source model, training data folders, and prompts, is crucial before training a model. 19:30 🧰 Preparing training data in Koya creates the necessary structure for training, including folders for images, logs, and model output. 21:18 🎯 Understanding neural networks and batch training helps optimize the training process and improve model accuracy. 25:12 🔄 An epoch in neural network training is the process of going through all batches of data. Iterations refer to one update of the model's parameters within an epoch. 25:51 📉 The goal of training a neural network is to minimize the loss function, which represents the model's performance. This involves finding the lowest point in the function, ideally the global minimum. 26:44 🚶♂️ The size of steps taken toward the minimum during training is determined by the learning rate. It's essential to strike a balance between a high learning rate for faster convergence and a low learning rate for precision. 27:52 💼 Training parameters include batch size (number of data batches), the number of epochs, learning rate, mixed precision (for speed and memory optimization), and more. 30:07 📈 Learning rate schedulers adjust the learning rate during training, with options like cosine scheduling or constant rate. The choice depends on whether you're fine-tuning or training from scratch. 31:11 💡 The max resolution setting should match the image resolution used for training in models like KoYaGAN. It's important for generating high-quality images. 32:19 🧩 To use a trained model with Stable Diffusion, you'll need to link the model file generated during training to the Stable Diffusion web UI, allowing you to generate images with specific prompts.
Exactly what I have been looking for! This video is excellent: it makes so many thing every so clear. Thank you Laura!! ❤❤❤ I will finally endeavour to make my own LoRAs....
Hey Laura :) I'm so happy to stumbled upon your channel. Your explanations and energy are beautiful as you are. I'm in love 🥰Take care and keep shining bright
I have been watching 3 videos together to learn how to make a LoRa. In the end I liked your video best. Its not too technical, while still giving a lot of information. I have learned a lot and just finished making my first LoRA and its very exciting!
The small white page next to picking the model in the lora menu is to find and pick your custom model, edit: it's great you talk about regularization images not many youtubers talk about how beneficial they are when creating loras, and some suggest to not use them which is fair but the loras are much better when they are used
@@jr-wg6os I tried to use reg images yesterday, but somehow I'm unable to prompt the subject ... I get only very vague similarities to the face I tried to train. Any idea what could cause this?
@@equilibrium964 I just had the same problem... trained on myself and the images are nothing like me, I have to specifically point out my skin colour, hair colour and all sorts of basica details which I didn't put in captions (so I shoulldn't have to prompt them). Only after doing that does it start to very slightly resemble me. Doing it without the reg images it instantly looks 100x more like me... Not sure what I'm missing. If you work it out let me know please!
@@equilibrium964 sometimes it's also the model your using to generate the images I've noticed "REALISTIC cartoon/anime" models are much more flexible in terms of generating likenesses but worse at things like skin tones and details, but it also matters how clear the pictures and so on, for instance I noticed much better results with images closer to 512,512 or 768,768 then if inuse 4k images even though you can enable buckets I seem to get better results with more regular quality images then high Def ones.
This video is so valuable. Thank you for being brief but thorough, and using plain language. I really appreciate it. I will probably be watching it multiple times as my go-to for LoRAs. Even though my interface is different than yours by the time my comment is made, your explanations made it easy for me to follow anyways and find what I need.
You are the best. The only who really explain the things (and probably the only who really knows on UA-cam). Can you make a video explaining how the IA reconstruct the image from the noise? I mean, something like, Noise x Pixels relationship? Thanks for everything.
I realize you did this a month ago, but so much of this is out of date already, wow. Time fly's in the Ai tech world. I hope you will take the time to update this video, you did a fantastic job. You obviously have more patients than I do to read all the garbally goop stuff. I just want something simple, and I hope you delivered, it's still training on the new SDXL model so not 20 min. Mine says 2 hours. We will see if it worked. Thanks Laura for your hard work.
If you had a large batch of images all taken from the same session, such as your 'red hoodie' set, you could use a text modification program such as sed or AWK to do a bulk update of keywords of things common to all pictures. For example, you could add in 'earrings' to all pictures.
thanks you so much for the video, i created my own lora with base model SD 1.5 successfully, however there are many sampling method, DPM++ 2M, 3M, euler, heun...etc, how do i know which sampler work best with my lora?
I would suggest to look at what other creators used to generate their models - in CivitAI, you can look for what model is similar to what you would like to generate. Hope it helps
Not sure why, but my LoRa didn't work with regularization images. Without reg it worked. With time hopefully I'll figure out how to improve its versatility.
Thanks for the video. How can i train a local Lora for SDXL? I havr. 4090 with 24Gb of RAM and is allways going over that limit? Is it possible at home or only for SD1.5 and small resolutions
For me, training without regulation img make largely better results, also without captioning, I did a training with 100 regularization images in 34 passes and 68 reference images in 5 passes, and I ended up with a model that was completely off. I did a second test without regularization images and without captions, and in 13 minutes I got better results than a 2-hour training session. This is the third time that regularization images have just messed up my training. Maybe I misunderstood something, but it seems like they are just being used as training data, which is not the point, Anyway, thanks for the tutorial, it's the most comprehensive one on YT
Hi Laura, Thanks a million for your efforts and your tutorial, I watched so many tricks thanks to your video! I was wondering I could I create a specific part of a body I would like to focus on man's chest (muscles and hair) and I was wondering if I need to takejust chest training images or the full body pictures of a man. And what about regularization pics? Should I take just chest or face or the full body? I'm a bit confused. Thanks a million!
You still have it in the "depracated" tab. You can also add this under the Lora> Training > Folders > "training comments": trigger: xxxx swapping xxxx with the trigger word
Great video, I was waiting for this after your last Kohya video... A small notes, think at the end of the video the training parameters section (27:55 ish) are not from Dreambooth LoRA, rather only from Dreambooth, and that's why there is no settings appearing for Network Rank and Network Alpha... also the Caption Extension setting is empty and this would lead the training not loading captions.. might want to fix that, even on LoRA you still need to enter TXT or whatever extension you are using (in fact in your terminal it says "No caption found for your 25 images") ... also I think bf16 works with Nvidia 30 and 40 series well... I would also be interested in your thoughts on what to set for Class Prompt when training an artistic style, for example a style of photography from particular artist.
Hi, for styles I would guess that you can use "style" as the class prompt because when you prompt for a specific style you can use "in the style of " or " style" so now you would prompt for " style".
i try making training whitout a human face or body so when i came to blip step it wont work normal how should i make it (im trying to make a instrument model)
If you want better results you can set your Network Rank also to 50-256 this will also make your model size bigger but will give you more accurate results to you or help. And Thanks your video was also a good help to training models.
Hello Laura, I followed your excellent presentation step by step, but I cannot obtain models under the "safetensors" extension! In the "Models" folder there is only one .json file... Do you have a solution to offer me? Thanking you.
Thanks very helpful tutorial. Although I'd advise you to install the extension WD14 tagger, it's enormously better than CLIP (or Danbooru) to generate helpful vocabulary to help describe the dataset images.
Hi, I'm pretty new to generative stuff and I have difficulties to get a good framing. Could LoRa be a way to teach SD some filmmaking vocabulary? If so, how would you set a training? Many trainings one after another, each with a bunch of pictures with a specific shot, then another specific shot, and another, etc. Or goods captions could be enough for one big training?
Great video! I have a qeuestion, if I wanted to have the lora be of a person in different poses (headshot, stadning up full body shown, side view seated etc. basically any positon) How would I accomplish this? I want my modal to have the same proportions in different generations
Very detailed tutorial until I got stuck from 9.13 where you explain how to get more images using the base Realistic vision model. I cannot find a model named "Realistic vision V2.0" in CivitAI. It would be very useful if you could provide references on how and where to create these images?
The model is this one: civitai.com/models/4201/realistic-vision-v51 (now updated to version 5.1) However, you should use the model you want to use as base for your training. If you want to train a realistic model this is perfect.
Hello, a big thank you for your great video. The installation went well. However, how can I change the model? I only have a dropdown menu with several choices. There's no option to reference another model. Can you help me?
Amazing video, thank you so much for the tutorial, I have a question in my case around 5% of the portraits I created looks like me, should I use these new "photos" and placed them in the new regularization images, or do I need to add them in the training images for getting better and accurate/results?
I think the result might be worse actually. It would be better if use different backgrounds, a subject/object from different perspectives in different places - key is to describe the surrounding. You could give a go and let us know ;)
@@LaCarnevali I asked chatgpt and it gave me the answer, that about 70% shall be with diffrent bg and about 30% should be transparent. this 100% are then 80%-90% because you will need 10%-20% of controll images (wrong images). ... i havent tried yet, but i will tell you about the result.
Hi, thanks for the content. Are you Italian? I ask you, why if I use 1024x512 diffusion it gives me two people next to me and not one? If I want a landscape image, why two people?
Question about the regulation images: Does it make the lora better if I choose a smaller class? So for example, "European woman", instead of just "woman"? How narrow or wide should I make the class, for best results?
Thanks for sharing this useful video, I am curious about your GPU setup (VRAM, n gpus, model) since it seems pretty fast, I tried searching in older videos but I could not find that mentioned. Since my setup with two Tesla T4 (16 gb each) is much slower than yours, and I want to understand if that is a problem of configuration on my side. Thanks in advance.
Hi Laura, I really like your videos, it is very helpful. One question, may I know how many GB of VRAM you have to run this training? I only have 4GB currently and intend to buy a new RTX. Hence, the question. Thank you!
The reason "l4ura" works better then "laura" ist that the instance prompt should be a unique string. It is my understanding that the model already has an "idea" of "laura" since it has propably been trained with 100.000s of pictures of women (or other "things") called laura or associated with the name laura and mixes those in. So to achieve a highest possible likeness try using a prompt which is unlikely to have been used somewhere else in the dataset. Even "l4ura" might not be quite obscure enough, though i am sure it already works much better. Maybe also add your modified last name in or something more uncommon. This could be special enough to not get mixed up with things the model already "knows". Maybe it does not make a huge difference anymore. But it cant hurt, I guess. :)
Anyway: This is the best LoRA tutorial I have come across so far, and I am only halfway through. Thanks for explaining so thoroughly. It helps a lot. :)
Thank you so much! It actually makes more sense! Pinned ;)
Also try a dataset of faces in PNG format with a transparent background
With my limited knowledge, this is also my understanding. I usually try to add not just numbers but special characters, so the AI can completely differentiate it from anything it may have been trained on.
What if I want to make use of what the model already knows? Like further fine-tuning the likeness of a celebrity. Does this also need a unique identifier?
It depends on what you want to do. If the model has a lot of data for "laura", it will take a lot of time to train over that, but at the same time you could benefit from training as "laura", since the data may contain a lot that can mix with yours. You might end up with a lot of different poses, styles and whatever else the latent space has learned from the word "laura". Training "l4ura" will just take less time, since that word is not represented as much in the latent space, but you may get less variety, and ultimately, if you train long enough, the word "l4ura" will mostly reproduce your input pictures.
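A quick way to sanity-check how unique an instance token is: look at how the CLIP tokenizer used by SD 1.x splits it. This is a small sketch of my own, not something from the thread; it assumes the transformers library is installed. Rare strings tend to split into uncommon sub-tokens, which is why they collide less with concepts the model already knows.

```python
# Inspect how Stable Diffusion's text encoder tokenizes candidate
# instance prompts; rarer strings split into less common sub-tokens.
from transformers import CLIPTokenizer

# SD 1.x uses the tokenizer from this CLIP checkpoint
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

for word in ["laura", "l4ura", "l4ur4x"]:
    print(f"{word} -> {tokenizer.tokenize(word)}")
```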
Lora by Laura
😂😂
Lora of Laura by Laura
About time someone made that joke… 10 months ago
Thank you Laura for the tips of Lora
The little extra explanations that pros forget all the time, leaving me confused, are not missing here. Thanks for going into so much detail.
Just want to thank you for the simplest, clearest and easiest-to-understand explanation of how to create a LoRA model "locally" using Kohya! I was finally able, after previous attempts, to create a great LoRA model. It took 3 days on my limited-video-RAM laptop, and I don't know why it created 30 epochs when I set the settings at 15, but it was successful.
Little summary of the different steps. And thank you very much, I tried a lot of videos and yours is really great
05:00 Step 1 Blip Captioning
06:27 Step 2 Rename Caption for better Results
09:11 Step 3 Folder Preparation / Dataset Preparation
13:10 Step 4 Source Model
16:09 Step 5 Tool -> DataSet Preparation
20:30 Step 6 Check Info in Training Folder Tabs, change model output name
27:50 Step 7 Training Parameters Tabs
Um...no thanks....we all saw the video...your summary sucks.
Clearest and most thorough explanation of LoRAs and their creation on YT. Thanks so much.
Wow, this video is absolutely amazing! I am so impressed with how thoroughly it explains all the topics related to Lora training in stable diffusion. The content is incredibly informative and has definitely made me more aware of the subject. However, since it's quite a long video, I would suggest breaking it down into chapters. This way, viewers can easily navigate through different sections and find specific information they're interested in. Overall, fantastic job on creating such an informative and comprehensive video!
Thanks!
Thanks, I'm glad it helped ☀️
Definitely one of the best videos I’ve seen so far in regards to explaining neural networks and how to train models for stable diffusion. Thank you so much!
I have trained a few times for fun already, and this tutorial is seriously great! A lot of things that I was not sure about are explained in more detail, so I can better wrap my head around them. I also like the presentation and clear explanations. While I like me some memes, I am really glad that you don't put silly stuff in your video and keep it very focused.
Thank you for the tutorial, and especially for doing it in English! Your voice is easy on the ears. I’ve been watching a lot of Lora tutorials for the past few weeks, and I feel like your video has been most effective. Subbed! Oh, and thank you for the illustrations!
Laura, as far as I could see, the captions that you created didn't get used in the training. The console showed an error message when you started training. This is because you must set the file extension to ".txt". The setting is in Kohya under Lora > Training > Parameters > Basic > "Caption Extension". Set it to ".txt".
Thank you! It's the first video I've watched with such a deep description of the theory
Would you, could you, please do this for style/artstyle training? Thank you.
Well done! I'm only 2 weeks in from knowing nothing and have learned a thing or two from this channel. Subbed.
Thank you Laura. Very good pace and clear explanations.
Great video on LoRA training! Others I've watched are all over the place, never explain it as concisely, or leave out info. Well done!
Amazing videos, really clear and detailed, thank you Laura
I'm just learning about LoRAs and your tutorial is absolutely the best I've seen. 👍 Keep up the good work!
Thanks Laura and L4ura for your Lora explanations. I have tried a few times and got mid results. Thanks to your efforts I understand a lot more now. Best Wishes and much love. I'm off to play with my newly attained knowledge. Keep up the good work!
How did it go?
Thanks for this! Loras by Laura! I've been waiting for a good tut on this and you delivered and then some!
best LoRA model video I found - thanks for this.
I was excited to learn the right-button function to generate indefinitely, and then to investigate in a statistical ("inverse AI") way the differences in output for a category prompt like "modern design" between the different models. HIGHLY POLITICAL !!
Me too, never seen that before
Your videos are amazing! You explain everything so clearly and have been so helpful to me learning how to use StableDiffusion. I plan to make a LoRA for my wife and surprise her with some (what I hope will be) awesome images!
🎯 Key Takeaways for quick navigation:
00:00 🧠 Fine-tuning means generating a new model from an already trained model to improve performance or adapt it for a specific task.
01:09 📊 Understanding the main parameters in a neural network, such as in Stable Diffusion, helps in making informed decisions for model training.
03:00 🖼️ For training a LoRA model, you need training images and regularization images. Training images represent the subject, and regularization images represent the class.
05:04 📝 Captioning images is important for training; you can use Kohya to create image descriptions.
08:57 🧞 Generating regularization images, even in large numbers, significantly improves model performance and diversity.
15:09 ⚙️ Setting up Kohya's parameters, including the source model, training data folders, and prompts, is crucial before training a model.
19:30 🧰 Preparing training data in Kohya creates the necessary structure for training, including folders for images, logs, and model output.
21:18 🎯 Understanding neural networks and batch training helps optimize the training process and improve model accuracy.
25:12 🔄 An epoch in neural network training is the process of going through all batches of data. Iterations refer to one update of the model's parameters within an epoch.
25:51 📉 The goal of training a neural network is to minimize the loss function, which represents the model's performance. This involves finding the lowest point in the function, ideally the global minimum.
26:44 🚶♂️ The size of steps taken toward the minimum during training is determined by the learning rate. It's essential to strike a balance between a high learning rate for faster convergence and a low learning rate for precision (see the sketch after this list).
27:52 💼 Training parameters include batch size (number of data batches), the number of epochs, learning rate, mixed precision (for speed and memory optimization), and more.
30:07 📈 Learning rate schedulers adjust the learning rate during training, with options like cosine scheduling or constant rate. The choice depends on whether you're fine-tuning or training from scratch.
31:11 💡 The max resolution setting in Kohya should match the resolution of your training images. It's important for generating high-quality images.
32:19 🧩 To use a trained model with Stable Diffusion, you'll need to link the model file generated during training to the Stable Diffusion web UI, allowing you to generate images with specific prompts.
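To make the epoch/iteration/learning-rate arithmetic in the takeaways above concrete, here is a minimal gradient-descent sketch of my own; the toy loss, dataset size and learning rate are all made up for illustration.

```python
# Toy gradient descent illustrating epochs, iterations and learning rate.
# One epoch walks through all batches; each batch gives one iteration
# (one parameter update); the learning rate scales the step size.

def loss(w):   # toy loss function with its global minimum at w = 3
    return (w - 3) ** 2

def grad(w):   # derivative of the loss
    return 2 * (w - 3)

num_images = 25      # e.g. 25 training images
batch_size = 5       # -> 25 / 5 = 5 iterations per epoch
epochs = 10
learning_rate = 0.1  # too high overshoots the minimum; too low is slow

w = 0.0              # initial parameter value
for epoch in range(epochs):
    for _ in range(num_images // batch_size):  # iterations in one epoch
        w -= learning_rate * grad(w)           # one step toward the minimum
    print(f"epoch {epoch + 1}: w = {w:.4f}, loss = {loss(w):.6f}")
```

With these numbers, 10 epochs x 5 iterations = 50 parameter updates; try learning_rate = 1.5 to watch the divergence the takeaways warn about.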
The best explanation of how learning rates work, that I have seen so far. Very useful video, thank you.
Exactly what I have been looking for! This video is excellent: it makes so many things ever so clear. Thank you Laura!! ❤❤❤ I will finally endeavour to make my own LoRAs....
best tutorial for LoRA training, by far, thanks
such an underrated channel, great video thanks
Hey Laura :) I'm so happy to have stumbled upon your channel. Your explanations and energy are as beautiful as you are.
I'm in love 🥰 Take care and keep shining bright
best video I've seen so far. You are the best!
Incredible video!!! I understood almost all the theory, and that's saying a lot. You have a new subscriber
I have been watching 3 videos together to learn how to make a LoRA. In the end I liked your video best. It's not too technical, while still giving a lot of information. I have learned a lot and just finished making my first LoRA, and it's very exciting!
thank you! such a useful video and great explanations 😊
The small white page icon next to the model picker in the LoRA menu is for finding and picking your custom model. Edit: it's great you talk about regularization images; not many YouTubers mention how beneficial they are when creating LoRAs, and some suggest not using them, which is fair, but the LoRAs are much better when they are used.
My personal LoRA and my girlfriend's aren't trained with a regularization folder and look perfect. Maybe I'll try again for fun!
@SantoValentino Same actually, but it has helped when I've had less clear images to work with
@jr-wg6os I tried to use reg images yesterday, but somehow I'm unable to prompt the subject ... I get only very vague similarities to the face I tried to train. Any idea what could cause this?
@equilibrium964 I just had the same problem... trained on myself and the images are nothing like me. I have to specifically point out my skin colour, hair colour and all sorts of basic details which I didn't put in captions (so I shouldn't have to prompt them). Only after doing that does it start to very slightly resemble me. Doing it without the reg images, it instantly looks 100x more like me... Not sure what I'm missing. If you work it out, let me know please!
@equilibrium964 Sometimes it's also the model you're using to generate the images. I've noticed "realistic cartoon/anime" models are much more flexible in terms of generating likenesses, but worse at things like skin tones and details. It also matters how clear the pictures are, and so on; for instance, I noticed much better results with images closer to 512x512 or 768x768 than if I use 4K images. Even though you can enable buckets, I seem to get better results with regular-quality images than high-def ones.
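Building on that observation, here is a minimal preprocessing sketch of my own (not from the thread) that center-crops and downscales oversized photos to a square training resolution. The folder names are placeholders, and it assumes Pillow is installed.

```python
# Center-crop and resize raw photos to a square training resolution,
# since images near 512x512 / 768x768 reportedly train better than 4K.
from pathlib import Path
from PIL import Image, ImageOps

src = Path("raw_photos")        # placeholder: folder with original photos
dst = Path("training_images")   # placeholder: output folder
dst.mkdir(exist_ok=True)
size = (512, 512)               # or (768, 768)

for img_path in src.glob("*.jpg"):
    with Image.open(img_path) as img:
        # ImageOps.fit center-crops to the target aspect ratio, then resizes
        fitted = ImageOps.fit(img, size, method=Image.Resampling.LANCZOS)
        fitted.save(dst / img_path.name, quality=95)
```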
you are so wonderful at explaining everything.
Fantastic video, very clear with just the right amount of detail to get me started down this path. Many thanks for sharing.
This video is so valuable. Thank you for being brief but thorough, and using plain language. I really appreciate it. I will probably be watching it multiple times as my go-to for LoRAs. Even though my interface is different from yours by the time of my comment, your explanations made it easy for me to follow anyway and find what I need.
You are the best. The only one who really explains things (and probably the only one on YouTube who really knows). Can you make a video explaining how the AI reconstructs the image from the noise? I mean, something like the noise-to-pixels relationship? Thanks for everything.
Wow, I didn't expect a quick rundown with graphs. Thank you!
I realize you did this a month ago, but so much of it is out of date already, wow. Time flies in the AI tech world. I hope you will take the time to update this video; you did a fantastic job. You obviously have more patience than I do to read all the gobbledygook. I just want something simple, and I hope you delivered. It's still training on the new SDXL model, so not 20 min; mine says 2 hours. We will see if it worked. Thanks Laura for your hard work.
Good photos are the best way: your tutorial plus good photos = perfect ;p
NGL your videos about how to use and train image generation models are the best
If you had a large batch of images all taken from the same session, such as your 'red hoodie' set, you could use a text modification program such as sed or AWK to do a bulk update of keywords of things common to all pictures. For example, you could add in 'earrings' to all pictures.
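For anyone who prefers not to use sed or AWK, here is a minimal Python sketch of my own that does the same bulk update; the folder name and the "earrings" keyword are just examples, and it assumes one .txt caption file per image. (With GNU sed, something like `sed -i 's/$/, earrings/' *.txt` would be the one-liner equivalent.)

```python
# Append a keyword shared by a whole photo session (e.g. "earrings")
# to every caption file in a dataset folder.
from pathlib import Path

caption_dir = Path("training_images")  # placeholder: folder with .txt captions
keyword = "earrings"

for caption_file in caption_dir.glob("*.txt"):
    text = caption_file.read_text(encoding="utf-8").rstrip()
    if keyword not in text:  # skip captions that already mention it
        caption_file.write_text(f"{text}, {keyword}", encoding="utf-8")
```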
Thanks so much! I've watched many videos about LoRAs and I think this is the best-explained one. It helped me a lot
I love this video. Awesome descriptions!!!
Thank you so much with the wonderful tutorial!
Thank you 🎉 Perfect overview and hands on 💪 learned a lot
Excellent video, clearly describing the workflow!
Finally a tutorial that doesn't say: now click here, do that, click there, type this.... thank you lora ... laura... l4ura🎉
Very informative video! Thank you very much! I will really rewatch this.
Amazing video, Laura! Thank you very much!
Thank you so much for the video. I created my own LoRA with base model SD 1.5 successfully. However, there are many sampling methods (DPM++ 2M, 3M, Euler, Heun, etc.); how do I know which sampler works best with my LoRA?
Only one way to find out
I would suggest looking at what other creators used to generate their models - on CivitAI you can look for models similar to what you would like to generate. Hope it helps
Great tutorial. Grazie. 30:24 was my favorite part.
Not sure why, but my LoRA didn't work with regularization images. Without reg it worked. With time, hopefully I'll figure out how to improve its versatility.
Nice explanation, thanks!
Amazing knowledge you're sharing, thank you so much!
Thank you for this very good video. You have explained everything very well and clearly. I'm going to try it out right now.
Thanks for the video. How can I train a local LoRA for SDXL? I have a 4090 with 24 GB of VRAM and it's always going over that limit. Is it possible at home, or only for SD 1.5 and small resolutions?
Thank you so much for the video, you're the best!!
Hi, thanks for your tutorials.
May I know the trick to get the dark mode? Thanks 🙂
For me, training without regularization images gives much better results, and also without captioning.
I did a training with 100 regularization images in 34 passes and 68 reference images in 5 passes, and I ended up with a model that was completely off. I did a second test without regularization images and without captions, and in 13 minutes I got better results than from the 2-hour training session.
This is the third time that regularization images have just messed up my training. Maybe I misunderstood something, but it seems like they are just being used as training data, which is not the point.
Anyway, thanks for the tutorial, it's the most comprehensive one on YT
What is the function of regularization images, and what kind of regularization images should be chosen to create a facial LoRA? Thank you
That was an absolutely fantastic demo
Hi Laura, thanks a million for your efforts and your tutorial; I learned so many tricks thanks to your video!
I was wondering if I could create a LoRA for a specific part of the body.
I would like to focus on a man's chest (muscles and hair), and I was wondering if I need to take just chest training images or full-body pictures of a man. And what about regularization pics? Should I take just the chest, or the face, or the full body? I'm a bit confused. Thanks a million!
Instance Prompt and Text prompt fields seem to be gone from the current interface. Makes me wonder how to assign keywords now.
You still have it in the "Deprecated" tab. You can also add this under Lora > Training > Folders > "training comments": trigger: xxxx
swapping xxxx with the trigger word
Great video, I was waiting for this after your last Kohya video... A small note: I think the training parameters section at the end of the video (27:55 ish) is not from Dreambooth LoRA but from plain Dreambooth, and that's why no settings appear for Network Rank and Network Alpha. Also, the Caption Extension setting is empty, and this would lead to the training not loading captions; you might want to fix that. Even on LoRA you still need to enter TXT or whatever extension you are using (in fact your terminal says "No caption found for your 25 images")... Also, I think bf16 works well with Nvidia 30 and 40 series... I would also be interested in your thoughts on what to set as the Class Prompt when training an artistic style, for example the photography style of a particular artist.
Hi, for styles I would guess that you can use "style" as the class prompt, because when you prompt for a specific style you can use "in the style of ..." or "... style", so now you would prompt for "... style".
Or simply don't use a class prompt or a keyword, so your LoRA always applies when referenced in the prompt
Best Kohya tutorial!
thank you laura for all the great content 🥰
I'm trying to train without a human face or body, so when I came to the BLIP step it didn't work normally. How should I do it? (I'm trying to make an instrument model.)
💛💛💛💛💛 my favorite ai teacher
If you want better results you can also set your Network Rank to 50-256; this will make your model size bigger, but will give you more accurate results. And thanks, your video was also a good help for training models.
Hello Laura,
I followed your excellent presentation step by step, but I cannot obtain models under the "safetensors" extension! In the "Models" folder there is only one .json file...
Do you have a solution to offer me?
Thank you.
Thanks, very helpful tutorial. Although I'd advise you to install the WD14 tagger extension; it's enormously better than CLIP (or Danbooru) at generating helpful vocabulary to describe the dataset images.
This is so helpful. Would u mind sharing a bit about ur computer specs? Crying in macOS rn and thinking about building my own Windows rig. Ty ❤
NVIDIA RTX 3090. Yeah, the Mac is not the best, but you could run SD on an external GPU, e.g. Colab, RunPod, Think Diffusion, Diffusion Hub
This was such a helpful video! Thank you so much, subbed for more schooling.
Can you just use Batch (250) for creating the regularization images instead of the constant generate?
Do more videos!! You are great!
Hi,
I'm pretty new to generative stuff and I have difficulty getting good framing. Could LoRA be a way to teach SD some filmmaking vocabulary?
If so, how would you set up the training? Many trainings one after another, each with a bunch of pictures of a specific shot, then another specific shot, and another, etc.? Or could good captions be enough for one big training?
That tip of "generate forever" was a mindblown for me. 😄
Hello Laura, can you make a video for the SDXL model? We love you, you are my dream teacher
Truly a first-class video, congratulations
Great video! I have a question: if I wanted the LoRA to be of a person in different poses (headshot, standing up with full body shown, side view seated, etc., basically any position), how would I accomplish this? I want my model to have the same proportions in different generations
You need to train the model using pictures of that person in different positions
Great vid but why this over dreambooth XL ?
This was made before the release of XL, but the logic behind is exactly the same.
good job. thanks
Thanks, this was helpful and easy to follow.
Very detailed tutorial, until I got stuck at 9:13 where you explain how to get more images using the base Realistic Vision model. I cannot find a model named "Realistic Vision V2.0" on CivitAI. It would be very useful if you could provide references on how and where to create these images.
The model is this one: civitai.com/models/4201/realistic-vision-v51 (now updated to version 5.1)
However, you should use the model you want to use as base for your training. If you want to train a realistic model this is perfect.
Hello, a big thank you for your great video. The installation went well. However, how can I change the model? I only have a dropdown menu with several choices; there's no option to reference another model. Can you help me?
Hi, you need to add your new model to the Stable Diffusion models folder
Excellent tutorial for someone who is an AI enthusiast and is just starting out. Do you intend to make videos about ComfyUI?
Thank you very much.
I get an error, "no image"... I followed the folder structure
Amazing video, thank you so much for the tutorial. I have a question: in my case, around 5% of the portraits I created look like me. Should I take these new "photos" and place them in the regularization images, or do I need to add them to the training images to get better and more accurate results?
Regularization images should not include photos of yourself. You can use them for training, but only if they are actually good.
Hey Laura, thanks for the helpful tips in the video! Do you think it makes sense to train a model for realistic people with transparent backgrounds?
I think the result might actually be worse. It would be better to use different backgrounds, showing the subject/object from different perspectives in different places - the key is to describe the surroundings. You could give it a go and let us know ;)
@LaCarnevali I asked ChatGPT and it gave me the answer that about 70% should have different backgrounds and about 30% should be transparent. That 100% is then 80%-90%, because you will need 10%-20% control images (wrong images). ... I haven't tried yet, but I will tell you about the result.
Hi, thanks for the content. Are you Italian? A question: why, if I use 1024x512, does diffusion give me two people next to each other and not one? If I want a landscape image, why two people?
Question about the regularization images:
Does it make the LoRA better if I choose a smaller class? So, for example, "European woman" instead of just "woman"?
How narrow or wide should I make the class for best results?
I think it could improve it, but it's not going to make much difference. I've never tried it tbh - let me know ;)
Thanks for sharing this useful video. I am curious about your GPU setup (VRAM, number of GPUs, model), since it seems pretty fast; I tried searching in older videos but could not find it mentioned. My setup with two Tesla T4s (16 GB each) is much slower than yours, and I want to understand if that is a configuration problem on my side. Thanks in advance.
Sweet & cute lovely teacher 😊❤😇
Hi Laura, I really like your videos, they are very helpful. One question: may I know how many GB of VRAM you have to run this training? I only have 4 GB currently and intend to buy a new RTX, hence the question. Thank you!
Hey, could you show us how to do it on Colab?
You should make a new version of this video with flux... please 😍🥰
All these comments and no one has said...
So, you've made a Laura Lora, Laura? 😁
Can this be used with Flux dev? Is a 3060 with 12 GB VRAM enough? What would the configuration be in that case?