Hi, thanks. I followed the video, but it's the same for me. At 2:15 I chose 1 and the install ... took very long and didn't show any steps; at the end, the next step is at 3:11. I didn't get to choose training by CPU or GPU ...
After installing torch 2 it doesn't ask me anything; it just gives me the first 5 options again. So I tried launching it in the browser and it opens, but I haven't tested whether it's working.
@CreatixAi Yes, I ended up making both 1.5 and SDXL models and used almost the same settings (other than that radio button you mentioned). Training for SDXL took about 4x longer, and the results look better than the 1.5 outputs.
You start with the prerequisites but miss the most important one: you need a card with 24 GB of VRAM. With no second-hand 3090s on the market any more, that means putting down almost 2 grand on a 4090. If I had known that up front, I could have saved myself 11 minutes.
Apparently you can rent virtual GPUs online for pennies an hour or so: RunPod, AWS, and others. Even if you mess up and take longer than hoped trying different settings, you'll still be saving money compared to the GPU costs you mentioned.
Excellent tutorial! TY!!! I haven't had a chance to try it but I have a 12GB 3080Ti so I'm hoping... You deserve like a thousand more thumbs up for this, it's very well explained. Timestamps too! :D
Since SD already knows that character, how can we tell if the LoRA even worked? You should really use unknown people to showcase these kinds of tutorials. Also, using famous people that SD knows as the token to train an unknown person works really well, btw. I've been testing with SD 1.5 and getting really good results.
Thanks for the comment! 😊 You can see if the Lora worked by comparing all the same settings with and without Lora, like I showed with Emma Watson. I think the Lora one has more likeness, don’t you think? About using famous people as the token to train an unknown person - haven’t tried it, but sounds like that could work well. Thanks for the tip ☺️
Batch size depends on how much VRAM you've got, not on what you want to learn. A lot of mistakes in this video. If you say "zero point zero zero one", that's not 0.0001. Dimension size: a larger value means more information from the model is used to train the LoRA. It has nothing to do with quality; it's more about flexibility, if we used a good dataset of photos with good captions.
Can you explain more about LoRA image information? I honestly would like to get better feet for my model and want to make a feet LoRA. What do I need to do? Is it smart to shoot only the feet on a white background to get the model without ugly anime girls with nice feet? Second question: if I want to make a full-body anime girl LoRA, which parts do I need to take? 1. head 2. chest 3. hips 4. knees 5. feet? Is it good to split the model into so many parts, or only head with chest, hips with knees, and feet? Also: I like your tutorial, and it's amazing to see a girl making such cool content. You rock, and you explain very clearly.
Thanks for your comment! ☺️ Here’s what I think, making feet or hands Lora won’t help with the deformed feet or hands on the actual generations. If it was this easy - we would all have perfect hands and feet on them! If you want to make a Lora for full body anime girls - you should use multiple images with full body anime girl references, not necessarily their body parts! Though, you can include middle and close-up crops too. Just think about it - what kind of image do you want the model to generate in the end? If it’s a girl with close-up feet (for example) then that’s the reference images you should be training on. Does it make sense? Hope this helps! ☺️
Ummm, hmmm. 1% per hour on a 3060 Ti, with a big CPU and 64 GB of RAM in the system. At 6 hours and 6% I stopped it. WTF; on forums there are heaps of people doing training on a GTX 960 in 2 hours, so what gives...
Hi, the tutorial is no longer accurate since it's old compared to the new version of Kohya. I really can't create a LoRA, because the video shows an old version :(
Hey Creatix! I just wanted to thank you for this video. Back when I was new to LoRA training, this video laid out the basics for my AI-related journey. To this day I come back to this video and send it to people getting started. Always will be grateful for the work you did there! Ralf
The English was so easy to understand that even a Japanese person like me could understand it. You have a very nice voice.❤
Aww 🥰 thank you so much for your kind words ☺️
Amazing explanation!!! Best one I found so far. ❤
Thank you so much ☺️
Holy shitz, the F2 shortcut is changing my life
Hi. I did start a training just fine using my 12 GB 3060, just by reducing the resolution to 768,768. But the card is a bit too slow, so here is my tip for low-VRAM cards: use 16 pics max, resize all pictures to 768,768 using BIRME, don't use buckets, and run only 8 epochs. That should get you a LoRA training in 2 hours.
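For anyone who'd rather script the batch resize than use BIRME, here's a minimal sketch with Pillow (the folder names are made up; adjust to your own dataset):

```python
from pathlib import Path

def center_crop_box(width, height):
    """(left, top, right, bottom) of the largest centered square."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

def resize_all(src="source_images", dst="resized_768", size=768):
    """Center-crop every image in `src` to a square, save at size x size."""
    from PIL import Image  # pip install pillow
    out = Path(dst)
    out.mkdir(exist_ok=True)
    for p in sorted(Path(src).iterdir()):
        if p.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
            continue
        img = Image.open(p).convert("RGB")
        img = img.crop(center_crop_box(*img.size)).resize((size, size), Image.LANCZOS)
        img.save(out / f"{p.stem}.png")
```

Center-cropping first keeps the subject from being squashed when the source photos aren't square.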
This tutorial... PERFECT! no explanation needed
Glad you think so! 😊
Your voice 💗! Clear and beautiful.
Excellent tutorial, thanks for sharing.
Hi, this is great, I learned so much from that video. The video was a little bit off from the audio though; I sometimes got lost between what you were saying and what was happening. The output I received is insane though. Thank you so much, you did an amazing job in this video. I'd like to see more of your content!
Thank you so much for your time making this!
Glad it was helpful! ☺️
Why did you jump back and forth between Dark Shine and MrBeast? I feel you should do separate videos with just a simple straight run-through.
I wanted to show that you can do basically everything the same for both style Loras and Character Loras. Thank you for your feedback though 😊
@CreatixAi Bro, I use a 3090 + R7 2700 + 980 SSD. When I train a LoRA on SDXL, my training speed is about 6 s/it. Hmm, it feels very slow. Is that normal? Thanks.
Excellent tutorial. well put together, easy to follow
Thanks, it works for me.
Great video, love your voice. Some things changed in the updated version; can you update the video with the new version of Kohya? I can find almost every parameter, but it may be confusing for new people.
I can train on a 3080 @ 10 GB with the LoRA Optimized preset. I only turn the gradients off and the buckets on. I just trained a LoRA (person) with 220 images (768x768) on the RealisticVision model. It took like 5 hours, but it works awesome :) I'm not using SDXL since it's slow at everything, and my experience is that it's worse at almost everything (generation, inpainting, etc.) than the Deliberate, RealisticVision, CyberRealistic, etc. SD 1.5 models.
So you trained a Lora on a model that is based on SD 1.5. That makes sense for 3080 and I used to do this as well ☺️
And yeah, SDXL is slower, but I think as the community works together to create better checkpoints, extensions, optimizations, etc for it - it will be able to overpower what we currently have. ☺️🤞🏻
I have a problem: I can't finalize the installation of Kohya. When I copy the last link and paste it, it asks me to install Python 3. After installing, I paste the link again and I'm asked to install Python again. I don't understand anything anymore. Thank you for explaining it to me.
You rock! Thank you for being concise.
Hi, I have a question if you don't mind. I'm new to image generation and have installed and run Stable Diffusion with Python 3.10.6, so do I need to delete everything and reinstall with 3.10.11 to run the Kohya SS GUI? The Kohya SS GUI GitHub page says to install that version. I would appreciate some advice because I'm a bit confused. Thanks.
very helpful
How long did it take to finish the training?
There is a changed value on the website (Learning rate: input a value between 0.001-0.004), while in the video it is 0.0003.
HEY! This is THE BEST tutorial I have seen to date! I just wanted to let you know and that I did it right first try! Funny thing is.. I'm only at 15gb of gpu memory out of 24 on my 4090! What numbers could I pump up to test this beast out and get even more quality out of training? Again, this is by far THE BEST TUTORIAL on training with SDXL. Just fun, good pace, perfectly instructed and awesome.
Aww 🥰 Thank you for such a sweet comment!! I'm so happy.
If you want to test it more - do a larger training size! Remember how I put down 768,768? You could do any number that's higher if you wanted to train a higher quality model. Let me know how far you can push that number, I'm curious also :)
The question is - is it worth it? I honestly don't know. But if you could train a 4K model, dammm.... 💖
I'm about to hit mine with everything from the video but training size set 1024,1024 hoping it works out.
🥰🥰🥰 Thank you .
Is this tutorial adaptable to, for example, training on a specific animal? I really think SDXL is lacking good scorpions, and I am currently creating a dataset of about 200 images. Do you have any suggestions for the settings when using that many images? I have 24 GB VRAM. Any suggestions on how to push the card to the limit to get the best results possible?
Thank you so much for this one!! Very structured and easy to follow! Also your voice is very pleasant to listen to! 😊
Thanks so much! 😊
9:49 do i have the wrong version of kohya or something? that tab doesn't exist
Thanks. Great video. Can you do one on flux too 😂 if you have info on good and bad results
awesome tutorial 🙂
LFG -- I just trained my first LoRA thanks to you
Good stuff! I'm so glad 🥰
Great guide. Got it right on my first go without using Kohya before. My only question is why did I only get three LORAs even though I asked for ten epochs and had "Save every N epochs" set to 1?
Thanks for the video! I loved it. Your video was very informative and useful.
Glad you think so, thanks so much for watching ☺️
dude i love the voice
I got a new laptop (4090). I noticed that when I start training, my GPU in the Task Manager performance tab stays at 0%. Training still proceeds, but why is that? Brand-new laptop. Yours shows activity for comparison at 16:24.
When I run the image captioning, it only captions with my prefix and nothing else. I tried with BLIP and WD14; what can I do to change this?
14:13: either there's a typo or you said the wrong thing four times in a row. You say "zero point zero zero three" (two zeros), but you type three zeros. Can you put up a correction? Not sure which is correct.
my bad, the video is correct, I messed up while recording the audio :3
The videos have been very helpful. Thank you. While there are many tutorials on creating LoRAs, it seems there is no tutorial on creating checkpoint models anywhere on YouTube. Do you have any plans to produce a tutorial on creating checkpoint models?
16:16:34-049467 INFO Command executed.
'launch' is not recognized as an internal or external command,
operable program or batch file.
I'm getting this error, can someone help me?
epiCRealism is based on SD1.5, so if I were to use that as pretrain model, should I check v2, v_parameterization or SDXL Model?
Great tutorial. Thanks! However, my training times keep growing to around 250 hours. I'm adjusting in accordance with your advice, but apparently I'm doing something wrong (I have around 50 portrait images of a person). I saw some other comments advising you to add a link to a JSON settings file. Is that something you'd consider?
I really wish I had a machine that had vram. The one I have now does not. 😭😭😭
I need to watch more of your videos to see if you discuss the other settings and what they mean. I love that you told us what epoch (sp?) means.
How much VRAM is needed? Is it possible with a 10 GB GPU?
Yes, 8 GB minimum
@@Frodoswaggns Can you share a guide? Or if this video is useful, can you share the config options?
Thanks. At 13 min, about batch size and style: with batch size = 1, same origin image? With batch size > 1, different styles? Is my understanding true or false? Or does batch size only affect training speed?
Great tutorial, thanks!
Thank you for watching 😊
well explained and very well video composited thanks.
I can't find the Deprecated option.
I am using Kohya_ss Gui v23.0.14
They moved it to the Dreambooth tab, it's called "dataset preparation" and has all the options of the deprecated tab.
@shippo72 Thanks, but I can't find the source model. Help?!
@kick851 You can use ANY SDXL model as your source. Pony Diffusion on Civitai is popular now (I think you can guess why, lol), but there are other more specialized models that would make training your LoRA simpler and quicker.
Thanks a lot! Nice straight to the point tutorial. Btw you can export your parameter settings as a json file at the top and share it along with the video or just reuse it every time :)
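Building on that tip: once exported, the settings JSON is easy to tweak programmatically between runs. A small sketch (the key names and file names below are illustrative; use whatever your export actually contains):

```python
import json

def tweak(cfg, **overrides):
    """Return a copy of an exported Kohya settings dict with fields overridden."""
    out = dict(cfg)
    out.update(overrides)
    return out

# Hypothetical usage, assuming an exported file and an "epoch" key:
# cfg = json.load(open("lora_settings.json"))
# json.dump(tweak(cfg, epoch=8), open("lora_settings_8ep.json", "w"), indent=2)
```

Copying rather than mutating keeps the original export intact, so you can keep one known-good baseline file.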
Great tip, thanks! 😊
Do you happen to have a good, working, fast, parameters settings .json that you can share? Thanks.🙏
I finished the training. However, when I put the safetensors into the SD WebUI (Automatic1111), ControlNet reports some missing keys. Using your approach of putting it into the text prompt, there's no issue, but it doesn't show any style impact.
I love you. Thank you!
🥰🥰🥰
When I hit prepare training data i get File "C:\Users\devotee\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 204, in _samefile
return os.path.samestat(src.stat(), os.stat(dst))
RecursionError: maximum recursion depth exceeded while calling a Python object
Any clue why this happens? Thank you for the wonderful video
Hello friend, is there going to be anything new on the LoRA training program for people? I haven't heard anything in a long time; maybe there are some new software features.
Can I ask you a question: what is the maximum number of photos to upload for training on a real person, full body? I have 250 photos of my wife; I trained on a 1.5 model and everything is cool. I used 150_gen and got top quality. Now I'm busy with SDXL, and I'm curious whether this amount will produce a style or a character, or whether you need to cut down to about 100 photos? Thanks.
Hi there! I'm a complete beginner. I want to create photo-realistic AI avatars of myself (ideally 9:16 aspect ratio). Is this the right technique for me? Should I watch something else beforehand, or is this more than enough? Thanks in advance.
This should be enough :) just make sure to use photos of yourself from different angles, lighting and clothing. ☺️
@@CreatixAi thank youuu ♥️
The tutorial is very clear, but unfortunately the program UI is now completely different and I failed to successfully reproduce anything. When going to train it gives "No data found. Please verify arguments (train_data_dir must be the parent of folders with images)" even though I reproduced literally everything you did, including folder naming.
i ran into the same issue T.T.. did you manage to find a solution at the end? :)
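For others hitting this: the error usually means the folder you point `train_data_dir` at isn't the parent of the `repeats_name` subfolders. A sketch of the layout Kohya's dataset preparation produces (the tokens here are illustrative):

```python
from pathlib import Path

def dataset_subfolder(repeats, instance_token, class_token):
    """Kohya folder convention: '<repeats>_<instance token> <class token>'."""
    return f"{repeats}_{instance_token} {class_token}"

def make_dataset_dir(root, repeats, instance_token, class_token):
    """Create <root>/img/<repeats>_<instance> <class>/ and return it."""
    folder = Path(root) / "img" / dataset_subfolder(repeats, instance_token, class_token)
    folder.mkdir(parents=True, exist_ok=True)
    return folder

# Point train_data_dir at <root>/img, NOT at the subfolder itself:
# make_dataset_dir("my_lora", 20, "darkshine", "style")
# creates my_lora/img/20_darkshine style/  (images go inside there)
```

The leading number is the repeat count per image, which is why renaming or flattening those folders makes the trainer report "No data found".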
if I'm hearing this right, you can't fit the full SDXL @ 1024 on your 3090?
dang, I was hoping my local card could handle that too
My bad for the confusion, I have no problem training SDXL @1024 with 3090 :)
3080 wouldn't work training on SDXL, but it worked with SD 1.5.
Hope that clears things up!
4080 also works with 16 GB
Hi, I would like to ask: I have a character sheet with different poses and perspectives, and I would like to make a LoRA out of it. When making a LoRA we need to choose a model, right? My question is: is there any model that is neutral, which can retain the style of the character sheet when I generate images with Stable Diffusion? Is there any way I can do that? Thank you.
7:04 Cool tip, didn't even notice that...
I tried this and kept getting errors.
1. It couldn't find the images because I didn't realize that subfolder we were creating needed to be structured like that for it to find the images?
I fixed that problem by switching the image folder from the Source Image folder to that sub-folder 'img'
2. Images were Truncated, had no idea what it meant so I did research and it seems to happen when image files don't get fully downloaded
I fixed this by loading them up in paint and simply re-saving them.
3. RuntimeError: The size of tensor a (2048) must match the size of tensor b (1024) at non-singleton dimension 1
I still don't have a clue what this means. I looked it up and it was reported not too long ago on their GitHub; the author seemed to think it was caused by leaving the name as the default 'last', but when I re-trained the model making sure to set the name to something other than 'last', it still gave me the same error.
I'm thinking this last error may be due to the fact that I cropped a majority of these images to non-specific dimensions to remove stuff I didn't need. I could be wrong, but I'm wondering if the dimensions of the images I trained the model with might be to blame. You said that with that one setting it wouldn't matter, but I still can't help but wonder, given the two numbers 2048 and 1024 in that error; those are numbers I usually associate with image or screen dimensions.
Also I should mention that Triton apparently doesn't support Windows, so it simply diverts to training with Dreambooth... so I question whether using the LoRA tab is even relevant, considering you have so many more options to fill in that I assume will potentially never be used, some of which you use in this video.
Note: this is why I hate everything written in Python. Either you have a version 0.00001 off from the version needed to run stuff, since every single update to Python breaks every application ever made with it (which is by far the stupidest thing ever), or it simply lacks proper support for getting around things as simple as truncated image files. What I mean by "lacks proper support" is that it doesn't have well-structured libraries that eliminate the need for developers to overlook small things like this; you really have to be on your toes when writing code for Python or you'll miss stuff you would otherwise never have to consider in other modern languages.
Can Kohya be used on Colab?
Why are my LoRAs saving as JSON files and not safetensors?
Thanks for making this tutorial. I learned a whole new set of skills for creating LoRAs in Stable Diffusion.
I am trying to create a LoRA for an artistic style, and I have an issue with my training: it is taking way too long, and I am not able to control the number of steps in the 'Parameters' settings. Right now it says the steps are '6200' and I don't know where that number comes from! It is also taking about 50 minutes for every 100 steps! 😔 I understand a mistake has been made in the settings.
My rough system config: 'AMD Ryzen 7 5800X 8-Core', 32 GB RAM, and an NVIDIA GeForce RTX 3060.
All help would be appreciated.
Thanks for watching :) I'm not sure what settings you've picked, but let's say your 6,200 steps are done in 10 epochs. That means each epoch does 620 steps. If you have roughly 15 training images at around 40 repeats (steps), that would explain a number in that range.
In my "Dark Shine" example, I had 16 training images at 20 repeats = 320 steps per epoch. 10 epochs = 3,200 steps. However, I also used "Train batch size : 4 " so we divide 3,200/4 = 800 steps. You can see that number show up at timestamp 16:26
Your options are: decrease repeats; decrease epochs; train a higher batch size. I really hope this was helpful!
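The step arithmetic in that reply can be sketched as a quick calculation (using the numbers from the "Dark Shine" example; the helper function name is just for illustration, not anything from Kohya itself):

```python
# Rough sketch of how Kohya's total step count comes together:
# images * repeats = steps per epoch, times epochs, divided by batch size.
def total_steps(num_images, repeats, epochs, batch_size):
    steps_per_epoch = num_images * repeats   # e.g. 16 * 20 = 320
    raw_steps = steps_per_epoch * epochs     # e.g. 320 * 10 = 3,200
    return raw_steps // batch_size           # e.g. 3,200 / 4 = 800

print(total_steps(16, 20, 10, 4))  # 800, matching the video at 16:26
print(total_steps(15, 40, 10, 1))  # 6000, close to the 6,200 the commenter saw
```

So to shrink an unexpectedly large step count, lower any of the first three numbers or raise the batch size.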
@@CreatixAi Thank you so much for the reply. I very much appreciate your answer.
I understand now how much time and steps are calculated. I am using the latest version. It comes with many presets. Do you have any recommendations on them?
Thank you again. 🙏
you beauty, love it.
Finally someone that knows how to speak clearly
😜🥰
No matter what settings I change, I keep getting a "returned non-zero exit status 1" error. Anyone know how to fix this? I don't think it's a RAM issue...
Thanks you 😊
One note I'd add: be sure to fill in the caption extension field with .txt, assuming we want captions and they work well. Otherwise, you get an error that no caption file was found.
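Along the same lines, it can help to sanity-check the dataset folder before training. Here's a small hypothetical helper (not part of Kohya) that assumes the usual layout of each image having a same-name .txt caption next to it:

```python
from pathlib import Path

# List images that are missing a same-name .txt caption file,
# which is the situation that triggers the "no caption file found" error.
def find_missing_captions(folder):
    image_exts = {".png", ".jpg", ".jpeg", ".webp"}
    missing = []
    for img in sorted(Path(folder).iterdir()):
        if img.suffix.lower() in image_exts:
            if not img.with_suffix(".txt").exists():
                missing.append(img.name)
    return missing
```

Running it on your training image folder before starting a run saves a failed launch later.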
make buckets
number of images (including repeats)
bucket 0: resolution (1024, 1024), count: 440
mean ar error (without repeats): 0.0
What if I didn't get the questions at 3:06?
It says it can't find Python for some random reason, even though I have the latest version installed and I use things like Stable Diffusion and other Python-based programs, so I don't understand why it can't find it.
love condensed videos
Love making them! :)
17:05 What work does the LoRA need? Can you guide us on making our LoRA perfect?
I want a LoRA of myself, and I need it to be as good as possible.
Would this work on my 3060 Ti 8 GB?
What minimum Nvidia GPU do you suggest?
Thank you for this awesome video!
Any idea if there's a Kohya notebook for training for those who cannot do this locally? Thanks in advance 🙏🏻
Can I ask which PC do you use?
It's a custom build :)
I have a 4070, but it takes so damn long. It works, though.
Thanks for the tutorial. You forgot to put the 'extra arguments' in your description; they are: scale_parameter=False relative_step=False warmup_init=False
Thank you for pointing it out, I went ahead and added it. Cheers! :)
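For anyone wondering about the format: those extra arguments are a single space-separated line of key=value pairs pasted into the optimizer's extra-arguments field. As a rough illustration of how such a string maps to keyword arguments (a hypothetical parser, not Kohya's actual code):

```python
# Parse a Kohya-style "extra arguments" string (space-separated key=value
# pairs) into a Python dict. Illustration only; Kohya has its own parser.
def parse_extra_args(arg_string):
    out = {}
    for pair in arg_string.split():
        key, _, value = pair.partition("=")
        # Convert the boolean strings users typically pass
        out[key] = {"True": True, "False": False}.get(value, value)
    return out

args = parse_extra_args("scale_parameter=False relative_step=False warmup_init=False")
print(args)  # {'scale_parameter': False, 'relative_step': False, 'warmup_init': False}
```

These three flags correspond to parameters of the Adafactor optimizer, which is why they only make sense when Adafactor is selected.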
How the heck is your Automatic1111 rendering SDXL stuff so fast? I have a 4070Ti and mine is slower
Because the 3090 graphics card is significantly more powerful than the 4070 Ti. And the video is sped up.
Precisely! I've sped up the video, who would want to watch it in real time? Not me, for sure :)) Thanks for watching, guys 😊
I don't think any stable diffusion tutorial has ever worked for me first try. My folders don't match the required pattern.
I got "Error caught was: No module named 'triton'",
but it still succeeded at making the captions.
Is it important to have Triton?
No, I have the same error, don’t worry about it :) good luck!
Ty for the fast reply :) @@CreatixAi
Any LoRAs on Civitai?
Hi, thanks. I followed the video, but at 2:15 I chose 1 and the install ... took very long and didn't show any steps. At the end, at the next step at 3:11, I didn't get to choose training by CPU or GPU ...
Please help me, show me: can Kohya use my GPU? When I caption images, GPU usage is 8% and CPU usage is 88%. I use a 3090 + R7 2700 + 64 GB RAM.
Has anyone figured this out for MacBook?
After installing torch 2 it doesn't ask me anything; it just shows the first 5 options again. I tried launching in the browser and it opens, but I haven't tested whether it's working.
Hey, sounds like it’s working! :)
@@CreatixAi Yeah, I made my 1st LoRA model of Salma Hayek lol, she looks so gorgeous in anime style 🤩
Great tutorial. I created my SDXL model using this tutorial... Question: can you use this exact same tutorial to make 1.5 LoRAs?
For SD1.5 a few settings will be different. For example, you wouldn't checkmark "SDXL". Have you given it a shot?
@@CreatixAi Yes, I ended up making both 1.5 and SDXL models and used almost the same settings (other than that checkbox you mentioned). Training for SDXL took about 4x longer, and the results look better than the 1.5 outputs.
Has anyone managed to install it for AMD GPU?
Will there be a SDXL training tutorial for Google Colab?
I don’t think I’ll be making one. There are already some good ones from other awesome creators ☺️
You start with the prerequisites... but miss the most important one.
You need a card with 24 GB of VRAM.
With no second-hand 3090s on the market anymore, that means dropping almost two grand on a 4090.
If I had known that up front, I could have saved myself 11 minutes.
Apparently you can rent virtual GPUs online for pennies an hour or so (RunPod, AWS, and others). Even if you mess up and take longer than hoped trying different settings, you'll still save money compared to the costs you mentioned for buying your own GPU.
Excellent tutorial! TY!!!
I haven't had a chance to try it, but I have a 12 GB 3080 Ti, so I'm hoping...
You deserve like a thousand more thumbs up for this, it's very well explained. Timestamps too! :D
Much appreciated! Good luck! 😊
Since SD already knows that character, how can we tell if the LoRA even worked? You should really use unknown people to showcase these kinds of tutorials. Also, using a famous person SD knows as the token to train an unknown person works really well, btw. I've been testing with SD 1.5 and getting really good results.
Thanks for the comment! 😊 You can see if the Lora worked by comparing all the same settings with and without Lora, like I showed with Emma Watson. I think the Lora one has more likeness, don’t you think?
About using famous people as the token to train an unknown person - haven’t tried it, but sounds like that could work well. Thanks for the tip ☺️
Watched the full video and then cried when i heard "this configuration will use about 20 GB memory". I got a 3070 💀
I'm sure you can spare 2 USD to spin up some GPU compute on AWS or RunPod? :)
@@JanBadertscher lol ye that's what I decided to do in the end
Batch size depends on how much VRAM you've got, not on what you want it to learn. A lot of mistakes in this video.
If you say "zero point zero zero one", that's 0.001, not 0.0001.
Dimension size: a larger value means more information from the model is used to train the LoRA. It has nothing to do with quality, more with flexibility, provided we use a good dataset of photos with good captions.
11:57. Make sure to checkmark it.
There is no box
Wtf do you mean?
Can you explain more about the LoRA image information?
I honestly would like to get better feet for my model and want to make a feet LoRA.
What do I need to do? Is it smart to photograph only the feet on a white background, so I get the model without ugly anime girls with nice feet?
Second question:
If I want to make a full-body anime girl LoRA, which parts do I need to capture?
1. head
2. chest
3. hips
4. knees
5. feet
Is it good to split the model into so many parts, or only:
head with chest
hips with knees
and feet
?
Good:
I like your tutorial, and it's very amazing to see a girl making such cool content. You rock, and you explain very clearly.
Thanks for your comment! ☺️
Here's what I think: making a feet or hands LoRA won't help with the deformed feet or hands in the actual generations. If it were that easy, we would all have perfect hands and feet in them!
If you want to make a LoRA for full-body anime girls, you should use multiple images with full-body anime girl references, not necessarily their body parts! Though you can include mid-range and close-up crops too.
Just think about it - what kind of image do you want the model to generate in the end?
If it’s a girl with close-up feet (for example) then that’s the reference images you should be training on. Does it make sense? Hope this helps! ☺️
Fantastic video. But it's super confusing that you first work on Mr Beast and then for some reason switch to working with "Dark Shine" 😅
04:06 AI voice busted; anyways, best tutorial ever. Thank you so much.
🤣 I should stop cutting out me taking a breath when I record audio so people stop assuming I'm an AI trained on a non-native English speaker 🤣💖
"RuntimeError: NaN detected in latents" :< It's not going through the photos, but thank you for the tutorial!
You missed checkmarking "No half VAE".
I get the impression that this tutorial was heavily inspired by Aitrepreneur.
Incorrect. It mirrors SECourses. Aitrepreneur's tutorial follows Joe Penna's recommendations to train characters based on existing celebrities.
❤
Ummm, hmmm: 1% per hour on a 3060 Ti, with a big CPU and 64 GB of RAM in the system. After 6 hours at 6% I stopped it. WTF, on forums there are heaps of people doing training on a GTX 960 in 2 hours, so what gives...
Ah yeah, let me just easily get a 3090, because then I can train a lora. Let me just do that right quick!
You can do it in the cloud. The other issue is that the monopoly and sucky tactics from Nvidia mean a $2000 card has to run at 100% for ages.
MimicPC and stuff like that, or Civitai, is the easy mode; this is for people who want options.
@@Larimuss No thanks I prefer local, not trusting clouds.
Hi, the tutorial is no longer accurate since it's old compared to the new version of Kohya. I really can't create a LoRA, because the video shows an old version :(
👋
👋🏻
@@CreatixAi 👋
Taking 9 hrs to train a LoRA with 15 images on a 4090, hehe...
14:08 You missed a zero in your description.