Join the Free AI Ranking Community www.skool.com/ai-ranking-free-1769/about
I'm watching all the videos I can about this topic and yours is the most helpful so far. Keep up the great work!
Thanks for the comment! I'll keep trying to provide more value!
Great video. Do you have any recommendation on how to train a LoRA on an aesthetic rather than a specific product? Like, should I use a trigger word for an aesthetic?
I'm afraid I don't. I have only trained it on products and people up until now.
Great video!! I have a question: I don't understand something. I trained several different people, but how do I access each of them? Is there a main playground that holds all of them? I would appreciate an explanation. Thanks!
I don't understand your question. I just showed you a playground to train a model. You can train an object or a person... what do you mean?
Thank you for this tutorial! I tried it by integrating my photos, but without including a product, and the results are quite surprising! Do you think it would be possible to incorporate multiple variables in the fine-tuning process, such as a product, a model, and a logo?
Yes, absolutely. I think you might have to train a different LoRA with different views of the product. So if you want to have your product with the logo in different positions and settings, take 15-20 images of the product with the logo, but in different variations and angles.
Thanks Nico. Any luck on the logo? I've trained my product. The colours and shapes are perfect, but man, it butchers the text. I think people just do a mask of some sort, but I can't figure out how to wrap text at an angle. I'd love to see a video specifically about products and text/texture.
I had some trouble with the logo on the van. Try to describe the logo in detail as well. So for the van example, in the prompt I placed "Kawascar, a white van with the logo 'KAWASCAR' written on the side". You can throw the image to GPT first and ask it to describe the logo itself in detail, then use that description in the prompt for Flux, if that makes sense?
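For anyone who wants to script that describe-the-logo-first step, here is a rough sketch using the OpenAI Python client. The model name, file path, and prompt wording are placeholders, not something from the video:

```python
# Rough sketch: ask a vision model to describe a logo in detail, then reuse
# that description in your Flux prompt. Model name, file path, and prompt
# are illustrative placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

with open("van_photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this van's logo in detail: exact text, "
                     "font style, colors, and placement on the vehicle."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

# Paste this description into the Flux prompt next to your trigger word.
print(response.choices[0].message.content)
```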
The value I took from this was that it can't be used for product photos; none of those were close enough, never mind accurate. Look at the rim in those best examples: they were way off. Still appreciate you putting in the time and effort so I don't have to. 👍
This is fantastic, thanks Nico. I was about to do the same experiment. IMO it's not quite there yet? At least for my use case, because it seems to struggle a bit with products that have a lot of text on them. But hopefully in a few months.
I think it depends on the images you train it with, the product complexity itself, and the prompting style as well. But perhaps version 2 will be the fix
Great video! Can you use the images commercially??
Yes you can!
Thanks for sharing this. Just one question: where is the link for the ChatGPT prompter?
Great stuff
cheers
Awesome! 🙂👍
Thank you! Cheers!
Hey Nico! Can you do a video for a proper branded product? Would that even work? Like for something like a Coke can?
that is not a bad idea
@@Nico_AIRanking Do it for some other, not-so-known brand. The Coca-Cola logo is already well represented in AI models' training data. Try finding some lowkey brand. Let's collaborate on this if you're down!
Any Flux hosting services that allow you to LoRA-train a Pro Ultra model with ~20 of your photos? Not the Pro model.
If you don't mind me asking... what is the point of the trigger word? Will it cause an error, or is that how Flux identifies the subject?
It makes it easier to describe and identify the SUBJECT you trained the model on, and to control how/where you want it to appear.
Is there any workaround to get the scale right, at least 80% accurate? My product looks smaller, sometimes thinner.
More training photos. You can try with 50-100 training photos. It will increase the time it takes to create the LoRA, but that should improve the output.
Can we train two subjects at a time, e.g., a consistent mug held by a consistent character in various environments or backgrounds?
Yes you can... you can do that in a ComfyUI workflow by combining 2 LoRAs of 2 different subjects, for example a girl and a purse.
@@vjatseslavjertsalov2829 please make a video on it
@@vjatseslavjertsalov2829 Can you do it here directly? I tried, but it's not working well for 2 people, even with the same amount of images for both.
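If you'd rather test this outside ComfyUI, here is a hedged sketch of the same idea in diffusers: load two subject LoRAs as named adapters and blend them. The paths, adapter names, trigger words, and weights are all placeholders, and in practice two subjects usually need the weights tuned down before they cooperate:

```python
# Sketch: stack two subject LoRAs at inference time with diffusers (the
# ComfyUI equivalent is chaining two LoRA loader nodes). Paths, trigger
# words, and weights are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load each LoRA under its own adapter name, then activate both.
pipe.load_lora_weights("./character-lora.safetensors", adapter_name="character")
pipe.load_lora_weights("./mug-lora.safetensors", adapter_name="mug")
pipe.set_adapters(["character", "mug"], adapter_weights=[0.8, 0.8])

# Use both trigger words in one prompt ("ch4r4w0man" and "grn24cup" are
# made-up examples of unique trigger tokens).
image = pipe(
    "photo of ch4r4w0man holding a grn24cup mug in a cozy cafe",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("two_subjects.png")
```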
Would this work for apparel photography? I.e., upload photos of a t-shirt on a mannequin / flatlay / on-model?
Yes, I have seen examples of others doing this! Just make sure that the trigger word is unique enough: not just 'greenshirt' but something like 'green24nrirShirt'.
@@Nico_AIRanking Thx! I will definitely give this a try. If it works it would have huge potential for the apparel industry.
I have attempted to train it for a t-shirt design twice, but it did not work. The limitation is that the design cannot have many variations, so I tried to replicate the same design and created around ten files (even though I knew it probably wouldn't work). I used a unique trigger word and experimented with multiple ways to apply that trigger to the t-shirt, but none of them were successful.
@@bhautube Do you think it would work for solid-color garments, e.g., a black t-shirt or a red hoodie? I don't need variations, but I need very high quality photos.
So... I can train a LoRA with different products and make them appear together in just one final image, right? I suppose it must be better to train it one item at a time.
For now you are probably a lot safer training one at a time.
Almost there, but not quite there yet lol
At 6:15 your problem is that you didn't create a unique enough trigger word; 'greencup' is too common, and the Flux model already associates it with many other images it was trained on. The trigger word should be as unique as possible for best results with LoRAs. This dates back to SD 1.5.
That makes complete sense! Thanks for the valuable feedback, I appreciate it!
@@Nico_AIRanking You can use something like "trcu" or "bble": words that don't really exist or won't be in the model you built on ;). If your training set has different backgrounds for the same object or person, it helps too. A side note: you don't need captions for objects or people (captions might help if the object isn't clear), as Flux has captioning built in.
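One cheap way to sanity-check how "unique" a candidate trigger word is: look at how the text encoder's tokenizer splits it. A common word like 'greencup' maps to pieces the base model already has strong associations with, while a made-up string splits into rarer sub-tokens. This is a heuristic, not a guarantee; the snippet below assumes the public CLIP-L tokenizer that Flux's text stack uses (Flux also uses a T5 encoder):

```python
# Heuristic check: common words tokenize into familiar pieces the base
# model already "knows"; invented trigger words split into rarer
# sub-tokens with fewer prior associations.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

for word in ["greencup", "trcu", "green24nrirShirt"]:
    print(f"{word!r} -> {tokenizer.tokenize(word)}")
```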
How do you use the ChatGPT prompt with Flux?
Hey Nico!
When I press start, it just keeps loading without anything actually happening. Any idea why this is? Also wrote in the skool forum!
Just going through the comments now! I'll chat in there.
A tool to create prompts from prompts 😂
promptception
Dude, change your keyboard or turn off the sound; it's really annoying for us to listen to your hyper-nervous typing. Just my feedback. Nice videos.
Noted 😅 I appreciate the feedback.
I have nice headphones and the bass when you type is intense
Guys, YouTubers, why do you all show such an easy object like a mug for training? It is too easy because a mug looks simple from all angles. Try training a sectional sofa on a photo set of difficult sofa shapes from different angles. I just trained one and the results are mostly bad and inconsistent. It only basically works if all the images are from the same angle, or maybe if you write captions and describe every aspect of the image in detail. Or am I wrong?
exactly, my product isn't even being generated correctly
I completely agree. Nobody has demonstrated training on a complex product. I'm trying to train one of our packages, which includes text, a logo, and a product box designed in Illustrator. It's been a frustrating experience, and I haven't had success yet. I've been experimenting with various methods, such as using captions, not using captions, and adjusting the number of training steps to either 2000 or 3000. It's difficult, but I'm still working on it.
is that free?
It's open source but requires good hardware to run, so I use it on a platform that charges very little.
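As an illustration of what that kind of hosted training looks like, here is a sketch using Replicate's Python client with a community Flux LoRA trainer; this is one example platform, not necessarily the one from the video, and the version hash, zip URL, and destination are placeholders you'd fill in from your own account:

```python
# Illustrative sketch of kicking off a hosted Flux LoRA training run with
# the Replicate Python client (one of several platforms that offer this).
# The version id, zip URL, and destination are placeholders.
import replicate

training = replicate.trainings.create(
    # Community Flux LoRA trainer; copy the exact version id from its page.
    version="ostris/flux-dev-lora-trainer:<version-id>",
    input={
        "input_images": "https://example.com/my-product-photos.zip",
        "trigger_word": "grn24cup",  # unique made-up token, per the tips above
        "steps": 1000,
    },
    destination="your-username/my-product-lora",
)
print(training.status)  # poll until it finishes, then download the weights
```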
It's good, but it's very slow.
To train, yes. But I think once it's trained, it's quick.
Is NSFW possible?
😂 Asking the real questions. Only via the API, not via a third party.
@@Nico_AIRanking Do you have to pay for the API, and how do you get it? Will the same process in this video work for NSFW images? (Sorry, I'm a noob at this stuff.)
RIP photographers. First requirement: you're going to need 10-20 photos beforehand. 😂 Well done, genius!
That you can take with a smartphone... you muppet. Perhaps pay attention.
When you take a picture, either with a camera or a smartphone, you’re the photographer. Well done again 😂
@@danpinho Hilarious that you think this isn't extremely disruptive for the photographer/modeling industry!!
@@Lovewithnoend I visit clients weekly to produce images, and it’s clear you have no idea what you’re talking about. One common request is: “We want our workers in authentic work situations.” Do you have any idea how to achieve that with AI? We’re dealing with the interiors of company buildings, papers on desks, and computer screens displaying the company’s specific software. Yes, AI can generate amazingly realistic images, but the level of detail required to meet client expectations simply isn’t achievable with AI.
Here's another example: products. I once took a photo of a croissant in my free time to show a concept to a client. His first comment? "This croissant wasn't made in our bakery." Honestly, all croissants look the same to me, but clients know their products down to the smallest details. Sure, AI can replicate a coffee cup, but a wedding or a company brochure featuring real workers? That's a different story: they want to see themselves there.
Before making silly comments on YouTube, visit real companies, work on real projects, and listen to real clients' demands. Reality is different from what YouTube presents.
@@danpinho It's clear you have enough brain cells so you don't crap yourself in the pants. Automation is a thing, believe it or not. Keep in mind that for AI training you don't need to capture the "beauty" or the "artistic" style of the product; you just need to teach the model what the product is. The same goes for the person. Even if you have to pay someone to take 20 pictures of the product/person, it will still be much cheaper than a photographer (add setup, lighting, equipment, logistics). With an AI model you can put that product in a photo in any imaginable way.