Learned a lot as always, you aren't just beautiful, you are super clever.
thanks for always sticking around! means a lot
I believe this feature doesn’t work as advertised. A character is the combination of face, hair, makeup, wardrobe, and props, and it fails to keep all of these elements consistent. Perhaps this is why nobody is showcasing this new feature in full on YouTube.
amazing
But consistency in clothing is also necessary. I mean, it’s fine, but to call it consistent, it’s not enough for it to just look similar; it really has to be a 1-to-1 match with the reference image to be usable for creating stories. Imagine if the clothing were different in every frame; it would be difficult to maintain cohesion. I think it’s still better to train the model to include clothing as well, at least until this technology advances a bit more.
Hi @alrimvt02, great point! This is Coco from OpenArt. We recently added a toggle (after this video was made) that lets you choose whether to strictly keep a character's key features, including hair, clothing, and so on. It's on by default, and it helps a lot with keeping the clothes consistent. That said, we know it's not perfect yet, and we'll keep working on making it better.
Nice. Are you teasing us with those lip-synced videos of your Character? 🙂
Yes. Please create a video with your lip-synch workflow.
haha 🫢 we shall see a lip sync video soon!!
@@openart_ai Noice...
Looks fine if you want to view a character in a bunch of random outfits, but useless for storytelling. Not one of those images matched the input image character + outfit closely enough to use in a story. Kling Elements seems to be the only option available right now. Are you planning on adding it any time soon?
Hi @morpheus2573, great point! This is Coco from OpenArt. We recently added a toggle (after this video was made) that lets you choose whether to strictly keep a character's key features, including hair, clothing, and so on. It's on by default, and it helps a lot with keeping the clothes consistent. That said, we know it's not perfect yet, and we'll keep working on making it better.
@@coco-openart Thanks for the update. Good luck with your development towards the Holy Grail of character consistency. TBH, that's all that really matters now. After years of "it's coming soon" from AI platforms, I feel like putting down the storyboard and coming back in a few months when it's finally been sorted.
So lit!! Will OpenArt have lip sync soon? Also, I found that a non-square image usually gives better results; not sure why 🧐 I see you like to generate rectangular images too
working on bringing lip sync soon 🤫
and yes, i found square images usually come out much duller than vertical/horizontal aspect ratios!
But how do we find out how much these generations, renderings, and actions in OpenArt cost in credits? I noticed Helene's total went from 47,000+ down to less than 25,000 with what she did. Do you have a chart or table that lays out the cost of each thing? I don't mean your basic pricing.
hi, great question & observation! between 47,000 and 25,000, i actually did a lot of work other than recording clips for the video. i tested new features, generated some marketing content, and helped run experiments for users. i also don't have the best habits when it comes to using credits, because my account is whitelisted and i can add as many credits as i need for myself 🤣
but to view the credit cost for different actions: next to the generate button for each feature you'll see a line indicating "XX credits will be charged". It's usually 1 credit per image from most SD-based models, 5 for FLUX Dev, and 10 for trained models/characters. Most editor actions are ~5. (i don't remember everything exactly, but it's something like that; you can check before performing these actions!) More consumption-heavy things like training a model or creating a character cost 2,000 credits each.
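For anyone budgeting, the per-action costs quoted above can be sketched as a quick back-of-envelope calculator. This is only illustrative: the cost values are the approximate numbers from the reply (the UI shows the exact cost next to each generate button), and the dictionary keys and `estimate_credits` helper are made-up names, not an OpenArt API.

```python
# Approximate per-action credit costs as quoted in the reply above.
# These are illustrative assumptions; check the "XX credits will be
# charged" line in the UI for the real numbers.
CREDIT_COSTS = {
    "sd_image": 1,             # most SD-based models, per image
    "flux_dev_image": 5,
    "trained_model_image": 10, # images from trained models/characters
    "editor_action": 5,        # most editor actions are around 5
    "train_model": 2000,
    "create_character": 2000,
}

def estimate_credits(actions):
    """Total credits for a batch of actions, e.g. {'sd_image': 20}."""
    return sum(CREDIT_COSTS[name] * count for name, count in actions.items())

# Example: create a character, then generate 50 images with it.
print(estimate_credits({"create_character": 1, "trained_model_image": 50}))  # 2500
```

So a single character plus fifty generations already eats a quarter of a 10,000-credit balance, which matches the fast drawdown described above.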
You can easily burn through 10,000 credits in one day fighting the "demon", as I call it, in the AI's mind. For example, videos are 150 credits each for a 5-second clip, so 10,000 credits gets you 66 five-second videos, most of which will not be what you wanted. Sometimes it comes out right the first time, but most of the time the demon will distort or add something: a tail, a floating bubble thingy that follows the character's butt, an extra leg, or a foot growing out of another foot. So you have to keep fine-tuning the prompts and rerunning the process over and over. Different processes require different credit amounts, ranging from 5 to 150, but it gets expensive because the demon forces you to rerun them so much. Hopefully as I learn more it will get easier, but after two months I still get those goddamn tails and butt bubbles.
@ that's a comical way to think of it! Within our team we like to say we're doing a "lottery"; it's like gambling, or drawing cards from a deck, because of the inherent randomness of AI. Even though we've made some cool advances, the quality can definitely get better.
We're working hard so that the demon appears less :)
@@openart_ai I forgot to mention I enjoy Open Art AI immensely despite the demon.
❤🔥❤🔥❤🔥❤🔥😘🤟🏾
Also are there any ways to generate images that look hand sketched/drawn based on your trained character model?
Hi Helena! Thank you for sharing your tutorial. I'm very excited that your system has Kling for video generation, but unfortunately I can't find details about how many minutes of video I will get with 12,000 credits in the advanced package. Can you please direct me to your website where I can find this information? Thank you! -Lisa
@@lisainseattle1 hi, thanks for looking this up! each 5s video generation is 150 credits, so that's up to 80 of these 5s generations in the advanced pack :) which makes about 6-7 minutes
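The reply's arithmetic can be sketched in a few lines, assuming the 150-credits-per-5-second-clip rate mentioned above; `video_budget` is an illustrative helper, not an OpenArt API.

```python
# Assumed rate from the reply above: one 5-second video costs 150 credits.
CREDITS_PER_CLIP = 150
CLIP_SECONDS = 5

def video_budget(credits):
    """Return how many 5s clips a credit balance buys, and the total minutes."""
    clips = credits // CREDITS_PER_CLIP
    minutes = clips * CLIP_SECONDS / 60
    return clips, minutes

clips, minutes = video_budget(12000)   # advanced-pack balance
print(clips, round(minutes, 1))        # 80 clips, about 6.7 minutes
```

The same math gives 66 clips (about 5.5 minutes) for the 10,000-credit figure mentioned earlier in the thread.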
Thank you for your videos! I created my character model, but how do I generate images using my model in the different preset styles listed? It doesn’t let me select one. Would really appreciate your help!
How can we stop the system from putting a tail on every character? Even with creativity set to zero, it will inevitably put a tail or a strange-looking floating bubble around the character's butt. Why does it do that?
did you use the "create from one image" option? if generations keep showing a tail when your original image didn't have one, it's possible the character training wasn't that successful.
one thing to check that might help is whether you used a full-body reference image for the character. if you only had a half-body shot, the AI can only "guess" the lower half and might have gotten it wrong. if so, uploading a full-body reference image should help.
I love that KLING 1.6 has been added! I used to rely on Luma, but now that I'm astonished by how good KLING is, I know I'll just stick to OpenArt. One question: will image-to-image be available in KLING with OpenArt? That's a feature I know I'll miss from Luma.
Here’s a video I made using the model option for consistent character and KLING 1.6 in OpenArt:
ua-cam.com/video/cyGZe-7Hecw/v-deo.html
How do you animate like in the video?
My character's face is not staying consistent even though I trained it with more than 4 photos. :(
@@dawnmccarthy249 hi! is your character photorealistic or cartoon/digital rendering?
when using 4+ images, here are some things that could help:
1. make sure the face remains consistent across your training images
2. include a good variety of face angles in the training images; i.e., if all your training images are front shots, the character might not generate a good side profile
3. if you care a lot about the character's face (in some cases you'd care less about the face and more about the full body), try to include high-fidelity shots where the face takes up a bigger portion of the image: at least some upper-body shots, or even a headshot where you can see the face clearly
if you still have trouble feel free to dm helena on discord, you can find me in the openart server. link in bio