No Fuss, No Annoying voice or music. Just showing you how to do it. Liked and Subbed.
It's Winter in New Zealand and I don't need a heater - my GPU is warming my room thanks to your Krita plugin Acly. Thanks mate!
The character separation is fantastic. The regions work beautifully. I suggest others use very broad, thick strokes for regions (as at the start of your video).
I've never been interested in the bare bones of ComfyUI; there is always SOMETHING that needs to be done to an image to correct it the way you want. The beauty of this plugin is that it is embedded in a powerful image editor!
I have access to Adobe products at my place of work that have generative AI, and this plugin is superior. I can get an entirely different look just by changing the model. You can't do that with Adobe.
Acly, you are a Legend
Which GPU do you have?
@@study_fizz_malayalam NVidia RTX 2070 Super
Thanks so much for this! I switched to Krita from Photoshop because of your plugin 😁
Good move, plus what you were spending you can now save toward an upgrade.
I wonder if Adobe is looking at this extension and getting very worried... I mean, the price of Krita plus this extension can't be beaten.
After Adobe tried to fu** over their customers, people are going to migrate to alternative options, and Krita is a perfect candidate, especially with more powerful AI.
This tool is amazing. The freedom is absolute. Thanks, Acly. Your work is supreme.
Totally agree with all the accolades! This and all the stuff you do for Krita and ComfyUI users is outstanding! Thank you so much! Know that your work is loved.
Thanks so much for this update Acly. This looks extremely powerful - can't wait to go home and try this
Amazing stuff! I have been trying the regions for a couple of hours and I find them just powerful! A tool that will give us a lot of control, thanks!
You are amazing, thank you for everything you do. I love using images to express myself, and with major disabilities your tools give that expression back to me in a new way. Even better, I sucked at faces lol
Look forward to your next video on this Streamtabulous!
This is an insane demonstration of the creative possibilities for art generation and an artistic companion. Hard to go back to working the same way after this.
Cheers,
b
Amazing new addition to Krita Diffusion! Such a great addon, yet sadly it still isn't getting the exposure it deserves!
Thank you Acly, your work is very impressive. You have saved me hundreds of hours of work. ❤
Thank you, Acly! This is so good.
This is magic... The only problems I have with Krita are some keys, shortcuts, some simple stuff, but this... will power up this software a lot.
Thank you for all the hard work you have done to achieve this!! 💯💯💯❤❤❤
Hello Acly
I just wanted to say thank you for this amazing work. Krita + SD AI is the way to go for AI art generation.
Keep it up.
Thanks for all the work, man, your work is amazing.
Hi Acly,
Thanks for the awesome plugin!
I wanted to ask you something about it. Could you please tell us where we can load the embeddings? I'm feeling a bit confused and can't figure out where they are.
I have rough sketches and I want to convert them into manga style or colored manga. How can I do that?
Load them as a line art control layer. Add them as a layer in Krita, select "Add Control Layer" next to Strength, change mode to Line Art or Scribble, select your sketch layer. Then make a new empty layer on top, and generate in it. The sketch should guide the generation.
Mind-blowing 🤩🤩🤩
That's beyond awesome, kudos on the good work.
It would be great if the ComfyUI process could be turned into a similar GUI, because that's what's missing.
you are the hero
Thank you so much!!!!!!!!!!!!!!!!! Definitely, when I have an income from this, I will sponsor you!!
Thank you for this amazing tool, man!
Thank you, Acly!
Thanks for all the work. I have a question.
How can I now copy from live generation to a new layer without destroying the previous result (the active layer)? Is there a way to restore the old mode, or to add an additional button next to the new one: "Copy to active layer" and "Copy to new layer"?
There's a second button in the latest update (1.18.1):
github.com/Acly/krita-ai-diffusion/releases/tag/v1.18.1
Someone asked, and I will ask too:
Which checkpoint and LoRA are you using, mate? I like it.
Thanks for the plugin!!
I don't know why, but when I'm inpainting, people always appear deformed or blurred... :/
Amazing and great work🥵! Thanks
Oh, you are the guy that made the plugin! Fantastic work, I use this all the time. I have questions, though. All the generations ever made, where are they saved? Because in my mind this has to take up a lot of disk space, but I still can't find it. Also, can I upgrade to the latest version without my generation history going kaput?
Generation history is stored inside the .kra file (compressed). Documents from previous versions usually open fine in the latest version.
My Krita doesn't have that "T+" button used for the segmentation ControlNet. How can I make this button appear?
incredible, thx!
Great regional method! Would you please also share which XL model you were using for this illustration style? Mine doesn't seem to work as well.
ZavyChromaXL was used in most of the video
@aclysia is this the default option?
btw just wanted to say I love you 😀👍
your krita plugin is the best AI tool out there. I use it every day and it is so much fun to use!
This plugin blows my mind! Is there an option to generate, for example, consistent characters between comic frames? Ideally based on my own painting?
Which checkpoint and LoRA are you using, mate? I like it.
This is so cool! But unfortunately it throws an error on the regional masks due to the fact that they have alpha. The only solution a friend and I have found is to give them a white background and use multiply, but then the zones don't work. Not sure how this works for Acly, but it gives errors for me and a friend. Encoding the alpha throws the error:
expected mat1 and mat2 to have the same dtype, but got: float != struct c10::Half
Any help would be MASSIVELY appreciated. I love this addon so much and really want to use its regional features.
I suspect it doesn't have anything to do with alpha. But it's impossible to tell without a bit more information.
You can create an issue here: github.com/Acly/krita-ai-diffusion/issues
Please provide GPU info and log files (github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#log-files)
I had the same problem. Updating the "ComfyUI Nodes for External Tooling" in the manager in ComfyUI fixed it. I was getting a "Background Region Does Not Exist" error.
Hello, can you please make a video on how to install torch-directml for amd users 😢
NEED HELP. On Ubuntu I followed everything according to the instructions and the version is fine, I see the plugin inside Krita, but it doesn't show in the docker???!
Where do I get the XL graphic bold style you are using?
Always funny looking at the impossible things AI creates. Like the shadows being cast by the buildings the opposite way from the sun :D
How do I generate with transparency, like "LayerDiffuse"?
I need someone to explain the first 46 seconds or so like I'm 5. Also, what is the difference between a region and a mask? How would you do the "monkey", for example, without regenerating everything around it, but NOT in "live" mode?
Installed 1.19.0. When I click the generate button, it only generates the image for the background layer; the other linked regions are not auto-generated. I precisely followed the steps shown in the video...
I have a similar problem; regions are problematic for some reason.
I am having an error about the InsightFace installation... There is a solution for Windows, but how do I solve it on Linux?
Are you using an online model or local? Also, what's your GPU?
holy shit this is amazing
Hello, can you make one of Goku? In the style of Akira Toriyama
Do you have a Discord or anywhere there could be a help channel for this? You make it seem easy, but I'm having a lot of trouble since I'm not used to it.
discord.gg/pWyzHfHHhU
It's not really hard, but it takes some trial and error to figure out the limitations of AI, what works and what doesn't.
The rough paintings on each layer: are they generated in "automatic scribble" mode? Or do they work as a "region mask" that tells the plugin where to generate? (So the text prompt delivers all the content info, and the sketch has nothing to do with content details?)
That depends entirely on the Strength slider. At 100% they are used only as region masks, the actual colors/content doesn't matter. This is shown at the beginning (adventurer/skeleton). For the second part, the rough painting is used as a starting point for the AI generation, and color/position/shape influences what is generated. Lower strength restricts how much generation can change what you paint.
@aclysia Okay, that sounds instructive. I have the problem that the generator ignores my mask borders (strength at 100%). Maybe this is why my scenery is a bit the inverse of your first example in the video: an open door at the rear wall (no windows) of a modern office (minimal, elegant) interior, leading into a green rainforest with rich vegetation. Maybe this is problematic because the smallest mask should show something that is in the background (but small), while in your example the smallest mask (the traveler) is in front.
Btw: is the prompt that is common to each layer important? Why? Thanks for your fine work!
Hi Tom (gcr)!
Okay, so I tried it. I have everything set up like in the video (the first part): prompts, layers, etc., but when I generate it seems not to care about the regions and just generates a scene regardless. Any ideas why?
Edit: I'm using a turbo model (DreamShaperXL), could that be an issue?
It generally works with all models, but it depends on how well they can follow your intended composition. I'd recommend trying a simple scenario: keep your regions large and separated, and go from there. Regions are very good at keeping prompts separated, but they don't always force a certain composition. Other tools like control layers (scribble/lines/depth...) are better at that.
Using live mode like in the second part of the video is also a good way to get a feel for the point at which a region registers with the model.
@aclysia OK, thanks for your answer, I will look into it :)
Can I make consistent characters with this?
Thank you!
Why do I only have the "Add Control Layer" button?
Update the plugin.
@VanisherXP Ah, ty. Is there any way to use an autotag-complete extension in Krita?
Adobe is dead now. Long live the real king!
Honestly, let's stay opensource, folks.
May I know your system specs?
Doesn't work with refine, only full 100% new generation.
Did you cherry-pick your results? It feels really random when I use it; sometimes it kind of works (like it will only generate 2 region layers out of 3), sometimes my regional prompt is just outright ignored and I get a basic character generation with nothing that looks like my regional prompts and paint layers. I tried a handful of models, from SD 1.5 to SDXL to Pony, but nothing seems to follow the regions either 😕
I did cut out a lot of not-so-great results, but not because they didn't follow regions (just other typical SD issues). There are "soft" limitations for using regions:
If they are too small they can be ignored.
It helps to use a general description of the whole scene as bottom/root prompt.
Regions are great at keeping prompts separated, but composition-wise they can't force the model to generate something it would not otherwise generate.
@aclysia Oh yeah, that makes sense. I'll try with bigger regions and see how it performs, thanks.