There aren't many artists showcasing the way they work with Krita, so this video is gold. I was actually wondering what process could be used in the manual creation part; now I feel a bit less at a loss, lol.
Thanks! There is a lot to explore here.
Great video!
I think the depth control net needs an actual depth image (depth as grayscale) as reference, it doesn't create it automatically from the referenced layer if I'm not mistaken. So you need to click "From Image" on it and let it reference the created grayscale image.
You don't need to move the line art layer below the others; just hide it with the eye icon; it will still work for the control net but will be ignored for the denoising.
Thanks!
You are right about needing to click the "from image" button first. I am so used to having ComfyUI set up to do the pre-processor automatically, I forgot it here.
Great tip on just hiding the layer!
Loving the tutorial/walkthrough/process you’ve got going here; very much appreciated
Thank you!
I managed to install it well, but it is hard to get the basics of the actual software parts if you have never used image editing software. Like this part at 4:35: how did you get it to only edit the hand? I used the select tool but I don't know where to go from there.
Once you select part of the image, it will only change that part when you click "generate" (or "play" in the live mode). In generate mode you will be given special fill options once you make a selection (add/remove content, etc.).
@@IntelligentImage It did not appear for me. Maybe I am doing something wrong.
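A rough sketch of what the selection-limited generation described above does conceptually: only pixels inside the selection mask take the newly generated content, while everything outside keeps the original image. This illustrates the general masked-inpainting idea, not the plugin's actual code; the grids and function name here are just for illustration.

```python
def composite_selection(original, generated, mask):
    """Blend generated pixels into the original where the mask is 1.

    All arguments are H x W grids (lists of lists); mask holds 0 or 1.
    Pixels under the selection (mask == 1) come from the generation,
    everything else is left untouched.
    """
    return [
        [g if m else o for o, g, m in zip(orow, grow, mrow)]
        for orow, grow, mrow in zip(original, generated, mask)
    ]

original = [[0, 0], [0, 0]]
generated = [[9, 9], [9, 9]]
mask = [[1, 0], [0, 1]]  # a diagonal "selection"
result = composite_selection(original, generated, mask)
# Only the two selected pixels take the generated value.
```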
I need help, please!
I have a Legion 5 with a decent 1660 Ti and lots of storage.
I've downloaded like 5 models and some more.
I've tried the same process as yours on a line art using the line art model, but it's so slow: it took half an hour or even an hour for the first result.
The dimensions are 1000x1000 and everything else is the same, nothing extra.
What can I do to make it faster? I feel there is a problem I can't figure out.
There is a section on the wiki that might help: github.com/Acly/krita-ai-diffusion/wiki/common-issues#image-generation-is-really-slow
What is the recommended hardware for this plugin?
You will need a GPU with at least 6 GB of VRAM. I would recommend 8 or 12 GB.
Hey, sorry for asking, but I'm not sure where else I can find help. Upscaling doesn't work for me whenever I use an SDXL model. I always get an error saying I should download the xinsir promax model, but Krita already downloaded it when I installed the AI diffusion plugin. Do you or anybody reading this have a solution?
Sorry, I don't know what might be causing this error. Maybe someone else who reads this can help 🤞
This is exactly how I figured people could use AI to aid in creating art instead of vilifying its use. Sure, it can create things without any help from a human, but, as shown by you, it can completely change the efficiency of actual human artists. Well done, sir.
The picture looks like 20 different artists throughout and becomes a disjointed RNG AI piece. Having a vision and achieving it vs. a slot machine are not the same thing.
Thanks! I'm still exploring the AI/human creative process. I'm trying to come up with an AI workflow that still retains total human control.
@@DG-LG I don't think it is as bad as being a "random number generator". There is control, but I still wouldn't be comfortable presenting the finished piece here as my creative vision. There is still too much I accepted rather than decided on. I could have done more and gotten in there. This could be used as part of a larger creative process.
@@IntelligentImage Now work in a studio that requires a consistent style and world-building logic.
@@DG-LG It's great for concept art. Studios are already all over it because it speeds up production. Instead of hating AI, try to embrace it and use it to your advantage; otherwise you will end up falling behind and eventually forgotten.
I love your vids, they are super informative. Keep em up!
Something that would be really helpful, at least for me, would be if you could do a short focused vid on the common krita features that you use in these workflows. I have a lot of SD experience, but have only done super basic inpainting with a brush and colors before.
So, things like when and how to use various selection tools, brushes, layers, and masks.
Thanks! I think I'll do a more basic creation walkthrough and include an explanation for the Krita tools I am using. I realized I was skipping over a lot of that here.
@@IntelligentImage that would be great!
Wow, awesome, I will use this for my drawings too, lol. BTW, if you are drawing many characters (for a manga cover with at least 4 characters in different angles), do you need to use the AI on each one in different layers? What do you recommend?
The "Regional Prompting" recently added to the plugin would probably be best for that. I am working on a tutorial about that. In the meantime, the developer has made a video demonstrating it: ua-cam.com/video/PPxOE9YH57E/v-deo.html
Super, thanks a lot.
Glad you liked it!
Good stuff Mr.II. Thanks for all the guides.
Thanks! Glad you like them!
Do you have a tutorial to install the plugins?
Do you have a tutorial to install the plugins?
I go over it in my intro to Krita AI video: ua-cam.com/video/C8HZG_ER7VQ/v-deo.html
You can find the install guide here: www.interstice.cloud/plugin
Using Pony V6 XL with Krita AI would be a good deal.
Coming soon!
@@IntelligentImage I can't wait to see
Does that way require a lot of GPU memory? The same as installing Automatic1111 on a PC?
It will require a pretty fast GPU for it to run quickly.
It's actually good for creating stock images, which honestly I don't mind, and for art it's more like a patch tool. I still won't trust it to create art, so I use it for some small adjustment tasks and the rest I still draw myself.
Yes, I think creating images for reference is probably the best use.
thanks for sharing, great help.
Glad it was helpful!
Hey, I tried installing but I'm getting this error: "raise Exception("Custom node 'ComfyUI_IPAdapter_plus' is outdated, please update.")
Exception: Custom node 'ComfyUI_IPAdapter_plus' is outdated, please update." I already have ComfyUI installed.
You may need to update those nodes using the manager in your ComfyUI installation.
Did you check out the new Krita AI release?
I didn't know about it until I saw your comment. I will definitely be looking at it more. I can't keep up with the pace of development!
@@IntelligentImage It's new from today, so that's normal. It adds regional controls; if you find good examples with IPAdapter and ControlNet, it would be interesting.
I'll definitely be checking it out!
I tried AI a few times, but it really annoys me that it controls my process more than I control it. Now I just draw.
It's great for referencing... and this tool in Krita is awesome for people who are learning the ropes, as they'll get to see, step by step, their own stuff get 'corrected' and how it does it.
@@alias234 For people who are learning drawing?
@@quantumsoul3495 I would be careful about referencing AI if you are just starting to learn art. It can give you something that looks right but is actually wrong. Sometimes it can be harder to figure out what the AI has done than to start from scratch. It happened to me with the hands here.
@@IntelligentImage Do you think learning with krita (first without ai) is a good way ?
@@quantumsoul3495 If you are just starting to learn drawing/painting, I wouldn't use AI. At least not as a tool to try and learn from. Once you have learned the fundamentals (form, anatomy, perspective, color, composition), you will be able to better evaluate the quality of what the AI is giving you and it will be a more valuable tool. So yes, if your goal is to learn digital drawing or painting make sure you learn it first without AI.
Why do you have "refine" as an option?
When you lower the strength (denoising strength) below 100% the option changes from "Generate" to "Refine" because it is now resampling the current image instead of creating an entirely new one.
@@IntelligentImage Oh, thanks man :)
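The strength-to-refine relationship described above can be sketched numerically. In img2img-style diffusion, the strength setting typically decides how far into the noise schedule the existing image is pushed before denoising begins: at 100% the image is fully replaced, below 100% the sampler only resamples the tail of the schedule, refining what is already there. This is a minimal sketch of that common mapping, not the plugin's actual internals; the function name and step counts are illustrative.

```python
def refine_start_step(total_steps: int, strength: float) -> int:
    """Return the schedule step at which denoising starts.

    strength == 1.0 -> start from pure noise (full generation)
    strength  < 1.0 -> skip the early steps, keeping image structure
    """
    strength = max(0.0, min(1.0, strength))  # clamp to [0, 1]
    return int(round(total_steps * (1.0 - strength)))

# With 20 sampling steps:
# full generation at 100% strength starts at step 0 (all 20 steps run),
# while a 40% "refine" skips the first 12 steps and resamples the last 8.
```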
Hi, I happened to encounter a problem installing AI in my Krita.
What problem are you having?
"Is it possible to include the Flux 1.0 model to use it within Krita?"
For real footage with humans.
Yes, Flux is currently supported experimentally: github.com/Acly/krita-ai-diffusion/discussions/1176
Depending on which SD model you use, optimal format is limited by the size of images used to train said model (512x512 for SD1.5, 1024x1024 for SDXL and SD3). Then you may use some specific secondary formats determined by SD standards. But it can already be a pain, for instance you may get duplications and anatomy mistakes by going for a non square format. At the very least to mitigate calculation issues if you don't go for one of these standards, you should only use sizes respecting the 64 steps (64x64, 64x128 etc). So you shouldn't use 1000x1000 but 1024x1024.
You're right; I try to at least stick to powers of 2. I don't know why I didn't here.
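Snapping a canvas dimension to the nearest multiple of 64, as the comment above recommends, is a one-liner. The 64-pixel alignment is the commonly recommended granularity for SD-family models (the VAE and UNet downsample the image internally); the helper name here is just for illustration.

```python
def snap_to_64(size: int) -> int:
    """Round a canvas dimension to the nearest multiple of 64 (minimum 64)."""
    return max(64, round(size / 64) * 64)

# A 1000x1000 canvas snaps to the 64-aligned 1024x1024,
# and a 500-pixel dimension snaps to 512.
print(snap_to_64(1000))  # 1024
print(snap_to_64(500))   # 512
```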
Do you have a direct download link for Krita AI? I'm having trouble connecting mine; please show me how.
What's your hardware?
I only have 6gb VRAM so it runs slow if I run it locally. I use the RunDiffusion remote service.
@@IntelligentImage appreciated!
Another way to fix cloudy or blurry generations is to increase the steps, if possible.
Thanks! I'll give that a try!
Using image size values that are not a power of 2 is illegal!
I guess I should have done 1024x1024😅 Does it matter?
@@IntelligentImage Slightly, yes. Those models are trained for a specific image size and deliver best results if you match that exactly.
@@TheTwober I thought the aspect ratio mattered more than actual pixel dimensions. I guess I should stick to powers of two. I don't even know why I chose what I did 😅 Thanks!
@@IntelligentImage Aspect ratio matters also. The thing is that usually AI models are trained for exactly one image size. If you, e.g., use 512 models on a 1024 canvas, they will usually start to duplicate content, giving people with 4 legs or 2 torsos. And vice versa: if you use 1024 models on a 512 canvas, they tend to zoom in far too much.
There is probably little visual difference between using a 512 model on a 500 canvas, but if the size is not crucial for your work, you're better off sticking to the power-of-2 size the model was designed for.
Useful video, but too much zooming in and out, just keep a full screen and highlight clicks. Way too much movement that takes away from the use of this otherwise great video.
Thanks! I've been meaning to look for a better solution. At the very least it would save me some editing time. I have to keep in mind that some viewers have poor eyesight or are watching on their phone.
I believe there is a way even in OBS to show mouse clicks with highlights. I used to have something that would show mouse clicks with a callout and keystrokes typed onscreen. Have you tried to see how it looks with full screen, then after recording magnify a section of the screen and place it in the corner as PIP?
Thanks, I use OBS. I'll look into it. I agree, having a full screen with a magnified section would be less disorienting.
or... you could just learn to draw, instead of stealing from others
He knows, the sketch in the beginning was made by him
What was stolen here? 🤔
tell someone in a wheelchair to just walk