Simple yet so effective. Controlnet is seriously magic.
Can't argue with that one :D
Is it a website? I don't know how to get to it.
How did you download it?
I find mixing DepthMap and Canny lets you specify how abstract you want it to be. Pure DepthMap looks like more illustrated vector line art, but adding Canny makes it more and more like a sketch.
can you tell me the steps?
How do I get to this screen/program?
SD came out and was amazing. Then dreambooth. Now Controlnet. Can't wait to see what's the next big leap.
Would be awesome to have an updated edition of this video now, as there are so many new options with ComfyUI. Thank you for the video Aitrepreneur!
This is exactly what artists are going to be using to speed up their workflow. Get a draft line art and work their style from there. Infinite drafts for people who struggle with sketch ideas.
Nah, 90% of artists are busy panicking that AI will take their jobs.
@@ribertfranhanreagen9821 this 🤣🤣🤣🤣
I need to test this of course but this might be another game changer. For someone with a little bit of artistic ability changing a line art image to what you want is A LOT easier than changing a photo. So I can do this, edit the line art and load it back into canny. Pretty cool.
Did you solve it?
@@MrMadmaggot Nope!
Just binged your entire playlist on ControlNet. That and Inpainting are truly like magic. Thank you so much!
I haven't had much time to dabble with ControlNet, but one of my first thoughts was making images into sketches, as opposed to everyone turning sketches into amazing generated art... Great job as always...
Why is this happening? I followed the same settings as the video tutorial, and ControlNet is set up along with the large model, but the image I generated was still plain white, with no black-and-white lines.
I think the models have changed, because I followed this video to the letter and all I get is very, very faint line drawings. I even took a screenshot of the example image used here and got exactly the same issue. There are more controls in the more recent iteration of ControlNet, but everything I try results in ultra-faint line images.
If you want to get the same results use the same model: dreamshaper_331BakedVae
I have the same problem. The line is not as clear as his.
I .. I haven't learned the last 10 videos yet. I need a full time job just to learn all these Stable Diffusion features.
Man, ControlNet is awesome, I use it to colorize my drawings.
@Captain Reason I use the Canny model since it preserves the sketch, then I describe my character or scene. My sketch goes into ControlNet, and if I draw a rough sketch I add contrast first. The Scribble model doesn't work well for me at least; it creates its own thing from the sketch.
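If anyone wants to script that contrast step instead of eyeballing it, here's a rough Pillow sketch; the file names and the contrast factor are just placeholders I made up, not anything from the video:

```python
# Rough sketch: boost the contrast of a rough pencil sketch before feeding it to ControlNet.
# Assumes Pillow is installed; "rough_sketch.png" and the 1.8 factor are placeholders to tune.
from PIL import Image, ImageEnhance, ImageOps

sketch = Image.open("rough_sketch.png").convert("L")   # work in grayscale
sketch = ImageOps.autocontrast(sketch, cutoff=2)       # stretch the levels, clipping 2% outliers
sketch = ImageEnhance.Contrast(sketch).enhance(1.8)    # extra push so faint pencil lines survive Canny
sketch.convert("RGB").save("sketch_high_contrast.png")
```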
Wow, this is exactly what I’ve been trying to do for weeks! It looks so simple, however, I only have an iPad so I need to do it in a web app. Any suggestions? ❤️🇦🇺
Exactly what I was looking for, thank you!
Wow, it is amazing! But I have a question here: my line art isn't black, it comes out very light. Is there any way to make the lines black?
I am noticing you have a set seed. Is this the seed from the generated image before?
If so, does that explain why this is much harder to get it to work well on existing images that were NOT generated in SD? Because I'm struggling to get something that doesn't look like a weird woodcut.
Dude, where did you download the DreamShaper model?
Mine turns out quite light/grayish. the lines are also quite thin. Any tips?
Same here, no way I can obtain the same results! why is this happening?
You can try using the same seed he does in his img2img tab or changing it to see which lineart style you prefer. Every seed will make a different lineart style.
3:33 For that it's better to pick a "solid color" adjustment layer.
So Instant Manga Panels? Nice!
Pretty much yeah :D
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
Wow! This is not working for me at all! I get a barely recognizable blob even though the standard canny line art at the end is fine. So I switched to your DreamShaper model. No good. Then I gave it ACTUAL LINE ART and it still filled a bunch of the white areas in with black. I also removed negative prompts that might be making a problem. No good. Then all negs. No good. I'm either doing something wrong or there's some other variable that needs to be changed like clip skip or something else. If it's just me... ignore it. If you hear from others you might want to look into it.
@@thanksfernuthin It is working for me. Maybe try this Lora with the prompt: /models/16014/anime-lineart-style (on civitai)
Maybe it's a version issue or a negative prompt issue.
@@hugoruix_yt995 Thanks friend.
Hey there.. big fan of your videos.. I got your channel recommendation from another YT channel and I thank him a thousand times that I came here.. love all your videos and the way you simplify things so they're easy to understand ❤❤❤
Hello humans? Lol
Here's the negative prompt if anyone wants to copy-paste this:
deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing limbs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry
I really love your work. Can you please make a video on "how to train lora on google colab". Some of us have cards with only 4gb vram. It would be really helpful.
This is way more effective than anything i have tried with Photoshop.
Which means you haven't actually tried it, because it's not as good as the video makes it out to be; he cherry-picked the example image.
@@krystiankrysti1396 not sure what your point is here, but AI does not mean magic! You still need to edit the picture in Photoshop to ENHANCE the result to your liking.
Using ControlNet indeed saves me a hell of a lot of time.
(Don't forget, the burgers in the pictures NEVER look like what you actually get!)
@@vaneaph well, I got it working better with HED than Canny. It's just that if I were making new feature videos, I'd prepare a couple of examples to show more than one case, so people can also see the fail cases.
Very nice technique, thank you! Also, you can tune Canny's low/high thresholds to control the lines and fills.
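For anyone curious what those thresholds actually do, here's a minimal OpenCV sketch of the same Canny step the preprocessor runs; the file name and the threshold values are just examples to play with:

```python
# Minimal sketch of the Canny step itself: low thresholds keep lots of edges (busy lines
# and fills), high thresholds keep only strong edges (cleaner, more abstract outlines).
# Assumes opencv-python is installed; "photo.png" and the threshold pairs are placeholders.
import cv2

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)
busy = cv2.Canny(img, 50, 100)     # permissive low/high thresholds: lots of detail
clean = cv2.Canny(img, 150, 250)   # strict thresholds: only the main outlines survive
cv2.imwrite("edges_busy.png", busy)
cv2.imwrite("edges_clean.png", clean)
```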
Any way to boost the contrast of the linework itself? I'm getting good details but the lines are near-white or very pale gray. Tried adding "high contrast" to my prompt but not much improvement.
i am getting the same thing
Raise the denoising strength; I missed that step :)
There's always photoshop.
The model you use seems to affect the outcome. I haven't tried the one he is using. And of course the input image you choose. Luck may be a factor as well. All of my attempts so far have looked absolutely horrible and nothing like the example here. Fun technique but nothing that I could use for anything if the results are going to look this bad. Anyway, it was interesting but now on to something else.
I just save it, bring it into Photoshop, and use adjustment layers, i.e. contrast and curves. Until I get good with Stable Diffusion I'm doing this for now. For coloured lines, try a colour adjustment layer, then set the blend mode to Screen.
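If you don't have Photoshop, the same curves trick can be scripted. A rough Pillow sketch, assuming the line art came out as pale gray on white; the file names and the 200 cutoff are guesses you'd tune per image:

```python
# Rough sketch: push pale gray lines toward black and the background toward white,
# roughly what a steep curves adjustment does in Photoshop.
from PIL import Image

art = Image.open("pale_lineart.png").convert("L")
art = art.point(lambda v: 0 if v < 200 else 255)   # simple hard threshold
art.convert("RGB").save("lineart_high_contrast.png")
```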
I swear when I try these, mine just doesn't want to listen to me lol. Mine might be broken. Granted, I just use it to improve my workflow, and even what I have at the moment is still strong, just not at the level of this lol. Knowing this, I can make my line art even better by learning with different brushes. This makes things even more fun and easier for me to test out different line art brushes. Always enjoyable to see new stuff evolving, it's so fascinating.
The negative prompt used makes a big difference, here it is for anyone that is struggling:
(bad quality:1.2), (worst quality:1.2), deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing arms)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry
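And if anyone is scripting this instead of clicking through the UI, here's roughly what the same settings look like through the AUTOMATIC1111 API with the ControlNet extension. Treat it as a sketch: the field names are from the extension's API as I remember it and may differ between versions, and the file names, prompt, and model hash are placeholders for whatever you have installed.

```python
# Rough sketch of the img2img + ControlNet (Canny) call through the AUTOMATIC1111 web API.
# Assumes the WebUI is running with --api and the ControlNet extension installed.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

source = b64("my_image.png")

payload = {
    "init_images": [source],
    "prompt": "line art, lineart, monochrome, white background",
    "negative_prompt": "(bad quality:1.2), (worst quality:1.2), blurry, weird colors, 3d",
    "denoising_strength": 0.95,   # high on purpose: ControlNet is what keeps the structure
    "steps": 20,
    "cfg_scale": 7,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": source,
                "module": "canny",
                "model": "control_sd15_canny [fef5e48e]",  # whatever shows in your model dropdown
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("lineart_out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```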
For me, the negative prompt makes it even worse. I wonder if it is maybe important to use the same model as he does
You are the man.
Not Working for me.
Amazing. Thank you. Sincerely, your loyal subject.
Thanks! I looked for a good way for hours and hours... and all I needed to do was a quick search on YouTube...
Can't get this to work; it just results in a changed but still colored image. I followed it step by step and have triple-checked my settings. I've only gotten it to work with one image, no others. They all just end up as images altered by the high denoising, and still in color.
What's this program called ?!
Awesome Awesome Awesome!!!!!!!!!!!!! You are the BOSS!!!
Happy to help ;)
Hey Aitrepreneur,
thanks for this vid! I recently read about TensorRT to speed up image generation, but couldn't find a good guide how to use it. Would you be willing to make a tutorial for it? (or even other techniques to speed up image generation, if any)
Dang, using this with Illustrator will be a big time saver.
My god, this is crazy good!!!!!!!!!! 😱😱😱
so cool!, thanks for sharing!
Another amazing tip, thank you.
I like it. Your vids are great
Glad you like them!
Very helpful! I'm interested in creating vector art for my laser engraving business. This is the closest thing I've seen that helps. Anything else you might suggest?
Thank you = subbed!
Great Video
It doesn't work, I got a totally different result.
WOW,great!
Haven't tried this yet but this might make it easier to cut (some) images from their background. Convert original image to line-art. Put both the original image and line art into photoshop (or equivalent) and use the magic background eraser to delete the background from the line art layer. Select layer pixels and invert selection. Swap to the layer with the original color image, add feather, and delete.
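If you'd rather script that masking part than click through Photoshop, here's a rough Pillow/NumPy sketch of the same idea. It assumes the line art comes out as dark lines on a clean white background and stays aligned with the original, which isn't guaranteed; the file names, the 240 cutoff and the filter sizes are placeholders to tune:

```python
# Rough sketch: use the line-art render as a crude mask to cut the subject out of the original.
import numpy as np
from PIL import Image, ImageFilter

original = Image.open("original.png").convert("RGBA")
lineart = Image.open("lineart.png").convert("L").resize(original.size)

mask = np.array(lineart) < 240                         # non-white pixels count as "subject"
alpha = Image.fromarray((mask * 255).astype(np.uint8))
alpha = alpha.filter(ImageFilter.MaxFilter(9))         # fatten the lines to close small gaps
alpha = alpha.filter(ImageFilter.GaussianBlur(3))      # feather the edge

original.putalpha(alpha)
original.save("cutout.png")
```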
way supercool!
Is the reverse possible? line art to painting?
Sure, just put the line art into ControlNet and use Canny (txt2img), write a prompt, etc. (rough scripted version below)
Wait, does this make colorizing manga really easy? I never thought of that before
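Here's roughly what that looks like scripted with the diffusers library instead of the WebUI; a minimal sketch, assuming a CUDA GPU, the public ControlNet Canny and SD 1.5 checkpoints, and a placeholder file name and prompt:

```python
# Minimal sketch: colorize existing line art with txt2img + ControlNet (Canny) via diffusers.
# Assumes the diffusers/transformers/accelerate packages are installed.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image
from PIL import ImageOps

# Canny control maps are white lines on black, so invert black-on-white line art first.
lineart = ImageOps.invert(load_image("manga_panel_lineart.png").convert("RGB"))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

result = pipe(
    "full color anime illustration, vibrant colors, detailed shading",
    image=lineart,
    num_inference_steps=20,
).images[0]
result.save("colorized.png")
```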
It looks great, but I followed your steps and it doesn't work anymore... Maybe it's because of different versions of the WebUI and ControlNet.
So, it requires a WHITE background?
I guess using it for comic book art is a little more involved or is it?
Very cool. I'm trying to get a "one line art" drawing, do you happen to know how?
What graphic card are you using? thanks
Hi, I replicated the steps on an image but the image came out with blurred lines like brush marks with no distinguishable outline. BTW, it took me nearly 4-5 minutes to generate on a macbook pro i9 32GB RAM.
Did you solve it?
@MAD MAGGOT Nope. I parked the project.
Is there any possibility of creating one-line-art forms using ControlNet? I hope the next version will be bundled with this feature..
In theory, we could use this same method, with slight variations, to have full color characters with white backgrounds, so we can then delete said background in Photoshop and thus have characters with transparent backgrounds?
Hey, I would like to use Runpod with your Affiliate Link.
If I do Seed Traveling I have to wait about 1-3 hours on my laptop. That's long^^
So one question.. if I've found some good prompts with some good seeds,
can I copy the prompts and seeds over to Runpod once I'm happy with them and just do the Seed Travel there?
Will I get the exact same images that way?
Would there be a way to batch video frames like this?
Can the opposite be done? Sketch / line art to image?
I'm using Automatic1111 and installed Controlnet, but Canny model isn't available, how come?
Can you tell me what I have to install to use this?
Hey K, what happen? did they delete your video again?
It just seems to make a white image. I've triple-checked that I got every step right :/
I got this problem too
Eh, if you use something like InstructPix2Pix to 'Make it lineart' it does it. So, this kind of thing kinda already existed
InstructPix2Pix "lineart" changes the image to a specific type of lineart style which loses some of the original image's structure. It works, it just has artistic character of its own.
That looks amazing but I have an issue, I have recently installed Controlnet and in the folder I have the model control_v11p_sd15_lineart but it's not showing in the model list ?
I had the same issue; I downloaded the control_sd15_canny.pth file and put it in the models folder.
The title says any image, how can I apply this style to one of my own photos? Please
Yeah exactly, I tried with one of my own photos and it wasn't as good
Where did you get that Canny model?
I followed this tutorial to the letter, but all I get is random lines, which I assume is related to Denoise Strength being so high. Can you try with a different model and see if this still works? Anybody got it to work?
If you want to get the same results use the same model: dreamshaper_331BakedVae
@@Argentuza Hi, I found only one link about dreamshaper_331BakedVae. It's on Hugging Face, but it doesn't seem to be a downloadable file. Where can I find a usable dreamshaper_331BakedVae file?
I keep getting a completely different image, can someone help me?
I found the first step unnecessary. What's the point of sending to img2img if you delete the whole prompt later on? Just start from img2img directly, then tweak any gen you have, or any pic really.
Don't forget that it's good to have the seed
@@TheDocPixel I think the seed becomes irrelevant with a denoising strength of 0.95. Besides, if your source is AI-generated then the seed is in the metadata; if it's an image from somewhere else there's no metadata = no seed. So I don't get your point here.
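For what it's worth, pulling the parameters (seed included) back out of an A1111 PNG is easy to script; a small sketch, assuming the default behaviour where everything lands in a "parameters" text chunk, and a placeholder file name:

```python
# Small sketch: read the generation parameters AUTOMATIC1111 embeds in the PNGs it saves.
# Images from elsewhere (or re-saved/stripped by an image host) simply won't have the chunk.
from PIL import Image

img = Image.open("ai_generated.png")
params = img.info.get("parameters")
if params:
    print(params)   # one text blob with the prompt, negative prompt, "Seed: ...", sampler, etc.
else:
    print("No embedded parameters - no seed to recover.")
```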
Is keeping the denoising strength very low while inpainting with Only Masked the key to preventing it from trying to recreate the entire scene in the masked area? I've seen people keep it high and have that not happen, but it happens EVERY TIME I use a denoising strength more than .4 or so. Thanks in advance.
Hey my K, my AI overlord, how do you use OpenPose for objects? Like say I wanted to generate a spoon but have it mid-air at 90°?
Also, does it work for animals?
Any idea how to do this in comfyui? Auto1111 is really slow.
I don't have the guidance start in the settings, what is wrong with mine?
Hello, I ran into an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
Is there any way to solve this? Thanks
It's all ok before Inpaint procedure. When I click generate after all settings and black paint on face the Web UI tells me: ValueError: Coordinate 'right' is less than 'left'
Solved. It was Firefox. But the Inpaint "new detail" works only if I select Whole Picture.
Not working for me.
Which website?
Tattoo artists... Ohh I hate AI art! ... oh wait this fits into my workflow quite well.
Hey, I'm not able to achieve the quality of linework you achieve in this video. Is it a good idea to experiment with different models?
If you want to get the same results use the same model: dreamshaper_331BakedVae
I only get a black image from the canny model... any ideas?
I still get some color in my image when I try to turn it into a sketch. Is there a fix for that?
Which app?
Plz, how do I download ControlNet?!
How do I get ControlNet? Or is it a website?
What does the seed value say?
When I do it, I just get an error message (with no generated image) saying- AttributeError ControlNet object has no attribute 'label_emb". Does anybody have any idea what I could be doing wrong? please help!
Where can I get this canny model?
i wonder why i only have one cfg scale, not start and end like you, my controlnet should be up to date
edit: nvm, needed an update
and here i trained 2 embeddings all night long to do the same thing...
Ah.. well😅 sorry
@@Aitrepreneur no no, this will be excellent! Right after I get done with this Patrick Bateman scene...
@@Aitrepreneur Just tried to do this and I do not have a guidance start slider, only weight and strength.
Can anyone assist me? I've installed Stable Diffusion but it gives me RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` . Not sure what to do as my pc meets necessary requirements.
Great content but why are you in such a rush?! Please slow down to make it a little easier to follow.
As a lineart artist, I am deeply saddened...
Yeah
I'm a comic artist. Just looking for a way to shorten the amount of background tracing I have to do.
Meh, this works like 0.5% of the time, mostly doesn't work.
Mine always looks like a grayscale or a fluffy model with shading. It's never line art.
how can i do this in batch?
ain't working for me, chief
Edit. I figured it out, the denoising is key.
help me human. not working for me
I don't know why but this isn't working for me at all.
Hi, do you do personal tutoring? I'd like to pay you for a private session
Has anyone tried this on a building/architecture photo?
where is the link to this ai tool??
It'll get better, it'll get better.
You know what? People who sell that kind of thing on Facebook (there are a LOT of them) are not going to like this.