CONVERT ANY IMAGE TO LINEART Using ControlNet! SO INCREDIBLY COOL!

  • Published Nov 14, 2024

COMMENTS • 189

  • @arvinds2300
    @arvinds2300 1 year ago +54

    Simple yet so effective. Controlnet is seriously magic.

  • @winkletter
    @winkletter 1 year ago +26

    I find mixing DepthMap and Canny lets you specify how abstract you want it to be. Pure DepthMap looks like more illustrated vector line art, but adding Canny makes it more and more like a sketch.
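
If you want to experiment with the depth-plus-canny mix described above outside the webUI, a rough sketch with the diffusers library might look like the following; the model IDs, prompt, and conditioning weights are illustrative assumptions, not settings taken from the video.

```python
# Sketch: blend a depth ControlNet with a canny ControlNet (assumed diffusers stack).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from transformers import pipeline

source = Image.open("photo.png").convert("RGB").resize((512, 512))

# Depth map from a generic depth-estimation model.
depth = pipeline("depth-estimation")(source)["depth"].convert("RGB")

# Canny edges from the same image.
edges = cv2.Canny(np.array(source), 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

# Lower the canny weight for a cleaner "vector" look, raise it for a sketchier result.
result = pipe(
    "lineart, monochrome, clean black lines, white background",
    image=[depth, canny],
    controlnet_conditioning_scale=[1.0, 0.5],
    num_inference_steps=25,
).images[0]
result.save("lineart_blend.png")
```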

  • @visualdestination
    @visualdestination 1 year ago +3

    SD came out and was amazing. Then dreambooth. Now Controlnet. Can't wait to see what's the next big leap.

  • @titanitis
    @titanitis 10 months ago +2

    Would be awesome with an updated edition of this video now as there is so many new options with the comfyUI. Thank you for the video Aitrepreneur!

  • @IceTeaEdwin
    @IceTeaEdwin 1 year ago +33

    This is exactly what artists are going to be using to speed up their workflow. Get a draft line art and work their style from there. Infinite drafts for people who struggle with sketch ideas.

  • @thanksfernuthin
    @thanksfernuthin 1 year ago +7

    I need to test this of course but this might be another game changer. For someone with a little bit of artistic ability changing a line art image to what you want is A LOT easier than changing a photo. So I can do this, edit the line art and load it back into canny. Pretty cool.

  • @ryry9780
    @ryry9780 1 year ago +1

    Just binged your entire playlist on ControlNet. That and Inpainting are truly like magic. Thank you so much!

  • @iamYork_
    @iamYork_ 1 year ago +2

    I haven't had much time to dabble with ControlNet, but one of my first thoughts was making images into sketches, as opposed to everyone turning sketches into amazing generated art... Great job as always...

  • @廖秋华
    @廖秋华 7 months ago +2

    Why is it that I follow the same settings as the video tutorial, and the base model and ControlNet are both set up, yet the image I generated was still white, with no black-and-white lines?

  • @jackmyntan
    @jackmyntan 1 year ago +5

    I think the models have changed, because I followed this video to the letter and all I get is very, very faint line drawings. I even took a screenshot of the example image used here and got exactly the same issue. There are more controls in the more recent iteration of ControlNet, but everything I try results in ultra-faint line images.

    • @Argentuza
      @Argentuza 1 year ago

      If you want to get the same results use the same model: dreamshaper_331BakedVae

    • @hildalei7881
      @hildalei7881 1 year ago

      I have the same problem. The line is not as clear as his.

  • @Snafu2346
    @Snafu2346 1 year ago +2

    I .. I haven't learned the last 10 videos yet. I need a full time job just to learn all these Stable Diffusion features.

  • @Semi-Cyclops
    @Semi-Cyclops 1 year ago +2

    man control net is awesome i use it to colorize my drawings

    • @Semi-Cyclops
      @Semi-Cyclops 1 year ago

      @Captain Reason I use the canny model; it preserves the sketch, then I describe my character or scene. My sketch goes into ControlNet, and if I draw a rough sketch I add contrast. The scribble model doesn't work well for me, at least; it creates its own thing from the sketch.

  • @MissChelle
    @MissChelle 1 year ago +1

    Wow, this is exactly what I’ve been trying to do for weeks! It looks so simple; however, I only have an iPad, so I need to do it in a web app. Any suggestions? ❤️🇦🇺

  • @GS-ef5ht
    @GS-ef5ht 1 year ago +1

    Exactly what I was looking for, thank you!

  • @alexwsshin
    @alexwsshin 1 year ago +2

    Wow, it is amazing! But I have a question here: my line art is not black, it is very light. Is there any way to make it black?

  • @CaritasGothKaraoke
    @CaritasGothKaraoke 1 year ago +9

    I am noticing you have a set seed. Is this the seed from the generated image before?
    If so, does that explain why this is much harder to get it to work well on existing images that were NOT generated in SD? Because I'm struggling to get something that doesn't look like a weird woodcut.

    • @MrMadmaggot
      @MrMadmaggot 1 year ago

      Dude, where did you download the DreamShaper model?

  • @online-tabletop
    @online-tabletop 1 year ago +9

    Mine turns out quite light/grayish. the lines are also quite thin. Any tips?

    • @Argentuza
      @Argentuza 1 year ago +1

      Same here, no way I can obtain the same results! why is this happening?

    • @archael18
      @archael18 7 months ago

      You can try using the same seed he does in his img2img tab or changing it to see which lineart style you prefer. Every seed will make a different lineart style.
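
For readers who can't reproduce the webUI result, here is a minimal sketch of the same idea (img2img at a high denoising strength, plus a canny ControlNet and a "lineart" prompt) using the diffusers library instead of AUTOMATIC1111. The checkpoint, prompt, thresholds, and seed below are placeholders rather than the exact settings from the video; as noted above, changing the seed changes the lineart style.

```python
# Sketch: image -> lineart via img2img + canny ControlNet (assumed diffusers stack).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

source = Image.open("input.png").convert("RGB").resize((512, 512))

# The canny control image preserves the structure while the high denoising
# strength lets the prompt repaint everything as line art.
edges = cv2.Canny(np.array(source), 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # swap in your preferred checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="lineart, monochrome, clean black lines, white background",
    negative_prompt="color, shading, blurry, low contrast",
    image=source,                # img2img input
    control_image=canny,         # ControlNet input
    strength=0.95,               # high denoising, as discussed in this thread
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(12345),  # different seeds = different lineart styles
).images[0]
result.save("lineart.png")
```

A strength near 1.0 lets the prompt repaint almost everything while the canny map keeps the composition, which matches the behaviour people describe in this thread.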

  • @desu38
    @desu38 1 year ago +2

    3:33 For that it's better to pick a "solid color" adjustment layer.

  • @vi6ddarkking
    @vi6ddarkking 1 year ago +3

    So Instant Manga Panels? Nice!

  • @Aitrepreneur
    @Aitrepreneur  1 year ago +11

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      Wow! This is not working for me at all! I get a barely recognizable blob even though the standard canny line art at the end is fine. So I switched to your DreamShaper model. No good. Then I gave it ACTUAL LINE ART and it still filled a bunch of the white areas in with black. I also removed negative prompts that might be making a problem. No good. Then all negs. No good. I'm either doing something wrong or there's some other variable that needs to be changed like clip skip or something else. If it's just me... ignore it. If you hear from others you might want to look into it.

    • @hugoruix_yt995
      @hugoruix_yt995 1 year ago

      @@thanksfernuthin It is working for me. Maybe try this LoRA with the prompt: /models/16014/anime-lineart-style (on civitai)
      Maybe it's a version issue or a negative prompt issue.

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      @@hugoruix_yt995 Thanks friend.

    • @nottyverseOfficial
      @nottyverseOfficial 1 year ago

      Hey there.. big fan of your videos.. I got your channel recommendation from another YT channel and I thank him a thousand times that I came here.. love all your videos and the way you simplify things to understand so easily ❤❤❤

    • @MarkKaminari00
      @MarkKaminari00 1 year ago

      Hello humans? Lol

  • @sailcat662
    @sailcat662 1 year ago +4

    Here's the negative prompt if anyone wants to control-paste this:
    deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing limbs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry

  • @sumitdubey6478
    @sumitdubey6478 1 year ago +1

    I really love your work. Can you please make a video on "how to train lora on google colab". Some of us have cards with only 4gb vram. It would be really helpful.

  • @vaneaph
    @vaneaph 1 year ago +2

    This is way more effective than anything i have tried with Photoshop.

    • @krystiankrysti1396
      @krystiankrysti1396 1 year ago

      Which means you didn't try it, because it's not as good as the video makes it out to be; he cherry-picked the example image.

    • @vaneaph
      @vaneaph 1 year ago +1

      @@krystiankrysti1396 Not sure what your point is here, but AI does not mean magic! You still need to edit the picture in Photoshop to ENHANCE the result to your liking.
      Using ControlNet indeed saves me a hell of a lot of time.
      (Do not forget, the burgers in the pictures NEVER look like what you actually ordered!)

    • @krystiankrysti1396
      @krystiankrysti1396 1 year ago

      @@vaneaph Well, I got it working better with HED than canny. It's just that if I were making new feature videos, I'd prepare a couple of examples to show more than one, so people could also see the failure cases.

  • @alexandrmalafeev7182
    @alexandrmalafeev7182 1 year ago

    Very nice technique, thank you! Also you can tune canny's low/high thresholds to control the lines and fills
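
The low/high thresholds mentioned here are the standard Canny edge-detector parameters; a quick standalone illustration with OpenCV (the values are arbitrary examples to tune per image):

```python
# Sketch: how the canny low/high thresholds change how much detail becomes lines.
import cv2
import numpy as np
from PIL import Image

img = np.array(Image.open("input.png").convert("RGB"))

# Low thresholds -> many edges (busy, sketchy look);
# high thresholds -> only strong edges (cleaner lines and fills).
busy = cv2.Canny(img, 50, 100)
clean = cv2.Canny(img, 150, 250)

Image.fromarray(busy).save("canny_busy.png")
Image.fromarray(clean).save("canny_clean.png")
```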

  • @mattmustarde5582
    @mattmustarde5582 1 year ago +5

    Any way to boost the contrast of the linework itself? I'm getting good details but the lines are near-white or very pale gray. Tried adding "high contrast" to my prompt but not much improvement.

    • @bustedd66
      @bustedd66 1 year ago

      i am getting the same thing

    • @bustedd66
      @bustedd66 1 year ago

      raise the denoising strength i missed that step :)

    • @zendao7967
      @zendao7967 1 year ago +3

      There's always photoshop.

    • @randomscandinavian6094
      @randomscandinavian6094 1 year ago +2

      The model you use seems to affect the outcome. I haven't tried the one he is using. And of course the input image you choose. Luck may be a factor as well. All of my attempts so far have looked absolutely horrible and nothing like the example here. Fun technique but nothing that I could use for anything if the results are going to look this bad. Anyway, it was interesting but now on to something else.

    • @TheMediaMachine
      @TheMediaMachine 1 year ago

      I just save it, bring it into Photoshop, and use adjustment layers, i.e. contrast and curves. Until I get good with Stable Diffusion I am doing this for now. For colour lines, try a colour adjustment layer, then set the blend mode to Screen.
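
If the lines still come out pale after raising the denoising strength, the Photoshop levels/curves step described above can also be approximated in a few lines of Python; this is a sketch using Pillow, and the cutoff/threshold values are guesses to adjust per image.

```python
# Sketch: darken near-white lineart output without Photoshop (Pillow, assumed thresholds).
from PIL import Image, ImageOps

lineart = Image.open("lineart.png").convert("L")

# Stretch the histogram so faint gray lines become dark...
contrasted = ImageOps.autocontrast(lineart, cutoff=1)

# ...and optionally force a hard black/white split.
binary = contrasted.point(lambda v: 0 if v < 200 else 255)
binary.save("lineart_dark.png")
```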

  • @NyaruSunako
    @NyaruSunako 1 year ago

    I swear, when I try these, mine just doesn't want to listen to me, lol. Mine might be broken. Granted, I just use it to improve my workflow now; even what I have at the moment is still strong, just not to the level of this, lol. Knowing this, I can make my line art even better by experimenting with different brushes. This makes it even more fun and easier for me to test out different line art brushes. Always enjoyable to see new stuff evolving; so fascinating.

  • @nolanzor
    @nolanzor 1 year ago +13

    The negative prompt used makes a big difference, here it is for anyone that is struggling:
    (bad quality:1.2), (worst quality:1.2), deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing arms)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry

    • @bobbyboe
      @bobbyboe 1 year ago

      For me, the negative prompt makes it even worse. I wonder if it is maybe important to use the same model as he does

    • @arinarici
      @arinarici 1 year ago

      You are the man.

  • @ssj3mohan
    @ssj3mohan 1 year ago +3

    Not Working for me.

  • @coda514
    @coda514 1 year ago

    Amazing. Thank you. Sincerely, your loyal subject.

  • @loogatv
    @loogatv 1 year ago

    Thanks! I looked for a good way for hours and hours... and all I needed to do was a quick search on YouTube...

  • @sestep09
    @sestep09 1 year ago +2

    Can't get this to work; it just results in a changed, still-colored image. I followed it step by step and have triple-checked my settings. I've only got it to work with one image, no others. They all just end up as changed images from the high denoising, and still colored.

  • @Botatoo-b9b
    @Botatoo-b9b 10 months ago +1

    What's this program called ?!

  • @segunda_parte
    @segunda_parte 1 year ago

    Awesome Awesome Awesome!!!!!!!!!!!!! You are the BOSS!!!

  • @Knittely
    @Knittely 1 year ago

    Hey Aitrepreneur,
    thanks for this vid! I recently read about TensorRT to speed up image generation, but couldn't find a good guide how to use it. Would you be willing to make a tutorial for it? (or even other techniques to speed up image generation, if any)

  • @ribertfranhanreagen9821
    @ribertfranhanreagen9821 1 year ago

    Dang, using this with Illustrator will save a lot of time.

  • @swannschilling474
    @swannschilling474 1 year ago

    My god, this is crazy good!!!!!!!!!! 😱😱😱

  • @amj2048
    @amj2048 1 year ago

    so cool!, thanks for sharing!

  • @KolTregaskes
    @KolTregaskes 1 year ago

    Another amazing tip, thank you.

  • @nierinmath7678
    @nierinmath7678 1 year ago

    I like it. Your vids are great

  • @coulterjb22
    @coulterjb22 1 year ago

    Very helpful! I'm interested in creating vector art for my laser engraving business. This is the closest thing I've seen that helps. Anything else you might suggest?
    Thank you=subd!

  • @flonixcorn
    @flonixcorn 1 year ago

    Great Video

  • @serjaoberranteiro4914
    @serjaoberranteiro4914 1 year ago +2

    It doesn't work; I got a totally different result.

  • @jpgamestudio
    @jpgamestudio 1 year ago

    WOW,great!

  • @brandonvanderheat
    @brandonvanderheat 1 year ago +1

    Haven't tried this yet but this might make it easier to cut (some) images from their background. Convert original image to line-art. Put both the original image and line art into photoshop (or equivalent) and use the magic background eraser to delete the background from the line art layer. Select layer pixels and invert selection. Swap to the layer with the original color image, add feather, and delete.

  • @stedocli6387
    @stedocli6387 1 year ago

    way supercool!

  • @edmatrariel
    @edmatrariel 1 year ago +1

    Is the reverse possible? line art to painting?

    • @kevinscales
      @kevinscales 1 year ago

      sure, just put the line art into controlnet and use canny. (txt2img) write a prompt etc
      Wait, does this make colorizing manga really easy? I never thought of that before
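
For the reverse direction that kevinscales describes (line art in, colored image out), here is a rough sketch, again assuming a diffusers-style setup rather than the webUI. Note that the canny ControlNet conditioning is usually white lines on a black background, so typical black-on-white line art is inverted first.

```python
# Sketch: colorize existing line art with a canny ControlNet (assumed diffusers stack).
import torch
from PIL import Image, ImageOps
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Clean line art is already an edge map; invert black-on-white art so the
# lines are white on black, which is what the canny ControlNet expects.
lineart = Image.open("lineart.png").convert("RGB").resize((512, 512))
control = ImageOps.invert(lineart)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

colored = pipe(
    prompt="a red-haired knight in ornate armor, vibrant colors, detailed illustration",
    negative_prompt="monochrome, lineart, grayscale",
    image=control,               # control image for txt2img + ControlNet
    num_inference_steps=30,
).images[0]
colored.save("colored.png")
```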

  • @hildalei7881
    @hildalei7881 1 year ago

    It looks great, but I followed your steps and it doesn't work anymore... Maybe it's because of a different version of the webUI and ControlNet.

  • @kiillabytez
    @kiillabytez 1 year ago

    So, it requires a WHITE background?
    I guess using it for comic book art is a little more involved or is it?

  • @angelicafoster670
    @angelicafoster670 1 year ago

    Very cool. I'm trying to get a "one line art" drawing, do you happen to know how?

  • @Argentuza
    @Argentuza 1 year ago

    What graphic card are you using? thanks

  • @kushis4ever
    @kushis4ever 1 year ago +1

    Hi, I replicated the steps on an image but the image came out with blurred lines like brush marks with no distinguishable outline. BTW, it took me nearly 4-5 minutes to generate on a macbook pro i9 32GB RAM.

  • @r4nd0mth0_ghts5
    @r4nd0mth0_ghts5 1 year ago

    Is there any possibility of creating one-line art forms using ControlNet? I hope the next version will be bundled with this feature..

  • @MaxKrovenOfficial
    @MaxKrovenOfficial 1 year ago

    In theory, we could use this same method, with slight variations, to have full color characters with white backgrounds, so we can then delete said background in Photoshop and thus have characters with transparent backgrounds?
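
For the "delete the white background" step, Photoshop isn't strictly required; here is a naive sketch with Pillow and NumPy. The threshold is an assumption, and this simple approach will also knock out pure-white pixels inside the character.

```python
# Sketch: turn near-white background pixels transparent (naive, Pillow + NumPy).
import numpy as np
from PIL import Image

img = Image.open("character_on_white.png").convert("RGBA")
arr = np.array(img)

# Any pixel whose R, G and B values are all above the threshold becomes fully transparent.
white = (arr[..., :3] > 240).all(axis=-1)
arr[white, 3] = 0

Image.fromarray(arr).save("character_transparent.png")
```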

  • @TesIaOfficial.on24
    @TesIaOfficial.on24 1 year ago

    Hey, I would like to use RunPod with your affiliate link.
    If I do seed traveling I have to wait about 1-3 hours on my laptop. That's long^^
    So, one question: if I've found some good prompts with some good seeds,
    can I copy the prompts and seeds to RunPod after I'm happy with them and just do the seed travel there?
    Will I get the exact same images this way?

  • @paulsheriff
    @paulsheriff 1 year ago

    Would there be a way to batch video frames like this?
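
Batching video frames is mostly a loop around the same img2img + canny call; the rough sketch below assumes the frames have already been exported as PNGs (for example with ffmpeg) into a frames/ folder, and that a diffusers stack is acceptable as a stand-in for the webUI.

```python
# Sketch: batch-convert exported video frames to lineart (assumed diffusers stack).
import glob
import os
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

os.makedirs("frames_lineart", exist_ok=True)

for path in sorted(glob.glob("frames/*.png")):
    frame = Image.open(path).convert("RGB").resize((512, 512))
    edges = cv2.Canny(np.array(frame), 100, 200)
    canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

    # Re-seed every frame so each one uses the same noise, which reduces
    # (but does not remove) flicker between frames.
    generator = torch.Generator("cuda").manual_seed(12345)

    out = pipe(
        prompt="lineart, monochrome, white background",
        image=frame,
        control_image=canny,
        strength=0.95,
        num_inference_steps=20,
        generator=generator,
    ).images[0]
    out.save(os.path.join("frames_lineart", os.path.basename(path)))
```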

  • @cahitsarsn5607
    @cahitsarsn5607 1 year ago

    Can the opposite be done? sketch , line art to image ?

  • @Isthereanyescape
    @Isthereanyescape 1 year ago

    I'm using Automatic1111 and installed Controlnet, but Canny model isn't available, how come?

  • @fantastart2078
    @fantastart2078 1 year ago

    Can you tell me what I have to install to use this?

  • @kamransayah
    @kamransayah 1 year ago

    Hey K, what happened? Did they delete your video again?

  • @Niiwastaken
    @Niiwastaken 4 months ago +2

    It just seems to make a white image. I've triple-checked that I got every step right :/

    • @edsalad
      @edsalad 1 day ago

      I got this problem too

  • @reeceyb505
    @reeceyb505 1 year ago +3

    Eh, if you use something like InstructPix2Pix to 'Make it lineart' it does it. So, this kind of thing kinda already existed

    • @R3V1Z3
      @R3V1Z3 1 year ago

      InstructPix2Pix "lineart" changes the image to a specific type of lineart style which loses some of the original image's structure. It works, it just has artistic character of its own.
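
For reference, the InstructPix2Pix route mentioned in this thread looks roughly like the sketch below; the model ID and guidance values are commonly used defaults and should be treated as assumptions, not settings from the video.

```python
# Sketch: the InstructPix2Pix alternative discussed above (assumed model id and settings).
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))

# image_guidance_scale controls how strongly the original structure is kept:
# higher values stay closer to the input, lower values drift toward the edit.
result = pipe(
    "make it line art",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]
result.save("instructpix2pix_lineart.png")
```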

  • @copyright24
    @copyright24 1 year ago

    That looks amazing but I have an issue, I have recently installed Controlnet and in the folder I have the model control_v11p_sd15_lineart but it's not showing in the model list ?

    • @klawsklaws
      @klawsklaws 1 year ago

      I had the same issue; I downloaded the control_sd15_canny.pth file and put it in the models folder.

  • @welovesummernz
    @welovesummernz 1 year ago +1

    The title says any image, how can I apply this style to one of my own photos? Please

    • @OliNorwell
      @OliNorwell 1 year ago

      Yeah exactly, I tried with one of my own photos and it wasn't as good

  • @MrMadmaggot
    @MrMadmaggot 1 year ago

    Where did u get that Canny model?

  • @andu896
    @andu896 1 year ago +1

    I followed this tutorial to the letter, but all I get is random lines, which I assume is related to Denoise Strength being so high. Can you try with a different model and see if this still works? Anybody got it to work?

    • @Argentuza
      @Argentuza 1 year ago

      If you want to get the same results use the same model: dreamshaper_331BakedVae

    • @sudhan129
      @sudhan129 1 year ago

      @@Argentuza Hi, I found only one link for dreamshaper_331BakedVae. It's on Hugging Face, but it doesn't seem to be a downloadable file. Where can I find a usable dreamshaper_331BakedVae file?

  • @maedeer5190
    @maedeer5190 1 year ago +1

    I keep getting a completely different image; can someone help me?

  • @tetsuooshima832
    @tetsuooshima832 1 year ago +2

    I found the first step unnecessary. What's the point of sending to img2img if you delete the whole prompt later on? Just start from img2img directly, then tweak any gen you have, or any pic really.

    • @TheDocPixel
      @TheDocPixel 1 year ago

      Don't forget that it's good to have the seed

    • @tetsuooshima832
      @tetsuooshima832 1 year ago +1

      @@TheDocPixel I think the seed becomes irrelevant with a denoise strength of 0.95. Besides, if your source is AI-generated then the seed is in the metadata; if it's an image from somewhere else there's no metadata = no seed. So I don't get your point here.

  • @joywritr
    @joywritr 1 year ago

    Is keeping the denoising strength very low while inpainting with Only Masked the key to preventing it from trying to recreate the entire scene in the masked area? I've seen people keep it high and have that not happen, but it happens EVERY TIME I use a denoising strength more than .4 or so. Thanks in advance.

  • @iseahosbourne9064
    @iseahosbourne9064 1 year ago

    Hey K, my AI overlord, how do you use OpenPose for objects? Like, say I wanted to generate a spoon but have it in mid-air at 90°?
    Also, does it work for animals?

  • @edwardwilliams2564
    @edwardwilliams2564 9 months ago

    Any idea how to do this in comfyui? Auto1111 is really slow.

  • @PlainsAyu
    @PlainsAyu 1 year ago

    I don't have the Guidance Start setting; what is wrong with mine?

  • @方奕斯
    @方奕斯 1 year ago

    hello I met an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
    Is there any way to solve this? Thanks

  • @grillodon
    @grillodon 1 year ago

    It's all OK before the inpaint procedure. When I click Generate, after doing all the settings and painting black on the face, the webUI tells me: ValueError: Coordinate 'right' is less than 'left'

    • @grillodon
      @grillodon 1 year ago

      Solved. It was Firefox. But the inpaint "new detail" works only if I select Whole Picture.

  • @cheruthana005
    @cheruthana005 1 year ago +2

    Not working for me.

  • @global_ganesh
    @global_ganesh 3 months ago

    Which website

  • @davidbecker4206
    @davidbecker4206 1 year ago +2

    Tattoo artists... Ohh I hate AI art! ... oh wait this fits into my workflow quite well.

  • @vishalchouhan07
    @vishalchouhan07 1 year ago

    Hey, I am not able to achieve the quality of linework you achieve in this video. Is it a good idea to experiment with different models?

    • @Argentuza
      @Argentuza 1 year ago +1

      If you want to get the same results use the same model: dreamshaper_331BakedVae

  • @tails8806
    @tails8806 1 year ago

    I only get a black image from the canny model... any ideas?

  • @nemanjapetrovic4761
    @nemanjapetrovic4761 1 year ago

    I still get some color in my image when I try to turn it into a sketch; is there a fix for that?

  • @solomonkok1539
    @solomonkok1539 1 year ago

    Which app?

  • @goldenshark6272
    @goldenshark6272 1 year ago

    Plz how to download controlnet ?!

  • @St.MichaelsXMother
    @St.MichaelsXMother 1 year ago

    How do I get ControlNet? Or is it a website?

  • @theStarterPlan
    @theStarterPlan 1 year ago

    What does the seed value say?

  • @theStarterPlan
    @theStarterPlan 1 year ago

    When I do it, I just get an error message (with no generated image) saying: AttributeError: ControlNet object has no attribute 'label_emb'. Does anybody have any idea what I could be doing wrong? Please help!

  • @Bra2ha1
    @Bra2ha1 1 year ago

    Where can I get this canny model?

  • @OtakuDoctor
    @OtakuDoctor 1 year ago

    I wonder why I only have one CFG scale, not start and end like you; my ControlNet should be up to date.
    Edit: nvm, it needed an update.

  • @HogwartsStudy
    @HogwartsStudy 1 year ago +2

    and here i trained 2 embeddings all night long to do the same thing...

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      Ah.. well😅 sorry

    • @HogwartsStudy
      @HogwartsStudy 1 year ago

      @@Aitrepreneur no no, this will be excellent! Right after I get done with this Patrick Bateman scene...

    • @HogwartsStudy
      @HogwartsStudy 1 year ago

      @@Aitrepreneur Just tried to do this and I do not have a guidance start slider, only weight and strength.

  • @dinchigo
    @dinchigo 1 year ago

    Can anyone assist me? I've installed Stable Diffusion but it gives me RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` . Not sure what to do as my pc meets necessary requirements.

  • @olgayuryevich1123
    @olgayuryevich1123 3 months ago +1

    Great content but why are you in such a rush?! Please slow down to make it a little easier to follow.

  • @takezosensei
    @takezosensei 1 year ago +9

    As a lineart artist, I am deeply saddened...

    • @psych18art
      @psych18art 1 year ago +1

      Yeah

    • @nurikkulanbaev3628
      @nurikkulanbaev3628 2 months ago

      I'm a comic artist. Just looking for a way to shorten the amount of background tracing I have to do.

  • @krystiankrysti1396
    @krystiankrysti1396 1 year ago +1

    Meh, this works like 0.5% of the time; mostly it doesn't work.

  • @ChroyonCreative
    @ChroyonCreative 11 months ago

    Mine always looks like grayscale, or a fluffy model with shading. It's never lineart.

  • @proyectorealidad9904
    @proyectorealidad9904 1 year ago

    how can i do this in batch?

  • @MonologueMusicals
    @MonologueMusicals 1 year ago +1

    ain't working for me, chief
    Edit. I figured it out, the denoising is key.

  • @apt13tpa
    @apt13tpa 1 year ago

    I don't know why, but this isn't working for me at all.

  • @sojoba3521
    @sojoba3521 1 year ago

    Hi, do you do personal tutoring? I'd like to pay you for a private session

  • @dylangrove3214
    @dylangrove3214 1 year ago

    Has anyone tried this on a building/architecture photo?

  • @diegomaldonado7491
    @diegomaldonado7491 1 year ago

    where is the link to this ai tool??

  • @aliuzun2220
    @aliuzun2220 1 year ago

    It'll improve, it'll improve.

  • @crosslive
    @crosslive 1 year ago

    You know what? People that sell that kind of thing on Facebook (there are a LOT) are not going to like this.