CONVERT ANY IMAGE TO LINEART Using ControlNet! SO INCREDIBLY COOL!

  • Published Mar 8, 2023
  • Transform any image to lineart using ControlNet inside Stable Diffusion! In this video I will show you how to easily convert any previously generated color image to a black-and-white lineart version with ControlNet, without leaving the auto1111 Stable Diffusion webui, and then how to make it even better inside a free image manipulation website like Photopea. So let's go!
    Did you manage to convert your image into lineart? Let me know in the comments!
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my work on Patreon: / aitrepreneur
    ⚔️ Join the Discord server: bit.ly/aitdiscord
    🧠 My Second Channel THE MAKER LAIR: bit.ly/themakerlair
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Runpod: bit.ly/runpodAi
    Photopea: www.photopea.com/
    Prompt used in the video: "a line drawing lineart linework"
    Thanks to SergioPAL for inspiring this video: / how_to_turn_any_image_...
    All ControlNet Videos: • ControlNet
    My previous ControlNet video: • TRANSFER STYLE FROM An...
    3D POSE & HANDS INSIDE Stable Diffusion! Posex & Depth Map: • 3D POSE & HANDS INSIDE...
    Multiple Characters With LATENT COUPLE: • MULTIPLE CHARACTERS In...
    GET PERFECT HANDS With MULTI-CONTROLNET & 3D BLENDER: • GET PERFECT HANDS With...
    NEXT-GEN MULTI-CONTROLNET INPAINTING: • NEXT-GEN MULTI-CONTROL...
    CHARACTER TURNAROUND In Stable Diffusion: • CHARACTER TURNAROUND I...
    EASY POSING FOR CONTROLNET : • EASY POSING FOR CONTRO...
    3D Posing With ControlNet: • 3D POSING For PERFECT ...
    My first ControlNet video: • NEXT-GEN NEW IMG2IMG I...
    Special thanks to Royal Emperor:
    - Merlin Kauffman
    - Totoro
    Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
    #stablediffusion #controlnet #lineart #stablediffusiontutorial
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    WATCH MY MOST POPULAR VIDEOS:
    RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
    ►► bit.ly/stablediffusion
    RECOMMENDED WATCHING - My "Tutorial" Playlist:
    ►► bit.ly/TuTPlaylist
    Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

КОМЕНТАРІ (COMMENTS) • 186

  • @ryry9780
    @ryry9780 1 year ago +1

    Just binged your entire playlist on ControlNet. That and Inpainting are truly like magic. Thank you so much!

  • @iamYork_
    @iamYork_ 1 year ago +2

    I haven't had much time to dabble with ControlNet, but one of my first thoughts was making images into sketches, as opposed to everyone turning sketches into amazing generated art... Great job as always...

  • @tripleheadedmonkey6613
    @tripleheadedmonkey6613 1 year ago +10

    I love this. Now we just need someone to change-up the ControlNet IMG2IMG pipeline so that you can use a batch folder in ControlNet specifically.
    That way you could use a white blank background to make animated line art in batches instead of having to do it frame by frame again with this new process.

  • @winkletter
    @winkletter 1 year ago +23

    I find mixing DepthMap and Canny lets you specify how abstract you want it to be. Pure DepthMap looks like more illustrated vector line art, but adding Canny makes it more and more like a sketch.

  • @Aitrepreneur
    @Aitrepreneur  1 year ago +11

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      Wow! This is not working for me at all! I get a barely recognizable blob, even though the standard Canny line art at the end is fine. So I switched to your DreamShaper model. No good. Then I gave it ACTUAL LINE ART and it still filled a bunch of the white areas in with black. I also removed negative prompts that might be causing a problem. No good. Then all negatives. No good. I'm either doing something wrong, or there's some other variable that needs to be changed, like clip skip or something else. If it's just me... ignore it. If you hear from others, you might want to look into it.

    • @hugoruix_yt995
      @hugoruix_yt995 1 year ago

      @@thanksfernuthin It is working for me. Maybe try this LoRA with the prompt: /models/16014/anime-lineart-style (on civitai).
      Maybe it's a version issue or a negative prompt issue.

    • @thanksfernuthin
      @thanksfernuthin 1 year ago +2

      @@hugoruix_yt995 Thanks friend.

    • @nottyverseOfficial
      @nottyverseOfficial 1 year ago

      Hey there, big fan of your videos. I got your channel recommendation from another YT channel, and I thank him a thousand times that I came here. I love all your videos and the way you simplify things so they're easy to understand ❤❤❤

    • @MarkKaminari00
      @MarkKaminari00 1 year ago

      Hello humans? Lol

  • @GS-ef5ht
    @GS-ef5ht 1 year ago +1

    Exactly what I was looking for, thank you!

  • @arvinds2300
    @arvinds2300 1 year ago +52

    Simple yet so effective. ControlNet is seriously magic.

  • @titanitis
    @titanitis 6 months ago +1

    An updated edition of this video would be awesome, now that there are so many new options with ComfyUI. Thank you for the video Aitrepreneur!

  • @visualdestination
    @visualdestination 1 year ago +3

    SD came out and was amazing. Then dreambooth. Now Controlnet. Can't wait to see what's the next big leap.

  • @alexandrmalafeev7182
    @alexandrmalafeev7182 1 year ago

    Very nice technique, thank you! Also, you can tune Canny's low/high thresholds to control the lines and fills.
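The low/high thresholds mentioned above are the two numbers in Canny's double-threshold step: gradients above the high value become definite edge lines, gradients between the two survive only as "weak" candidates, and everything below the low value is dropped. A minimal pure-Python sketch of just that step (the hysteresis pass, which keeps weak edges only where they touch strong ones, is omitted for brevity):

```python
def double_threshold(grad_mags, low, high):
    """Canny-style double threshold on a list of gradient magnitudes:
    values >= high are 'strong' edges, values in [low, high) are 'weak'
    candidates, everything below low is discarded."""
    strong = [g for g in grad_mags if g >= high]
    weak = [g for g in grad_mags if low <= g < high]
    return strong, weak

# Toy gradient magnitudes from one row of an image (0..255)
grads = [10, 60, 200, 90, 150, 30, 250, 5, 120]

# Tight thresholds -> fewer edges kept -> cleaner, more "vector" lineart
strong, weak = double_threshold(grads, low=100, high=200)
print(len(strong) + len(weak))  # 4 edge pixels survive

# Loose thresholds -> more edges kept -> busier, more "sketch" lineart
strong, weak = double_threshold(grads, low=30, high=90)
print(len(strong) + len(weak))  # 7 edge pixels survive
```

Lowering both sliders in the ControlNet Canny preprocessor behaves the same way: more of the image's faint gradients get promoted to lines and fills.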

  • @KolTregaskes
    @KolTregaskes 1 year ago

    Another amazing tip, thank you.

  • @coda514
    @coda514 1 year ago

    Amazing. Thank you. Sincerely, your loyal subject.

  • @amj2048
    @amj2048 1 year ago

    so cool! thanks for sharing!

  • @DimsOfBeauty
    @DimsOfBeauty 1 year ago +1

    Love it! Can you show us how this would be used to convert lineart into a realistic image or painting? :)

  • @swannschilling474
    @swannschilling474 1 year ago

    My god, this is crazy good!!!!!!!!!! 😱😱😱

  • @IceTeaEdwin
    @IceTeaEdwin 1 year ago +32

    This is exactly what artists are going to be using to speed up their workflow. Get a draft line art and work their style from there. Infinite drafts for people who struggle with sketch ideas.

  • @joywritr
    @joywritr 1 year ago

    Is keeping the denoising strength very low while inpainting with Only Masked the key to preventing it from trying to recreate the entire scene in the masked area? I've seen people keep it high and have that not happen, but it happens EVERY TIME I use a denoising strength more than .4 or so. Thanks in advance.

  • @jpgamestudio
    @jpgamestudio 1 year ago

    WOW, great!

  • @thanksfernuthin
    @thanksfernuthin 1 year ago +6

    I need to test this of course, but this might be another game changer. For someone with a little bit of artistic ability, changing a line art image into what you want is A LOT easier than changing a photo. So I can do this, edit the line art, and load it back into Canny. Pretty cool.

  • @flonixcorn
    @flonixcorn 1 year ago

    Great Video

  • @alexwsshin
    @alexwsshin 1 year ago +2

    Wow, it is amazing! But I have a question: my line art color is not black, it is very bright. Is there any way to make it black?

  • @segunda_parte
    @segunda_parte 1 year ago

    Awesome Awesome Awesome!!!!!!!!!!!!! You are the BOSS!!!

  • @Knittely
    @Knittely 1 year ago

    Hey Aitrepreneur,
    thanks for this vid! I recently read about TensorRT to speed up image generation, but couldn't find a good guide on how to use it. Would you be willing to make a tutorial for it? (Or other techniques to speed up image generation, if any.)

  • @iseahosbourne9064
    @iseahosbourne9064 1 year ago

    Hey K, my AI overlord, how do you use OpenPose for objects? Like, say I wanted to generate a spoon but have it mid-air at 90°?
    Also, does it work for animals?

  • @stedocli6387
    @stedocli6387 1 year ago

    way supercool!

  • @nemanjapetrovic4761
    @nemanjapetrovic4761 1 year ago

    I still get some color in my image when I try to turn it into a sketch. Is there a fix for that?

  • @MissChelle
    @MissChelle 1 year ago +1

    Wow, this is exactly what I've been trying to do for weeks! It looks so simple. However, I only have an iPad, so I need to do it in a web app. Any suggestion? ❤️🇦🇺

  • @user-fn9dn1co5o
    @user-fn9dn1co5o 11 months ago

    Hello, I ran into an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
    Is there any way to solve this? Thanks

  • @nierinmath7678
    @nierinmath7678 1 year ago

    I like it. Your vids are great.

  • @ribertfranhanreagen9821
    @ribertfranhanreagen9821 1 year ago

    Dang, using this with Illustrator will be a big time saver.

  • @Snafu2346
    @Snafu2346 1 year ago +2

    I .. I haven't learned the last 10 videos yet. I need a full time job just to learn all these Stable Diffusion features.

  • @ratatattattat
    @ratatattattat 1 year ago

    I'm using Automatic1111 and installed ControlNet, but the Canny model isn't available. How come?

  • @sestep09
    @sestep09 1 year ago +2

    Can't get this to work; it just results in a changed, still-colored image. I followed step by step and have triple-checked my settings, and I've only gotten it to work with one image, no others. They all just end up as changed images from the high denoising, and still colored.

  • @PlainsAyu
    @PlainsAyu 1 year ago

    I don't have the Guidance Start slider in the settings. What is wrong with mine?

  • @dinchigo
    @dinchigo 1 year ago

    Can anyone assist me? I've installed Stable Diffusion, but it gives me RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`. Not sure what to do, as my PC meets the necessary requirements.

  • @sumitdubey6478
    @sumitdubey6478 1 year ago +1

    I really love your work. Can you please make a video on "how to train a LoRA on Google Colab"? Some of us have cards with only 4GB VRAM. It would be really helpful.

  • @coulterjb22
    @coulterjb22 10 months ago

    Very helpful! I'm interested in creating vector art for my laser engraving business. This is the closest thing I've seen that helps. Anything else you might suggest?
    Thank you = subbed!

  • @fantastart2078
    @fantastart2078 1 year ago

    Can you tell me what I have to install to use this?

  • @edwardwilliams2564
    @edwardwilliams2564 6 months ago

    Any idea how to do this in ComfyUI? Auto1111 is really slow.

  • @loogatv
    @loogatv 1 year ago

    Thanks! I looked for a good way for hours and hours... and all I needed to do was a quick search on YouTube...

  • @Argentuza
    @Argentuza 1 year ago

    What graphics card are you using? Thanks

  • @r4nd0mth0_ghts5
    @r4nd0mth0_ghts5 1 year ago

    Is there any possibility of creating one-line art using ControlNet? I hope the next version will be bundled with this feature.

  • @theStarterPlan
    @theStarterPlan 10 months ago

    When I do it, I just get an error message (with no generated image) saying: AttributeError: ControlNet object has no attribute 'label_emb'. Does anybody have any idea what I could be doing wrong? Please help!

  • @paulsheriff
    @paulsheriff 1 year ago

    Would there be a way to batch video frames like this?
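One way to batch frames without clicking through the UI is the webui's HTTP API (start it with the --api flag and POST to /sdapi/v1/img2img). A sketch of building the per-frame request payloads; the ControlNet block under alwayson_scripts is an assumption about the extension's API format, and the model name is a placeholder, so check both against your installed versions:

```python
import base64
import tempfile
from pathlib import Path

def img2img_payload(image_path: Path, prompt: str, denoise: float = 0.95) -> dict:
    """Build one JSON payload for POST /sdapi/v1/img2img.
    The 'alwayson_scripts' ControlNet section is an assumed format."""
    b64 = base64.b64encode(image_path.read_bytes()).decode("ascii")
    return {
        "init_images": [b64],
        "prompt": prompt,
        "denoising_strength": denoise,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{"module": "canny", "model": "control_sd15_canny"}]
            }
        },
    }

# Stand-in for a folder of extracted video frames
frames_dir = Path(tempfile.mkdtemp())
(frames_dir / "frame_0001.png").write_bytes(b"fake-png-bytes")

payloads = [img2img_payload(p, "a line drawing, lineart, linework")
            for p in sorted(frames_dir.glob("*.png"))]
print(len(payloads))  # one payload per frame
```

Each payload would then be POSTed (e.g. with urllib.request) to http://127.0.0.1:7860/sdapi/v1/img2img, one frame at a time, and the returned base64 image saved back out.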

  • @cahitsarsn5607
    @cahitsarsn5607 1 year ago

    Can the opposite be done? Sketch / line art to image?

  • @CaritasGothKaraoke
    @CaritasGothKaraoke 1 year ago +9

    I am noticing you have a set seed. Is this the seed from the generated image before?
    If so, does that explain why this is much harder to get it to work well on existing images that were NOT generated in SD? Because I'm struggling to get something that doesn't look like a weird woodcut.

    • @MrMadmaggot
      @MrMadmaggot 1 year ago

      Dude, where did you download the DreamShaper model?

  • @Semi-Cyclops
    @Semi-Cyclops 1 year ago +2

    Man, ControlNet is awesome; I use it to colorize my drawings.

    • @Semi-Cyclops
      @Semi-Cyclops 1 year ago

      @Captain Reason I use the Canny model; it preserves the sketch. Then I describe my character or scene. My sketch goes into ControlNet, and if I draw a rough sketch I add contrast. The Scribble model does not work well, for me at least; it creates its own thing from the sketch.

  • @NyaruSunako
    @NyaruSunako 1 year ago

    I swear, when I try these, mine just doesn't want to listen to me lol. Mine might be broken. Granted, I just use it to make my workflow better; even what I have atm is still strong, just not to the level of this lol. Knowing this, I can make my line art even better by learning and using different brushes. This makes it fun and easier for me to test out different line art brushes. Always enjoyable to see new stuff evolving, just so fascinating.

  • @angelicafoster670
    @angelicafoster670 1 year ago

    Very cool. I'm trying to get a "one line art" drawing, do you happen to know how?

  • @desu38
    @desu38 1 year ago +2

    3:33 For that it's better to pick a "solid color" adjustment layer.

  • @user-ly7to7yt5d
    @user-ly7to7yt5d 6 months ago +1

    What's this program called?!

  • @Mirko_ai
    @Mirko_ai 1 year ago

    Hey, I would like to use Runpod with your affiliate link.
    If I do seed traveling, I have to wait about 1-3 hours on my laptop. That's long^^
    So, one question: if I've found some good prompts with some good seeds,
    can I copy the prompts and seeds over to Runpod once I'm happy with them and just do the seed travel there?
    Will I get the exact same images this way?

  • @MrMadmaggot
    @MrMadmaggot 1 year ago

    Where did you get that Canny model?

  • @proyectorealidad9904
    @proyectorealidad9904 1 year ago

    How can I do this in batch?

  • @OnlineTabletopcom
    @OnlineTabletopcom 1 year ago +8

    Mine turns out quite light/grayish, and the lines are also quite thin. Any tips?

    • @Argentuza
      @Argentuza 1 year ago +1

      Same here; there's no way I can obtain the same results! Why is this happening?

    • @archael18
      @archael18 4 months ago

      You can try using the same seed he does in his img2img tab or changing it to see which lineart style you prefer. Every seed will make a different lineart style.

  • @St.MichaelsXMother
    @St.MichaelsXMother 1 year ago

    How do I get ControlNet? Or is it a website?

  • @MaxKrovenOfficial
    @MaxKrovenOfficial 1 year ago

    In theory, could we use this same method, with slight variations, to get full-color characters on white backgrounds, so we can then delete said background in Photoshop and thus have characters with transparent backgrounds?

  • @mattmustarde5582
    @mattmustarde5582 1 year ago +5

    Any way to boost the contrast of the linework itself? I'm getting good details but the lines are near-white or very pale gray. Tried adding "high contrast" to my prompt but not much improvement.

    • @bustedd66
      @bustedd66 1 year ago

      I am getting the same thing.

    • @bustedd66
      @bustedd66 1 year ago

      Raise the denoising strength; I missed that step :)

    • @zendao7967
      @zendao7967 1 year ago +3

      There's always photoshop.

    • @randomscandinavian6094
      @randomscandinavian6094 1 year ago +2

      The model you use seems to affect the outcome. I haven't tried the one he is using. And of course the input image you choose. Luck may be a factor as well. All of my attempts so far have looked absolutely horrible and nothing like the example here. Fun technique but nothing that I could use for anything if the results are going to look this bad. Anyway, it was interesting but now on to something else.

    • @TheMediaMachine
      @TheMediaMachine 1 year ago

      I just save it, bring it into Photoshop, and use adjustment layers, i.e. contrast, curves. Until I get good with Stable Diffusion I am doing this for now. For colour lines, try a colour adjustment layer, then set the blend mode to Screen.

  • @user-vo7rv1mv3m
    @user-vo7rv1mv3m 3 months ago

    I followed the same settings as the video tutorial, and ControlNet is set up for the large models too, but the image I generated was still white, with no black and white lines.

  • @solomonkok1539
    @solomonkok1539 1 year ago

    Which app?

  • @tails8806
    @tails8806 1 year ago

    I only get a black image from the Canny model... any ideas?

  • @global_ganesh
    @global_ganesh 13 days ago

    Which website?

  • @Bra2ha1
    @Bra2ha1 1 year ago

    Where can I get this canny model?

  • @theStarterPlan
    @theStarterPlan 10 months ago

    What does the seed value say?

  • @jackmyntan
    @jackmyntan 1 year ago +4

    I think the models have changed, because I followed this video to the letter and all I get is very, very faint line drawings. I even took a screenshot of the example image used here and got exactly the same issue. There are more controls in the more recent iteration of ControlNet, but everything I try results in ultra-faint line images.

    • @Argentuza
      @Argentuza 1 year ago

      If you want to get the same results, use the same model: dreamshaper_331BakedVae

    • @hildalei7881
      @hildalei7881 1 year ago

      I have the same problem. The lines are not as clear as his.

  • @ssj3mohan
    @ssj3mohan 1 year ago +2

    Not Working for me.

  • @vi6ddarkking
    @vi6ddarkking 1 year ago +3

    So Instant Manga Panels? Nice!

  • @copyright24
    @copyright24 1 year ago

    That looks amazing, but I have an issue: I recently installed ControlNet, and in the folder I have the model control_v11p_sd15_lineart, but it's not showing in the model list?

    • @klawsklaws
      @klawsklaws 1 year ago

      I had the same issue; I downloaded the control_sd15_canny.pth file and put it in the models folder.

  • @kamransayah
    @kamransayah 1 year ago

    Hey K, what happened? Did they delete your video again?

  • @vishalchouhan07
    @vishalchouhan07 1 year ago

    Hey, I am not able to achieve the quality of linework you achieve in this video. Is it a good idea to experiment with different models?

    • @Argentuza
      @Argentuza 1 year ago +1

      If you want to get the same results, use the same model: dreamshaper_331BakedVae

  • @maedeer5190
    @maedeer5190 1 year ago +1

    I keep getting a completely different image. Can someone help me?

  • @brandonvanderheat
    @brandonvanderheat 1 year ago +1

    Haven't tried this yet but this might make it easier to cut (some) images from their background. Convert original image to line-art. Put both the original image and line art into photoshop (or equivalent) and use the magic background eraser to delete the background from the line art layer. Select layer pixels and invert selection. Swap to the layer with the original color image, add feather, and delete.
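The masking idea above can be sketched with plain lists standing in for grayscale layers. This toy version treats near-white pixels of the line-art layer as background and blanks the matching pixels of the original; a real magic eraser flood-fills from the image border instead, so enclosed white areas survive:

```python
def background_mask(lineart, white_thresh=250):
    """True where the line-art layer is near-white (assumed background)."""
    return [[px >= white_thresh for px in row] for row in lineart]

def cut_background(original, mask, fill=None):
    """Blank out the original's pixels wherever the mask says background."""
    return [[fill if bg else px for px, bg in zip(orow, mrow)]
            for orow, mrow in zip(original, mask)]

lineart = [[255, 0, 255],
           [255, 0, 255]]   # a vertical black line on white
original = [[10, 20, 30],
            [40, 50, 60]]   # the matching color image (grayscale here)

mask = background_mask(lineart)
print(cut_background(original, mask))  # [[None, 20, None], [None, 50, None]]
```

With real images the same two steps would run per channel (e.g. via Photopea's magic wand plus an inverted selection), and `fill` would be the alpha-zero transparent pixel.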

  • @vaneaph
    @vaneaph 1 year ago +2

    This is way more effective than anything i have tried with Photoshop.

    • @krystiankrysti1396
      @krystiankrysti1396 1 year ago

      Which means you did not try it, because it's not as good as that video makes it out to be; he cherry-picked the example image.

    • @vaneaph
      @vaneaph 1 year ago +1

      @@krystiankrysti1396 Not sure what your point is here, but AI does not mean magic! You still need to edit the picture in Photoshop to ENHANCE the result to your liking.
      Using ControlNet indeed saves me a hell of a lot of time.
      (Do not forget: the burgers in the pictures NEVER look like what you really ordered!)

    • @krystiankrysti1396
      @krystiankrysti1396 1 year ago

      @@vaneaph Well, I got it working better with HED than Canny. It's just that if I were making new feature vids, I'd pre-make a couple of examples to show more than one, so people can also see fail cases.

  • @kushis4ever
    @kushis4ever 1 year ago +1

    Hi, I replicated the steps on an image, but it came out with blurred lines like brush marks, with no distinguishable outline. BTW, it took nearly 4-5 minutes to generate on a MacBook Pro i9 with 32GB RAM.

  • @grillodon
    @grillodon 1 year ago

    It's all OK before the inpaint procedure. When I click generate after all the settings and black paint on the face, the webUI tells me: ValueError: Coordinate 'right' is less than 'left'

    • @grillodon
      @grillodon 1 year ago

      Solved. It was Firefox. But the inpaint "new detail" works only if I select Whole Picture.

  • @andu896
    @andu896 1 year ago +1

    I followed this tutorial to the letter, but all I get is random lines, which I assume is related to Denoise Strength being so high. Can you try with a different model and see if this still works? Anybody got it to work?

    • @Argentuza
      @Argentuza 1 year ago

      If you want to get the same results, use the same model: dreamshaper_331BakedVae

    • @sudhan129
      @sudhan129 11 months ago

      @@Argentuza Hi, I found the only link about dreamshaper_331BakedVae. It's on Hugging Face, but it doesn't seem to be a downloadable file. Where can I find a usable dreamshaper_331BakedVae file?

  • @edmatrariel
    @edmatrariel 1 year ago +1

    Is the reverse possible? Line art to painting?

    • @kevinscales
      @kevinscales 1 year ago

      Sure, just put the line art into ControlNet and use Canny (txt2img), write a prompt, etc.
      Wait, does this make colorizing manga really easy? I never thought of that before.

  • @kiillabytez
    @kiillabytez 1 year ago

    So, it requires a WHITE background?
    I guess using it for comic book art is a little more involved. Or is it?

  • @serjaoberranteiro4914
    @serjaoberranteiro4914 1 year ago +2

    It doesn't work; I got a totally different result.

  • @goldenshark6272
    @goldenshark6272 10 months ago

    Please, how do I download ControlNet?!

  • @welovesummernz
    @welovesummernz 1 year ago +1

    The title says any image; how can I apply this style to one of my own photos? Please.

    • @OliNorwell
      @OliNorwell 1 year ago

      Yeah, exactly. I tried with one of my own photos and it wasn't as good.

  • @hildalei7881
    @hildalei7881 1 year ago

    It looks great, but I followed your steps and it doesn't work anymore... Maybe it's because of different versions of the webUI and ControlNet.

  • @bustedd66
    @bustedd66 1 year ago +1

    I tried using it on images I had already created, and they came out not so great. Does it only work with the same seed? Is that why you have to create the image first and then send it to img2img?

    • @kylehessling2679
      @kylehessling2679 1 year ago +1

      This is a great observation. I think you might be right, because I'm having a rough go with real photography.

    • @bobbyboe
      @bobbyboe 1 year ago

      Interesting idea... I thought it was because he is using a different model than I do... In my case it also does not turn out well; there are no wider lines like in the video, and it looks bad.

    • @bobbyboe
      @bobbyboe 1 year ago +1

      I tried it with results from txt2img... and the same seed; it does not work. I am downloading the model he used now... to check that out.

    • @bustedd66
      @bustedd66 1 year ago +1

      @@bobbyboe please let us know.

    • @bobbyboe
      @bobbyboe 1 year ago +1

      Update: I used his model and it is still not good, only thin ugly outlines. I updated all extensions, and I used the negative prompt like he did (a user further down in the comments achieved better results using it; I didn't). I used rendered photorealistic figures... I still wonder if there is some deeper reason for only using a picture result from txt2img and transferring that to img2img... which would make the whole thing useless for me.

  • @aliuzun2220
    @aliuzun2220 1 year ago

    It'll improve, it'll improve

  • @OtakuDoctor
    @OtakuDoctor 1 year ago

    I wonder why I only have one CFG scale, not start and end like you; my ControlNet should be up to date.
    edit: nvm, it needed an update

  • @diegomaldonado7491
    @diegomaldonado7491 1 year ago

    Where is the link to this AI tool??

  • @tetsuooshima832
    @tetsuooshima832 1 year ago +2

    I found the first step unnecessary. What's the point of sending to img2img if you delete the whole prompt later on? Just start from img2img directly, then tweak any gen you have, or any pic really.

    • @TheDocPixel
      @TheDocPixel 11 months ago

      Don't forget that it's good to have the seed.

    • @tetsuooshima832
      @tetsuooshima832 11 months ago +1

      @@TheDocPixel I think the seed becomes irrelevant with a denoise strength of 0.95. Besides, if your source is AI generated then the seed is in the metadata; if it's an image from somewhere else, there's no meta = no seed. So I don't get your point here.
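The metadata in question is the "parameters" text the webui embeds in PNGs it saves (the same string the PNG Info tab displays), which includes a Seed: field. A small sketch of recovering the seed from that string; the exact field layout shown is an assumption based on typical A1111 output:

```python
import re

def seed_from_parameters(params_text):
    """Pull the seed out of an A1111-style 'parameters' string.
    Returns None when there is no Seed: field (e.g. ordinary photos)."""
    m = re.search(r"\bSeed:\s*(\d+)", params_text)
    return int(m.group(1)) if m else None

# Typical shape of the embedded string (values here are made up)
params = ("a line drawing, lineart\n"
          "Negative prompt: color\n"
          "Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1234567890, Size: 512x512")

print(seed_from_parameters(params))         # 1234567890
print(seed_from_parameters("no metadata"))  # None
```

For an image from elsewhere this returns None, which matches the point above: there is no seed to recover, so only AI-generated sources can be re-run with their original seed.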

  • @apt13tpa
    @apt13tpa 1 year ago

    I don't know why, but this isn't working for me at all.

  • @Niiwastaken
    @Niiwastaken 21 days ago +1

    It just seems to make a white image. I've triple-checked that I got every step right :/

  • @TheAiConqueror
    @TheAiConqueror 1 year ago

    💪💪💪

  • @sojoba3521
    @sojoba3521 1 year ago

    Hi, do you do personal tutoring? I'd like to pay you for a private session.

  • @LouisGedo
    @LouisGedo 1 year ago

    Hi

  • @takezosensei
    @takezosensei 1 year ago +9

    As a lineart artist, I am deeply saddened...

  • @HogwartsStudy
    @HogwartsStudy 1 year ago +2

    And here I trained 2 embeddings all night long to do the same thing...

    • @Aitrepreneur
      @Aitrepreneur  1 year ago

      Ah.. well😅 sorry

    • @HogwartsStudy
      @HogwartsStudy 1 year ago

      @@Aitrepreneur no no, this will be excellent! Right after I get done with this Patrick Bateman scene...

    • @HogwartsStudy
      @HogwartsStudy 1 year ago

      @@Aitrepreneur I just tried this and I do not have a Guidance Start slider, only weight and strength.

  • @dylangrove3214
    @dylangrove3214 1 year ago

    Has anyone tried this on a building/architecture photo?

  • @davidbecker4206
    @davidbecker4206 1 year ago +2

    Tattoo artists: "Ohh, I hate AI art!" ... "Oh wait, this fits into my workflow quite well."

  • @cheruthana005
    @cheruthana005 1 year ago +2

    Not working for me.

  • @MonologueMusicals
    @MonologueMusicals 1 year ago +1

    Ain't working for me, chief.
    Edit: I figured it out; the denoising is key.

  • @UnderstandingCode
    @UnderstandingCode 1 year ago

    a line drawing lineart linework

  • @krystiankrysti1396
    @krystiankrysti1396 1 year ago +1

    Meh, this works like 0.5% of the time; mostly it doesn't work.