Map Bashing - NEW Technique for PERFECT Composition - ControlNET A1111

  • Published 11 Jun 2023
  • Map Bashing is a NEW technique for combining ControlNet maps for full control. It lets you create amazing art and have full artistic control over your AI works: you can define exactly where the elements in your image go. At the same time you keep full prompt control, because the ControlNet maps contain no color, daylight, weather or other information, so you can create many variations from the same composition.
    #### Links from the Video ####
    Make Ads in A1111: • Make AI Ads in Flair.A...
    Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
    Goose unsplash.com/photos/eObAZAgVAcc
    Pillar www.pexels.com/photo/a-brown-...
    explorer: unsplash.com/photos/8tY7wHckcM8
    castle: unsplash.com/photos/8tY7wHckcM8
    mountains unsplash.com/photos/lSXpV8bDeMA
    Ruins unsplash.com/photos/d57A7x85f3w
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoffee.com/oliviotu...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
  • Howto & Style

COMMENTS • 155

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +8

    #### Links from the Video ####
    Make Ads in A1111: ua-cam.com/video/LBTAT5WhFko/v-deo.html
    Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
    Goose unsplash.com/photos/eObAZAgVAcc
    Pillar www.pexels.com/photo/a-brown-concrete-ruined-structure-near-a-city-under-blue-sky-5484812/
    explorer: unsplash.com/photos/8tY7wHckcM8
    castle: unsplash.com/photos/8tY7wHckcM8
    mountains unsplash.com/photos/lSXpV8bDeMA
    Ruins unsplash.com/photos/d57A7x85f3w

    • @aeit999
      @aeit999 1 year ago

      Latent couple when?

    • @xiawilly8902
      @xiawilly8902 1 year ago

      looks like the explorer image and castle image are the same.

  • @ainosekai
    @ainosekai 1 year ago +74

    Sir, there's no need to check 'restore face'. If you use a 2.5D/animated base model, it will make the faces look weird.
    Instead you can use an extension named 'After Detailer'. It can fix your characters' faces flawlessly (based on your model), and it works perfectly with character (face) LoRAs. There are also several models for it that can fix hands/fingers and the body.
    Give it a try~

    • @HacknSlashPro
      @HacknSlashPro 1 year ago +1

      How do you put your own face into a picture generated in SD? Should we use inpaint or something else? It needs the same style and matching lighting, though.

    • @ryry9780
      @ryry9780 1 year ago +3

      As a birthday gift to my sister three months ago, I made a picture featuring her and one of her favorite characters.
      The way it worked was I trained models of both the character and my sister. My sister's models had to be done in two steps: first with IRL pictures, then with generated animated pictures.
      Once that was done, it was a matter of compositing them all together in one pic via OpenPose + Canny + Depth and hours of Inpainting, with a little Photopea.
      Took me 20 work-hours.
      Idk how much of this process has changed since Auto1111 is now at v1.3.2 and ControlNet at 1.1.

    • @samc5933
      @samc5933 1 year ago +1

      What are these “other models” that fix hands? If you can point me in the right direction, I’d be grateful!

    • @Feelix420
      @Feelix420 1 year ago

      @@samc5933 Until AI learns to draw hands and feet, I wouldn't worry so much about AI the way Elon does now.

    • @cleverestx
      @cleverestx 1 year ago +1

      ADetailer is amazing, and comes standard in Vladmandic. It can be set to detect and fix hands as well if you choose the hand model instead of the face model, but only mildly: it's not as effective on hands as on faces, but it can still save a picture from time to time!

  • @Minami1317
    @Minami1317 1 year ago +6

    ControlNet gets even better with every new update.

    • @aeit999
      @aeit999 1 year ago +1

      It is. But this method is as old as ControlNet itself.

  • @jason-sk9oi
    @jason-sk9oi 1 year ago +13

    Tremendous human artistic control while maintaining the AI's creativity as well. Nice!

    • @paulodonovanmusic
      @paulodonovanmusic 1 year ago

      Exactly. I think a lot of traditional artists, particularly those with at least basic desktop publishing skills (or basic doodling skills) would love how empowering this is. 1111 is such a wonderful art tool, it's a pity that it can be so technically challenging to get set up, I hope this gets solved soon and that the solution becomes more accessible to the unwashed masses.

    • @chickenmadness1732
      @chickenmadness1732 1 year ago

      @@paulodonovanmusic Yeah, it's very close to how a real concept artist for movies and games works.
      The main difference is that they use a collage of photos to get a rough composition and then paint over it.

  • @ex0stasis72
    @ex0stasis72 1 year ago +3

    I'm so excited to use this technique. I was getting frustrated with the limitations of openpose not being detailed enough. But this soft edge thing looks really powerful as long as I'm willing to do a little manual photo editing beforehand.

  • @mikerhinos
    @mikerhinos 1 year ago +1

    This is amazing, as it so often is... one of the most underrated YouTube accounts for A1111 tutorials!

  • @akanekomi
    @akanekomi 1 year ago +3

    I have been using similar techniques for a while now; the AI dance animations I make are a lot more complex. Glad you made a tutorial on this. I'll redirect anyone who asks for SD tutorials to your channel. Thanks Olivio❤❤

  • @eddiedixon1356
    @eddiedixon1356 1 year ago +1

    This is exactly what I was looking for. I still have a few things to piece together, but this was huge. Thank you so much for your time.

  • @neeqstock8617
    @neeqstock8617 1 year ago +24

    Tried it, and this is probably the simplest, most creative, and most effort-effective technique I've come across. It's so easy to edit edge maps, even with simple image editing software. Thank you Olivio! :D

  • @AZTECMAN
    @AZTECMAN 1 year ago +2

    One very similar method I've been exploring is creating depth maps via digital painting.
    Additionally, I've experimented with using an inference-based map and then modifying it by hand to get more unusual results.
    Mixing 3D-based maps (rendered), inference-based ones (preprocessed), and digital painting methods, while utilizing img2img and multi-ControlNet, highlights the power of this tech.
    "Map Bashing" is a great term.

  • @luke2642
    @luke2642 1 year ago +15

    You could also use a background-removal step to preprocess each image, or, as others suggested, non-destructive masking when cutting them out.

    • @TorQueMoD
      @TorQueMoD 1 year ago +3

      You don't even need to do any sort of masking. When both images have a black background and white strokes, just set the top layers to Linear Dodge blend and they will seamlessly blend together.
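The Linear Dodge trick described above can be sketched numerically: for white-strokes-on-black maps, an additive (clipped) per-pixel blend merges any number of layers with no masking. A minimal numpy sketch, not from the video; the function name is mine:

```python
import numpy as np

def bash_maps(*maps):
    """Combine white-on-black ControlNet edge maps.

    Linear Dodge in Photoshop/Affinity is an additive blend; for
    white-strokes-on-black edge maps, a clipped per-pixel sum gives
    the same seamless merge without any masking.
    """
    out = np.zeros_like(maps[0], dtype=np.uint16)  # wider dtype to avoid overflow
    for m in maps:
        out += m.astype(np.uint16)
    return np.clip(out, 0, 255).astype(np.uint8)

# Two tiny 4x4 "maps", each with one white stroke
a = np.zeros((4, 4), dtype=np.uint8); a[1, :] = 255   # horizontal stroke
b = np.zeros((4, 4), dtype=np.uint8); b[:, 2] = 255   # vertical stroke
combined = bash_maps(a, b)
```

A per-pixel `np.maximum` would work just as well here; the clipped sum is simply the closest match to what the Linear Dodge blend mode does.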

  • @CCoburn3
    @CCoburn3 1 year ago +1

    Great video. I'm particularly happy that you used Affinity Photo to create your maps.

  • @jacque1331
    @jacque1331 1 year ago

    Olivio, you're a Rockstar! Been following you for a while. Extremely grateful to have found your channel.

  • @BruceMorgan1979
    @BruceMorgan1979 1 year ago +1

    Fantastic and well-detailed video, Olivio. Looking forward to trying this.

  • @soothingtunes6780
    @soothingtunes6780 11 months ago

    You are a lot more amazing than Stable Diffusion XL bro, what good is a tool if we don't have people like you to show us how to use it properly!!!

  • @boyanfg
    @boyanfg 1 year ago

    Hi Olivio! I am amazed at the master level at which you use the tools. Thank you for sharing this with us!

  • @frostreaper1607
    @frostreaper1607 1 year ago

    Oh wow, this actually solves the composition and color issues. Great find, Olivio, thanks!

  • @GREATMAGICIANLYNEY
    @GREATMAGICIANLYNEY 1 year ago

    I've been contemplating how best to bash up source images to create a final composition for SD rendering and this looks like a grand solution! Thanks for sharing.

  • @monteeaglevision5505
    @monteeaglevision5505 1 year ago

    You are a legend!!! Thank you sooooo much for this. Game changer. I will check back and let you know how it goes!

  • @travislrogers
    @travislrogers 1 year ago

    Amazing process! Thanks for sharing this!

  • @aicarpool
    @aicarpool 1 year ago +2

    Who’s da man? You da man!

  • @ronnykhalil
    @ronnykhalil 1 year ago

    this is brilliant! thanks for sharing. opens up so many possibilities, and also helps me grasp the infinitely vast world of controlnet a little better

  • @joywritr
    @joywritr 1 year ago +9

    This was very useful, thank you. I was considering drawing outlines over photos and 3D renders to do something similar, but using the masks generated by the AI should work as well and save a lot of time.

  • @trickydicky8488
    @trickydicky8488 1 year ago +1

    Watched your live stream over this last night. Highly enjoyed it.

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 year ago +5

    I've been using this for ages! ❤
    NOTE!: RevAnimated is *terrible* at obeying controlnet! (It is my favorite model for composition, but... I wouldn't use it like this.)
    I inpaint after the initial render. Same map bash controlnet, +inpaint controlnet (no image), inpaint her face w/ "face" prompt, pillar w/ "pillar" prompt, etc.
    No final full-image upscale; SD can't handle more than 3 large-scale concepts.
    You can get hires details in a 4k canvas by cropping a section, inpainting more detail, then blending the section back in w/ photoediting software. (This takes some extra lighting-control steps; there are tutorials on how to control lighting in SD.)

    • @foxmp1585
      @foxmp1585 11 months ago

      Could you clarify the "extra lighting-control steps" you mentioned? Is that the map we painted in black & white and then fed into the img2img tab?
      Thank you in advance!

    • @jonmichaelgalindo
      @jonmichaelgalindo 11 months ago

      @@foxmp1585 I barely remember my workflow from back then... SDXL is fantastic at figuring out what sketches mean in img2img. Right now, I block out a color paint sketch with a large brush, then run it through img2img with the prompt, then paint over the output, and run it through again and repeat, eventually upscaling and inpainting region by region with the same process. I have just about perfect control over composition, facial expressions, lighting, and style. :-)

  • @bjax2085
    @bjax2085 1 year ago

    Brilliant!! Thanks!

  • @mysterious_monolith_
    @mysterious_monolith_ 1 year ago

    That was incredible! I love what you do. I don't have ControlNET but if I could get it I would study your methods even more.

  • @ctrlartdel
    @ctrlartdel 11 months ago

    This is one of your best videos, and you have a lot of really good videos!

  • @Braunfeltd
    @Braunfeltd 1 year ago

    Love your stuff, learning lots. This is awesome.

  • @amj2048
    @amj2048 1 year ago

    this was really cool, thanks for sharing!

  • @ex0stasis72
    @ex0stasis72 1 year ago +1

    I recommend playing around with adding this to your positive prompt: "depth of field, bokeh, (wide angle lens:1.2)"
    Without the double quotes, of course.
    Wide angle lens is a trick that allows the subject's face to take up more of the area of the image while still fitting in enough context of the area around the subject. And the more pixels you allow it to generate the face with, the more detail you'll generally get. Although, if you already have ControlNet dictating the composition of the image, adding wide angle lens to your prompt will likely have no effect and therefore reduce the effectiveness of everything else in your prompt.
    Depth of field and bokeh are just some ways to make it feel like a photo shot professionally by a photographer, rather than by an average person with automatic camera settings.

  • @yadav-r
    @yadav-r 1 year ago

    wow, learned a new thing today. Thank you for sharing.

  • @Aisaaax
    @Aisaaax 10 months ago

    This is a great video! Thank you! 😮

  • @dm4life579
    @dm4life579 1 year ago

    This will take my non-existent photo bashing skills to the next level. Thanks!

  • @morizanova
    @morizanova 1 year ago

    Thanks... a smart trick to make the machine function as our helper, not just our overlord.

  • @Carolingio
    @Carolingio 1 year ago

    👏👏👏👏👏
    Nice, Thanks Olivio

  • @destructiveeyeofdemi
    @destructiveeyeofdemi 1 year ago

    Thorough brother.
    Peace and love from Cape Town.

  • @MadazzaMusik
    @MadazzaMusik 1 year ago

    Brilliant stuff

  • @ericvictor8113
    @ericvictor8113 1 year ago +1

    Incredible video, as always. Grats!

  • @minhhaipham9527
    @minhhaipham9527 1 year ago +1

    Awesome, please make more videos like this. Thanks!

  • @coloryvr
    @coloryvr 1 year ago

    Super helpful as always! Big FAT FANX!

  • @PhilippSeven
    @PhilippSeven 1 year ago +2

    Thank you for this technique! It's really useful. As advice from my side, I suggest using alternative methods for fixing faces (ADetailer, inpaint, etc.) instead of "restore faces". The latter uses one model for every face, and as a result the faces turn out too generic.

  • @accy1337
    @accy1337 1 year ago

    You are amazing!

  • @heikohesse4666
    @heikohesse4666 1 year ago

    very cool video - thanks for it

  • @ysy69
    @ysy69 1 year ago

    Beautiful

  • @starmanmia
    @starmanmia 6 months ago

    Hello future me, remember to use IP-Adapter for faces and body, and have ADetailer as a backup. Works well x

  • @Marcus_Ramour
    @Marcus_Ramour 11 months ago +1

    Brilliant video and thanks for sharing your workflow. I have been doing something similar but using blender & daz studio to build the composition first (although this does take a lot longer I think!).

  • @spoonikle
    @spoonikle 1 year ago

    Holy smokes. This changes the flow

  • @williamuria4048
    @williamuria4048 1 year ago

    WOW I like It!

  • @adastra231
    @adastra231 1 year ago

    wonderful

  • @Grimmona
    @Grimmona 1 year ago +3

    I installed Automatic1111 last week and now I'm watching one video after another from you, so I can get ready to become an AI artist😁

  • @SergeGolikov
    @SergeGolikov 1 year ago +4

    Brilliant results! If a very convoluted workflow, beyond all but the most dedicated. But as the saying goes, no pain, no gain 🍷
    Would it not be simpler to create the control maps right in Affinity Photo by using the FILTER/Detect Edges command on your source images? Just a thought.

  • @TheGalacticIndian
    @TheGalacticIndian 1 year ago

    I love it!♥♥

  • @WolfCatalyst
    @WolfCatalyst 1 year ago

    This was a great tutorial on Affinity

  • @blood505
    @blood505 1 year ago

    Thanks for the video 👍

  • @mayalarskov
    @mayalarskov 1 year ago +1

    hi Olivio, the image of the castle has the same link as the explorer image. Great video!

  • @novabk2729
    @novabk2729 1 year ago

    Super useful!!!!! thx

  • @ddiva1973
    @ddiva1973 1 year ago

    @14:43 mind blown 🤯😵🎉

  • @kyoko703
    @kyoko703 1 year ago +1

    Holy bananas!!!!!!!!!!!!!!!!!

  • @EmilioNorrmann
    @EmilioNorrmann 1 year ago

    nice

  • @nspc69
    @nspc69 1 year ago +4

    It can be easier to fuse layers with an "additive" blend mode.

  • @KryptLynx
    @KryptLynx 1 year ago

    Those fingers, though :D

  • @glssjg
    @glssjg 1 year ago +40

    You need to familiarize yourself with masks in your image editor, so that you're using a non-destructive process instead of rasterizing and then resizing things, which loses quality. And if you erase things, you won't have any way to undo them other than the undo button.

    • @theSato
      @theSato 1 year ago +20

      In a way, I agree with you. But honestly, the whole point of a workflow like this (and AI/SD in general, I think) is that it's as quick and efficient as possible. Going in and using more "proper" methods like masking/mask management, more layers, etc. is nice, but it takes more time and more clicks, and for the purposes of making a quick map for ControlNet like this, it's likely not even worth bothering (in my opinion).

    • @glssjg
      @glssjg 1 year ago +18

      @@theSato I mean, once you learn to use masks it is so much quicker. For example, he had to resize the girl larger because he wanted to make sure the quality was best. With a mask he could have just erased with a black paint brush (hit X to switch to the white brush to correct a mistake), or used the free-selection method and, instead of pressing delete, filled with the foreground color by hitting Option+Delete. It's a super small thing, as you said, but it will make your workflow faster, your mistakes less damaging (resizing a rasterized image over and over decreases its quality), and lastly it will just make your images better.
      Sorry for writing a book; once you learn masks you will never not use them again.

    • @jonmichaelgalindo
      @jonmichaelgalindo 1 year ago +2

      I've found myself saving intermediate steps less and less. Something about AI just changes the way you feel about data. (Also, Infinite Painter doesn't have masks, and I can make great art just fine.)

    • @blakecasimir
      @blakecasimir 1 year ago +2

      @@theSato I agree with this. The bashing part of the process isn't so much about precision as giving SD a rough visual guide to what you want.

    • @theSato
      @theSato 1 year ago +8

      @@ayaneagano6059 I know how to use masks, don't get me wrong. But it's an unnecessary extra step when you're just trying to spend 30 seconds bashing some maps or elements together for SD/ControlNet. The precision is redundant and I have no need to sit there and get it all just right.
      For purposes other than the one shown in the video, yes, use masks and it'll save time long term. But for the use in the video, it just costs more time when it's meant to be done once and quickly, and the quality loss from resizing is irrelevant.

  • @Pianist7137Gaming
    @Pianist7137Gaming 1 year ago

    For iOS users on iOS 16 and above, there's an easy way to crop out the image: transfer the image to your phone (Google Photos or something), save the image, then press and hold on the area you want captured. Tap share and save the image, then transfer it back to your PC.

  • @AlfredLua
    @AlfredLua 1 year ago

    Hi Olivio, thank you for the super cool video! Curious, if you were using a depth map instead of softedge for the woman, how would you edit it in Affinity to remove the background? It seems trickier for depth map since the background might be a shade of gray instead of absolute black. Thanks.
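One low-tech answer to the question above (an assumption on my part, not something shown in the video): since depth-map backgrounds are usually the darkest values rather than pure black, a simple threshold can force them to absolute black before bashing. The function name and default cut-off are mine:

```python
import numpy as np

def clear_depth_background(depth: np.ndarray, threshold: int = 40) -> np.ndarray:
    """Force near-background pixels of a grayscale depth map to pure black.

    `threshold` is a per-image guess: everything darker than it is treated
    as background. Raise it if gray haze survives; lower it if the subject
    starts getting eaten away.
    """
    cleaned = depth.copy()
    cleaned[cleaned < threshold] = 0
    return cleaned
```

In Affinity the equivalent would be a Levels adjustment crushing the blacks before erasing, but the idea is the same: make the background unambiguous before combining maps.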

  • @DJHUNTERELDEBASTADOR
    @DJHUNTERELDEBASTADOR 1 year ago

    That was my method for creating art 😊

  • @TorQueMoD
    @TorQueMoD 1 year ago

    This is great! What's the AI program you're using called? It's obviously not Midjourney.

  • @merion297
    @merion297 1 year ago +1

    Cool! Now what if we make an animation, e.g. in Blender, but only for the line art, then input each frame into ControlNet and generate the final animation frame by frame? I wonder when it will become so consistent that we can consider it a real animation.
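A sketch of how that frame-by-frame loop might talk to A1111 (assuming the server is started with --api and the ControlNet extension is installed; the preprocessor/model names and the helper function are my assumptions, so check the dropdowns of your own install):

```python
import base64

def controlnet_txt2img_payload(frame_png: bytes, prompt: str) -> dict:
    """Build one request body for POST /sdapi/v1/txt2img, feeding a
    single rendered line-art frame into ControlNet.

    The ControlNet extension reads its settings from "alwayson_scripts";
    the module/model names below are examples, not guaranteed to match
    your install.
    """
    return {
        "prompt": prompt,
        "steps": 20,
        "seed": 12345,  # fix the seed to help frame-to-frame consistency
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": base64.b64encode(frame_png).decode("ascii"),
                    "module": "softedge_pidinet",           # assumed preprocessor
                    "model": "control_v11p_sd15_softedge",  # assumed model name
                    "weight": 1.0,
                }]
            }
        },
    }

# One request per exported Blender frame, e.g. with the requests library:
# for path in sorted(Path("frames").glob("*.png")):
#     payload = controlnet_txt2img_payload(path.read_bytes(), "a knight walking, castle")
#     requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

Even with a fixed seed, temporal consistency is the hard part, as the comment notes; this only keeps the composition locked per frame.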

  • @hngjoe
    @hngjoe 1 year ago

    Hi. Thanks for sharing your smart notes on every new thing. I really appreciate that. I have one question. After checking for updates in SD's extensions, the system responds that I have the latest ControlNet (caf54076 (Tue Jun 13 07:39:32 2023)). However, I can't find the SoftEdge control model in the dropdown list, though I do have the SoftEdge ControlNet type and preprocessor. What might be wrong?

  • @yoavco99
    @yoavco99 1 year ago

    To fix faces automatically you can use the adetailer extension.

  • @shipudlink
    @shipudlink 1 year ago

    like always

  • @nsrakin
    @nsrakin 1 year ago

    You're a legend... Are you available on LinkedIn?

  • @rodrigoundaa
    @rodrigoundaa 1 year ago

    Amazing video!!! As usual. I'm still not getting where to do it. Is it local on your PC? Do you need a very powerful GPU? Or is it online?

  • @Kal-el23
    @Kal-el23 1 year ago

    It would be interesting to see what your outcome is without the maps, and just using the prompts as a comparison.

  • @gwcstudio
    @gwcstudio 1 year ago +1

    How do you control a scene with 2 people in it? Say, fighting. Do a map bash and then a colored version of the map with separate prompts?

  • @ValicsLehel
    @ValicsLehel 1 year ago

    OK, you can use A1111 to get the outline, but a Photoshop filter can do this too, and at any resolution. So I think this first step can be done with filters: get the outline picture and bash it. You can even do the rough mix first and then apply the filter. It won't speed up the process, but you can see what you are doing more easily.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      I don't think Photoshop has filters for depth maps, normal maps or OpenPose. And for the soft edge filter there is an option, but there are 4 options in ControlNet. Does the PS version look exactly the same as the ControlNet version?

  • @hakandurgut
    @hakandurgut 1 year ago +1

    It would have been much easier with photoshop select subject. I wonder if edge detection would do the same for soft edge

  • @cryptobullish
    @cryptobullish 1 year ago

    Crazy cool! How can I retain the face if I wanted to use my own face? What’s the best prompt to use to ensure the closest resemblance? Thanks!

    • @wykydytron
      @wykydytron 1 year ago +2

      Make a LoRA of your face, then use ADetailer

  • @NERvshrd
    @NERvshrd 1 year ago

    Have you watched the log while running hires fix with upscale set to 1? I tried doing so as you noted, but it just ignores the process. On or off, there's no difference in output. It might just be because I'm using Vlad's fork. Worth double-checking, though.

  • @moomoodad
    @moomoodad 1 year ago

    How to fix finger deformity, multiple fingers, and bifurcation?

  • @hugoruix_yt995
    @hugoruix_yt995 1 year ago

    Oh I see, I misunderstood. The name makes more sense now

  • @honestgoat
    @honestgoat 1 year ago

    Great video Olivio. What extension or setting are you using that allows you @ 11:13 to select the vae and clip skip right there in txt2img page?

    • @forifdeflais2051
      @forifdeflais2051 1 year ago

      I would like to know as well

    • @addermoth
      @addermoth 1 year ago +1

      In Auto1111 go to Settings, User Interface, and look down the page for "[info] Quicksettings list". From there go to the arrow on the right, then highlight and check (a tick mark will appear) both 'sd_vae' and 'CLIP_stop_at_last_layers'. Restart the UI and they will be where Olivio has them. Hope that helped.
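For reference, the same result can be had by editing A1111's config.json directly. This is a sketch of the relevant fragment only, not a full config; recent A1111 versions use the "quicksettings_list" array, while older builds used a comma-separated "quicksettings" string instead:

```json
{
  "quicksettings_list": [
    "sd_model_checkpoint",
    "sd_vae",
    "CLIP_stop_at_last_layers"
  ]
}
```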

    • @forifdeflais2051
      @forifdeflais2051 1 year ago

      @@addermoth Thank you!

  • @anim8or
    @anim8or 1 year ago

    What version of SD are you using? Have you upgraded to 2.0+? (If so do you have a video on how to upgrade?)

  • @lsd250
    @lsd250 1 year ago

    Hi all, can someone answer a question for me?
    How much GPU do I need to run A1111? I'm mostly using Midjourney because I have a really old PC.

  • @rajendrameena150
    @rajendrameena150 1 year ago

    Is there any way to render the render elements inside a 3D application, like mask ID, Z-depth, ambient occlusion, material ID and other channels, to add information in Stable Diffusion for making more variations out of it?

    • @foxmp1585
      @foxmp1585 11 months ago

      Currently SD can properly read Z-depth (depth map), material ID (segmentation map) and normal maps.
      And it depends on the app of your choice (Blender, Max, Maya, C4D, ...).
      Each of these apps has its own way of rendering/exporting these maps; you'll need to find out yourself. It'll take time, but it's worth it!

  • @Shandypur
    @Shandypur 1 year ago

    There's a close button at the bottom right of the preview image. I feel a little anxiety that you didn't click it. Haha

  • @springheeledjackofthegurdi2117

    could this be done all in automatic using mini paint?

  • @bjax2085
    @bjax2085 1 year ago

    Still searching for this AI tool for comic book and children's book creators: 1. AI draws an actor using prompts. 2. Option to convert the selected character to a simple, clean 3D frame (no background); the character can be rotated. 3. The limbs, head, eyelids, etc. can be repositioned using many pivot points. 4. Then, we can ask for the character to be completely regenerated using the face and clothing of the original. Once we are satisfied, we can save and paste the character into a background graphic.

  • @Shoopps
    @Shoopps 1 year ago

    I'm happy AI still struggles with hands.

  • @MONTY-YTNOM
    @MONTY-YTNOM 1 year ago

    How do you see the 'quality' from that drop down menu ?

  • @d1m18
    @d1m18 1 year ago

    This is very valuable content, but may I suggest you alter the title a bit? It is not very enticing to users who are not fully in the know about AI and prompts.
    Keep up the great work!

  • @maxeremenko
    @maxeremenko 1 year ago +1

    The image is not generated from the mask I created. Only based on the Prompt. I have set all the settings as in the video. What could be the problem?

    • @jibcot8541
      @jibcot8541 1 year ago

      Have you clicked the "Enable" check box in the ControlNet panel? I often miss that!

    • @maxeremenko
      @maxeremenko 1 year ago

      @@jibcot8541 Thank you. Yes, I clicked on enable. Unfortunately, it keeps generating random results. It feels like I have something not installed.

    • @maxeremenko
      @maxeremenko 1 year ago

      @@jibcot8541 The problem was solved by removing the segment-anything extension

  • @electricdreamer
    @electricdreamer 1 year ago

    Can you do this with Invoke AI?

  • @andu896
    @andu896 1 year ago

    Remove background first with AI or right click on Mac. Then do the depth maps.

  • @serena-yu
    @serena-yu 1 year ago

    Looks like rendering of hands is still the Achilles' heel.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Hands are just really hard to create and understand. Even for actual artists, this is one of the hardest things to create

  • @TheElement2k7
    @TheElement2k7 1 year ago

    How did you get two tabs of ControlNet?

  • @serizawa3844
    @serizawa3844 1 year ago

    0:01 six fingers ahushauhsuahsua

  • @itchykami
    @itchykami 1 year ago

    Everyone wants to give bird wings. I might try using a peacock spider instead.

  • @emmanuele1986
    @emmanuele1986 1 year ago

    Why don't I have ControlNet in my Automatic1111?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      because that is an extension you need to install

  • @ericvictor8113
    @ericvictor8113 1 year ago

    Almost FIRST?