TRANSFER STYLE FROM An Image With This New CONTROLNET STYLE MODEL! T2I-Adapter!

  • Published 31 Jul 2024
  • Recently a brand-new model called T2I-Adapter style was released by TencentARC for Stable Diffusion. It lets you easily transfer the style of a base image onto another image inside ControlNet! So in this video I will show you how to download and install the new model and how to use it inside Stable Diffusion! So let's go!
    Did you manage to install that model? Let me know in the comments!
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    SOCIAL MEDIA LINKS!
    ✨ Support my work on Patreon: / aitrepreneur
    ⚔️ Join the Discord server: bit.ly/aitdiscord
    🧠 My Second Channel THE MAKER LAIR: bit.ly/themakerlair
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    Runpod: bit.ly/runpodAi
    T2I-Adapter models: huggingface.co/TencentARC/T2I...
    Github: github.com/TencentARC/T2I-Ada...
    All ControlNet Videos: • ControlNet
    My previous ControlNet video: • 3D POSE & HANDS INSIDE...
    Multiple Characters With LATENT COUPLE: • MULTIPLE CHARACTERS In...
    GET PERFECT HANDS With MULTI-CONTROLNET & 3D BLENDER: • GET PERFECT HANDS With...
    NEXT-GEN MULTI-CONTROLNET INPAINTING: • NEXT-GEN MULTI-CONTROL...
    CHARACTER TURNAROUND In Stable Diffusion: • CHARACTER TURNAROUND I...
    EASY POSING FOR CONTROLNET : • EASY POSING FOR CONTRO...
    3D Posing With ControlNet: • 3D POSING For PERFECT ...
    My first ControlNet video: • NEXT-GEN NEW IMG2IMG I...
    Special thanks to Royal Emperor:
    - Merlin Kauffman
    - Totoro
    Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
    #stablediffusion #controlnet #3d #stablediffusiontutorial
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
    WATCH MY MOST POPULAR VIDEOS:
    RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
    ►► bit.ly/stablediffusion
    RECOMMENDED WATCHING - My "Tutorial" Playlist:
    ►► bit.ly/TuTPlaylist
    Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

COMMENTS • 188

  • @Aitrepreneur
    @Aitrepreneur  Рік тому +12

    HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx

    • @CapsVODS
      @CapsVODS Рік тому

      does the easystablediffusion program work as well for setup?

  • @ueshita6866
    @ueshita6866 Рік тому +22

    I am Japanese and have limited ability to understand English, but I am learning the content of your video by making use of an app for translation. The content of this video is very interesting to me. Thanks for sharing the information.

    • @dogmantc
      @dogmantc 10 місяців тому

      What app, if I may ask?

    • @ueshita6866
      @ueshita6866 10 місяців тому

      @@dogmantc
      Sorry. Maybe it was not appropriate to say that I used an application. I just turned on YouTube's automatic translation feature.

  • @ADZIOO
    @ADZIOO Рік тому +12

    It's not working for me. I did exactly the same steps as in the video, I have ControlNet etc., but after rendering with clip_vision/t2iadapter it changes nothing on the photo... just wtf? Tried a lot of times with different backgrounds, it's always the same photo. Yes, I turned on ControlNet.

    • @lizng5509
      @lizng5509 Рік тому

      same here, have you figured it out?

  • @MathieuCruzel
    @MathieuCruzel Рік тому +6

    It's astounding to find all these new options in Stable Diffusion. A bit overwhelming if you did not follow along from the start, but the sheer amount of possibilities nowadays is golden!

  • @odawgthat3896
    @odawgthat3896 Рік тому +2

    Always there with the new content! Love it

  • @danieljfdez
    @danieljfdez Рік тому +33

    I would like to add that I have been playing around with the style model, and with the help of another video I realised that I was sometimes not getting the desired result just because I wrote a prompt over 75 tokens. If you keep your prompt under 75 tokens, there is no need to add another ControlNet tab. Thank you very much for keeping us up to date!!!

    • @sinayagubi8805
      @sinayagubi8805 Рік тому +1

      which other video? I'd like to watch it

    • @Eins3467
      @Eins3467 Рік тому +3

      Yes, I once had a prompt over 75 tokens and it generated an error saying the style adapter or cfg/guess mode may not work due to non-batch-cond inference, which is kind of vague. Hopefully they can update the error message to say you're above 75 tokens or something like that.

    • @sownheard
      @sownheard Рік тому +3

      Wait, can you use two ControlNet tabs to go over the 75-token limit?

    • @inbox0000
      @inbox0000 Рік тому

      that seems WAY excessive and like it would be extremely contradictory

  • @GeekDynamicsLab
    @GeekDynamicsLab Рік тому +1

    Im doing amazing things with style transfer, thanks for the guide and exceptional work 😁

  • @friendofai
    @friendofai Рік тому

    That's super cool, cant wait to try it! Thanks again K!

  • @notanactualuser
    @notanactualuser Рік тому

    Your videos are by far the best I've seen on all of this

    • @myday6074
      @myday6074 Рік тому

      this is the only video where the author couldn't get the plugin to work properly :)

  • @sownheard
    @sownheard Рік тому

    Wow this is so epic 🤩

  • @inbox0000
    @inbox0000 Рік тому

    That is VERY cool!

  • @StrongzGame
    @StrongzGame Рік тому +7

    Damn every time I’m about to take a break there’s something new

    • @Aitrepreneur
      @Aitrepreneur  Рік тому +2

      I feel you :D

    • @StrongzGame
      @StrongzGame Рік тому

      😅

    • @Eins3467
      @Eins3467 Рік тому

      While I love progress, this is what really irks me. In a month or two we will be seeing a model that consolidates all this new tech and makes it easier to do, maybe even just via txt2img. Sometimes I just want to wait it out, but rabbit hole I guess.

  • @jasonhemphill6980
    @jasonhemphill6980 Рік тому

    You've been on fire with the upload schedule. Please don't burn yourself out.

  • @winkletter
    @winkletter Рік тому +4

    I love seeing these updates and having no idea how to use them. :-) BTW, might as well get the color adapter while you're getting style.

    • @Aitrepreneur
      @Aitrepreneur  Рік тому +1

      True, but the color one wasn't as interesting as the style one, so I just decided to leave it out of the video. You can still get really interesting images with it too.

  • @amj2048
    @amj2048 Рік тому

    This is so cool! thanks for sharing!

  • @eggtatata0-
    @eggtatata0- Рік тому +9

    RuntimeError: Tensors must have same number of dimensions: got 4 and 3
    Help me pls. This issue is killing me

    • @vaneaph
      @vaneaph Рік тому +1

      Try a git pull, as shown @ 00:30.
      I updated ControlNet manually and it fixed it for me
      (updating from the UI doesn't seem to work).

    • @frischkase1
      @frischkase1 Рік тому +1

      use txt2img not img2img

  • @wendten2
    @wendten2 Рік тому +8

    In img2img, denoising strength is the ratio of noise that is applied to the input image before the model tries to restore it; for example, at 30 sampling steps a strength of 0.5 noises the image about halfway and then re-denoises it over roughly the last 15 steps. If you pick 1.0, it works like txt2img, as nothing from the input image is transferred to the output.

    • @Aitrepreneur
      @Aitrepreneur  Рік тому +6

      I forgot to explain that the reason why using the img2img tab is better is that you can actually play around with the denoising strength and get different results

  • @Unnaymed
    @Unnaymed Рік тому

    It's not only fun, it's an epic feature. I have so many artist pictures that I want to reuse for my own ideas and portraits.

  • @tyopoyt
    @tyopoyt Рік тому +7

    You can also use guidance start to make it apply just the style without putting in the whole subject of the source image. I like using values between 0.25 and 0.6 depending on how strong the style should be

    • @Skydam33hoezee
      @Skydam33hoezee Рік тому +5

      Can you explain how you do that? What does 'guidance start' mean?

  • @the_RCB_films
    @the_RCB_films Рік тому

    YEAH SWEET nice!

  • @jameshughes3014
    @jameshughes3014 Рік тому

    I wouldn't have figured out that error, thanks for being awesome

  • @adapptivtech
    @adapptivtech Рік тому

    Thanks!

  • @mr.random4231
    @mr.random4231 Рік тому +4

    Thanks Aitrepreneur for another great video.
    For anyone having this error: "Error - StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference" when the clip_vision preprocessor is loading and the style doesn't apply:
    try this in webui-user.bat: "set COMMANDLINE_ARGS= --xformers --always-batch-cond-uncond". The last parameter, "--always-batch-cond-uncond", did the trick for me.
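    For reference, a minimal sketch of what a typical AUTOMATIC1111 webui-user.bat might look like with that flag added (not from the video; --xformers is optional and only applies if you have xformers installed, and commenters below also report the style adapter failing while --medvram is present):

    @echo off
    rem Leave these empty to let the launcher auto-detect Python and git
    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem --always-batch-cond-uncond reportedly keeps cond/uncond in one batch, which the style adapter's CFG guidance needs (see the "non-batch-cond inference" warning above)
    set COMMANDLINE_ARGS=--xformers --always-batch-cond-uncond
    call webui.bat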

    • @kamransayah
      @kamransayah Рік тому

      Your trick did it for me too. Before that it wasn't working at all! So, Thank you Superman! :D

  • @devnull_
    @devnull_ Рік тому +1

    Nice to see you show what happens when this thing is configured incorrectly, not just the step-by-step without failures. 👍

  • @wndrflx
    @wndrflx Рік тому +1

    How are you getting these long prompts to work with the style transfer? It seems like style won't work without using a much smaller prompt.

  • @junofall
    @junofall Рік тому +10

    You definitely need to make a video about the oobabooga text generation webui! Those of us with decent enough hardware can run 7B-13B parameter LLM models on our own machines with a bit of tweaking, it's really quite something. Especially if you manage to 'acquire' the LLaMA HF model.

    • @robxsiq7744
      @robxsiq7744 Рік тому

      how did you get the config?

    • @eyoo369
      @eyoo369 Рік тому

      Where can you acquire the LLaMA model? Heard quite some buzz around it and read up on it.

    • @mrrooter601
      @mrrooter601 Рік тому +2

      Should probably wait for them to patch 8-bit to work out of the box first. It's still broken without a manual fix on Windows.
      It's issue 147, "support for llama models", on the oobabooga UI.
      Basically you needed to replace some text and install a different version of bitsandbytes manually. Otherwise I was getting errors, and 8-bit would not work at all.
      With a 3090, I was able to launch the 7B with no issues, and the 13B, but it took nearly all my VRAM and basically didn't work; I think it was partially in RAM too, which made it even slower.
      Now with 8-bit actually working (and 4-bit and 3-bit coming soon) I can run the 13B with plenty of VRAM to spare, like 4-6 GB IIRC. The 30B is also confirmed to work on 24 GB when 4-bit is finalized.

  • @ai-bokki
    @ai-bokki Рік тому +1

    Great video K! You are epic as always! I will try making a video on this ;)
    As of now I think ControlNet only supports SD 1.5 models. Can't wait for the 2.1 release.

    • @Aitrepreneur
      @Aitrepreneur  Рік тому

      The 2.1 is already supported :)

    • @ai-bokki
      @ai-bokki Рік тому

      @@Aitrepreneur Oh, is it!? I am getting some error with the Illuminati Diffusion model, but with other 1.5 models ControlNet works fine. I will ask the tech advisor in your Discord. Your Discord is super helpful btw!

  • @davidlankester
    @davidlankester Рік тому

    Is this consistent enough that I could apply the same style in batch to an image sequence for video?

  • @qvimera3darts444
    @qvimera3darts444 Рік тому +1

    Hi Aitrepreneur, which version of Stable Diffusion do you use in your videos? I'm looking for the same one so I can follow your videos, but I didn't succeed. Thanks in advance

  • @j_shelby_damnwird
    @j_shelby_damnwird Рік тому +1

    If I run more than one ControlNet tab I get the CUDA out of memory error (8 GB VRAM GPU). Any suggestions?

  • @Kontor23
    @Kontor23 Рік тому

    I'm still looking for a way to generate consistent styles with the same character(s) in different scenes (for a picture book, for example) without using DreamBooth to train the faces. Like using img2img with a given character and placing the character in different poses and scenes with the help of ControlNet, without needing to train the character with DreamBooth. Is there a way? Like in Midjourney, where you can use the URL of a previously generated image followed by the new prompt to get results with the same character in a new setting.

  • @jippalippa
    @jippalippa Рік тому

    Cool tutorial!
    And how can I apply the style I got to a batch of images taken from a video sequence?

  • @MondayMoustache
    @MondayMoustache Рік тому +7

    When I try this in ControlNet, the style model doesn't affect the outputs at all. What could be wrong?

    • @Gh0sty.14
      @Gh0sty.14 Рік тому +4

      Same. I followed the video exactly and it doesn't change the image at all. I've tried it in both txt2img and img2img and it's doing nothing.

    • @CrixusTheUndefeatedGaul
      @CrixusTheUndefeatedGaul Рік тому +1

      Same here, and I get runtime errors when i use style or color model

  • @ErmilinaLight
    @ErmilinaLight 3 місяці тому

    Thank you! What should we choose as the Control Type? All?
    Also, I noticed that generating an image with txt2img ControlNet from a given image takes a veeeeery long time, even though my machine is decent. Do you have the same?

  • @victorwijayakusuma
    @victorwijayakusuma Рік тому

    Thank you so much for this video, but I am getting this error: "AttributeError: 'NoneType' object has no attribute 'unsqueeze'" when using this feature

  • @AbsalonPrieto
    @AbsalonPrieto Рік тому +2

    What's the difference between the T2I-Adapter models and the regular ControlNet ones?

  • @michaelli7000
    @michaelli7000 Рік тому

    amazing, is there a colab version for this function?

  • @therookiesplaybook
    @therookiesplaybook Рік тому +2

    Where do I find clipvision? I have the latest controlnet and it ain't there.

  • @elmyohipohia936
    @elmyohipohia936 Рік тому

    I use another UI and I don't have the "clip_vision" preprocessor. Where can I get it?

  • @chayjohn1669
    @chayjohn1669 Рік тому +1

    My preprocessor list is different; how do I add the preprocessor?

  • @TheMaxvin
    @TheMaxvin 9 місяців тому

    T2I-Adapter or IP-Adapter in ControlNet: both are for styling, so which is preferable today?

  • @OriBengal
    @OriBengal Рік тому +1

    Have you seen anything that does Style Transfer the way that Deep Dream Generator does? It's not my favorite tool- but that feature alone is quite powerful. I was hoping this was the same thing... In that one, you can upload a photo/painting/etc... and then another image -- and it will recreate the original in this new style.... Complete with copying brush stroke style / textures, etc... I liked it for doing impasto type of stuff....

    • @the_one_and_carpool
      @the_one_and_carpool Рік тому

      Check out Visions of Chaos; it has a lot of machine learning modes and a style transfer in it

  • @victorvideoeditor
    @victorvideoeditor Рік тому

    My ControlNet tab disappeared, any idea? There is no option in Settings > ControlNet :c

  • @thegreatdelusion
    @thegreatdelusion Рік тому

    Why are these models so small? The ones from another video of yours are 5.71 GB. What is the difference? Also, could you make a video on how to get these models to work on a Colab? I couldn't get them to work even though they show up in the ControlNet options. Also, OpenPose doesn't work for me either. I installed the OpenPose editor and added the model; it all shows up fine, I can make a pose, export it to PNG and upload it, but it just gets ignored or something. I get a blank image when I hit generate, even though I selected the OpenPose model in ControlNet and hit enable and generate. This is all on Colab. I can't test it on my PC as my GPU is too weak. Thanks for any help.

  • @the_one_and_carpool
    @the_one_and_carpool Рік тому

    Where do you get clip_vision? I can't find it

  • @squiddymute
    @squiddymute 6 місяців тому

    I don't have a "clip_vision" preprocessor, any idea why?

  • @pladselsker8340
    @pladselsker8340 Рік тому +2

    Interesting model. It doesn't look like you get a lot of control with this style transfer model, though. It's kind of in the right direction, but it's still very very far from being the same style at all. I'll try it out!

    • @myday6074
      @myday6074 Рік тому +1

      It's just not a very good guide. You need to keep your prompt under 75 tokens and it works perfectly

  • @hoangtejieng2247
    @hoangtejieng2247 Рік тому

    I have a question: how do you install and get clip_vision, openpose_hand, and color under the ControlNet preprocessor dropdown? Thank you

  • @vincentvalenzuela3171
    @vincentvalenzuela3171 11 місяців тому

    Hi how do I install clip_vision preprocessor?

  • @duphasdan
    @duphasdan Рік тому +1

    2:04 How does one add the Control Model tabs?

  • @SergioGuerraFX
    @SergioGuerraFX Рік тому

    Hi, I updated the files and restarted the UI. Now the clip_vision preprocessor shows up but not the t2iadapter_style model... did I miss a step?

    • @Aitrepreneur
      @Aitrepreneur  Рік тому

      make sure you have correctly placed the model, then refresh the model list

  • @novalac9910
    @novalac9910 Рік тому

    If you're getting "Warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference", try shortening the prompt to 75 tokens or fewer.

  • @duskfallmusic
    @duskfallmusic Рік тому +1

    Too many toys to keep track of, i'm trying to make a small carrd website for tutorials XD this is just gonna go in the resources OLOLOLOL

  • @iz6996
    @iz6996 Рік тому

    how can i use this for sequence?

  • @Grumpy-Fallboy
    @Grumpy-Fallboy Рік тому

    Can you make an in-depth instructional video about deforum-for-automatic1111-webui? Ty

  • @Skydam33hoezee
    @Skydam33hoezee Рік тому +4

    It doesn't work for me. Updated the ControlNet extension, put the models in the directory. Getting this remark in the console: "Warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference". Anybody else get that message and was able to fix it?

    • @Skydam33hoezee
      @Skydam33hoezee Рік тому

      It does seem that prompt length has something to do with the issue. With shorter prompts it actually does work, but I see Aitrepreneur use prompts that are much longer.

    • @Eins3467
      @Eins3467 Рік тому +1

      Don't use prompts longer than 75 tokens and that error will be gone.

    • @Skydam33hoezee
      @Skydam33hoezee Рік тому

      @@Eins3467 Thanks. Any idea how Aitrepreneur manages to use these much longer prompts? It seems that negative prompts also contribute to the total prompt length in this case.

  • @SoccerMomSuho
    @SoccerMomSuho Рік тому +6

    RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [257, 1024] , anyone else have this error?

    • @SwordofDay
      @SwordofDay Рік тому +4

      yes and I'm super frustrated trying to figure out what happened. I feel like I followed all the steps to a t

    • @alexandr_s
      @alexandr_s Рік тому +2

      +1

    • @Georgioooo000
      @Georgioooo000 Рік тому +2

      @@SwordofDay +1

    • @SwordofDay
      @SwordofDay Рік тому +3

      OK, so an update. I dunno if any of you are still working on this or have committed a small crime to cope, but I found a solution! Go to the stable-diffusion folder, then extensions, then the sd webui controlnet extension, then annotator, then the clip folder. Click in the address bar, backspace, and type cmd while in that folder. When the command prompt comes up, type "git pull" and hit Enter. Restart and good luck! Worked for me; basically you're manually updating the files.
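      In other words, something like this from the command prompt (the path below is just an example, so adjust it to wherever your webui is installed; "sd-webui-controlnet" is the usual folder name for the ControlNet extension, and a git pull run from any folder inside it updates the whole extension):

      cd C:\stable-diffusion-webui\extensions\sd-webui-controlnet
      git pull

      Then restart the web UI so the updated extension is loaded.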

    • @SwordofDay
      @SwordofDay Рік тому

      Also make sure --medvram is not in your webui-user.bat file

  • @snatvb
    @snatvb Рік тому +1

    clip_vision doesn't work for me :(

  • @TheDropOfTheDay
    @TheDropOfTheDay Рік тому

    Jojo one was crazy

  • @kamransayah
    @kamransayah Рік тому +5

    Hey Super K, thanks for your amazing video as usual! Unfortunately, for some reason the t2iadapter_style_sd14v1 model is not working for me at all. All other models are working except this one. So I just thought I'd leave my comment here to see if maybe other people with the same problem have fixed the issue and can point me in the right direction. Thanks for reading! :)

    • @rvre9839
      @rvre9839 Рік тому

      same

    • @DmitryUtkin
      @DmitryUtkin Рік тому

      It doesn't work at all ("Enable CFG-Based guidance" in settings is ON)

    • @DmitryUtkin
      @DmitryUtkin Рік тому

      the solution is in comments below!

    • @kamransayah
      @kamransayah Рік тому +1

      @@DmitryUtkin Thanks for your help but it didn't work.

  • @veralapsa
    @veralapsa Рік тому +3

    Unless it has been updated since that Reddit post, it doesn't work if --medvram is part of your command line. And with 8 GB it only gets me 1 or 2 shots before I have to restart the program, or else I get OOM errors. I can also confirm that keeping the positive and negative prompts under the 75-token max means only the style adapter is needed.

    • @Gh0sty.14
      @Gh0sty.14 Рік тому

      I removed --medvram and it's still not working at all for me.

    • @veralapsa
      @veralapsa Рік тому +1

      @@Gh0sty.14 You may need to change which config is used for the adapter models in the Settings tab => ControlNet so it points to t2iadapter_style_sd14v1.yaml, which, if ControlNet is up to date, should be in your CN models folder. Try that, restart and test.

    • @Gh0sty.14
      @Gh0sty.14 Рік тому

      @@veralapsa Working now! thanks man

    • @corwin.macleod
      @corwin.macleod Рік тому

      It works with the --lowvram cmd option. Use it instead

  • @lujoviste
    @lujoviste Рік тому +2

    For some reason I can't make this work :( It just makes random pictures

  • @draggo69
    @draggo69 Рік тому

    Appreciate the jojo reference!

  • @tuhoci9017
    @tuhoci9017 Рік тому

    I want to download the model you used in this video. Please give me the download link.

  • @dcpln7
    @dcpln7 Рік тому

    Hi, may I ask what the pros and cons are of running Stable Diffusion on Google Colab vs running it locally on a PC? I have an RTX 3070, and when I use ControlNet it runs very slowly. Sometimes it runs out of memory; would running on Colab be faster? And what do you recommend? Thanks in advance.

    • @exshia3240
      @exshia3240 Рік тому

      Colab is limited by time and space.
      Something with your settings ?

  • @ariftagunawan
    @ariftagunawan Рік тому

    Please, teacher, I need a how-to for inpaint batch processing with an inpaint batch mask directory...

  • @tariksaid4536
    @tariksaid4536 Рік тому

    Very useful, thanks.
    Could you please make another video about running Kohya SS on RunPod?
    The last method is not working for me

    • @Aitrepreneur
      @Aitrepreneur  Рік тому

      Yes I have these kinds of videos planned

    • @tariksaid4536
      @tariksaid4536 Рік тому

      @@Aitrepreneur Thanks, I'm really looking forward to it

  • @Rocket-Gaming
    @Rocket-Gaming Рік тому

    Ai, how come you have 5 ControlNet options but I have one?

  • @FleischYT
    @FleischYT Рік тому

    thx! safetensors? ;)

  • @danowarkills4093
    @danowarkills4093 11 місяців тому +1

    Where do you get clip vision preprocessor?

    • @HANKUS
      @HANKUS 4 місяці тому +1

      my question exactly, this is a poor tutorial if it skips over the installation of a key component

  • @zirufe
    @zirufe Рік тому +1

    I can't find Clip Vision Preprocessor. Where should I install it?

    • @megaaziib
      @megaaziib Рік тому

      it is now named t2ia_style_clipvision

  • @TheAiConqueror
    @TheAiConqueror Рік тому

    I knew it... 😁 that you'd upload a video about this. 💪

  • @jurandfantom
    @jurandfantom Рік тому

    So now models take 400 MB instead of 700 MB?

  • @wgxyz
    @wgxyz Рік тому

    AttributeError: 'NoneType' object has no attribute 'convert'. Any ideas?

  • @CProton69
    @CProton69 Рік тому

    Cannot see clip_vision in the preprocessor dropdown. Updating extensions is not working for me!

  • @TREXYT
    @TREXYT Рік тому

    How do I get multiple tabs in ControlNet like you? I don't have them

    • @veralapsa
      @veralapsa Рік тому

      Update Control Net like he says in the first half of the video.

    • @Aitrepreneur
      @Aitrepreneur  Рік тому

      You need to select multiple models in settings, controlnet section, check out my first multi-controlnet video

    • @TREXYT
      @TREXYT Рік тому

      @@Aitrepreneur thanks a lot, sorry i missed some videos i was busy

    • @TREXYT
      @TREXYT Рік тому

      @@veralapsa already did, i got the answer but thanks anyway

  • @0rurin
    @0rurin 4 місяці тому

    Any better way to do this, a year later?

  • @theairchitect
    @theairchitect Рік тому +1

    The clip_vision preprocessor set with tiadapter_style_sd13v1 is not working for me =( No errors; it just generates, and the style image has no impact on the final result. Anyone got this same issue? ControlNet and Stable Diffusion are up to date... frustrating =(

    • @339Memes
      @339Memes Рік тому

      Yeah, same settings, not getting what he's showing

    • @theairchitect
      @theairchitect Рік тому

      @@339Memes I removed all prompts (using img2img with 3 ControlNets active: canny + hed + t2iadapter with the clip_vision preprocessor). During generation the error appears: "warning: StyleAdapter and cfg/guess mode may not works due to non-batch-cond inference" and the generated result comes out with the style not applied =( Frustrating... I tried many denoising strengths in img2img and many weights on the ControlNet instances without success... the style is not applied to the final generated result =( I tried enabling "Enable CFG-Based guidance" in the ControlNet settings too, and it's still not working =( Anyone got this same issue?

    • @CrixusTheUndefeatedGaul
      @CrixusTheUndefeatedGaul Рік тому

      ​@@theairchitect i got the same message! Unfortunately i have no idea whats wrong
      Do you use colab?

    • @theairchitect
      @theairchitect Рік тому

      @@CrixusTheUndefeatedGaul I got the solution! Users with low VRAM like me (I'm on 6 GB) have to remove --medvram at startup. Works for me! =)

    • @CrixusTheUndefeatedGaul
      @CrixusTheUndefeatedGaul Рік тому +1

      @@theairchitect Thanks for the reply! I actually fixed it yesterday though. The problem for me was that i had to change the yaml file used in the settings of webui. Cheers man! The style adapter is awesome, especially when you use multinet to mix the styles of two different images

  • @respectthepiece4833
    @respectthepiece4833 Рік тому +1

    Please help, in my preprocessor list I do not have clip_vision to pick.
    I do have the t2i style model though

    • @3diva01
      @3diva01 Рік тому +1

      It's been renamed to "t2ia_style_clipvision".

    • @respectthepiece4833
      @respectthepiece4833 Рік тому

      @3Diva thanks for letting me know I'll try that

    • @respectthepiece4833
      @respectthepiece4833 Рік тому

      @3Diva for some reason it seems like it totally ignores that preprocessor, or maybe it is the model? I've tried both t2iadapter style fp16 and t2iadapter style sd14v1

    • @3diva01
      @3diva01 Рік тому

      @@respectthepiece4833 I haven't tried it yet, but I was looking forward to it. So it's a bummer that it sounds like it's not working. I'll have to try it out and see if I can figure it out. Thank you for letting me know. *hugs*

    • @respectthepiece4833
      @respectthepiece4833 Рік тому +1

      No problem, yeah it looks amazing, I'll let you know too

  • @EpochEmerge
    @EpochEmerge Рік тому +1

    HOLY GOD COULD WE JUST STOP FOR A MONTH OR SO, I CAN'T EVEN HANDLE THIS AMOUNT OF UPDATES

  • @rageshantony2182
    @rageshantony2182 Рік тому

    How do I convert an anime movie frame to a realistic, photograph-like image?

  • @Thozi1976
    @Thozi1976 Рік тому

    You are using the "posex" extension? *German laughter follows: hehehehehehhehehhhehehe hehehehehehe*

  • @hugoruix_yt995
    @hugoruix_yt995 Рік тому

    how do you get different control model tabs?

    • @xellostube
      @xellostube Рік тому +2

      Inside Settings > ControlNet:
      "Multi ControlNet: Max models amount (requires restart)"

    • @hugoruix_yt995
      @hugoruix_yt995 Рік тому

      @@xellostube hero, thanks!

  • @coleledger6613
    @coleledger6613 Рік тому +1

    Clip skip on your homepage: how does one make this happen?

    • @Aitrepreneur
      @Aitrepreneur  Рік тому +1

      In Settings > User Interface > Quicksettings list, add CLIP_stop_at_last_layers
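      For example, the Quicksettings list field would then read something like this (sd_model_checkpoint is the entry that is usually there by default; entries are comma-separated):

      sd_model_checkpoint, CLIP_stop_at_last_layers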

    • @coleledger6613
      @coleledger6613 Рік тому

      @@Aitrepreneur Thank you for the help I really appreciate it. Keep up the good work!!!!

  • @sephia4583
    @sephia4583 Рік тому +1

    I don't have clip vision preprocessor on my install of controlnet. Should I reinstall it?

    • @Aitrepreneur
      @Aitrepreneur  Рік тому +1

      You just need to update the extension

    • @sephia4583
      @sephia4583 Рік тому

      @@Aitrepreneur You are a lifesaver

  • @valter987
    @valter987 Рік тому

    how to "enable" skip clip on the top?

    • @Aitrepreneur
      @Aitrepreneur  Рік тому +1

      In Settings > User Interface > Quicksettings list, add CLIP_stop_at_last_layers

    • @valter987
      @valter987 Рік тому

      @@Aitrepreneur Thanks for replying

  • @justinthehedgehog3388
    @justinthehedgehog3388 Рік тому +1

    This is all way beyond me. I couldn't possibly keep up with it. I'm still using command-line SD 1.5, let alone a webui.
    I use it to turn my scribbles into something decent looking; I think that's quite enough for me.

  • @ratside9485
    @ratside9485 Рік тому +1

    Can you also achieve better photorealistic images with it, if you transfer the style?

    • @sefrautiq
      @sefrautiq Рік тому

      Technically I think it interrogates the image with CLIP and then adds the extracted data to your prompt (completely speculating, don't punch me). I don't think you gain photorealistic quality from this; better to use VisionGen / Realistic Vision models for that. But anyway, you can experiment

    • @ratside9485
      @ratside9485 Рік тому

      @@sefrautiq Maybe you can add skin details as a style. Let's have a look later and test it.

  • @robxsiq7744
    @robxsiq7744 Рік тому

    No doubt the tensor stuff is due to a bunch of mismatched wonkery from having all these weird AI programs going on: tavern, kobold, mika, SD, jupyter, etc... The question is how to fix it without nuking the OS from orbit and starting over.

  • @LeroyFilon-xh2wp
    @LeroyFilon-xh2wp Рік тому

    Anyone else running this slowly? I'm on an RTX 3090 but it takes 2 minutes to render 1 image. Not what I'm used to, hehe

  • @johnjohn5932
    @johnjohn5932 7 місяців тому

    Pedro is this you???

  • @blackbauer
    @blackbauer Рік тому

    You can do this better in Photoshop right now with blending

  • @K-A_Z_A-K_S_URALA
    @K-A_Z_A-K_S_URALA Рік тому +1

    It doesn't work!!!

  • @eyoo369
    @eyoo369 Рік тому +1

    I wouldn't really call that a style transfer, IMO. That painting by Hokusai has a lot of impressionistic elements which don't seem to be transferred over to the new image. The female character displayed still has that very typical "artgerm, greg rutkowski" style look to it. Still a cool feature nonetheless, but a misleading title. Better to call it "transfer elements from an image"

    • @Aitrepreneur
      @Aitrepreneur  Рік тому

      This was one example among many, each image produces different results

  • @dk_2405
    @dk_2405 Рік тому

    Bruh, the explanation is too fast, but thanks for the video

  • @squiddymute
    @squiddymute 6 місяців тому

    This ain't working, it needs an update

  • @sefrautiq
    @sefrautiq Рік тому

    Hmm, is he french?

  • @StrongzGame
    @StrongzGame Рік тому

    So basically we have RunwayML GEN-1 but a lot, a lot better, and GEN-1 is not even fully released yet 😂😂😂😂

    • @Aitrepreneur
      @Aitrepreneur  Рік тому

      GEN-1 isn't that great from what I heard :/

  • @CaritasGothKaraoke
    @CaritasGothKaraoke Рік тому

    Why does everyone always assume we're using stupid Windows PCs?

  • @KarazP
    @KarazP Рік тому

    I got this message when I used the color adapter:
    runtimeerror: pixel_unshuffle expects height to be divisible by downscale_factor, but input.size(-2)=257 is not divisible by 8
    Does anyone know how I can fix it? I did some research last night but still no sign of any luck 😭