Animations with IPAdapter and ComfyUI

  • Published 27 Jun 2024
  • I just updated the IPAdapter extension for ComfyUI with all features to make better animations! Let's have a look!
    OpenArt Contest: contest.openart.ai/
    IPAdapter Extension: github.com/cubiq/ComfyUI_IPAd...
    AnimateDiff Extension: github.com/Kosinkadink/ComfyU...
    00:00 Intro
    00:09 Animated Masks
    04:21 Logo Animation
    08:48 32-Frames
    11:39 Animated Reference Images
    15:19 Contest and Conclusion
    ** Workflows **
    Download the masks from: cdn.matt3o.com/uploads/files/...
    Logo Animation: pastebin.com/9GbLTxVs
    4 Reference Images Animation: pastebin.com/JCcbciyk
    Dragon Lady: pastebin.com/ncFybRfE
    ** Background music **
    - "Part A" by Alexander Nakarada (www.serpentsoundstudios.com) Licensed under Creative Commons BY Attribution 4.0 License
    - "Manace" Synthwave by Karl Casey @ White Bat Audio (whitebataudio.com/)
    - "Last Stop" Synthwave by Karl Casey @ White Bat Audio

COMMENTS • 161

  • @lucagenovese7207
    @lucagenovese7207 15 hours ago

    Basically I need to study all of your videos step by step, because they are an immense resource for me, having only just started exploring this world. Your way of explaining is wonderful, thank you so much.

  • @wholeness
    @wholeness 7 months ago +3

    The legend himself!

  • @zglows
    @zglows 6 months ago +20

    Dude, your channel is absolutely insane. It has the most advanced tips on ComfyUI right now. Your YouTube channel is starting to grow because people are noticing you are the real deal. Please keep making amazing tutorials.

  • @dck7048
    @dck7048 7 months ago +22

    I really love this focus on generating what you actually want through various tools. Very refreshing compared to the more "black box" approach you often see with SD, and you can see from video to video how learning one thing can be repurposed for different use cases. Wonderful!

  • @enigmatic_e
    @enigmatic_e 7 months ago +9

    every update you make is awesome!

  • @nolanzor
    @nolanzor 7 months ago +4

    Awesome stuff Mateo! thanks so much for sharing these workflow ideas, will definitely be playing around with these, and I can't thank you enough for your continued development on IP-adapter!

  • @mellio19
    @mellio19 3 months ago +2

    This is OpenSource After Effects y'all! CREATIVE FREEDOM FOR ALL!!!! 😛

  • @Sitor79
    @Sitor79 7 months ago +4

    Thank you for your awesome contribution to the community! Great work!

  • @Photomonon
    @Photomonon 6 months ago +4

    Man, thank you for the work you do! No words for my appreciation. You're amazing 👏

  • @Taimoorabdullah
    @Taimoorabdullah 7 months ago +1

    Well done! I always enjoy your tutorials.

  • @dasomen
    @dasomen 7 months ago

    Great as usual! Thank you so much for your work! 🥇

  • @ahtoshkaa
    @ahtoshkaa 7 months ago +2

    Thank you so much for the detailed video. It was very nice to see the intermediate steps as well. Love the blinking girl animation 😍

  • @yotraxx
    @yotraxx 7 months ago +2

    This video is GOLD!!
    Thank you so much for sharing all of your work.
    IPadapter became a must have to me.

  • @comfyuiadrian
    @comfyuiadrian 7 months ago +1

    Thank you so much for sharing the workflow.

  • @user-jg1me5mx6j
    @user-jg1me5mx6j 6 months ago +1

    Thanks from the bottom of my heart. I love your videos so much and really appreciate your amazing work. Please keep updating!

  • @ScraggyDogg
    @ScraggyDogg 6 months ago

    Many thanks. I only just moved from Auto to ComfyUI and assumed it would be a while before I could do much, but your plugins and tutorials have got me there much quicker.
    I used your Logo Animation to drive two random pics I had lying around and love the result. Just the beginning... All the best, and I agree with the comments below: great channel.

  • @promptmuse
    @promptmuse 6 months ago +2

    So so neat, thank you for all your work on this.

  • @areltheking
    @areltheking 2 months ago

    This is honestly one of the best-explained AI videos I've seen. I'll be following you from now on!! :)

  • @ourdailyplanet
    @ourdailyplanet 7 months ago +1

    Thank you very much, Mateo! Excellent work.

  • @NerdyRodent
    @NerdyRodent 7 months ago +8

    Awesome stuff! Best nodes out there 😉

    • @JustFeral
      @JustFeral 7 months ago +1

      Researching for your next video? lol

    • @NerdyRodent
      @NerdyRodent 7 months ago +1

      @@JustFeral 😆

  • @MaxPayne_in
    @MaxPayne_in 7 months ago

    this video introduced lots of new things to learn. Keep up the good work!

  • @Inner-Reflections-AI
    @Inner-Reflections-AI 7 months ago

    Thank you for all your hard work on this node!!!!

  • @jc2shile
    @jc2shile 6 months ago +1

    Thanks for sharing, your videos have always been so insightful, you are a true master and sharer!

  • @aivideos322
    @aivideos322 7 months ago +1

    Great video, also wanted to thank you for all your work.

  • @pedxing
    @pedxing 7 months ago

    stoked to see you connected to the OpenArt contest!!

  • @carstenli
    @carstenli 7 months ago

    Remarkable ✨ - and great to see that you have joined the jury.

  • @davidwadsworth1760
    @davidwadsworth1760 7 months ago +1

    Wow! Great work Matt30

  • @incaseidie
    @incaseidie 6 months ago +2

    You're the developer!!! I've been playing with it for days and it's absolutely amazing.

  • @ted328
    @ted328 1 month ago

    Matteo, you've saved my life more times than I can count. Cheers

  • @AB-wf8ek
    @AB-wf8ek 7 months ago +2

    This is amazing, thank you thank you thank you so much!

  • @impactframes
    @impactframes 7 months ago +2

    Fantastic work 👍❤️

  • @mkvrtgo
    @mkvrtgo 5 months ago +1

    Absolutely nuts; indistinguishable from magic, as the famous quote goes.

  • @neofuturist
    @neofuturist 7 months ago

    Absolutely amazing !!

  • @WhySoBroke
    @WhySoBroke 6 months ago +1

    Matteo Maestro Latente!!! Absolutely amazing!! ❤️🇲🇽❤️

  • @PleaseOpenSourceAI
    @PleaseOpenSourceAI 7 months ago +1

    These tuts are the best!
    👍

  • @BoolitMagnet
    @BoolitMagnet 7 months ago +1

    Wow. Another powerful technique.

  • @BernardMaltais
    @BernardMaltais 7 months ago

    Great work, another very instructive video.

  • @MightyM1ke
    @MightyM1ke 7 months ago +1

    'add a bit-a noise, that always helps!' you're the man!

  • @adelechelmany
    @adelechelmany 6 months ago +10

    Man, there are so many channels about Stable Diffusion and AI on YouTube, but I'm learning more from yours than from all of them combined. Thank you ❤

  • @FunwithBlender
    @FunwithBlender 7 months ago +1

    love it dude keep it up

  • @neonnexusnymph
    @neonnexusnymph 2 months ago

    My hero

  • @AIPixelFusion
    @AIPixelFusion 5 months ago

    wow, super cool stuff!

  • @blackbarba5450
    @blackbarba5450 7 months ago

    What a time to be alive!

  • @vtchiew5937
    @vtchiew5937 7 months ago +1

    Simply amazing. I have learnt way more from this 16-minute video than from other videos of similar length. That logo animation simply blew my mind. Kudos to you!

  • @ColoNihilism
    @ColoNihilism 6 months ago

    real cool!

  • @smitty7326
    @smitty7326 7 months ago +1

    this is the real stuff. great video

  • @hurricanepirate
    @hurricanepirate 7 months ago +2

    very clever

  • @funksmaname
    @funksmaname 7 months ago +1

    Amazing!

  • @user-ui2hw5of9l
    @user-ui2hw5of9l 7 months ago +1

    thank you very much~!

  • @cellonobakery
    @cellonobakery 7 months ago

    Bro, you deserve more subscribers.

  • @tailongjin-yx3ki
    @tailongjin-yx3ki 2 months ago

    so great

  • @Sebastian-cn8lh
    @Sebastian-cn8lh 7 months ago

    Damn... this is a big step up... I'm looking for something like this but with real faces.

  • @esJoyboy
    @esJoyboy 4 months ago

    Genius!

  • @MaybePlato
    @MaybePlato 6 months ago

    Wow, I am just blown away by how useful this has been. To be able to hear you explain the idea behind each step you take is invaluable for learning how to really understand and use these tools. That is something that is hard to find with other youtubers, who are quick to just say "do what I do exactly" without really teaching you the value of each component and node. Great video!

  • @animaticmediaUSA
    @animaticmediaUSA 7 months ago +1

    Mateo, nice work! Where is your Patreon so we can join? FYI, the most popular use case here is animation studios like ours: we want to take existing video and use IPAdapter to add a style, plus ControlNet and AnimateDiff to get precise control over the animation! Keep rocking. Cheers!

  • @rainerzufall1868
    @rainerzufall1868 7 months ago +1

    The very last thing is insane, thanks for the workflow! Any hope for an upload of the two images, i.e. woman and dragon, too? :)

    • @latentvision
      @latentvision  7 months ago +2

      you can find them here: imgur.com/a/Rex4tzX

    • @rainerzufall1868
      @rainerzufall1868 7 months ago +1

      @@latentvision thanks soo much! Have a great day. You deserve so many more subs. They will come, just keep going!!

  • @JustFeral
    @JustFeral 7 months ago

    Holy shit, this is motion graphics in a few clicks.

  • @Dave_AI
    @Dave_AI 7 months ago +1

    Even more stuff for me to learn lol. Thanks so much for your wonderful work. I understand virtually everything in the video apart from ScaledSoftControlnetWeights. What does it do exactly?

    • @latentvision
      @latentvision  7 months ago

      it removes control from the controlnet and gives it back to the text prompt. Experiment with a few values. You lose a little fidelity, but it sometimes helps with the animation
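      [Editor's note] For readers wondering what that scaling can look like, here is a minimal sketch assuming a common "soft weights" scheme, where each of the ControlNet's output blocks gets a geometrically attenuated multiplier so the prompt regains some influence; the node's real implementation (and the direction of the ramp) may differ.

      ```python
      # Sketch of "soft" ControlNet weights: an assumed scheme, not the node's actual code.
      # A base multiplier below 1.0 attenuates the control strength block by block,
      # which hands part of the influence back to the text prompt.
      def soft_controlnet_weights(base_multiplier: float = 0.825, num_blocks: int = 13):
          return [base_multiplier ** i for i in range(num_blocks)]

      print(soft_controlnet_weights())  # e.g. [1.0, 0.825, 0.680625, ...] with the default base
      ```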

  • @dkamhaji
    @dkamhaji 7 months ago +1

    Hi Matteo, incredible as always! Quick question: I see you connected the AnimateDiff to the checkpoint, then out of the AnimateDiff to the IPA, then out of the IPA to the KSampler. Many workflows I have been working with have the IPA connected following the checkpoint and LoRAs, then out of the IPA to the AnimateDiff and then out to the sampler. Are there any benefits to the order you have there?

    • @latentvision
      @latentvision  7 months ago +3

      ideally: Checkpoint > Lora > AnimateDiff > IPAdapter > other stuff like FreeU, RescaleCFG
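      [Editor's note] As a toy illustration of why the order matters (stand-in function names, not the real ComfyUI API): each loader or patcher wraps the model produced by the previous one, so the AnimateDiff motion module is already in place when IPAdapter adds its attention patches.

      ```python
      # Stand-in functions only; the point is the composition order, not the API.
      def load_checkpoint():        return ["base"]
      def apply_lora(model):        return model + ["lora"]
      def apply_animatediff(model): return model + ["motion-module"]
      def apply_ipadapter(model):   return model + ["ipadapter-attention"]

      model = apply_ipadapter(apply_animatediff(apply_lora(load_checkpoint())))
      print(model)  # ['base', 'lora', 'motion-module', 'ipadapter-attention']
      ```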

    • @dkamhaji
      @dkamhaji 6 months ago

      @@latentvision thank you! And what does the rescale cfg node do? Looked really cool in your example.

    • @latentvision
      @latentvision  6 months ago +2

      @@dkamhaji it's a trick to use lower/higher CFG in the Ksampler without sacrificing quality. It often helps with animations and IPAdapter
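      [Editor's note] For the curious, here is a minimal sketch of the rescale-CFG idea as described in the "Common Diffusion Noise Schedules and Sample Steps Are Flawed" paper; the ComfyUI node may differ in its details.

      ```python
      import torch

      def rescale_cfg(cond, uncond, cfg_scale: float = 7.0, multiplier: float = 0.7):
          # plain classifier-free guidance
          denoised = uncond + cfg_scale * (cond - uncond)
          # match the per-sample standard deviation of the conditional prediction
          dims = list(range(1, cond.dim()))
          std_cond = cond.std(dim=dims, keepdim=True)
          std_cfg = denoised.std(dim=dims, keepdim=True)
          rescaled = denoised * (std_cond / std_cfg)
          # blend the rescaled result back with plain CFG
          return multiplier * rescaled + (1 - multiplier) * denoised
      ```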

  • @JamesTrue
    @JamesTrue 6 months ago

    Where did you download IPAdapter_image_encoder_sd15.safetensors? Can't find it anywhere

    • @latentvision
      @latentvision  6 months ago

      just download the image encoder linked in the repository and rename it.

  • @Digital_Toolbox
    @Digital_Toolbox 2 months ago

    Amazing tutorial!! A small question: what does the Scaled Soft Controlnet Weights node do exactly?

    • @latentvision
      @latentvision  2 months ago

      scales the weights over time instead of applying them linearly

  • @eltalismandelafe7531
    @eltalismandelafe7531 7 months ago

    Brilliant Matteo! Your videos are masterful! There are no limits to your creativity and wisdom. Is the resolution of the 16 black & white mask images 512x512? Did you create them yourself or were they purchased from a design collection? Best regards from Spain. Mario

    • @latentvision
      @latentvision  7 months ago +1

      I've put a link in the description with a few masks

    • @eltalismandelafe7531
      @eltalismandelafe7531 7 months ago

      Thank you very much! I see that they are 512x512 like the project. And if the project is 1024x1024 and the images are 512x512, how can you adapt their size? Would it be with the GetImage width and height node? @@latentvision

  • @EranMahalu
    @EranMahalu 6 months ago

    Amazing stuff. Can I also use it to change the style of my input video? I mean input a video and an image for how I want it to look?

    • @latentvision
      @latentvision  6 months ago

      you can use multiple IPAdapters, one for the "animation" and one for the style. You probably need to play a little with timestepping to get exactly what you need, but it should work

  • @RamonGuthrie
    @RamonGuthrie 6 months ago

    Is there a ComfyUI node that allows me to render an output video of the denoising stage? You know, the preview you see on the KSampler during denoising if you have the preview method set to Auto or TAESD. Is there a way to output that as a video?

  • @ALatentPlace
    @ALatentPlace 7 months ago

    Geez... now coffee please!

  • @latent-broadcasting
    @latent-broadcasting 6 months ago

    Thanks for the video! Does the IP-Adapter custom node of ComfyUI download all the models or do I have to manually download them and place them in the folder?

    • @latentvision
      @latentvision  6 months ago

      you have to download them manually

  • @kenster4k
    @kenster4k 7 months ago

    This is great - much appreciated. Having a bit of trouble finding the Masks folders for VHS (e.g. zoom 16) ... do you know where to download those in particular?

    • @latentvision
      @latentvision  7 months ago +1

      those are not included, sorry, I still need to set up a repository where you can download all that stuff. But it's very easy to do with video editing software like Kdenlive. I'll put a link in the description or in a pinned comment when I find a solution

    • @MannyGonzalez
      @MannyGonzalez 4 months ago

      Where do I put the files? I have the zipped mask files, but where do I put them so the node can find them? @@latentvision

  • @shshsh-zy5qq
    @shshsh-zy5qq 1 month ago

    13:20 Hey Matteo, could you explain why you put '2' in the 2nd Repeat Image Batch? You mentioned a 16-frame animation, but you put 6 on the first and 2 on the 2nd.

  • @Smashachu
    @Smashachu 6 months ago

    1:32 Rude, you sound like my doctor.

  • @melondezign
    @melondezign 6 months ago

    This is absolutely incredible. I was very interested in the cat-to-dog transition workflow, but I couldn't find it, so I rebuilt it from the video. Still, it makes an animation with both the start and ending images reinterpreted. I can't figure out how to get a final animation with a start and end as close as possible to the two source images. Do you have an idea or some tips on where to tweak? Thanks for any help, and for everything anyway!

    • @latentvision
      @latentvision  6 months ago +2

      if you need to be very close you could use tile controlnet, it's basically the same workflow as the logo transition

    • @melondezign
      @melondezign 6 months ago

      @@latentvision Thanks! I was stuck on the cat/dog workflow, trying to tweak it. I'm going to check the logo transition workflow then. :)

  • @KriGeta
    @KriGeta 6 months ago

    Hello, amazing video. Could you show some tips and tricks to make a consistent anime character's face and body, together with other controlnet models?

    • @latentvision
      @latentvision  6 months ago +1

      yes, that is in the pipeline

    • @KriGeta
      @KriGeta 6 months ago +2

      @@latentvision if possible, could you make a tutorial on that?

  • @ArisenProdigy
    @ArisenProdigy 6 months ago

    Hey Matt3o, I've been using batch prompting with animatediff for a while now and I swear I thought I saw that there's now a way to use ipadapter plus to trigger an image at particular frames. I can't find it anywhere now that I'm looking for it though. Is there a way to create an animation with batch prompting and use one image at a time to influence it?

    • @latentvision
      @latentvision  6 months ago

      unfold batch is what you are looking for

  • @ericren5390
    @ericren5390 3 months ago

    Hi Mateo, I am studying all of your videos, but I found that when I use 2 pictures and 16 frame masks with IPAdapter during the animation process, I always get a colorful mosaic result, even though I have all the models selected and the settings configured the same as yours. How strange this is. Do you have any clue about it? Thanks.

  • @8561
    @8561 6 months ago

    Love your videos! I am running on a Mac which uses Metal Performance Shaders and not CUDA or CPU. Therefore unfortunately I cannot use IPAdapter as it uses autocast. Any suggested fixes or ways to run? Thanks

    • @latentvision
      @latentvision  6 months ago

      it should work now on a Mac; autocast will run on the CPU.

  • @beatemero6718
    @beatemero6718 7 months ago

    Hi! I have a question. Is it possible to use the "CLIP-ViT-bigG-14-laion2B-39B-b160k" model with SDXL? I always get a size mismatch error. If it is possible, what do I need to do to make it work?

    • @latentvision
      @latentvision  7 months ago

      Only one of the SDXL models is compatible with bigG (the only one without vit-h in the title)

  • @Hamilton_Gilpin
    @Hamilton_Gilpin 6 months ago +1

    Hey bud, how can I contact you? I love your work and would like to collaborate on my education company :) Thanks!

    • @latentvision
      @latentvision  6 months ago +1

      matt3o on discord, but I'm well over capacity at the moment :(

  • @rooley123
    @rooley123 6 months ago

    Been trying to follow along with your last animated example, but with SDXL I've not been having much luck. It could be a great video if you showed a few of your techniques for an SDXL workflow.

    • @latentvision
      @latentvision  6 months ago +1

      SDXL AnimateDiff models are not great unfortunately, but yeah, I'm gonna talk about that too

  • @moviecartoonworld4459
    @moviecartoonworld4459 6 months ago

    Hello! I'm always watching and learning, thank you. I have one question about the section at 3:01:
    I don't see the Mask Preview+ node in the list. Can you tell me how to install it? I also updated to the latest version.

    • @latentvision
      @latentvision  6 months ago +2

      it's in the ComfyUI_Essentials extension

    • @moviecartoonworld4459
      @moviecartoonworld4459 6 months ago

      @@latentvision
      Oh my god!! It's solved. thank you!!!!

  • @tetianaf5172
    @tetianaf5172 5 months ago

    Hi! Thank you so much for the tutorial! I tried to apply this method to my vid-to-vid setup, using the depth mask from the video as masks for IPAdapter, but I get this error at the KSampler stage:
    The size of tensor a (135) must match the size of tensor b (128) at non-singleton dimension 1.

    • @latentvision
      @latentvision  5 months ago

      it's a rounding error, use image resolutions divisible by 8
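      [Editor's note] Context for that advice: the SD VAE downsamples by a factor of 8, so a frame or mask whose pixel dimensions are not multiples of 8 gets rounded to a latent size that may no longer match the rest of the graph (135 vs 128 latent columns in the error above). A small illustration, using a hypothetical helper that is not part of any node pack:

      ```python
      def snap_to_multiple(value: int, multiple: int = 8) -> int:
          # round a pixel dimension down so the 8x VAE downscale is exact
          return (value // multiple) * multiple

      width, height = 1087, 611  # an awkward source resolution
      print(snap_to_multiple(width), snap_to_multiple(height))  # 1080 608, i.e. a 135 x 76 latent
      ```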

  • @aigoof
    @aigoof 7 months ago

    Any repositories you know of for various sets of masks?

    • @latentvision
      @latentvision  7 months ago

      check the video description :)

  • @christianholl7924
    @christianholl7924 6 months ago

    A video from IPAdapter sounds great! But the restriction to 224x224 pixel input images is a bit annoying if you're trying to get really specific results, isn't it?

    • @latentvision
      @latentvision  6 months ago +1

      we can only hope for higher resolution image encoders unfortunately

  • @patchworkpants
    @patchworkpants 6 months ago

    I'm having so much trouble trying to adapt this to work for SDXL. Could you do a video specifically about your last couple of setups but adapted for SDXL?

    • @latentvision
      @latentvision  6 months ago +1

      AnimateDiff SDXL models are not great yet, but I'll make a more in-depth video about animations in the future

  • @devoiddesign
    @devoiddesign 6 months ago

    Do you have a channel where we can ask questions? I have been stuffing my brain with as much ComfyUI as I can, and I feel as if I may be asking a silly question here... how do I get an image like you did, where everything in the image is the same except her eyes being open or closed?

    • @latentvision
      @latentvision  6 months ago

      I'm thinking of opening a Discord but I'm well over capacity and I'm a bit worried that it would suck too much of my time... I need to think a bit about it

    • @devoiddesign
      @devoiddesign 6 months ago +1

      @@latentvision I think your discord is going quite well!! So glad you took the leap! Thank you!

  • @AlecuGrigore
    @AlecuGrigore 4 months ago

    Hi Mateo,
    Can the IPAdapter weight be scheduled?

    • @latentvision
      @latentvision  4 months ago +1

      not by a Jedi 😄
      no, not at the moment, but I'll make it possible in a future update

  • @samlavi
    @samlavi 4 months ago

    Hi, can you please explain when I should enable Unfold Batch? I have the Apply IPAdapter node connected to Load Image Batch From Dir (Inspire), which feeds it a batch of images. However, when I enable unfold batch I'm getting strange results that have nothing to do with the input images (the input images flash as-is at the start of the video, then strangeness). When it's disabled it's closer to the input images. So when should I use it? What does it actually do? Thanks!

  • @defensez0ne
    @defensez0ne 6 months ago

    I loaded your workflow for the logo and I get this error; when I remove the mask everything works:
    Error occurred when executing KSampler:
    Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [16, 512, 512] and output size of (64, 64). Please provide input tensor in (N, C, d1, d2, ...,dK) format and output size in (o1, o2, ...,oK) format.

    • @latentvision
      @latentvision  6 months ago

      you probably have an outdated version of either the extension or ComfyUI

  • @exnexe
    @exnexe 6 months ago

    What is the difference between running the model through IPAdapter before AnimateDiff vs after?

    • @latentvision
      @latentvision  6 months ago

      ideally it's: checkpoint > loras > AD > IPA. AD makes some changes to the model pipelines that are used later by IPA. So just to be sure I'd put IPA after AD

  • @SurfSurf934
    @SurfSurf934 6 months ago

    Hi, I can't see which CLIP vision model is loaded at 1:16

  • @vtchiew5937
    @vtchiew5937 6 months ago

    I tried updating ComfyUI and all the custom nodes, but then I'm getting this error: AttributeError: 'NoneType' object has no attribute 'to' in ControlNet.py. Do you have any idea if I have to disable any custom nodes, or install some other nodes?

    • @latentvision
      @latentvision  6 months ago

      maybe you are using the wrong controlnet model, there are a couple around

    • @vtchiew5937
      @vtchiew5937 6 months ago

      @@latentvision !!!! Thanks for the advice, I was using a ControlNet LoRA. Problem solved after switching to the standard ControlNet .pth file

  • @HexaKafa
    @HexaKafa 22 days ago

    @Latent Vision pastebin links are broken.

  • @TheJPinder
    @TheJPinder 4 months ago

    I'm trying to figure out how to do this with a video and an image, as in, take a photo of a teddy bear and have the video of two people morph into teddy bears.

  • @user-tl5lh7yg8k
    @user-tl5lh7yg8k 7 months ago

    Hello, why don't I have your mask selection in my image loading node?

    • @latentvision
      @latentvision  7 months ago

      I'll post a link to the masks. Sorry if I didn't before

    • @user-tl5lh7yg8k
      @user-tl5lh7yg8k 7 months ago

      @@latentvision Thank you very much, I think it is great, I like your tutorial very much, I learned a lot from it ❤

  • @singinggoldgem
    @singinggoldgem 6 months ago

    At 2:11, what do I need to do in the directory option of VHS_loadimages so that the directories from your MASK folder show up? Where do they need to be placed? Hoping for your reply.

    • @latentvision
      @latentvision  6 months ago +1

      copy the video (or the frames) in the comfyui "input" directory, or alternatively you can use the "load video path" or "load images path" node

    • @singinggoldgem
      @singinggoldgem 6 months ago +1

      Thank you @@latentvision

    • @MannyGonzalez
      @MannyGonzalez 4 months ago

      Ahhhh... thank you... this was what I was looking for. @@latentvision

  • @EmanueleDelFio
    @EmanueleDelFio 6 months ago

    Great stuff Matteo, I use AnimateDiff on SD 1.5 and in my opinion the animations are still excellent, but could you give me a (paid) course on ComfyUI?

    • @latentvision
      @latentvision  6 months ago

      if only I had the time... gladly...

  • @kiiikoooPT
    @kiiikoooPT 6 months ago

    Maybe I'm stupid, but you can basically make an entire movie with this, using character morphs, movements, inpainting...
    I cannot use this effectively because I have a cheap PC, but I'm about to pay some money for a dedicated machine and build a workflow.
    I just have one question: imagine you have that last schema, you save it and load it whenever you want and make the same kind of generation; that part I understand. But is there a way to, for example, get the output of that schema with the girl blinking her eyes and continue in another workflow or schema without the need to manually save and load the other schema? Can you queue it and then make another flow, queue that too, but the second queued generation depends on the first one? Can you do that with ComfyUI? If so, can anyone point me to any video or reference on how to do that?
    Thank you for your content, for your work with the IPAdapter, and for any response in advance ;)

    • @latentvision
      @latentvision  6 months ago +1

      you can link as many nodes as you want in a workflow... as long as you have enough system resources you can link any output image to any new image generation

    • @kiiikoooPT
      @kiiikoooPT 6 months ago

      I was thinking more of using the workflows as separate functions, like a way to connect a workflow to another workflow that is not loaded. But the way you said that gave me another view on this: just put all the workflows inside one and activate them when needed.
      But yeah, I think it will be very memory intensive keeping a huge workflow with lots of nodes.
      Thank you for your feedback ;) @@latentvision

  • @j_shelby_damnwird
    @j_shelby_damnwird 7 months ago

    Is it possible to run these workflows on an 8gb VRAM card?

    • @latentvision
      @latentvision  7 months ago +2

      if you are very careful with your resources, yeah. even better on linux

    • @j_shelby_damnwird
      @j_shelby_damnwird 7 months ago

      @@latentvision Thank you. Do you have a video on your channel covering that?

  • @BG4-R
    @BG4-R 6 months ago

    What is the Discord channel?

    • @latentvision
      @latentvision  6 months ago +1

      I'm thinking of opening a Discord but I'm afraid it would suck up all my very little spare time I have left... I'll let you guys know if I find the courage to open one

  • @GCAGATGAGTTAGCAAGA
    @GCAGATGAGTTAGCAAGA 6 months ago

    Yeah, it is cool and all that, but it is still working with words and not shapes. Artists work primarily with shapes, and that is what the industry requires them to do. I would jump into this as soon as it gets closer to 3D modelling or sculpting :P Think of neural networks more as a very densely archived asset library of materials, noise patterns, photo-realistic surfaces, etc. But I don't like that you can assign any word to some series of shapes or images. It's not like I am going to whine about ethics and where these "shapes" are coming from once again, but think about censorship, for example, and how the people who trained a particular dataset can limit your creativity and control your canvas through the words.

    • @latentvision
      @latentvision  6 months ago

      for shapes you have controlnets

  • @timtom1847
    @timtom1847 6 months ago

    Thanks Matteo for another fantastic video. Can I get in touch with you? I have a work proposal. Maybe you have a Discord?

    • @latentvision
      @latentvision  6 months ago

      I'm matt3o on Discord but I'm really swamped at the moment... I don't even have time to breathe :)