Infinite Variations with ComfyUI

  • Published 24 Dec 2024

COMMENTS • 143

  • @zef3k · 1 year ago +45

    dude unsampler is sick! I love that you're showing how some of these other nodes work and not just ipadapter, thanks!

  • @BoolitMagnet · 1 year ago +27

    Wow. Another great video, so much info and all clearly explained. Your mastery of ComfyUI is impressive.

  • @knabbi · 1 year ago +11

    "...and now she is pissed."
    Never had a better introduction to another useful ComfyUI node 😂 Appreciate your work and your entertaining videos. I like your effective and pragmatic way of explaining.
    Thanks.

  • @Dunc4n1d4h0 · 1 year ago +5

    Hahaha, "and now she's pissed". I would never miss a lesson with such a teacher 🙂 Every time I watch something from you I get new ideas, thank you.

  • @pedxing · 1 year ago +16

    absolutely love watching these work sessions. ❤‍🔥💡💪

  • @Paulo-ut1li · 11 months ago +3

    Saying this channel is the best ComfyUI resource on YT is an understatement. Thank you Matteo, please keep up the amazing work!

  • @WallyMahar · 3 months ago +1

    A WORKFLOW I DOWNLOADED AND IT ACTUALLY WORKS! OMFG!
    You don't understand how rare that is. As a pro artist but a Python noob, I have a list at least a screen long of workflows that just don't work, and I don't understand why. I have spent hours and hours on them and usually give up. Thank you.

  • @ronnykhalil · 1 year ago +4

    Unsampler is an insane option whose potential I can only begin to imagine. Thanks for shining a light on all these unsung heroes. The channel remains my favorite by a long shot.

  • @1E9L9I7J1A6 · 2 months ago

    Not much to say other than thank you very much. Great videos; I'm about to explore your whole channel. You definitely just won a new regular viewer.

  • @BuckleyandAugustin · 3 months ago

    I agree with everyone here your content is so valuable, thank you for all you do Matteo!

  • @latent-broadcasting · 1 year ago +1

    The unsampler blew my mind! It's amazing all the possibilities available with ComfyUI. Thanks for the tutorial!

  • @ProzacgodAI · 1 year ago +1

    I was playing with the unsampler, and went (total 20 steps) - unsampler(5 steps) -> advanced sampler (5 steps -> 10 steps) -> advanced sampler +add noise (10->20) and it produces really good variations. I can even supply it with a new prompt at the last step and it's really really good at integrating it and keeping consistency
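The chaining the commenter describes maps onto start/end step ranges like those in ComfyUI's KSamplerAdvanced settings. Here is a minimal sketch of that step plan; the stage names and the `stage_plan` helper are purely illustrative, not actual ComfyUI API:

```python
# Sketch of the commenter's recipe with 20 total steps: unsample the
# first 5 steps, resample steps 5->10 without added noise, then resample
# steps 10->20 with fresh noise (optionally with a new prompt).
# Names are illustrative stand-ins for node settings, not real API calls.

TOTAL_STEPS = 20

def stage_plan(total=TOTAL_STEPS):
    """Return (stage_name, start_step, end_step, add_noise) per stage."""
    return [
        ("unsampler",         0,          total // 4, False),  # 0 -> 5
        ("ksampler_advanced", total // 4, total // 2, False),  # 5 -> 10
        ("ksampler_advanced", total // 2, total,      True),   # 10 -> 20
    ]

for name, start, end, add_noise in stage_plan():
    print(f"{name}: steps {start}->{end}, add_noise={add_noise}")
```

The point of the split is that only the last stage injects fresh noise, so earlier steps preserve the original composition while the final half produces the variation.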

    • @latentvision · 1 year ago +2

      I guess this is a situation like: give a man a fish and you feed him for a day. Teach him how to fish and you feed him for a lifetime 😄

    • @ProzacgodAI · 1 year ago +1

      ​@@latentvision Give the man the seed for the fish image and he'll have variations for a lifetime...

  • @chadhamlet · 1 year ago +3

    Wow! Using the noise in this fashion really makes it so much nicer than image to image. I've done some really great enhancements of some old 1.5 generations that kept the look of the old but dramatically increased the details with the newer SDXL models. I've never had an upscale do something this nice and not change the image. Can't wait to see what you've got planned next. Your videos are amazing! I'd love to see you tackle a workflow that is geared towards reusing a character, face, clothes and in multiple poses!

  • @TheDocPixel · 1 year ago +2

    You have read my mind! I've been searching for more information, usage videos, and tutorials for all of these nodes that are bundled in packages that other YTers suggest installing but use only one or two of. Please continue with these easy, to-the-point videos for advanced users. WE NEED THEM!

  • @ooiirraa · 1 year ago +6

    Dear Matteo, I became your absolute fan 🎉 Your videos and projects (IPAdapter) are generous and abundant. Everything you produce is valuable yet understandable at the same time. Thank you very much, please keep creating ❤

  • @michaelbayes802 · 1 year ago +2

    Wow! You could have made 10 videos with this content. Respect

  • @antiquechrono · 1 year ago +1

    Short, to the point, and absolutely jam packed with information. Great video.

  • @uk3dcom · 1 year ago +1

    So many useful nuggets of information. Taking control of the generative image is fascinating. Thank you. ❤

  • @johnmcaleer6917 · 1 year ago

    'and now she's pissed' cracked me up... Your vids continue to impress and your knowledge of such a new subject is amazing... Love your explanations and subject choices. Wonderful stuff again.

  • @64kernel · 1 year ago +2

    Applying this in my workflow immediately. Very useful. Thanks!

  • @Bartskol · 1 year ago +1

    This video is gold.

  • @eagleeyedjoe0075 · 1 year ago +1

    These videos are fantastic, I'm learning many new techniques and you've introduced me to loads of new nodes. Can't wait to see the new IPAdapter you mentioned.

  • @nawrasryhan · 1 year ago

    The best ComfyUI tutorials, hands down; the amount of info, small tips, and real experience you show in these videos is unmatched and highly appreciated. Keep it up, and of course thanks for sharing!

  • @dck7048 · 1 year ago +1

    These videos are so consistently useful, thanks for taking the time! Even on subjects that you'd think are "solved" like image variations, the fine control can be a real asset when you're looking to generate something specific.

  • @vizsumit · 1 year ago

    You are making me fall in love with ComfyUI.

  • @Renzsu · 1 year ago +1

    Love your videos man, they're a joy to watch. And I like how you keep your examples relatively simple and straight to the point, no unnecessary fluff :)

  • @crow-mag4827 · 1 year ago

    Found you after the release of IPAdapter; your skills in Comfy are amazing. Watching all your videos.

  • @LucasSavelli-e3w · 1 year ago

    Matteo, YOU are the god! Thank you so much for sharing all your knowledge with us!

  • @moviecartoonworld4459 · 1 year ago +1

    I am always grateful to hear amazing and moving lectures.

  • @morphidevtalk · 10 months ago +1

    mindblowing! ty for the workflow! i'll try it for myself

  • @Enricii · 1 year ago

    CRAZY!
    My favourite one was the unsampler method. I think I need to play with it very soon!
    Thanks again for everything you do!

  • @terrorcuda1832 · 1 year ago +1

    That was a fantastic video. I want to leave work and go home and experiment.

  • @svenhinrichs4072 · 1 year ago

    Thanks a lot. Your tutorials are great ! Perfectly explained and going to the details which are really hard to find out without the technical insights. Keep up the great work!

  • @TheJAM_Sr · 1 year ago

    Wow, great demonstration!
    I have been playing around with combining noises for a bit now and I still learned a lot!
    I'm going to take what I've learned here and play around with all the different types of noise.

  • @Ulayo · 1 year ago +2

    This video is amazing! I learned so much today! 👍

  • @world4ai · 1 year ago +2

    I have to say that so far I found all of your videos really useful. I would like some AnimateDiff tutorials.

  • @HisWorkman · 1 year ago +1

    As always this was a fantastic tutorial. Thank you!

  • @human-error · 9 months ago +1

    Amazing as usual, Matteo. Thank you!

  • @JoeSim8s · 8 months ago +1

    Pure gold! Thank you!

  • @MannyGonzalez · 10 months ago

    Absolute master class. Thanks for these tutorials.

  • @tiporight · 11 months ago

    Excellent. Thank you for sharing these kinds of tutorials.

  • @tonikunec · 1 year ago

    That's pretty amazing! I am kinda new to all this AI thing and still learning a lot, but this video really opened my eyes on how to get started and make even more amazing stuff. Keep those videos coming as it seems you really know your stuff! Subscribed!

  • @steveyy3567 · 7 months ago

    mind blowing, great job!

  • @roktecha · 1 year ago +1

    These videos are excellent! Thank you

  • @pedxing · 1 year ago

    REALLY looking forward to seeing your process for the logo animation as well!

  • @abdelkaioumbouaicha · 1 year ago +1

    📝 Summary of Key Points:
    The speaker discusses various techniques for creating small variations on an image using an SDXL workflow, such as adding low-weight tokens or random numbers to the prompt to slightly change the image.
    The concept of "horror negatives" is introduced, where negative prompts with words like "horror" or "zombie" are used to achieve a cleaner result.
    Conditioning concat is explained as a way to change the style or details of an image while keeping the same composition. Conditioning combine is also discussed for achieving more mutation in the image.
    The IPAdapter is explored as a way to guide the composition of the image, using different reference images to achieve different styles.
    The unsampler node from the ComfyUI_Noise extension is shown as a technique to modify an existing image by reversing the sampling process until it recovers the noise of the first generation step.
    Creating a batch of images with small differences is demonstrated using fixed base noise and the slerp latent node. The strength of the noise can be adjusted, and a new batch of similar images can be generated by changing the seed in the noise generator.
    💡 Additional Insights and Observations:
    💬 "There is no one-size-fits-all solution" - The speaker emphasizes that different techniques may work better for different images and prompts.
    📊 No specific data or statistics were mentioned in the video.
    🌐 The video provides practical examples and demonstrations to support the techniques discussed.
    📣 Concluding Remarks:
    The video provides a comprehensive overview of techniques for creating image variations using an SDXL workflow. From simple tricks like adding tokens or random numbers to more advanced techniques like conditioning concat and the IPAdapter, the speaker demonstrates practical examples and offers valuable insights for achieving desired image variations.
    Generated using Talkbud (Browser Extension)
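The slerp-based noise mixing mentioned in the summary (a fixed base noise nudged toward a second noise at low strength) can be sketched in NumPy. This is a minimal illustration of spherical interpolation between two noise tensors, assuming float arrays, and is not the extension's actual implementation:

```python
import numpy as np

def slerp(a, b, t):
    """Spherically interpolate between two noise tensors a and b.

    t=0 returns a, t=1 returns b; small t gives a subtle variation of a.
    Illustrative sketch only, not the 'slerp latent' node's real code.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / np.linalg.norm(a_flat)
    b_unit = b_flat / np.linalg.norm(b_flat)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    out = (np.sin((1.0 - t) * omega) * a_flat
           + np.sin(t * omega) * b_flat) / np.sin(omega)
    return out.reshape(a.shape)

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 64, 64))     # fixed base noise for the batch
variant = rng.standard_normal((4, 64, 64))  # per-image noise from a new seed
mixed = slerp(base, variant, 0.1)           # low strength -> subtle variation
```

Keeping `base` fixed and re-seeding only `variant` is what makes the whole batch stay similar, per the summary.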

  • @MicheleBrugiolo · 1 year ago +1

    Thank you, thank you, thank you!

  • @ysy69 · 1 year ago +2

    Incredible. Thank you!

  • @hakandurgut · 1 year ago +4

    In the last 16 minutes I have learned more than I had in the last few months... great video, great knowledge... are you an AI scientist?

  • @WhySoBroke · 1 year ago +1

    You have my full attention Maestro Latente!!! Please create a discord community!! ❤️🇲🇽❤️

  • @ChandreshJoshi · 1 year ago

    Your approach is very creative and very easy to understand. Thanks for the video!

  • @sincdraws · 11 months ago

    great stuff as always

  • @thelookerful · 7 months ago

    These tutorials are great!!

  • 1 year ago

    You are amazing.. This is the best video I've ever seen...

  • @christianblinde · 1 year ago

    Great examples with good explanations.

  • @TimVerweij · 1 year ago

    So much useful information! Thanks!

  • @koalanation · 1 year ago +1

    This is a great essentials video! Thanks Matteo. Not sure if everyone thinks inpainting is lame, though 😂😂😂

  • @paulofalca0 · 1 year ago +1

    Great stuff! Thanks!

  • @dflfd · 10 months ago

    thank you, this is really great!

  • @MikevomMars · 7 months ago

    Just adding a number to the prompt to get a variation is true ZEN - simple but effective 😊

  • @danielmatejka1976 · 1 year ago +1

    thank you ❤

  • @j_shelby_damnwird · 1 year ago +1

    This and Scott's are the coolest AI art channels. Kudos! Are these workflows available somewhere for reverse engineering? I tried to follow along, but it's hard to keep track of everything that's going on.

  • @bwheldale · 1 year ago

    I'm slowly absorbing these valuable insights; my favourite Comfy channel. At the beginning of 'light conditioning' I wasn't getting subtle changes; they were drastic until I tried other seeds. Some worked for subtle changes while some did not. Unless I'm mistaken, this light conditioning may be seed dependent. Just wondering if some of the seeds you tried weren't "subtle friendly"?

    • @latentvision · 1 year ago

      Sometimes it's hard to see them, but there's always a difference. Try the "enhance difference" node from the Comfy_Essentials extension. Yes, some seeds will show more difference than others, but it's completely random.
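As a rough illustration of what an "enhance difference" operation does (amplifying tiny per-pixel differences between two renders so subtle seed-to-seed changes become visible), here is a minimal NumPy sketch. It assumes float images in [0, 1] and is not the Comfy_Essentials node's actual implementation:

```python
import numpy as np

def enhance_difference(img_a, img_b, exponent=0.5):
    """Amplify small differences between two images.

    An exponent < 1 boosts small absolute differences toward visibility
    (e.g. a 0.04 difference becomes 0.2 with exponent 0.5).
    Illustrative sketch only; assumes float arrays in [0, 1].
    """
    diff = np.abs(img_a - img_b)
    return np.clip(diff ** exponent, 0.0, 1.0)
```

Feeding two near-identical generations through this makes it obvious where a seed change actually altered the image.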

    • @bwheldale · 1 year ago

      My apologies, I was just about to edit my post to say my wiring to each text box was not from both text_g and text_l. It's now all working fine and looks exactly like yours, with the subtle results achieved. I'll also play with the extension as suggested, thank you for the tips.

  • @P4TCH5S · 1 year ago +1

    so cool! thank you

  • @Homopolitan_ai · 10 months ago

    Total ❤

  • @impactframes · 1 year ago

    Another excellent tutorial. ❤

  • @HideousSlots · 1 year ago

    Awesome!

  • @tomolson6169 · 11 months ago

    I noticed you never re-adjusted the width/height values on the CLIPTextEncode nodes after you switched to the Unsampler demo, even though you started working with a different latent size. Was that just an oversight? It didn't seem to make a difference; your images still looked GREAT! I was just curious. I ended up using a node template for SDXL with primitives set up to quickly adjust the values to 4x the latent size as you suggested. Thank you so much for all your teachings! You've helped me GREATLY!

    • @latentvision · 11 months ago +2

      Yeah, I noticed after I posted the video. The size conditioning doesn't make much difference; it's more of a refinement, so it's not crucial. But yeah, in this case it's an oversight.

  • @pk.9436 · 1 year ago

    great work 👏

  • @fgmanfredini · 1 year ago +1

    Very nice, really! Very useful, thank you. If I can give you a suggestion, it would be for a video about dynamic composition using automatic masks. Example: generate a subject, cut it out with automatic masking (SAM?), paste it over a generated background, then do a second pass to fix the composition, and then generate variations of the background for the same subject, or vice versa.

  • @petruschka222 · 1 year ago

    Thank You. Great Job.

  • @petneb · 25 days ago

    Wonderful

  • @blisterfingers8169 · 1 year ago

    Would conditioning concat be the same as something like Automatic1111's blend function or is it something different?
    Love these videos, thanks!
    Also: "a hint of Klimt" had me chuckling.

    • @latentvision · 1 year ago

      no, blend is another option. The node is called conditioning average.

  • @kdesign1579 · 10 months ago

    awesome!

  • @bgtubber · 6 months ago

    Fascinating! Is this something like the Noise Inversion feature in A1111?

  • @opposegravity · 1 year ago

    Can you go over all the Comfy nodes? I've learned more watching your videos than from any other resource! Thanks

    • @latentvision · 1 year ago

      I started doing that, but it's a bit boring...

    • @opposegravity · 1 year ago

      @@latentvision Maybe they're boring to make, but not to watch; I'm enjoying the content!

  • @PradeepKumar6 · 11 months ago

    Great video. I have a question: what are text_g and text_l in the CLIP text encode node? Thanks

  • @chornsokun · 5 months ago

    Thank you Matteo for the great content. Could you advise which node/extension is used in the clip to convert noise into an input?

    • @latentvision · 5 months ago

      you mean the unsampler? it's comfyui_noise

    • @chornsokun · 5 months ago

      @@latentvision the step at ua-cam.com/video/Ev44xkbnbeQ/v-deo.htmlsi=cWiy-uDpeQelusMM&t=58 and at 1:00, the noise_seed node, which I can't find in base Comfy

    • @latentvision · 5 months ago

      @@chornsokun That's just a primitive. Convert the seed to an input and you can connect a primitive to it.

  • @salvatorecancilla1605 · 5 months ago

    You're great!

  • @GForcenuwan · 1 year ago +1

    wow💡

  • @ai-roman-ai · 1 year ago

    I love your videos; they are the best! I want to generate keyframes and then interpolate them to create a realistic video, without any time constraints.
    Can you advise me on how to apply your approaches to create consistent frames like the ones you show in this or other videos? For example: a dog plays with a ball in the garden. The dog must run and be in a different position in each frame; the camera does not move. How do I specify the position of the dog and the ball in each keyframe?

    • @latentvision · 1 year ago

      What you are asking is pretty complicated; it can't really be explained in a YT comment.

    • @aliyilmaz852 · 9 months ago

      @@latentvision It would be good if you could teach us in another video. BTW, you are amazing, Matteo!

  • @alexgilseg · 11 months ago

    This is really cool; however, I have a question. In the video you set "end at step" to 0 and it keeps the structure of the loaded image. When I set it to 0, it uses nothing of my loaded image and just goes by the prompt. And that's what I thought the whole point was: to go backwards through an image and then generate from there, so to say. By setting it to zero, don't you tell the workflow to ignore the loaded image?

  • @kakochka1 · 1 year ago

    @latentvision Could you explain how you created the start_at_step primitive (to control both Unsampler and KSampler inputs) with just one click and the correct naming? Is this some custom-node magic? And as an idea for future videos: could you share how you debug the contents of different nodes (MaskPreview and PreviewImage aside) with int/bool/etc. values in them?

    • @latentvision · 1 year ago

      double click on the input little dot 😄

  • @___x__x_r___xa__x_____f______

    Matteo, I wish you would explore latent upscaling and show us some useful possibilities for getting high-frequency details most effectively, through iterative step upscaling and through other more esoteric modes such as block weights, etc. And how to best leverage specialised upscale models such as SkinDiff, etc.

    • @latentvision · 1 year ago +1

      yeah working with noise to increase details is in the pipeline :)

    • @___x__x_r___xa__x_____f______ · 1 year ago

      @@latentvision Right, what you just showed us! That is a great idea. I will try it now. Love this community!

  • @iozsoo · 11 months ago

    Why doesn't my SDXL node have green pins on it? Also, my positive and negative prompts take conditioning, not strings :(

  • @bgtubber · 6 months ago

    I tried this with a few images. I'm getting back similar images, but not the same as the originals. What am I doing wrong? Mostly the background is different, while the subject stays more or less the same (some little differences in attire).

    • @latentvision · 6 months ago

      Hard to say; it was an "old" workflow, so it might just be a matter of updated checkpoints or a different version of some library.

    • @bgtubber · 6 months ago

      @@latentvision Ah, I see. No worries. I'll keep trying. Hopefully I'll figure it out. :)

  • @luiswebdev8292 · 1 year ago

    Can you explain in more detail why you're using CLIPTextEncodeSDXL and not just CLIPTextEncode? Is it important to this workflow?

    • @latentvision · 1 year ago

      no, it's not essential. As I mentioned at the very beginning CLIPTextEncodeSDXL generally gives slightly sharper details

    • @luiswebdev8292 · 1 year ago

      @@latentvision That only works with SDXL models, right? Is there an alternative for other models (e.g. DreamShaper), or for those would you simply use CLIPTextEncode?

  • @81sw0le · 1 year ago

    I have a unique way of creating characters in Midjourney. I'd like to use one as an IPAdapter reference and pose it, but I never get any good results (very detailed, grotesque cartoon style).
    The goal is to be able to create a character sheet so I can animate it.
    Have you seen a way to do something like this?

    • @latentvision · 1 year ago

      I'd need to see the pictures. Technically it's possible; you probably need a checkpoint or a LoRA with a close style, and it depends on the kind of result and fidelity you are after.

    • @81sw0le · 1 year ago

      @@latentvision Do you have a Discord so I can send you the images?

  • @AntonioRomero-x1e · 10 months ago

    I've watched this video many times trying to use one of these methods to fake an "unstable" animation. AnimateDiff evolved so quickly that it seems impossible now to make each frame in a different style... Can you make a video on how to make a video with AnimateDiff where IPAdapter keeps the identity of the main subject but the rest of the composition changes style in each frame? Keep in mind that scheduled prompts are not a solution here; it would be very difficult to write a prompt for each frame.

  • @bobgalka · 6 months ago

    I just have to laugh... I wanted to use some of the ideas from this workflow, started building my own flow, and almost immediately got stuck on the pos and neg nodes... It took me a while to figure out that the nodes are called PrimitiveNode, so I added that, but it looked nothing like yours. I tried different things, then thought to just copy-paste the node into my new flow... nope, no text area to type in. How did you create those PrimitiveNode nodes with a string output and a multiline text area? BTW, I am totally enjoying myself watching and learning from your videos ;O)

  • @generalawareness101 · 1 year ago

    For whatever reason, if I set the int to 0 I get nothing, and the closer I get to the sample steps (30 in this example), the more the image comes in.

  • @gamersgabangest3179 · 10 months ago

    Hi Matteo, which GPU do you use? Thanks

  • @cyril1111 · 1 year ago +1

    Thanks for the explanations! Super helpful! Now, I'm a bit confused by the width and height of your TextEncodeSDXL: they're huge! How come it runs so fast in your workflow, when for me it takes more than 5 minutes on a 4090?

  • @kikoking5009 · 8 months ago

    The Unsampler node is not working:
    it shows "(import failed)" after downloading.

    • @latentvision · 8 months ago +1

      Comfy made a breaking upgrade; the nodes need to be updated. I believe the unsampler should be fine now.

  • @Kikoking-y9b · 8 months ago

    Hello, I have two issues.
    Repeat Latent Batch gives exactly two identical images.
    And working with Get Sigma shows this error:
    Error occurred when executing BNK_GetSigma:
    'SDXL' object has no attribute 'get_model_object'

    • @latentvision · 8 months ago

      you probably just need to upgrade comfy

    • @Kikoking-y9b · 8 months ago

      @@latentvision Unfortunately no, the error is still there, also with the KSampler variation with noise injection.
      I tried with the Juggernaut SDXL checkpoint and the sd_xl_base 1.0 checkpoint. Same issue with 'get_model_object'.

    • @xieporter · 8 months ago

      I have the same problem

    • @Kikoking-y9b · 8 months ago

      @@latentvision Would it help to delete Comfy entirely and install it again, so maybe the error goes away? A lot of updates didn't help at all. It's crazy.

  • @dan323609 · 1 year ago

    What is sigma in Comfy (or SD)? What does it mean, or what does it do?

    • @latentvision · 1 year ago +1

      Roughly, it is the current progress in the generation. You can compare it to a start/end sigma to know where you are in the image generation.
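That reply can be illustrated with a toy progress estimate from the current sigma. This is a sketch under stated assumptions: the 14.6 maximum sigma is an assumed example value, and real schedulers are nonlinear in step index, so this only locates you within the sigma range:

```python
def generation_progress(sigma, sigma_start, sigma_end=0.0):
    """Rough progress estimate: sigma decreases from sigma_start toward
    sigma_end as sampling proceeds. Illustrative only; the mapping from
    sigma to step index depends on the scheduler."""
    return (sigma_start - sigma) / (sigma_start - sigma_end)

# Example with an assumed max sigma of 14.6:
print(generation_progress(7.3, 14.6))  # halfway through the sigma range
```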

    • @dan323609 · 1 year ago

      @@latentvision oh i get it, thx

  • @whatwherethere · 1 year ago

    How are you getting consistently good images? The moment I change anything in my prompts, the image goes crazy. This is nowhere close to my experience.

  • @kakochka1 · 1 year ago

    Am I the only one who can't open "pastebin" links? Does anyone know what I am doing wrong?)

    • @latentvision · 1 year ago

      seems to be working for me... I'll find a better location for all the workflows soon

    • @kakochka1 · 1 year ago

      @@latentvision Sorry for the trouble) Previously I avoided this problem by going to your GitHub page, but I couldn't find the workflows there this time(

  • @swannschilling474 · 11 months ago

    "Zombie" is a very good negative to remove unwanted artifacts in the face...

  • @thienbao27071980 · 11 months ago

    Love the clip, but the workflows won't download.