ComfyUI AI: What if the new IP adapter weight scheduling meets Animate Diff evolved?

  • Published 12 Jun 2024
  • This is the first part of a series. In the coming episodes I will show a workflow that integrates the upscaling and image-enhancement method Perturbed Attention Guidance, with which animations can be generated in high resolution and with long playback times, and in which I try out various additional methods of controlling the output video, such as different ControlNets.
    Once again, it's incredibly cool what the developer of the IP Adapter Plus nodes has created for us. The longer I play around with the adapters, the more ideas I come up with.
    You can find and download the workflow on my website www.alienate.de.
    0:00 - 1:13 Intro
    1:14 - 7:42 Setup Workflow
    7:43 - 13:00 Explaining Nodes
    13:01 - 13:44 Outro
  • Film & Animation

COMMENTS • 99

  • @MisterCozyMelodies
    @MisterCozyMelodies 22 days ago +3

    Everything in this tutorial is awesome: the voice, the background music, the detail in each step. Very immersive video, thanks a lot! You are making next-level videos here.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago +1

      Thanks a lot, I really appreciate that!
      It always drives me crazy when I watch tutorials and numerous in-between steps are simply skipped. I definitely didn't want to do that in my videos. That's why, once a video is completed, I always rebuild the workflow from its own instructions to check that it works.

    • @eccentricballad9039
      @eccentricballad9039 20 days ago +1

      @@Showdonttell-hq1dk Thanks a lot for actually creating art instead of creating content. It's so immersive, and I feel like I stepped into my own artificial intelligence work studio.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  20 days ago

      That's a wonderful compliment, thanks a lot!

    • @electronicmusicartcollective
      @electronicmusicartcollective 17 days ago +1

      @Showdonttell-hq1dk ...uhm, except for the room on the voice ;) a dry signal would be better. Please, no noticeable reverb/delay.

    • @wizards-themagicalconcert5048
      @wizards-themagicalconcert5048 15 days ago

      @@Showdonttell-hq1dk It works very well! Very easy to understand and follow! Thanks!

  • @wizards-themagicalconcert5048
    @wizards-themagicalconcert5048 15 days ago

    Fantastic content and video, keep 'em up! Subbed!

  • @SylvainSangla
    @SylvainSangla 19 days ago

    Thanks a lot for sharing these tutorials and workflows!

  • @AmazenWisdom
    @AmazenWisdom 23 days ago +1

    Wow. Another great tutorial! Thank you so much for sharing!

  • @abaj006
    @abaj006 26 days ago

    Amazing work! Thanks for sharing, much appreciated!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  26 days ago

      I'm glad you like it. Thanks for watching and subscribing.

  • @dead0barbie
    @dead0barbie 1 day ago

    👏👏👏👏👏👏

  • @697_
    @697_ 23 days ago

    The way your AI says "Hugging Face" is quite cute, tbh 1:36

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  23 days ago

      Tbh, one of the reasons I chose Charlotte was that her voice keeps me motivated when making the videos. And if that works for me, there's a good chance that viewers will like her AI voice too. ;)

  • @FlippingSigmas
    @FlippingSigmas 23 days ago

    great video!

  • @skycladsquirrel
    @skycladsquirrel 22 days ago

    amazing!

  • @Marek_Kotwinski
    @Marek_Kotwinski 5 days ago

    WOW, thanks!

  • @MrXRes
    @MrXRes 17 days ago

    Thank you!
    What voice generator did you use?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  15 days ago

      This is the AI voice profile Charlotte from ElevenLabs. Thanks for watching.

  • @BuzzJeux_Studio
    @BuzzJeux_Studio 21 days ago

    Fantastic tutorial and very useful, but I don't know why I get an out-of-memory (OOM) error with 16 GB of VRAM. How much VRAM do you use for this?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  21 days ago +1

      Thanks for watching, glad you like it. My graphics card has 12 GB of VRAM. It may help to enlarge the swap file, i.e. the virtual memory; I have set mine to 80 GB and have hardly had any problems of this kind since.

    • @BuzzJeux_Studio
      @BuzzJeux_Studio 21 days ago

      @@Showdonttell-hq1dk
      First of all, thanks for your quick reply. I increased my virtual memory (I was at 30 GB) as you mentioned, but I still had the problem. After several hours looking for the why and wherefore, I finally found where my error was coming from: I was using input images that were far too large in resolution! Problem solved by using basic 512x512 images :)

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  20 days ago +1

      @BuzzJeux_Studio So business as usual! :) An error occurs, a simple fix doesn't work --> many hours, countless websites, and how-to-fix-problem-xyz videos later --> the problem was basically easy to solve.
      However, the images are usually downscaled to a low resolution of 224x224 by the Image Batch Multiple node anyway. I have just tried it again with 5 images at a resolution of 6000x6000; I only got an error message when I tried to load a 20480x12288 image into the Load Image node.
      This means that images larger than 512x512 should also work in principle, at least with a graphics card like yours.
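
      As an illustration of that downscaling, here is a minimal Python sketch; the function name and the exact crop/resize steps are assumptions for illustration, not the actual IPAdapter Plus preprocessing code:

        # CLIP Vision encoders expect small square inputs, so large
        # reference images get resized down before encoding.
        from PIL import Image

        def preprocess_for_clip_vision(path, size=224):
            img = Image.open(path).convert("RGB")
            # Center-crop to a square, then resize to the encoder input size.
            w, h = img.size
            side = min(w, h)
            left, top = (w - side) // 2, (h - side) // 2
            img = img.crop((left, top, left + side, top + side))
            return img.resize((size, size), Image.LANCZOS)

        # A 6000x6000 source ends up as 224x224 either way, which is why
        # oversized inputs mostly just cost loading time.
        thumb = preprocess_for_clip_vision("reference.jpg")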

  • @GamingDaveUK
    @GamingDaveUK 23 days ago

    Do you have a tutorial for the SDXL version? So far, every guide I look at for animation shows 1.5 models. Given SDXL's prompt cohesion and better image quality, it's surprising so many are still using 1.5.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  23 days ago +1

      Unfortunately, this does not yet work with SDXL, at least not the version 2 Motion LoRAs etc. This means that you can't really use everything that AnimateDiff Evolved provides with SDXL. It was also an adjustment for me, because I have only been using SDXL models for the last few months. In the next few days I want to try everything with HotshotXL; maybe it will work better, but I can't really say anything about that yet.
      You can download a basic XL workflow from the site linked below. But as I said, there's not much you can do with it. Most of the workflows I've found mix SD 1.5 with SDXL in some way with different adapter LoRAs, but they're not satisfactory.
      Link: civitai.com/articles/2950/guide-comfyui-animatediff-xl-guide-and-workflows-an-inner-reflections-guide

  • @DerekShenk
    @DerekShenk 26 days ago +1

    Since viewers will want to learn what you teach them, it would be far more beneficial if you included links to your workflow. Additionally, if you really want to stand out from other tutorials, include links to the actual images you use in your workflow, enabling viewers to fully reproduce what you show them. That would be fantastic!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  26 days ago +1

      Thanks for watching!
      You can find and download the workflow on my website alienate.de.
      As for the images, my idea is to show in the tutorials how you can set up and use the workflow yourself. Without exception, all the images I use in the videos were created or photographed by myself. I also work as a photographer, which means that some of the images used are tied to image rights. Apart from all the fun of learning how to use ComfyUI and creating videos with it, it's also a financial matter. Thanks for your remarks and interest anyway.

    • @clangsison
      @clangsison 23 days ago

      Sometimes people are lazy; that's why they want the workflow. Others view these types of videos (and Matteo's) as very insightful if one truly wants to understand how things work.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  23 days ago +1

      @@clangsison I didn't want to say it out loud, but yes, it's probably true. Although I can understand it somewhat: when you come into contact with this for the first time, a fully functional workflow like this is really helpful. You can take it apart and understand, step by step, how it works.
      Thanks for watching. :)

    • @amorgan5844
      @amorgan5844 19 days ago +1

      @Showdonttell-hq1dk It's always appreciated; your work and workflows are some of the best I've ever seen.

  • @CosmicFoundry
    @CosmicFoundry 26 days ago

    Awesome, thanks for this! Do you have the workflow somewhere?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  26 days ago +3

      Thanks for watching. I'm glad you like it. I'll definitely upload the workflow to my website later today, www.alienate.de.

    • @WhySoBroke
      @WhySoBroke 26 days ago

      @@Showdonttell-hq1dk Great method, and thanks in advance for the workflow!

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  26 days ago +1

      You can now download the workflow as a json file from my website if you like. Have fun trying it out. The link is usually in the video description.

    • @CosmicFoundry
      @CosmicFoundry 23 days ago

      @@Showdonttell-hq1dk Got it, thanks! Keep up the great work!

    • @nirdeshshrestha9056
      @nirdeshshrestha9056 22 days ago

      @@Showdonttell-hq1dk I got an error, please help.

  • @wonder111
    @wonder111 15 days ago

    Great approach to teaching what only the programmers can understand. I worked on this for a few hours; it fails at the last (Video Combine) node with this error: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED. Any idea what might be wrong? Thanks, and I will be following.

  • @alexhalka
    @alexhalka 19 days ago

    Amazing!!! Would love to have your workflow; I can't access your site.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  19 days ago

      The website takes a while to load. Could you try again? It should work.

  • @czlaczimapping
    @czlaczimapping 18 days ago

    I have an error message: 'VAE' object has no attribute 'vae_dtype'
    Do you know what the problem is?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  18 days ago

      Have you tried using a different VAE? Or connecting the VAE from the checkpoint to the VAE Decode node?

  • @double-7even
    @double-7even 22 days ago

    I can't understand the weights for IPAdapter Weights. There are two values, e.g. "0.0, 1.0", in IPAdapter Weights. Is the first value (0.0) the weight for the first image batch (blue in the workflow) and the second value (1.0) for the second image batch (cyan in the workflow)? Btw, amazing work 👍

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago +1

      Thanks for watching! I spent another couple of hours today looking for a detailed explanation of the nodes involved, but it seems that no detailed texts are available. So I can only tell you what my very long tests have shown. By the way, I'm currently working on a new video about this, and some things have become a bit clearer. My approach is empirical, so to speak; that is, I test and see how the nodes behave with each other. It's incredibly complex, even though it sometimes looks so simple.
      My observations are: the two values (0.0, 1.0) indicate how much weight is given to the IP adapters on the one hand and the prompt on the other.
      1.0 = the IP adapter, i.e. the images, receives the greater weight.
      0.0 = the prompts receive the greater weight.
      As the outputs of the IPAdapter Weights node are called Image_1 and Image_2, I assume that the first image of the Image Batch Multiple node is processed (at least more strongly) by the first IPAdapter Batch node, and accordingly the second image by the second IPAdapter Batch node. The tests show this as well. However, things get more complex here.
      I'll try to shed more light on this darkness in the next few videos. :)
      But the short answer to your question is: yes, something like that.
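
      A conceptual Python sketch of this kind of scheduling: the keyframe values are spread over the frames, and the inverted curve drives the second batch so the two image sets crossfade. This mirrors the idea as observed in the tests, not the node's actual implementation, and the function name is made up:

        import numpy as np

        def schedule_weights(keyframes="0.0, 1.0", frames=16):
            # Parse the comma-separated keyframe weights, then spread them
            # linearly across the animation frames.
            keys = [float(v) for v in keyframes.split(",")]
            xs = np.linspace(0, len(keys) - 1, frames)
            weights = np.interp(xs, range(len(keys)), keys)
            return weights, 1.0 - weights  # ramp for batch 1, inverse for batch 2

        w1, w2 = schedule_weights("0.0, 1.0", frames=8)
        print(np.round(w1, 2))  # image batch 1: 0.0 ... 1.0
        print(np.round(w2, 2))  # image batch 2: 1.0 ... 0.0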

    • @double-7even
      @double-7even 22 days ago +1

      @@Showdonttell-hq1dk Thank you! I'm looking forward to the new video and I really appreciate your hard work! Another problem I found is that changing the resolution to 2x (768x768) produces a broken video: details are repeated vertically and overall the whole scene is mixed up. Do you know why, and how I can prevent this? EDIT: I think I know the answer. It's the latent size, and it's limited by the model's training data size (512x512). For a bigger size we need to upscale it?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago +1

      @@double-7even Yes, that's right. I had the same problem, but after a few runs with the same seed and a resolution of 768x512 the problem disappeared completely. Anyway, it seems advisable to use the same seed, even if the changes only occur after a few runs. My seed is 998999, so if you use a copy of my workflow, there's a good chance that it will work there too. I don't know if you have changed it, but I would be interested to know whether the seed works across all computers.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago +1

      @@double-7even And that is, as you say, a typical SD 1.5 problem. With the SDXL models you no longer have these worries, but unfortunately AnimateDiff does not yet work properly with those models.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago +1

      But I have just found out that one can integrate additional IP adapter embeds into the workflow. That's pretty cool and will definitely be included in the new video.

  • @sunlightevidence4359
    @sunlightevidence4359 9 days ago

    Dunno why, but I tried your workflow and my renders are coming out blurred. It's almost as if everything was rendered in high res but a blur filter was added on top.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  9 days ago

      Thanks for watching. Did you use one of the LCM checkpoint models? As said in the video, I use the Absolute Reality LCM model, but there are others that you can download from Civitai.
      Let me know if it works that way.

    • @sunlightevidence4359
      @sunlightevidence4359 9 days ago +1

      @@Showdonttell-hq1dk YES! I finally got it working when I switched to an LCM model; I was using non-LCM ones. Is it possible to add more than two Motion LoRAs to this? Sorry, I am just learning Comfy :) Thank you.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  9 days ago +1

      @@sunlightevidence4359 Very good, I'm glad!
      Theoretically, you can use a great many Motion LoRAs. Simply connect them one after the other to the Load AnimateDiff LoRA node via the prev_motion_lora input (on the left side of the node), as in the sketch below.
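
      A conceptual sketch of how such chaining accumulates LoRAs; this is illustrative Python only, not AnimateDiff Evolved's actual code, and the LoRA filenames are merely examples:

        # Each loader appends its LoRA to whatever chain it receives via
        # prev_motion_lora, so nodes wired in series stack up any number.
        def load_motion_lora(name, strength=1.0, prev_motion_lora=None):
            chain = list(prev_motion_lora) if prev_motion_lora else []
            chain.append((name, strength))
            return chain

        chain = load_motion_lora("v2_lora_PanLeft.ckpt", 0.8)
        chain = load_motion_lora("v2_lora_ZoomIn.ckpt", 0.6, prev_motion_lora=chain)
        print(chain)  # both LoRAs end up applied together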

    • @sunlightevidence4359
      @sunlightevidence4359 8 days ago

      Ah, I'm reaching the 8 GB limit on my 3070. I can't render more than 80 frames and was wondering if it's possible to 'resume' a render, e.g. from frame 80 onwards with the same seed?

  • @martinkaiser5263
    @martinkaiser5263 23 days ago

    Where exactly can I download the workflow? I just don't see it.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  23 days ago +1

      Hey, thanks for watching. The workflow is available as a JSON file on my website, alienate.de. Just scroll down to the Comfy images; the first image is my channel's logo, and to its right is a list headed "Download Workflow Json". The last item on the list, "IPA Weight Scheduling + Animate Diff Workflow", is the link to the workflow. Right-click it to open the context menu, click "Save link as ...", then simply drag the downloaded JSON file into the ComfyUI interface and install the nodes marked in red via the ComfyUI Manager using "install missing custom nodes". That should do it. I hope this was helpful; if so, have fun with it.

  • @kargulo
    @kargulo 11 days ago

    Hi, I built the workflow, but the results are very blurry.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  11 days ago +1

      First of all, thanks for subscribing. You are the thousandth subscriber. :)
      To solve the problem, you can set the resolution a little higher. If you are using the Absolute Reality LCM model, the optimal resolution is 576 x 320, and you can insert an NNLatentUpscale node between the custom sampler and the VAE Decode node; for this node, you only need to set SD 1.5 and the factor to 2.0 (see the sketch below). The input images also play a role and should not be too small in resolution.
      If you are using a multi-scaled mask, the min_float_value should be set to about 1.0.
      Let me know if any of this has helped.
      If none of this works, the workflow can also be found on my website alienate.de. Maybe you can try that as well.
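
      Conceptually, that upscale step resizes the latent tensor between sampling and decoding. NNLatentUpscale uses a small learned model, so the plain interpolation in this Python sketch is only a stand-in to show where the resize happens:

        import torch
        import torch.nn.functional as F

        def upscale_latent(latent, factor=2.0):
            # latent: [batch, 4, H/8, W/8], as produced by the SD 1.5 VAE
            return F.interpolate(latent, scale_factor=factor,
                                 mode="bilinear", align_corners=False)

        lat = torch.randn(1, 4, 40, 72)   # a 576x320 frame -> 40x72 latent
        print(upscale_latent(lat).shape)  # torch.Size([1, 4, 80, 144])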

    • @kargulo
      @kargulo 11 days ago

      @@Showdonttell-hq1dk Thanks for the reply.
      I'm so glad that I'm the thousandth subscriber :)
      I found my mistake while building the workflow: I missed the checkpoint file and chose a different one than you recommended :)

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  11 days ago

      @@kargulo Ah, OK. Yes, the workflow is set up for LCM, but you can also use other checkpoint models if you download the corresponding LCM LoRA model. So: connect the Lora model loader to the checkpoint and select the LCM LoRA, connect the model output to the Use Evolved Sampling node, and then connect its model output to the Model Sampling Discrete node and select LCM in that node.
      You can install the LCM LoRA model via the ComfyUI Manager.
      Have fun with it.
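
      The same checkpoint + LCM LoRA + LCM sampler pattern can be sketched outside ComfyUI with the diffusers library. This assumes a recent diffusers release with LCM support and the public LCM-LoRA weights; it is a parallel sketch, not a drop-in equivalent of the workflow:

        import torch
        from diffusers import StableDiffusionPipeline, LCMScheduler

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        # Swap in the LCM sampler (mirrors selecting LCM in Model Sampling Discrete).
        pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
        # Attach the LCM LoRA (mirrors the Lora model loader in the workflow).
        pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

        # LCM needs only a few steps and low guidance.
        image = pipe("a foggy forest at dawn",
                     num_inference_steps=4, guidance_scale=1.0).images[0]
        image.save("lcm_test.png")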

  • @atenore_
    @atenore_ 6 days ago

    I am getting this error: "Error occurred when executing SamplerCustom: only integer tensors of a single element can be converted to an index".
    Any idea of a possible solution?
    Thanks for the amazing content.

    • @atenore_
      @atenore_ 6 days ago

      EDIT: It looks like the problem is related to the IPAdapter Weights node outputting into the IPAdapter Batch nodes' weight inputs. If I turn the "weight" input back into a widget, the sampler manages to render. Would that be solvable?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  6 days ago +1

      @@atenore_ Thanks for watching.
      If you tried the workflow with all the models set and it didn't work, the problem may lie with some dependencies.
      If you have already updated ComfyUI, it may help to open the 'update' folder in the ComfyUI directory and run update_comfyui_and_python_dependencies.bat, at least if it is due to the dependencies.
      I remember that I also had this error once. That's why I've just searched a few of my old videos for it, but unfortunately found nothing.
      Let me know how it goes.

    • @atenore_
      @atenore_ 6 days ago +1

      @@Showdonttell-hq1dk Thanks a lot! Gonna give this a shot, crossing my fingers.

    • @atenore_
      @atenore_ 6 days ago

      Just tried; sadly, no luck.

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  6 days ago

      @@atenore_ Did you download the workflow from my website?

  • @daoshen
    @daoshen 16 days ago +1

    Amazing work and results! The voice is annoying to listen to and distracts from the content. This is, of course, subjective. A more neutral voice might appeal to more of us?

  • @nirdeshshrestha9056
    @nirdeshshrestha9056 22 days ago

    It did not work; I get an error. Can you help?

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago +1

      What is the error message?

    • @nirdeshshrestha9056
      @nirdeshshrestha9056 22 days ago

      @@Showdonttell-hq1dk Error occurred when executing IPAdapterBatch:
      cannot access local variable 'face_image' where it is not associated with a value

      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 761, in apply_ipadapter
        return (work_model, face_image, )

    • @Showdonttell-hq1dk
      @Showdonttell-hq1dk  22 days ago

      @@nirdeshshrestha9056 I have tried to reproduce the error, but without success. What you can do is first click on "Update all" in the ComfyUI Manager and then restart ComfyUI. Then check in the ComfyUI Manager whether all extensions (custom nodes) are updated; if not, update them manually. If relevant nodes are marked in red in the ComfyUI Manager under "import failed", try uninstalling and reinstalling them. And check that all the necessary models are installed: CLIP Vision, IP-Adapter, and the AnimateDiff motion models and Motion LoRAs.
      Please also make sure that the images you are using are still in the same folder and have not been moved somewhere else in the meantime.
      I hope this helps. If not, please let me know. Good luck!

    • @nirdeshshrestha9056
      @nirdeshshrestha9056 22 days ago

      @@Showdonttell-hq1dk Tried, but failed again.

  • @697_
    @697_ 23 days ago

    ip adAPTer