Create Morphing AI Animations | AnimateDiff: IPIV’s Morph img2vid Tutorial

  • Published 4 Jan 2025

COMMENTS •

  • @MDMZ
    @MDMZ  8 months ago +5

    Need help? Check out our Discord channel: bit.ly/44Qtkin
    Use these workflows to add more than 4 images: bit.ly/45lDiZD
    I've added some solutions and tips, and the community is also very helpful, so don't be shy to ask for help

    • @grovemonk
      @grovemonk 7 months ago +1

      Hey man, the link says it's invalid. Could you update it please? :)

    • @MDMZ
      @MDMZ  7 months ago +1

      @@grovemonk fixed

    • @MDMZ
      @MDMZ  7 months ago

      @AillusoryOfficial thanks for letting me know, just updated the link

    • @spiraldiver
      @spiraldiver 6 months ago +2

      I can't access your Discord server for the list of models, any ideas?

    • @Spa3tan-ai-content
      @Spa3tan-ai-content 3 months ago +1

      hi, how can I download all the models if I can't join your Discord? It says "unable to accept invite"

  • @epick-studios
    @epick-studios 3 months ago +5

    I gave up on ComfyUI forever until I saw your tutorial. Yours is truly the best one on YouTube! Thank you, and keep up your amazing work!

    • @MDMZ
      @MDMZ  3 months ago

      Wow, thank you!

  • @rowanwhile
    @rowanwhile 7 months ago +5

    You are such a master at ComfyUI, but also just as an educator! Having spent so many hours on YouTube, your approach to teaching is so concise, easy to follow, and generally brilliant. Thank you so much for taking the time to share your knowledge with the world. You legend!

    • @MDMZ
      @MDMZ  7 months ago

      Wow, thank you!

  • @drkwontube
    @drkwontube 6 months ago +5

    I just wanted to express my immense appreciation for your ComfyUI AnimateDiff tutorial! It was incredibly clear and well-paced, making a complex topic feel approachable. Your detailed explanations and step-by-step guidance were exactly what I needed to grasp the concepts fully. Thanks to you, I feel much more confident implementing these animations in my projects. Looking forward to learning more from your expertise!

  • @ItsCYBER92
    @ItsCYBER92 7 months ago +4

    As always the best tutorial ever, helped me reach dope crazy results, thanks bro 🙏

    • @MDMZ
      @MDMZ  7 months ago

      Happy to help!

  • @24differentb21
    @24differentb21 5 months ago +1

    Very helpful, thank you so much. I would recommend this to my friends who asked me before about these AI morph transitions. Again, thank you.

    • @MDMZ
      @MDMZ  5 months ago

      Thanks for sharing!

  • @brianmonarchcomedy
    @brianmonarchcomedy 6 months ago +1

    Great vid! Did you ever find a way to keep the likeness of the celebrities you were morphing between? I know you said you were looking into it. Thanks!!!

    • @MDMZ
      @MDMZ  6 months ago +1

      Not yet! It didn't work

  • @dpixvid
    @dpixvid 6 months ago +1

    Thanks! Been looking for a tut on AnimateDiff!!!

    • @MDMZ
      @MDMZ  5 months ago

      Awesome!

  • @SiriusVoxelBarf
    @SiriusVoxelBarf 6 months ago

    I've done this 2 times and keep coming out with errors. Cannot execute because the VHS node doesn't exist (node id #53). Any ideas how to fix?

    • @MDMZ
      @MDMZ  6 months ago

      try re-importing the workflow

  • @duxast11
    @duxast11 7 months ago +1

    I can't get rid of the red notifications, and when I try to update or install anything (at 1:13 in the video) I get errors. I've reinstalled and uninstalled the program several times already and still get errors. Can you please advise me on what I am doing wrong?

    • @MDMZ
      @MDMZ  7 months ago

      that's weird, can you share more context on Discord please? easier to share screenshots and resources over there

  • @psznt
    @psznt 1 month ago

    Hi! This looks so dope but I cannot seem to make it work. The generation is a blank white image. Can someone help? Thanks

    • @MDMZ
      @MDMZ  1 month ago

      can u try with default settings first?

  • @CoreyMcKinneyJr
    @CoreyMcKinneyJr 2 months ago

    I still can't get this workflow to work :( I am getting this error after following everything precisely. IPAdapter Advanced: Error(s) in loading state_dict for ImageProjModel:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).

    • @MDMZ
      @MDMZ  2 months ago +1

      so sorry, I don't think I know the fix for this, did u try sharing on Discord?

  • @ppoinha
    @ppoinha 13 days ago

    Any idea what needs to change to add more images? I added more nodes and changed the fade mask, but somehow it just does the first 4 images. Thanks.

    • @MDMZ
      @MDMZ  6 days ago

      yes there is a way, I've shared workflows with more images before, right now it's only available on my Patreon if you're interested

  • @InfinantGamers
    @InfinantGamers 7 months ago +1

    Just discovered this workflow today, thanks for the tips!

    • @MDMZ
      @MDMZ  7 months ago +1

      Happy to help!

  • @tonmendesart
    @tonmendesart 4 months ago

    Hello my friend!! I am following the Morph tutorial from the video:
    "Create Morphing AI Animations | AnimateDiff: IPIV’s Morph img2vid Tutorial"
    I did all the steps as shown in the video, but when I click "Queue Prompt," it starts running in the terminal (I am using a Mac M1), and at the end, the message I attached here appears, and it just stays at 0%, even though I left the upscale nodes deactivated as instructed in the video. Can someone help me solve this issue? In the terminal, it only shows 0% as in the image. Thank you in advance!

  • @CoreyMcKinneyJr
    @CoreyMcKinneyJr 3 months ago

    Great tutorial, but why is the Simple Math node not working for me? I haven't touched it, but it's highlighting the b input after trying to generate. 😮‍💨

    • @MDMZ
      @MDMZ  3 months ago +1

      I saw your comment on Discord, responded

  • @nicolagrossi2926
    @nicolagrossi2926 2 months ago

    Hi, I need the first and last uploaded images to remain as they are, without being reinterpreted. Is that possible? Thanks

    • @MDMZ
      @MDMZ  2 months ago

      The images will always change a bit

  • @madhudson1
    @madhudson1 7 months ago +1

    fantastic tutorial. Instant results

    • @MDMZ
      @MDMZ  7 months ago

      Great to hear!

  • @fo4ez_142
    @fo4ez_142 4 months ago

    Hi! I regularly use this workflow, but lately I've encountered a few issues with it. All the problems started after the Comfy update. Initially, the issue was that instead of smooth morphing, a bunch of images similar to what I inserted into the IPAdapter were generated, and they would rapidly switch between each other (restarting helped, but only for one generation). However, the biggest problem appeared today (also after the update): there's an issue with the "Simple Math" node, and honestly, I don't know what to do. There are just two red circles around "A" and "B" that are highlighted. I'd really appreciate your help; I have no one else to turn to

    • @MDMZ
      @MDMZ  4 months ago +1

      that sucks, some things tend to break after updating, I will test it out again and see if it works for me

    • @fo4ez_142
      @fo4ez_142 4 months ago

      @@MDMZ After the recent update, the issue with the IPAdapter has been completely resolved, but the workflow still isn't working due to the Simple Math node.

    • @fo4ez_142
      @fo4ez_142 4 months ago +1

      @@MDMZ I've fixed everything. In case anyone else encounters this issue, you just need to replace the "Simple Math" node with "Math Expression" and make sure to write "a/b".

    • @onkardomkawle3469
      @onkardomkawle3469 2 months ago +1

      @@fo4ez_142 Thanks for this
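
    The fix above swaps the broken "Simple Math" node for "Math Expression", which evaluates a textual formula like "a/b" against its wired inputs. As a rough illustration of that idea only (this is not ComfyUI's actual implementation; all names here are invented for the sketch), a minimal safe arithmetic-expression evaluator in Python could look like:

    ```python
    import ast
    import operator

    # Allowed binary operators for the expression; anything else is rejected.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def eval_expr(expr: str, **values) -> float:
        """Safely evaluate an expression such as 'a/b' against named inputs."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Name):        # an input like 'a' or 'b'
                return values[node.id]
            if isinstance(node, ast.Constant):    # a literal number
                return node.value
            raise ValueError(f"unsupported expression: {expr!r}")
        return walk(ast.parse(expr, mode="eval"))

    # 'a' and 'b' stand in for the node's two wired inputs
    print(eval_expr("a/b", a=96, b=4))  # 24.0
    ```

    Calling `eval_expr("a/b", a=..., b=...)` mirrors wiring values into the node's A and B inputs; parsing with `ast` instead of calling `eval` keeps arbitrary code out of the expression.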

  • @Sappydance
    @Sappydance 1 month ago

    so, have you found a way to make the generated video look like the uploaded photo? (face)

    • @MDMZ
      @MDMZ  1 month ago

      Not yet :/

  • @fillill-111
    @fillill-111 7 months ago +1

    Great tutorial as always! Thank you!

    • @MDMZ
      @MDMZ  7 months ago

      Glad you liked it!

  • @andrruta868
    @andrruta868 6 months ago

    I get transitions between images that are too fast. I couldn't find where to adjust the transition time. I would be grateful for advice.

    • @MDMZ
      @MDMZ  6 months ago

      There's some math and numbers involved, but I can tell u that making the transition longer can produce bad results

    • @andrruta868
      @andrruta868 6 months ago

      @@MDMZ I understand. But I want to try it myself. Is it possible to find out in which node you can play with the numbers?

  • @eltalismandelafe7531
    @eltalismandelafe7531 5 months ago

    Thanks for your amazing workflow! Please, I have two questions:
    1) In the Samplers group there is an "Upscale Image By" node: scale_by 3.75 = 1080p, and you say it's also possible to set scale_by 2.5 = 720p.
    - How do you calculate the factors (3.75 and 2.5) for 1080p and 720p?
    2) If we choose scale_by 3.75, in the Upscale w/ Model group, in the "Upscale Image" node we need to set width 1080, height 1920.
    If we were to choose scale_by 2.5, should we change the "Upscale Image" node to width 720, height 1280?

    • @MDMZ
      @MDMZ  5 months ago

      I later found out that the scale ratio and the final resolution are independent of each other: you can use the ratio to do a first upscale, then the final resolution to upscale again. As for the calculation, simply multiply the ratio by the batch size and you'll get the upscaling resolution

    • @eltalismandelafe7531
      @eltalismandelafe7531 5 months ago

      @@MDMZ Thanks for the answer. "Simply multiply the ratio by the batch size and you'll get the upscaling resolution":
      In your original workflow: batch size = 96, 512 x 288 = 16:9 ratio, scale_by 1.75
      1) 16:9 = 1.777777777777778 * 96 = 170.6666666666667
      2) 96 * 16 = 1,536; 96 * 9 = 864
      how do you get scale_by 1.75?

    • @MDMZ
      @MDMZ  5 months ago

      @eltalismandelafe7531 I wanted to go from 288 to 504.
      504/288 is 1.75, and that's how I found the ratio

    • @eltalismandelafe7531
      @eltalismandelafe7531 5 months ago

      @@MDMZ yes, you have rounded it off: 288 * 1.75 = 504, although in the Empty Latent Image node you have written width 288, height 512, not 504
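
    The calculation MDMZ eventually gives in this thread (504/288 = 1.75, i.e. the target edge divided by the source edge) can be checked with a few lines of Python; the function name is only for illustration:

    ```python
    # Sketch of the upscale-ratio arithmetic discussed above: the scale_by
    # factor is simply the target dimension divided by the source dimension.
    def upscale_ratio(src_edge: int, dst_edge: int) -> float:
        return dst_edge / src_edge

    # Numbers from the thread: going from a 288-pixel edge to 504 pixels
    ratio = upscale_ratio(288, 504)
    print(ratio)  # 1.75

    # Applying the same ratio to the full 512x288 frame
    print(round(288 * ratio), round(512 * ratio))  # 504 896
    ```

    The same formula gives the factors asked about in the question: upscaling a 288-pixel edge to 1080 would need 1080/288 = 3.75, and to 720 would need 720/288 = 2.5.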

  • @abdulbasitkhan449
    @abdulbasitkhan449 7 months ago +1

    There is no Manager option in my ComfyUI, what should I do now?

    • @MDMZ
      @MDMZ  7 months ago +1

      did you install the Manager? check this video: ua-cam.com/video/E_D7y0YjE88/v-deo.html

    • @abdulbasitkhan449
      @abdulbasitkhan449 7 months ago +1

      @@MDMZ much respect for you bro, to be very honest you and your community are great, I'd love to be a part of your community.

  • @maganab
    @maganab 3 months ago

    Hello, I'm following ipiv's Morph tutorial, and everything is going well, but I'm using reference images without humans, just hallways or structures, and yet a human always appears at the end. Is there something I'm doing wrong? I'm using the same models and LoRAs that come by default. The only thing I've adjusted is the motion scale to add more movement to the animation.

    • @MDMZ
      @MDMZ  3 months ago

      perhaps you can try to use another model that's trained on images similar to what you're trying to achieve? example: if you wanna generate buildings, get a model that's trained on building images

    • @maganab
      @maganab 3 months ago

      @@MDMZ Thank you for the response. I tried some more architectural models, but I don't think they were that good. In the end, I believe what helped was increasing the weight in the IPAdapter Advanced (haha, but I'm not sure that's the reason). Thank you very much for the effort put into this tutorial; it's very good.

  • @justaguy-201
    @justaguy-201 2 months ago

    I hope it's not too much of a bother, but when I place the ipiv-Morph-img2vid-AnimateDiff-HyperSD on the page I get an error message: "Warning: Missing Node Types. When loading the graph, the following node types were not found:" about 18 of them. How do I fix this?

    • @MDMZ
      @MDMZ  2 months ago

      Looks like u missed parts of the video 😏😉

    • @justaguy-201
      @justaguy-201 2 months ago +1

      @@MDMZ ah, thanks for getting back so quickly! I was following along from 0:53 to 0:58 with dragging and dropping the json. In this tutorial your nodes are green, and when I drag and drop, 18 are red and that's when I get the error message. But thanks anyway...

  • @mandy_d4596
    @mandy_d4596 6 months ago

    I'm getting an error that says "control net object has no attribute latent_format"

    • @MDMZ
      @MDMZ  6 months ago

      hi, please check the pinned comment

  • @JainamSutaria777
    @JainamSutaria777 6 months ago

    Hey, I am using ThinkDiffusion for this.
    When I upload 2 files both named model.safetensors inside the ComfyUI/clip_vision folder, I am not able to rename one to "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"

    • @JainamSutaria777
      @JainamSutaria777 6 months ago

      can you please help me?

    • @MDMZ
      @MDMZ  6 months ago

      you don't have to rename it, just make sure you load the correct file in the node

  • @rayshax814
    @rayshax814 6 months ago

    Any suggestions on how to do a longer video? I want to use more than 4 images, how do I add nodes?

    • @MDMZ
      @MDMZ  6 months ago +1

      you can duplicate the image group nodes to add extra images

  • @eddeyman
    @eddeyman 4 months ago

    thank you brother, it was working perfectly, but just today a problem showed up in the Simple Math node in the QR code group, could you please help with it?

    • @MDMZ
      @MDMZ  4 months ago +1

      I will check

  • @yoavPK1
    @yoavPK1 3 months ago

    How long should it approximately take to create something like in this tutorial with a MacBook Pro M1 Max and 32GB RAM, just to understand the scale?

    • @MDMZ
      @MDMZ  3 months ago

      hard to tell, because this is way faster when u have an NVIDIA GPU, which Macs don't have

  • @ComfyCott
    @ComfyCott 7 months ago

    The king has answered our prayers. Just upgraded to a 4060 Ti, can't wait to get better quality outputs!

    • @MDMZ
      @MDMZ  7 months ago

      Congrats!! 8 or 16 GB VRAM?

    • @ComfyCott
      @ComfyCott 7 months ago

      @@MDMZ 16!

    • @MDMZ
      @MDMZ  7 months ago

      @@ComfyCott Power 💪

  • @chimishetv
    @chimishetv 7 months ago

    I followed the same links you posted and downloaded the required files, but it gave me an error again, what's the problem?
    Error occurred when executing ADE_LoadAnimateDiffModel:
    'Hyper-SD15-8steps-lora.safetensors' is not a valid SD1.5 nor SDXL motion module - contained 0 downblocks.
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 178, in load_motion_model
    motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1084, in load_motion_module_gen2
    mm_state_dict, mm_info = normalize_ad_state_dict(mm_state_dict=mm_state_dict, mm_name=model_name)
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module_ad.py", line 136, in normalize_ad_state_dict
    raise ValueError(f"'{mm_name}' is not a valid SD1.5 nor SDXL motion module - contained {down_block_max} downblocks.")

    • @MDMZ
      @MDMZ  7 months ago

      hey, it's possible that the file 'Hyper-SD15-8steps-lora.safetensors' is corrupted, try re-downloading it, you can also share this on Discord for more help

  • @huxun
    @huxun 5 months ago

    I don't have a Color Correct node in my workflow. How do I get it?

    • @MDMZ
      @MDMZ  5 months ago

      make sure you've installed all the missing custom nodes

    • @huxun
      @huxun 5 months ago

      @@MDMZ can't find a Color Correct node

  • @PYcifique
    @PYcifique 4 months ago +1

    Hey,
    Thank you a lot for this tutorial.
    The workflow works for me, except that the generated video is too fast and not smooth, as if there is no interpolation but just a rapid succession of images.
    Thanks in advance for your help.

    • @MDMZ
      @MDMZ  4 months ago

      strange, can u share more context on Discord?

  • @ezequielamster684
    @ezequielamster684 7 months ago

    I have a problem where images from the previous generation are saved, and even though I remove them, they still appear in the generation

    • @MDMZ
      @MDMZ  7 months ago

      that's strange, try restarting ComfyUI, and set the seed to randomize

  • @JasonBernstock
    @JasonBernstock 7 months ago +2

    Thank you for this! Are there any other morph workflows that keep the image "exact" rather than reimagining it? I love these morphing loops but would love it to follow my initial images completely. I thought I saw a 2-image morph that seemed to not "reimagine" the inputs. Thank you for the detailed settings walkthrough. Improved my results considerably.

    • @MDMZ
      @MDMZ  7 months ago

      I would love to have that too, I don't know of a way to do it yet

    • @FlorentSc
      @FlorentSc 7 months ago

      hey! I'm also interested in what you describe! did you find something?

    • @pancat422
      @pancat422 4 months ago

      I think it's because of the IPAdapter: although you set it to Strong, it will still reshape the image. Not sure if anyone has a solution yet?

  • @qwazi9054
    @qwazi9054 7 months ago

    why does the final output video turn super dark when I use super bright images??

    • @MDMZ
      @MDMZ  6 months ago

      make sure you use the right settings and models, if it persists, try reducing the steps down to 15-20

  • @samu7015
    @samu7015 7 months ago +1

    The upscaling keeps getting stuck and won't generate anything

    • @MDMZ
      @MDMZ  7 months ago

      no errors at all? did u try upscaling to 720 or 1080? trying a lower res might help

  • @NWO_ILLUMINATUS
    @NWO_ILLUMINATUS 3 months ago

    No matter what I do, I always get "cannot find IPAdapter model" when I try to use Plus (High Strength). I've downloaded the model several times and renamed it properly, but it's NEVER found. Thoughts?

    • @MDMZ
      @MDMZ  3 months ago +1

      In which folder are u placing the model?

    • @NWO_ILLUMINATUS
      @NWO_ILLUMINATUS 3 months ago

      @@MDMZ I've got it in the /ComfyUI/models/clip_vision folder. Same spot as where I have the medium-strength model that IS functioning.
      Looks like I may need a hardware upgrade or something though; using a medium-strength model, my project fails at the second KSampler: "torch.cuda.OutOfMemoryError: Allocation on device"
      Running an RTX 4070 Ti Super, 16GB VRAM, I feel that SHOULD be enough.

    • @NWO_ILLUMINATUS
      @NWO_ILLUMINATUS 3 months ago

      @@MDMZ
      I'm putting the model in the /ComfyUI/models/clip_vision folder, same folder as the medium-strength model which is working.
      I get a couple of "Allocation on device" errors; running an i9, RTX 4070 Ti Super 16GB VRAM and 32GB RAM, I'm wondering if I need more RAM for this workflow?

    • @MDMZ
      @MDMZ  3 months ago

      @@NWO_ILLUMINATUS that's not the correct folder for IPAdapter models, they should be placed in the IPAdapter models folder, and you might need more VRAM depending on how high you're pushing your settings

    • @NWO_ILLUMINATUS
      @NWO_ILLUMINATUS 3 months ago

      @@MDMZ Sadly, that didn't work. Still model not found. Also, the notes in the workflow say to add the models to the clip_vision folder, and the medium model works in the clip_vision folder. odd

  • @icosart1520
    @icosart1520 6 months ago

    is there a way to add more pictures to the process? and how can I make a longer video out of this?

    • @MDMZ
      @MDMZ  6 months ago

      yes, check the pinned comment

  • @sanjitfx1862
    @sanjitfx1862 7 months ago

    There is no Manager option in my ComfyUI, what should I do now?

    • @MDMZ
      @MDMZ  7 months ago

      You need to install it, check my ComfyUI installation video for instructions

  • @CarCrashesBeamngDrive
    @CarCrashesBeamngDrive 7 months ago +1

    hi, what is the resolution of the uploaded photos?

    • @MDMZ
      @MDMZ  7 months ago +1

      for this particular video 1024x1024, but so far I haven't had restrictions with resolution or aspect ratio, better quality helps tho

    • @CarCrashesBeamngDrive
      @CarCrashesBeamngDrive 7 months ago

      @@MDMZ Are these images publicly available? I can't achieve your result.

    • @MDMZ
      @MDMZ  7 months ago

      @@CarCrashesBeamngDrive yes they are, together with the workflow

  • @eddeyman
    @eddeyman 6 months ago

    how can I control CLIP Vision in this workflow, my friend?

  • @hariom404-u6c
    @hariom404-u6c 8 months ago +3

    Can you create a video on how we can increase the video length, i.e. adding more images than 4?

    • @MDMZ
      @MDMZ  7 months ago +1

      I will be experimenting with that

  • @cure7398
    @cure7398 7 months ago +1

    I have a question that other people might have too. I am new to the AI world and don't know how things work. In your video, you show us how to do everything step by step. But if I want to try new things or use other models, how do I do that? I think we can do more fun stuff in ComfyUI besides just changing the video. Can you make a video about that or write back to explain? This will help many people like me. Thank you for all your hard work!

    • @MDMZ
      @MDMZ  7 months ago +1

      I get you, I think that comes with experience, try different workflows, you can also look up tutorials on specific nodes and what they're used for

  • @townbytown
    @townbytown 8 months ago +1

    Thanks for this amazing AI video information. Many greetings from good old Vienna 🎡 🎩

    • @MDMZ
      @MDMZ  7 months ago +1

      Glad you enjoyed it!

  • @jsonslim
    @jsonslim 5 months ago +2

    I like how the "List of necessary links" link leads to your Discord server with no clear way to get the file

    • @MDMZ
      @MDMZ  5 months ago

      The list is there, with full instructions, check the pinned msg in the Discord channel

    • @jsonslim
      @jsonslim 5 months ago

      @@MDMZ thanks, now I see it!

  • @AmitShq
    @AmitShq 7 months ago

    Any way to set Video Combine to export in h264 format? The uncompressed output is too big. Also, for the images, any way to always save as JPEG or WebP, not PNG? PS: I'm not talking about previewing.

    • @MDMZ
      @MDMZ  7 months ago

      u can increase the crf to reduce file size, I think the combine node has options to change the codec as well

  • @petertucker455
    @petertucker455 7 months ago

    Hi @mdmz, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining the overall style? Also, have you got this workflow to work with SDXL?

    • @MDMZ
      @MDMZ  7 months ago

      That's normal, it doesn't stay 100% true to the input. I tried with SDXL, couldn't get good results

  • @visualmenace4481
    @visualmenace4481 7 months ago

    hmmm. I must be missing something because I can't seem to get the video to look anything like the original images... any tips?

    • @visualmenace4481
      @visualmenace4481 7 months ago +1

      Fixed it, the LoRAs did not auto-pull in settings! noobing my way through, thanks for this tut!

  • @Serjayz
    @Serjayz 5 months ago

    Thanks for the tutorial, do you perhaps know why my face in the picture is getting deformed?

    • @MDMZ
      @MDMZ  5 months ago

      it doesn't work well with real faces, I talked about it in the video

  • @Guilvero
    @Guilvero 7 months ago

    Where is the list of necessary models? I don't see it on Discord. Help.

    • @johnlonggone
      @johnlonggone 6 months ago

      it's pinned, at the top right.

    • @Guilvero
      @Guilvero 6 months ago

      @@johnlonggone thanks.

  • @Best.Gaming.Futures
    @Best.Gaming.Futures 4 months ago

    How can I make this from video? From 1 video, or multiple videos, for example 6 video faces? Please make a tutorial, if possible synced with the mouth in the video.

  • @masoud.art.videos
    @masoud.art.videos 7 months ago

    Great guide, thanks. I managed to produce something, and this is basically what Krea AI is offering, but their output is a bit dark and unpolished. Really appreciate the points on using VRAM.

    • @MDMZ
      @MDMZ  7 months ago +1

      interesting, I'm gonna try Krea AI

  • @shivamp5410
    @shivamp5410 6 months ago

    I keep getting the "IPAdapter model not found" error. Any solution?

    • @MDMZ
      @MDMZ  6 months ago

      Make sure you place the files in the correct path

  • @jesvsibarra
    @jesvsibarra 7 months ago +1

    Subscribed, very complete tutorial, what video card are you using?

    • @MDMZ
      @MDMZ  7 months ago

      4090

  • @odonrutven001
    @odonrutven001 7 months ago

    Error occurred when executing ImageSharpen, why does this always happen?
    help me please

    • @MDMZ
      @MDMZ  7 months ago

      what does the error say

  • @Hebrideanphotography
    @Hebrideanphotography 6 months ago

    I can't find the list of models when I click the Discord link.

    • @MDMZ
      @MDMZ  5 months ago

      check the pinned message in the Discord channel

  • @ezrawithacamera
    @ezrawithacamera 7 months ago

    To use juggernaut_reborn, where in the ComfyUI folder structure did you put it? I downloaded it and tried a bunch of different places, but it wouldn't show up in the "Load Checkpoint" box

    • @MDMZ
      @MDMZ  7 months ago

      hi, all the correct placements of models are included in the full list (link in the description), make sure you use the correct path

  • @patrickblf7635
    @patrickblf7635 3 months ago

    Where can I find the list of the models? The link is not working :(
    Can you please update the link, since the Discord chat it links to has no list as a pinned message.

    • @MDMZ
      @MDMZ  2 months ago

      Hi, I just checked, the link still works fine, when u accept the server invite check the pinned messages on the ipiv morph channel

    • @stijnpruijsen
      @stijnpruijsen 2 months ago

      @@MDMZ I also can't seem to find it in the Discord. Don't see an ipiv morph channel! tips?

    • @MDMZ
      @MDMZ  2 months ago

      @@stijnpruijsen the discord link will take you directly there, and then look for the pinned message

  • @Kikoking-y9b
    @Kikoking-y9b 6 months ago

    How much VRAM and RAM should I have for this? I have 32GB RAM, 8GB VRAM

    • @MDMZ
      @MDMZ  6 months ago

      I recommend at least 12GB of VRAM, you can still give it a try

  • @idoshor4470
    @idoshor4470 7 months ago

    can someone send me a link to the IPAdapter model please? I think the link mentioned here is not good. thanks.

    • @MDMZ
      @MDMZ  7 months ago

      What happens when you click on the link? Seems to be working fine for me

  • @websater
    @websater 7 months ago +1

    How long does your full render take, and what is your GPU? It takes my 3080 about 1 hr to render 720p, but it fails on the upscale. Any suggestions?

    • @MDMZ
      @MDMZ  7 months ago

      I use a 4090, it takes around 20-30 mins to do the whole thing. Try reducing the upscaling ratio, and don't use your computer while it's upscaling

  • @지도니-f4u
    @지도니-f4u 6 months ago

    How do you output it in a 16:9 ratio resolution!? plz

    • @MDMZ
      @MDMZ  5 months ago

      just swap the dimensions, it's actually explained in the video

  • @Blaqk_Frozste
    @Blaqk_Frozste 4 months ago

    Followed the original video and can't work out why my outputs look extremely low quality

    • @MDMZ
      @MDMZ  4 months ago

      perhaps you need to adjust the resolution, upscaling ratio, and steps

  • @vladimirshlygin3211
    @vladimirshlygin3211 7 months ago

    Thank you man! This is dope

    • @MDMZ
      @MDMZ  7 months ago

      Glad you like it!

  • @DeanCassady
    @DeanCassady 8 months ago

    The motion graphics site is down, how can I get the video? Thx

    • @MDMZ
      @MDMZ  7 months ago

      seems to be working fine now

    • @DeanCassady
      @DeanCassady 7 months ago

      @@MDMZ I've tried many times and still can't reach the site, please help. Could you upload the video somewhere else? (maybe the site blocks some IP addresses)

  • @progressiveGREEN
    @progressiveGREEN 5 months ago

    I use the same settings as you and a GeForce RTX 3070. Is it normal that a full render takes 7 hours???

    • @MDMZ
      @MDMZ  5 months ago +1

      I've replied to u on discord

    • @progressiveGREEN
      @progressiveGREEN 5 months ago

      @@MDMZ Thank you sir!

  • @dollarproduction0
    @dollarproduction0 7 months ago

    Can I easily create AI animation with AnimateDiff/ComfyUI using an Nvidia GeForce 1050 Ti 4GB graphics card?

    • @MDMZ
      @MDMZ  7 months ago

      4GB is a bit too low

  • @cure7398
    @cure7398 7 months ago +1

    When I use a real person's image, it completely changes that person's face to a different man. Is there any way to fix that and maintain the same face?
    By the way, great video. Keep it up.🔥🔥

    • @MDMZ
      @MDMZ  7 months ago

      check 5:00

    • @cure7398
      @cure7398 7 months ago +1

      @@MDMZ Yes, I caught this the second time I watched the video. Thank you for clarifying this, though.
      love your content 🔥🔥

    • @erysonrodriguez8398
      @erysonrodriguez8398 7 months ago

      @@MDMZ hi sir, I know this tutorial just came out, but I want to know if this is possible

  • @lonelytaigahotel
    @lonelytaigahotel 6 months ago

    how to increase the number of frames?

    • @MDMZ
      @MDMZ  6 months ago

      check the pinned comment

  • @raneanubis
    @raneanubis 5 months ago +1

    you are pure excellence.

  • @comosefazessabagaca
    @comosefazessabagaca 7 months ago

    Invalid Discord invitation, would it be possible to update the link? Congratulations on the work!!

  • @Cutepettie441
    @Cutepettie441 7 months ago

    How do I open ComfyUI after installing it all??

    • @MDMZ
      @MDMZ  7 months ago

      open run_nvidia_gpu

  • @UnleashYourGr8ness
    @UnleashYourGr8ness 7 місяців тому

    Error occurred when executing ADE_LoadAnimateDiffModel:
    'NoneType' object has no attribute 'lower'
    File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 178, in load_motion_model
    motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
    File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1066, in load_motion_module_gen2
    mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
    File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
    facing this error
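    This traceback ("'NoneType' object has no attribute 'lower'") usually means the AnimateDiff loader resolved no motion-model file, so `None` reaches `load_torch_file` instead of a path. A minimal diagnostic sketch, assuming the standard ComfyUI-AnimateDiff-Evolved folder layout (adjust `ROOT` for your install):

    ```python
    # Sketch: check whether any motion module is where AnimateDiff-Evolved
    # looks for it. The folder path is an assumption based on a typical
    # portable ComfyUI install - adjust ROOT for your setup.
    from pathlib import Path

    ROOT = Path("ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models")

    def find_motion_models(root: Path) -> list[str]:
        """Return motion-model filenames the loader dropdown could pick up."""
        if not root.is_dir():
            return []
        return sorted(p.name for p in root.iterdir()
                      if p.suffix in {".ckpt", ".safetensors"})

    models = find_motion_models(ROOT)
    if not models:
        print(f"No motion model in {ROOT} - re-download it, then restart ComfyUI")
    else:
        print("Found:", models)
    ```

    If the list comes back empty, re-download the motion module from the model list, drop it into that folder, and restart ComfyUI so the dropdown can see it.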

  • @GamingDaveUK
    @GamingDaveUK 7 місяців тому

    Great video and I look forward to trying it. But, do you have a link to the model list that does not require discord?

  • @kholismaruf
    @kholismaruf 5 місяців тому

    Can u help me?
    Error occurred when executing IPAdapterUnifiedLoader:
    IPAdapter model not found.
    File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 535, in load_models
    raise Exception("IPAdapter model not found.")

    • @Cevherbenn
      @Cevherbenn 5 місяців тому

      me too

    • @MDMZ
      @MDMZ  5 місяців тому +1

      make sure you download all the models from the list, and place them in the right folders
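      For the "IPAdapter model not found" error above, the unified loader searches fixed model folders by filename. A quick sanity check, assuming the conventional `ComfyUI/models/ipadapter` and `ComfyUI/models/clip_vision` locations (the exact filenames come from the tutorial's model list and are not reproduced here):

      ```python
      # Sketch: verify the model folders IPAdapterUnifiedLoader relies on
      # exist and are not empty. Folder names follow common ComfyUI
      # conventions and are assumptions - adjust for your install.
      from pathlib import Path

      EXPECTED_DIRS = {
          "ipadapter":   Path("ComfyUI/models/ipadapter"),
          "clip_vision": Path("ComfyUI/models/clip_vision"),
      }

      def missing_dirs(expected: dict[str, Path]) -> list[str]:
          """Names of model folders that don't exist or contain no files."""
          return [name for name, path in expected.items()
                  if not path.is_dir() or not any(path.iterdir())]

      for name in missing_dirs(EXPECTED_DIRS):
          print(f"'{name}' folder missing or empty - download its models into it")
      ```

      Remember that the ClipVision files also need to be renamed exactly as the model list describes, or the loader will not match them.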

  • @点点爹
    @点点爹 7 місяців тому

    Please tell me: how can I add a 5th or 6th photo, or more? Thank you a lot

    • @MDMZ
      @MDMZ  7 місяців тому +1

      still looking into this, might need to start from a different workflow

  • @JackTorcello
    @JackTorcello 5 місяців тому

    The only preset I can get to work is ViT-G (medium strength)?!

    • @MDMZ
      @MDMZ  5 місяців тому

      You need to download ALL of the ipadapter models

  • @vitaolsen4601
    @vitaolsen4601 21 день тому

    Very cool! Thank you!

  • @pocong9867
    @pocong9867 5 місяців тому

    hello, why do I get an IPAdapter loader error sir, can u help me

    • @MDMZ
      @MDMZ  5 місяців тому

      hi, check the pinned comment

  • @bildatheventure
    @bildatheventure 5 місяців тому +5

    these tutorials are great, they just completely skip crucial steps for the truly uninitiated... I keep having problems installing all the models and no one provides clear instructions. I've never used GitHub before and I'm not a developer... maybe it's gatekeeping, maybe it's just me... but this is truly the most frustrating learning experience I've ever had

    • @MDMZ
      @MDMZ  4 місяці тому +1

      can you head to our discord and share what specific issues u ran into? we'll be happy to help

    • @tonmendesart
      @tonmendesart 4 місяці тому +1

      @@MDMZ you are a gentleman, soooo patient😂

  • @eddeyman
    @eddeyman 7 місяців тому +1

    Thank you brother for being so kind and making this amazing vid ❤. Sadly I still can't get any results... I followed all the steps and every file is in its right place, but I always get an error once I reach the KSampler. Would you please help?

    • @MDMZ
      @MDMZ  7 місяців тому +1

      you can share the error on discord, u might be able to get help if u provide more context

  • @balibike9024
    @balibike9024 4 місяці тому

    Can I use real photos, e.g. morph from my dad, to me, then to my son?

    • @MDMZ
      @MDMZ  4 місяці тому

      Hi there, this question was covered in the video

  • @RhettWilliams-gu5kc
    @RhettWilliams-gu5kc 7 місяців тому

    Hi there! This is so sick. Do you do anything paid, or know anyone who does this for commission? I've just recently been exploring AI art and am totally new to the field. Thank you so much!

    • @MDMZ
      @MDMZ  5 місяців тому

      u might be able to find some talent on our discord server

  • @Gmlt3000
    @Gmlt3000 7 місяців тому

    Can u please post your workflows somewhere else? Cuz Patreon not available in many countries...

    • @MDMZ
      @MDMZ  7 місяців тому

      I believe the unavailability issue affects the payment stage only, I put the workflow there for FREE, can you check if you're able to see the post ?

    • @Gmlt3000
      @Gmlt3000 7 місяців тому

      @@MDMZ Nope. Patreon is blocked from their side; they decide which nations have the privilege to join... If the workflow is free, maybe u can link it via Google Drive?

  • @TimeTalesTT
    @TimeTalesTT 7 місяців тому

    Better than Deforum?

    • @MDMZ
      @MDMZ  7 місяців тому

      depends who u ask, both can be used for different things

  • @kanall103
    @kanall103 5 місяців тому

    How do I do looping?

    • @MDMZ
      @MDMZ  5 місяців тому +1

      it loops by default

  • @Injaznito1
    @Injaznito1 8 місяців тому +1

    Great tutorial! Thanx MDMZ!

    • @MDMZ
      @MDMZ  7 місяців тому

      happy to help!

  • @mikestaub
    @mikestaub 2 місяці тому

    Does this work with flux?

    • @MDMZ
      @MDMZ  2 місяці тому +1

      I'm gonna try this, it might take ages to generate tho

  • @NeoCentral02131
    @NeoCentral02131 7 місяців тому

    2.5 for 720p and 3.75 for 1080p, what about 4k?

    • @shortsLLC
      @shortsLLC 7 місяців тому

      how long is your render time with 3.75? thx

    • @MDMZ
      @MDMZ  7 місяців тому

      7.5 try at your own risk 😅
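      The upscale factors in this thread are consistent if the workflow's base render height is 288 px, which is an assumption inferred from the numbers rather than stated in the video:

      ```python
      # Sketch: sanity-check the upscale factors against an assumed 288 px
      # base height (2.5 -> 720p, 3.75 -> 1080p, 7.5 -> 2160p/4K).
      BASE_HEIGHT = 288  # assumed base render height, not confirmed by the video

      for factor in (2.5, 3.75, 7.5):
          print(f"{factor} x {BASE_HEIGHT} = {int(factor * BASE_HEIGHT)}p")
      ```

      On that assumption, 7.5 targets 2160p (4K), roughly four times the pixel count of 1080p, so expect a much longer render.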

  • @agnessf143
    @agnessf143 5 місяців тому

    Can u do a tutorial on Krea? Similar options may be easier for many

    • @MDMZ
      @MDMZ  5 місяців тому

      Krea is awesome, but I don't think u can use it to do smth like this

  • @DELUXEMUSICAiArTSELECTION
    @DELUXEMUSICAiArTSELECTION 7 місяців тому +1

    Wow Thanks

  • @LsdHippies
    @LsdHippies 2 дні тому

    hey, thx man, got it. To everyone: DOWNLOAD ALL THE FILES!!! At 1:57 it is mandatory. It's not an option...

  • @motgarbob7551
    @motgarbob7551 8 місяців тому +1

    This looks similar to sparse ctrl workflows, i'll see how they compare

  • @The_Daliban
    @The_Daliban 8 місяців тому +1

    AMAZING❤️

    • @MDMZ
      @MDMZ  7 місяців тому +1

      Thank you!

  • @thewebstylist
    @thewebstylist 7 місяців тому

    Amazing results indeed, but wow, the 1:00 min mark lost me as it's wayyy too complex to use, unfortunately

    • @MDMZ
      @MDMZ  7 місяців тому +1

      haha that was my exact reaction when I first saw it, don't get discouraged, it gets easier 😉

  • @Ai_mayyit
    @Ai_mayyit 7 місяців тому +1

    raise Exception("ClipVision model not found.")

    • @MDMZ
      @MDMZ  7 місяців тому +2

      make sure you download the correct clipvision files AND... rename them as described in the list

    • @Ai_mayyit
      @Ai_mayyit 7 місяців тому +1

      @@MDMZ it's solved, but I still get an error on cv2

  • @ATLJB86
    @ATLJB86 7 місяців тому +1

    Nice