3D+ AI (Part 2) - Using ComfyUI and AnimateDiff

  • Published 24 Dec 2024

COMMENTS •

  • @enigmatic_e
    @enigmatic_e  11 months ago +1

    Topaz Video AI: topazlabs.com/ref/2377

  • @P4TCH5S
    @P4TCH5S 10 months ago

    Ohhh snapppp part 2 here we gooooooo!

    • @P4TCH5S
      @P4TCH5S 10 months ago

      "Controlnets... I aint teachin you that" LOOOL

    • @enigmatic_e
      @enigmatic_e  10 months ago

      😂

  • @gabesaltman
    @gabesaltman 10 months ago

    THANK YOU for putting together all the resources in a clean document and thank you for a great workflow! One thing I noticed is that the iterative upscaler definitely adds details or extra elements to a render that may disrupt your original composition. The quality is fantastic but I'm wondering if there's a way to maintain quality upscale without adding extras?

    • @enigmatic_e
      @enigmatic_e  10 months ago

      No problem. Regarding your question, maybe reducing cfg or denoise in the upscale ksampler?
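    As a rough illustration of why lowering denoise tames the extra details: with denoise below 1.0, an img2img-style sampling pass only re-runs the last fraction of its steps, so less of the original image gets re-invented. This is a simplified sketch of that relationship, not ComfyUI's actual sampler code; `effective_steps` is a hypothetical helper.

    ```python
    # Sketch (assumption: simplified model of how the denoise setting affects
    # an img2img/upscale KSampler pass; not ComfyUI's real implementation).
    def effective_steps(total_steps: int, denoise: float) -> int:
        # With denoise < 1.0 the sampler skips the earliest, most destructive
        # steps, preserving more of the original composition.
        return round(total_steps * denoise)

    assert effective_steps(20, 1.0) == 20  # full re-noise: most new detail
    assert effective_steps(20, 0.4) == 8   # gentler pass: fewer added extras
    ```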

  • @chi_squared
    @chi_squared 10 months ago

    high quality tutorial, thanks bro

  • @fillill-111
    @fillill-111 10 months ago

    Please, don't stop! Great tutorials!

    • @enigmatic_e
      @enigmatic_e  10 months ago

      thanks for checking it out. hope it helps.

  • @user-zc9eh1qn5s
    @user-zc9eh1qn5s 10 months ago

    so cool, looking forward to your next video!!

  • @PulpoPaul28
    @PulpoPaul28 10 months ago

    It's really incredible how good you are at explaining and handling these tools, thanks, you legend!

    • @enigmatic_e
      @enigmatic_e  10 months ago

      you're welcome👍🏼👍🏼

  • @yomi0ne
    @yomi0ne 1 month ago

    hello, thank you for the tutorial. I downloaded the workflow to get familiar with AnimateDiff and I have an error on the IPAdapterApply: it says it's not found, but I downloaded everything. There are no missing nodes, yet this is still missing. How do I fix this? Thank you!

  • @mhfx
    @mhfx 10 months ago +1

    Thank you for sharing this. I'm a 3D artist who's been waiting for AI to get to this point, so I am super excited to try this out. I am curious about OpenPose: is there no option to use an exported rig directly from your 3D software? You already have the camera and rig in Blender; you should be able to export this info somehow so OpenPose doesn't have to guess with depth or soft edges, which would ideally solve the issue with it messing up which way the character is facing. I'll investigate on my own as well, but I figured I'd at least ask first.

    • @enigmatic_e
      @enigmatic_e  10 months ago

      I don't know of a way to do what you're saying. The closest thing I've seen is someone who created a rig and model designed like the OpenPose skeleton, but I haven't tested that. If you do find anything out, let me know. I would love to learn about it. Thank you!

    • @calvinherbst304
      @calvinherbst304 9 months ago

      @@enigmatic_e I bet if you were to render the wire frame as a separate mp4 that mirrors the same as your 3D video, you could use it as the input latent for open pose, then send the output to intercept the latent of the 3D video. Not sure how the node tree would look but I bet it's possible.

    • @enigmatic_e
      @enigmatic_e  9 months ago

      yea, I'm sure there's a way to do that. That's the great thing about ComfyUI, there are so many possibilities @@calvinherbst304

  • @SapiensVirtus
    @SapiensVirtus 6 months ago

    hi! Beginner's question. If I run software like ComfyUI locally, does that mean that all AI art, music, and works that I generate will be free to use for commercial purposes? Or am I violating terms of copyright? I am searching for more info about this but I get confused. Thanks in advance

  • @amkkart
    @amkkart 11 months ago +1

    Hi, I used your installation guide and set the base path to my A1111. Where do I drop the LoRAs and embeddings, and how do I install the IPAdapter? Love your videos, thanks for your efforts to educate us

    • @enigmatic_e
      @enigmatic_e  11 months ago

      You drop the Lora and embedding in the A1111 folders. I can’t remember exactly where those folders are but check the models folder, they’re not hard to find if you explore a little bit. And the ipadapter can be installed if you go manager and install missing nodes. Let me know if you still run into issues.
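      For reference, ComfyUI ships an `extra_model_paths.yaml.example` file that lets it reuse an existing A1111 install's model folders. A minimal sketch of that config (the `base_path` below is a placeholder; the exact keys may differ across ComfyUI versions, so check the example file that comes with your install):

      ```yaml
      # Sketch of ComfyUI's extra_model_paths.yaml pointing at an A1111 install.
      # base_path is a placeholder; replace it with your own webui folder.
      a111:
          base_path: C:/path/to/stable-diffusion-webui/
          checkpoints: models/Stable-diffusion
          vae: models/VAE
          loras: models/Lora
          embeddings: embeddings
          controlnet: models/ControlNet
      ```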

  • @pro_rock1910
    @pro_rock1910 8 months ago

    😍😍😍

  • @armandadvar6462
    @armandadvar6462 7 months ago

    How did you create the ComfyUI workflow? Where is it?

  • @alexebcy
    @alexebcy 7 months ago

    HELP PLS :/
    all my Video Combine nodes are red :/
    Failed to validate prompt for output 281:
    * (prompt):
    - Return type mismatch between linked nodes: frame_rate, INT != FLOAT
    * VHS_VideoCombine 281
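    For anyone hitting the same error: ComfyUI validates links by declared socket type, not by value, so an INT output feeding a FLOAT input is rejected even though the number itself would be fine. The usual remedy is updating the Video Helper Suite nodes or placing a conversion node between the two sockets. A simplified sketch of the idea (assumption: this mimics, not reproduces, ComfyUI's actual validation code):

    ```python
    # Simplified sketch (assumption): ComfyUI-style link validation compares
    # declared socket types exactly, so INT -> FLOAT is rejected outright.
    def link_is_valid(output_type: str, input_type: str) -> bool:
        return output_type == input_type

    # A conversion node sitting between the two sockets resolves the mismatch:
    def int_to_float(value: int) -> float:
        return float(value)

    assert link_is_valid("INT", "INT")
    assert not link_is_valid("INT", "FLOAT")   # the error from the comment
    assert link_is_valid("FLOAT", "FLOAT") and int_to_float(12) == 12.0
    ```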

  • @KuschArg
    @KuschArg 10 months ago

    Hi, this is great! Thanks for sharing :) Btw, how can I make the wires straight lines in the workflow? I really like that setting, it looks cleaner than the curved wires... Thanks!

    • @enigmatic_e
      @enigmatic_e  10 months ago

      Yea just go to settings in the manager window and change spline to straight I believe

  • @moritzryser
    @moritzryser 10 months ago

    tysm! Got everything working except the last node group: FaceRestoreModelLoader & Upscale Model Loader. Which two models do you recommend installing there so I can finalise my renders?

    • @enigmatic_e
      @enigmatic_e  10 months ago +1

      I would look at the model name it shows when you first upload the workflow. You might be able to find them through the manager, install model.

    • @moritzryser
      @moritzryser 10 months ago

      @@enigmatic_e thanks, will do

    • @KuschArg
      @KuschArg 10 months ago

      Hi there! I have the same problem, did you find the model for FaceRestoreModelLoader? Thanks in advance!

  • @planet_cs
    @planet_cs 8 months ago

    Any clue how to fix the frame rate issue? All nodes connected to the initial Frame Rate node have a red circle around the frame_rate input.

    • @enigmatic_e
      @enigmatic_e  8 months ago +1

      Try a different video combine.

  • @mVRx3i
    @mVRx3i 10 months ago

    Hi, I downloaded all the files from your PDF, and when I try to generate a video I'm getting this error in the KSampler from the "Output" section:
    AttributeError: type object 'GroupNorm' has no attribute 'forward_comfy_cast_weights'
    Can somebody help me figure out what I'm doing wrong? :S

  • @Truthseeker_12638
    @Truthseeker_12638 10 months ago

    Where can I find and install the LineartStandardPreprocessor node?
    ERROR: comfyui When loading the graph, the following node types were not found: LineartStandardPreprocessor Nodes that have failed to load will show as red on the graph.
    FIX: If you stumble across this after already installing the preprocessor node, just uninstall the node and reinstall, and you will be fixed

  • @drviolet396
    @drviolet396 10 months ago

    Have you tried KSampler RAVE? It seems to work pretty well; I'd be curious to hear whether it helps even more in this specific workflow or not

    • @enigmatic_e
      @enigmatic_e  10 months ago

      Hmm I don’t think I’ve used it. What does it do differently?

  • @futurediffusion
    @futurediffusion 10 months ago

    why do you upload the info on MEGA T.T MEGA has been loading forever and doesn't give me the file.

    • @enigmatic_e
      @enigmatic_e  10 months ago

      Never had any complaints about it but what would you recommend?

  • @hefland
    @hefland 11 months ago

    Hmm, I'm getting a purple outline on my KSampler, so everything before it seems to load and work well. Plus, I bypassed everything after it, such as the Iterative Upscale and Face Detailer sections. I get the errors below. If I figure it out, I'll update in a comment.
    ERROR:root:!!! Exception during processing !!!
    ERROR:root:Traceback (most recent call last):

    • @enigmatic_e
      @enigmatic_e  11 months ago +1

      I’m taking a wild guess and thinking it might have to do with the ipadapter or animatediff. Which models are you using there?

    • @hefland
      @hefland 11 months ago

      Ah, so I fixed that by bypassing the SoftEdge controlnet section, since I had control-lora-depth-rank running in that slot. Whoops!

    • @hefland
      @hefland 11 months ago

      @@enigmatic_e Load IPAdapter Model = ip-adapter-plus_sd15.safetensors
      AnimateDiff Loader = v3_sd15_mm.ckpt
      It's actually running fine now. I had the wrong controlnet model running on the SoftEdge section. I ran control-lora-depth-rank128 in there. I only have the openpose section running right now (all others are bypassed).

  • @tiberiuslawson1172
    @tiberiuslawson1172 9 months ago +1

    The volume on your videos is very low compared to any other YouTube video I watch. Just letting you know

    • @enigmatic_e
      @enigmatic_e  9 months ago

      Do you feel that way about multiple videos or just this one? I’ll try to keep a closer eye on it. I typically keep the voiceover levels at what’s considered industry standards but I gotta double check this video. Thanks for the feedback.

    • @tiberiuslawson1172
      @tiberiuslawson1172 9 months ago

      @@enigmatic_e Yes, I have watched a bunch of your videos with low volume. Part 1 of this seems louder. You should try to hit close to 0 dB when editing. Industry standards might be different than YouTube, since everyone is watching on different devices with different volume output levels. I notice I have to put my volume up by 30%+ when switching to your video from someone else's on my studio monitors. Nevertheless, you have some great tutorials on your channel. Keep up the good content

  • @luisgregori3817
    @luisgregori3817 10 months ago

    The render takes more than 30 minutes for me, I don't understand, I have an RTX 4060 Ti 16 GB

    • @enigmatic_e
      @enigmatic_e  10 months ago

      Depends on how high your resolution is.

  • @DimiArt
    @DimiArt 9 months ago

    I really really really hope you can get it to work with Automatic1111! I love using Automatic1111's UI.