Loki - Live Portrait - NEW TALKING FACES in ComfyUI !

  • Published 7 Oct 2024

COMMENTS • 34

  • @ArrowKnow
    @ArrowKnow 3 months ago +3

    Thank you for this! I was playing with the default workflow from LivePortrait but your workflow fixed all of the issues I was having with it. Perfect timing. Love it

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 3 months ago +1

      Glad it helped! The credit goes to the author, as we used his nodes to fix the framerate :) Thanks so much though - this is exactly why I make mildly customised editions for my packs. I just want to share these tools and see what everyone can do!

  • @dadekennedy9712
    @dadekennedy9712 2 months ago +2

    So good!

  • @GamingDaveUK
    @GamingDaveUK 3 months ago +2

    Got all excited for this as it looked to be exactly what I was looking for... a way to create an animated avatar reading along to an mp3/wav speech file... sadly it looks like it's video-to-video. Looks cool... but the search for a way to create a video from a TTS sound file continues lol

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 3 months ago +1

      We covered that previously: you can use Hedra to do TTS, or use your own TTS with a picture, and it will generate the talking heads as well. In this video we are specifically looking at ComfyUI, where we used Hedra to animate our puppet target character.
      In the previous deep dive we explored 2D puppet animation with motion-tracked talking heads. I have also recorded myself mimicking the words from an audio file, which can then drive the speaking animation :) -- it can work!

    • @DaveTheAIMad
      @DaveTheAIMad 3 months ago

      @@FiveBelowFiveUK Just tried Hedra and the result was really good... but it's limited to 30 seconds. Slicing the audio up could work, but I am likely to have a lot of these to do over time.
      The more I look into this, the more it seems like there is no local solution where you can just feed in an image and a wav/mp3 file and get a resulting video.
      Hedra did impress me though. I remember years ago using something called "CrazyTalk" that worked well, but you had to mask the avatar, set the face locations yourself, etc.... which honestly I would be OK with doing in ComfyUI lol.
      Every solution either fails (dlib for the DreamTalk node, for example) or needs a video as a driver. It's actually all rather frustrating. Maybe someone will solve it down the line.

  • @sejaldatta463
    @sejaldatta463 2 months ago +1

    Hey, great video - you mention liquifying and using dewarp stabilizers. What nodes would you recommend in ComfyUI to help resolve this?

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 2 months ago

      Unfortunately I might have been unclear; afaik there are not any nodes for that (yet haha), but I would use Adobe Premiere/After Effects, DaVinci Resolve or some other dedicated video editing software to achieve that kind of post-processing.
      In previous videos we have looked at using rotoscoping and motion tracking with generated 2D assets for webcam-driven puppets, things like this.
      Recently my efforts have been to hunt down and build some base packs that replace those actions in ComfyUI, eliminating most of the work done with paid software or online services.
      Short answer is, we fixed that in post :)

  • @9bo_park
    @9bo_park 2 months ago +1

    How were you able to capture your own movements and include them in the video? I’m curious about how you managed to show your captured video itself in the bottom right corner.

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 2 months ago

      I have never shown how I create my avatar on screen; it is myself, captured using a Google Pixel 5 phone. I have also started using motion tracking with the DJI Osmo Pocket 3, which is excellent for this.
      The process has been refined from a multi-software Adobe method to a 100% in-ComfyUI approach. It used to be left running all night to finish a one-minute animation, but now I can complete 600 frames in just 200 seconds. We need 30 FPS, so we are close to, but not quite reaching, live rendering speed.
      The process is simpler now; originally it involved large sequences of images with Depth/Pose and a lot of manual rotoscoping. Before, I would have to do a lot of editing and use Adobe Photoshop, Premiere and After Effects. Now I can just load the video from my cameras into the workflow and it does all the hard work, leaving me with assets to place into the scenes.

  • @adamsmith-lb9zv
    @adamsmith-lb9zv 2 months ago +3

    What? Prompt outputs failed validation: Return type mismatch between linked nodes: images, LP OUT != IMAGE. VHS_VideoCombine: Return type mismatch between linked nodes: images, LP OUT != IMAGE

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 2 months ago

      Which workflow in the pack is giving this error?

    • @adamsmith-lb9zv
      @adamsmith-lb9zv 2 months ago

      @@FiveBelowFiveUK V12

    • @adamsmith-lb9zv
      @adamsmith-lb9zv 2 months ago +1

      @@FiveBelowFiveUK The V12 workflow - the error occurs on the LivePortrait node when it composites the video. Updating and re-adding the models and so on still gives the same error.

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 1 month ago +1

      There will be an update to this pack, because we switched the backend to MediaPipe (open source); the old ones used inswapper (a research model).
      This can happen from time to time when the authors make significant changes to the code. Thanks for letting me know.

  • @sprinteroptions9490
    @sprinteroptions9490 3 months ago +1

    Great stuff.. works well.. but the workflow is a lot slower than the standalone when just trying out different photos to sync.. it's like it's processing the video again every time? With the demo, animating a new image takes roughly 10 seconds after a video has been processed the first time.. but the Comfy workflow takes over a minute every time no matter what.. maybe I tripped something? I dunno

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 3 months ago

      If you used my demo video head, it's quite long; it's possible to set up a frame limit, then batch the runs by moving the start frames. I used the default of the whole source clip, which might be hundreds of frames.
      If you see slowness in general, there is a note about ONNX support and a link to how to fix it on the LivePortrait GitHub; I believe this is to do with the ReActor backend stack, which is similar.
      With Loki Face Swap you should see almost instant face swapping when using a presaved face model that you loaded.

  • @angloland4539
    @angloland4539 2 months ago +1

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 2 months ago

      Don't forget to check the latest video! An alternative for talking with motion.

  • @guillaumebieler7055
    @guillaumebieler7055 2 months ago +1

    What kind of hardware are you running this on? It's too much for my A40 Runpod instance 😅

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 2 months ago

      Even my 4090 can bottleneck on the CPU side with more than ~1000 frames in a single batch.
      This used the video input loader, and the default will use the whole source clip. If you used more than 10-20 seconds at 30 fps, it might start to struggle even with a nice setup. I split my source clips up and use the workflow like that.
      Alternatively, with a longer source clip, use a 600-frame cap and set the start frame skip to 0, 600, 1200, 1800, etc., adding 600 each time; then you can join the results later (a rough sketch of that batching arithmetic is below). I'll include a walkthrough in the next Loki video; it splits the job into parts which are more manageable :)
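
      A rough sketch of that batching arithmetic, as a hypothetical helper (not part of the Loki pack): the frame cap and skip values mirror the video loader settings described in the reply above, and the per-batch output filenames are only placeholders.

```python
# Batch a long source clip into 600-frame runs, mirroring the
# frame-cap / start-frame-skip approach described above.
import math
import subprocess

TOTAL_FRAMES = 1800   # length of the source clip, e.g. 60 s at 30 fps
FRAME_CAP = 600       # frames processed per workflow run

skips = [i * FRAME_CAP for i in range(math.ceil(TOTAL_FRAMES / FRAME_CAP))]
print(skips)  # [0, 600, 1200] -> run the workflow once per skip value

# After rendering each batch (placeholder filenames), join the parts with ffmpeg:
with open("parts.txt", "w") as f:
    for skip in skips:
        f.write(f"file 'loki_batch_{skip}.mp4'\n")

subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0",
                "-i", "parts.txt", "-c", "copy", "joined.mp4"], check=True)
```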

  • @adamsmith-lb9zv
    @adamsmith-lb9zv 3 months ago

    Blogger, does this node only work on Apple (macOS) devices? The workflow nodes load, but there is an error message related to MPS.

  • @Avalon19511
    @Avalon19511 3 months ago +1

    How did you get one image in the results? Mine is split between the source and target.

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 3 months ago

      If you are using the workflow provided (links in the description), I have made the changes shown in this video. Those changes were: 1. removed the split view (we want the best resolution for later use); 2. added FPS sync with the source video; 3. connected the audio, so the final video uses the input speech.

    • @Avalon19511
      @Avalon19511 3 months ago

      @@FiveBelowFiveUK All good, just copied yours; definitely not as smooth as Hedra, but it's a start :)

  • @bugsycline3798
    @bugsycline3798 2 months ago +1

    Huh?

  • @alirezafarahmandnejad6613
    @alirezafarahmandnejad6613 3 months ago

    Why is the face in my final video covered with a black box?

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 3 months ago +1

      This would indicate that something did not install correctly with your backend.
      Check the GitHub for the node you are using and see if there are any reports from other people. Two people have reported this since I launched the video.
      github.com/Gourieff/comfyui-reactor-node
      contains good advice if you have problems with InsightFace (required).

    • @alirezafarahmandnejad6613
      @alirezafarahmandnejad6613 3 months ago

      @@FiveBelowFiveUK I don't think it's an InsightFace issue because I fixed that beforehand; I don't have issues with results coming out of other flows or nodes that include InsightFace, only this one, which is weird. I even tried the main flow and user-made ones - same issue.

    • @alirezafarahmandnejad6613
      @alirezafarahmandnejad6613 3 months ago

      @@FiveBelowFiveUK Never mind bro, fixed it :) The issue was that I was using the CPU for rendering; I changed it to CUDA and now it works fine.
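
      For anyone hitting the same thing, a minimal way to confirm a CUDA provider is actually available before switching the node from cpu to cuda - assuming the backend runs on onnxruntime, which the InsightFace/ReActor stack mentioned elsewhere in this thread relies on:

```python
# Minimal sanity check (assumption: the failing node's backend is onnxruntime,
# as used by InsightFace). If only the CPU provider is listed, the node's
# "cuda" option has nothing to run on.
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

if "CUDAExecutionProvider" not in providers:
    print("CUDA provider missing - install the GPU build, e.g. pip install onnxruntime-gpu")
```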

  • @Avalon19511
    @Avalon19511 3 months ago

    Also, your Video Combine is different from mine; mine says image, audio, meta_batch, vae. Is it possible to change the connections?

    • @veltonhix8342
      @veltonhix8342 3 months ago

      Yes, right click the node and select convert widget to input.

    • @Avalon19511
      @Avalon19511 3 months ago +1

      @@veltonhix8342 thank you, any thoughts about getting one image in the results?

    • @FiveBelowFiveUK
      @FiveBelowFiveUK 3 months ago

      Download my modified workflow from the description :) it's on Civitai.