How to use LivePortrait. Learn to Animate AI Faces in ComfyUI.

  • Published 23 Jan 2025

COMMENTS • 140

  • @sebastiankamph · 6 months ago +3

    Patreon subscribers saw this video first as an early access video. Detailed guides and exclusive content on www.patreon.com/sebastiankamph

    • @jonrich9675 · 6 months ago

      Is there a way to install this with one simple button yet? I really don't like installing Python and other dependencies to get this working.

    • @LouisGedo · 6 months ago

      👋

    • @chenfang2331 · 6 months ago

      @@jonrich9675 I made an app for it. Cooraft!

  • @L3X369 · 4 months ago +2

    Man, I wish I had this two years ago, my work from home experience would have been sooo relaxing.

  • @ImAlecPonce · 6 months ago +5

    lol... after I spent hours getting the LivePortrait standalone to work yesterday...
    Thanks a ton!!!!

  • @SerginMattos · 6 months ago

    Idk why I insist on looking for things outside this channel, only to find what I was looking for right here after downloading 20+ GB of stuff I didn't need... Thanks a lot, Sebastian!

  • @christophermoonlightproduction · 6 months ago +5

    This just isn't working for me. I don't have anything in ComfyUI_windows_portable\ComfyUI\models\insightface\models and I'm missing the top nodes in the first and second rows. Manager can't seem to find them. Please, help.

  • @JustFor-dq5wc · 6 months ago

    Great, as always. I had to use the NOT PORTABLE version of ComfyUI to make it work. It's pretty easy to install. It's amazing how fast it works, even on my RTX 3060.

  • @aswittyasAIcanbe · 3 months ago

    There is some confusion about the usage of pretty much anything that comes from LivePortrait. People who used these tools have had their videos removed on YouTube. Does anybody know if people can use it or not? I mean, is it always considered monetizable on YouTube?

  • @Mowgi · 6 months ago +15

    I can't take anything you say seriously at 7:50 with her in the background doing THAT 😂

    • @antonin766 · 3 months ago

      That's exactly what this tool is made for 😄

  • @sirmeon1231 · 6 months ago +1

    New tutorial in your face!! 🙌🏼😂 fascinating tool!

  • @JM_Traslo · 6 months ago +3

    I use Forge WebUI for the most part. I honestly wish we'd get an extension like this but specifically just for the eyes, even if we could just point them in the direction we want without taking it into photoshop and then back into AI tools.

    • @MattLind-oi5kg · 6 months ago

      EXACTLY! Read my comment above where I am raging out over us not having SIMPLE solutions like this to manipulate facial EXPRESSIONS via single image guidance input. If they can do it with video, why not with single images?

    • @123playwright · 6 months ago

      @@MattLind-oi5kg It might have to do with how computer vision/machine learning works... maybe the solution isn't so simple after all.

  • @nirsarkar · 6 months ago

    Thanks. I tried on Mac, and it works with the tweaks for MPS. This is insane tech.

    • @tdnueve · 5 months ago

      What is the tweak? I can't get it to work.

  • @MikevomMars · 6 months ago +2

    Couldn't get it to work - The following node types were not found:
    DownloadAndLoadLivePortraitModels
    LivePortraitProcess
    Lots of users seem to experience the same issue. This python install madness is so annoying 🤦‍♂

  • @user-fo9ce3hr5h · 4 months ago

    How do you animate images and sync them using only a voice track, not a live recording or a video?
    I've really been searching for this.

  • @ВладимирБондарь-т8ь · 6 months ago

    Hello! Please tell me how to fix the error.
    (Error occurred when executing LivePortraitProcess:
    LivePortraitProcess.process() missing 1 required positional argument: 'crop_info')
    Thank you

  • @giuseppedaizzole7025 · 6 months ago +6

    The million-dollar question... is it possible to use it in Automatic1111? Thanks

    • @Steamrick · 6 months ago +2

      Not unless someone takes the time to port the code...

  • @CharlesMonroe · 5 months ago

    Hi Sebastian!
    One question: What is the maximum resolution and fps you can get from your output mp4 video?

  • @angryyoungman4389 · 6 months ago +2

    Hi Sebastian, can we combine the output of two or three models into one? For example, if we want to use lip sync along with another model, can you show us how it's done?
    How can we route or build the nodes in such a way?

  • @Maltebyte2 · 6 months ago

    I don't have any of these video and image nodes; it looks very different when I launch run_nvidia_gpu.bat. Please help! Thanks!

  • @chronicallychill9979 · 4 months ago

    Is there anything like this that can generate the animation without a source video?

  • @ragemax8852 · 6 months ago +1

    I need LivePortrait to be more like Hedra, with the body moving and the face moving just enough, no more. Still, it's the first real alternative to HeyGen, which is now obsolete.

  • @qwetry-j2u · 1 month ago

    Hi! Is there a similar AI app (i.e free and runs directly on the user's PC) that works not only with speech and facial expressions, but with hand gestures as well?

  • @DigitalAI_Francky · 6 months ago

    Can you tell me how to make a video longer than 8 seconds? Does anyone know in which node I need to set that?

  • @rogersnelson7483 · 5 months ago +1

    After hours, I give up. Every time I give Comfy a chance, this happens: nothing but installation problems. I followed everything step by step and still get weird errors that I search for and can't find answers to. I now think there is something wrong with my system, because even Pinokio installs fail. Errors like: Prompt outputs failed validation
    ImageConcatMulti:
    - Return type mismatch between linked nodes: image_2, LP_OUT != IMAGE
    ImageResizeKJ:
    - Return type mismatch between linked nodes: get_image_size, LP_OUT != IMAGE and many more.

  • @realmakebelieve · 6 months ago +2

    What's the minimum GPU memory (GB) you need to run this in ComfyUI?

  • @Radarhacke · 6 months ago

    Thank you, but why can't we use a webcam for live recording?

  • @ThomasYoutubeKanal · 6 months ago

    When loading the graph, the following node types were not found:
    DownloadAndLoadLivePortraitModels
    LivePortraitProcess
    Nodes that have failed to load will show as red on the graph.

    • @ThomasYoutubeKanal · 6 months ago

      Manager fails:
      (IMPORT FAILED) ComfyUI-LivePortraitKJ
      Try fix
      Uninstall
      Nodes for LivePortrait, insightface is required

    • @ThomasYoutubeKanal · 6 months ago

      I'm on the Windows 10 portable version of ComfyUI.

  • @antonin766 · 3 months ago

    Meanwhile I'm still trying to install SadTalker. This seems much better. Thanks!

  • @ian2593 · 3 months ago

    Nice video. Question: how do I stop the output video from following the eyes in my driving video? The guy in my video is looking up all the time, reading lyrics. Thanks.

  • @caseywilson6893 · 4 months ago

    Any recommendations for a good audio node? I have a video of me talking and a video of them talking, and I want to use their face and voice over my video.

  • @wagnerfreitas3261 · 4 months ago

    great video man, thank u

  • @donghyuk80 · 5 months ago

    Thanks for the nice tutorial. When I follow the instructions, it generates only an 8-second video whatever the length of the input is. Can I make the output video the same length as the source video?

  • @123playwright · 6 months ago +1

    Please make the same workflow guide for v2v!

  • @JoshRobirds · 5 months ago

    can this be used with a webcam for live rendering?

  • @Cserror1 · 2 months ago

    I just got a Mac, and installing ComfyUI on Mac is so hard :/ Any tips to make it easier?

  • @rayx5002 · 16 days ago

    I'm new to this Pinokio AI thing... how long does it take to generate such a video? I tried it yesterday and my Mac was just spinning up its fans, but there was no result even after 15 minutes, so I stopped it. Does it work better on an Intel-based Mac, or are the waiting times just very long?

  • @skycladsquirrel · 6 months ago

    Anyone else having issues with the video outputs shaking and jittering?

  • @ogrekogrek · 6 months ago +2

    Because of insightface: "Please note that insightface license is non-commercial in nature."

  • @-Burs · 5 months ago

    Finally! I got it working. Thanks for the video! Some parts (especially insightface) were tricky due to my portable ComfyUI installation, and some models must be downloaded manually and put into the correct folders. But heck, it was worth it.
    The only thing I haven't figured out is why lip and eye retargeting don't work, even though they are enabled. The workflows (now 3 of them) look a bit different; maybe there is something else I need to tweak for that to work? Setting them to either 0.1 or 10 with different assets has no effect at all.

    • @nextlevelbrosagency · 3 months ago +1

      How did you get insightface working with the portable ComfyUI installation? I am stuck on that. *EDIT* Nevermind I found the solution here: ua-cam.com/video/vCCVxGtCyho/v-deo.html

    • @-Burs · 3 months ago

      @@nextlevelbrosagency Yeah, that's exactly where I found the fix too. I had to install the .WHL file manually. Hope everything works for you now.

    • @newwen2102 · 2 months ago

      @@nextlevelbrosagency Thanks a lot

    • @SecretAsianManOO7 · 21 days ago

      is the head area of your animated character all wobbly and shaking like it's being affected by heat distortion?

    • @-Burs · 21 days ago +1

      @@SecretAsianManOO7 No, I don't think so. I would remember such an issue. I had some other issues, so I had to play with some commands and configuration, but not related to the image in the way you're describing.

  • @Ceeed100 · 6 months ago

    I've got a question: in your example you have settings for eye retargeting etc. in the LivePortrait Process node, but I don't have that setting ^^

  • @joechip4822 · 6 months ago

    How can this be used in the comfyUI version of 'Stability Matrix'?

  • @lennoyl · 6 months ago

    I had big memory issues (crashes/freezes when reaching 100%), but they were due to the Image Concatenate Multi and Video Combine nodes taking too much RAM. I just added a Save Image node at the full-images output of the LivePortrait node and it solved it (the nodes then take the images from the drive instead of RAM).

  • @wealthgrowth · 5 months ago

    Is it possible to run LivePortrait in real time with a video feed straight from a camera, or does it only work with prerecorded video?

  • @erdbeerbus · 6 months ago

    Excellent workflow, thank you! Which value is responsible for getting smoother results? Most of my attempts move the face around a little too much... thanks in advance!

  • @pitoko666666 · 6 months ago

    Awesome! Where can we find more example face video assets?

  • @Romeo615Videos · 6 months ago

    is this 100% free to use if done on my local machine?

    • @-Belshazzar- · 6 months ago

      Yes, but I'm not sure what the commercial-use terms are, if that's what you mean.

  • @KookyBone · 6 months ago

    Noob here: can someone tell me where I have to put the JSON example to open it in ComfyUI? I just installed it with some models and the Manager, but now I'm not sure which folder to put the example in or how to open it. I tried drag-and-dropping it into the browser window and nothing happened; same when I click "load" in the manager window.

    • @123playwright · 6 months ago

      Chatgpt is your friend

    • @remaztered · 5 months ago

      Did you right-click and save the file? Open the file page and use the download arrow on the right-hand side instead.

  • @RickySupriyadi · 6 months ago +1

    I wish ComfyUI had this feature:
    set up this kind of node graph, test it, and once it succeeds, export it as an app.
    Export would create an installer for Windows/Linux/Mac, or even for iOS, Android, or ARM if the requirements are met.
    Installing the app from that installer would also install all the necessary files, so the nodes would work as in ComfyUI without any of the technical installation shown in this video...

  • @heditrigui4384 · 4 months ago

    Got this error: Prompt outputs failed validation
    ImageConcatMulti:
    - Return type mismatch between linked nodes: image_2, LP_OUT != IMAGE
    ImageResizeKJ:
    - Return type mismatch between linked nodes: get_image_size, LP_OUT != IMAGE
    Can anyone help?

  • @a.aye. · 6 months ago

    This was really helpful thanks. Do you have any idea why it speeds up some of the videos? The output is way faster than the source video.

  • @jonahoskow7476 · 6 months ago

    Are you going to do the video to video version soon?

  • @Avalon1951 · 6 months ago

    The issue I'm running into is that I get squash and stretch on the head. At first I thought the resize node was doing it, but even after removing it I still get it, even though I'm using your exact settings; my head motion is not even as smooth as yours. Any thoughts? The image I'm using is 512x512. Should the image and video be the same size, maybe?

  • @drucshlook · 5 months ago

    There's early access to the workflow on your Patreon, is that correct?

  • @kelvin8143 · 6 months ago

    Good job! When I turn live-action video into AI animation, the dialogue lip sync in the animation isn't obvious. So, can I use this technique to enhance the mouth movements of animated characters?

  • @rangorts · 6 months ago

    Can you use this on A1111?

  • @giuseppedaizzole7025 · 6 months ago

    Nice video!

  • @iresolvers · 6 months ago

    Can't wait until you can use an audio file to drive the lip sync for LivePortrait!

  • @marcdevinci893 · 6 months ago +1

    This is great! Is it only usable with square format?

  • @nimatells · 6 months ago

    Thank you so much!

  • @SecretAsianManOO7 · 21 days ago

    Your guide works and the LivePortrait workflow functions properly; however, my character's face is very wobbly, shaking a bunch like it's being affected by heat distortion. Anyone else experiencing this?
    EDIT:
    I thought perhaps it had to do with the file format, since the examples provided by the author use mp4s, but even when I convert my video files to mp4 it still shakes and wobbles like crazy, fml. Maybe it has to do with my camera rather than the format. I'm surprised to find no discussion of this anywhere.
    Solution: I seem to have found the fix. In case anyone else is experiencing what I had: you need to denoise your video. I took the ffmpeg route and used ChatGPT to give me the proper commands to run in the command prompt to denoise my video. If your avatar's head is still not 100%, you can tell GPT to produce a command line that increases the denoising strength even more.
    ffmpeg also seems to be the way to go for synchronizing the FPS and audio of your input video with your finished avatar. Just got done... it works, it's awesome, I'm stoked!
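
    The ffmpeg route described above can be sketched roughly as follows. The comment doesn't give the exact commands, so the filter choice (hqdn3d), the strength values, the frame rate, and the file names are all illustrative guesses, not the commenter's actual invocation:

```shell
# Hypothetical sketch: denoise a driving video before LivePortrait, then
# mux the input's audio back onto the finished avatar. hqdn3d is ffmpeg's
# 3D denoiser; 4:3:6:4.5 are its default luma/chroma spatial and temporal
# strengths (raise them, e.g. hqdn3d=8:6:12:9, if the head still wobbles).
# Guarded so the script is a no-op when ffmpeg or the inputs are absent.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mp4 ]; then
  ffmpeg -y -i input.mp4 -vf hqdn3d=4:3:6:4.5 -c:a copy denoised.mp4
fi
if command -v ffmpeg >/dev/null 2>&1 && [ -f avatar.mp4 ] && [ -f input.mp4 ]; then
  # Take video from the avatar, audio from the original, at a fixed fps.
  ffmpeg -y -i avatar.mp4 -i input.mp4 -map 0:v -map 1:a -r 25 -c:a aac -shortest synced.mp4
fi
```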

  • @dougmaisner · 6 months ago +1

    crazy cool. best AI stuff on this channel.

  • @riccardobiagi7595 · 6 months ago

    Why isn't it working on my MacBook 😥

  • @deviantmultimedia9497 · 6 months ago +1

    Doesn't work. Red everywhere. No manager button

    • @CosmicFoundry · 3 months ago

      You have to install the Manager.

  • @ElHongoVerde · 6 months ago

    YAY!!! Monday vid... WAIT. It's Tuesday... Where is my Monday video?

  • @YTbannedme-g8x · 6 months ago

    It's currently only image-to-video and, quite frankly, still a meme, not useful outside of memes. The truly amazing stuff comes when they release the video-to-video code. Someone has already tweaked the code and uploaded Gladiator scenes with it in action, and it's honestly amazing. I can't wait to use it to make videos. You could parody an entire movie with a whole different script.

  • @singhsgurpreet · 6 months ago

    You are the best. I was really impressed when, in your SDXL release video, you gave the release date at the beginning without worrying about view counts. Thanks!

  • @vickyrajeev9821 · 6 months ago

    Thanks. Can I run this on a CPU? I don't have a GPU.

    • @123playwright · 6 months ago

      Probably not; you need at least 3 GB of VRAM.

  • @WonDerAnh · 14 days ago

    thanks :D

  • @Steamrick · 6 months ago

    From the name, I was kind of hoping this would be something where I could use my webcam as input and get an animated picture as live output.

  • @tobiasmuller4840 · 6 months ago +3

    Maybe you should put a disclaimer behind "you can use it for anything". As the licensing for insightface is a mess, I guess you can basically just use it for research, right?

  • @AlejandroGuerrero · 6 months ago

    This tutorial was great, thanks. Question: is there any tool (like this, to run locally) to upload an mp3 voiceover and generate the mouth and eye movements to use later in this process? Thanks!

    • @123playwright · 6 months ago

      You can use FaceFusion lip sync. Use the mp3 as the "source" and your video as the target.

  • @kritikusi-666 · 6 months ago

    for whatever reason, I cannot get the portable version to work lol. So annoying. Nice guide. Thanks for sharing.

  • @Beauty.and.FashionPhotographer · 6 months ago

    You know what would be cool and awesome? If a top expert like you did a review of which PC build, graphics card, RAM, etc. would be best for Auto1111, ComfyUI, and even for creating LLMs. I don't think anyone out there has done one in the last 12 months. I see some people using multiple graphics cards to create LLMs, which is fascinating, and nobody has done a YouTube video on that either... I mean, how much faster would such a multi-card setup be for Auto1111 and ComfyUI? If you ever consider doing such a video, don't forget to cover three different budgets: super cheap, mid-range, and unlimited-budget builds, all for home use, so it makes sense to viewers here... I doubt companies or corporations go to YouTube to find such builds. That would be really great.

    • @123playwright · 6 months ago

      It depends on how long you can wait. I generally see people recommend the RTX 3060.
      I also remember reading that an RTX 4090 would be about 10x the generation speed of an RTX 3050.

  • @Instant_Nerf · 6 months ago

    This is a step forward .. but we need the rest of the body movement

  • @assasinsbear · 6 months ago +1

    7:25 That's what Zoomers call the Skibidi Toilet mode

  • @ImAlecPonce · 6 months ago +3

    4060 Ti: in ComfyUI (portable version) it takes 15.5 minutes... in the LivePortrait standalone it takes 3 minutes for a 1-minute video.

    • @juanjesusligero391 · 6 months ago

      Hey, that's some valuable info, thanks for sharing! :D
      Just a question: does the LivePortrait standalone have the same control options as ComfyUI? More? Fewer?

    • @ImAlecPonce · 6 months ago +2

      @@juanjesusligero391 It has options for pasting back into the image or just keeping the square, as well as a few more things I haven't checked yet.

    • @123playwright · 6 months ago

      @@juanjesusligero391 Way fewer.

  • @-Belshazzar- · 6 months ago

    Great video as always, thank you! I just don't get why, when I change the resolution to 1024, it still outputs a 512 video?

  • @imagineArtsLab · 5 months ago

    Hey guys and gals, this is my entry.
    I believe our first priority should be to make this 'stuff', the tools of the future, as ACCESSIBLE as possible. To get artists IN we need to be OPEN.
    And the best way to do that is to flatten the learning curve as fast as humanly possible.
    * No more gatekeeping!
    * No more hiding 'workflows' that are out of date in a week's time behind a paywall!
    * No more technical jargon!

  • @thokozanimanqoba9797 · 6 months ago

    You should work on your lip settings; all your images' lips look glued.

  • @Edur2d22 · 6 months ago

    Please V2V

  • @RobertsDigital · 3 months ago

    I really hate code.
    I hate websites without simple click interfaces like Leonardo.
    The problem with these code-heavy sites is that 99% of the time things don't work and you have to spend hours learning. That's bad, especially if you have other things to do.

  • @RickySupriyadi · 6 months ago +9

    As someone born Chinese, this is my wildest dream: having big eyes! 👀
    Boy oh boy, I'll be really handsome on camera now!

    • @beyounickvlog5285 · 6 months ago +4

      Don't be insecure, bro. Having small eyes is totally OK and doesn't make you less handsome.

    • @RickySupriyadi · 6 months ago

      @@beyounickvlog5285 Thanks, but I'll still use it heheh

    • @Naynay37758 · 6 months ago +4

      Trying to be funny by putting down your entire race, good on ya. ❤

  • @MattLind-oi5kg · 6 months ago +2

    ipadapter, controlnets, reactor and all the rest ASIDE... what tools are there that can TRANSFER facial expressions like this with SINGLE IMAGE input? It seems like it would be so easy. If they can do it with a video, why not with single images? Fuck a "face swap"! We've got that sorted. Now we want to tweak expressions like with these "driver" videos, but using single images.
    Fuck text-prompting out expressions! Fuck fiddling around with laborious and complicated shit. Just a simple image input where the EXPRESSION (not the facial characteristics) can affect your gens.
    What's the hold-up, tech bros?

    • @juanjesusligero391 · 6 months ago +1

      Dude, you can use exactly this (LivePortrait) for just an image by adding a 'video' containing one frame as a source. I'm not sure if you can load an image sequence as a video, but if you can, that would be it. If the node needs a video, you can use FFmpeg to convert an image into a 1-frame video.
      You could also make a constructive critique in the ComfyUI node repo, asking for the feature nicely. Remember, the people making this tech possible are working hard, even if it's not as user-friendly as we'd like yet.
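
      The FFmpeg trick mentioned above (still image to 1-frame video) might look like this. The exact flags and file names are not from the comment; they are an illustrative sketch:

```shell
# Hypothetical sketch: wrap a single still into a one-frame mp4 so nodes
# that expect a video input will accept it.
# -loop 1 repeats the image, -frames:v 1 keeps exactly one frame, and
# -pix_fmt yuv420p ensures broad decoder compatibility.
# Guarded so the script is a no-op when ffmpeg or the image is absent.
if command -v ffmpeg >/dev/null 2>&1 && [ -f face.png ]; then
  ffmpeg -y -loop 1 -i face.png -frames:v 1 -pix_fmt yuv420p face_one_frame.mp4
fi
```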

    • @MattLind-oi5kg · 6 months ago +1

      @@juanjesusligero391 Yeah, sorry about the frustrated tone in my comment. :/
      I actually did try the solution you suggest, but had subpar results. I will look into it further. I used a video editor, dragged a few frames from a sequence down to the timeline, and let them play together as a video. It still feels like a hack or a workaround.
      What I was getting at was that if LivePortrait can let us manipulate features on a face without face swapping, LoRAs, or other means such as ipadapter, and can accomplish this with VIDEO, surely it could be done with single images?
      There are many guides on how to "tease out" expressions in images, but this LivePortrait solution is so direct and simple. It's what I have been looking for.

    • @MattLind-oi5kg · 6 months ago

      @@juanjesusligero391 I mean, think about it. You've done a great gen of a face you like. Now you want this face to smile, or look cheeky, or distraught, or whatever. You want to try a multitude of different things, various nuances of emotion. With a single-image LivePortrait solution you could do that so much more easily than via traditional methods.

    • @juanjesusligero391 · 6 months ago +1

      @@MattLind-oi5kg Yeah, I totally understand :) But all this technology is still soo young... Developers are still trying to figure things out (and most of them are doing this just out of love, since they don't see a single dollar for their contributions), but in the (probably near) future we'll have a lot more control over things like this. Be patient, my friend! ^^

    • @MattLind-oi5kg · 6 months ago

      @@juanjesusligero391 I will take heed. :)

  • @MorrisLiveProductions · 6 months ago +1

    Not Scary At All

  • @RikkTheGaijin · 6 months ago +2

    I tried it on my own video; the results look like ass.

    • @jaywv1981 · 6 months ago +3

      I've found it works best when the first frame of your video is a neutral expression.

    • @MattLind-oi5kg · 6 months ago +2

      @@jaywv1981 Good tip. I am faffing around with their own WebUI version, and all their example videos work flawlessly, but with others it's hit or miss. I've had some horrendous "I have no mouth and I must scream" moments, haha.

  • @g.s.3389 · 6 months ago +2

    It is not live, it is batch processing... the name is misleading.

    • @piemoul · 6 months ago +1

      Relax man, it's just a brand name.

  • @kitooart · 2 months ago

    Make it simpler for beginners; most viewers are doing this for the first time, so it's not helpful for us.

  • @nextlevelbrosagency · 3 months ago

    For anyone using the portable ComfyUI installation and having difficulty with the insightface installation, I found a solution that works perfectly by following the directions in this video:
    ua-cam.com/video/vCCVxGtCyho/v-deo.html

  • @miguelsureda9762 · 6 months ago

    🎵🎹🎶 💃 OpenAI My Sugar Papi by PEACHY da WHUUPi on UA-cam NOW.
    Hey competition coming guys !
    Used hedra