TurboType is a Chrome extension that helps you type faster with keyboard shortcuts. Try it today and start saving time. They have a free forever plan!
www.turbotype.app/
Thanks man, amazing video, and I will try this hotkey manager.
Lowkey one of the best AI youtubers
Free and open source? No such thing. If it were, installing it wouldn't be needed: you'd just double-click and it would work.
@@minhuang8848 Harvesting info is the goal today. That's why Microsoft made Copilot AI and Recall: spyware within the operating system.
Will it work on iOS devices?
The first thing I think of is a potential VR application. This would be in real time: your whole expression would be projected onto your VR avatar.
It looks like it still takes some processing time to run the software, so at least currently you couldn't animate an avatar in real time from a local recording. For VTubers, it's at least a few months off. But if it's run by a cloud service, maybe they could make it work.
It's not new; Meta has already demoed this type of realtime avatar. Zuck showed it off on Lex Fridman's podcast.
It's realtime and very high quality; it just requires some pre-processing of a full face scan.
@@iankrasnow5383 with the speed of tech innovations in the last few years, if they decide to work toward this it won't be that long
video calling
@@iankrasnow5383 A lot of time. Still not a consumer-friendly implementation. Might take some more time to be realtime-ready. Fingers crossed.
I normally don't mess with this stuff until there's a proper interface; super excited for this stuff to be working open source.
You're right. For reasons I cannot fathom, very few AI tools are pushed to the point where they get a user-friendly client like Midjourney. Even a programmer like me struggles to pull a repository from git and build it myself; I don't think any regular artist out there can do AI without at least a client-side app.
@@SVAFnemesis what, you don't like typing in Discord? heretic.
@@English_Lessons_Pre-Int_Interm can you please carefully read and understand my comment.
@@SVAFnemesis I think they were being sarcastic.
I'm always too lazy to comment or like, and subscribing is almost impossible, but I did it all today on your video. Just excellent. Thinking from a programmer's point of view, your video taught me a lot today. Thanks mate, from India.
Wow, the option to animate a face in a source video has great potential. I can already see people creating scenes of people interacting with each other in Runway Gen-3 or another video generator, and then editing the video so that the people in the scene actually talk!
We're one step closer to creating movie scenes.
exactly!
@@user-cz9bl6jp8b I don't know. I never tried it. I was only commenting on what I saw in the video.
@@user-cz9bl6jp8b I'd like to know this too.
This is probably the best instructional video I have seen lately, going through the steps in great detail. Thank you for that!! Once installed, it runs smoothly.
You're very welcome!
Were you able to run it on a group photo to animate multiple faces? I was unable to.
@@abhishekpatwal8576 You can uncheck 'do crop' and it does try, but the result isn't there yet.
How soon can I get a Hogwarts painting of my dead grandmother to tell me to wash my hands every hour?
Now, if you want
Now, if you have the time and technical expertise, or enough money to pay someone else to do it.
Very quickly: just get Pinokio and install Live Portrait; it's only 3 clicks. Also, Fooocus and Stable Cascade are great for Midjourney-quality AI stuff.
You can fast track this with Runway.
I need to recant and say that everything worked out in the end. It is necessary to install all the components first, and only at the end of it all can you install the platform. Thanks for the video.
glad you got it to work!
Which Python version did you use?
@@UserGram-1 What I did was follow the tutorial in this video and after four or five failed attempts it ended up working.
What happens if the source sticks out a tongue?
The entire internet crashes.
There are already artifacts with the teeth, where they stay static the way the hair does against the background; I'd imagine the same will happen with the tongue.
Huge step up for open source nonetheless!
Harambe is resurrected
doesn't work
What is bro planning to do
This is FANTASTIC! It's going to take a minute to "git" everything (the dependencies) installed and working properly (macOS Monterey), but this is open source, so I have nothing to complain about. I'll do whatever it takes, and however long it takes, to nail this one. Thanks for the tutorial!!
Part of the web UI is a file (frpc_windows_amd64_v0.2), which is a reverse-proxy utility. It looks extremely untrustworthy to me. Running it under a virtual environment mitigates some of the risk, but I'm still skeptical. You should really be running this in an extremely sandboxed operating system.
China + open source + free = I'm gonna steal your life now.
@@kirtisozgur yeah, pretty much
Thanks for sharing!
I also noticed when you add a video it says "uploading video" which seemed a little sus for a local install.
Never trust a Chinese company that wants your data
Amazing! Thanks for sharing Live Portrait! And thanks for the TurboType tool too! Amazing and practical!
Create a source video from the movie "The Mask", when Jim Carrey's character gets freaky with his eyes and mouth.
I hope future versions of LivePortrait can do the entire body - or at least the upper part, including arms and hands. That'd be such a breakthrough in motion capturing technology!
I got a problem when entering the line "conda activate LivePortrait". It returns "CondaError: Run 'conda init' before 'conda activate'". What should I do?
Type this: conda init
It will ask you to close the cmd.
Then restart and follow the same instructions.
After that, type: conda activate LivePortrait
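The whole sequence in one place, as a sketch (this assumes conda is already on PATH and uses the env name from the video):

conda init
rem close this cmd window, open a new one, then:
conda activate LivePortrait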
@@Bandaniji24 Thank you! It worked. 🤝
Thank you!
I'm working on a cartoon about a mischievous young girl named Yumi. I've been using AI since the beginning and finally found a way to make her consistent with apps that effectively use character-reference techniques; I've even trained a model on my character. As a creative partner with Pika, Leonardo AI, and finally Runway ML, I am able to create a ton of content, but I will need to add the character animation. While I actually turned my AI character into a fully rigged MetaHuman, it's nice to know that if and when I need a quick shot and don't have the time to set it up in Unreal, I can quickly generate my character in the scene. Then I, or my niece, who will do most of the performance work for Yumi, can act out and voice her, and I can use that footage and audio to animate the clip. This is an amazing time to be in. As someone who uses AI as a tool, I can see the several use cases for stuff like this, and it's going to make life easier for me, as I am a studio of one and have zero budget to make most of my stuff. So, using a combination of free tools and my natural resourcefulness, I am starting to make headway. The one-man film studio era is here now.
I am right alongside you brother!
Can you give a short list of how you would go about character consistency with free tools? I found it either extremely lacking (wasn't consistent) or paid models I couldn't try.
Example:
1. AI X: it's free and has no tokens; I use it to do this and that. Then step 2 is...
2. AI X2: also free and has no tokens; now you can...
I've been using this since the beta, starting about 3 years ago. Their current version is proprietary, does so much more, and is lifelike in every way.
This is insanely fantastic for controlling expressions. Keep making your great videos. 💯 Top notch. 😁
thanks!
Available in pinokio?
That's crazy! I've wanted to create a YT channel for so long but didn't want to use my own voice or face. I can do it now :)
That's a great idea. Good luck to you and your channel! Did you need a Chinese phone number to run this app?
@monday304 LivePortrait doesn't require any number
I always appreciate that you show every single step of the install. It's very helpful for people who aren't familiar with code.
my pleasure!
You are REALLY giving me all these things right when I NEEDED them to make my animation!!!!
Nice! Can I see your animation when you're finished?
Good luck!
It looks weird in motion, but if you pause at any point during the animation, the expression looks good and natural.
For the people who can create these kinds of applications in code: why don't they create an installer, .exe style?
Because .exe is a Windows thing; the rest of the world uses a real Unix-like operating system, like macOS or Linux. ;)
Exactly this. I guess the people who can make this kind of stuff are just used to doing things the hard way.
A zipped .exe on cloud storage would have made this a no-brainer.
Laziness.
There is this other program called ChatGPT. I had never used code or done any programming before, but I installed Linux on an old computer, since it takes 90% fewer resources to run than Windows, and I just tell ChatGPT what I want to do... copy command... paste command... and I've somehow built custom personal apps without knowing what I'm doing. If anything goes wrong, I just ask ChatGPT, or I copy and paste the terminal output and tell ChatGPT to translate it into English. If you have not yet experienced life apart from Windows, you will find that if you just jump out of that window and take a walk with the penguin into the rabbit hole, there is a whole other world down there, vast and beautiful, that is so free. If you take the plunge, you will find the true meaning of freedom in the PC world, and eventually come to the conclusion that you never knew you had been locked up for so long behind that window that was preventing you from seeing what else is out there, and all you had to do was open it and not be afraid to jump out. lol
There is surely money to be made by creating 'easy' installers for (would-be) popular applications like this that comprise a lot of dependencies. But it'd be a lot of work, and a totally different expertise from giving photos an animated face.
The video of a "girl rotating" at 4:10 is a Vermeer painting that someone has used AI to animate.
I could fix the lip sync of the Teenage Mutant Ninja Turtles movie with this!
I can think of a hundred ways to use this creatively but i have only got 4GB VRAM.
@@eccentricballad9039 Use Google colab👀
You mean the original 1990 movie? Never noticed
I want to replace young Jeff Bridges and CLU in Tron: Legacy!
heroic af!
This is going to be so great for video editing. Instead of having to animate characters' facial expressions by hand, we could just use this software.
yes!
What do you recommend for animation software?
Really impressed and highly amused by the facial-expression acting.
I have a basic laptop without an NVIDIA graphics card. Can I use this as well?
Maybe. The bulk of the computational work is on the server side.
It works very well when the source photo and input video are at the same angle, but there is obvious warping when the angle is different. Best to keep the angles the same.
Thanks for sharing
thanks man! you deserve much more subs and likes :]
Thanks!
The only catch is...
Proceeds to list a thing that 99.99% of us won't be able to get round
what is that thing??? o_O
@@sickvr7680 You have to have a Chinese phone number
@@sickvr7680 Pay for the subscription and graphics card. That's enough to raise 5 children in Africa.
Stop being lazy!!!
@@fzigunov Lazy? I wouldn't know how to get a Chinese phone number, would you? And that's before jumping through the myriad hoops to get it installed.
Makes me cry that my oldest friend, my computer, can't run good stuff like this.
Amazing feature 🤩 Greatly appreciate you doing this super simple video guide on how to use this tool. Game changer! Thanks so muchhhh
enjoy!
How about using videos instead of images as the source file, just like the samples you show? Please show us how we can do that as well, thanks. Anyway, I have successfully installed this on my computer using a Python-only env. And you're right, it generates very fast, unlike other video generators such as Hallo, which I have installed as well.
glad you got it to work. they will release the video feature soon github.com/KwaiVGI/LivePortrait/issues/27
Everything was going well until I got to the Gradio interface; I get "No module named 'torch'". Please help.
This is gonna be a nightmare soon enough ...
Yeah, one of my favourite (and somewhat fringe) concepts is that "everything comes true in the end", because the context around it changes.
My favourite example of this is the "primitive tribes" in 1970s National Geographic Magazine being afraid of cameras because they thought cameras could steal your soul.
Well.
Here we are. We are within days of there being a browser extension that, with a single click, can superimpose any photo into any porn video... take any YouTube video of anyone and turn it into a kind of voodoo doll or golem that can be made to perform any action imaginable, including ringing up your family, friends, and enemies and doing such a perfect impersonation of you that it is actually more realistic than you are yourself.
In a way, the algos already have voodoo dolls of you... a lifetime of clicks and comments, etc., rows in a database tied to a single user_id.
I think "text" was a massive, massive revolution in what it meant to be human, because it collapsed the time dimension: memories became something that took zero energy to maintain. I think AI is a process towards collapsing some other dimension, although I've yet to figure out what it is, and I might of course be talking bollocks.
@@nicktaylor5264 i want what youre having
@@nicktaylor5264 Beautifully written. I too need my overlord, my one true leader, a god not to worship but to follow: the basilisk, one of our own making.
@@nicktaylor5264☠️☠️☠️
Sure, if you have no idea about tech. This AI is amazing and it's only gonna get better.
Thanks to your very detailed, patient, step-by-step instructions, I was able to generate my own live portraits.
My results are not as perfect as the examples, but amazing nonetheless. Thank you! Thank you! Thank you!
People with autism are unable to read emotions from facial expressions like normal people. This technology can help them a lot: you can exaggerate or magnify facial expressions so they can understand you and communicate effectively.
For example, a kid can understand whether his mom is mad or not.
very interesting use case. thanks for sharing!
I don't think it would be feasible to get it done in real time
"like normal people".
It's been a while since a comment made me feel so "abnormal" 😐
It may be simpler to use AI to tell them what emotions someone is showing.
@@KryzysX We already have real time face trackers, real time deepfakes, and we are definitely not that far off from getting real time generative AI of this quality.
Any idea why I am getting this error? Everything else worked up until this point:
C:\Users\funky\Desktop\liveportrait\LivePortrait>conda create -n LivePortrait python==3.9
'conda' is not recognized as an internal or external command,
operable program or batch file.
I have the same issue
same issue here
Check which version of conda you need and install it. 'conda' not being recognized means Windows can't find the conda program it needs to run the app.
Add .18 to the end:
conda create -n LivePortrait python==3.9.18
I almost missed it myself, but it's in the text at the bottom of the screen, correcting the command from GitHub.
You didn't set your env PATH correctly. You can fix that, or manually cd into conda's folder before issuing the commands.
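As a rough sketch for the current cmd session only (assuming Miniconda went to the default %UserProfile%\miniconda3; adjust the path if you installed it elsewhere):

rem check whether Windows can find conda at all
where conda
rem temporarily append the usual Miniconda folders to PATH
set PATH=%PATH%;%UserProfile%\miniconda3;%UserProfile%\miniconda3\Scripts;%UserProfile%\miniconda3\condabin
conda --version

For a permanent fix, add those same folders to the PATH variable under Environment Variables in Windows settings, then open a new cmd window.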
Absolutely incredible!
yes!
THIS IS INSANE... all the wannabe Yahoo Boys from Nigeria say hi. Your work is so easy now haha
But it looked completely unnatural to only move the head. The shoulders stayed perfectly still throughout.
At 3:58 you start talking about how you can use Live Portrait not just on stationary images but on moving videos too. Yet in none of the examples at the end did you show how to use it on videos. I have tried uploading videos, but they are not a supported format.
What is going on?
I subscribed to you when you had 2K subscribers, and today you have 150K. Bro, you totally deserve it, and your content is worth much more than 150K. Soon you will reach 1M. Love you bro, from India ❤
That's it, I'm never ever gonna believe what I see online or on digital media anymore.
Yep, this install process is an effn nightmare, and I have Windows 10, Python, and Linux Mint installed under Hyper-V.
One thing is for sure: AI projects have the absolute least intelligent user interfaces and installation methods. It's almost laughable how universally bad and fragmented they all are, and how none of them talk to each other.
Who would release software with this many environment dependencies? Once he got to editing a path to tell Windows where conda was... I was like, I've been down this road; editing the env PATH variable never works for me. This is a fail.
I'm right there with ya. Someone needs to have a serious talk with these software devs about simple user interfaces and self-installing programs. I'll get excited when the interface says "Drop target face here", "Drop source video here", and then has a big red button that says "GO". Until then: "Gee whiz, that's interesting."
I agree. I followed the instructions, but I hit a wall when my computer couldn't find git. Very cool program, but I will have to wait for it to be simplified.
The problem is that this usually happens only when someone monetizes it with subscription fees.
@@BT-vu2ek this would take the devs wayyy too long to do
For starters, the people creating those demos/projects aren't UI designers, nor do they create the demos for widespread or commercial use, or even with the end user in mind at all. They create them as part of their scientific papers and studies, meaning what you get is a somewhat beautified version of their messy lab experiments. The fact that they then release those projects as open source for everyone to use is something extra we should be grateful for, not act entitled about and complain that they didn't make a super-easy, no-code UI version for the most braindead of users.
Of course the installation complexity varies, but I wouldn't say _any_ of those AI projects' demos are especially difficult to install. The vast majority of such projects are in Python, so once you get the hang of it, it gets easier. Having a Python environment manager (Conda/Mamba, etc.) and git preinstalled is usually half of the work. Besides that, if there's something specific you need help with, just open an issue on GitHub and ask; people will usually answer you, as long as your question isn't "please hold my hand throughout the whole install process".
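For what it's worth, once conda and git are installed, the generic flow for a repo like this one is short. A sketch (the env name and Python version are the ones shown in this video; app.py as the Gradio entry point is an assumption):

git clone https://github.com/KwaiVGI/LivePortrait.git
cd LivePortrait
conda create -n LivePortrait python==3.9.18
conda activate LivePortrait
pip install -r requirements.txt
python app.py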
Lol. Yeah, I love this stuff, but unfortunately, I'm like you. I sit out the first few months of new stuff now, waiting for some paid site to hopefully pick it up, and then I just give them my money. Lol.
FaceFusion is the best choice since it's a Stable Diffusion extension.
love your videos, keep up the good work :D
Thanks!
I literally thought, "This would be amazing if it could only do animals too", and 30 seconds later.... he shows it doing animals too. Can't wait to try this one out.
have fun!
99% of the people liking this will never be able to get it working, even if they try. They like it immediately after watching YouTube videos and never do anything with it.
I sadly agree with you. I think it's the way he presents the information. For example, he starts off saying "do this quick thing", and then it turns into a multi-step process that should have been explained in another video. Saying something is quick, and then expanding it into something that many people would feel is not quick, will ultimately turn people off.
If you prep them ahead of time that it will be a daunting task, they will mentally be ready for it, or will watch the video when they have the appropriate amount of time.
I'm not even trying. The use case is still very limited (the driving video needs to be really clear, with minimal shoulder movement), but getting it working is actually pretty 'simple' if you already know how to use ComfyUI and have used InsightFace before. At the bottom of the GitHub page you can find the community resources you can use to get it working with ComfyUI.
I actually got this working in ComfyUI, but my results aren't that flash, unfortunately. Might have to try this in a venv... 😕
It's too technical imo, like this would take ages to dissect
@@biggerbitcoin5126 lmao he literally gave you a step by step, most technical things don't go anywhere near as far as he went to describe how to do it. Are you able to understand basic English? Do you have comprehension skills? This isn't even a technical vs non-technical issue at this point.
3:16 she is just absolutely incredible. This easily exceeds cartoon animations
*edit* alright found her: rayray facedancing facial expressions
I love that you walk through the installation. Thank you!
you're welcome!
Very thorough tutorial and a very good project to cover!
Thanks!
Why are they using Frank Tufano’s image? 😂
Thanks for the tutorial... I was finally able to install Miniconda without any problems.
I know a few people that I wish I could set their Target Lip Open Ratio to zero. Just sayin'.
How did you do the video example? I don't find an option to use a source video.
They haven't released it yet. Will keep you posted
@@theAIsearchThanks! Where do they communicate the updates? On Github?
@@theAIsearch Thanks man, tried the pic and it came out well. If you edit it in a cool way, just animate it after lol
uwu
😃
Great demo! Curious as to your CPU / GPU / ram configuration that you ran this on?
Thanks. RTX 5000 Ada, 16 GB VRAM. CPU is an Intel i7, but I don't think that matters.
And if I need to transfer a facial expression not to a video but to a photo, how can I do this?
This is awesome. AI is really advancing fast.
Too fast
Bro, I think you should also do basic tutorials about Python, pip requirements installation, Anaconda, Git, and the basics of using AI locally, so at least people can solve their errors while running it. On your PC everything is already installed, and here, after pip install requirements, it's saying torch isn't available.
I tried to upload a video instead of a picture, but it doesn't allow any extension other than a photo extension. But you showed in your video that it is also possible to add a video 🤔
I hope someone else will show us how to do it. I want to know how to do that as well.
they will release the video feature 'in a few days' github.com/KwaiVGI/LivePortrait/issues/27
3:15 The images don't follow the driving video's eyes at all. It cannot reproduce cross-eyed or side-eye looks at all.
You guide like a god. Thanks for the instructions.
no problem
I followed the steps exactly, and everything was fine until the 12:13 mark. CMD still told me 'conda' is not recognized as an internal or external command, operable program or batch file.
Opening CMD and doing conda --version showed that it was installed, though.
open a new cmd and try again
Lots of steps 😢
Pinokio AI >> one click installer
Very cool, thank you for sharing ❤
Too bad it won't work on VIDEOS or Multiple Faces like in their examples.
Great video! Just wondering how you use a video as the source and a video as the driving input; I only see an image option for the source. If you can, let us know. Thanks!
it will be released soon: github.com/KwaiVGI/LivePortrait/issues/27
Very cool. Imagine watching a foreign movie where the lips move to match the translated dialogue. Or a video game sequence.
Hi, I am searching for a model for ComfyUI or other local AI software where I give an image + an audio file and it gives me a video of a speaking avatar.
Do you know any model for this? Can you make a tutorial for that?
It's not a ComfyUI node, but try Hallo: ua-cam.com/video/rlnjcRP4oVc/v-deo.html
@@theAIsearch Thank you so much, this was very good. But for 10s of output it takes, I think, 15-20 min.
Do you know any faster version? At least lip sync with a little facial movement, but faster (for example, a 1 min video in 15 min).
BTW, I really enjoyed that video. You teach the installation step by step, and it was a really good tutorial.
This tutorial was great, thanks. Question: is there any tool (like this one, to run locally) where you upload an MP3 voiceover and it generates the mouth and eye movements, to use later in this process? Thanks!
yes, is this what you're looking for? ua-cam.com/video/rlnjcRP4oVc/v-deo.html
All that's left is to make this work in real time and in holographic form. Thank you for the video; it's beginner-friendly.
that would be cool!
After the conda activate LivePortrait step, I get the following error:
ERROR: Could not install packages due to an OSError: [WinError 206] The filename or extension is too long: 'C:\\Users\\Dave\\Desktop\\roop-unleashed-main\\roop-unleashed-main\\installer\\installer_files\\conda\\envs\\LivePortrait\\Lib\\site-packages\\onnx\\backend\\test\\data\\node\\test_averagepool_3d_dilations_large_count_include_pad_is_0_ceil_mode_is_False\\test_data_set_0'
I assume this is due to another program (roop-unleashed); any way of sorting this? Both folders (roop-unleashed and LivePortrait) are saved on the desktop.
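Two things that usually clear [WinError 206], as a sketch rather than a definitive fix (the registry tweak needs an admin cmd and a sign-out or restart; C:\envs is just an example short location):

rem enable Win32 long paths (run cmd as administrator)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f

rem or create the env at a short path, outside the roop installer tree
conda create -p C:\envs\LivePortrait python==3.9.18
conda activate C:\envs\LivePortrait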
Wow, you always present great advances in AI and open source. I am very grateful for your channel; I will try this soon.
Thanks!
Wow, great stuff. This is the next step I was waiting for. All the puzzle pieces are falling together... moving characters and then making them talk or sing. It just all needs to come together in one platform to combine different features to create movies or music videos or virtual avatars. Thanks for sharing! EDIT: Can you make this work with 16:9 ratio images? I see a lot of lip-sync programs that are just square.
Still not mapping much of the eye expression. Not sure if it's a setting or something, but when the source goes cross-eyed, it's not conveyed at all in the result, and the expressions are much more muted than the source. Hopefully this is configurable.
Exciting functionality! I'm gonna do a deep scan with some security tools like Wireshark and such. I suggest y'all do the same, just in case. There are quite a few repos with zero-day backdoors that specifically target Windows. Happy prompting, y'all.
How do I create a file, like an exe, to just open it later? I could use it the first time, but once I closed the tab and cmd I wasn't able to open it again...
You could create a .bat instead and run commands there.
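A minimal sketch of such a .bat (assumptions: a default Miniconda at %UserProfile%\miniconda3, the LivePortrait env from the video, the repo cloned to your Desktop, and app.py as the entry point; adjust paths to match your setup):

rem launch_liveportrait.bat (hypothetical launcher)
@echo off
call %UserProfile%\miniconda3\condabin\activate.bat LivePortrait
cd /d %UserProfile%\Desktop\LivePortrait
python app.py
pause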
Maybe auto-py-to-exe
500 subscribers completed ❤❤❤.. thanks ❤❤❤
If you download this, I'm guessing you can use it even without internet access, correct?
yes
Can you make a video on your hardware? That setup you have looks cool
I just want to say you are the best, thx a lot dude
Dell and Nvidia, huh? Was one of those the Chinese friend that gave you the code? :D
Thanks! It's absolutely awesome as nodes in ComfyUI!
oh, is there a node for this already?
@@theAIsearch Yup ^^
I've seen tons and tons of songs with these, I think even during the pandemic LOL. I didn't know it was this accessible.
The quality of the output is impressive.
Were you able to run it on a group photo to animate multiple faces? I was unable to.
@@abhishekpatwal8576 I didn't try that
What is the output resolution of these videos?
When the image is still, it works fine. When it zooms in or out, you can already see the image getting smaller and larger hahaha
Does it work only with a GPU? Because torch gives me some problems with the installation.
Thanks!! Now video-to-video is supported. Can you update this guide please? Is it better to do a clean install, or just re-download the entire repository?
When is video-to-video generation coming?
they will release the video feature 'in a few days' github.com/KwaiVGI/LivePortrait/issues/27
Help! After entering "conda activate LivePortrait" I only get the prompt back, without the "(LivePortrait)" at the beginning of the line. Where do I start looking for the error?
Ah, never mind, Windows restart helps. Now I need to manually install dependencies that were not in the requirements.txt...
Can you do it in real time? There is a cam icon under the video input. What happens when the face turns away from the camera?
Wah cool!!!! Can you do Xi?
How can I run LivePortrait if I'm using an AMD GPU?
Tutorial for Google Colab, please.
Amazing demo! What are the potential security implications of using AI deepfake technology like Live Portrait? Are there measures in place to prevent misuse?