to render without missing frames, disable "real-time" in file source
Great, thanks!
CAN'T wait to see real actors' faces in games! 😲😲😲
One other thing I forgot to mention: after rendering all the sequence images, hit the x next to "save sequence" in DeepFaceLive, or the previous files will be overwritten the next time you load a different video file
Thank you bro, I can finally have Indiana Jones
David Kovalniy - Alexei Navalny (RIP)
I am so happy you're doing more videos!!
It's faster probably because of the FPS, try matching the fps in the composition to 25/29.97/30 or even 15. Idk what it's like, didn't try it yet.
Tried that already, didn't seem to work (although I'm sure it must be fps related)
It likely has something to do with the compression of the original compared to your custom file. Grabbing frames from a video and creating stills is essentially the same thing as compressing a video file: you ignore duplicate chunks of data to save space and only pull the unique still images/data. But those frames are not extended and blended back together into footage of identical length. Instead, it's put back together more like a handmade flipbook scene, where each frame still holds the same value in terms of fps but the ignored data is gone. So the scene appears to play faster, even if it looks the same to the naked eye. It's really just missing chunks.

When you compress a video, the encoder does the work of adjusting the speed in the final product for you, correcting the fps each time chunks are removed. That's why compressed videos are always seamless; the only thing that changes is the fuzziness or color distortion, and that depends on how many chunks you remove. The larger the file and the more it's compressed, the fuzzier the image gets.

Deepfaking is not as seamless because rather than correcting the fps where duplicates are removed, you're only changing the overall playback speed. That helps, but good eyesight can still detect missing frames. Deepfaking is more or less sloppy manual video compression with edited faces, if that helps. If you're good at video editing, you can use pro tools like Premiere/After Effects to blend the stills better than apps like this, if you want genuine-looking fakes; if you're just having fun playing with faces, this is perfect. It IS fps related, but it's not as simple as just changing the fps, since different amounts of data are ignored in different places. Unless you can specify fps per set of frames instead of for the whole project? That's why I recommend those apps.
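The speed-up described above can be sanity-checked with a little arithmetic: if near-duplicate frames are dropped during extraction but the remaining stills are still rendered at the source fps, the clip gets shorter. A minimal sketch (the frame counts and fps values are made-up illustrative numbers, not taken from any real project):

```python
def playback_seconds(frame_count: int, fps: float) -> float:
    """Duration of a clip when frame_count stills are played back at fps."""
    return frame_count / fps

# Illustrative numbers: a 10-second source at 30 fps has 300 frames.
print(playback_seconds(300, 30.0))   # 10.0 seconds

# If 60 near-duplicate frames are dropped but the sequence is still
# rendered at 30 fps, the clip plays back shorter (i.e. "faster"):
print(playback_seconds(240, 30.0))   # 8.0 seconds

# To restore the original duration you would lower the fps to match
# the reduced frame count, which is what an encoder does for you:
corrected_fps = 240 / 10.0           # 24.0 fps
print(playback_seconds(240, corrected_fps))   # back to 10.0 seconds
```

This is also why a single global fps change can't fully fix it: if different numbers of frames are dropped in different parts of the clip, each section would need its own corrected fps.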
Can I use a custom face that's not from that list?
It'd be mildly cool if you gave me credit when using my models. I made them public so idc what you use them for, but there's no indication in any of your videos of who made the models or whose Google Drive you're linking to.
Sorry, I had no idea you made these face models. I assumed the Google Drive link was just a general repository for people to add models to. DeepFaceLab itself seems to be made by a whole bunch of people (going by the credits), and this particular model is included with it, not from the Drive link. I always credit people in my videos (hence I would just add the credit "DeepFaceLab" to the deepfake videos); in this case it just wasn't clear to me that there was a single person to give credit to. But anyway, I've amended all the videos featuring deepfakes to include a link to your channel. Thanks for the great work.
@@Monoville I didn't make DeepFaceLab (the creator Iperov is from Russia), but I made all the models in that Google Drive folder, and made it public so people could play with the live models. Thanks for updating the description. I enjoy your content btw.
Where do I put the .dfm files from the drive?
DeepFaceLive_NVIDIA (or whichever version you have) > userdata > dfm_models