DeepFaceLab 2.0 Faceset Extract Tutorial

  • Published 27 Oct 2024

COMMENTS • 195

  • @maxer167
    @maxer167 3 years ago +16

    Even with no interface, it's the easiest one to use. Thanks for the tutorials as well.

    • @Deepfakery
      @Deepfakery  3 years ago +4

      If you're looking for a GUI, try MachineVideoEditor: github.com/MachineEditor/MachineVideoEditor
      There's a bit of a learning curve, but once you get it set up there are some amazing features.

    • @maxer167
      @maxer167 3 years ago +1

      @@Deepfakery thanks a lot for the link, sir

  • @bradcasper4823
    @bradcasper4823 8 months ago +3

    I have to admit that your tutorials are amazing and very simple to follow, because you explain most of the things you show. I think it could be even better if you could explain the following:
    13:30 I don't understand this part. Some of those images on the left (source) seem OK and similar to the destination (right), at least to me, so why delete them if they are OK (I mean 00661, 00653.png)?
    Could you explain more what you mean by "destination range"? I don't get it. Or could you provide some resources for understanding what it is?
    Thank you again!

  • @LukmanHakim-np2fk
    @LukmanHakim-np2fk 1 year ago +1

    Thanks for the tutorial, it's really great. I have a question: can I use only images for the data source?

    • @Deepfakery
      @Deepfakery  1 year ago

      If you mean that you want to use only images (not video), then yes you can. I mention in the tutorial that you should put the files into the data_src folder, then extract the faces. You will still need many images though. If you're looking for something that uses only 1 image to deepfake, then that's a different story. I believe DFLive has this capability.

    • @LukmanHakim-np2fk
      @LukmanHakim-np2fk 1 year ago +1

      @@Deepfakery Thank you for the answer. Which .bat file should I use to extract faces from the images?
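The images-only workflow described above can be sketched roughly like this (a minimal illustration; the `stage_images` helper and its arguments are mine, not part of DFL — it just copies stills into data_src before you run '4) data_src faceset extract'):

```python
import shutil
from pathlib import Path

def stage_images(photo_dir: str, workspace: str) -> int:
    """Copy still images into workspace/data_src so the
    '4) data_src faceset extract' .bat can pick them up."""
    src = Path(workspace) / "data_src"
    src.mkdir(parents=True, exist_ok=True)
    count = 0
    for img in sorted(Path(photo_dir).iterdir()):
        # Only copy common image types; skip anything else.
        if img.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            shutil.copy2(img, src / img.name)
            count += 1
    return count
```

After staging, the extract .bat is run exactly as it would be for frames extracted from a video.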

  • @antoniomorales7103
    @antoniomorales7103 3 years ago +6

    Hi DeepFakery, thanks for the tutorial. I've been doing short deepfakes and you learn quickly, so I'm very happy. But the overlaid faces look a bit blurry; is there a parameter to tweak so they don't look too fake? I liked Luper's video; are you going to do an advanced video tutorial? Thanks.

    • @Deepfakery
      @Deepfakery  3 years ago +10

      There's so much that goes into making it look real. If you're using Quick96 it will probably never look good. You need to use SAEHD, XSeg masking, and a lot of training time.

    • @antoniomorales7103
      @antoniomorales7103 3 years ago +3

      @@Deepfakery Thank you, we will try

  • @Li776ghv
    @Li776ghv 1 year ago +1

    I accidentally ran 4) "data_src faceset extract" while already having a folder with aligned faces. I exited the .bat, but now the folder is empty. Is it possible to get the images back?

    • @Deepfakery
      @Deepfakery  1 year ago +1

      No, DFL will sometimes dump files if they will interfere with the process.

  • @tomaskareem7717
    @tomaskareem7717 1 year ago +1

    How do I reuse a previous faceset?

  • @Deepfakery
    @Deepfakery  3 years ago +6

    📌 How to deepfake for beginners - ua-cam.com/video/lSM-9RBk3HQ/v-deo.html

  • @patrickhwang3217
    @patrickhwang3217 1 year ago

    Does that mean that extracts with a clear face but rotated, similar to the bottom-left photo, should be removed from the DST set? Or do those just need to be adjusted in the manual alignments? (Talking about the cleaning of the DST face extracts around 12:20, for the example images shown.)

    • @Deepfakery
      @Deepfakery  1 year ago

      If it's rotated that much it's probably a bad alignment, which can be seen in the debug images. So for the DST faceset it will need to be fixed manually. Check out MachineVideoEditor; it has advanced alignment tools.

    • @patrickhwang3217
      @patrickhwang3217 1 year ago

      @@Deepfakery I'm really sorry, but one more question. I'm cleaning the DST aligned_debug folder. I noticed the guide and your video say to remove the images (I assume that means delete), but my debug folder also contains many errors (non-face frames). Am I supposed to remove those as well? Won't they come back in the re-extraction?

  • @bradcasper4823
    @bradcasper4823 8 months ago

    12:34 So is it only necessary to delete debug images in order to get rid of bad dst frames?

  • @felipevilu5351
    @felipevilu5351 1 year ago

    HELLO!
    Should I clean images of people eating out of data_dst, or should I keep them for the AI to learn?
    Thank you very much for this content, it's awesome

    • @Deepfakery
      @Deepfakery  1 year ago

      Well, in general you need to keep all the dst images or they will be missing in the final video. Doing an eating scene is a little more advanced, so it will require extra work. Using just DFL, the only thing you can do is mask out the obstruction and try to have similar images in the src faceset, which could be tough. If you really want to do this one, I would also take a look at MachineVideoEditor. It will allow you to fix bad alignments on the mouth and such. www.deepfakevfx.com/downloads/machine-video-editor/

    • @felipevilu5351
      @felipevilu5351 1 year ago

      @@Deepfakery If I do that, do I have to set a mask for every frame of the eating scene?

    • @Deepfakery
      @Deepfakery  1 year ago

      Not all of them, but several. It depends on how much the shape changes. Make sure to have plenty of masked faces without obstruction as well.

    • @felipevilu5351
      @felipevilu5351 1 year ago

      @@Deepfakery Thanks for the answer.
      So I started using MVE and I'm having some trouble with the dst file.
      There are a lot of frames with faces that weren't recognized, so I started to set the faces manually, frame by frame. But those faces aren't being saved in the aligned folder, and I'm not sure the model will be merged onto those frames. Plus, it will take me forever to edit every frame of the dst video.
      Do you have any guide, or any tip that could help me here?
      PS: Sorry for the big question.

    • @Deepfakery
      @Deepfakery  1 year ago

      It's a bit confusing for sure. There's a wiki and some videos on the GitHub.
      What I do is extract the frames and faces in DFL, do the initial cleanup, then open the workspace folder in MVE. Open the aligned images, then on the Detection Management tab (right-hand menu), under Image Information, select Face, choose the Parent Frame Folder (the frame images) and 'Set faces to parent frames', then click Import Face Data. Hit Save when done. Now go to work on the frames; the faceset is basically irrelevant once you've imported the alignments. Open the Frames and you will see all the aligned faces placed on the frames. From there I use (right-click on a frame with no face) 'Approximate face from neighbors', or copy an alignment from one frame to another. You can also edit the individual points on the face. It's a long process with a lot of trial and error, but at the end you should have alignments for (hopefully) all of the frames. Finally, you have to re-extract the entire faceset on the Detection Management tab, but this time select Image Information from Video Frame.

  • @MPSystem
    @MPSystem 3 years ago +1

    Hi, if the face in data_dst never smiles, is it better if there are no smiles in the data_src face either?

    • @Deepfakery
      @Deepfakery  3 years ago +1

      Yes, the more similar the faces are the better. If you have a lot of source images then you can remove the ones that are different from the dst, but it's also good to have variety.

  • @1hitkill973
    @1hitkill973 1 year ago

    Question. I noticed that on my system, FaceSwap could extract faces much faster than DFL. But DFL couldn't use those aligned faces and returned an error. I've heard that DFL marks those aligned faces with some metadata, but I'm not sure which metadata I'm supposed to be looking at. Is there a way to use the aligned faces from FaceSwap in DeepFaceLab? Thank you.

  • @princepurohit8028
    @princepurohit8028 2 years ago +1

    Hi Deepfakery, I have a question. DeepFaceLab runs very slowly on my laptop; it takes me almost a day for faceset extraction and 5 days to complete 40,000 model iterations. Is that normal, or is there something wrong with my laptop?

    • @Deepfakery
      @Deepfakery  2 years ago

      How many frames are you extracting? When training, are you using CPU or GPU?

    • @princepurohit8028
      @princepurohit8028 2 years ago +1

      @@Deepfakery I was extracting 4000 frames and was using the CPU

  • @izzygenie6312
    @izzygenie6312 2 years ago +1

    Which one is the one you apply the expressions to?

    • @Deepfakery
      @Deepfakery  2 years ago

      The source is the face you want to use; the destination is the clip you want to put the deepfake on.

  • @Theone1001
    @Theone1001 2 years ago

    Hi there, great tutorial.
    In the video you show how to extract faces from the src video and the dst video.
    Do I have to extract the source face from a video every time? Is it possible to re-use previously extracted faces and just copy them into the data_src folder?
    When I copy the aligned faces from a previous extraction into the data_src folder, the training doesn't seem to work.
    I guess I don't understand where the training takes its resources from, or do I have to extract from a source video every time?

    • @Deepfakery
      @Deepfakery  2 years ago

      Make sure you put the aligned images into the 'data_src/aligned' folder. The data_src folder is for the extracted frames.

    • @fatloss_pushit
      @fatloss_pushit 2 years ago

      Thanks for your reply. Do I have to move them to the aligned folder manually?

  • @seattleloud
    @seattleloud 1 year ago

    Hi! I have a problem with the alignment of some dst frames. The batch file program hardly recognizes the shape of the face because there are many frames where it blends into the background. Also, there are many moments when the face is in profile. So I proceeded to do it manually, but the tool is very poor for those faces. It's literally a front mask that doesn't adapt to the shape of the face I want to align. What could I do? Thank you!

    • @Deepfakery
      @Deepfakery  1 year ago

      "It's literally a front mask that doesn't adapt to the shape of the face I want to align." - Is this using '5) data_dst faceset extract MANUAL'? Did you right-click? It toggles between automatic and static alignments, but it should start out on automatic and look for the face wherever the mouse pointer is.
      Using just DFL, you can do the "re-extract" process, which is somewhat tricky. Go to data_dst/aligned_debug and remove all the bad images. Then run '5) data_dst faceset MANUAL RE-EXTRACT DELETED ALIGNED_DEBUG' and it will load only the frames with deleted faces. Here you can re-extract, but it's basically the same tool.
      The best solution is probably to use MachineVideoEditor. Among other things, it allows you to manually edit, copy, and even approximate the face landmarks. github.com/MachineEditor/MachineVideoEditor
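The bookkeeping behind that re-extract step can be sketched as follows (illustrative only; `frames_to_reextract` is my own name, and I'm assuming the aligned_debug images share their parent frame's filename, as shown in the tutorial — the frames whose debug image you deleted are exactly the ones the MANUAL RE-EXTRACT .bat reloads):

```python
from pathlib import Path

def frames_to_reextract(frames_dir: str, debug_dir: str) -> list:
    """Return the frame names whose aligned_debug image was deleted,
    i.e. the frames the manual re-extract pass would load."""
    frame_stems = {p.stem for p in Path(frames_dir).iterdir() if p.is_file()}
    debug_stems = {p.stem for p in Path(debug_dir).iterdir() if p.is_file()}
    # Frames with no surviving debug image are the ones marked for redo.
    return sorted(frame_stems - debug_stems)
```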

  • @fabriziohasselbach5735
    @fabriziohasselbach5735 2 months ago

    I have a little problem. When using "data_dst faceset MANUAL RE-EXTRACT DELETED ALIGNED_DEBUG", how do I mark/extract more than 1 face per frame?

    • @Deepfakery
      @Deepfakery  2 months ago

      Sadly you can't do that.

  • @Alk942
    @Alk942 1 year ago

    When I start training, can I add more images and videos during the training, or will I have to start the whole process again?

    • @Deepfakery
      @Deepfakery  1 year ago

      Yeah, you can add more images and train longer. If you're using SAEHD/AMP, make sure to enable random warp.

  • @lilillllii246
    @lilillllii246 1 year ago

    Is there any way to use only the movement in the video, but change the clothes and background?

  • @nikkodaymielsilang2029
    @nikkodaymielsilang2029 2 years ago +1

    I'm making a deepfake, but the face is always the top layer. Objects passing in front of the face still end up behind it; nothing goes over the face. What can I do?

    • @Deepfakery
      @Deepfakery  2 years ago +1

      You need to use XSeg to create a mask. You can apply the generic XSeg mask or make your own.

    • @nikkodaymielsilang2029
      @nikkodaymielsilang2029 2 years ago

      @@Deepfakery Thank you! But which is better: creating my own or using the generic XSeg mask?

  • @paulgeorge9228
    @paulgeorge9228 2 years ago

    I use SAEHD for WF at 128 resolution. I saw on the MrDeepFakes guide not to use 128 resolution (too low). I'm already at 53k iterations; do I have to start over, or is it OK?
    Also, could this explain why my iteration time is so high (~8000 ms)?

  • @a.p.7383
    @a.p.7383 3 years ago +1

    I always get:
    "Traceback (most recent call last):
    File "pathlib.py", line 1248, in mkdir
    File "pathlib.py", line 387, in wrapped (...)
    During handling of the above exception, another exception occurred:
    Traceback (most recent call last):
    File "multiprocessing\process.py", line 258, in _bootstrap
    File "multiprocessing\process.py", line 93, in run"
    Whatever I try, no matter what version of DeepFaceLab I try, it keeps happening, and nobody on Google has this issue. I'm giving up lol

  • @masterji999
    @masterji999 10 months ago

    I got an error in train Quick96; the error is "DLL load failed: The paging file is too small for this operation"

  • @canfz7286
    @canfz7286 2 years ago

    I pretrained my model for 500k iterations, but then when I train normally I see a yellow/red mask instead of faces. Why did this happen?

  • @instinctisfiercenotcruel.958
    @instinctisfiercenotcruel.958 2 years ago

    OK, I have a video of Elon Musk with 2 scenes in it: one where he is on the beach, and one where he is in a building with completely different lighting. Do I need to separate the scenes into 2 different videos, do them separately, and put them in different folders?
    And can I use photos of him from Google to train, and do all of them need to be in separate folders as well?

  • @thesplaymedia9807
    @thesplaymedia9807 2 years ago

    Please help! I followed the steps but with my own videos, and the results look fake and blurry

  • @gamez1665
    @gamez1665 2 years ago

    Hi, this is a great tutorial!
    I want to ask: when I sort by hist, can I delete all of the similar images until only 1 picture remains from each group of similar images?

    • @Deepfakery
      @Deepfakery  2 years ago +1

      Yes, you should delete as many similar images as possible, while keeping in mind the subtle eye and mouth movements you might need for the fake.

    • @gamez1665
      @gamez1665 2 years ago

      @@Deepfakery Got it, thank you!
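To make the "sort by hist" discussion above more concrete: the sort orders faces by color-histogram similarity so near-duplicates end up adjacent and are easy to delete in bulk. A toy sketch of the idea (my own simplified version, operating on flat grayscale pixel lists rather than real image files; not DFL's actual implementation):

```python
def histogram(pixels, bins=16):
    """Coarse normalized grayscale histogram (pixel values 0-255)."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [c / total for c in h]

def hist_distance(a, b):
    """L1 distance between two normalized histograms (0 = identical)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def near_duplicates(images, threshold=0.1):
    """Indices of images whose histogram is close to the previous one,
    mimicking how a hist-sorted faceset clusters similar frames."""
    hists = [histogram(px) for px in images]
    return [i for i in range(1, len(hists))
            if hist_distance(hists[i - 1], hists[i]) < threshold]
```

In a hist-sorted faceset, runs of flagged neighbors are the duplicate clusters you would thin out by hand.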

  • @deeber35
    @deeber35 1 year ago

    I was trying to add images to my data_src folder, but when I try 4) data_src faceset extract, I get the error: "error in fetching the last index. Extraction cannot be continued." Any idea what that is? Do you have to use a video initially, or can you add only photos to the data_src folder?

    • @Deepfakery
      @Deepfakery  1 year ago +1

      It wants to continue extraction because there are already images in the aligned folder. I've never really seen it work correctly though. My suggestion is to take all of the images (frames and aligned) from the first extraction and move them somewhere temporarily. Then add your new images to the data_src folder, making sure none of the filenames match any from the first set. Then extract the images, and finally drop the first set of images back in. Alternatively, you could delete the aligned images and completely redo the extraction, which should be able to do all images, old and new, in one shot.
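The filename-collision precaution above can be scripted. A rough sketch (the function name and prefix are illustrative, not a DFL utility):

```python
from pathlib import Path

def prefix_files(folder: str, prefix: str) -> list:
    """Rename every file in `folder` with a prefix so a second batch
    of images can't collide with filenames from the first extraction."""
    renamed = []
    # sorted() materializes the listing before we start renaming.
    for p in sorted(Path(folder).iterdir()):
        if p.is_file() and not p.name.startswith(prefix):
            new = p.with_name(prefix + p.name)
            p.rename(new)
            renamed.append(new.name)
    return renamed
```

Running it on the second batch (e.g. `prefix_files("workspace/data_src", "setB_")`) before extraction keeps the two sets distinguishable afterward.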

  • @moldorm30
    @moldorm30 2 years ago

    So Source is the face you want to use, and Destination is the video to paste the face onto?

  • @HarleyBreakoutGuy
    @HarleyBreakoutGuy 2 years ago

    Hey guys, I have missing faces after extracting. Whenever I use manual face extract it starts from frame 0, but the missing aligned faces are frames 300 to 320. How do I manually extract just those missing aligned faces to save time?

    • @Deepfakery
      @Deepfakery  2 years ago

      For the destination video you can use the debug image process. Open the debug images and remove the frames that you want to re-extract. Then use "manual re-extract deleted aligned debug images." It'll open the manual extractor, but only for the frames you deleted.

  • @detroxlp1
    @detroxlp1 1 year ago

    What can I do if the video has many moments where objects are in front of the face?
    For example, a microphone.
    At the moment this just looks bad, because often only parts of the face are visible.

    • @Deepfakery
      @Deepfakery  1 year ago

      It will work better if you have similar obstructions in both the source and destination

  • @Ozvideo1959
    @Ozvideo1959 3 years ago

    Hi, I'm still learning DFL and I'm having a problem with face extraction; I'm wondering if you can help. After I extract faces from my data_dst video, DFL produces about half of the images oriented the correct way; after that, a model that should be upright is actually at about 45 degrees. I've tried several times and I'm getting the same result. I've checked the aligned_debug folder and the images are all OK, but in the aligned folder they are not.
    Any info would be appreciated.
    Thanks

    • @Deepfakery
      @Deepfakery  3 years ago

      Many times the side-profile images will have this type of variation. This is usually due to the inability of DFL to detect the position of the eye/eyebrow that is hidden on the other side of the face. It has to guess where the landmarks should be, which causes the direction of the face to be somewhat incorrect. There is not much you can do about this using only DFL. However, if this is happening on images where the face is looking forward, then you must have a different problem.

  • @paulgeorge9228
    @paulgeorge9228 2 years ago

    What does 4.2 actually do? After sorting the images, are they put in a certain place? Are they all renamed? Are the blurred images deleted?

    • @Deepfakery
      @Deepfakery  2 years ago

      They are renumbered according to the sort method. Use 4.2) data_src util recover original filename to set them back to the original (parent frame) filenames.

  • @mitchellmorris1213
    @mitchellmorris1213 3 years ago

    When I put a group of photos into the "data_src" folder and try to extract the faceset, I get a message saying "error in fetching the last index. Extraction cannot be continued." What do I do?

    • @Deepfakery
      @Deepfakery  3 years ago

      It sounds to me like you're trying to "continue the extraction" but with a different group of images. If so, this is likely happening because it sees some images already in the aligned folder and assumes you want to continue from the last frame you extracted. You'll want to move them out of the way for now. You'll notice in the video that I've taken measures to ensure none of the files have the same name by adding a prefix to them. Hope this helps, but if I've misunderstood your problem then feel free to add more info.

    • @mitchellmorris1213
      @mitchellmorris1213 3 years ago

      @@Deepfakery So basically I need to remove everything that isn't the still images from the aligned folder?

  • @Arewethereyet69
    @Arewethereyet69 1 year ago

    I have a src that I want to use on various destinations. What files do I need after pre-training?

    • @Deepfakery
      @Deepfakery  1 year ago

      Just save the model files and src faceset, and put in your dst. If you want to make multiple videos of the same person, you can train 1 deepfake, then use the same model but delete the inter_ files and start a new dst.

  • @Don_4511
    @Don_4511 1 year ago

    Can you please tell me how to face-swap a single image in a video?
    I mean, how do you make a deepfake when the source is a single image?

    • @Deepfakery
      @Deepfakery  1 year ago

      Try DeepFaceLive, it has a single image mode

  • @worldwideoffline6423
    @worldwideoffline6423 1 year ago

    So can I use a combination of video and photos for the dst data? If so, do I just need to add a prefix to separate the video frames from the extra images that I add?

    • @Deepfakery
      @Deepfakery  1 year ago +1

      Yes, you've got the right idea. Extract the video images and drop the photos into the data_dst folder. You can prefix the image files before faceset extraction, which will also help separate the faces after extraction if you need to.

    • @worldwideoffline6423
      @worldwideoffline6423 1 year ago

      @@Deepfakery Do all the videos and photos need to be in the data_dst folder with their respective prefixes? In other words, will they not be detected if they are in subfolders within the "data_dst" folder?

  • @zapx6480
    @zapx6480 2 years ago

    Hi, thanks for the video. Can you help please? At the moment, when training SAEHD the program doesn't start loading samples. It reaches 100% on "Initializing models", but after that there's just nothing. No errors, nothing more. I've done everything like you.

  • @paulgeorge9228
    @paulgeorge9228 2 years ago

    If my dst video's facial structure is different from my src, should I use full head instead of WF for better accuracy?

    • @Deepfakery
      @Deepfakery  2 years ago

      During XSeg, make sure the masks have a similar shape (follow the chin and hairline the same way) even though one will be bigger. Then, while training, learning rate dropout (LRD) may help, as it's supposed to keep the source face from overfitting the destination. Use it at the end of the random warp phase, and again at the end of normal training and through GAN (don't disable it at the end).

  • @johnjohnsin3762
    @johnjohnsin3762 1 year ago

    For a video that shows manual mode as the preview picture, there is little to nothing about it.

  • @aag1977
    @aag1977 2 years ago

    What if there are two faces I want to replace in the destination video? Do I need to make two separate videos? And can I use the sort function to separate the two faces from each other easily? Then, when the first one is done, I guess the result video must become the new destination video, or is there a faster way?

    • @aag1977
      @aag1977 2 years ago

      Say I want the dance scene from Frida Kahlo, but with Hermione dancing with Nancy Wheeler.

    • @Deepfakery
      @Deepfakery  2 years ago +2

      You need to make 2 projects at some point. You can make 2 videos and stitch them together in an editor, or you can do the first one and make the merged frames into the dst frames for the 2nd face. Yes, you can extract only once and separate the 2 faces, then train them separately. So do the extraction and just make a copy of the entire data_dst folder, then clean each copy up so there's only one face in each folder. Afterward you can stitch them together or do the swap.

    • @aag1977
      @aag1977 2 years ago

      @@Deepfakery Can I use sort to separate the two faces for easier, well, sorting, or can't the tool recognize that much? This process alone can take a while, even for a video of a few minutes.
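The copy-the-data_dst-folder step suggested above can be sketched like so (a hypothetical helper; the `face_a`/`face_b` folder names are mine — each copy would then be cleaned by hand so only one person's faces remain):

```python
import shutil
from pathlib import Path

def split_dst(workspace: str, names=("face_a", "face_b")):
    """Copy the whole data_dst folder once per face, so each copy
    can be cleaned down to a single person and trained separately."""
    dst = Path(workspace) / "data_dst"
    copies = []
    for name in names:
        target = Path(workspace) / f"data_dst_{name}"
        # copytree replicates frames, aligned faces, and debug images.
        shutil.copytree(dst, target)
        copies.append(str(target))
    return copies
```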

  • @cafugo_yt
    @cafugo_yt 2 years ago

    Should I use the GPU (NVIDIA GeForce RTX 2060) or the CPU (Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz)?
    I have absolutely no idea and I just started deepfaking...
    Do I have to delete all the files from my first deepfake if I want to start a new one?

    • @tomotion8524
      @tomotion8524 2 years ago +1

      The very first command is called "clear workspace". Let that run and it'll clear everything.

    • @Deepfakery
      @Deepfakery  2 years ago +1

      First of all, use the RTX 2060 and the "up_to_2080_ti" build. If you want to start over you can use "clear workspace" or just delete everything inside /workspace. You can also make a backup of any /workspace folder and just swap it in or out. You can also reuse the files for another fake, such as the same source images or the same model files.

    • @tomotion8524
      @tomotion8524 2 years ago

      @@Deepfakery Hey man, question. I've been working on a deepfake. I have 2 hours of footage, all different face angles and lighting, and the deepfake has run over 400,000 iterations, but it's still a bit blurry. My footage is pretty high quality, so I don't understand why my deepfakes are a bit blurry. How can I fix this?

    • @cafugo_yt
      @cafugo_yt 2 years ago

      @@Deepfakery Thank you💙

  • @tranchikien6383
    @tranchikien6383 2 years ago

    I used an Intel Iris Xe GPU for training; after a while it gave me the following error: F tensorflow/core/common_runtime/dml/dml_upload_heap.cc:56] HRESULT failed with 0x887a0005: chunk->resource->Map(0, nullptr, &upload_heap_data). Can someone help me?

  • @sakuraistrash2much
    @sakuraistrash2much 1 year ago

    Please help, it's showing "module not found". I don't have a good graphics card, so I only installed the DirectX 12 build. Do I also need an RTX 2080 Ti and NVIDIA drivers? It's saying "failed to load the native tensorflow runtime".

    • @Deepfakery
      @Deepfakery  1 year ago +1

      No, you only need the DX12 version. "Module not found" could be a lot of things. The first thing to check is the directory of your DFL folder. Make sure there are no spaces or special characters, like "C:/Some Folder/Some other folder/Deep Face Lab/". It's best to have it in something like "C:/DeepFaceLab_Directx12_build_XX_XX_XX"

    • @sakuraistrash2much
      @sakuraistrash2much 1 year ago

      @@Deepfakery bro thx a lot.also it takes too much of time could u deepfake a supergirl tv series porn video for me bro plz same costume as tv supergirl exact.i just need mellisa benoist face on that video.coiuld u plz deepfake me and give me ?on mega or email?

  • @HighLanderPonyYT
    @HighLanderPonyYT 1 year ago

    2:38 The extraction was completed AFAIK, but there are no images in this folder. What could be the problem? All there is is an "aligned" folder that's empty. A single mp4 file was used and the program was put in a custom folder. Drivers are up to date.

    • @Deepfakery
      @Deepfakery  1 year ago

      So it looks like you got stuck at extracting the frame images. At 2:28 you can see a frame count near the bottom of the screen. Did it take time to process the frames, or did it end abruptly with no frame count? Any other messages or errors? The first thing I would check is that the videos have the proper filename, like data_src.mp4, because it sounds to me like it didn't find the video.

    • @HighLanderPonyYT
      @HighLanderPonyYT 1 year ago

      @@Deepfakery Thanks for getting back to me!
      The process ran all the way and didn't end abruptly from what I've seen, and it returned zero errors. The file name was data_src and it was an mp4 file, although I don't have the file format visible in the file name (I doubt that matters).
      I get a huge list of frames with the frame, FPS, size, drop, and so on, several rows. Then some info about video, audio, subtitles, and so on, and it says it's "Done". No errors. I can "press any key to continue". When I do, it closes and I see no images in the src folder.
      Someone else said that it might be the folder path that messes it up; I'm going to look into that later.

    • @HighLanderPonyYT
      @HighLanderPonyYT 1 year ago

      @@Deepfakery Seems like it's putting the files onto the default drive, not where I put DFL itself. I just don't know where. The "output" section of the process specified a folder where I put DFL, not somewhere on the default drive. Any ideas?

    • @HighLanderPonyYT
      @HighLanderPonyYT 1 year ago

      @@Deepfakery Nvm, found it! It's in some odd, random HIDDEN folder. :S From what I gathered it might be an AV sandbox folder. Did my AV shove the files in there instead of their intended place? No idea. lol
      How do I go about handling these? Move them to the intended folders? Work with them from here? If so, how? Thx!
      Yup... it keeps extracting to my main drive rather than the drive I put DFL on. How do I go about specifying which location DFL should put the files in?

    • @Deepfakery
      @Deepfakery  1 year ago

      Well, normally it's going to look for the files inside the intended folder, but something weird is happening here. I'm not sure what you mean by an AV sandbox file. Antivirus? I suppose it's possible the files were quarantined. Maybe check the log? I just use Windows Defender and haven't had any problems, other than the initial warning when decompressing the exe file. Try putting them into the correct folder anyway and see what happens.
      Sometimes there can be problems if there are spaces in the path, like "C:/some folder/another folder name/Deep Face Lab/ blah...", or if the path is really long. I put my DFL in C:/DeepFaceLab/Whatever_build_name.
      The only other thing I can think of that would change the directory is if you modified the code, or are using DFL via a Python environment and have somehow set a different output path.
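A quick way to sanity-check an install path along the lines of the advice above (the allowed character set is my own guess at "no spaces or special characters", not an official DFL rule):

```python
import re

def path_is_safe(path: str) -> bool:
    """Reject paths containing spaces or non-ASCII characters,
    which DFL's .bat scripts are reported to choke on."""
    # Allow letters, digits, underscore, hyphen, dot, and separators.
    return re.fullmatch(r"[A-Za-z0-9_\-./:\\]+", path) is not None
```

For example, `path_is_safe("C:/Some Folder/Deep Face Lab/")` fails on the spaces, while a flat name like `C:/DeepFaceLab_build` passes.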

  • @itsrak6791
    @itsrak6791 2 years ago

    Sir, in a video where there are multiple faces, can I deepfake a single face among them without blurring the others?

    • @Deepfakery
      @Deepfakery  2 years ago +1

      Yes, please refer to the portion of the video about cleaning the faceset. You just need to remove the extra faces after extraction.

  • @timex9605
    @timex9605 2 years ago

    Where can I download ready-made facesets? I searched a lot to find the faceset of a certain celebrity but couldn't. What should I do now?

    • @Deepfakery
      @Deepfakery  2 years ago +1

      I'm working on a faceset repo for my new site www.deepfakevfx.com/. Right now you can find some pretrained models.

  • @robertoalcantar2853
    @robertoalcantar2853 2 years ago

    Can I use just one still image? When I try to extract, it says not enough images.

    • @Deepfakery
      @Deepfakery  2 years ago

      You can. Just drop the image into the data_src or data_dst folder (however you intend to use it) and then extract. You should end up with one face in the /aligned folder. If it doesn't work, maybe the image type isn't supported or the filename has some weird characters. Try using a jpg or png and rename it to 00001.jpg or whatever, the way DFL would name the files.

  • @worldwideoffline6423
    @worldwideoffline6423 1 year ago

    Everything made sense up until 12:58... How do I compare the two results after sorting?

    • @worldwideoffline6423
      @worldwideoffline6423 1 year ago

      What does it mean to compare ranges?

    • @Deepfakery
      @Deepfakery  1 year ago

      I meant that you need to view both facesets, side by side maybe, and see how they match up. You just have to eyeball it and decide if anything in the src can be removed, or if you're lacking some images to match the dst.

    • @Deepfakery
      @Deepfakery  1 year ago

      Suppose you have a dst where the actor is always facing slightly left. You would sort both facesets by yaw/pitch and remove src images that are further to the left, a big chunk of images facing right, and some of the up/down angles that don't match the dst images. Similarly with color and facial expressions: try to match the dst and discard anything extra, keeping just a little extra for variety.

  • @pyroswolf8203
    @pyroswolf8203 3 years ago

    Thanks for the video. I want to use 2 faces: 1 face will give the face movements, and those movements will be shown on the other face. How can I do this?

  • @menchincen
    @menchincen 2 years ago

    Thanks for the tutorial. I have a question: how can I edit aligned_debug with two or more faces? I can only select one face with data_dst faceset MANUAL RE-EXTRACT DELETED ALIGNED_DEBUG :/

    • @Deepfakery
      @Deepfakery  2 years ago

      Sadly it only allows one face. You'd have to make a separate project for additional faces

  • @dedfish7495
    @dedfish7495 3 роки тому

    How to go about applying different faces to multiple people in destination video?

    • @Deepfakery
      @Deepfakery  3 роки тому

      You'll have to make separate projects for each different face. You can extract the faces then split them up. You'll need to include the original frames in each project. However I prefer to mask out the faces I don't want and do a separate video export for each.

  • @androiddrummer8554
    @androiddrummer8554 3 роки тому

    When I'm running 'extract images from video data_src' it shows the error 'CUDA version is insufficient for CUDA runtime version' in DeepFaceLab... please help me fix it

    • @MasterHKS
      @MasterHKS 3 роки тому

      Try using CPU instead of GPU. At the bottom there's an option that says "Make everything CPU only"

    • @lupsik1
      @lupsik1 2 роки тому

      The CPU suggestion will run terribly slow.
      What you're supposed to do is check what version of CUDA you're running and what version of your GPU drivers you're running.
      Then there's a table where you can find where the conflict is :)
      For example, if you see that:
      CUDA 10.1 (10.1.105) >= 418.39
      CUDA 10.0 (10.0.130) >= 410.48
      and your NVIDIA driver is 397.36,
      then you'll know that you just need to update your drivers.
      I'm almost certain that the CUDA and cuDNN toolsets got installed correctly, so it's very likely that all you need is to update your GPU drivers.
      If it's still not working, then I would check whether both those toolsets got correctly added to PATH, which should be doable for anyone who can use Google, as it's a common question on forums.
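To lupsik1's point, the check itself is just a numeric comparison of dotted version strings. Here's a minimal Python sketch, assuming the two minimum-driver pairs quoted above (verify them against NVIDIA's own compatibility table for your CUDA release); get your installed driver version from `nvidia-smi`:

```python
def driver_ok(driver_version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '418.39' vs '397.36'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(driver_version) >= as_tuple(minimum)

# Minimum NVIDIA driver per CUDA release (the pairs quoted above - verify for your version)
CUDA_MIN_DRIVER = {
    "10.0": "410.48",
    "10.1": "418.39",
}

# e.g. a 397.36 driver is too old for CUDA 10.1 -> update your GPU driver
print(driver_ok("397.36", CUDA_MIN_DRIVER["10.1"]))  # -> False
```

If it prints False, updating the GPU driver is the fix described above.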

  • @kimkar6402
    @kimkar6402 3 роки тому

    Tell me please: can DeepFaceLab replace the face in a photo, or only in a video?

    • @Deepfakery
      @Deepfakery  3 роки тому

      Yes you could do a photo, just use the image as your destination frame and extract the single face from it.

  • @bradfilms8278
    @bradfilms8278 2 роки тому

    Hi, I'm getting the error "Error in fetching the last index. Extraction cannot be continued" when running Step 5. Can anyone help me?

    • @bradfilms8278
      @bradfilms8278 2 роки тому

      Never mind! Turns out I had some old files in the "aligned" folder. Deleting them fixed it! :)

  • @krinodagamer6313
    @krinodagamer6313 2 роки тому

    OK, so where is the final file of the trained data that I can use in another program?

    • @Deepfakery
      @Deepfakery  2 роки тому

      After merging the images you’ll have 2 png sequences in the data_dst folder, one for images and one for masks. If you merge to video afterward you’ll also get the same as video files.

  • @DDD.123s
    @DDD.123s Рік тому

    How do I re-extract the src faceset?

  • @ahmed3261
    @ahmed3261 3 роки тому

    HELP!
    Whenever I try to extract faces from data_dst or data_src,
    images are found, but no faces are found, even though I am using the same videos as in this tutorial!
    Specs:
    RTX3060 Laptop (6 GB) , AMD RYZEN 7 5800, and 8GB RAM

    • @Deepfakery
      @Deepfakery  3 роки тому

      Make sure you are using the RTX3000 build of DFL and updated GPU drivers. There are still some improvements to be made as some users are still having problems.

    • @ahmed3261
      @ahmed3261 3 роки тому

      @@Deepfakery I made sure I downloaded the right version, and everything is updated, but still can't find faces? (Tried on different videos same results)

    • @gregbjorkman
      @gregbjorkman 2 роки тому

      @@ahmed3261 do you have any updates? I'm running into the same thing this morning, and it's a very straightforward shot: one person in frame looking directly at the camera

    • @ahmed3261
      @ahmed3261 2 роки тому

      @@gregbjorkman No updates, I lost hope :( if you have any updates let me know

  • @PascalQNH2992
    @PascalQNH2992 Рік тому

    I do not have the folder called ''workspace''. My downloaded version only has two folders, one called ''_internal'' and the other ''userdata''

    • @Deepfakery
      @Deepfakery  Рік тому

      Make sure you downloaded from one of the official sources: www.deepfakevfx.com/downloads/deepfacelab/
      Otherwise try re-downloading, or extracting the archive again.

  • @katana6533
    @katana6533 2 роки тому

    I got a 'faces detected: 0' error, any ideas guys?

  • @Ak-st1wi
    @Ak-st1wi 3 роки тому

    can you please make a tutorial on how to use Xseg mask(Quick96).. plzzzzz

  • @ghelfling_bunny
    @ghelfling_bunny Рік тому

    Deepfake: is it possible to create a non-existing face by using more than one face as the replacement?

    • @Deepfakery
      @Deepfakery  Рік тому +1

      Technically, maybe. You'd need to have an even spread of all the faces and even then the trainer is going to prefer the stuff that matches the destination images. I suspect it would result in a shifty likeness that changes slightly from pose to pose. I've thought of this but haven't tried it myself. You might consider using some 'AI' tool that can take the images as input, generate a new face, then put that into the deepfake.

    • @ghelfling_bunny
      @ghelfling_bunny Рік тому

      @@Deepfakery thanks! If you try it, please let us know. It would be interesting even if it doesn't work.

  • @jivadaya6439
    @jivadaya6439 2 роки тому

    What about using still photos only? I'm trying, but it says input_file not found?

    • @Deepfakery
      @Deepfakery  2 роки тому +1

      Put the photos into the data_src folder. Skip the video extraction and go directly to 4) data_src faceset extract.
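For anyone scripting that staging step, here's a minimal Python sketch (the `my_photos` path, the workspace path, and the `stage_photos` name are hypothetical examples, not part of DFL): it copies stills into data_src using the zero-padded names DFL gives extracted frames, after which you run 4) data_src faceset extract.

```python
import shutil
from pathlib import Path

def stage_photos(photo_dir, data_src_dir):
    """Copy still images into data_src, renamed the way DFL names
    extracted frames (00001.jpg, 00002.jpg, ...)."""
    dst = Path(data_src_dir)
    dst.mkdir(parents=True, exist_ok=True)
    images = sorted(Path(photo_dir).glob("*.jpg")) + sorted(Path(photo_dir).glob("*.png"))
    copied = []
    for i, img in enumerate(images, start=1):
        name = f"{i:05d}{img.suffix}"  # keep original extension
        shutil.copy(img, dst / name)
        copied.append(name)
    return copied

# Example (hypothetical paths):
# stage_photos("my_photos", "DeepFaceLab/workspace/data_src")
```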

    • @jivadaya6439
      @jivadaya6439 2 роки тому

      @@Deepfakery Thank you. Another question: when switching to a new project with the same src but a new dst video, it says it can't continue at data_dst faceset extraction. The aligned folder has info from the previous project; do I just delete those files from the aligned folder, or do I have to clear the workspace entirely and start all over with each project? Email sent to you, thanks

    • @Deepfakery
      @Deepfakery  2 роки тому +1

      Delete everything in data_dst. DFL will rebuild the subdirectories as needed.

  • @JeremyTryit2
    @JeremyTryit2 2 роки тому

    Hey Mr. Deepfakery, can this trick be used to make any "Real Time" deepfake? Like where I can match my lips & eyes up to the person in the picture? Do you know a website where I can get pictures of normal people (not celebrities) to do this? I would like to make at least 5 deepfake people. And are you in the United States? I'm asking because I would pay you to teach me step by step how to do this! I've been researching for 3 months and really want to perfect this technique.
    Let me know if the specs on my laptop are enough to do these deepfakes or if I should get a different laptop. Thanks so much
    8th Gen Intel Core i7-8750H 6 Core - NVIDIA GeForce GTX 1060 Max-Q - 16GB RAM - 128GB SSD + 1TB HDD

    • @Deepfakery
      @Deepfakery  2 роки тому +1

      The developer has also created DeepFaceLive, which works with webcams for real-time deepfakes. You have to train the model first in DeepFaceLab then export it as DFM. As for pictures of average people, I'm not sure how you would get a lot of the same person. Maybe a stock photo site or social media? The laptop is good to start with, just make sure it doesn't overheat. If you want to get serious about deepfaking you'll probably want to build a desktop PC with the best NVIDIA GPU that you can get. I have 2 x 1080 Ti right now, looking at the RTX A6000 as an upgrade. If you need more info please check out my new site www.deepfakevfx.com. There's some tutorials and downloads already and I will be posting a full guide this week.

  • @video_photo_editing
    @video_photo_editing Рік тому

    can the hair on the head be replaced?

    • @Deepfakery
      @Deepfakery  Рік тому

      Yeah you have to use Face Type: HEAD and mask the entire head including hair. It's best to use src faceset images from a single video so that the hair is consistent.

    • @video_photo_editing
      @video_photo_editing Рік тому

      @@Deepfakery I don't know, I spent more than 2 days training (360,000 iterations) and I wasn't impressed with the result; it's better for me to do a face replacement in After Effects using Mocha AE😀

  • @yandels
    @yandels Рік тому

    I am not getting the .bat files. Is it the version I downloaded?

    • @Deepfakery
      @Deepfakery  Рік тому

      There's a link to download windows builds on the GitHub repo. My Installation Tutorial covers it: ua-cam.com/video/8W9uu-pVOIE/v-deo.html

  • @lalittak6270
    @lalittak6270 3 роки тому

    What are the minimum requirements for a PC to run DeepFaceLab, CPU only?

    • @Deepfakery
      @Deepfakery  3 роки тому

      I don't know if there are minimum requirements, but you'll do better with a CPU that has AVX.

  • @hoeepor2437
    @hoeepor2437 Рік тому

    Thanks Deepfakery, but in the 'DATA_SRC FACESET EXTRACT' process I got the error 'Error in fetching the last index. Extraction cannot be continued.'... What is the cause, and how do I fix it?

    • @Deepfakery
      @Deepfakery  Рік тому

      Seems like it is trying to continue extraction. Did it get interrupted? Are there any files in data_src/aligned? Also, did you extract the frames from video first?

  • @renderthings722
    @renderthings722 Рік тому

    Is this footage from a movie? Where did you get it?

    • @Deepfakery
      @Deepfakery  Рік тому +1

      Margot Robbie is from various films, the other stuff is from the Birds of Prey TV show

  • @mitchellmorris1213
    @mitchellmorris1213 3 роки тому

    how do you make a backup of the data_src/aligned?

    • @Deepfakery
      @Deepfakery  3 роки тому +1

      Just make a copy of the folder, it will not interfere with DFL.
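If you'd rather script that backup, here's a minimal sketch (the `backup_aligned` name and the path in the usage comment are just examples):

```python
import shutil
import time

def backup_aligned(aligned_dir):
    """Copy the faceset to a timestamped sibling folder, e.g.
    data_src/aligned -> data_src/aligned_backup_20240101.
    DFL only reads the 'aligned' folder itself, so the copy is ignored."""
    dest = f"{aligned_dir}_backup_{time.strftime('%Y%m%d')}"
    shutil.copytree(aligned_dir, dest)
    return dest

# backup_aligned("DeepFaceLab/workspace/data_src/aligned")  # hypothetical path
```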

  • @Nastyflubber
    @Nastyflubber 3 роки тому +2

    Omg so amazing now i'm a deepfake pro

    • @ekereoghenero5734
      @ekereoghenero5734 Рік тому

      Can you teach me how to make these deepfakes

    • @benvideos1853
      @benvideos1853 Рік тому

      @@ekereoghenero5734 there are many YouTube tutorials

  • @TheOne-iy8ok
    @TheOne-iy8ok 3 роки тому

    I have only 1 picture and I want to deepfake it onto a video, how can I do that? Plz help

    • @Deepfakery
      @Deepfakery  3 роки тому

      With only one picture you should try a software called First Order Motion Model

  • @MasterHKS
    @MasterHKS 3 роки тому +1

    Really helpful tutorial! I have a question: I have a model which is already trained (18,000 iterations) and I want to add 6 more images. How do I do that?

    • @Deepfakery
      @Deepfakery  3 роки тому +3

      You can add more images to the aligned folders at any time. If you need to extract more images then you have to temporarily move the workspace folder with all the original stuff, or make another copy of DFL and do it there.

    • @MasterHKS
      @MasterHKS 3 роки тому

      @@Deepfakery Thank you so much!

  • @nateeaton7729
    @nateeaton7729 2 роки тому +1

    Thanks! Solved all my problems!

  • @LolaKnight-tf1hv
    @LolaKnight-tf1hv 10 місяців тому

    Can this be used to video call someone for fun ??

    • @Deepfakery
      @Deepfakery  10 місяців тому +1

      DeepFaceLive

    • @LolaKnight-tf1hv
      @LolaKnight-tf1hv 10 місяців тому

      @@Deepfakery did you do a tutorial on how it works, please?

  • @Alter3go
    @Alter3go 2 роки тому

    Does this work with DeepFaceLive too?

    • @Deepfakery
      @Deepfakery  2 роки тому

      Yes, in fact you need to prepare the data and train the model in DeepFaceLab, then export it for DeepFaceLive.

  • @Thalaivan001
    @Thalaivan001 3 роки тому

    Is there any chance of doing it on an Android phone?

  • @marcoaureliohunter
    @marcoaureliohunter Рік тому +1

    Thank you very much for your tutorials.

  • @DansaSemesta
    @DansaSemesta 3 роки тому +2

    It's very clear, thanks

  • @krinodagamer6313
    @krinodagamer6313 2 роки тому

    whoahhhhh

  • @fubukisophicha8831
    @fubukisophicha8831 3 роки тому +1

    Can you please make a tutorial on how to use the XSeg mask (Quick96)? XSeg is the key to making the result look real.

    • @Deepfakery
      @Deepfakery  3 роки тому

      I am going to make one but the process is rather simple: mark many faces, train the mask, apply the mask

    • @hight7602
      @hight7602 3 роки тому

      I've created a video on doing XSeg masking, if you want to check it out: ua-cam.com/video/e-k-k67gf1o/v-deo.html

  • @guss2012
    @guss2012 2 роки тому

    Can you add more footage to the training if the facial expressions in the imported footage are not enough? How do you do it? Ty

    • @Deepfakery
      @Deepfakery  2 роки тому +1

      Yes, you can add more images to the faceset. You can just drop them into data_src/aligned, but you have to be careful with filenames. The best way is to take your original clip into an editor, add more footage to the end, then extract the faceset again. If you already cleaned the faceset, just make a copy of it before you extract the new longer clip, then delete that section after extraction and focus on the new faces. Otherwise you can do a completely new extraction and append something to the frame image filenames, as I pointed out in the video, to make sure the filenames are different. There is an option to "continue extraction" but I haven't used it much.
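The "append something to the frame image filename" step can be sketched in a few lines of Python (the `tag_frames` name and the `_b` tag are just examples); run it on the newly extracted frame images before faceset extraction so the two batches can't collide:

```python
from pathlib import Path

def tag_frames(frame_dir, tag="_b"):
    """Append a tag so frames from a second extraction can't collide
    with the first, e.g. 00001.jpg -> 00001_b.jpg."""
    renamed = []
    # sorted(...) materializes the list before any renaming happens
    for img in sorted(Path(frame_dir).glob("*.jpg")):
        new_name = f"{img.stem}{tag}{img.suffix}"
        img.rename(img.with_name(new_name))
        renamed.append(new_name)
    return renamed
```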

  • @fubukisophicha8831
    @fubukisophicha8831 3 роки тому

    why no subtitle?

    • @Deepfakery
      @Deepfakery  3 роки тому

      Still processing I guess. I’m going to add my own with other languages soon

    • @fubukisophicha8831
      @fubukisophicha8831 3 роки тому

      @@Deepfakery you didn't turn it on, ua-cam.com/video/Nv0mL_cIhGo/v-deo.html

  • @akasblack5686
    @akasblack5686 2 роки тому

    Make a detailed tutorial with multiple faces

    • @Deepfakery
      @Deepfakery  2 роки тому

      You mean to make a deepfake with multiple faces? Best way is to do separate projects and combine them in post.

    • @akasblack5686
      @akasblack5686 2 роки тому

      @@Deepfakery No... if there are 3 people / 3 faces and I want to replace only one face... how could that be done?

  • @koos1418
    @koos1418 2 роки тому

    Merge tutorial

  • @ДенисБекренев-х8ф
    @ДенисБекренев-х8ф 3 роки тому

    Any Russians here? I need help! I'll pay for tutoring!

  • @Gh0zTEdiTs
    @Gh0zTEdiTs 3 роки тому

    Could not load library cudnn_cnn_infer64_8.dll. Error code 1455
    Please make sure cudnn_cnn_infer64_8.dll is in your library path!
    Help?

    • @Deepfakery
      @Deepfakery  3 роки тому

      This looks like a file error, try unzipping DFL again. Also make sure it’s the right version for the GPU or CPU you are using

  • @phoraonesinatra
    @phoraonesinatra 2 роки тому

    Does the EVGA Black Founders Edition GeForce RTX 2080 Ti not work with DeepFaceLab? I have a Ryzen 7 2700X for my CPU. If the GPU won't work, will I at least be able to run Quick96 on the CPU?

    • @Deepfakery
      @Deepfakery  2 роки тому

      It should work with the "up to 2080ti" build. Are you getting an error or anything?