Easy Deepfake tutorial for beginners Xseg

  • Published Nov 28, 2024
  • Easy Deepfake tutorial for beginners Xseg.
    #deepfake
    #deepfacelab
    #dfl
    #dune2020
    #jsfilmz
    Check out my Unreal Engine 4 for Filmmaking Beginners Course!!
    sellfy.com/jsf...
    Learn Unreal Engine 4 for filmmaking from scratch!!
    Here is another way you can support the channel! Check the description for the list of topics I covered.
    sellfy.com/jsf...
    Buy my course from Artstation
    www.artstation...
    How to make a movie in Unreal Engine 5 Beginners Edition Sellfy Link:
    fpvltjxd.sellf...
    How to make a movie in Unreal Engine 5 Beginners Edition Artstation Link:
    www.artstation...
    Udemy Link
    www.udemy.com/...
    Check out My Linktree for more: linktr.ee/jsfilmz

COMMENTS • 113

  • @Jsfilmz
    @Jsfilmz  4 years ago +8

    vote for my short film here! myrodereel.com/watch/8674

    • @SP95
      @SP95 4 years ago +1

      🗳✔

    • @SPT1
      @SPT1 4 years ago

      Hello, just took a look at your channel, some impressive stuff. Congrats. Especially the Fast & Furious aging video. And that's why I'm writing to you : I have solid knowledge of DFL 2, but none about Fakeapp and almost none about EBSynth. So, could you make a tutorial on how you did the aging in this video and do you think it's doable with DFL + EbSynth ? If so, you would only need to explain the EBSynth part. Or link me a good tutorial for aging if it already exists. Also, whatever step you do in postprod to have this clean and crisp look, I'm also interested to know (I know Premiere well, AE not so much).

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      SPT1 hey bro sorry for late response youtube is terrible at notifying. I will make a tutorial for sure

    • @oliverwatson6689
      @oliverwatson6689 4 years ago

      VOTE done brother

  • @zabique
    @zabique 4 years ago +4

    You got DFL hooked up!
    [13:40] Hit space to switch the preview to src or dst, then P to change frames to identify any XSeg issues.
    You can also use a pretrained XSeg model for faster learning; models with 1M iterations are available.

  • @kaaaiyu
    @kaaaiyu 2 years ago +12

    18:55 and I will see you guys in a week. Lmao 😂 Great tutorial dude!

  • @Slay0r815
    @Slay0r815 1 month ago

    face is ok, all you need is a new hairstyle. BTW, I'm very new to this DFL thing. Can I use a pretrained model and a pretrained XSeg mask together for SAEHD training?

  • @SP95
    @SP95 4 years ago +6

    There is still a lot of manual work but for those who are brave enough it seems rewarding

  • @drgclangamez
    @drgclangamez 7 months ago

    Is there a faster method or program /software with better results today?

  • @Snafu2346
    @Snafu2346 3 years ago +1

    Previously, on older versions of DeepFaceLab, I never had any issues with obstructions over the face. I never had to do any custom rotoscoping, but now with DFL 2.0 it seems like anything covering the face, like a hand waving or long hair, causes the mask to break down or go blurry. What settings have changed that allowed the area behind the mask to stay intact? Previously it was just learned DST, but that doesn't work the way it used to.

    • @Snafu2346
      @Snafu2346 3 years ago

      without any obstructions, I have no problem. The deepface mask looks fine, but as soon as something gets in the way, what didn't used to be a problem is now suddenly a problem.

  • @deviljoe7171
    @deviljoe7171 3 years ago +1

    Hey do you recommend using a pretrained model or training it from scratch

    • @Jsfilmz
      @Jsfilmz  3 years ago

      i train once then save it just incase i have to use that same face again

  • @enochAyim
    @enochAyim 2 years ago +2

    derm derm derm, u really helped me , thumbs up bro

  • @botlifegamer7026
    @botlifegamer7026 1 year ago

    Mine doesn't merge them all with the same settings. Any reason you can think of why it doesn't?

  • @georges8408
    @georges8408 3 years ago +1

    SAEHD is by far more time consuming than Quick96; it takes forever to finish anything. The question is, can we use XSeg to train the masking and then use Quick96 (which is much faster), or does XSeg training only work with SAEHD?

    • @Arewethereyet69
      @Arewethereyet69 1 year ago

      did you ever find out?

    • @georges8408
      @georges8408 1 year ago

      @@Arewethereyet69 unfortunately I quit ... it is so complicated and time consuming that it isn't worth the time

  • @LukmanHakim-np2fk
    @LukmanHakim-np2fk 1 year ago

    Question: can I use only images as the data source?

  • @BenjiJames
    @BenjiJames 2 years ago +2

    Your work is so good, dude, that i decided to get into this myself, however, upon starting up, i realized that there's so little info out there about pretraining stuff, so i have a couple of questions:
    1. How long did you pretrain your src model for (when you did)?
    2. Did you place your src model into the folder "/_internal/pretrain_faces"? If not, how did you pretrain your face model?

    • @Vacated204
      @Vacated204 1 year ago

      Did you figure this out? I'm still confused on how to use pretrained models. Do I train the celeb I'm swapping with the pretrain?? I don't understand lol

  • @freehaven-junprince2376
    @freehaven-junprince2376 3 years ago

    Very good video, but I have a question about the end.... Did you really spend a full week with 800k+ iterations to make a 2-3 second destination video? If you wanted to make a 5 minute video how much longer would we need? Is it the same, or are we talking a few months?

    • @chi11estpanda
      @chi11estpanda 3 years ago

      You know, I seriously wrote out a long answer and then when re-reading the question again, I suddenly realized that you're actually/probably just teasing him so it was rhetorical and you probably already know all the wrong turns this video made without me telling you, huh?

  • @clydefrog6961
    @clydefrog6961 3 years ago

    Very noob question indeed, but what are, in a nutshell, the differences between XSeg train, Quick96 train, and SAEHD train?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      xseg is manual others are automatic

  • @danielreso4405
    @danielreso4405 2 years ago

    Good afternoon. How do you remove the blur? The deepfake creation is blurry; is there any technology for this?

  • @kunhasanhadi3711
    @kunhasanhadi3711 4 years ago +2

    Nice tutorial! What are your PC specs? Thanks

    • @Jsfilmz
      @Jsfilmz  4 years ago +2

      Kun Hasan Hadi ryzen threadripper 1950 gtx 1080 its almost 4 years old. I tried to snag a 3080 but u know how that went lol

    • @Jsfilmz
      @Jsfilmz  4 years ago

      Anastazy Staziński of course man ill make more as i learn

  • @tgtutorials
    @tgtutorials 11 months ago

    The best tutorial I've seen on YouTube 😉

  • @georges8408
    @georges8408 3 years ago

    Thank you for this tutorial... please let us know something. If we make a good model of ourselves (source video), then we can reuse it any time, right? I mean, we can skip all the steps like extracting images etc. that concern the source video. Is that correct?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      yes u can save the model and iterations

    • @georges8408
      @georges8408 3 years ago +1

      @@Jsfilmz thanks, but how? Where is the trained model? (I use a video of myself as the source)

  • @gauravlokha8787
    @gauravlokha8787 11 months ago

    My process -
    2) extract images from video data_src
    3) extract images from video data_dst FULL FPS
    4) data_src faceset extract
    5) data_dst faceset extract
    5.XSeg) data_dst mask - edit
    5.XSeg) data_src mask - edit
    5.XSeg) train
    5.XSeg) data_dst trained mask - apply
    5.XSeg) data_src trained mask - apply
    7) merge SAEHD
    This is exactly what I did. Now when I started to merge, the model seems to apply the destination face itself to data_dst. What am I doing wrong?? It's not applying the source face for some reason.

    • @idigogideon4339
      @idigogideon4339 1 month ago

      you did not train the saehd, you just started merging
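The reply above pinpoints the bug: step 6 (train SAEHD) never ran before merging. As a sketch, the full step order can be written down and sanity-checked in one place; the step names below mirror the .bat menu quoted in this thread and are illustrative, since exact names vary between DeepFaceLab releases.

```python
# Illustrative DeepFaceLab step order; names mirror the .bat menu in this
# thread and may differ between releases.
PIPELINE = [
    "2) extract images from video data_src",
    "3) extract images from video data_dst FULL FPS",
    "4) data_src faceset extract",
    "5) data_dst faceset extract",
    "5.XSeg) data_src mask - edit",
    "5.XSeg) data_dst mask - edit",
    "5.XSeg) train",
    "5.XSeg) data_src trained mask - apply",
    "5.XSeg) data_dst trained mask - apply",
    "6) train SAEHD",   # the step missing from the list above
    "7) merge SAEHD",
]

def validate_order(steps):
    """Check that SAEHD training happens before merging."""
    return steps.index("6) train SAEHD") < steps.index("7) merge SAEHD")
```

Without step 6 the merger has no trained face model to apply, which is exactly why the output shows the destination face.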

  • @bakermclendon
    @bakermclendon 4 years ago +1

    JSFILMZ - once I edit the dst or src with XSeg, do I select the 'XSeg) data_dst/data_src trained mask - apply.bat' as opposed to 'XSeg) train.bat'? It appears you went directly to XSeg train.bat rather than applying the XSeg mask first. It may not make a difference. Thanks for any info you can provide.

    • @Jsfilmz
      @Jsfilmz  4 years ago +2

      hey bro gotta train first before u apply

  • @KillaCyst
    @KillaCyst 4 years ago

    great tutorial. I'm a noob to deepfakes and have done a couple but I think xseg might be the step I was missing!

  • @Arewethereyet69
    @Arewethereyet69 1 year ago

    What if my laptop can't train SAEHD, only Quick96? Will the XSeg training not be used?

    • @doziekizito3399
      @doziekizito3399 1 year ago

      I have this issue too. My laptop only has a 6GB VRAM NVIDIA GeForce RTX 40-series card, so it can't train SAEHD
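For cards in this VRAM range, a common community workaround is to shrink the SAEHD model rather than give up on it. The settings below are an assumed low-VRAM starting point drawn from general community advice, not official recommendations; each key mirrors a prompt the SAEHD trainer asks on first run.

```python
# Hypothetical low-VRAM SAEHD starting point (roughly 4-6 GB cards).
# Values are community-style suggestions, not official recommendations.
LOW_VRAM_SAEHD = {
    "resolution": 128,           # smaller faces use far less memory
    "archi": "df",               # lighter than the liae-ud variants
    "ae_dims": 128,
    "e_dims": 48,
    "d_dims": 48,
    "batch_size": 4,             # the single biggest VRAM lever
    "models_opt_on_gpu": False,  # keep optimizer state in system RAM
}
```

If training still OOMs, lowering batch_size further is usually the first thing to try.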

  • @sheeeple2069
    @sheeeple2069 4 years ago +1

    you should mask out the obstructions during the xseg part

    • @Jsfilmz
      @Jsfilmz  4 years ago +3

      bro i tried man but the face changes it makes funny faces when i exclude something like a knife goin across his face hahahaha when lets say a hand goes across the face im trying to figure out how to fix

  • @KhalilCh
    @KhalilCh 2 years ago +1

    can I do it on mac?

    • @Jsfilmz
      @Jsfilmz  2 years ago

      never owned a mac sorry

  • @rgrimoldi
    @rgrimoldi 3 years ago +1

    how do you eliminate the blurriness?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      iteration time leave it longer, i think that was 800k

    • @francisdeleon5647
      @francisdeleon5647 1 year ago

      @@Jsfilmz Hi, mine was at 976,873 iterations and still blurry. Any advice?
      ================== Model Summary ===================
      Model name: new_SAEHD
      Current iteration: 976873
      ------------------ Model Options -------------------
      resolution: 128
      face_type: wf
      models_opt_on_gpu: True
      archi: liae-ud
      ae_dims: 256
      e_dims: 64
      d_dims: 64
      d_mask_dims: 22
      masked_training: True
      eyes_mouth_prio: False
      uniform_yaw: False
      blur_out_mask: False
      adabelief: True
      lr_dropout: n
      random_warp: False
      random_hsv_power: 0.0
      true_face_power: 0.0
      face_style_power: 0.0
      bg_style_power: 0.0
      ct_mode: none
      clipgrad: False
      pretrain: False
      autobackup_hour: 0
      write_preview_history: False
      target_iter: 0
      random_src_flip: False
      random_dst_flip: True
      batch_size: 8
      gan_power: 0.0
      gan_patch_size: 16
      gan_dims: 16
      -------------------- Running On --------------------
      Device index: 0
      Name: NVIDIA GeForce GTX 1660
      VRAM: 4.80GB
      ====================================================

  • @silverhawk661
    @silverhawk661 4 years ago +1

    @JSFILMZ
    Hey there. Thanks for the great tutorial.
    Is there any way to make the nose of the dst model have the same shape as the src model when he looks to the side?
    When the character looks to the side, his nose keeps the dst shape; i want his nose to look like the src model's nose in side/profile view.
    How am i supposed to do that?

    • @Jsfilmz
      @Jsfilmz  4 years ago

      yea you can increase the face style power but be very careful save your model first

  • @guptaflavio5383
    @guptaflavio5383 4 years ago

    *Hi, In the aligned results, there are 2 faces. How will I know if it belongs to the subject and not the extra person?*

    • @Jsfilmz
      @Jsfilmz  4 years ago

      Gupta Flavio for your destination folder im guessing?

    • @idigogideon4339
      @idigogideon4339 1 month ago

      delete the one you dont need

  • @romania3dart
    @romania3dart 3 years ago

    Hi... It says "Exception: Unable to start subprocesses ....
    Press any key to continue ..."
    Then after i press enter it shuts down ...

    • @Jsfilmz
      @Jsfilmz  3 years ago

      u need an nvidia gpu probably

  • @impcharts
    @impcharts 2 years ago

    How to create a source face that is an average of two persons? Can I extract faces separately from two characters (each producing its own data_src), and put all the aligned face jpg files together into one aligned folder to train? Can this method create an average source model that looks somewhat like both of the characters? Is there a better method? I appreciate your answer. Thanks.

  • @GiselaMarten-d4c
    @GiselaMarten-d4c 3 months ago

    Hello, I'm building a head model. How do I find the faceset.pak for head? Thanks

  • @tszlokchan5343
    @tszlokchan5343 3 years ago

    Hi, after using the dst sort I deleted some aligned faces, then sorted again by original filename. However, the numbering of the faces changed and no longer matches the photos I extracted. Is that fine? Thank you

    • @Jsfilmz
      @Jsfilmz  3 years ago

      no that sounds weird bro

  • @bharatiratan5209
    @bharatiratan5209 4 years ago

    Hi! I have an Nvidia GeForce MX110 with 2GB VRAM. Can I use DeepFaceLab? Please let me know.

    • @bharatiratan5209
      @bharatiratan5209 4 years ago

      Please reply

    • @Jsfilmz
      @Jsfilmz  4 years ago

      prolly not bro, u can try cpu but it wont be as good

  • @slafajsldasdf7592
    @slafajsldasdf7592 2 years ago

    Where is the repo for these bat files?

  • @grantpeterson2524
    @grantpeterson2524 4 years ago

    Hey man, not sure why but the Xseg training isn’t working for my 3080 :( it does 1 iteration and then just stops doing iterations. No errors or anything, just seems to be infinitely working on a single iteration. Any ideas? No support for 3000 series cards yet? Seems to work fine off my 5900X but it obviously isn’t optimized for CPUs so it takes 2 seconds an iteration if I do that

    • @Jsfilmz
      @Jsfilmz  4 years ago

      far as i know rtx 3000 is still not supported with dfl

  • @BenjiJames
    @BenjiJames 2 years ago

    Fantastic tutorial, good sir! If you're looking for brown actors or darker skin toned people, you'll find heaps in Australian movies/trailers :)

  • @GarageRockk
    @GarageRockk 4 years ago

    Hey! Thank you very much for this tutorial. My question is, what happens when you have a video with two people in the scene? How do i choose which one i want to deepfake?

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      Ale Camps two faces as the destination or source?

    • @GarageRockk
      @GarageRockk 4 years ago

      @@Jsfilmz For my destination. Say for example i have a harry potter scene with hermione and ron and i want my face only to be applied in ron's face.
      Thank you for your time and response!

    • @Jsfilmz
      @Jsfilmz  4 years ago +4

      Ale Camps you can do a manual extract for the dst or just crop out the other faces or blur them. If you can wait im planning on doing more tuts

    • @GarageRockk
      @GarageRockk 4 years ago

      @@Jsfilmz thank you very much! Of course ill wait, definitely subscribing

    • @chi11estpanda
      @chi11estpanda 3 years ago

      @@GarageRockk When you extract faces from your destination video, you will have an extra set of photos in your data_dst/aligned folder. Delete all the ones that detected and focused on Hermione's face. To be more precise and avoid accidentally deleting what you need, use 'data_dst view aligned debug results' to see the landmarks of each image in data_dst/aligned/; any image showing landmarks for Hermione can be deleted from the aligned folder. (Make sure you do NOT delete them from the data_dst folder, only from the aligned folder within data_dst.) Then you can proceed with the rest of the instructions, though several steps in this video are not ideal and not recommended. I mention this mostly for anyone reading this answer in the future, as 4 months have passed and you may have already learned how to accomplish this yourself.

  • @LightWolf25
    @LightWolf25 4 years ago

    How do you fix the eyes looking in different directions from the original dst?

    • @Jsfilmz
      @Jsfilmz  4 years ago

      LightWolf25 try doin eye priority y then turn it off when its fixed

    • @LightWolf25
      @LightWolf25 4 years ago

      @@Jsfilmz when does that option come ? Thnx

    • @Jsfilmz
      @Jsfilmz  4 years ago

      LightWolf25 during training its called eyes priority its in this video

  • @boratsagdiyev1586
    @boratsagdiyev1586 4 years ago

    when i'm at train SAEHD and it starts, i see all the random included faces instead of my own. i dont get it, i copied the entire workflow. please help!

    • @Jsfilmz
      @Jsfilmz  4 years ago

      did u clear ur workspace?

    • @Vlfkfnejisjejrjtjrie
      @Vlfkfnejisjejrjtjrie 4 years ago

      Did it ask to use the CelebsA pre-trained model upon initial model creation? Say no. New to deepfacelab so not sure if you can just override it on next start up.

  • @tszlokchane5505
    @tszlokchane5505 3 years ago

    Hello, why am I getting an error when I go for the XSeg training?

    • @Jsfilmz
      @Jsfilmz  3 years ago

      what gpu

    • @tszlokchane5505
      @tszlokchane5505 3 years ago

      JSFILMZ I am using a GTX 1070 which has 8GB VRAM. However, I'm still getting a memory error, or it just gets stuck after I enter the batch size

    • @tszlokchane5505
      @tszlokchane5505 3 years ago

      Benjamin Blacher oh, is there any difference between putting the folder on the C drive versus another drive?

    • @paulgeorge9228
      @paulgeorge9228 2 years ago

      @@tszlokchane5505 yeap, it can't run xD. it didnt work for me at first, then i moved it to the c drive and also did a few other things, and then it worked

    • @idigogideon4339
      @idigogideon4339 1 month ago

      @@tszlokchane5505 reduce the batch size

  • @VKTVCHANNEL
    @VKTVCHANNEL 4 years ago

    How much time does it take to deepfake a 15-second video on a normal i7 64-bit computer? Please...

    • @Jsfilmz
      @Jsfilmz  4 years ago

      VK TV its a gpu u will need for it to be faster.

    • @VKTVCHANNEL
      @VKTVCHANNEL 4 years ago

      @@Jsfilmz Thank you. But what could be the estimated time to create 15 seconds of deepfake video on my normal i7 64-bit computer? I am curious. Can I start, or should I leave it running for weeks? And what if it is only a 5 or 6 second video?

    • @PlatinNr1
      @PlatinNr1 4 years ago +1

      @@VKTVCHANNEL the length of the video doesn't matter.

    • @VKTVCHANNEL
      @VKTVCHANNEL 4 years ago

      @@PlatinNr1 I'm surprised to hear that the duration of the video won't affect the time to create the deepfake. If I use longer videos and want to create a longer deepfake, I'd expect it to take longer. That's why I asked about the time to create a minimum-length video with minimum resources, if you see what I mean.

    • @PlatinNr1
      @PlatinNr1 4 years ago +1

      @@VKTVCHANNEL the time-intensive part is training the model (replacing the face). The length of the video output doesn't matter for that.

  • @Zimbabwe.
    @Zimbabwe. 4 years ago

    Hi Js, do you know why I get an error when I train the files at step #6? I tried all the trainers, they all give errors. Any ideas?

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      Zimbabwe did u change any parameters what error r u gettin

    • @Zimbabwe.
      @Zimbabwe. 4 years ago

      @@Jsfilmz Thank you for your reply. There are a bunch of errors; the top ones say: Map memory - CL_MEM_OBJECT_ALLOCATION_FAILURE. All the other errors point to different lines in the .py files, e.g. library.py line 131, backend.py line 1668 in variable. There are almost 12 error lines across different .py files.

    • @Zimbabwe.
      @Zimbabwe. 4 years ago

      @@Jsfilmz Thank you again, your channel is so informative. I shared it with a bunch of my friends.

    • @Jsfilmz
      @Jsfilmz  4 years ago

      Zimbabwe what graphics card do u have

    • @Zimbabwe.
      @Zimbabwe. 4 years ago

      @@Jsfilmz i have AMD Radeon HD 8490 its 24gb combined , 8 gb internal and the rest is an add from the extra card

  • @DAVIDTATLITUG
    @DAVIDTATLITUG 4 years ago

    You haven't given a clue what program to get to start this. Where do we get the program

    • @Jsfilmz
      @Jsfilmz  4 years ago +2

      its in the thumbnail brosky its called dfl 2.0

  • @oliverwatson6689
    @oliverwatson6689 4 years ago

    Sir, can we make a deepfake with an Intel® Core™ i3-9100F processor (6M cache, up to 4.20 GHz) along with an Nvidia GTX 1650 Super graphics card?

    • @Jsfilmz
      @Jsfilmz  4 years ago +1

      yes im sure you can, if not try a cpu one

    • @oliverwatson6689
      @oliverwatson6689 4 years ago

      @@Jsfilmz I am very new to deepfakes. How do I run it on the CPU, sir?

  • @I77AGIC
    @I77AGIC 1 year ago

    you MASSIVELY overfit your model. 800k is way too many iterations. Always look at the graph: once the yellow part starts moving up instead of down, you have trained for too long. The more data you have, the longer you can train before this happens, though.
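The "yellow graph moving up" rule of thumb above can be phrased as a simple stopping check. This is a hedged sketch of the idea, not anything DFL itself implements; the window size is an arbitrary assumption.

```python
def should_stop(loss_history, window=10_000):
    """Stop when the recent average loss stops improving: a rough proxy
    for the loss graph turning upward after overfitting sets in."""
    if len(loss_history) < 2 * window:
        return False  # not enough history to compare two windows
    recent = sum(loss_history[-window:]) / window
    previous = sum(loss_history[-2 * window:-window]) / window
    return recent >= previous  # no improvement over the last window
```

In practice people eyeball the preview window and the graph rather than automate this, but the comparison is the same.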

  • @thewakandapost
    @thewakandapost 1 year ago

    Looks like an Indonesian person. 🤔

  • @oliverwatson6689
    @oliverwatson6689 4 years ago

    Please make a deepfake video that doesn't require a graphics card

  • @PascalQNH2992
    @PascalQNH2992 2 years ago +1

    Followed all the steps from different videos. Nothing works; errors are all I get. And my pc is pretty, pretty fast! ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [20,128,320,320]
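That OOM is mostly arithmetic: a single float32 activation tensor of the quoted shape already needs about a gigabyte, so halving the batch size (the leading 20) halves it regardless of how fast the CPU is. A quick back-of-the-envelope check:

```python
def tensor_megabytes(shape, bytes_per_elem=4):
    """Memory for one dense float32 tensor of the given shape, in MiB."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20

# The shape from the error above: batch 20, 128 channels, 320x320.
mb = tensor_megabytes((20, 128, 320, 320))  # 1000.0 MiB for one tensor
```

Training keeps many such tensors alive at once, which is why reducing batch size or resolution is the usual fix rather than a faster CPU.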
