AE Face Replacement + Face Tracking for consistent img2img

  • Published Aug 21, 2024

COMMENTS • 96

  • @TomiTom1234
    1 year ago +1

    Watching the steps... opening Snapchat, turning on the Disney cartoon filter... voilà, mission accomplished lol.

  • @shaman_ns
    1 year ago +3

    Bro, you are on the cutting edge of fine-tuning the video output of these AI-based platforms. Keep it up!

  • @Blaxpoon
    1 year ago +13

    Slowly but surely, you are creating the AI workflow of the future!

  • @IntiArtDesigns
    1 year ago +1

    Great tutorial and an awesome technique. Thanks for this.

  • @YossiSheba
    1 year ago

    Many thanks, mate! Subscribed.

  • @alus0l
    1 year ago

    you da man! thanks a million!

  • @DeliciousBrain12
    1 year ago

    Dude, this is absolutely amazing! Thanks a lot for sharing.

  • @turner-tune
    1 year ago

    Such a cool process! Thanks for taking us through your creative process, it's really amazing!

  • @paperdave
    1 year ago

    I've just been getting into this and have been thinking of ideas like this, so it's awesome to see that these techniques do work. Can't wait to try this out on my own.

  • @motiontinez
    1 year ago

    Dude, your videos are awesome, you deserve more followers!

  • @jeffreysabino6176
    1 year ago

    Consistent!

  • @hatuey6326
    1 year ago

    Great work!! Congrats!!

  • @vesto4kaa
    1 year ago

    Bro, thank you! Great work!

  • @harrishodovic7010
    1 year ago

    Always great content my friend ;)

  • @nalisten
    1 year ago

    I appreciate your amazing work; I'm learning a lot from your channel.

  • @jordyandrews
    1 year ago +3

    I would look into AI background removal!! It's worked really well for me and might save you some hassle with masking.

    • @ianhmoll
      1 year ago

      Is it free or paid? Can you send me the website?

    • @jordyandrews
      1 year ago

      @@ianhmoll look up Robust Video Matting

    • @Jaspax
      1 year ago

      Rotobrush 2 is AI-based though.

    • @ianhmoll
      1 year ago

      @@jordyandrews thank you. Seems pretty good

  • @soyguikai
    1 year ago

    Excellent, man.

  • @scavengers4205
    1 year ago

    This is nice!

  • @IdanCreativeCloud
    1 year ago

    thank you bro!

  • @morgannpoulain4612
    1 year ago

    Great tips!! Thanks a lot!

  • @gemini9775
    1 year ago

    thanks a lot dude ^^

  • @videoeditinglifestyle
    1 year ago +1

    Great stuff as usual, bro! I would love to see some tutorials on how to use Stable Diffusion on a Mac, or if you already have one out, could you link me to it? The only solution I've seen so far is DiffusionBee, which isn't all that great.

  • @xEvgeNx
    1 year ago

    😃 Big job, Stable Diffusion is top. 👍

  • @user-galad1995
    1 year ago +1

    Too difficult for me, but a result like the one you've got motivates me to develop my skills.

  • @spotsnap
    1 year ago

    Corridor Crew did the same thing, but you made a tutorial about it. Thanks!

  • @EdVizenor
    1 year ago

    LIKED, SUBSCRIBED.

  • @Jaspax
    1 year ago +2

    Very clever technique, tracking an already stylized face over the footage. Does it work with faces that show bigger expressions (laughing, crying, going from neutral to smiling, etc.)?

    • @enigmatic_e
      1 year ago +4

      Technically you could with Face Tool. I've seen footage of it being used while people show expressions. I may do a video showing that.

    • @shaman_ns
      1 year ago

      @@enigmatic_e yes please!

  • @RhetoricalTraveling
    1 year ago +1

    Not a perfect system, but a great idea :D Excited to see this evolve more. What the Corridor peeps did was include a deepfake; I wonder if they created a Stable Diffusion face with enough iterations to train a deepfake mask and then deepfaked the original, or deepfaked the actor as Tom Holland and then diffused the stabilized video. Thoughts?

    • @enigmatic_e
      1 year ago

      I think that is very possible.

    • @mrpixelgrapher
      1 year ago

      They just face-tracked and placed it on the original footage, essentially rendering the face and the rest of the scene separately. Pretty much what enigmatic_e did here.

    • @kylokat
      1 year ago

      I don't think they used a deepfake for their Spider-Verse video.

    • @mrpixelgrapher
      1 year ago

      @@kylokat they didn't, you're right. They used face tracking.

    • @aianimations
      1 year ago

      @@kylokat They termed it a 'stable fake', where the actor playing Tom Holland had his face replaced by a Spider-Verse Tom Holland.

  • @VFXkabilan
    1 year ago

    nice idea

  • @PriestessOfDada
    1 year ago +2

    Thought: this might actually be easier to do in Blender.

  • @darccow6191
    1 year ago +2

    Nice. Sometimes I use a blue/green background; then you can key it out after you run it through Stable Diffusion instead of needing a mask.

    • @enigmatic_e
      1 year ago

      Yeah, I think that's a great workflow.
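
The green-screen suggestion in this thread amounts to a simple chroma key applied after Stable Diffusion renders the frames. As a rough illustration of the idea (not the video's actual After Effects keyer), here is a minimal NumPy sketch; the threshold values are illustrative assumptions:

```python
import numpy as np

def key_out_green(frame: np.ndarray, g_min: int = 120, dominance: int = 40) -> np.ndarray:
    """Return an RGBA copy of an RGB frame with strongly green pixels made transparent."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # A pixel counts as "screen green" when green is bright and clearly
    # dominates both red and blue.
    screen = (g > g_min) & (g - r > dominance) & (g - b > dominance)
    alpha = np.where(screen, 0, 255).astype(np.uint8)
    return np.dstack([frame, alpha])
```

A real keyer (Keylight in After Effects, for example) also handles spill and soft edges, which this toy version ignores.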

  • @Dezember456
    1 year ago

    Nice one, will definitely try that as well. I just wonder if it would be better to stabilize it to the face first and then apply the cartoon face.

    • @enigmatic_e
      1 year ago +1

      I'm sure both work. You can try it and let me know how you like it.

  • @mafallo
    1 year ago

    Just a suggestion: you should check out the face thing Nerdy Rodent did and combine that with your method; maybe that will make the face look more alive and natural. Thanks for the lesson.

    • @enigmatic_e
      1 year ago

      I've seen a few of his videos; they have taught me a lot. I think it's possible to make it look alive with this method as well, I just didn't go that deep with it yet. I've seen what AE Face Tool can do, and it definitely allows for facial expressions. I'll make another video showing that someday!

  • @jasonmurphy007
    1 year ago

    It’s not free, but Lockdown on aescripts would save a ton of work in this workflow.

  • @markv6684
    1 year ago

    Guys! Please help me: the photo that I add to the "Replacement Comp - Holder" flies to the upper left corner and is not put on my head in the video. Also, for some reason the "Replacement comp 1" folder has a black screen instead of a video.

  • @marcdevinci893
    1 year ago

    Nice, thanks for sharing! So would this work with a talking character just as well?

    • @enigmatic_e
      1 year ago +1

      Yes, it could work. There is a way to do it with the AE Face Tool I mentioned here, but you need to make some adjustments. I may make a video about it in the future.

  • @MichaelFlynn0
    1 year ago

    still hard work

    • @enigmatic_e
      1 year ago +1

      Yep. I mean, at least people can't say AI videos take no skill.

  • @diego.spirit
    1 year ago

    I saw that you use an inpainting model as a template. What other Stable Diffusion models are good for maintaining consistency?

    • @enigmatic_e
      1 year ago +1

      Now you can use almost any model with ControlNet and get some pretty consistent results.

  • @bigbangentertainment866
    1 year ago

    Sir, I have a question: can we create a vlogging video like this, metaverse-style? Please tell me.

  • @senseidsa4492
    1 year ago

    Hello, I would like to know how you did the transition in the multi-colored wave video. Could you tell me how you do it?

  • @mrpixelgrapher
    1 year ago

    A better way to get the proportions would be to do an img2img with a Spider-Verse SD model on your subject and then use that image as the comp instead of the Gwen Stacy image.

    • @enigmatic_e
      1 year ago

      True; however, getting a very consistent cartoon style that retains the same facial features throughout is almost impossible. I tried it myself and was able to get maybe 1-2 seconds of consistency, then suddenly the style started to shift as her face changed direction.

    • @mrpixelgrapher
      1 year ago

      @@enigmatic_e I was talking about following the exact same procedure as you did in this tutorial, but instead of using the Gwen Stacy face from the Spider-Verse movie, use a Spider-Verse SD model img2img on your subject. Then use that generated image instead of the Gwen Stacy one and follow the same process. This will reduce all the headache of "getting the same proportions".

  • @shaikshahid7408
    1 year ago

    Hello bro, your videos are informative and amazing. I'm using an RTX 3060 and still getting "CUDA out of memory. Tried to allocate 678.00 MiB (GPU 0; 6.00 GiB total capacity; 4.65 GiB already allocated; 0 bytes free; 4.81 GiB reserved in total by PyTorch). If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF." Please let me know if you know the solution.
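
For readers hitting the same error: one option the message itself suggests is tuning the CUDA caching allocator via PYTORCH_CUDA_ALLOC_CONF. A minimal sketch, noting that 128 MiB is an illustrative value and that the variable must be set before PyTorch makes its first CUDA allocation:

```python
import os

# Must be set before the first CUDA allocation (ideally before importing torch).
# Smaller split sizes reduce fragmentation at some cost in allocation speed;
# 128 is an illustrative starting point, not a recommendation from the video.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

In the AUTOMATIC1111 web UI specifically, launching with the `--medvram` or `--lowvram` flags, or simply generating at a lower resolution or batch size, are the more common fixes for a 6 GB card.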

  • @lionbeatscobra
    1 year ago

    It's too bad they haven't created a quicker way to do this. Hopefully in the near future. :)

    • @enigmatic_e
      1 year ago +1

      Yeah, I feel you. These are just workarounds for SD limitations. I'm sure that very soon this won't even be an issue.

  • @qq373163143
    1 year ago

    Hi, a comment from a long-time viewer: maybe I can call you "master", a master in the AIGC field. I hope you can continue to lead us to explore more interesting and useful AI usage. I will always subscribe and recommend you to the people around me.

  • @hugg3d
    1 year ago

    Yeah, after the video with the stabilized footage I thought the same: now how do I unstabilize and keep the stuff in place? Thank you.
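
The stabilize-then-unstabilize idea in this comment boils down to applying a per-frame transform and then its inverse after compositing. A toy, translation-only sketch of that round trip (real footage also needs rotation and scale, and this is not the video's actual expression setup):

```python
import numpy as np

def stabilize_offsets(track: np.ndarray) -> np.ndarray:
    """Per-frame (dx, dy) that pins the tracked feature to its first-frame position."""
    return track[0] - track

def unstabilize(points: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Invert the stabilization so elements composited onto the locked-down
    frames follow the original camera/face motion again."""
    return points - offsets
```

Adding the offsets locks the tracked point in place for processing; subtracting them afterwards returns everything (including the newly composited face) to the original motion path.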

  • @ShivaTD420
    1 year ago

    Cool, AI rotoscoping.

  • @blaqdesign286
    1 year ago

    1k like

  • @ianhmoll
    1 year ago

    Niice! How did you do that vortex effect on the face transition?

    • @enigmatic_e
      1 year ago

      Ah yeah, I did that with shape layers and turbulent displacement.

    • @ianhmoll
      1 year ago

      @@enigmatic_e Damn... it's actually so good that I can't believe it was just that, haha. Can you do a tutorial? The whole final look would make a good tutorial.

    • @enigmatic_e
      1 year ago +1

      @@ianhmoll Yeah, I will!

    • @ianhmoll
      1 year ago

      @@enigmatic_e waiting anxiously :)

    • @ianhmoll
      1 year ago

      @@enigmatic_e Will you still do the tutorial?

  • @LP12576
    1 year ago

    Have you tried using DreamBooth to improve consistency?

    • @enigmatic_e
      1 year ago

      No, I haven't. It might work to give a specific likeness of a character.

  • @user-bo6yh3dc6c
    1 year ago

    very hard

    • @enigmatic_e
      1 year ago

      I know, but sometimes it's necessary to get what you want.

  • @relative_vie
    1 year ago

    Can you export this stuff to create an IG filter?

    • @enigmatic_e
      1 year ago

      I think there are other ways to accomplish that, probably way easier than this.

    • @relative_vie
      1 year ago

      @@enigmatic_e auto generated IG filters would be insane 👀

  • @BryanKesler
    1 year ago

    Hmm, I wonder how this would work with a face that is talking or showing emotion. Use a deepfake tool on the static face with the original footage?

    • @enigmatic_e
      1 year ago +1

      The Face Tool allows for emotion and even talking. I will make a video about that some day.

    • @BryanKesler
      1 year ago

      @@enigmatic_e An AE plugin?

    • @enigmatic_e
      1 year ago

      @@BryanKesler Yeah, the one I showed here.

  • @grandsunhar
    1 year ago +1

    You could have used "Thin-Plate Spline Motion Model for Image Animation" for better face tracking and moving eyes.

    • @grandsunhar
      1 year ago

      The YouTuber Nerdy Rodent has a very useful tutorial about it.

    • @A.eye_101
      1 year ago

      Definitely not; Thin-Plate is not that good.