THIS IS CRAZY! Create your own talking avatar with Stable Diffusion or Midjourney!

  • Published 20 Oct 2024

COMMENTS • 57

  • @theSato
    @theSato 1 year ago +3

    Haha, this is pretty cool. No replacement for full-on artists making 3D models or 2D VTuber models, but still a nice cool novelty!
    Can't wait for the Controlnet followup, I'm sure that's where everyone's excitement is right now!

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Yeah I know. It’s also where my mind is at the moment. Trying to grasp all the possibilities and how to create the best workflow for it🙌

  • @apolloagcaoili5363
    @apolloagcaoili5363 1 year ago +8

    Title got me fooled. Talking avatar was created via D-ID, not SD or MJ.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Sorry about that. That was not my intention. Thanks for watching though!🙏

    • @tomburden
      @tomburden 1 year ago +1

      Same.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      @@tomburden Sorry about that. I guess you could maybe do it with Deforum...

    • @tomburden
      @tomburden 1 year ago +1

      @@LevendeStreg You still made a cool video!! Neat stuff!

    • @LevendeStreg
      @LevendeStreg  1 year ago

      @@tomburden Thank you kindly. I really appreciate that! Thanks for watching!

  • @meredithhurston
    @meredithhurston 1 year ago +1

    This is great! I love it. Thanks for sharing.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Thank you! I’m glad you like it. And thanks for watching!

  • @IvanShredenger
    @IvanShredenger 1 year ago +1

    I'm quite curious: is it possible to synthesize very high-resolution images using only the Stable Diffusion model itself, without external upscalers?
    That's a question no one dares to answer!
    But I hope for a logical and correct answer.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Maybe try reading this publication about conditional GANs: openaccess.thecvf.com/content_cvpr_2018/papers/Wang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf
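
      A rough illustration of the "SD only, no external upscaler" idea being asked about: generate a base image, then run a second img2img pass at a higher resolution so the model itself adds the detail (the way the webui "highres fix" works). This is only a sketch assuming the Hugging Face diffusers library; the model ID, prompt, and strength value are placeholders.

      import torch
      from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

      model_id = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint

      # Pass 1: generate a normal-resolution base image.
      txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
      prompt = "portrait of a friendly news presenter, studio lighting"
      base = txt2img(prompt, height=512, width=512).images[0]

      # Pass 2: resize the base image and let img2img re-detail it at the larger
      # size, so no separate upscaler model is involved.
      img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
      big = img2img(prompt=prompt, image=base.resize((1024, 1024)), strength=0.4).images[0]
      big.save("portrait_1024.png")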

  • @strayS2K
    @strayS2K 1 year ago +1

    Broken down nicely, what a world! Thank you 😊

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Thank you kindly and thanks for watching 🙌

  • @promptjungle
    @promptjungle 1 year ago +3

    Nice video 😊

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Thank you kindly! I really appreciate it 🙌

  • @LevendeStreg
    @LevendeStreg  1 year ago

    Thank you so much @LouisGedo for your comment. Don’t know what happened, but your comment just disappeared - but thanks!🎉

  • @optimus6858
    @optimus6858 1 year ago +1

    kinda pricey

  • @PlasticPaper
    @PlasticPaper 1 year ago +1

    Is it possible to use it as an avatar that can track mouth and eye movements for a live stream?

    • @LevendeStreg
      @LevendeStreg  1 year ago

      I’m not quite sure. There may be an option for that in the enterprise solution.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      But you do know that NVIDIA has a solution for that, right? The Maxine eye control. 🙌
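
      For anyone who would rather roll their own tracking than wait for an enterprise feature, here is a hedged sketch of reading rough mouth- and eye-openness values from a webcam, assuming OpenCV and MediaPipe's Face Mesh; the landmark indices are approximate and the printed values would still have to be wired into whatever drives the avatar.

      import cv2
      import mediapipe as mp

      face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
      cap = cv2.VideoCapture(0)  # default webcam

      while cap.isOpened():
          ok, frame = cap.read()
          if not ok:
              break
          result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
          if result.multi_face_landmarks:
              lm = result.multi_face_landmarks[0].landmark
              # Rough openness signals: inner-lip landmarks 13/14, left-eye lids 159/145.
              mouth_open = abs(lm[13].y - lm[14].y)
              eye_open = abs(lm[159].y - lm[145].y)
              print(f"mouth: {mouth_open:.3f}  eye: {eye_open:.3f}")

      cap.release()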

  • @Knowledgetalks135
    @Knowledgetalks135 1 year ago +1

    Can you please tell me how I can do hand movements for this AI avatar character?

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Great question. You can't in these solutions. You'd have to rig it up in After Effects or Animate or some other animation software 🙌

  • @_B3ater
    @_B3ater 11 months ago +1

    Well, she doesn't talk about the watermark across the whole screen unless you pay 200 dollars a month. Very helpful, really. Thank you for wasting another hour of my time.

    • @LevendeStreg
      @LevendeStreg  11 months ago

      Please note that this is an old video. The watermark was not as prominent on the videos back then.

    • @_B3ater
      @_B3ater 11 months ago +1

      Ah ok. Sorry for being too aggressive then 🙇@@LevendeStreg

  • @Maeve472
    @Maeve472 1 year ago +1

    How are we going to train it on our photos?

    • @LevendeStreg
      @LevendeStreg  1 year ago

      You might want to check out this video I did on the topic:
      ua-cam.com/video/jvNotT7eFYI/v-deo.html

  • @sadshed4585
    @sadshed4585 1 year ago +1

    Someone let me know the best real-time wav- or text-to-face-animation open-source software. Unless that website's API is real time with video output. Someone let me know though.

    • @LevendeStreg
      @LevendeStreg  1 year ago +1

      You might want to Google the “Thin-Plate-Spline Motion Model” for SD. I’m doing a new video on it.

    • @sadshed4585
      @sadshed4585 1 year ago +1

      @@LevendeStreg Wav2Lip seems to still be quicker than that repo, since it requires training.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      @@sadshed4585 Thank you, Sad. I'm checking that out at the moment. 🙌
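
      For context on the Wav2Lip route mentioned above, a rough sketch of how its pretrained inference script is typically invoked; it assumes the public Wav2Lip repo is cloned with a checkpoint downloaded, and the flag names are as I recall them from that repo's README, so double-check before relying on them.

      import subprocess

      # Run from inside a clone of the Wav2Lip repo; file paths are placeholders.
      subprocess.run(
          [
              "python", "inference.py",
              "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained lip-sync model
              "--face", "avatar.png",        # still image or video of the face to animate
              "--audio", "narration.wav",    # speech the mouth should follow
          ],
          check=True,
      )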

  • @워크-f2p
    @워크-f2p 1 year ago +1

    I thought this was about the new extension that lets you do this in the WebUI instead of D-ID;; D-ID is only partly free.

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Ah, you thought it was made with Deforum, I guess. No, you can't yet make something as good as this in Deforum. But it will come!😉

  • @DarkForcesStudio
    @DarkForcesStudio 1 year ago +1

    Who would want this?... As anything?

    • @LevendeStreg
      @LevendeStreg  1 year ago +2

      Hahaha I would. I can think of so many ways to use this😜

    • @AscendantStoic
      @AscendantStoic 1 year ago +2

      The avatar doesn't have to be based on your appearance, it can literally be anything, an alien, a cat, a robot XD

    • @LevendeStreg
      @LevendeStreg  1 year ago

      @@AscendantStoic yeah - I’m gonna try it out in a later video🙌

  • @skazkaoteberu
    @skazkaoteberu 1 year ago +1

    It is not about creating an avatar in Stable Diffusion or Midjourney...

    • @LevendeStreg
      @LevendeStreg  1 year ago +1

      Well I guess you’re right. I’m going to be doing one on spline in Google Colab soon.🙌

  • @AntonioSorrentini
    @AntonioSorrentini 1 year ago +1

    Everyone talks about this "D-ID", but it's extremely expensive. Please propose something open source.

    • @LevendeStreg
      @LevendeStreg  1 year ago +1

      Yup it's pretty cool, I haven't been able to find a better alternative🙌

    • @AscendantStoic
      @AscendantStoic 1 year ago +2

      There are the Thin-Plate-Spline-Motion models; they can do the same thing on your local computer or on a Google Colab. I think both Nerdy Rodent and Prompt Muse made tutorials about how to use the Colab and how to install it locally (Nerdy Rodent pretty much uses it in all his videos).

    • @AntonioSorrentini
      @AntonioSorrentini 1 year ago +1

      @@AscendantStoic Thank you very much.

    • @AscendantStoic
      @AscendantStoic 1 year ago

      @@AntonioSorrentini You are welcome

    • @LevendeStreg
      @LevendeStreg  1 year ago

      @@AscendantStoic thank you. Yeah, I think I saw that in one of his videos🙌
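
      Following up on the Thin-Plate-Spline-Motion-Model suggestion above, a rough sketch of running its demo script locally once the repo is cloned and a pretrained checkpoint downloaded. The flag names and file paths below are from memory (the demo follows the First Order Motion Model style) and should be checked against the repo's README.

      import subprocess

      # Run from inside a clone of the Thin-Plate-Spline-Motion-Model repo; paths are placeholders.
      subprocess.run(
          [
              "python", "demo.py",
              "--config", "config/vox-256.yaml",          # talking-head config
              "--checkpoint", "checkpoints/vox.pth.tar",  # pretrained weights
              "--source_image", "avatar.png",             # still portrait to animate
              "--driving_video", "me_talking.mp4",        # clip whose motion gets transferred
              "--result_video", "talking_avatar.mp4",
          ],
          check=True,
      )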

  • @martinsaint9999
    @martinsaint9999 1 year ago +1

    So the weathergirl gets fired too in 2023... 🤔

    • @LevendeStreg
      @LevendeStreg  1 year ago

      Hahahahahhaah! I had that very same thought!

  • @Jacques.dAnjou
    @Jacques.dAnjou 1 year ago +3

    That looks horrible. Why would you even do that?

    • @LevendeStreg
      @LevendeStreg  1 year ago +3

      Hahahaha... because - as I say in the video - I've been home with pneumonia and a head cold, so I couldn't do a normal recording. And this is a fun addition to what people would like to learn about. And actually - I think it's fun and looks pretty good. I'm impressed with the technology. It will evolve and get even better in a couple of months. I think it's awesome.

  • @bySterling
    @bySterling 1 year ago +2

    Uhhhh, this is just kind of creepy, and I don't see who would ever use it unless it's an overseas weirdo in their basement, ha.

    • @LevendeStreg
      @LevendeStreg  1 year ago +1

      Hahahaha. Yeah it is kinda creepy 😜

    • @gargantuan6241
      @gargantuan6241 1 year ago

      bySterling, you're an idiot. I can think of 1901830912831283 ways to use this functionality to enhance media. Congrats on making today's dumbest post ever on YT.