Haha, this is pretty cool. No replacement for full-on artists making 3D models or 2D VTuber models, but still a nice cool novelty!
Can't wait for the Controlnet followup, I'm sure that's where everyone's excitement is right now!
Yeah I know. It’s also where my mind is at the moment. Trying to grasp all the possibilities and how to create the best workflow for it🙌
Thank you so much @LouisGedo for your comment. Don’t know what happened, but your comment just disappeared - but thanks!🎉
The title got me fooled. The talking avatar was created via D-ID, not SD or MJ.
Sorry about that. That was not my intention. Thanks for watching though!🙏
Same.
@@tomburden Sorry about that. I guess you could maybe do it with Deforum...
@@LevendeStreg You still made a cool video!! Neat stuff!
@@tomburden Thank you kindly. I really appreciate that! Thanks for watching!
Broken down nicely, what a world! Thank you 😊
Thank you kindly and thanks for watching 🙌
I'm quite curious whether it's possible to synthesize very high-resolution images using only the Stable Diffusion model itself, without external upscalers.
That's a question no one dares to answer!
But I hope for a logical and correct answer.
Maybe try reading this publication about conditional GANs: openaccess.thecvf.com/content_cvpr_2018/papers/Wang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf
This is great! I love it. Thanks for sharing.
Thank you! I’m glad you like it. And thanks for watching!
how we gonna train our photos?
You might want to check out this video I did on the topic:
ua-cam.com/video/jvNotT7eFYI/v-deo.html
Nice video 😊
Thank you kindly! I really appreciate it 🙌
Is it possible for it to be used as an avatar that can track mouth and eye movements for a live stream?
I’m not quite sure. There may be an option for that in the enterprise solution.
But you do know that NVIDIA has a solution for that, right? The Maxine eye control.🙌
Can you please tell me how can I do hand movements of this AI avatar character?
Great question. You can't in these solutions. You'd have to rig it up in After Effects or Animate or some other animation software🙌
kinda pricey
Yes, I agree!
There's now an extension that lets you do this in the webui instead of D-ID, and I thought this video was about that;; D-ID is only partially free, after all.
Ah, you thought it was made with Deforum, I guess. No, you can't yet make something as good as this in Deforum. But it will come!😉
Well, she doesn't mention the watermark across the whole screen unless you pay 200 dollars a month. Very helpful, really. Thank you for wasting another hour of mine.
Please note that this is an old video. The watermark was not as prominent on the videos back then.
Ah ok. Sorry for being too aggressive then 🙇@@LevendeStreg
Someone let me know the best real-time wav- or text-to-face animation open-source software. Unless that website's API is real-time with video output. Someone let me know, though.
You might want to Google "Thin-Plate-Spline-Motion-Model" for SD. I'm doing a new video on it.
@@LevendeStreg wav2lip still seems to be quicker than that repo, since that one requires training.
@@sadshed4585 Thank you, Sad. I'm checking that out at the moment. 🙌
Who would want this?... As anything?
Hahaha I would. I can think of so many ways to use this😜
The avatar doesn't have to be based on your appearance, it can literally be anything, an alien, a cat, a robot XD
@@AscendantStoic yeah - I’m gonna try it out in a later video🙌
So the weathergirl gets fired too in 2023... 🤔
Hahahahahhaah! I had that very same thought!
Everyone talks about this "D-ID", but it's extremely expensive. Please suggest something open source.
Yup it's pretty cool, I haven't been able to find a better alternative🙌
There are the Thin-Plate-Spline-Motion models; they can do the same thing on your local computer or on a Google Colab. I think both Nerdy Rodent and Prompt Muse made tutorials about how to use the Colab and how to install it locally (Nerdy Rodent uses it in pretty much all his videos).
@@AscendantStoic Thank you very much.
@@AntonioSorrentini You are welcome
@@AscendantStoic thank you. Yeah, I think I saw that in one of his videos🙌
It is not about creating an avatar in Stable Diffusion or Midjourney...
Well I guess you’re right. I’m going to be doing one on spline in Google Colab soon.🙌
that looks horrible. why would you even do that?
Hahahaha... because, as I say in the video, I've been home with pneumonia and a head cold, so I couldn't do a normal recording. And this is a fun addition to what people would like to learn about. And actually, I think it's fun and looks pretty good. I'm impressed with the technology. It will evolve and get even better in a couple of months. I think it's awesome.
Uhhhh this is just kind of creepy and I don't see who would ever use it, unless it's an overseas weirdo in their basement ha
Hahahaha. Yeah it is kinda creepy 😜
bySterling, you're an idiot, I can think of 1901830912831283 ways to use this functionality to enhance media. Congrats on making today's dumbest post ever on YT.