AI Animation Realistic Consistency Tutorial: Stable Diffusion + EbSynth (Advanced)
- Published 12 Mar 2023
- Learn how to achieve realistic animation consistency using AI techniques! In this tutorial, we explore an advanced AI workflow using Stable Diffusion and EbSynth to generate realistic video from 3D model animations.
Prompt:
RAW photo, a close up portrait photo of 30 y.o woman [PUT HERE YOUR PROMPT], pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
Negative prompt: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck
Checkpoint used: Realistic Vision 1.3
Size: 512x512, Seed: 4186429302, Steps: 25, Sampler: Euler a, CFG scale: 7, Model hash: c35782bad8, Hires steps: 25, Hires upscale: 1.5, Hires upscaler: R-ESRGAN General WDN 4xV3, Denoising strength: 0.5
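The settings above can be reproduced programmatically through the AUTOMATIC1111 web UI's txt2img API (available when the UI is launched with `--api`). The sketch below only builds the request payload from the listed parameters; the field names follow the public web UI API, but verify them against your installed version, and the prompt strings here are shortened placeholders, not the full prompts above.

```python
import json

# Build a txt2img payload for AUTOMATIC1111's /sdapi/v1/txt2img endpoint
# from the generation settings listed in the description above.
def build_txt2img_payload(prompt, negative_prompt):
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "seed": 4186429302,
        "steps": 25,
        "sampler_name": "Euler a",
        "cfg_scale": 7,
        "width": 512,
        "height": 512,
        "enable_hr": True,                              # hires fix
        "hr_scale": 1.5,
        "hr_second_pass_steps": 25,
        "hr_upscaler": "R-ESRGAN General WDN 4xV3",
        "denoising_strength": 0.5,
    }

payload = build_txt2img_payload(
    "RAW photo, a close up portrait photo of 30 y.o woman ...",
    "(deformed iris, deformed pupils, ...:1.4), text, ...",
)
print(json.dumps(payload, indent=2))
# POST this JSON to http://127.0.0.1:7860/sdapi/v1/txt2img
```

Fixing the seed (rather than using -1) is what makes single images reproducible; for video work you would still generate one styled keyframe this way and let EbSynth propagate it.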
Stable diffusion:
github.com/AUTOMATIC1111/stab...
EbSynth
ebsynth.com/
Checkpoint models:
civitai.com/
Hashtag: #AIanimation #realisticvideo #StableDiffusion #EbSynth #3Dmodels #advancedtutorial #unrealengine #unreal #3d #animation #cgi
Creative animation!
wow I was really expecting comfyui ^^
Interesting stuff
awesome tutorial, would like to see more on how to create consistent animations.
I'm working on it. Consistency is not an easy task, but I'm sure it will get easier soon; this AI is growing in a crazy way :)
can you explain why you use EbSynth? I didn't understand why you use EbSynth with several generated style-image
EbSynth shifts the generated image's pixels to accommodate the changes/movements in the video, creating a PNG sequence in which the altered image style is animated the same way as the original video.
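The "pixel shifting" described above is motion-guided style propagation: the stylized keyframe's pixels are moved along with the motion estimated in the source video. The toy NumPy sketch below illustrates the idea with a known uniform displacement field standing in for real optical flow; EbSynth itself uses patch-based synthesis rather than a plain warp, so this is only a conceptual illustration.

```python
import numpy as np

def warp_by_flow(image, flow):
    """Warp a 2D `image` (H, W) by an integer displacement field
    `flow` (H, W, 2) holding per-pixel (dy, dx) motion vectors."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel pulls its value from where it came from.
    src_y = np.clip(ys - flow[..., 0], 0, h - 1)
    src_x = np.clip(xs - flow[..., 1], 0, w - 1)
    return image[src_y, src_x]

# Stylized keyframe: a vertical stripe of "style" at column 1.
stylized_key = np.zeros((4, 6), dtype=int)
stylized_key[:, 1] = 9

# Motion between video frames: everything moved 2 px to the right.
flow = np.zeros((4, 6, 2), dtype=int)
flow[..., 1] = 2

next_frame_style = warp_by_flow(stylized_key, flow)
print(next_frame_style[0])  # the stripe has moved to column 3
```

Repeating this frame by frame is what produces the PNG sequence: one stylized keyframe, animated by the original video's motion.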
Why is there an afterimage when the character's hand moves in the final output video? 😂
Sorry, I don't understand. Can you explain in more detail, or post a video link?
Can you do it again but with a 2d anime style? Or is that impossible?
Sure, it's the same workflow; just change the output style image to an anime style.
The style is decided by Stable Diffusion. Once you have identified your style, just follow the process shown in the tutorial.
@@elvismorellidigitalvisuala6211 I only have a 1080ti. Think it’s possible for me or do I have to wait until I get a 3090 or better?
@@FTONYProductions Maybe it will be slow, but it works. You have to try it; let me know!
Hello, I'm a graduate student majoring in CS from Japan.
I have some experience in 3D animation, like making mods for MHWI.
Are you running a company or studio?
I think these projects are so cool, and I would like to go further into AI animation.
So can I get in contact with you?
Hi, I work as a freelancer, no studio or company :)
How can I help you?
@@elvismorellidigitalvisuala6211 Thanks for your reply.
I have no exact idea right now;
when I have some ideas I will contact you 😁
@@elvismorellidigitalvisuala6211 Hello, I never leave comments on YouTube, but something made me send one here, and urgently. Whatever it is you guys are working on, I want to be involved. Let's create something big in this moment. I would definitely love to support it, even if it means beers/money for the odd good meal. Leave me a way to contact you.
See my message below
@@elvismorellidigitalvisuala6211
Hi, I now have an idea.
I'm working on some better style-transfer algorithms,
and I want some datasets like the one shown in your video: image datasets with continuous frames of only people and no background. Can I contact you through the email on your LinkedIn?