- 54
- 179 998
Grockster
United States
Joined April 4, 2006
🏆 UNREAL Flux Skin, Redux Ingredients, PulID v2 Face Swaps and more for ComfyUI
✨In this video, I’ll showcase the ComfyUI power of PulID v2 🎭, a game-changing tool for seamless face swapping and importing faces from reference images with stunning accuracy. We’ll also dive into Redux 🎨, an innovative feature that takes in image ingredients and automatically crafts breathtaking composited masterpieces. Plus, I’ll explore the magic of Blur FX 🌫️ and how it can add depth, realism, and cinematic appeal to your creations. Let’s jump in and see what’s possible! 🔥
* 1 on 1 LIVE AI Training (Springtime Savings for new clients!): www.grockster.com/newclientsecretdeal
* Offline Requests to Support your ComfyUI needs - www.grockster.com/support
-- Chat Share Have Fun w/ me on Discord - discord.gg/pWnectzw6Z --
RESOURCES (all models and files linked in install instructions)
* Workflow and Install Instructions - civitai.com/models/1202578?modelVersionId=1354149
* Flux Model Testing and LORA Inventory - docs.google.com/spreadsheets/d/1543rZ6hqXxtPwa2PufNVMhQzSxvMY55DMhQTH81P8iM/edit?usp=sharing
KEY COMFY & STABLE DIFFUSION TOPICS
* Wildcards Dynamic Prompts
* TeaCache + WaveSpeed
* Amazing 1-step adherence booster
* Quick node refresh tip
* Advanced Beaming Use Everywhere by Groups
* Advanced Mile High Styler + Custom Stylers
Views: 4,682
Videos
🌀 Supercharged dynamic prompts, adherence boosts, Cache Speed Boosts and more!
Views: 1.9K · 21 days ago
In this video, we dive deep into the world of Dynamic Prompts 🌀, exploring how to take your creativity to the next level. Discover advanced techniques for modifying and customizing the Mile High Styler ✨, unlocking its full potential for unique, standout styles. Learn how to supercharge your workflow with TeaCache ☕ and WaveSpeed ⚡, delivering incredible speed boosts and efficiency. I’ll also s...
👀Mind-blowing AI LOCAL video creation in Comfy in a fraction of time!
Views: 7K · 1 month ago
Discover a game-changing movie maker workflow that lets you LOCALLY create high-quality 20 second videos at lightning speed compared to what you may have seen previously! 🚀 In this tutorial, I’ll show you step by step how to streamline your video creation process with an innovative workflow and a fun community challenge for us all! Perfect for content creators looking to save time without sacri...
🙋Help! My PC can't run the latest AI Apps, or can it?
Views: 428 · 1 month ago
Product Reviewed: MimicPC (links below) * Get MimicPC: ai.mimicpc.com/Grockster 🚀In this video, I walk you through a quick and easy web app that lets you leverage beefier video cards to run AI apps. Not just Comfy, but everything from LORA training, to audio creation to LLMs. Compared to RunPod, this interface is super slick and easy to set up! We'll walk through step-by-step in getting into th...
🚀 ComfyUI Power Moves! Master the New Interface & Flux Inpainting
Views: 2.6K · 2 months ago
🎨 In today’s video, I’m detailing the amazing new ComfyUI interface 🖥️ that’s designed to turn you into a true power user. 🚀 With its extremely efficient design and enhanced functionality, you’ll master complex workflows like never before! But wait, there’s more! I’m also diving into the brand-new FLUX Inpainting feature 🖌️, which allows you to quickly tailor your images to the ends of the earth. ...
🎭Pro tips for Flux HD Face Swaps & Detail Fixing! Level up!
Views: 5K · 2 months ago
🌟Today’s episode is packed with pro-level techniques that’ll level up your results and save you tons of time! 👀 We’re starting with using Detailer and LORA to nail those high-res face swaps 🔥, targeting specific faces with crystal-clear detail. Then, I’m walking you through the best settings to get peak performance out of your FLUX models 💻, plus some quick tips to speed up those node connectio...
🚀 Insane Flux Control - New Loader, Crazy Compositing & Secret Memory Hacks!
Views: 22K · 3 months ago
🌟 Hey there, creators! 🎨 Ready to take your AI art game to the next level? Today, we’re diving into some serious upgrades that’ll make your workflow faster, smoother, and way more efficient. 🚀 First up, we’ll check out an all-new efficient loader 🔄 that steps you through the creation process. Plus, I’ll introduce you to a powerful new compositing method 🖼️ that brings even more control to your ...
🐶Reviewed: AI Enhance Tool to upscale, refine & polish your masterpieces!
Views: 674 · 3 months ago
Product Reviewed: AI Arty Image Enhancer (links below) * Get Aiarty Image Enhancer and Midjourney monthly plan: www.aiarty.com/midjourney-prompts/?ttref=2409-aia-mj-ytb-Grockster-ded-wxr * Learn more about Aiarty Image Enhancer: www.aiarty.com/?ttref=2409-aia-mj-ytb-Grockster-ded-wxr In this video, I walk you through an amazing new enhancement in AI art generation called AIArty. It's super simp...
💪 Flux Frenzy: Unleash Screaming Fast Speed Boosts & Next-Gen Tools!
Views: 2.9K · 4 months ago
In this video, we’re diving deep into the world of screaming fast FLUX speed boosts and next-gen tools that will revolutionize your workflow. 💡 Learn how to master outpainting with Flux for stunning results that go beyond the canvas. 🖼️ Plus, discover how to decrease render speeds by 90% to save time on every project. ⏱️ We’re also sharing brand new productivity tips 💼 that will help you work s...
🌟 Ultimate Flux Speed and Inpainting Tools (with free secrets!)
Views: 7K · 5 months ago
🚀 Get ready to experience a whole new level of speed and efficiency in image compositing! In this video, we’re showcasing incredible boosts in speed that will revolutionize your workflow. 🖼️ But that’s not all - we’re also introducing some amazing new tools for advanced inpainting, allowing you to seamlessly fill in missing parts of your images with stunning accuracy. 🖌️ Plus, discover the ulti...
💥 Flux EXTREME - Cutting-Edge Techniques for Ultimate Control
Views: 12K · 5 months ago
In this video, we're diving deep into Flux Advanced Techniques, so buckle up 🌟 Discover how Flux lets you fine-tune every aspect of your composition, giving you unparalleled creative freedom 🖌️ Ever wanted to tweak just a small part of your image? With Flux inpainting, you can now do it in a breeze. Learn how to fix those little details or completely transform sections of your work without brea...
🖌️ From Doodles to Masterpieces - Caricature Craziness in ComfyUI
Views: 1.9K · 6 months ago
🚀 Embark on an artistic adventure with Lora as she dives into the delightful world of ComfyUI! In this video, we’ll explore the intricate art of caricatures, focusing on face detailing that brings characters to life. Lora shares her personal journey, unlocking new levels of creativity and efficiency. Whether you’re a seasoned artist or a curious beginner, join us for tips, tricks, and a dash of...
🚀Unleash Blistering Rendering Boosts & Face Masking in ComfyUI
Views: 2K · 7 months ago
🌐🚀 Are you ready to witness unthinkable speeds when you unleash a brand new sampler saving you over 75% render time? This new configuration in ComfyUI will leave you in awe. Say goodbye to those sluggish image generation times. This is more than a tutorial; it’s a gateway to unparalleled efficiency! 🎨💫 Then we'll focus on Face Masking Mastery by diving into the depths of face masking with new w...
🤖Jaw dropping Image Captioning with Florence2 and BG Scene Swaps in ComfyUI
Views: 4.3K · 7 months ago
🧙♂️ Boaty’s Magic: Upscale & Transform while using Photopea layers!
Views: 1.3K · 7 months ago
🕶️ Master Silhouette Art: Black Outline Magic Revealed and Photopea!
Views: 1.1K · 8 months ago
🔧 Power Tool Discovery for Amazing New Experiences in ComfyUI!
Views: 5K · 8 months ago
🎨✨ Mastering Character Poses with ControlNet and Layers in ComfyUI!
Views: 5K · 8 months ago
✨Speed & Detail: Hyper Stable Diffusion, LORA Power Play & Stunning Overlays! ⚡🌈
Views: 3.1K · 9 months ago
⚡Harness Lightning-Fast Detail with ComfyUI PERTURBED + 🔮 Mask Wizardry & Fashion Secrets! 🤩
Views: 6K · 9 months ago
🤯 Incredible 1-Node Attention CFG Tricks for 🤩 Stunning Detail in Stable Diffusion
Views: 2.9K · 9 months ago
Community AI Art Gallery Showcase - Session #4
Views: 168 · 9 months ago
New & Exciting Ways to Provide AI Community Support
Views: 104 · 9 months ago
🎶 Upscale Your Musical Genius with FREE Sonauto and a new ComfyUI Workflow! 🎶
Views: 2.4K · 10 months ago
Community AI Art Gallery Showcase - Session #3
Views: 181 · 10 months ago
How creating amazing text effects and AI Art in Comfy can translate into REAL money!
Views: 1.6K · 10 months ago
Community AI Art Gallery Showcase - Session #2
Views: 199 · 10 months ago
Photoshop is DEAD?! Can ComfyUI Layer Diffusion unseat the champ?!
Views: 5K · 11 months ago
Community AI Art Gallery Showcase - Session #1
Views: 252 · 11 months ago
Lightning and local LLM Chatbots and Comfy, Oh my!
Views: 2.3K · 11 months ago
I'm having issues with the MaraScott nodes. I cloned the repo, and I figure I probably need to run the requirements or something. If ComfyUI is not detecting a node like this, how would I know whether it's an issue of requirements, dependencies, or something like a directory path? Would the logs tell me that kind of information? I can't pay you for tech support, unfortunately, but maybe if I got ComfyUI running I could get another job instead of reading GitHub pages all day.
Figured I'd mention this for anyone debugging on Linux: if you installed ComfyUI in a venv, your config.ini file is somewhere else. Instead of being in the expected location (/custom_nodes/comfyui-manager/), the config.ini was stored in a user-specific directory: /home/username/.../ComfyUI/user/default/ComfyUI-Manager/ ('default' is the one that has the config.ini Grockster mentions at 6:31; I almost missed this tip, it flashed so fast, since I tend to watch vids at warp speed because of ADHD). I found it the long way by running find with -name "config.ini" 2>/dev/null and reading through the list. The Linux geniuses out there probably think this is obvious, but it wasn't for me, so if you need it, there it is. (I don't know if this messes up dependencies, but why are you using ComfyUI if you don't like seeing walls of red dependency errors?) My setup: EndeavourOS Linux x86_64, kernel 6.12.10-arch1-1, zsh 5.9, Plasma 6.2.5 (kwin, Breeze Dark, Konsole), AMD Ryzen 5 5600X, NVIDIA GeForce RTX 3090, 64GB RAM.
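The long-way search mentioned in the comment above can be sketched as a quick shell one-liner. The paths below build a throwaway demo tree standing in for a real ComfyUI install (they are hypothetical; substitute your own install root):

```shell
# Demo tree mimicking the venv layout described above (hypothetical paths)
mkdir -p /tmp/demo_comfy/ComfyUI/user/default/ComfyUI-Manager
touch /tmp/demo_comfy/ComfyUI/user/default/ComfyUI-Manager/config.ini

# Search the tree for config.ini, silencing permission-denied noise on stderr
find /tmp/demo_comfy -name "config.ini" 2>/dev/null
```

Searching from / works too but is slow; starting the search at your ComfyUI directory narrows it down much faster.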
Regarding "Put all Seg models in ComfyUI/models/Yolov8": I don't have this folder. How can I create it?
It's a folder you have to make: just create an empty folder named "Yolov8" in the "models" directory and put the seg models in it. You can also create a symlink to your regular folder, for example: mklink /D "C:\ComfyUI\models\Yolov8" "C:\ComfyUI\models\ultralytics\segm"
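For Linux or macOS users, the rough equivalent of that Windows mklink /D command is ln -s. The directories and model filename below are hypothetical demo paths, not a real install; point them at your actual ComfyUI model folders:

```shell
# Hypothetical demo paths; substitute your real ComfyUI models directories
mkdir -p /tmp/demo_models/ultralytics/segm

# Create (or replace, via -f -n) a symbolic directory link named Yolov8
ln -sfn /tmp/demo_models/ultralytics/segm /tmp/demo_models/Yolov8

# A model dropped into one location is now visible through the other
touch /tmp/demo_models/ultralytics/segm/example-seg.pt
ls /tmp/demo_models/Yolov8
```

This avoids duplicating multi-gigabyte model files while satisfying nodes that insist on a specific folder name.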
Hi, how do you create your avatar with lip sync?
It's a combination of several techniques including Live Portrait, Face Fusion, Animate Avatar, Hedra and video editing tools. Really varies based on scenario and avatar I generate
How do I install the first-block speed boost node?
It's called WaveSpeed and it's in ComfyUI Manager
Thank you very much! I've just got one question. How can I delete Custom Categories / Templates I created (ComfyUI-N-Sidebar)? I just don't find an option for that
It's been a while since I've used it, but I believe you right-click the category and then there's a delete option. It may also have been in the Sidebar Settings screen.
@@GrocksterRox Thank you!
Super informative and comprehensive video. You got a sub buddy. 💯
Awesome, just a few more and we'll beat Mr. Beast! :) haha every little bit helps and thanks for passing along to others!
How did you make the talking avatar?
Blend of Live Portrait, Animate Anything, Hedra and video editing tools
Anyone else having the issue that it uses CPU instead of CUDA even though CUDA is available?
Did you make sure that you have CUDA selected and not "default"?
@@GrocksterRox Yes, it was selected already. Now I also added --cuda-device 0 --gpu-only and then I uninstalled onnxruntime and onnxruntime-gpu and reinstalled onnxruntime-gpu.
@toledavid89 and that resolved it or it's still an issue? If still an issue, feel free to jump on the discord and we can try to figure it out
I love you for this!
I'm so happy this is valuable and makes you happy! Feel free to share this in the channel with others so everyone can learn.
Hi all, can anyone please help direct me on how to get the "YoloSegNode" installed? I have downloaded the yolo .pt file and put it in the models/yolo folder... also tried ultralytics/segm. Any help, please?
Great video and explanations. I got your zip from Civitai, but ComfyUI cannot find 3 custom nodes. Warning: Missing Node Types. When loading the graph, the following node types were not found: MaraScottMcBoatyUpscalerRefiner_v5, CheckpointLoaderNF4, MilehighStyler. Any idea where I can get them?
Thanks so much! Interesting that those are missing, but here's where you can install them: * Mile High Styler - civitai.com/models/119246/comfyuimilehighstyler * MaraScott - github.com/MaraScott/ComfyUI_MaraScott_Nodes * You shouldn't need the NF4 loader; I thought I had removed it from the workflow, but you can remove it too
@ thx for the fast response buddy will have a look in a few days and let you know :).
@Nova-ul3vv Good luck
@ Worked :)
I can't find the model path for "Jags Solo Seg node". I have the model and have already put it everywhere but I can't access it
I set up a folder symlink (essentially a pointer): mklink /D "C:\StableDiffusion\ComfyUI\models\Yolov8" "C:\StableDiffusion\ComfyUI\models\ultralytics\segm". Make sure the models show up in the Yolov8 one; to avoid duplicating the models, you can use the folder pointer command above (include it in your run_nvidia_gpu.bat file).
With Flux-created images, the face is always oily
It really depends on the model you're using. I really love Project0 version 3, but the effect will vary based on the base model you're using.
your face looks so real wow
The magic of AI. Haha
Amazing workflow, thanks a lot. I have a problem: I have "skin_yolov8m-seg_400.pt" in my "ultralytics\segm" folder, but "Jags-YoloSegNode" doesn't find it. Where do I have to put it, please?
trying to update ultralytics and it returns this error: ERROR: To use this feature, you must either set '--listen' to a local IP and set the security level to 'normal-' or lower, or set the security level to 'middle' or 'weak'. Please contact the administrator.
Yep, I had a note in the video about this, check the place where the screenshot shows how to update your security settings.
@@GrocksterRox awesome thanks, i must have missed it first watch- i'll take a second look! just found your channel. love what you're doing- really appreciate it!
@bstuartTI I'm so glad it's been helpful for you. Feel free to share with others so everyone can benefit.
@@GrocksterRox def sharing- gl growing the channel. i've been learning up to this point by backwards engineering workflows- but your explanations are great. keep it up!
Did you create the speaker avatar at the bottom right with Live Portrait in ComfyUI?
It looks like yes...
It's a custom blend of live avatar, Hedra, some video blending and more.
Good guess!
Hi, very good video. I'm somewhat new to using ComfyUI and interested in using the skin texture booster and extra skin texture booster nodes. Could you be so kind as to share a workflow with just those two, please?
Yep, they are linked in the description of the video (a link to CivitAI where you can download it for free)
great video. I love your animated avatar. Does one of your videos cover that talking Avatar?
I think it's Act-One by Runway, though I may be wrong.
Not yet, but there are several free services and nodes to accomplish this including Hedra, Live Portrait, Animate Avatar and more. Good luck!
I feel so sad knowing that people don't even know about my node, which I had to make because of VRAM issues (I use a 6GB VRAM card). Look for my node TogetherVisionNode, which can enhance prompts using a free API, almost unlimited. Note it also has a node that can generate images using Schnell in 4 steps, free of cost too. Isn't it great? Try it now.
👋 hi 👋
Hi there, thanks for stopping by!
I never managed to make PulID or PulID 2 work. For me, a good custom node should always install correctly on the first attempt. If it doesn't, it's badly programmed, so I don't use it.
100% agree, it's definitely a pain in the butt, but given there isn't an easy/effective alternative, we deal with the pain to get the reward :) Really appreciate you and your feedback!
I wish I could help you, brother.
Worked on the first try for me. What kind of errors or trouble did you face? Maybe I can help.
It might be easy if everyone jumps into the support channel on my discord server because then we can show videos and screenshots to help resolve the issue
Cool workflow. Project0 V3 looks good, but its NSFW anatomy isn't as good as my Jib Mix Flux in my testing ;)
Sounds like you and Helga need to have a collaboration session to merge the best of both worlds :)
Perfect as always, thank you for your job and sharing your knowledge
Definitely, I'm so glad you and others are learning from this and hopefully the entire AI community will continue to learn and grow their fun new techniques!
Awesome! Could you please add a video showing how we can do the same things with video?
That's an interesting challenge! The problem we have to figure out is how to change the angles of the background even though you will easily composite your characters or elements on top. Good thinking!
Great tips, thanks for sharing!
I'm so glad you found them helpful!
Awesome video! Thanks a lot for featuring and explaining the Mile High Styler!
Thanks so much, I live and die by the Mile High Styler, thanks so much for creating it! :)
Thought I'd share my initial results with the TeaCache node. I'm running Flux on an M1 Max MacBook Pro with 64GB RAM, and I usually get speeds around 19 seconds per iteration at 1216 x 832. Running the TeaCache node had me starting at 19 seconds, gradually dropping to around 8 seconds through the 30% to 60% range, then gradually rising to close out around 18-19 seconds again. No question it lopped time off my render, bringing me down from approx. 9 minutes per image to around 6.5 minutes, but it seemed like strange behaviour. Maybe not, though; it might be standard for the way it renders? Thanks for the great vids, btw.
Thanks so much for the great feedback and totally agree with you, there's magic science going on in the background but it definitely helps speed things up! Love the data-driven approach as well - great job!
Thank you for sharing your knowledge! Really appreciate it!
Absolutely, anything I can do to help the community, I'll try :)
The Mikey Nodes are incredible for Wildcards also. ;) Thanks for showing about the Wavespeed settings!
Absolutely and I'll have to check them out! Continued wishes for success!
Amazing video! Always packed with priceless tips and tricks for comfy! I really like that idea with running 2 samplers and making my own nodes for my wildcards is going to be great! Thank you for always sharing the knowledge! Jay
Thanks Jay, I'm so glad you found the tips and tricks useful - keep experimenting and learning!
Awesome video! gonna try it out soon. What was the discord link?
discord.gg/pWnectzw6Z - Looking forward to collaborating! :)
I am not entirely convinced that the 2-stage KSampler is better than placebo, but it won't hurt speed too much. Maybe there is something about having 2 different seed values; I would have to do some more testing.
Oh totally - this (as with everything else) is another tool in the toolbelt, and there's many times using the standard method will work great. I'd definitely recommend experimenting because it really opened my eyes to the possibilities. Thanks so much for the feedback!
This is what we need more of, slow it down and go deep dive on a narrow subject
Love the feedback, thank you so much, and thanks so much for sharing with others!
And love your vids! You explain amazingly
Thank you SO much! I want to help the AI community learn this stuff as much as possible (and I'm learning all the time too of course..haha)
FIRST!
You got it, thanks for being a dedicated subscriber!!! :)
Hello mister, a video on how you made your talking avatar will be great, how did you do it?
Thanks so much! It's a mix of methods that I vary but generally it's a mix of Hedra, Live Portrait and the new animated avatar that's mixed in video editing software
Great multi-workflow canvas you created. Two questions: 1) How do we see our RAM usage in the newest ComfyUI, since the queue area is different now? 2) Is there a RAM tweak for Mac as well? Always looking for ways to speed up anything and everything.
Hi there, thanks so much for the feedback! To see the Cryss monitoring for the new UI, you just need to make sure you're using the latest version of Comfy (you may need to force the update by going to the update folder outside of your Comfy folder)
Can we do the pose change in the newer Flux model and Control Net? Thank you!
Yup, absolutely! In fact, the video is already ready for your consumption :) ua-cam.com/video/WSx74Uep590/v-deo.html Enjoy!
2:03 As a developer, I see that screen and I would rather write code than learn something like this. I don't see why they made Comfy node-based. I'll wait until it's text-to-video or image-to-video.
Thanks for the feedback, I'm not sure what you mean by writing code. Can you talk a little bit more about this?
@@GrocksterRox I just feel confused when I don't know what to do in a node-type environment. With C# or Python I can search for what I need. I have done Blender tutorials with similar nodes and I can follow along, but I don't feel like I learn it. I just find node environments more intimidating than code; with code I can copy a working example even if I don't understand it. Probably has something to do with an old dog trying to learn new tricks.
Feel free to jump on the discord and we can chat or if you need a more in depth walk through / lesson, you can sign up for 1:1 LIVE training sessions - www.grockster.com/services
I tried and failed. Utter chaos, no matter what I tried... The end frame after Shot 1 is so disturbed that Florence can't see anything and hallucinates stuff, and then the LLM does its part. Some things could be funny... He-Man in front of his castle (supposed to fight a dragon) ended up playing golf on a golf course! In theory it would be funny stuff, but LTX is just a washed-out chaos generator :/ The outputs look even worse than generating with SVD or AnimateDiff :D
Interesting. Sorry you didn't get good results. Do you want to jump on the discord and we can take a look at the settings and prompt used etc?
@@GrocksterRox Sorry :) I don't even have Discord and have never used it. I feel like a first grader who is supposed to study ;D I know nothing about video AI training; even my prompting is still like I would do it with SDXL. It's always about how to get "control". I see some people doing great stuff that I couldn't do. For amateurs like me, it lacks tools for area selection, motion control, camera movement, inpainting... I need to wait until some clever people have made this stuff :)
Yup understood - if you're just starting to get into the Comfy world in general, I offer 1:1 LIVE training sessions if there's interest - www.grockster.com/services
One thing that I think would make a huge difference is being able to separate the foreground from the background, just like in image generators. For a rotating shot, I could input a very long background, so the AI only has to worry about the foreground character(s).
That's an interesting theory! I wonder how compositing and blending would look so that it didn't look very layer like?
@GrocksterRox I noticed in a video-to-video demo, turning an old 4:3 into letterbox using infill, that it tended to work best during the panning shots. I figure that if you at least give the AI a clue about what it's supposed to be panning towards, it could save a lot of attention and therefore work better. At least that's my hypothesis.
Very interesting observation, thanks!
Creepy anyway.
Definitely will take time to get perfect, but it's amazing how far it's come even in the last couple of months.
@@GrocksterRox The same quality as it was even 6 months ago. The same fusing limbs, inconsistency, hallucinations, color bleeding, etc.
@procrastonationforever5521 little progress but hopefully will improve
Yes local flavor is the right way to make videos .. who needs online services 😂
100% right on! :)
I see that everyone uses Euler for the scheduler... In my testing, dpm2 gives 10x better quality than Euler, but takes a few minutes more. I don't want to use Euler because the videos aren't as sharp and have a lot of blurry pixels. What do you think? How do you clean up Euler pixels in videos? Thanks.
Interesting - are you using dpm2 or dpm2_ancestral for your sampler? Also, what scheduler are you using in combination with it? Thanks so much for your thoughts!
@GrocksterRox Ancestral gives me bad output even on Euler... I use the normal scheduler and dpm2.
@@robertaopd2182 It's odd, with this workflow I tried dpm2/normal versus the standard euler_ancestral/simple and the latter is giving significantly better output. Maybe try this workflow out with both and post the results. I'm trying to get the optimal output (with consistency) as possible
Dpm2 with the beta scheduler ftw 🙌
@@davidcache6802 Yep, dpm2 with only 10 steps is almost as good as euler_ancestral with 30 steps... works better for me, and I use a 3080 10GB.
What are your hardware specs again?
I'm running a 4090 (24gb vram) but others with 16gb have been able to run this as well (haven't tried less yet)
I need to check it out :) Always good when people work on things. I did my own workflow, and I can create TV-movie-like 7-second scenes in Full HD with it, without much AI garbage :D LTX is so bad compared to Hunyuan that I couldn't use it :/ There's no comparison... for me, LTX was dead on arrival. It can do nearly nothing if I compare the output quality :D
Yup, I totally get that, and that's why I'm engaging the community with this group challenge to see if we can get it better. Hunyuan is very interesting, but it still takes a TON of time for minimal results (even at a higher quality level), so until it gets even slightly near the same speed, it's not going to be a viable option for media creators (unless they're willing to wait months for a clip of significant length to come out)
@@GrocksterRox I'm looking forward to testing your workflow soon. I created an LTX workflow, but as soon as I get "motion" in a video, it disintegrates after two or three seconds :( LTX couldn't even generate something "easy" like fireworks for me... I tried for some hours. Right now I'm trying to create a decent 3-minute video with Hunyuan, with scenes of 5-18 seconds in length, sometimes with frame interpolation... And you are right: it has already taken me 2 days and it's 3/4 finished now, around 5 hours of pure "rendering" time in total. Media creators could use A100 GPUs and get more out of it with enough time. I don't think consumer hardware right now will be good enough for more, but we are getting there, with a 5090 coming soon :D
Yeah, I know it'll get better, but with Hunyuan taking SOOO long to render stuff, it just isn't a tenable solution for me (and Cog was worse!). That's why I'm trying to get us all looking at clever/crafty ways to get LTX to produce faster results.
@@GrocksterRox Yes, time is a thing. I remember in the '80s we had to wait 12 hours or more for our Amiga to render just one ray-tracing image that we had modeled for hours :D Doing just a 3-minute video, rendering 200 video clips with Hunyuan, reminds me a lot of that. It sucks :D
@@RagonTheHitman Haha the good ol' days! :)
I look forward to testing this out.
Great, I'd love to get your thoughts as well as collaboration with the community challenge. Good luck!