Topaz Labs Upscaler: topazlabs.com/ref/2377/
Hi, sorry to bother you, but I've tried everything to install the Integer and Image Scale to Side nodes, using the Manager or providing the GitHub link I found from Derfuu, but the two nodes are still always missing. How can I solve this?
@@alessandrobatacchioli2502 Just updated the workflow, should work now.
8:00 you can remove that Viggle watermark on the Fusion page with an Ellipse node plugged into the garbage matte input of the Delta Keyer, without going to the Color page 😊
Oh nice to know! 🙏🏽🙏🏽🙏🏽
Gamechanger for sure. Cant thank you enough for taking the time to prepare the workflows and make a tutorial for us to use it too.
Where's the best place to learn this from the ground up? Quite a lot of this is confusing with talk of models, APIs, and LoRAs.
I use Midjourney, Runway, pad & AE. Also dabbled in Unreal Engine.
A lot of it I do know, like clean plates and masking, but as a first-timer this looks a bit overwhelming with nodes and spaghetti everywhere 😂.
@@armondtanz Banodoco disc is a great place to learn. Comfy workflows are drag and drop and you can learn a lot just by using and studying other people's.
@@melchiorao9759 hey. I'm 53 this year... is disc short for discord?
I'm looking to pay sum1 for Q&A, I learn a lot better like that; Fiverr has some useful ComfyUI services.
I probably just need to learn the terminology, then how to set up the things the uploader was talking about.
Once that groundwork is done then you can get a better feel of dropping stuff in.
@@melchiorao9759 my comment got deleted?
@@melchiorao9759 youtube so strict on comments? Mine keep gettin nuked :(
When loading the graph, the following node types were not found:
Integer
Nodes that have failed to load will show as red on the graph.
same here. any solution to this?
Awesome video! Thanks for taking the time to share your experience.
Doing god's work my friend! Amazing tutorial as usual
Thank you man ! It's always amazing to learn new stuff!
🐐 for making this!
Let’s blow up your UA-cam channel 👊🏻
this is the f**king future, bruh😭😭
The Joker works so well because his colors are very different from the background... that is a key part of it.
In the Image Resize node there's an error: the method can't be "false", it needs to be "keep proportion", "fill / crop", or "pad".
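For anyone confused by those options: "keep proportion" just scales both dimensions by the same factor so the aspect ratio is preserved. A rough Python sketch of the idea (my own illustration, not the node's actual code):

```python
def resize_keep_proportion(width, height, target_side):
    """Scale so the longer side equals target_side, preserving aspect ratio."""
    scale = target_side / max(width, height)
    return round(width * scale), round(height * scale)

# A 1920x1080 frame scaled so its longer side is 512:
print(resize_keep_proportion(1920, 1080, 512))  # → (512, 288)
```

"fill / crop" and "pad" instead force the exact target size, either by cropping the overflow or by adding borders.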
thank you for the tutorial! this makes viggle much more viable. mind sharing your workflow file that includes the ipadapter nodes? for the less comfy savvy of us
There is a new rembg node in Comfy called Inspyrenet Rembg that does a fantastic job creating the alpha matte.
You should give it a try
GREAT! Video.
How can I make sure I don't lose the shadows and that the light composition is seamless?
How do I fix this error? Prompt outputs failed validation:
VAELoader:
- Value not in list: vae_name: 'vae-ft-mse-840000-ema-pruned.ckpt' not in ['taesd', 'taesdxl']
CheckpointLoaderSimple:
- Value not in list: ckpt_name: 'photonLCM_v10.safetensors' not in []
ControlNetLoaderAdvanced:
- Value not in list: control_net_name: 'controlnet_checkpoint.ckpt' not in []
Lora Loader Stack (rgthree):
- Value not in list: lora_02: 'Joker.safetensors' not in ['None']
Please help!
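(Not the uploader, but those "Value not in list" errors usually just mean ComfyUI can't see the model files, because each loader node only lists what's inside its own subfolder of `models/`. A sketch of the usual layout, assuming a default ComfyUI install; the filenames are the ones from the error:)

```shell
# From the ComfyUI root: each loader scans its own subfolder of models/
mkdir -p ComfyUI/models/vae ComfyUI/models/checkpoints \
         ComfyUI/models/controlnet ComfyUI/models/loras

# Move the downloaded files into the matching folders, e.g.:
# mv vae-ft-mse-840000-ema-pruned.ckpt  ComfyUI/models/vae/
# mv photonLCM_v10.safetensors          ComfyUI/models/checkpoints/
# mv controlnet_checkpoint.ckpt         ComfyUI/models/controlnet/
# mv Joker.safetensors                  ComfyUI/models/loras/
# Then restart ComfyUI (or hit Refresh) so the dropdowns repopulate.
```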
Thanks for the video amigo! Sorry, I didn't quite understand the purpose behind auto-cropping and auto-zooming. Is it necessary, or only useful in certain cases? For example, if I have a baseball hitter swinging a bat, the camera is still, and I prep my shot in After Effects for ComfyUI, would it still be necessary?
Autocropping is great for subjects that are far away. If you're not doing the cropping, then it generates the full frame, which causes not only more processing but also worse results.
@@enigmatic_e Hi, thanks so much for the tutorial, awesome stuff. I'm working on a close-up, how would you go about removing the cropping? should I just disable the cropped view bucket on the right?
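(The autocrop idea described above boils down to: find the bounding box of the alpha matte and only generate inside that region, so a small, distant subject fills the frame the model sees. A minimal NumPy sketch of that concept, not the actual workflow nodes:)

```python
import numpy as np

def crop_to_mask(frame, mask, pad=32):
    """Crop a frame to the bounding box of a binary mask, plus some padding."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:            # empty mask: nothing to crop to
        return frame
    h, w = mask.shape
    top, bottom = max(ys.min() - pad, 0), min(ys.max() + pad + 1, h)
    left, right = max(xs.min() - pad, 0), min(xs.max() + pad + 1, w)
    return frame[top:bottom, left:right]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
mask = np.zeros((720, 1280), dtype=np.uint8)
mask[300:400, 600:700] = 1      # a small, far-away subject
print(crop_to_mask(frame, mask, pad=32).shape)  # → (164, 164, 3)
```

For a close-up where the subject already fills the frame, the bounding box is nearly the whole image, so the crop buys you almost nothing and can simply be bypassed.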
Thanks a lot , really appreciated and helpful
Amazing tutorials!
From Viggle's perspective, how could they utilize their strength in training their own dataset?
This looks really interesting, but I don't understand how you get hand positioning inside the mask (when hands go in front of the body) without a ControlNet. I would have used a depth map, but I guess you have another solution that I'm not catching.
Hey amigo, I loaded your workflow into Comfy. I had the old IPAdapter and it seems you're using a newer one? I go to Install Custom Nodes and for IPAdapter Plus I only have a disable button. Do I need to remove it then reinstall it to get the latest IPAdapter?
Yeah, there were some changes with IPAdapter. If you haven't made the update, you might want to replace the nodes I have with the old ones.
Could you elaborate more on this workflow? Like the settings and stuff.
great video, thx man
Thank you for providing the tutorial. I am using a 1364x768 resolution video, but the auto cropping doesn't seem to work. Is there any way to manually adjust the cropping location?
I want to create fast Viggle cat videos. The body and the face look horrible. What's the best layout I should use?
Love your work 🔥
great stuff. you should be able to mask the figure within comfyui automatically
Great stuff! Can anybody point me to where the number of rendered frames is set? The AIWarper workflow only does 40 frames (in my case)
this video made my day
What did you download to get the system utilities, for the mod Manager??
Awesome job!
Do you charge for a service to set this up? I'm using MJ, Pika, Runway, etc. Also used Unreal, After FX, and Photoshop. So not a complete noob. Please let me know and I can contact you. THX.
Useful stuff man. I can't install the Image Scale to Side and Integer nodes. Tried from the Manager and git clone directly into custom_nodes, but nothing helps. Maybe you can advise how to install them?
found any solution?
Ty for this!!
You are amazing! Thanks so much!
I love Viggling 😂
Would it make sense to mix ipadapter and loras?
I’ve done that in past workflows with success. I would try dialing in the weights to get the best results.
Fantastic video.
🙏🏽
You should teach everything 😍🔥
How do you download a video on Viggle?
Check my previous video ua-cam.com/video/-fhFjnsZbDo/v-deo.htmlsi=NfDgOm8za05hfVMw
brilliant
Thanks!
Can we make a video from it with our own image?
Yes you can
@@enigmatic_e I want to ask: after making the Viggle video with my image, when I push it into the Comfy workflow, does my face stay the same or does it change?
lol batch prompting, nobody knows if it works... this is always the case xD, nobody knows if the prompting does anything
😂
who can I pay to do this for me?
AI Warper sent me here
🎉
Not a critique of your work, which is great, but a general rant: why are there so many apps that are online-only, like Viggle? I can understand some people don't have a beefy GPU, so it's good for them, but for those who have one, why are they forced to upload stuff all the time? There should be an app version of these things, like Topaz or Adobe products. It seems like it's online-only to fight piracy, as if Topaz or Adobe didn't find ways to make money despite piracy. If the app ABSOLUTELY needs more than 24GB of VRAM to work, I guess it's alright to be online-only, because nobody besides professional VFX studios has that kind of VRAM. But if it's just FASTER on an A100, and could totally work with 12 to 24GB of VRAM, I really don't like that it is ONLY online.
It's in the beta phase right now. I'm sure it will eventually have a website where you pay with credits. This seems to be the norm nowadays. But I'm on your side on this, I would love an app.
I think there is champ ai
why are u not showing how to do the autocrop thing? that's why we clicked on this video.. what a waste of time
It does it automatically when you add the alpha matte. That's the whole point of the matte; I explain it in the video.