Hi there, can you check if you have extensions blocking the links? Or try a different browser. Shortened links are a good way to keep the description from looking messy, and they also allow me to track clicks and understand my audience a bit more.
I would love to test with the same prompt, slightly edited. Could you by any chance post your prompt so I could copy-paste it? Same for the pre-text and app-text, thank you.
I get a lot of errors, including that my CUDA is out of memory and that I have to reallocate some, but anyhow I am very happy with your previous workflow. I only need to know if there is a way to make the movements a bit slower.
the main nodes needed to get this done are showcased in the video. I'm sure I covered every single step, unless some of the nodes have been updated by the developers and look a little different now
As I followed along I quickly ran into a roadblock: I am a Mac user but do not have Apple Silicon. Do you know of a workaround for a Mac laptop user? Looks like Stable Diffusion also shares that limitation :(
How do I fix this? When loading the graph, the following node types were not found: BatchPromptSchedule. Nodes that have failed to load will show as red on the graph.
Yes, I manually downloaded ComfyUI-AnimateDiff-Evolved and the other three, placed them into the custom_nodes folder, restarted ComfyUI, and it worked for me.@@MrKikegraphics
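For reference, each of the "node types were not found" errors quoted in this thread points at a specific custom node pack that needs to be installed into custom_nodes. This is a small illustrative lookup, not an official ComfyUI API; the prefix-to-repo mapping is inferred from the error traces in this thread:

```python
# Hypothetical helper: map a missing ComfyUI node class name to the
# custom node pack that (per the errors in this thread) provides it.
NODE_PACKS = {
    "ADE_": "ComfyUI-AnimateDiff-Evolved",
    "VHS_": "ComfyUI-VideoHelperSuite",
    "BatchPromptSchedule": "ComfyUI_FizzNodes",
    "CheckpointLoaderSimpleWithNoiseSelect": "ComfyUI-AnimateDiff-Evolved",
}

def pack_for_node(node_name: str) -> str:
    """Return the node pack believed to provide node_name, or 'unknown'."""
    for prefix, pack in NODE_PACKS.items():
        if node_name.startswith(prefix):
            return pack
    return "unknown"
```

For example, `pack_for_node("ADE_AnimateDiffCombine")` points at ComfyUI-AnimateDiff-Evolved, and `pack_for_node("VHS_VideoCombine")` at ComfyUI-VideoHelperSuite, so a red node tells you which repo to clone or reinstall.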
Hi, thanks for the tutorial. I got an error when queuing the prompt:
Error occurred when executing ADE_AnimateDiffLoaderWithContext: Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118
Could you help me?
I need help. When loading the graph, the following node types were not found:
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffUniformContextOptions
ADE_AnimateDiffCombine
ADE_AnimateDiffLoaderWithContext
How can I solve this?
Does anyone know what this is? "When loading the graph, the following node types were not found: VHS_VideoCombine. Nodes that have failed to load will show as red on the graph"
Hi there, when I drag and drop the workflow it shows this error: widget[GET_CONFIG] is not a function. And: This may be due to the following script: /extensions/core/widgetInputs.js. What should I do?
When I do this, my prompts just get melded together; it doesn't actually travel to the next one. Not really sure what I'm doing wrong :S. The upscale boxes are also missing.
Mine is giving this: OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Freitas\Desktop\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\lib\c10.dll" or one of its dependencies.
TypeError: widget[GET_CONFIG] is not a function
at #onFirstConnection (127.0.0.1:8188/extensions/core/widgetInputs.js:385:54)
at PrimitiveNode.onAfterGraphConfigured (127.0.0.1:8188/extensions/core/widgetInputs.js:314:29)
at app.graph.onConfigure (127.0.0.1:8188/scripts/app.js:1144:34)
at LGraph.configure (127.0.0.1:8188/lib/litegraph.core.js:2260:9)
at LGraph.configure (127.0.0.1:8188/scripts/app.js:1124:22)
at ComfyApp.loadGraphData (127.0.0.1:8188/scripts/app.js:1374:15)
at ComfyApp.setup (127.0.0.1:8188/scripts/app.js:1221:10)
at async 127.0.0.1:8188/:14:4
Heya, not sure why I'm getting this error. My PC has 32GB RAM and is running an RTX 4080.
Error occurred when executing KSampler: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated: 2.48 GiB
Requested: 39.79 GiB
Device limit: 15.99 GiB
Free (according to CUDA): 12.15 GiB
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
Not sure why it's requesting almost 40GB to generate an animated prompt :\
the error refers to your GPU VRAM, not your RAM; you seem to have 16GB of VRAM, which should be enough to run this. Perhaps try updating your GPU drivers.
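To see why a 16GB card can still get a ~40 GiB allocation request, a rough back-of-envelope helps: AnimateDiff's temporal attention builds a frames-by-frames attention map at every spatial latent position, so memory grows with resolution and with the square of the frame count. The constants below are illustrative assumptions (a toy model), not real profiler data from ComfyUI:

```python
def temporal_attention_bytes(frames: int, width: int, height: int,
                             heads: int = 8, dtype_bytes: int = 2) -> int:
    """Toy estimate of temporal-attention activation memory.

    Assumptions (not measured): SD latents are 1/8 of image resolution,
    8 attention heads, fp16 (2 bytes), one frames x frames attention
    map per head per spatial latent position.
    """
    latent_positions = (width // 8) * (height // 8)
    return latent_positions * heads * frames * frames * dtype_bytes

small = temporal_attention_bytes(frames=16, width=512, height=512)
big = temporal_attention_bytes(frames=128, width=512, height=768)
print(f"{small / 2**30:.2f} GiB vs {big / 2**30:.2f} GiB")
```

Going from 16 frames at 512x512 to 128 frames at 512x768 multiplies this term by 96x in the sketch above, which is why long, high-resolution batches exhaust VRAM so quickly and why lowering frame count or resolution is the usual fix.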
"0" :"A Chimpanzee, covered in dark black hair, with a large jaw and sharp canine teeth, in the jungle, within the depths of the wilderness, surrounded by forest trees and greenery, in a dense tropical rainforest, lush vegetation, tall trees, sun rays", "15" :"A Chimpanzee, covered in dark black hair, with a large jaw and sharp canine teeth, in the jungle, within the depths of the wilderness, surrounded by forest trees and greenery, in a dense tropical rainforest, lush vegetation, tall trees, sun rays", "35" :"A male Homo erectus, upright man, with a robust and sturdy build, a stooped posture, with a prominent brow ridge and a pronounced jaw and broad nose, wearing basic animal hides and furs for warmth, holding an arrow, embraces a functional and primitive style, with long and unkempt hair, wearing minimal accessories and resides within a hunter-gatherer society, in a hut inside a rocky cave", "60" :"An early modern village man, dressed in simple clothing made from animal hides, wearing a tunic, herding livestock, in an agriculture setting, village, near a river, settled agricultural community, surrounded by shelters made from natural materials like branches, leaves, and animal skins", "85" :"an ancient Egyptian Pharaoh male wearing clothing with intricate patterns, symbolic style, symbolizing divine status and authority, Wearing a wig and a beard, embellished with elaborate jewelry like collars and bracelets in the Nile River civilization, set against ancient Egyptian architecture, royal palace, temples of ancient Egypt, in the background are massive pyramids and The Great Sphinx of Giza, egyptian desert", "110":"an ancient Greek male , wrapped in a toga, embodies a simple and functional style, emphasizing athleticism, With short hair and minimal accessories, in a society known for democracy, philosophy, and arts, surrounded by the architectural marvels of ancient Greek cities, Athena, impressive temples, theaters, Mediterranean landscape", "135":"An ancient 
Roman male wearing a tunic, reflecting practicality and militaristic influence, With short hair, styled elaborately, wearing leather sandals, brooches, rings, and belts, ancient Roman architecture, well-paved road, the Colosseum, vast Roman Empire", "160":"a medieval European knight, heavily armored and mounted warrior, in a suit of steel plate armor with a distinctive helmet, shield, and a sword, carrying the emblem of his house on his shield, Crusades, surrounded by medieval European architecture, castles, fortified towns", "185":"in Renaissance Europe, a man wearing a doublet hose, ruffled collar, richly detailed fabrics, hat, glove, intricate jewelry, long tied back hair, cultural artistic flourishing, surrounded by Renaissance-era architecture, grand cathedrals, opulent palaces, crowded market squares, people walking in the background, cultural artistic flourishing", "210":"victorian Era, a man dressed in a formal and highly structured ensemble, including a tailcoat, waistcoat, cravat, and a top hat, groomed hairstyle, sideburns, pocket watch, cane, monocles, industrial Revolution, ornate architecture, grand estates, bustling urban centers, Victorian gentlemen navigate through the cobbled streets, cultural landscape", "235":"a 1900s man wearing a tailored three-piece suit consisting of a jacket, vest, and trousers, paired with a crisp white shirt and a tie, polished leather shoes and pocket watch, neatly combed short hair, engaging in social gathering, in the background is a busy rapidly growing city, bustling urban environment ", "260":"a 1980s man wearing a mod look, wearing bell bottom jeans, a fitted graphic T-shirt, and a colorful windbreaker, 80s fashion style, mullet hairstyle with a headband, wearing sunglasses, watch, carrying a boombox, in a suburban neighborhood, with pastel-colored houses, lush lawns, pop culture and fashion", "285":"a man in a modern fashion style, wearing a well-fitted minimalist outfit, slim-fit trousers and a crisp button-up 
shirt, sneakers, wearing a medical face mask, a clean and sharp undercut with neatly groomed facial hair, influenced by digital age, holding a smartphone, wearing headphones, designer brands, sleek wristwatch and a leather bag, bustling urban environment, sleek skyscrapers, vibrant street art, electric vehicles, contemporary fashion, urban aesthetics, and digital-age sophistication digital revolution, modern urban architecture", "310":"a futuristic man wearing a sleek and minimalist outfit, wearing VR headset, adaptable clothing integrated with advanced technology, his attire features clean lines and eco-friendly materials, perfectly coiffed and effortlessly sophisticated hairstyle, wearable tech and bio-embedded devices, in a futuristic cityscape, towering skyscrapers, smart infrastructure, sustainability, advanced transportation and connectivity, high-tech , AI integration, futuristic architecture", "335":"A cyborg, seamlessly merging human and machine, sleek, metallic exosuit adorned with intricate circuitry and augmented with biomechanical enhancements, cleanly shaved hair, integrated data ports and augmented reality visors, high-tech, bio-engineered body modifications, futuristic cityscapes, neon-lit skyscrapers stand in stark contrast to the natural world, technological progress, holographic and augmented reality, robots and automated vehicles in the background", "360":"In post-apocalyptic world, a cyborg as a formidable figure, in a rugged and battle-worn ensemble, wearing salvaged metal plating and leather, bearing the scars of battles, a tattered hooded cloak conceals their mechanical enhancements, steel-like mohawk hair, fierce defiant appearance, wearing cybernetic goggles over one eye, glowing with augmented reality displays and tactical info, in a dystopian wasteland, ruined skyscrapers and shattered cityscapes, with a res sun casting an eerie, ominous glow over the harsh terrain, heavy weaponry", "385":"In the space invasion era, a futuristic robot as 
a sleek and formidable humanoid machine, advanced metallic alloys, with a streamlined, angular design, adorned with a variety of integrated weaponry and advanced sensor arrays, shimmering with high-tech finish, energy shields, rocket boosters and a multifunctional helmet-like interface, dark expanse of outer space, alien spacecraft and distant galaxies, engaging in intense battles with other advanced robotic invaders, Explosions, laser beams, futuristic machines clash amidst the backdrop of stars and celestial bodies", "410":"highly evolved male, with advanced biotechnology to fuse seamlessly with the natural world, in a bioengineered attire, self-renewing plant fibers that change color texture, blending effortlessly with the environment, hairstyle is adorned with living, bioluminescent vines, creating a mesmerizing play of colors, accompanied by symbiotic sentient ethereal creatures with wings, in s a lush, harmonious paradise, city integrated into the natural landscape, organic, bioengineered architecture, advanced ecological systems", "435":"futuristic conscious energy, shimmering, intricate patterns of light and color that shift and morph, emanating a luminous aura, creating a breathtaking visual spectacle, energy patterns, with tendrils of light and radiant wisps that dance and pulse,intricate, fractal-like mandalas of light that hover around them, in a cosmic tapestry of interconnected realms"
@@elifmiami nope, but it seems to me that when the prompts vary too much in length, it tends to give an error, try to keep the length of all prompts as consistent as you can.
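Since the tracebacks in this thread show FizzNodes calling json.loads on the prompt box, a schedule like the one above can be sanity-checked offline before queuing. This is a sketch under one assumption: that the node wraps your text in { } before parsing, which matches the "Expecting property name enclosed in double quotes" errors quoted here:

```python
import json

def check_schedule(text: str) -> dict:
    """Offline check for a BatchPromptSchedule prompt box.

    Assumption: the node wraps the raw text in braces before calling
    json.loads, matching the tracebacks quoted in this thread.
    """
    try:
        return json.loads("{" + text.strip() + "}")
    except json.JSONDecodeError as e:
        raise ValueError(f"Schedule is not valid JSON: {e}") from e

good = '"0": "a chimpanzee in the jungle",\n"15": "an early human"'
# A trailing comma after the last entry reproduces the
# "Expecting property name enclosed in double quotes" error:
bad = good + ","
```

Running `check_schedule(good)` returns a two-entry dict, while `check_schedule(bad)` raises with the same JSON complaint reported above, so a trailing comma after the last prompt line is one concrete cause of that error.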
When loading the graph, the following node types were not found: ADE_AnimateDiffCombine ADE_AnimateDiffLoaderWithContext ADE_EmptyLatentImageLarge CheckpointLoaderSimpleWithNoiseSelect ADE_AnimateDiffUniformContextOptions Nodes that have failed to load will show as red on the graph.
I have the same error. Have you found a solution? CheckpointLoaderSimpleWithNoiseSelect ADE_AnimateDiffUniformContextOptions ADE_AnimateDiffCombine ADE_AnimateDiffLoaderWithContext VHS_LoadImages
[Translated from Hindi, abusive:] This place wasn't made for protesting. If you're so worried about Palestine, go there and fight on its side yourself.
Great tutorial. However, I get an error when loading the workflow into ComfyUI: Loading aborted due to error reloading workflow data TypeError: widget[GET_CONFIG] is not a function ... This may be due to the following script: /extensions/core/widgetInputs.js PLEASE HELP :)
Hey guys, a few tips for anyone who's still getting errors:
- Make sure you restart ComfyUI after installing the nodes; try restarting your computer as well.
- If you already had ComfyUI installed before watching this video, update it first.
- If you don't see the ComfyUI Manager after installation, update the UI by navigating to the ComfyUI folder and running the update batch files. If that doesn't work, try the installation methods here: civitai.com/models/71980/comfyui-manager
- Visit the AnimateDiff Discord for more technical support: bit.ly/48Vnpdu
If you still get errors, drop them below.
I'm getting this error when I load the JSON file:
Loading aborted due to error reloading workflow data
TypeError: widget[GET_CONFIG] is not a function
TypeError: widget[GET_CONFIG] is not a function
at #onFirstConnection (127.0.0.1:8188/extensions/core/widgetInputs.js:385:54)
at PrimitiveNode.onAfterGraphConfigured (127.0.0.1:8188/extensions/core/widgetInputs.js:314:29)
at app.graph.onConfigure (127.0.0.1:8188/scripts/app.js:1144:34)
at LGraph.configure (127.0.0.1:8188/lib/litegraph.core.js:2260:9)
at LGraph.configure (127.0.0.1:8188/scripts/app.js:1124:22)
at ComfyApp.loadGraphData (127.0.0.1:8188/scripts/app.js:1374:15)
at reader.onload (127.0.0.1:8188/scripts/app.js:1682:10)
This may be due to the following script:
/extensions/core/widgetInputs.js
Any ideas?
Thanks a lot, I just don't know where to put the files from FFmpeg that you said would avoid some errors.. :) thanks for everything
Hey, thanks a lot for your tutorials and help. I'm getting this error while trying to add your workflow, or Inner Reflections' one, to the UI: "Loading aborted due to error reloading workflow data TypeError: widget[GET_CONFIG] is not a function". I tried on two different PCs and still get the same problem.
Thanks for the great tutorial, but I can't even get ComfyUI to work. I looked through the Discord server but can't find my problem. I've not used Discord before, so I might be looking wrong.
No idea what I'm doing wrong. None of the custom nodes will load. Lots of error messages: ModuleNotFoundError, Cannot import, IMPORT FAILED, ffmpeg cannot be found, etc. I'm guessing that's why the custom workflow fails too. I tried switching browsers, restarting ComfyUI, and restarting the computer, but same results.
@@crankyboy71 hi, try updating ComfyUI and the nodes and let me know how it goes.
- you can update ComfyUI here: ComfyUI_windows_portable\update
- you can update the nodes by opening cmd inside each node folder and running the "git pull" command
Your tutorials are always top-notch, and we as an AI community appreciate your hard work
Glad to hear that!
Awesome, thanks man. I really like the interview scenes and how you structured the tutorial. You are my favourite AI channel by now. I'd love to see you make more VFX content, but I'm hyped for a video-to-video or video-to-animation workflow. All these animations are cool and all, but researching how to incorporate AI into video projects is something I'm very curious about.
🙏 thanks for the feedback
Aww Yeah!!! Thanks for the interview!
My pleasure ! thanks for joining :)
Not sure if anyone else had problems, but my try was slow with 100 frames; it was going to take 8 hours with a mid-spec 12GB card. Make sure you use an SD 1.5 model, as SDXL takes forever. This applies to both the initial model and the AnimateDiff loader (I had temporal layers.f16, which is why I was able to use SDXL). Also, you can now download the new v3_sd15_mm.ckpt instead of the previous version, always using the link he has in the description. I was almost there, giving up, and then I realised that I had to follow his video to the dot. You are the best content creator for AI I have found so far.. ❤️ thank you! 🙏 I learned a lot and I love the Discord you also have. Again, massive thanks.
you did everything right ! and happy to help
@ thank you.. without you.. I would have been lost. I truly appreciate the effort you went thru and all the additional support, adding all the links in the description in sequence and the discord to find what problems people had.. hopefully soon I’ll be able to help you help people 😅😂 (I think I almost watched all your videos) 😂😂😂
@ well… little update.. maybe I am the only one.. but… if you run stable in the background and process something with comfyUI, it will slow you down dramatically.. even if stable is not doing anything. I was trying to copy a prompt i had on stable and left it there. Comfy ui was as slow as it gets.. simple thing.. but not obvious for a noob like me 🙄
Awesome video mate, truly loved it and full of amazing knowledge :-)
Glad you enjoyed it!
thanks for the tutorial, it was so helpful!!
Glad it was helpful!
cant wait to give this a go tomorrow. im new to this scene and you made this tutorial so simple to understand. thank you buddy
have fun :)
Thank you for your course, it is one of the best tutorials I think. I also learned to make some short videos and posted them on my website. It was really fun! Thank you!❤❤❤❤
just checked your work, really good stuff, you're doing GREAT!
WOW!!! Thanks so much for making this video!!
Glad it was helpful!
Great video... Thnx man
great vid, big thanks!
Glad you liked it!
love your work.
this works best with SDXL
Thanks a lot, this is the first animation node setup that works for me - that is, without understanding ComfyUI too much. But from here I can test it out. I really love this morph style and I hope morphing from (existing) image to image is possible....
Glad it helped!
@@MDMZ what is VRAM? Is it general RAM or GPU RAM?
@@emmanode VRAM is GPU RAM
you are incredible. you are gonna make me rich
haha, let's go 💪
Great content, you still haven't posted the controlnet tutorial, when will you post it? Thank you
plenty of AI animation videos coming soon
Great tutorials brother ❤️🙏🏻
Amazing Animation
2:10 "let's open the manager"... and the manager isn't there :(
perhaps you didn't install it properly? sometimes it's stubborn and doesn't show up even after installing, I will add a solution to the pinned comment once I find one
@@MDMZ
Followed the procedure militarily :)
I see there's a 121 version though, perhaps I should start with that.
@@ChristianIce just added a solution u can try :)
@@MDMZ
You're the best!
This was a really good video. Thanx!
Glad you enjoyed it!
How many lines can you generate? Because if I write more than 5 lines, the engine doesn't work... it gives me the error underneath. Error occurred when executing BatchPromptSchedule:
Expecting value: line 2 column 7 (char 273)
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 101, in animate
animation_prompts = json.loads(inputText.strip())
File "json\__init__.py", line 346, in loads
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 355, in raw_decode
I'm still able to generate very long and very short prompts, but I did get similar errors a few times before; I'm yet to figure it out
@@MDMZ thx
Great video!! Many thanks for sharing the great workflow. Is it possible to sequence-morph from existing images?
not yet
MDMZ never heard of this software wow amazing thanks for sharing your knowledge. Is there a way to do this in Video ? Thanks 🙏
Yes, soon
Thank you very much for this tutorial!
Is it normal for KSampler to take a long time to load?
as long as you see a green progress line, it's normal
Error occurred when executing BatchPromptSchedule:
Expecting value: line 2 column 7 (char 273)
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\I9Pro\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 101, in animate
animation_prompts = json.loads(inputText.strip())
File "json\__init__.py", line 346, in loads
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 355, in raw_decode
looks like you have a typo in your prompts, make sure you follow the structure of the original prompt
@@MDMZ ok, I did. All cool until I tried to insert one more option. It doesn't want more passages.
Error occurred when executing BatchPromptSchedule:
Expecting property name enclosed in double quotes: line 7 column 1 (char 1691)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 101, in animate
animation_prompts = json.loads(inputText.strip())
File "json\__init__.py", line 346, in loads
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 353, in raw_decode
BTW i appreciate your work and must thank you
@@allago11 if you copy and paste a prompt with line breaks like this:
"blah blah, blah, blah blah
blah blah, blah"
it won't work; you need to erase the line break and turn it into this:
"blah blah, blah, blah blah blah blah, blah".
Great video. I only want to ask this: when I download FFmpeg, can I place it anywhere? Can I leave it in my Downloads folder?
hi, I suggest you look for a YouTube tutorial on how to install ffmpeg, but AnimateDiff will work without it in most cases
thanks for your tutorial again. Curious to know, what are the specs of your PC in the video?
i9-13900 + RTX4090
How do I do this for video to video?
Can you show how this is done with a1111? I assume most of us here use that over comfy
any reason you're against using comfyUI? 😅
It's pretty similar to this, just with fewer options. Same kind of sliders and whatnot. I found it to be incredibly slow on A1111 though: a 128-frame animation took 26 minutes at 512x768. That said, you can use frame interpolation with Deforum so it's a bit smoother. All of this pales into insignificance, however, against the results on paid websites, which are vastly better than anything AnimateDiff can make. Unless you're desperate to do local gen, I would just use a paid website.
Please consider adding "in ComfyUI" in the title. Cheers.
oh, very good suggestion, I will do that
Thanks for the awesome tutorial 😊
Can you also use SDXL with animatediff?
I believe it's possible in certain cases
When is the Controlnets video coming, kind Sir?
more AI animation tutorials are in the pipeline, stay tuned
Do these steps have to be followed even for ThinkDiffusion?
pretty much yes
Loading aborted due to error reloading workflow data
TypeError: widget[GET_CONFIG] is not a function
Any help ? Thanks!
same for me
managed to make it work, just re-install all and make sure you have all the models and animatedmodels put in the right folders. It should work after
@@94272008a got the same error, re-installing didn't work for me though
same problem
@@ottonik8605 I solved it, it's a prompt formatting issue: write it correctly, keep the space after each comma, don't put a comma after the last line, etc.
Moe, your vids are amazing but for some reason i always have an issue opening the bitly links...they never seem to work for me for some reason and now im trying to get your workflow but can't download it. Is there a way to find it through a regular link please?
Hi there, can you check if you have extensions blocking the links ? Or try a different browser.
Shortened links are a good way to keep the description from looking messy, it also allows me to track clicks and understand my audience a bit more.
I would love to test with the same prompt, slightly edited. Could you by any chance post your prompt so I could copy-paste it? Same for the pre_text and app_text, thank you.
thank you, but I got images with a lot of noise, not a clean image
that's strange, try different models and different resolutions, make sure you get ffmpeg and update your GPU driver
Is this local or online? Thanks for replying.
I get a lot of errors, including that my CUDA is out of memory and that I have to reallocate some, but anyhow I am very happy with your previous workflow. I only need to know if there is a way to make the movements a bit slower.
the CUDA error usually means your GPU VRAM is a bit low for your needs.
Very cool, my friend :)
Hi, why didn't you show how to install many of the nodes? I can't find the number-of-frames node and many others.
the main nodes needed to get this done are showcased in the video, I'm sure I covered every single step unless some of the nodes have been updated by the developers and they look a little different now
As I followed along I quickly ran into a roadblock. I am a Mac user but do not have Apple Silicon, do you know of a workaround for a Mac laptop user? Looks like Stable Diffusion also shares that limitation :(
there's a link for Mac and AMD users in the guide, so far I've learned that it doesn't work for everyone, but it's worth giving it a shot
Hey MDMZ, I'd love to know how to adapt this workflow to do img2video. Is there a workflow that is easy to use for this purpose?
hi there, not that I know of, I'm also waiting for a proper workflow that allows that
how to fix this...?
When loading the graph, the following node types were not found:
BatchPromptSchedule
Nodes that have failed to load will show as red on the graph.
hey, you can find some help in the pinned comment :)
did you find the solution for this? I'm having the same issue
Yes, I manually downloaded ComfyUI-AnimateDiff-Evolved and the other three, placed them into the custom_nodes folder, restarted ComfyUI, and it works for me.@@MrKikegraphics
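To double-check a manual install like the one described above, a small sketch can report which node packs are absent from custom_nodes. The folder names below are assumptions based on the repos these nodes commonly come from (AnimateDiff-Evolved for the ADE_* nodes, VideoHelperSuite for VHS_VideoCombine/VHS_LoadImages, FizzNodes for BatchPromptSchedule); adjust them to match what you actually cloned.

```python
import os

# Assumed repo folder names for the node packs used in this workflow
EXPECTED = [
    "ComfyUI-AnimateDiff-Evolved",   # ADE_* nodes
    "ComfyUI-VideoHelperSuite",      # VHS_VideoCombine, VHS_LoadImages
    "ComfyUI_FizzNodes",             # BatchPromptSchedule
]

def missing_custom_nodes(custom_nodes_dir):
    """Return the expected node packs not present in custom_nodes."""
    return [name for name in EXPECTED
            if not os.path.isdir(os.path.join(custom_nodes_dir, name))]

# Point this at your install, e.g.
# ComfyUI_windows_portable/ComfyUI/custom_nodes
print(missing_custom_nodes("ComfyUI/custom_nodes"))
```

Anything the script lists is worth installing via the Manager (or cloning manually) before restarting ComfyUI, since missing folders are exactly what produces the red "node types were not found" boxes.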
Cool vid, but you should really use A1111 since that's what most of us use
I use A1111 too, but ComfyUI is much more intuitive, more and more people are using it
Not anymore.
Hi, thanks for the tutorial. I got an error when loading the queuing prompt : Error occurred when executing ADE_AnimateDiffLoaderWithContext:
Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution.Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118
Could you help me?
send me your workflow on IG or Discord
how long did it take to generate 425 frames with upscaling? 😳 and what's your VRAM?
around 30-40 mins, I use a 4090 with 24GB
@@MDMZ wow that's pretty fast, thanks for answer 🫶
I need help. When loading the graph, the following node types were not found:
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffUniformContextOptions
ADE_AnimateDiffCombine
ADE_AnimateDiffLoaderWithContext
How can I solve this?
hi, checkout the pinned comment
I can't make the video load, I get noisy images. Please can you confirm whether this still works?
I just tested it again using the same workflow, works completely fine for me (Windows, Nvidia GPU)
@@MDMZ I am getting noise. No real video, just noise :(
I cannot use the animatediff nodes. I've installed the nodes, but they are still red. Should I install xformers or CUDA?
I solved the problem by updating ComfyUI.
nice!
Please tell me the system requirements to install this?
Does anyone know what this is? "When loading the graph, the following node types were not found:
VHS_VideoCombine
Nodes that have failed to load will show as red on the graph"
open the ComfyUI manager, go to install custom missing nodes, and download the missing nodes
mine has the same error. Did you manage to solve it?
@@MDMZ Help me VHS_VideoCombine
Hi there
When I drag and drop the workflow it shows this error: widget[GET_CONFIG] is not a function
And: This may be due to the following script: /extensions/core/widgetInputs.js
What should I do?
try re-installing
Does this work with ThinkDiffusion?
should be able to since ThinkDiffusion can run ComfyUI
When I do this, my prompts just get melded together; it doesn't actually travel to the next one. Not really sure what I'm doing wrong :S. The upscale boxes are missing too
hmmm are you sure you commented on the right video? 😅 sounds like u r talking about AnimateDiff
Can I use my mobile phone?
you might be able to do it through a cloud solution that runs ComfyUI
I follow everything step by step exactly, and when I open the program I see nodes in red with error messages
hi, check out the pinned comment
Does it work on laptops and mobile?
works on laptops
Do you know how many GB your graphics card has to be able to do this? Mine has 10 and I get an error telling me that I need more memory
I was able to run this on 16gb and 24gb
Mine is giving this: OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Freitas\Desktop\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\lib\c10.dll" or one of its dependencies.
is there a good app that can do this?
not that I know of
Can I download this on a Chromebook?
i am getting an error when loading json files
TypeError: widget[GET_CONFIG] is not a function
at #onFirstConnection (127.0.0.1:8188/extensions/core/widgetInputs.js:385:54)
at PrimitiveNode.onAfterGraphConfigured (127.0.0.1:8188/extensions/core/widgetInputs.js:314:29)
at app.graph.onConfigure (127.0.0.1:8188/scripts/app.js:1144:34)
at LGraph.configure (127.0.0.1:8188/lib/litegraph.core.js:2260:9)
at LGraph.configure (127.0.0.1:8188/scripts/app.js:1124:22)
at ComfyApp.loadGraphData (127.0.0.1:8188/scripts/app.js:1374:15)
at ComfyApp.setup (127.0.0.1:8188/scripts/app.js:1221:10)
at async 127.0.0.1:8188/:14:4
fixed it by running git pull in the ComfyUI folder
glad it worked
is there a tutorial video for AMD gpu on windows? i clicked the link and it said linux only for AMD gpu
you can follow the same steps in the video if you're on windows, hopefully your GPU will be compatible
Heya, not sure why I'm getting this error. My pc has 32GB ram and running on a RTX 4080.
Error occurred when executing KSampler:
Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 2.48 GiB
Requested : 39.79 GiB
Device limit : 15.99 GiB
Free (according to CUDA): 12.15 GiB
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
Not sure why it's requesting almost 40GB to generate an animated prompt :\
the error is referring to your GPU VRAM and not your RAM, you seem to have 16GB which should be enough to run this, perhaps you can try updating your GPU drivers
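As a heavily hedged illustration of why KSampler can ask for tens of GiB: naive self-attention memory grows with the square of the latent token count, and AnimateDiff attends across frames. The numbers below are back-of-envelope only and will not match the 39.79 GiB in the error exactly; real usage depends on the attention implementation (flash/sliced attention), precision, head count, and batching.

```python
# Illustrative back-of-envelope only, not a measurement of ComfyUI itself.
def attention_matrix_gib(frames, height, width, bytes_per_elem=2):
    """Rough size of one full self-attention matrix over all frames'
    latent tokens (SD latents are 1/8 of the pixel resolution; fp16)."""
    tokens = frames * (height // 8) * (width // 8)
    return tokens * tokens * bytes_per_elem / 1024**3

# A 16-frame context window at 512x768:
print(round(attention_matrix_gib(16, 512, 768), 1))  # → 18.0
```

Even this simplified estimate lands in the same tens-of-GiB ballpark as the error message, which is why lowering the resolution, frame count, or context window (or using a memory-efficient attention implementation) is the usual fix.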
Hey, Can you write your prompt fully ?
"0" :"A Chimpanzee, covered in dark black hair, with a large jaw and sharp canine teeth, in the jungle, within the depths of the wilderness, surrounded by forest trees and greenery, in a dense tropical rainforest, lush vegetation, tall trees, sun rays",
"15" :"A Chimpanzee, covered in dark black hair, with a large jaw and sharp canine teeth, in the jungle, within the depths of the wilderness, surrounded by forest trees and greenery, in a dense tropical rainforest, lush vegetation, tall trees, sun rays",
"35" :"A male Homo erectus, upright man, with a robust and sturdy build, a stooped posture, with a prominent brow ridge and a pronounced jaw and broad nose, wearing basic animal hides and furs for warmth, holding an arrow, embraces a functional and primitive style, with long and unkempt hair, wearing minimal accessories and resides within a hunter-gatherer society, in a hut inside a rocky cave",
"60" :"An early modern village man, dressed in simple clothing made from animal hides, wearing a tunic, herding livestock, in an agriculture setting, village, near a river, settled agricultural community, surrounded by shelters made from natural materials like branches, leaves, and animal skins",
"85" :"an ancient Egyptian Pharaoh male wearing clothing with intricate patterns, symbolic style, symbolizing divine status and authority, Wearing a wig and a beard, embellished with elaborate jewelry like collars and bracelets in the Nile River civilization, set against ancient Egyptian architecture, royal palace, temples of ancient Egypt, in the background are massive pyramids and The Great Sphinx of Giza, egyptian desert",
"110":"an ancient Greek male , wrapped in a toga, embodies a simple and functional style, emphasizing athleticism, With short hair and minimal accessories, in a society known for democracy, philosophy, and arts, surrounded by the architectural marvels of ancient Greek cities, Athena, impressive temples, theaters, Mediterranean landscape",
"135":"An ancient Roman male wearing a tunic, reflecting practicality and militaristic influence, With short hair, styled elaborately, wearing leather sandals, brooches, rings, and belts, ancient Roman architecture, well-paved road, the Colosseum, vast Roman Empire",
"160":"a medieval European knight, heavily armored and mounted warrior, in a suit of steel plate armor with a distinctive helmet, shield, and a sword, carrying the emblem of his house on his shield, Crusades, surrounded by medieval European architecture, castles, fortified towns",
"185":"in Renaissance Europe, a man wearing a doublet hose, ruffled collar, richly detailed fabrics, hat, glove, intricate jewelry, long tied back hair, cultural artistic flourishing, surrounded by Renaissance-era architecture, grand cathedrals, opulent palaces, crowded market squares, people walking in the background, cultural artistic flourishing",
"210":"victorian Era, a man dressed in a formal and highly structured ensemble, including a tailcoat, waistcoat, cravat, and a top hat, groomed hairstyle, sideburns, pocket watch, cane, monocles, industrial Revolution, ornate architecture, grand estates, bustling urban centers, Victorian gentlemen navigate through the cobbled streets, cultural landscape",
"235":"a 1900s man wearing a tailored three-piece suit consisting of a jacket, vest, and trousers, paired with a crisp white shirt and a tie, polished leather shoes and pocket watch, neatly combed short hair, engaging in social gathering, in the background is a busy rapidly growing city, bustling urban environment ",
"260":"a 1980s man wearing a mod look, wearing bell bottom jeans, a fitted graphic T-shirt, and a colorful windbreaker, 80s fashion style, mullet hairstyle with a headband, wearing sunglasses, watch, carrying a boombox, in a suburban neighborhood, with pastel-colored houses, lush lawns, pop culture and fashion",
"285":"a man in a modern fashion style, wearing a well-fitted minimalist outfit, slim-fit trousers and a crisp button-up shirt, sneakers, wearing a medical face mask, a clean and sharp undercut with neatly groomed facial hair, influenced by digital age, holding a smartphone, wearing headphones, designer brands, sleek wristwatch and a leather bag, bustling urban environment, sleek skyscrapers, vibrant street art, electric vehicles, contemporary fashion, urban aesthetics, and digital-age sophistication digital revolution, modern urban architecture",
"310":"a futuristic man wearing a sleek and minimalist outfit, wearing VR headset, adaptable clothing integrated with advanced technology, his attire features clean lines and eco-friendly materials, perfectly coiffed and effortlessly sophisticated hairstyle, wearable tech and bio-embedded devices, in a futuristic cityscape, towering skyscrapers, smart infrastructure, sustainability, advanced transportation and connectivity, high-tech , AI integration, futuristic architecture",
"335":"A cyborg, seamlessly merging human and machine, sleek, metallic exosuit adorned with intricate circuitry and augmented with biomechanical enhancements, cleanly shaved hair, integrated data ports and augmented reality visors, high-tech, bio-engineered body modifications, futuristic cityscapes, neon-lit skyscrapers stand in stark contrast to the natural world, technological progress, holographic and augmented reality, robots and automated vehicles in the background",
"360":"In post-apocalyptic world, a cyborg as a formidable figure, in a rugged and battle-worn ensemble, wearing salvaged metal plating and leather, bearing the scars of battles, a tattered hooded cloak conceals their mechanical enhancements, steel-like mohawk hair, fierce defiant appearance, wearing cybernetic goggles over one eye, glowing with augmented reality displays and tactical info, in a dystopian wasteland, ruined skyscrapers and shattered cityscapes, with a red sun casting an eerie, ominous glow over the harsh terrain, heavy weaponry",
"385":"In the space invasion era, a futuristic robot as a sleek and formidable humanoid machine, advanced metallic alloys, with a streamlined, angular design, adorned with a variety of integrated weaponry and advanced sensor arrays, shimmering with high-tech finish, energy shields, rocket boosters and a multifunctional helmet-like interface, dark expanse of outer space, alien spacecraft and distant galaxies, engaging in intense battles with other advanced robotic invaders, Explosions, laser beams, futuristic machines clash amidst the backdrop of stars and celestial bodies",
"410":"highly evolved male, with advanced biotechnology to fuse seamlessly with the natural world, in a bioengineered attire, self-renewing plant fibers that change color texture, blending effortlessly with the environment, hairstyle is adorned with living, bioluminescent vines, creating a mesmerizing play of colors, accompanied by symbiotic sentient ethereal creatures with wings, in a lush, harmonious paradise, city integrated into the natural landscape, organic, bioengineered architecture, advanced ecological systems",
"435":"futuristic conscious energy, shimmering, intricate patterns of light and color that shift and morph, emanating a luminous aura, creating a breathtaking visual spectacle, energy patterns, with tendrils of light and radiant wisps that dance and pulse,intricate, fractal-like mandalas of light that hover around them, in a cosmic tapestry of interconnected realms"
I'm gonna give it a shot ! Thanks a lot. @@MDMZ
I have one more question: Is there any character limitation for prompts? I keep receiving an error with BatchPromptSchedule. Thanks@@MDMZ
@@elifmiami nope, but it seems to me that when the prompts vary too much in length, it tends to give an error, try to keep the length of all prompts as consistent as you can.
what about img2vid?
I have another video on that
@@MDMZ where is it? Link please
When loading the graph, the following node types were not found:
ADE_AnimateDiffCombine
ADE_AnimateDiffLoaderWithContext
ADE_EmptyLatentImageLarge
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffUniformContextOptions
Nodes that have failed to load will show as red on the graph.
Same
Did u restart ComfyUI after installing the nodes?
I have the same error. Have you found a solution?
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffUniformContextOptions
ADE_AnimateDiffCombine
ADE_AnimateDiffLoaderWithContext
VHS_LoadImages
Try restarting your computer. It worked for me.
@@MDMZ Had to restart, update comfyui from the repo, restart. Thank you and great content!
could you update this? it can't be used anymore
why is that? are u running into errors?
Nvidia 940MX, will it work? Anyone please tell me
I doubt it, but give it a try, you have nothing to lose
That’s not the video AI we want. We want realism.
I'm new to this and my outputs are super low quality and blurry, any ideas why?
perhaps u missed some settings? also make sure ComfyUI and your GPU driver are up to date
It doesn't support my low-end PC 😔😔😔😔
did you try ?
Or tell us how to use ComfyUI in Google Colab.......
Please ❤❤❤
So you have to have an Nvidia gpu then?
there's a link for AMD and Mac users in the guide
Bro this is not fair,,,,
Please make a tutorial for this "Evolution" animation in Disco Diffusion please 🙏🏻🙏🏻🙏🏻
Love from Bangladesh ❤
❤❤
Free Palestine
Indonesia 🇮🇩🇮🇩🇮🇩🇮🇩
🇵🇸🇵🇸🇵🇸🇵🇸🇵🇸🇵🇸 free Palestine
Free Israel
🤡
@@kasseen They are defending themselves, not fighting. Search for the truth and you will find it
This place wasn't made for protesting. If you're that worried about Palestine, go there and fight on its side yourself.
🇵🇸🇵🇸🇵🇸🇵🇸🇵🇸🇵🇸🇵🇸
Free Free Palestine 🇵🇸 ❤
Great tutorial. However, I get an error when loading the workflow into ComfyUI:
Loading aborted due to error reloading workflow data
TypeError: widget[GET_CONFIG] is not a function
...
This may be due to the following script:
/extensions/core/widgetInputs.js
PLEASE HELP :)
Fixed: update ComfyUI itself through the update.bat file
When loading the graph, the following node types were not found:
ADE_AnimateDiffCombine
ADE_AnimateDiffLoaderWithContext
ADE_EmptyLatentImageLarge
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffUniformContextOptions
Nodes that have failed to load will show as red on the graph.
Try restarting your computer. It worked for me.
me too