Andrea Baioni
Motion Graphics with Stable Diffusion - Video to Video
This video dives deep into a new Stable Diffusion workflow.
We're going to explore how to achieve motion graphics animation effects in ComfyUI starting from a pre-existing video input.
Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi
Shoutout to Ryanontheinside for creating this node pack: www.youtube.com/@UCWLSByG96v8vPHgZLv8UHVg
Nodes: github.com/ryanontheinside/ComfyUI_RyanOnTheInside
Workflow: openart.ai/workflows/JQ9yImydDsHR9MblmVXp
Run the workflow on RunComfy without any installations required: www.runcomfy.com/comfyui-workflows/how-to-create-motion-graphics-with-comfyui?ref=AndreaBaioni
Models (see the download sketch after this list):
Loras (place them in ./ComfyUI/models/loras):
- huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v_lora.safetensors
- huggingface.co/naonovn/Lora/blob/main/add_detail.safetensors
- huggingface.co/Lykon/LoRA/blob/7b44164cabdc9a4f34b4ef27f508dacde0a540e0/animemix_v3_offset.safetensors
AnimateDiff Evolved Lora (place them in ./ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/motion-lora):
- huggingface.co/peteromallet/poms-funtime-mlora-emporium/blob/main/LiquidAF-0-1.safetensors
AnimateDiff Evolved Checkpoint (place them in ./ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models):
- huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v.ckpt
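
If you'd rather script these downloads, here is a minimal sketch in Python using the huggingface_hub package (my assumption, not something shown in the video). It pulls the files linked above into the folders named above, assuming ComfyUI sits at ./ComfyUI relative to where you run the script:

import os
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# (repo_id, filename, destination folder inside ./ComfyUI), taken from the links above
FILES = [
    ("wangfuyun/AnimateLCM", "AnimateLCM_sd15_t2v_lora.safetensors", "models/loras"),
    ("naonovn/Lora", "add_detail.safetensors", "models/loras"),
    # the Lykon/LoRA link above points at a specific commit; pass
    # revision="7b44164cabdc9a4f34b4ef27f508dacde0a540e0" to hf_hub_download to pin it
    ("Lykon/LoRA", "animemix_v3_offset.safetensors", "models/loras"),
    ("peteromallet/poms-funtime-mlora-emporium", "LiquidAF-0-1.safetensors",
     "custom_nodes/ComfyUI-AnimateDiff-Evolved/motion-lora"),
    ("wangfuyun/AnimateLCM", "AnimateLCM_sd15_t2v.ckpt",
     "custom_nodes/ComfyUI-AnimateDiff-Evolved/models"),
]

for repo_id, filename, subdir in FILES:
    target = os.path.join("ComfyUI", subdir)
    os.makedirs(target, exist_ok=True)
    # local_dir drops the file directly into the target folder
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)
    print("saved", path)
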
Timestamps:
00:00 - Intro
02:01 - Changes for Video Inputs (Z-Depth Mask Creation)
04:44 - Changes for Video Inputs (Generation)
05:39 - Timing an action
07:35 - Understanding the Z-Depth Plane
09:51 - KSampler speed
10:22 - Are "Motion Graphics" in ComfyUI Production Ready?
12:00 - Outro
#stablediffusion #stablediffusiontutorial #comfyui #comfyuitutorial #houdini #zdepth
Views: 2,639

Videos

Animating Products with Z-Depth Maps in Stable Diffusion (Houdini style)
3.6K views · 14 days ago
This video dives deep into a new Stable Diffusion workflow. We're going to explore how to achieve Houdini-like Z-Depth manipulations and generate stunning animations using Stable Diffusion! This technique involves creating depth maps and using them to guide the diffusion process, resulting in smooth animations. Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi Shoutout to R...
Flux LoRAs: basics to advanced (single image, single layers training) with RunPod and AI-Toolkit
8K views · 28 days ago
In this video, I'll be diving into the world of Flux LoRA training and showing you how I've been training my own custom LoRAs. We'll cover: - The basics of Flux LoRA training: How to set up a cloud GPU instance on RunPod and get started with AI Toolkit. - Experimental LoRa techniques: Learn how to train LoRAs with a single image and target specific layers for faster, more efficient results. - B...
Developing Complex Flux Workflows Kinda Sucks
3.4K views · 1 month ago
From Dev's uncertain license to underperforming model releases and the lack of documentation, Flux's ecosystem is proving to be a bit underwhelming - or maybe it's just me and it's a skill issue? Want to support me? You can buy me a ko-fi here: ko-fi.com/risunobushi (no workflow this week because the shown workflow is just a troubleshooting mess) Timestamps: 00:00 - Intro 01:15 - Flux Dev's Li...
A Professional's Review of FLUX: A Comprehensive Look
11K views · 1 month ago
In this video, we explore Flux - the groundbreaking new image generation model from Black Forest Labs. As a fashion photographer and AI workflow expert, I break down: What is Flux and how does it compare to previous models? The different versions: Schnell, Dev, and Pro My professional perspective on Flux's strengths and current limitations Detailed installation guide for ComfyUI Practical workf...
The Only Virtual TryOn I've Been Excited About - CatVTON ComfyUI
6K views · 1 month ago
In this episode of Stable Diffusion Experimental, we explore CatVTON, a fantastic Virtual Try-On (VTON) tool that's great at creating working bases for generative clothes swaps. As a fashion photographer, I explain why this model excites me and how it outperforms previous VTON attempts. Wanna support me? Buy me a ko-fi here: ko-fi.com/risunobushi Workflow: openart.ai/workflows/HaxcrNaVvjae9pdku...
Multimodal AI Video Relight with IC-Light (ComfyUI, non-AnimateDiff)
3.6K views · 2 months ago
Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs! Visit www.runcomfy.com/?ref=AndreaBaioni , and get 10% off for GPU time or subscriptions with the Coupon below. REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon. COUPON CODE: RCABA10 (Expires August 31) Workflow (RunComfy): www.run...
A Great New IPAdapter with Licensing Issues: Kolors
5K views · 2 months ago
A new, very good base model and IPAdapter were released, but the licensing is not that clear: Kolors! Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi Workflow: openart.ai/workflows/4bczMC6DtTZKktEBIUfU Install missing custom nodes via the manager, or from the GitHub via git clone: github.com/MinusZoneAI/ComfyUI-Kolors-MZ Kolors (checkpoint, place it in the models/UNET fo...
Photoshop to Stable Diffusion (Single Node, updated)
3.6K views · 2 months ago
A very quick update to my previous Photoshop to ComfyUI tutorials, which, since Nima's updated their nodes, needed a bit of a refresh. Workflow: openart.ai/workflows/2ZePdBrzTz2Bi00BKJJz Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi This node needs to be installed from GitHub: github.com/iamkaikai/comfyui-photoshop Everything else can be installed via ComfyUI Manager b...
Magnific AI Relight is Worse than Open Source
10K views · 2 months ago
Try RunComfy and run this workflow on the Cloud without any installation needed, with lightning fast GPUs! Visit www.runcomfy.com/?ref=AndreaBaioni , and get 10% off for GPU time or subscriptions with the Coupon below. REDEMPTION INSTRUCTIONS: Sign in to RunComfy → Click your profile at the top right → Select Redeem a coupon. COUPON CODE: RCABP10 (Expires July 31) Workflow (RunComfy): www.runco...
Turning Canva into a Real Time Generative AI tool
2.3K views · 3 months ago
Welcome to another episode of Stable Diffusion for Professional Creatives! In this video, we'll show you how to turn Canva, a versatile graphic design and compositing tool, into a generative AI powerhouse using Stable Diffusion and LLMs. Support me on Ko-fi: ko-fi.com/andreabaioni You'll learn how to: Transform basic Canva images into stunning AI-generated results. Seamlessly integrate Stable D...
Multi Plane Camera Technique for Stable Diffusion - Blender x SD
5K views · 3 months ago
How does an old animation technique help Stable Diffusion become Art Directionable? Clients and Art Directors want to be able to change every little bit of an image, and that's something Stable Diffusion is not great at - unless you start thinking with planes! In this episode of Stable Diffusion for Professional Creatives, we'll see how the tech behind the Multi Plane Camera, something out of th...
Get Better Images: Random Noise in Stable Diffusion
3.9K views · 3 months ago
Are your Stable Diffusion generations not as great as MidJourney's? Discover how a tiny bit of random noise can make a big difference in image quality! In this episode of Stable Diffusion for Professional Creatives, we'll show you how to improve your images using random noise, whether through ControlNet or latent manipulation. Want to support me? You can buy me a coffee here: ko-fi.com/risunobu...
Perfect Relighting: Preserve Colors and Details (Stable Diffusion & IC-Light)
8K views · 3 months ago
Finally, a way to relight people with IC-Light without color shifting and losing out on details. In this episode of Stable Diffusion for Professional Creatives, we finally solve one of the main issues with IC-Light: color shifts! Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi Workflow: openart.ai/workflows/risunobushi/relight-people-preserve-colors-and-details/W50hRGaBR...
Any Node: the node that can do EVERYTHING - SD Experimental
7K views · 3 months ago
Stable Diffusion IC-Light: Preserve details and colors with frequency separation and color match
11K views · 4 months ago
Relight and Preserve any detail with Stable Diffusion
15K views · 4 months ago
Relight anything with IC-Light in Stable Diffusion - SD Experimental
12K views · 4 months ago
Simple animations with Blender and Stable Diffusion - SD Experimental
8K views · 4 months ago
Hyper Stable Diffusion with Blender & any 3D software in real time - SD Experimental
68K views · 5 months ago
Adobe Mixamo & Blender to level up your poses in Stable Diffusion - SD for Professional Creatives
6K views · 5 months ago
Stable Diffusion 3 via API in comfyUI - Stable Diffusion Experimental
4.3K views · 5 months ago
Generating a fashion campaign with comfyUI - Barragán x Stable Diffusion for Professional Creatives
3.4K views · 5 months ago
From sketch to 2D to 3D in real time! - Stable Diffusion Experimental
13K views · 5 months ago
Photoshop x Stable Diffusion x Segment Anything: Edit in real time, keep the subject
6K views · 6 months ago
IPAdapter v2 Released! Old workflows are broken - Stable Diffusion Experimental
8K views · 6 months ago
Generate reference images for moodboards & inspiration - Stable Diffusion for Professional Creatives
3.1K views · 6 months ago
What is Stable Diffusion for Professional Creatives? - Channel Trailer
2.5K views · 6 months ago
The Issue with MidJourney - Stable Diffusion for Professional Creatives
2.6K views · 6 months ago
Stable Diffusion for Professional Creatives - Lesson 2: Photoshop with SD generating in real time
8K views · 6 months ago

COMMENTS

  • @PeterStrmberg007 · 2 days ago

    Thanks for this. fp32 is not only slower, it sometimes gives completely wrong colors! So best stick with fp16. I added an upscaler and a face enhancer. Also found out that making a more accurate mask helps a lot.

  • @tombkn3529 · 2 days ago

    Great video, thank you!! My product image is stretched because of my phone's format, how do I put it in as a square format?

  • @---Nikita-- · 2 days ago

    more tuts for generating 3d models pls

  • @nahlene1973 · 3 days ago

    The 'frequency' part actually sounds a lot like how focus peaking works.

  • @placebo_yue · 4 days ago

    Sadly tripoSR is broken now. Do you have any solution to that bro? Nobody can give me answers or help so far :(

  • @mariorenderbro6370 · 4 days ago

    Subscribed!!!!

  • @wildinsights5033 · 4 days ago

    Being new to this makes me wanna kill myself

  • @yotraxx · 5 days ago

    Quality video and explanations, Andrea! As usual... Thank you for sharing your work

  • @Betonbent · 5 days ago

    Tried it. I would love to use it, but it generated an error, and the startup screen looks very different from the one in your video. Is there any way to simplify this? The appeal of Magnific is the user-friendly interface, but big respect for your work of course

    • @Betonbent · 5 days ago

      Forgot to mention I'm using the online workflow via Comfy

    • @Betonbent · 5 days ago

      So it should be plug and play, but it didn't work for me at all :(

    • @risunobushi_ai · 5 days ago

      hi! which error are you getting? are you running it on runcomfy? if there's an error I can flag it to them and have them fix it.

    • @Betonbent · 5 days ago

      @risunobushi_ai I'm so sorry, I launched the wrong app, I launched the portrait IC, my bad :)

    • @Betonbent · 4 days ago

      @risunobushi_ai I'm sorry, I was opening the wrong version, I used the portrait IC one, my bad :)

  • @hasstv9393 · 6 days ago

    Not cheap at all

    • @risunobushi_ai · 6 days ago

      0.30 USD for a LoRA (on an A40, including pod setup times) doesn't seem much compared to Replicate and CivitAI's pricing, which are arguably still rather cheap for professionals

  • @girodigitalarg9557 · 7 days ago

    Hi there, I'm having this issue: "module 'torch' has no attribute 'float8_e5m2'". Thanks for everything

  • @ManojOmre22 · 7 days ago

    Would love to see an animation workflow from this base tutorial.

  • @andredeyoung · 7 days ago

    🔥

  • @kaymifranca · 8 days ago

    Thank you very much, your explanations are excellent

  • @kaymifranca · 8 days ago

    You are the best!

  • @tommywilczek8720 · 8 days ago

    You consistently put out cutting-edge, incredibly high quality content. Keep it up!

  • @AYAhigheye · 8 days ago

    wooooow cool

  • @steveyy3567 · 8 days ago

    Interesting. I'll try it.

  • @greathawken7579 · 8 days ago

    very interesting

  • @Taz_Olson · 8 days ago

    I love the workflow, but an issue I keep having is that even though the Liquid AF is good, it's still certainly not usable for a lot of things, because if you leave it on for more than a quick transition its instability and flaws become pretty noticeable. I'm trying to work on figuring out how to make it more stable.

    • @risunobushi_ai · 8 days ago

      I'm definitely not an expert on AnimateDiff, there might be better LoRAs out there, but at the stage I'm at with these workflows right now I'm focusing on finding the limitations and the areas of applicability of the tech, so I haven't shopped around for better solutions yet. I'm also very lacking in knowledge about how AnimateDiff came to be, and how it's evolving, and that's something I'll need to look into sometime in the future. Otherwise I'll just keep kicking the can down the road and settle for other people's solutions to issues I don't really have a proper understanding of.

  • @ZergRadio · 8 days ago

    How weird. Earlier today I was watching some Blender motion graphics videos and thought, are there any ComfyUI motion graphics videos? Nothing came up in the titles when I searched earlier. But later during the day your video popped up in my Subscriptions feed. Thanks

  • @Xandercorp · 8 days ago

    I don't understand, what would you use this for though? The clips at the beginning are... questionable?

    • @risunobushi_ai · 8 days ago

      are they questionable in the sense that they're the first tests out of a bit of tech that came out a couple of weeks ago, raw and unpolished? yes.
      are they questionable in the sense that they showcase the complete lack of possibilities in pursuing motion graphics animations with these nodes? I don't think so.
      is it better to do motion graphics, right now, with anything but stable diffusion? probably, yes.
      my reasoning in pursuing this is that it's a neat piece of tech that makes it possible to do something that a) has a commercial application, and b) was not possible with SD before now. whether or not it's actually something that will eventually be commercially viable is a completely different matter.
      on the first video I published about product relighting, there were a ton of people questioning color shifts and detail loss. over the course of the following months, we fixed that, little by little. I'm trying to do the same thing here for motion graphics.

    • @Xandercorp · 8 days ago

      @@risunobushi_ai Thank you for clarifying. I'm not against it, it just seems like a lot of work from the side of the person that hasn't learned Comfy yet. :) I've fiddled with it, but what you guys do is way out there. Like might as well learn Houdini if I'm trying this niche :D It's nice you get to experiment though.

    • @risunobushi_ai · 8 days ago

      oh definitely! right now, learning houdini would be the smartest choice for this kind of stuff, but then again I do like experimenting with new, unconventional ways of doing the same things that other software does better - you never know what could happen down the line. but yeah, I was actually on the verge of cataloguing this as part of my Stable Diffusion Experimental series, but it's the existing commercial application of the motion graphics sector that pushed it over the edge, even if it's really raw tech right now.

  • @kyounokuma · 8 days ago

    Thank you so much for this. I would love to support you on Patreon so that I can see you dive even deeper into the topics you've been covering.

    • @risunobushi_ai · 8 days ago

      thanks! I don't have a Patreon, mostly because I don't have the time to do more in-depth, subscriber only content (I do this for a living, and I'm either bound by NDA on more in depth stuff, or I straight up don't have the time to create more content), and because I try to share everything I can share openly, without subscriptions. I do have a ko-fi page in the video's descriptions, but that's just for donations, there's no content locked behind it.

    • @kyounokuma · 8 days ago

      I forgot that you had a ko-fi page. I’ll sign up over there then. Yeah, I kinda guessed you were a bit too busy for Patreon. Anyway, thanks for sharing openly and for consistently bringing some clarity to these topics.

  • @andredeyoung · 9 days ago

    wow! Thanx

  • @enjon9873 · 10 days ago

    The perfect channel for real pros. No nonsense. No weird testing. No unnecessarily complicated techniques, and straight to the point. Glad I came across your channel.

  • @98alejoso · 10 days ago

    I just stumbled upon your channel, it's exactly what I've been looking for. Thank you so much for all the work put into these tutorials!!

  • @mohammedrashid1910 · 11 days ago

    works great thanks

  • @viktorchemezov927 · 12 days ago

    Image Levels Adjustment math domain error :(
    gamma = math.log(0.5) / math.log((self.mid_level - self.min_level) / (self.max_level - self.min_level))
    ValueError: math domain error

    • @risunobushi_ai · 11 days ago

      hi! this is a known issue, the Image Level Adjustment was updated and it broke the range. I haven't had the time to fix this yet because of my job, I'll try to do it as soon as I have the time to. Unfortunately I can't maintain all my old workflows on a daily schedule.
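
      For anyone hitting the same error: math.log raises "math domain error" whenever its argument is zero or negative, so the gamma line quoted above fails as soon as mid_level is not strictly between min_level and max_level. A minimal illustrative guard (a hypothetical standalone helper, not the node's actual code) could look like this:

      import math

      def safe_gamma(min_level, mid_level, max_level):
          # same formula as in the error above, but guarded:
          # math.log() raises a domain error when its argument is <= 0,
          # i.e. when mid_level <= min_level (assuming max_level > min_level)
          ratio = (mid_level - min_level) / (max_level - min_level)
          if not (0.0 < ratio < 1.0):
              # clamp into a valid range instead of crashing (illustrative choice)
              ratio = min(max(ratio, 1e-6), 1.0 - 1e-6)
          return math.log(0.5) / math.log(ratio)

      # e.g. safe_gamma(0, 0, 255) now returns a finite gamma; the unguarded formula raises.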

  • @viktorchemezov927 · 12 days ago

    Andrea, hi again. I don't know what the problem is, but when I start processing it stops at the regeneration group. The process isn't finished completely. The nodes in the other groups simply don't get processed, while everything before them completes successfully. I don't understand what the reason is, I've already suffered. No errors.

    • @viktorchemezov927 · 12 days ago

      100% 20/20 [01:26<00:00, 4.32s/it]
      Unloading models for lowram load. 1 models unloaded.
      Loading 1 new model
      loaded completely 0.0 319.11416244506836 True
      Processing image 1/1 with shape: (1448, 1086, 3)
      Prompt executed in 93.81 seconds
      That's the end. It's the KSampler in the regeneration group. What's the problem?

    • @risunobushi_ai · 12 days ago

      What’s your hardware setup? What OS are you running?

    • @viktorchemezov927 · 12 days ago

      @risunobushi_ai Google Colab

    • @risunobushi_ai · 11 days ago

      ah, that might be why I'm having trouble helping you specifically. I can only test on Linux and Windows, I do not test for macOS (no CUDA libraries), Pinokio and Google Colab (different environments from what I'm used to).

  • @atihook · 12 days ago

    Great workflow! Amazing. Just a question: how do you accomplish that consistency between the frames? Any tips on that? From the different tests that I did, it was a lot more choppy and varied a lot between frames.

    • @risunobushi_ai · 12 days ago

      Hi! That's usually the job of the AnimateDiff Motion Lora. If you're getting choppy frames, it might not have been triggered properly. Try rerunning the choppy animation with a different seed, with everything else at the same values. Sometimes it happens to me too on first generations.

    • @atihook · 11 days ago

      @risunobushi_ai Thanks :), that was the problem. It wasn't triggering correctly. I had to skip the IP adapter group, even though it was bypassed already ?¿.

  • @gordianodesatado4837 · 12 days ago

    Thank you!!!

  • @xandervera7026 · 13 days ago

    Love it, can't wait to fiddle with this!! ALSO, what do you have against set/get nodes? 😆 I hate spaghetti!! Just set and get so everything looks clean, and color-code them xD

  • @farshidrodsari · 14 days ago

    lol 100% facts

  • @dakshroy1326 · 14 days ago

    LoadAndApplyICLightUnet IC-Light: Could not patch calculate_weight - IC-Light: The 'calculate_weight' function does not exist in 'lora'
    It gives this error, can you help me with it?

  • @akatz_ai · 14 days ago

    Hell yeah, great workflow! I’m also trying to figure out what is possible with Ryan’s node pack! I feel like we just entered a new paradigm and it will take a while to see the full potential of all these new toys 😄

  • @antronero5970 · 15 days ago

    Between you and latent vision channel... Gem dropping all the way! Thank you!

  • @sudabadri7051 · 15 days ago

    Very cool man big props to the guy who made the nodes

  • @maxehrlich · 15 days ago

    Love it, I learn some of the best stuff on this channel! ❤

  • @StringerBell · 15 days ago

    I love you!

  • @thibaudherbert3144 · 15 days ago

    I thought about the same thing with TouchDesigner. I'm wondering what kind of techno/music animation you could create using the audio-reactive particle node. It could replace TouchDesigner, or be used in addition to it

  • @thibaudherbert3144 · 15 days ago

    Really cool tutorial, can't wait for the series to see all the features of these nodes

  • @ryanontheinside · 15 days ago

    YEEYYEEYEEE