Stable Diffusion IMG2IMG Settings Pt. 2 (Consistent Animations!!)

  • Published Aug 21, 2024

COMMENTS • 173

  • @jamiereibl9611
    @jamiereibl9611 1 year ago +44

    Watching the beginnings of a new art form emerge these last few weeks. We're at the cutting edge, thanks so much for putting the effort in to share your techniques!

    • @enigmatic_e
      @enigmatic_e 1 year ago +2

      🙏🏽

    • @AscendantStoic
      @AscendantStoic 1 year ago

      Basically it's a rotoscoping setup that's much cheaper and quicker than the traditional methods.

    • @jonhanson8925
      @jonhanson8925 1 year ago +1

      Seriously, this is an artistic revolution, the likes of which we haven't seen... ever!

  • @soulwynd
    @soulwynd 1 year ago +21

    The img2img alternative script runs the model in reverse to find the noise pattern that would generate an approximation of that image, then uses it as the seed for the actual generation. So it's a way to make fine adjustments to an image. Note that it's only precise with the Euler sampler.
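
    A minimal numeric sketch of the principle described above (a toy example, not the actual script: it only illustrates the idea that deterministic Euler steps can be run backwards to approximately recover the starting point):

        # Euler-integrate a simple ODE forward, then step backwards
        # through the same schedule to recover the starting value --
        # the same trick the alternative test uses to find the noise
        # that would generate (roughly) a given image.
        import numpy as np

        def f(x, t):
            # stand-in for the model's denoising direction at "time" t
            return -0.5 * x + np.sin(t)

        ts = np.linspace(0.0, 1.0, 101)
        x = x0 = 1.0
        for t0, t1 in zip(ts[:-1], ts[1:]):              # forward: noise -> image
            x = x + (t1 - t0) * f(x, t0)
        for t1, t0 in zip(ts[::-1][:-1], ts[::-1][1:]):  # backward pass
            x = x - (t1 - t0) * f(x, t0)                 # approximately undo each step
        print(abs(x - x0))                               # small residual error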

    • @enigmatic_e
      @enigmatic_e 1 year ago +4

      Thank you for sharing this info. It’s good to know.

  • @more_tezza
    @more_tezza 1 year ago +35

    A few other tricks that could help:
    1. Reduce the number of frames without deleting every other frame: you can do that in Adobe Premiere, After Effects, or whatever software you use to export into single frames. Just reduce the frame rate before exporting (see the sketch below for a command-line equivalent).
    2. Use the DAIN AI frame interpolator or similar, so you can work with even fewer frames per second; DAIN will then create the in-between frames and make the motion even smoother.
    3. 2D animations are usually made at 12 or 15 frames per second, so you can reduce the jitter even more by just dropping to 12 or 15 fps, and it won't look weird at all, because animation audiences are used to lower frame rates. For example, Rick and Morty characters move at about 12 fps.
    I am an animator, and thanks for your videos. I am really excited to try this technique out and make some content.
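
    A minimal sketch of trick 1 outside of Premiere/AE, using ffmpeg's fps filter (assumes ffmpeg is installed and on your PATH; "input.mp4" and the "frames" folder are placeholder names):

        # Export a clip as a 12 fps PNG sequence in one step, instead of
        # exporting at full rate and deleting frames by hand.
        import pathlib
        import subprocess

        pathlib.Path("frames").mkdir(exist_ok=True)
        subprocess.run([
            "ffmpeg", "-i", "input.mp4",
            "-vf", "fps=12",       # resample to 12 frames per second
            "frames/%04d.png",     # numbered PNG sequence for img2img
        ], check=True)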

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Thank you for the tricks! I’m definitely interested in reducing the frame rate to something like 12 fps and testing that with frame interpolation! I will definitely try that next time!

  • @ozguraltay
    @ozguraltay 1 year ago +10

    You can control the background by reducing the "denoising strength", but you will lose the changes in the character. I watched some other videos about video generation via img2img, but yeah, "frame blending" is the trick for smoother, non-stuttering AI transition videos. This video is totally dope!
    Thanks for sharing it 👍

  • @Adrischa
    @Adrischa 1 year ago +2

    Well done video.
    Honestly, I love your "how to delete every second file" technique. It's such a smooth little hack.
    Also, I never knew about mass renaming.
    The Stable Diffusion stuff is cool too, though :P

  • @alexsuttie16
    @alexsuttie16 1 year ago +2

    Dude, really appreciate your videos. Awesome work, and thanks for sharing your knowledge on this. This tech is so groundbreaking...

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Thank you! I agree, it's groundbreaking and… it kinda scares me 😂

  • @wickemu
    @wickemu 1 year ago +2

    I've been waiting for this one. I don't have nearly as dramatic subject matter, so my first attempt will be turning my hopping bunny into a hopping potato.

  • @spearcy
    @spearcy 1 year ago +4

    Great tips! I hadn't thought about using the frame blending method to smooth it out, but that makes sense. It also makes me wonder whether using optical flow might help sometimes, too. But in your example, when you hit that frame blending tab, it really made a difference. Thanks again!

  • @victor-antonioali378
    @victor-antonioali378 1 year ago +9

    If you open your original clip in Premiere and create a sequence from the clip, then go to Sequence Settings, you can drop the frame rate, say from 24 fps to 12 fps, then export as a PNG image sequence without having to delete the pictures...

  • @alonshamash9343
    @alonshamash9343 1 year ago

    I love you for making this video, thank you so much

  • @IGORTSARENKO
    @IGORTSARENKO 1 year ago +6

    Here's a tip: create a 12 fps composition in AE when you export your video as a PNG sequence. It makes it easier to import back again, and you don't have to rename files.

    • @ROMTHIRTY
      @ROMTHIRTY 1 year ago

      A note for people who don't work with video: "12 fps" is only right for 23.976 fps footage. Essentially, use half your normal frame rate in the composition.

  • @goldenknowledge5914
    @goldenknowledge5914 1 year ago +1

    Bruh that was the most entertaining and original intro 👍👍

  • @RhettMankind
    @RhettMankind 1 year ago +5

    Maybe also worth checking out Pixel Motion Blur in AE. Double the length to get the original timing back (after deleting every 2nd frame), keep frame blending, and also add Pixel Motion Blur. It interpolates the expected blur on moving images to bring back natural motion.

    • @enigmatic_e
      @enigmatic_e 1 year ago +2

      Nice, is that a plug-in already in AE?

    • @RhettMankind
      @RhettMankind 1 year ago +2

      @@enigmatic_e Yes, it's built into AE by default.

  • @morizanova
    @morizanova 1 year ago

    Deleting half of the frames is a real game changer. Nice and wonderful trick you've shared in this video. Super thanks!!!

  • @iamYork_
    @iamYork_ 1 year ago +1

    Great job my bro

  • @Blaxpoon
    @Blaxpoon 1 year ago +1

    I love your channel!

  • @moeminebrahim1132
    @moeminebrahim1132 1 year ago

    Thank you so much for taking the time to help us. You are a blessing to humanity, brother. I mean it. ❤

  • @JalexRosa
    @JalexRosa 1 year ago +1

    pretty cool!!!

    • @animazon7
      @animazon7 1 year ago

      lmao look who's here
      maybe the next video is an AI-based trilogy

  • @ivideogameboss
    @ivideogameboss 1 year ago +6

    Dude! Your 0:37 animation just got shown live on stage at the Stable Diffusion presentation today. Look up "Stability Diffusion announcements" from Robert Scoble. It appears at 50:27. Congrats!

    • @enigmatic_e
      @enigmatic_e 1 year ago +2

      Wow!!! That's so cool! I'm so happy that people are showcasing what I've worked on as an example. Thank you for letting me know about this, it definitely made my day!

    • @ivideogameboss
      @ivideogameboss 1 year ago +1

      @@enigmatic_e When I saw them show it on stage, I was like, OMG, I just saw a video about how this was made. I had to tell you.

  • @lennyinahemi4088
    @lennyinahemi4088 1 year ago

    This is the first video I stumbled on from this channel. I can already tell it's always top quality 👌

  • @ggdgfd9392
    @ggdgfd9392 1 year ago

    Thanks! I've been searching for how to do this, and this is brilliant :D

  • @arloroan3168
    @arloroan3168 1 year ago +1

    Nice tutorial. Subscribed. Good luck with the channel!

  • @user-atulsharma
    @user-atulsharma 1 year ago

    Keep making the content bro

  • @tantarantaran
    @tantarantaran 1 year ago +1

    Cool stuff, thanks for sharing! Quick tip for the first part of the video: you can set different frame rates (8, 12, etc.) for your AE comp on the original footage in Composition Settings, then export picture sequences before you plug them into SD. This way it's easier to play around with lower frame rates instead of having to delete frames manually. Instead of frame blending, you can try the Time Warp effect in AE, which is pretty much the same, just with more settings. But what I would try next is dropping the video frame rate to something like 10, then using an AI frame interpolation algorithm like TVP instead of the built-in AE one. Those algorithms are trained on cartoons and will give a lot better results than the pixel-morphing AE effects. This is just theory, though; I'm yet to try it next week.

  • @haithamswelem
    @haithamswelem 1 year ago

    Thank you for sharing all of this information... God bless you

  • @kozivideos7983
    @kozivideos7983 1 year ago

    You're awesome and friendly. Thanks for the video

  • @lenny_Videos
    @lenny_Videos 1 year ago

    awesome video mate 🙂

  • @zintenyo5223
    @zintenyo5223 1 year ago

    It works, thanks a lot!

  • @clenzen9930
    @clenzen9930 1 year ago +1

    Great video!

  • @crossmindion
    @crossmindion 1 year ago +1

    Great video ❤

  • @draken5379
    @draken5379 1 year ago +1

    great video bro

  • @kira7x2
    @kira7x2 1 year ago +1

    Looks pretty good

  • @AgustinCaniglia1992
    @AgustinCaniglia1992 1 year ago +1

    The original prompt describes what the original image is supposed to be, and the prompt at the top should describe what you want it to become.

  • @ConsultingjoeOnline
    @ConsultingjoeOnline 1 year ago

    Thanks for the tips!

  • @djhymntv
    @djhymntv 1 year ago

    Dope 🙏🏽✨

  • @zeblatt
    @zeblatt 1 year ago

    Good job man, thank you for putting that out!

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Thank you! I appreciate you checking out the video.

  • @OngtwinsTV
    @OngtwinsTV 1 year ago

    Thank youuuu! I finally found your channel. I've been looking for this style and couldn't find it anywhere. I saw this style from WebUigirls, but they don't have tutorials, though they used a LoRA model. Anyway, this is very helpful! More power!

  • @ShaneNg
    @ShaneNg 1 year ago +1

    AI researchers: Let's have a model with temporal consistency
    Creators: Blend the frames 😂

  • @thaolinh2637
    @thaolinh2637 1 year ago

    Very useful! Thanks!

  • @marcdonahue5986
    @marcdonahue5986 1 year ago

    THANKS DUDE

  • @WOMrecords
    @WOMrecords 1 year ago +1

    Thanks!

  • @RaiseAnchor
    @RaiseAnchor 1 year ago +1

    This is so cool

  • @TutorialMaulanaAdi
    @TutorialMaulanaAdi 11 months ago

    Bro, thank you very much. This tutorial was really helpful for getting better output (90% of my expectations were met).
    - I used high-resolution images as samples (500 images); you can downscale and adjust the resolution to get output more quickly.
    - I used a simple prompt (the output can be reused as the main sample picture for the next image you process).
    - img2img alternative was enabled with the same settings as in this video; that really helped with the output image.
    - I used the OpenPose ControlNet [dw openpose full with the sd15 openpose ControlNet model] to get the same pose and expression as the source image, and the lineart ControlNet [lineart_realistic with the sd15 lineart ControlNet model], which I think is really useful for painting the full area of each source image.
    Again bro, thank you very much 😃😃👍👍👍👍👍

  • @BRAsiaTV
    @BRAsiaTV 1 year ago

    a young woman looking towards camera with green hair holding wearing gloves with a blue shirt, anime, midjoumeyart style

    • @BRAsiaTV
      @BRAsiaTV 1 year ago

      blotches, blown-out, maximalist, teeth, mouth open

  • @lot25
    @lot25 1 year ago

    Thank you!

  • @dincaboutit
    @dincaboutit 1 year ago +1

    you rock!!!

  • @mdahsenmirza2536
    @mdahsenmirza2536 1 year ago

    You can also try another thing: increase the fps to something like 120 and then slow the video down. You get a smoother, less jittery video overall, plus you can spam the de-flicker node in DaVinci Resolve.

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 year ago

    You can also use SD to style animations you render in Blender. SD is more creative and diverse than EbSynth.

  • @FilmFactry
    @FilmFactry 1 year ago +1

    I have been using SD v1.5 and it is SIGNIFICANTLY better than 1.4. In fact, I now prefer the overall quality over MJ. And since MJ uses SD, the next release of MJ v4 will incorporate SD v1.5. Things are moving so fast.

  • @MVARTZ
    @MVARTZ 1 year ago

    So interesting dude, well explained too. At least for an AI beginner like me it was easy to follow. 🤙

  • @christiandarkin
    @christiandarkin 1 year ago

    In After Effects you can export your original frames at a lower frame rate, which should save a bit of messing around. Also, try using Time Warp instead of frame blending. It uses a softer morphing technique between frames, and you'll be able to get away with keeping maybe 1 frame in 5 rather than 1 in 2 for a much smoother animation.
    Also, how about separating the background and foreground so you can give them different settings? Maybe you could even motion track the background, export and run SD on just one frame of it (with some outpainting to extend the edges), then use the tracking data to animate that background. That way you'll have a rock-steady background painting.
    Finally, experiment with a "difference matte". This is a great way to key out only those parts of an image that stay the same between frames. Use a difference matte (slightly blurred) to create a b/w mask, then have After Effects only "repaint" the sections of the image that change from one frame to the next (see the sketch below).
    That way, if you've got (say) a closeup of a face speaking, only the mouth will be redrawn in different positions, leaving the rest of the face untouched.
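
    A rough sketch of that difference-matte idea in OpenCV rather than After Effects (the filenames are placeholders; it assumes two consecutive original frames and their SD-repainted versions, all the same size):

        # Repaint only the regions that changed between original frames;
        # static regions keep the previous styled frame, reducing flicker.
        import cv2
        import numpy as np

        prev = cv2.imread("frame_0001.png")
        curr = cv2.imread("frame_0002.png")
        styled_prev = cv2.imread("styled_0001.png")
        styled_curr = cv2.imread("styled_0002.png")

        # b/w difference matte: bright where the original frames differ
        diff = cv2.cvtColor(cv2.absdiff(prev, curr), cv2.COLOR_BGR2GRAY)
        diff = cv2.GaussianBlur(diff, (21, 21), 0)        # soften the key
        matte = np.clip(diff.astype(np.float32) / 255.0 * 4.0, 0.0, 1.0)
        matte = matte[..., None]                          # HxWx1 alpha

        # composite: the new repaint lands only where motion happened
        out = styled_prev * (1.0 - matte) + styled_curr * matte
        cv2.imwrite("blended_0002.png", out.astype(np.uint8))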

  • @SoCalGuitarist
    @SoCalGuitarist 1 year ago +4

    Hey, this is fantastic! Thanks for putting this together. You should drop a link to Aitrepreneur's video about training styles in your description; he'd probably appreciate it since you reference his Midjourney training. Would anybody by chance happen to know if that frame smoothing option or something similar is available in Apple Motion? I'd like to give it a go, but I refuse to give Adobe a red cent, screw those guys.

  • @Tijootsu
    @Tijootsu 1 year ago

    Thank you

  • @tortenhebu
    @tortenhebu 1 year ago

    thanks my brotha

  • @HyperGalaxyEntertainment
    @HyperGalaxyEntertainment 1 year ago +1

    Thanks for the tutorial~ By the way, what device do you use for lighting the background and yourself? It looks cool~

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Thanks! The lights are called CAME-TV, which aren't sold anymore. But there are plenty of RGB saber-type lights out there.

    • @HyperGalaxyEntertainment
      @HyperGalaxyEntertainment 1 year ago

      @@enigmatic_e thanks a lot~

  • @phlerpdesigns9273
    @phlerpdesigns9273 1 year ago +2

    So I'm curious about your frame rates on this. Is the initial video at 30 fps for your PNG sequence, then you delete half, then take the AI-rendered sequence back into AE? Is it now faster than the original video, or does the frame blending change that? Going to try some of this out now. I like seeing the difference from the more jumpy part; frame blending is a cool trick, somehow I never noticed that box was there.

  • @amj2048
    @amj2048 1 year ago +1

    This is funny, because I was wondering if I should remove every other frame and then use Topaz Video Enhance AI to do a slow-mo version of my video, which would use AI to fill in the frame gaps, and then use Premiere Pro to speed the video back up... but watching your video, you've found an easier method.

  • @lospuntosstudios5149
    @lospuntosstudios5149 1 year ago +4

    Try Flowframes + EbSynth. I bet that could be something cool

    • @enigmatic_e
      @enigmatic_e 1 year ago

      I've heard of Flowframes and even downloaded it, but never tried it! I will try it soon!

    • @lospuntosstudios5149
      @lospuntosstudios5149 1 year ago

      @@enigmatic_e You'll love it. NMKD made GUIs like Flowframes for easy frame interpolation, Cupscale for image and video enhancement, and Stable Diffusion GUI.

    • @lospuntosstudios5149
      @lospuntosstudios5149 1 year ago

      @@enigmatic_e And IF it helps you, PLEEEASE give me a credit in a video, it would mean so much to me 💜🥹

    • @artdehls9100
      @artdehls9100 1 year ago +1

      @@enigmatic_e Flowframes is the best. I've thrashed it pretty hard with 4K MB3D renders. Interpolating crisp fractals is hard, and this will easily do 4 frames out of 5 and you won't notice a thing.
      It also only takes 20 minutes to do a video clip that Topaz wants to spend all night on.

  • @baptiste6436
    @baptiste6436 1 year ago

    thanks!

  • @gemini9775
    @gemini9775 1 year ago

    thanks ^^

  • @traviscislo6260
    @traviscislo6260 1 year ago

    Could "frame blending" be another name for "frame interpolation"?
    Because that would explain why it becomes so smooth and retains the consistency longer. Frame interpolation is an amazing tool that I rarely see used. It basically predicts the frames in between two separate stills. It can easily turn a 30 fps video into 60 fps. It works wonders on old (and new) claymation videos!!
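
    They're related but not identical: in its simplest form, frame blending just cross-fades neighboring frames, while true frame interpolation (RIFE, DAIN, etc.) predicts motion. A minimal sketch of the cross-fade case (the filenames are placeholders for two consecutive frames of the same size):

        # The simplest "in-between" frame: a 50/50 cross-fade. Cheap, but
        # it ghosts on fast motion, unlike motion-predicting interpolators.
        import cv2

        a = cv2.imread("a.png").astype(float)
        b = cv2.imread("b.png").astype(float)
        mid = 0.5 * a + 0.5 * b
        cv2.imwrite("mid_blend.png", mid.astype("uint8"))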

  • @liquidmind
    @liquidmind 1 year ago

    Thanks!

  • @Spindonesia
    @Spindonesia 1 month ago

    Nice tuts sir, please do AnimateDiff tuts for the webui instead of Comfy, I'm your #1 fan!

  • @EranMahalu
    @EranMahalu 1 year ago

    Thanks for that. Also try Flowframes for smoothing the animation

  • @eliweber1891
    @eliweber1891 1 year ago +1

    Thanks for the trick on deleting every other frame on Windows. I wasted so much time the other day doing it manually. 😪 Have you tried Flowframes for the frame interpolation?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Glad I could help!

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      No I haven't, but I've heard a lot about it. I will try it out.

  • @robc3d
    @robc3d 1 year ago +1

    It's interesting. I've tried the technique shown here, on Nerdy Rodent's page, and on Max Novak's channel, and the alternative test script doesn't seem to function the same way at all. Turning on the script immediately breaks any style tokens you add in the prompt, "override denoising strength" doesn't seem to have the same function as you guys display, and the sigma checkbox locks it down even further. Are you guys testing this on the most recent git pull? All the videos showing this off are from a couple of days ago, and I don't think the updated version runs the same. I've tested this on brand new clean installs on 2 machines and it just doesn't function the way you guys are showcasing.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Hey, I can tell you that it's just about tweaking things. I've had my image look good, and suddenly I change a word in the prompt or change a value by one unit and it becomes a total mess.

    • @robc3d
      @robc3d 1 year ago

      @@enigmatic_e Appreciate the reply, and I'm sure that plays a part, but using the same source images, settings and prompts yields completely different results. Particularly with the override checkboxes that were introduced, which don't exist in any of the videos on this topic, including yours, since they were made before the update. Something has changed recently; I guess anyone trying this now will have to wait for clarification as people experiment.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Ahh ok. I wasn't aware of an update. I will def look into it.

  • @magikodj
    @magikodj 1 year ago

    The generated image stays similar to the original image. I want to generate a woman on the moon, but in that case the result is no longer stable.

  • @PaulikasKarolis
    @PaulikasKarolis 1 year ago

    Great video! Very useful tips! Could you tell us the frame rate of your original video, and once you import your custom PNGs, what's the "assumed frame rate" in After Effects? Thanks!

  • @swannschilling474
    @swannschilling474 1 year ago

    Maybe image morphing could help? There is a stable-diffusion-morphing repo on GitHub...

  • @soda242
    @soda242 1 year ago

    What a beautiful video, it sparks my imagination, thanks for the lessons... but where do I find the program?

  • @rjdstudios4901
    @rjdstudios4901 1 year ago

    Very helpful! I'm rendering a long video now and I'm going to try this. I do have a question: I've been shooting my videos at 60 fps in the hope of smoother end results, but given this video, would you suggest shooting at 24 or 30 fps and still deleting every other frame?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      You can shoot at 60 fps and test the results. You can always remove frames afterward if you want to.

  • @ExplicableCashew
    @ExplicableCashew 1 year ago

    Me, thinking about deleting every other frame: should be straightforward in Bash / Git Bash, just make a seq of odd numbers and pipe it into xargs rm with a glob mask.
    enigmatic_e: literally just resizes the Explorer window
    Me: (•́Ô•̀)
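
    For anyone who does want to script it, a tiny sketch of the same trick (assumes the frames live in a ./frames folder and sort into the right order by name):

        # Delete every second frame, keeping the first one.
        import os
        from glob import glob

        for path in sorted(glob("frames/*.png"))[1::2]:
            os.remove(path)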

    • @enigmatic_e
      @enigmatic_e 1 year ago

      😂 hey, as long as it gets the job done

  • @HB-kl5ik
    @HB-kl5ik 1 year ago

    Do more people not use this? I've been using it all the time for the last two weeks. I should start posting more lol

  • @imshypleasebenicetome.5344
    @imshypleasebenicetome.5344 1 year ago +1

    Good thing I got 10 GB of VRAM

  • @timhendson2550
    @timhendson2550 1 year ago

    So does the whole video already have to be in a stylized frame? My video has a lot of different people and scene changes. If I only style the first 10 seconds, will EbSynth know to stylize the rest of the video into something like a drawing, or will it only stylize the frames I gave it? I have a 5-minute clip I want to stylize, but I only have stylized frames for the first 10 seconds, and the clip has a lot of moving parts.

  • @adamblazer901
    @adamblazer901 1 year ago

    Batch Img2Img
    AttributeError: 'NoneType' object has no attribute 'ema_scope'
    Fix:
    www.reddit.com/r/StableDiffusion/comments/yjbc2d/batch_img2img_help/?context=3

  • @forgottenalex
    @forgottenalex 1 year ago

    I wonder how well the blending would work if you ran the images through RIFE or DAIN

    • @adam.dachis
      @adam.dachis 1 year ago +1

      I was thinking this too. I doubt the results would be perfect, but I think there's a strong chance they'd look better than AE's frame blending. I'm also curious how Adobe's other frame interpolation methods would perform; I'm guessing worse (in this situation). I don't think it'll be all that long before this is an actual feature in SD, though. I don't know the ideal path to achieving that, but given that many AI algorithms have focused on compensating for change over time in the last couple of years, I wouldn't be surprised if it happens. Image generation is already progressing at an insanely fast rate. I wish I had more time to play around with these techniques right now.

  • @Comic_Book_Creator
    @Comic_Book_Creator 1 year ago

    Thank you! Also, I wonder if you can do the same on Google Colab... maybe slower, but maybe not. Most people use Colab.

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Yes, you can technically use this version of SD on Colab. I show how to use it in this video: ua-cam.com/video/qmnXBx3PcuM/v-deo.html (check the second half of the video).

  • @henryleonardi5368
    @henryleonardi5368 1 year ago

    I feel like frame interpolation could be cool instead of frame blending

  • @fillill-111
    @fillill-111 1 year ago

    Thank you for this video! I get an error using img2img alternative test: "OutOfMemoryError: CUDA out of memory". What can I do about it? I can't buy a new video card for now.

  • @RHYTE
    @RHYTE 1 year ago

    nice one! Where/how did you get that art style exactly?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      Art styles can be made yourself, but it takes a lot of steps. Aitrepreneur has some tutorials on how to do that. ua-cam.com/video/tgRiZzwSdXg/v-deo.html

    • @RHYTE
      @RHYTE 1 year ago

      @@enigmatic_e thanks a lot mate!

  • @bhsmith5496
    @bhsmith5496 1 year ago

    When I delete the odd frames and then bring the even ones into After Effects or even Media Encoder, it shows color bars or "media offline" for the deleted frames. Can anyone suggest a remedy?

  • @user-bs3jd2hj3z
    @user-bs3jd2hj3z 1 year ago

    That moment when you want Stable Diffusion on your iPhone... has someone already made an app that applies SD filters to the camera on the go?

  • @DavidSilva-xp1iw
    @DavidSilva-xp1iw 1 year ago

    Please do one using Google Colab, I currently do not have enough GPU

  • @whataworld8378
    @whataworld8378 1 year ago

    Frame blending works (I was using it), but I still don't get stability in SD (even with a fixed seed and a proper setup). And it's difficult to track video properly, at this stage.

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      I know, it's a bit tricky to get it right. There are a lot of variables that go into it, and now that I've updated to the new UI, things seem to be different. I will def be making another video on the topic.

    • @whataworld8378
      @whataworld8378 1 year ago

      @@enigmatic_e Thank you, great work! It's fun learning and growing together with this technology.

  • @lioncrud9096
    @lioncrud9096 1 year ago

    Was there a reason you didn't put your negative prompts into the Script settings?

  • @Boltron
    @Boltron 1 year ago

    What frame rate are your videos set to? Say you have it set to 60 fps and keep every other frame like you suggest; would changing the frame rate to 30 fps and keeping every frame result in the same smoothness?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Mmm, I think 30 fps works slightly better. 60 might be too many frames in a short amount of time. Test it and see how you like the results.

  • @dancebuddhapanda8018
    @dancebuddhapanda8018 1 year ago

    Hi, please tell me there's some way to install img2img on macOS 😢
    Otherwise I need to buy a new Windows computer… no…

  • @thedarkangel613
    @thedarkangel613 1 year ago

    Since you have used both, what do you think is better: img2img in SD like in this video, or EbSynth? Pros and cons of both?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      I would say using them together creates great results when there are subtle movements. SD img2img is getting better now with ControlNet, though. Have you heard of it?

    • @thedarkangel613
      @thedarkangel613 1 year ago

      @@enigmatic_e no I haven't. Imma check that out

  • @oboy9090
    @oboy9090 1 year ago

    If you are animating a video with audio, like singing, deleting every other frame will shorten the final animation and cause the audio to go out of sync, correct?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Just make the sequence the same fps and it should look fine.

  • @wiiu7640
    @wiiu7640 1 year ago

    Blending a RIFE frame-interpolated version with an every-frame version could make the video more consistent and more accurate. Another trick is converting the original video to black and white first: that helps keep the brightness consistent, and then you can adjust the color saturation and vibrance to bring the color back. If the color is shifting around a lot, you might need to just blend in the original video. In my opinion, if you aren't willing to put in the work to keep the frames consistent by going over each converted image by hand, then you should just go with the jittery version. Trust me, nailing the style is a lot better than doing a poor job of trying to make it consistent. The AI will get better over time, so hopefully it won't be a problem in the future.
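
    One way to implement that brightness-consistency idea (a hedged sketch, not the commenter's exact workflow; the filenames are placeholders and both frames must be the same size):

        # Keep the AI frame's color but borrow the original frame's
        # luminance, so overall brightness can't drift between frames.
        import cv2

        orig = cv2.imread("orig.png")
        styled = cv2.imread("styled.png")

        lab = cv2.cvtColor(styled, cv2.COLOR_BGR2LAB)
        lab[:, :, 0] = cv2.cvtColor(orig, cv2.COLOR_BGR2LAB)[:, :, 0]
        cv2.imwrite("styled_lumafix.png", cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))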

  • @HogwartsStudy
    @HogwartsStudy 1 year ago

    Ouch, any way to blend frames in Blender?

  • @GeekTrading
    @GeekTrading 1 year ago

    Could you edit video like the Second Gear YouTube channel?

  • @studiokevinabanto2524
    @studiokevinabanto2524 1 year ago

    Do you have the Disco Diffusion style for this? I see that in the description there is only the one from Midjourney.

  • @oldleaf3755
    @oldleaf3755 1 year ago

    ok

  • @Shahriar.H
    @Shahriar.H 1 year ago

    I followed what you were doing, but when I turn on img2img alternative my image completely changes and doesn't resemble my original image. Any idea why this might be happening?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Try clicking on the sigma adjustment checkbox at the bottom of the script. Sometimes that fixes it.

  • @reezlaw
    @reezlaw 1 year ago +1

    Aaah, you should have redone your wife's shot with the decimated-frames technique to show the difference

    • @enigmatic_e
      @enigmatic_e 1 year ago

      I know! I thought of that very thing after watching it over. 😅

    • @reezlaw
      @reezlaw 1 year ago

      @@enigmatic_e maybe an idea for another video ☺

  • @DrMacabre
    @DrMacabre 1 year ago

    How about using some AI interpolation instead of frame blending?

    • @enigmatic_e
      @enigmatic_e 1 year ago +1

      I will try that. Maybe even do a short video showing the results. Thank you!

  • @silvakaioagv
    @silvakaioagv 1 year ago

    How can I get to this site? I use macOS and I can't find it at all.

  • @NAHIMNICE
    @NAHIMNICE 1 year ago

    If you delete half of the frames, will you still be able to sync any audio you may have?

    • @enigmatic_e
      @enigmatic_e 1 year ago

      Yes. When you import the sequence into your editing program, just interpret the frame rate to match the original clip. Most programs will automatically make it 30 fps, so change it if you have to. Then just change the speed to double the length and it should sync with the audio.
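
      (A worked example with illustrative numbers: a 10-second clip at 30 fps is 300 frames, and deleting every other frame leaves 150. Played back at 30 fps those 150 frames last only 5 seconds, but interpreted at 15 fps, or stretched to 200% duration, they last 10 seconds again, so the original audio lines up.)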