Deforum + Controlnet IMG2IMG (TemporalNet)

COMMENTS • 134

  • @enigmatic_e
    @enigmatic_e  9 months ago +1

    NOTE: Make sure you're using a 1.5 model with this settings file, and turn off any unused ControlNets.

    • @ysy69
      @ysy69 8 months ago

      What happens when you use an SDXL model?

    • @enigmatic_e
      @enigmatic_e  8 months ago +2

      I think it's possible; you would just need SDXL ControlNets, and there aren't as many of those for XL.
      @@ysy69

  • @RajithX
    @RajithX 1 year ago +3

    How do I fix this error: 'Video file C:\Automatic1111\stable-diffusion-webui has format 'c:\automatic1111\stable-diffusion-webui', which is not supported. Supported formats are: ['mov', 'mpeg', 'mp4', 'm4v', 'avi', 'mpg', 'webm']'. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.

    • @TheRainbowPilot
      @TheRainbowPilot 1 year ago

      It was a bug in the latest build. It should be patched now; please update Deforum.

  • @bonsai-effect
    @bonsai-effect 1 year ago +6

    Very easy to follow tutorial... so happy that as usual, you don't jump all over the place like some other ppl. Always a pleasure to watch and learn from your tuts! (mega thanks for the settings file too!!)

  • @eyevenear
    @eyevenear 1 year ago +8

    Instant like! I think the best solution for now is to separate the character from the background, so you can process foreground and background with more freedom and consistency, and only then put them back together in AE after a good deflickering pass.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      True

    • @tamiltrivia
      @tamiltrivia 1 year ago +2

      How do you separate the character from the background?

    • @eyevenear
      @eyevenear 1 year ago +1

      @@tamiltrivia Rotoscoping, or you shoot the original video against a green screen, or any solution in between.

    • @xShxdowTV
      @xShxdowTV 1 year ago

      With a mask
      @@tamiltrivia

    • @TheKuzmann
      @TheKuzmann 1 year ago +1

      @@eyevenear Or you can use one of the many background-removal extensions available for SD, like the Depthmap script, for example...
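      If you'd rather script the separation than rotoscope by hand, here is a minimal sketch (not from the video) that assumes the rembg package is installed and that your frames are already extracted as PNGs in a hypothetical "frames" folder; it writes a foreground layer with an alpha channel that you can stack over the processed background in AE:

          # pip install rembg pillow
          from pathlib import Path
          from PIL import Image
          from rembg import remove

          in_dir = Path("frames")        # hypothetical folder of extracted frames
          out_dir = Path("frames_fg")
          out_dir.mkdir(exist_ok=True)

          for frame in sorted(in_dir.glob("*.png")):
              img = Image.open(frame)
              fg = remove(img)           # RGBA image with the background made transparent
              fg.save(out_dir / frame.name)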

  • @EarmWermChannel
    @EarmWermChannel 10 months ago

    It's rare for things to work out so quickly in this field. Hats off to you, your explanation was solid.

  • @bobwinberry
    @bobwinberry 10 months ago

    Great video - thanks. FYI: my settings kept crashing and I tried a lot of different things to stop it, but it seems the only thing that worked was limiting the Height/Width settings to Horizontal: 1024 x 576 and Vertical: 576 x 1024 - thanks again for the great video and info.

  • @HopsinThaGoat
    @HopsinThaGoat 1 year ago +2

    that Mario clip is amazing

  • @NguyenNhatHuyDGM
    @NguyenNhatHuyDGM 1 year ago +3

    I got this message after the first frame generated. Can someone help me fix this? Thanks.
    Error: 'OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op' '. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.
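    That OpenCV error generally just means two images being combined have different dimensions. A rough sanity-check sketch (not from the video - it assumes your extracted frames sit in one folder and that W and H are the width/height you set in Deforum; the folder name and values here are illustrative):

        import cv2
        from pathlib import Path

        W, H = 576, 1024                         # the width/height set in Deforum (example values)
        frames_dir = Path("controlnet_frames")   # hypothetical folder of extracted frames

        for p in sorted(frames_dir.glob("*.jpg")):
            img = cv2.imread(str(p))
            if img is None:
                continue
            h, w = img.shape[:2]
            if (w, h) != (W, H):
                print(f"{p.name}: {w}x{h} != {W}x{H}, resizing")
                cv2.imwrite(str(p), cv2.resize(img, (W, H)))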

  • @kenrock2
    @kenrock2 1 year ago +1

    I love you very much, man... It took me a lot of attempts to troubleshoot why ControlNet wasn't working properly due to a conflicting extension. If you have trouble understanding what's going on in the terminal, it's best for beginners to do a clean install of A1111 with just the Deforum + ControlNet extensions.
    By the way, A1111 doesn't really work well on the old version 1.4, which causes a lot of buggy UI; I switched to version 1.5.2 and it works better after that.
    I got amazing results following this tutorial... thanks a lot.

    • @carsoncarr-busyframes619
      @carsoncarr-busyframes619 1 year ago

      Yeah, I've been troubleshooting for a few hours; some conflict is causing Deforum to not load even though it's installed. Thanks, I'll try 1.5.2 (currently using 1.6).

    • @kenrock2
      @kenrock2 1 year ago +1

      @@carsoncarr-busyframes619 Also note that the recent 1.6 update doesn't work well with this tutorial; even with the latest Deforum update it somehow doesn't use ControlNet properly (clean install). So stick to version 1.5.2 - I've had no issues since downgrading.

  • @GuyTheAnimated
    @GuyTheAnimated 1 year ago

    Thank you for this! Stable Diffusion, with all its possibilities and things yet to be discovered, really is a driving force for me :)

  • @theunderdowners
    @theunderdowners 1 year ago

    Doumo Doumo, This is the most coherent/consistent run I've done, thank you very much.

  • @marcobelletz4734
    @marcobelletz4734 1 year ago +3

    Really cool, like all of your content, but like many other people I get a weird error:
    load_img() got multiple values for argument 'shape'. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \
    I changed the slashes as suggested but nothing changed.
    I checked whether the input frames were correctly generated, and yes, I have all the input frames in separate folders, as many as there are ControlNet modules enabled.
    Any ideas on how to fix this?
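    On the slash advice inside that error message: if you'd rather normalize a path programmatically than edit it by hand, here is a tiny sketch using only the Python standard library (the path shown is hypothetical):

        from pathlib import PureWindowsPath

        raw = r"C:\Automatic1111\stable-diffusion-webui\video.mp4"   # hypothetical Windows path
        print(PureWindowsPath(raw).as_posix())
        # -> C:/Automatic1111/stable-diffusion-webui/video.mp4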

  • @Injaznito1
    @Injaznito1 1 year ago

    Thanx for the file and tutorial E! I've been drag'in my feet using TemporalNet in my workflow. ima give this a try on my current project.

  • @graphicsseion790
    @graphicsseion790 1 year ago +3

    Hi, thanks for your videos. I have tried several times, with several videos in a row, to get this style of animation in Deforum + ControlNet. The problem is that even following all your instructions, the frames in the output are random and have nothing to do with the init video. The video path in the video init and in the ControlNets is correct, and I have played with the strength and CFG values, even with the comp alpha that I read about in comments on other videos. I would appreciate some light on this, thanks again.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      I would suggest you join my Discord; there are people there who have solved many issues. It's also easier because you can share screenshots. The link to the Discord is in the description.

    • @MrKrealfedorenko
      @MrKrealfedorenko 1 year ago +1

      I think I have the same problem. The links for the video (with the dancer) are right, the settings are the same... but after generation the character is not moving... :-/

  • @georgekolbaia2033
    @georgekolbaia2033 1 year ago +2

    Hey! Thanks for yet another great tutorial!
    I was wondering, what are the advantages and disadvantages of Deforum+TemporalNet vs. Colab+Warpfusion? When would you use one over the other? Which one gives you better results?
    I get that Deforum is local and free as opposed to Colab+Warpfusion, but are there any other important differences that affect the quality of the output?

    • @enigmatic_e
      @enigmatic_e  1 year ago +2

      I would say Warp gives more temporal coherence and consistency. But Deforum is a great alternative if you can't afford Warp. I've seen some Deforum stuff that looks very close to Warp.

  • @blockchaindomain
    @blockchaindomain 1 year ago

    THANK YOU! THIS REALLY HELPED ME LEARN A LOT!!!!!

  • @judgeworks3687
    @judgeworks3687 1 year ago

    Love your videos. Also, nice call-out to you from Corridor Crew on a recent video of theirs.

  • @dmitrym.6578
    @dmitrym.6578 1 year ago

    Thank you very much. Very informative video.

  • @anyosaurus8545
    @anyosaurus8545 1 year ago +1

    Hi, why isn't my resulting video the same as my video init? My result follows the prompt but doesn't consistently look like my init video :(

  • @sergiogonzalez2611
    @sergiogonzalez2611 6 months ago

    Wonderful work, man.

  • @SnapAir
    @SnapAir 1 year ago

    Thanks for the tutorial legend!

  • @ParvathyKapoor
    @ParvathyKapoor 1 year ago

    Any idea how to make a non-flickering video?

    • @xShxdowTV
      @xShxdowTV 1 year ago +1

      Tile + TN, then deflicker in DaVinci.

  • @fedoraq2d3dcreative61
    @fedoraq2d3dcreative61 11 months ago

    Hi,
    thanks for the great training video.
    I have a question: where can I find the source of the video with the dancer?
    Thank you :)

  • @reallybigname
    @reallybigname 1 year ago

    Right on.

  • @jamminmandmband
    @jamminmandmband 1 year ago +2

    In the past I have gotten this to work, but this time around I do not know what is happening. I have followed your instructions but keep getting this error:
    User friendly error message:
    Error: images do not match. Please, check your schedules/ init values.
    I have been using ChatGPT to work out what is going on, but nothing seems to resolve this.
    Any thoughts?

    • @dagovegas
      @dagovegas 8 months ago

      I have the same issue, did you manage to fix it?

    • @jamminmandmband
      @jamminmandmband 8 months ago +1

      @@dagovegas I have not solved it yet. But honestly, I have not messed with it much recently.

    • @dagovegas
      @dagovegas 8 months ago +1

      @@jamminmandmband I figured out an alternative solution.
      Use each frame of the video as input for img2img with ControlNet (pose, HED and soft edge).

  • @SatriaTheFlash
    @SatriaTheFlash 1 year ago +1

    This is what I was waiting for, because I've been struggling with AI animation, especially Warpfusion, since I can't buy Colab Pro.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      This is exactly why I made this 👍🏽

  • @gonefull5036
    @gonefull5036 1 year ago

    Hi bro, I'm happy watching your tutorial, it's very amazing. One question about Deforum's "init image": does it work with an image sequence?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Mmm, not sure, I've never done it that way, but I think it has to be a video file.

  • @aiximagination
    @aiximagination 1 year ago

    Awesome video!

  • @blender_wiki
    @blender_wiki 1 year ago

    To achieve more consistent results with your videos, try using the MagicMask and Depth nodes in your DVR software, then change the background by blurring it or replacing it with a flat one. Avoid using MP4 files, as they can introduce temporal compression artifacts that lead to unwanted noise and loss of coherence. Instead, opt for image sequences or MP4 files with zero compression for better outcomes.
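    If you want to go the image-sequence route mentioned above, here is a minimal extraction sketch (assumes opencv-python is installed and a hypothetical source clip name; PNG is lossless, so no extra compression artifacts are introduced):

        import cv2
        from pathlib import Path

        video_path = "input.mp4"        # hypothetical source clip
        out_dir = Path("frames_png")
        out_dir.mkdir(exist_ok=True)

        cap = cv2.VideoCapture(video_path)
        i = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(str(out_dir / f"frame_{i:05d}.png"), frame)
            i += 1
        cap.release()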

  • @NoName-yd5cp
    @NoName-yd5cp 1 year ago

    Great and quick dive into Deforum. Ever tried to auto-mask people with the EbSynth extension for A1111 -> PNG extraction and feeding the mask sequence back into Deforum? My PC isn't beefy enough to try :/

  •  1 year ago +1

    Hello, your video tutorial is very good. I almost got the same result, but in my case the first image is generated based on the first frame of my video, while the others no longer follow the video and start generating random images of Mario. I already checked all the settings and couldn't solve it. Any ideas? Thanks.

    •  1 year ago

      @fryvfx I will review the type of movement. Thank you very much!

  • @ValiCas
    @ValiCas 1 year ago +1

    Thanks for the tutorial! :) I'm having an issue: I followed the steps, loaded the settings file and copied/pasted the path correctly everywhere, but the final result won't follow the init video and does a random animation based only on the prompts. What could it be?

    • @kenrock2
      @kenrock2 1 year ago +2

      I also faced the same problem. There is an issue if you are using A1111 version 1.6: ControlNet doesn't really register properly in that version, so use version 1.5.2... Also check the terminal to see whether any errors occur in ControlNet; that is where you can start troubleshooting.

  • @LifeSwapped
    @LifeSwapped 1 year ago

    I love you!

  • @solomslls
    @solomslls 1 year ago

    Good video. I have a question: can you use a CLOTHES LoRA in the prompt?? It would help with outfit consistency, and might give a better result if it's possible to add it!

    • @enigmatic_e
      @enigmatic_e  1 year ago

      I don't see why you couldn't use a LoRA to change clothes. I technically gave this guy a Mario outfit when he wasn't wearing one, but if, for example, you have someone dressed as the character, you can probably get some amazing results.
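      For reference, in A1111 a LoRA is invoked from the prompt with <lora:filename:weight> syntax, so a clothes LoRA would look something like this (the LoRA name here is made up purely for illustration):

          a man dancing, wearing a red and blue plumber outfit, <lora:mario_outfit:0.8>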

  • @GoodArt
    @GoodArt 1 year ago

    you rule, thanks.

  • @MrPlasmo
    @MrPlasmo 1 year ago +2

    Everything was working fine until I got this:
    User friendly error message:
    Error: Video file C:\Users\k\stable-diffusion-webui has format 'c:\users\k\stable-diffusion-webui', which is not supported. Supported formats are: ['mov', 'mpeg', 'mp4', 'm4v', 'avi', 'mpg', 'webm']. Please, check your schedules/ init values.
    anyone know why? Deforum worked for 2 days prior... :(

    • @MrPlasmo
      @MrPlasmo 1 year ago +2

      Found the answer - it's a bug in the new version:
      For people who get the error with video ControlNet: to downgrade, go to the Deforum folder under Automatic1111's extensions and run the command git checkout 0949bf428d5ef9ce554e9cdcf5fc4190e2c1ba12 - it will downgrade to the Aug 13 version.
      I guess once the bug is fixed you may need to reinstall Deforum or run git checkout master.

    • @Switch620
      @Switch620 1 year ago

      @@MrPlasmo Thanks man!

  • @tomibeg
    @tomibeg 1 year ago

    Hey! Nice video, thanks. By the way, have you tested whether it's possible to run a similar process with TemporalNet v2 and an init image?

  • @aminshallwani9369
    @aminshallwani9369 1 year ago

    Thanks for sharing this video. I need to know: if we have our own prompt, generated an image from img2img, and then paste that prompt into the prompt area, how will that work? I did that and got this error:
    TypeError: 'NoneType' object is not iterable
    *END OF TRACEBACK*
    User friendly error message:
    Error: 'NoneType' object is not iterable. Please, check your schedules/ init values.
    I need assistance, please.
    Thanks

  • @Venkatesh_006
    @Venkatesh_006 1 year ago +2

    Sir, I am getting this error: ValueError: 1 is not in list. What should I do to solve it?

  • @artyfly
    @artyfly 1 year ago

    cool! thanks!

  • @Ray-01-01
    @Ray-01-01 1 year ago

    Bro, I wanted to ask you something; could you tell me, please? Have you seen the various AI videos that show the 'evolution of something' - how 'something' changed over time? (For example, there is an AI video showing the 'evolution of fashion'. At the beginning, the animation shows the fashion styles of the beginning of the last century, then the 50s-60s-70s and so on up to our time.)
    Please help, bro - I've tried to do it 1000 times in Deforum, but I can't get that kind of animation at all.
    (I know the question doesn't apply to this video, but nevertheless I hope for your answer.)

  • @Moise_s.
    @Moise_s. 1 year ago

    Just one question: copying and pasting the Settings File isn't working for me.

  • @epicddgt
    @epicddgt 1 year ago

    Hi enigmatic, I came across your videos some time ago. I was wondering, do you know of or recommend a tutorial for installing it on a Mac with an M1 chip? Hope you have a great week!

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      I don’t know unfortunately, but maybe this helps? github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon

  • @aarvndh5419
    @aarvndh5419 1 year ago

    Thanks so much for the video and the settings file

  • @eyeless98
    @eyeless98 1 year ago

    Great video!!! Have you noticed how much VRAM 3 CNs use? I want to upgrade from a 3060 Ti to a 4070 for that extra 4GB of VRAM, because right now I can't use 3 CNs without a generation taking 8 hours.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      I used to run 3 ControlNets when I had a 3080 10GB, but I couldn't push the resolution too high.

    • @joonienyc
      @joonienyc 1 year ago

      Same here - a 3060 can't do more than 3, it's just too long of a wait.
      @@enigmatic_e

  • @bardaiart
    @bardaiart 1 year ago

    Thanks a lot! :)

  • @keYserSOze2008
    @keYserSOze2008 1 year ago

    Real digital artists need to get on this, they absolutely destroy these pretenders... "Looks smooth to me" 🤣

  • @Herman_HMS
    @Herman_HMS 1 year ago

    Great tutorial, and thanks for the settings file!

  • @yoktavyakhanna6967
    @yoktavyakhanna6967 1 year ago +1

    Hey, it's running and generating pretty well, but for some reason it isn't actually following the video and is creating something of its own. Is there any way to control how similar or different the output is from the original video?

    • @bonsai-effect
      @bonsai-effect 1 year ago +1

      Try disabling the ControlNet with SoftEdge.

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      I would play with the Tile strength, CFG, or comp alpha schedule. Also make sure you're adding the video path to all the ControlNets and the main init video.

    • @yoktavyakhanna6967
      @yoktavyakhanna6967 1 year ago +1

      @@enigmatic_e Thank you, it worked after setting the Comp Alpha higher. Love your tutorials and your work with Corridor Crew, please keep it up.
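      For anyone tuning the same knobs: Deforum schedules are keyframed strings of the form frame: (value). The numbers below are illustrative only, not taken from the video's settings file:

          0: (0.8)
          0: (1.0), 60: (0.6)

      The first holds a constant value for the whole run; the second ramps the value down between frame 0 and frame 60.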

  • @elijahdavis-xh2zt
    @elijahdavis-xh2zt 1 year ago

    How would you compare Stable Warpfusion with Deforum Stable Diffusion?

  • @m3dia_offline
    @m3dia_offline 1 year ago

    How would you compare this to Warpfusion in terms of being flicker-free and consistent?

  • @Panchocr888
    @Panchocr888 1 year ago

    Hey enigmatic_e, thanks, this video was very helpful. By any chance do you have a video where you explain some of the prompts you use? I don't quite get, for example, why some of the prompts have (:0,8) next to the words. Thanks in advance!

    • @enigmatic_e
      @enigmatic_e  1 year ago

      No, I don't, but I should make one.
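      Until that video exists, the short version: those are A1111 prompt attention weights. For example (illustrative words):

          (red overalls:0.8)    puts less emphasis on "red overalls"
          (red overalls:1.3)    puts more emphasis on it

      Plain parentheses apply a default 1.1x boost, and the (:0,8) in the question is presumably this syntax written with a decimal comma.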

  • @FirdausHayate
    @FirdausHayate 7 months ago

    I got this error: 'OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\core\src\arithm.cpp:650: error: (-209:Sizes of input arguments do not match) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function 'cv::arithm_op''. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli. Can anyone help or solve it?

  • @yanning5116
    @yanning5116 1 year ago

    Hello, thank you very much for your video. One thing though: I can't open your link for the settings file. Is there another way to solve this problem? Thank you very much again.

  • @carsoncarr-busyframes619
    @carsoncarr-busyframes619 1 year ago

    Anyone else getting "Error: ''NoneType' object is not iterable. Please, check your schedules/ init values."? I've been trying to get this to work for almost a week and narrowed it down to an issue with ControlNet. When I disable the ControlNets, it works but is obviously not temporally consistent. I've tried it with Automatic1111 1.6 and 1.5.2... I've tried using enigmatic's settings file and also from scratch. ControlNet IS working with still images, so maybe something broke with the latest version of Deforum?

  • @TheMaxvin
    @TheMaxvin 1 year ago

    Which type of ControlNet did you use for this animation?

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      It’s in the settings file I provided in the description

    • @TheMaxvin
      @TheMaxvin 1 year ago

      @@enigmatic_e Thanks, one more question after all: does the order in which the ControlNet models are applied matter?

  • @siriotrading
    @siriotrading 1 year ago +1

    I followed all the steps but I get this error after the first frame:
    Error: OpenCV(4.8.0) (-209: input argument sizes do not match) The operation is neither "array op array" (where arrays have the same size and same number of channels), nor " array op scalar" , nor 'scalar op array' in function 'cv::arithm_op'
    . Check your programs/init values please. Also make sure you don't have a backslash in any of your PATHS - use / instead of \.
    What could it be caused by? Has anyone else had this problem?

    • @inpsydout
      @inpsydout 1 year ago

      I'm getting this same error..

  • @NotThatOlivia
    @NotThatOlivia 1 year ago

    nice!!!

  • @ronnykhalil
    @ronnykhalil 1 year ago

    w0w!

  • @MajomHus
    @MajomHus 1 year ago

    You will have a lot fewer extra things appear if you stick close to the model's native resolutions, i.e. 512 or 768.

  • @zeeshistargamer
    @zeeshistargamer 1 year ago

    Great, wonderful video! But please, can you help me with this error? I watch your videos daily, but I hit this error when I enable ControlNet in Deforum to generate the video: "Error: ''NoneType' object is not iterable'. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli." If I disable ControlNet there is no error, but the video doesn't match the reference video. I've been trying to solve it for a month but haven't found any solution. Please, can you help me with this... Thanks ♥♥♥

  • @AIWarper
    @AIWarper 1 year ago +1

    Does this work with SDXL models and LoRAs? Or is TemporalNet still limited to 1.5?
    Great video by the way. I look forward to every notification I get when you post!
    I have a recommendation if you are accepting them - do one of these without a humanoid. Everyone is using humans... but I'd love to see if you could apply this to, say, a rendered output of a creature from Blender or some other non-humanoid kind of thing. I suspect it wouldn't be as consistent?

    • @enigmatic_e
      @enigmatic_e  1 year ago +1

      Great suggestion! I will definitely consider that! And when it comes to SDXL, there still aren't SDXL ControlNets integrated into Automatic1111 yet. Hopefully soon!!

  • @imtaha964
    @imtaha964 1 year ago

    i love u bro😍😍😍

    • @imtaha964
      @imtaha964 1 year ago

      You're helping so much, thank you.

  • @dagovegas
    @dagovegas 8 months ago

    I've tried to replicate it, but this error always pops up: Error: images do not match. Please, check your schedules/ init values. Does anyone know how to fix it?

    • @enigmatic_e
      @enigmatic_e  8 months ago

      Hmm, not sure why. What kind of checkpoint are you using?

  • @AIWarper
    @AIWarper 1 year ago

    When I select the ControlNet tab I see CN 1-5 and the enable checkbox, but I do not see any settings available - any thoughts on why this would be?
    Edit: Reloading the terminal and UI let me enable CN 1, but the other tabs are still blank.
    Edit 2: It happens when I import your settings. I suspect I have to input them manually, as the ControlNet tabs get stuck loading forever.

    • @AIWarper
      @AIWarper 1 year ago

      Edit 3: Manually inputting all the settings worked. Importing from a settings file causes my WebUI to freeze on loading forever.
      I am also encountering this error any time I change the resolution from 512 x 512 to anything else (was trying 540 x 760):
      "error: images do not match. check your schedules/ init values please. also make sure you don't have a backwards slash in any of your paths - use / instead of \."
      I set the inputs to all defaults on a fresh run and slowly changed the settings until I could recreate the error... and it happens from the resolution change.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Don't manually type in the resolution, just use the slider; Deforum has a strange issue with typed-in exact values.
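      If you do want non-default sizes, a common rule of thumb is to keep width and height at multiples of 64 (which is what the slider steps tend to give you); 540 x 760 is not. A trivial illustrative sketch for snapping a target size:

          def snap64(v: int) -> int:
              """Round a dimension to the nearest multiple of 64 (minimum 64)."""
              return max(64, round(v / 64) * 64)

          print(snap64(540), snap64(760))   # -> 512 768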

  • @tvm9958
    @tvm9958 1 month ago

    Thank you... I don't speak English, so it was a bit hard for me. ㅠㅠ

  • @Fabzter1
    @Fabzter1 1 year ago

    Great video! Would this work in Colab?

    • @enigmatic_e
      @enigmatic_e  1 year ago

      I haven’t tried this in colab so I’m not sure, sorry.

  • @RichardRailey
    @RichardRailey 4 months ago

    Does anybody know how to take an original animated or comic character and make it human??

  • @ramemi1752
    @ramemi1752 1 year ago

    FIX:
    I need to have the strength at at least 0:(0.5); anything below that and the results show no relation to the input video at all. Also, 'Video Input' has to be selected.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      Video Input doesn't have to be selected. Something in the settings isn't right if it's not working.

  • @falialvarez
    @falialvarez 1 year ago +1

    I used the parameters from this guy: ua-cam.com/video/rytoKTs--Y4/v-deo.html, but I used your ControlNet configuration, changing only the order and the weights: 1st Tile weight 1.5, 2nd OpenPose Full weight 1, 3rd HED SoftEdge weight 1, and 4th TemporalNet. The coherence is amazing. Did you see that the TemporalNet model has a version 2? I tried to use it, but in Deforum I can't. Congratulations on your videos, I'm a fan.

  • @TheMaxvin
    @TheMaxvin 1 year ago

    SD tells me that TemporalNet is an unofficial model and advises me to decline it.

    • @enigmatic_e
      @enigmatic_e  1 year ago

      It is unofficial, but it should be safe. It's up to you though. It's the same developer who created TemporalKit; she's on Twitter sharing updates.

    • @TheMaxvin
      @TheMaxvin 1 year ago

      As for me there's no problem, it's A1111 that is nervous ) @@enigmatic_e

  • @eblake4250
    @eblake4250 1 year ago

    Promo-SM 💃

  • @MalikKayaalp
    @MalikKayaalp 1 year ago

    Amazing. Hello, I really like the tutorial videos you make, and I am grateful to you for them. I only ask you for one thing: how can we make more abstract works? Can you make a lesson on this? For example, I tried to make a smoke animation with different colors, something more abstract, and I was not successful. I think I need to dig more into TemporalNet. Thank you.

  • @HopsinThaGoat
    @HopsinThaGoat 1 year ago

    Even the one with the comp set to 1 was fire.

  • @TheKuzmann
    @TheKuzmann 1 year ago

    @enigmatic_e Where did you find the yaml file? I'm looking on Hugging Face, but there is no diff_control_sd15_temporalnet_fp16.yaml.

  • @cyberdogs_
    @cyberdogs_ 1 year ago

    how to solve this error (Error: 'A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.'. Before reporting, please check your schedules/ init values. Full error message is in your terminal/ cli.)...🥲🥲

    • @sebastiendaniel5794
      @sebastiendaniel5794 1 year ago

      I had this issue; I changed the checkpoint to one compatible with SD 1.5 and the error was gone.
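      If switching checkpoints doesn't help, the error text itself lists the generic A1111 workarounds: enable "Upcast cross attention layer to float32" under Settings > Stable Diffusion, or add --no-half to the COMMANDLINE_ARGS= line in webui-user.bat (assuming a standard Windows install). Both trade a bit of speed and VRAM for precision; treat this as a general A1111 note, not something shown in the video.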

  • @FortniteJama
    @FortniteJama 1 year ago

    Really happy with the results I'm getting after your tutorial; still a way to go, but way less frustration. I think you showing the frustration aspect helped me push through. Thank you, I finally feel like I'm making progress. ua-cam.com/video/eez2PZgsliE/v-deo.html