Perfect upscales with SUPIR v2 + full ComfyUI workflow

  • Published 29 Jun 2024
  • How to use SDXL Lightning with SUPIR, comparisons of various upscaling techniques, VRAM management considerations, how to preview its tiling, and even how to fix error messages you might encounter.
    Workflow, updated on 3/29/24 with LoRAs and Lightning:
    flowt.ai/community/supir-v2-p...
    ▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
    00:00 Intro and results
    00:51 Building the workflow
    07:45 Going deeper - adding the model upscale
    09:05 VRAM management (fp16 vs fp32)
    11:34 Using SDXL Lightning
    14:43 Creative outputs and restoration
    15:35 Common Errors
    ▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
    Discord: / discord
    All socials: linktr.ee/stephantual
    Hire Actual Aliens: www.ursium.ai/
    ▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
    Kijai's wrapper v2: github.com/kijai/ComfyUI-SUPI...
    SUPIR models: huggingface.co/camenduru/SUPI...
    TCD LORA: huggingface.co/h1t/TCD-SDXL-L...
  • Science & Technology

COMMENTS • 121

  • @stephantual
    @stephantual  3 months ago +11

    Update: 3/29/24 and 4/17/24 (yes, it moves fast): it's been updated again and again. Because VRAM is still a concern, I've made it available as a one-click app at tinyurl.com/supirv2, and updated the downloadable workflow to reflect better Lightning support as well as LoRAs. Cheers! 👽 Also made another video at ua-cam.com/video/EMAz8KktB5U/v-deo.html

    • @Thecroods923
      @Thecroods923 3 months ago +1

      Will it work with 8 GB VRAM and 16 GB RAM? Can you make the workflow for those settings? And what is the max image size I can use with that configuration?

  • @user-hx1wz1lv4r
    @user-hx1wz1lv4r 3 months ago +1

    TY bro, you covered everything, and I learned a lot watching you walk through building the workflow.

  • @97BuckeyeGuy
    @97BuckeyeGuy 3 months ago +4

    5:00 I think you were the person commenting on my GitHub issue regarding resolutions requiring a division by 64. If you check the dev's replies in that post, he actually reduced it to 32. And then after that, he made additional code changes to make it not an issue at all. The last several images I upscaled were not divisible by 32 and they turned out fine. 👍🏼
    This version of Comfy SUPIR is SO much better than the first. The DEV really did some miracles with this version.

    • @stephantual
      @stephantual  3 months ago

      Wasn't me! 👽I'm @stephantual on github. And yes, Kijai fixed it while I was rendering the video, didn't have the heart to re-record, especially given it will likely change again (there's some cool stuff people have dug up from the original repo). Cheers! What's your repo?

  • @___x__x_r___xa__x_____f______
    @___x__x_r___xa__x_____f______ 3 months ago

    great video Steph!

  • @xyzxyz324
    @xyzxyz324 3 months ago

    Great explanation, got it working like a charm on the first try, and the results were amazing! Keep it up, thank you!

    • @stephantual
      @stephantual  3 months ago

      Glad it helped! 👽👽👽

  • @WhySoBroke
    @WhySoBroke 3 months ago

    This is exactly what I was looking for... many thanks amigo!! ❤️🇲🇽❤️

  • @clearstoryimaging
    @clearstoryimaging 3 months ago

    Thank you again for another great video!

  • @internetperson2
    @internetperson2 3 months ago

    Another banger brother

  • @optimbro
    @optimbro 3 months ago

    The boss is finally here 👌👌

  • @stepahinigor
    @stepahinigor 3 months ago

    Hey, thanks for the update! That's amazing! I've been actively using SD (ComfyUI) for only a month, and honestly SUPIR is not only the best upscaler but also the easiest (the v1 workflow), the fastest to get working correctly as shown in the tutorial; with the others there are always problems.

    • @stephantual
      @stephantual  3 months ago

      Yeah, the CN in SUPIR is absolutely sick. 👽 I still clean up videos with a quick low-denoise AnimateDiff LCM pass, but it's pretty much replaced all my upscalers by now!

  • @aleassimarro
    @aleassimarro 3 months ago +1

    Thank you. Can't say enough

  • @giusparsifal
    @giusparsifal 1 month ago

    Hello, this is the first workflow that really works (at least for me), thank you! As I wrote you already, I was looking for an img2img workflow to get realistic skin rather than just upscaling, and this works perfectly! I'm now trying different LoRAs to achieve what I want, thank you again!
    Just one thing: the SUPIR Conditioner node doesn't have a positive prompt, just the negative, in the workflow I downloaded.
    EDIT: I realized now that I had downloaded the next version you did :)

  • @autonomousreviews2521
    @autonomousreviews2521 3 months ago

    Fantastic :) Thank you for sharing!

  • @loubakalouba
    @loubakalouba 3 months ago

    Thank you, you are a Hero!

  • @SjonSjine
    @SjonSjine 3 months ago

    Yes! Liked! Follower! Thanks!

    • @stephantual
      @stephantual  3 months ago +1

      Welcome aboard the mothership! 👽👽👽

  • @xphix9900
    @xphix9900 3 months ago +1

    Love your videos, so thanks, and you're my 2nd favorite French/English-speaking person... 1st is Charles Leclerc lol, but that's saying a lot because I'm from Mtl, QC ;)

  • @runebinder
    @runebinder 3 months ago

    Tried this earlier. It took 3 hours on my 2080 Ti to take an image where the subject's face takes up a large portion of the frame, and I hadn't even been able to get it past 1536 x 1024, as face detailers were not doing a good job beyond that resolution. Used SUPIR to get it to 4608 x 3072 with hardly any loss of detail; the face, and in particular the eyes, look amazing. Didn't change any of the settings, so I'll have to tweak and see if I can get the time down, but I'm very impressed. When Nvidia release the 50 series it may be upgrade time...

  • @gcardinal
    @gcardinal 3 months ago +1

    Thank you for the video and the enthusiasm, with all the details and explanations. I would kindly ask you to consider making smaller workflows. Currently there is a lot of bloat like the watermark, comments, switches, etc. I get that this is your workflow and you like to show it off, but for most people, especially those just getting started, it is too much unnecessary stuff. My suggestion is to limit the workflow to the task at hand, and if needed post several workflows instead.
    Just my 2 cents. Thanks for the great content anyhow.

  • @BedTimeQuest
    @BedTimeQuest 2 months ago

    Thanks, this explanation is amazing, got it working instantly! I just kept getting errors on the final image rescale node; taking the rescale_factor down from 4 to 3 somehow fixed it, but I don't really understand what this affects. Also, in the final image I get these weird added faces on objects and 'ghosts' bleeding through. Not sure if it's a problem with my settings, the original (an AI-generated photo), or the checkpoint I'm using... Will have to do some more fiddling around.

    • @stephantual
      @stephantual  2 months ago

      Thanks for the feedback and your message on Discord, it was very useful. A new video on how to apply settings based on the input images is coming up soon! 👽

  • @SkN097
    @SkN097 3 months ago

    Awesome videos! Keep it up, bro.
    There's something I'm not getting, though: how do the CFG scale start and end work? And the same question about the Control Start and End.

    • @stephantual
      @stephantual  3 months ago +1

      (Oversimplification due to YT comment limits): it controls how the ControlNet is applied over time in the diffusion process. The best way to explain it is to think of it the same as 'start_at' and 'end_at' in IPAdapter. 👽
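The start/end idea can be pictured with a tiny sketch. This is hypothetical illustrative code, not the actual SUPIR/ComfyUI implementation: the ControlNet contributes at full strength only while the sampler's progress lies inside the [start, end] fraction of the schedule.

```python
# Hypothetical illustration of 'start'/'end' gating a ControlNet over the
# sampling schedule, in the spirit of IPAdapter's start_at/end_at.
def control_strength(step, total_steps, start=0.0, end=1.0, strength=1.0):
    """Full ControlNet strength inside the [start, end] fraction of the
    schedule, zero outside it."""
    progress = step / max(total_steps - 1, 1)
    return strength if start <= progress <= end else 0.0

# With start=0.0, end=0.5 the ControlNet guides only the first half of the
# steps, after which the model refines the image unconstrained.
strengths = [control_strength(s, 10, start=0.0, end=0.5) for s in range(10)]
```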

  • @goodie2shoes
    @goodie2shoes 3 months ago +3

    Luckily I can play this at 0.75 or 0.5 speed. You are going so fast (and I'm an old f**k).

    • @stephantual
      @stephantual  3 months ago +1

      Sorry about that :) 👽

  • @TheGladScientist
    @TheGladScientist 3 months ago

    awesome video! if you wanted to use this for a video, what would be the recommended approach?

    • @stephantual
      @stephantual  3 months ago +1

      I do a lot of video upscaling with SUPIR these days. I found the best approach is:
      a) upscale, then SUPIR (the standard way);
      b) then pass every frame to AD + LCM (or Lightning or whatever you prefer) directly into Ultimate SD Upscale to maintain temporal consistency.
      This is because otherwise the output will be 'grainy' and 'flicker'. I have a demo of this at flowt.ai/community/universal-video-generator-and-upscale-v4-lgdkt-f

  • @cosmingurau
    @cosmingurau 1 month ago

    I am looking for a Windows executable for a SUPIR upscaler with just a handful of options. Has anyone found anything like that yet? I found some for Real-ESRGAN, so I figured there might be at least one for SUPIR, seeing how awesome it is.

  • @ExacoMvm
    @ExacoMvm 6 days ago

    Doesn't work.
    The Image Resize node gets a red outline almost immediately, no idea why:
    Error occurred when executing ImageResize+:
    'bool' object has no attribute 'startswith'

  • @idontcare9041
    @idontcare9041 3 months ago

    Great video. Unfortunately, I pretty much only get those error messages you talked about regarding resolution with SUPIR, and I have absolutely no clue why. A few times it started sampling, and most of those times I ran out of VRAM. 832×1216 should work if it needs to be divisible by 64; I tried both with F and Q and all kinds of things. It's a really great upscale technique though.

    • @stephantual
      @stephantual  3 months ago

      Hey! This was fixed by Kijai now, I believe; did you update the repo in the last 48h? The error regarding the resolution multiplier is unrelated to running out of VRAM, btw; they are two distinct things. Yes, 832×1216 should fit in an 8 GB envelope, especially if you set the UNet to FP8 and don't use big tiles (try dropping to smaller tile sizes progressively until it fits in VRAM, using Task Manager or equivalent to track usage; you'll get the hang of it!).
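The "drop the tile size until it fits" advice amounts to a simple search loop. The sketch below is purely illustrative (try_sample and the candidate sizes are assumptions; in ComfyUI you would lower the tile-size widget by hand between runs):

```python
# Illustrative sketch of progressively shrinking the tile size until
# sampling fits in VRAM. 'try_sample' stands in for a SUPIR sampling run;
# MemoryError stands in for an out-of-VRAM error.
def pick_tile_size(try_sample, candidates=(1024, 768, 512, 384, 256)):
    """Return the largest candidate tile size that samples without
    running out of memory."""
    for tile in candidates:
        try:
            try_sample(tile)
            return tile
        except MemoryError:
            continue  # too big for this card: try the next size down
    raise RuntimeError("no tile size fits in VRAM")
```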

  • @cfcrow
    @cfcrow 3 months ago +1

    Not sure why I keep getting this error: "The size of tensor a (60) must match the size of tensor b (500) at non-singleton dimension 3." Without the resize node it works fine.

    • @user-qs2rw3dd1c
      @user-qs2rw3dd1c 3 months ago +1

      He has the answer at 15:35: just set an image size that can be divided by 64, such as 1280, and redo your workflow.

  • @nkofr
    @nkofr 3 months ago

    brilliant

  • @jocg9168
    @jocg9168 3 months ago

    Very interesting. I need to figure out how to work with less RAM; I have 32 GB, but my GPU has 24 GB of VRAM, and usually my poor 32 GB of system memory runs out.

    • @stephantual
      @stephantual  3 months ago

      Ah, you mean regular, old boring DDR? That seems to be mostly linked to the encode/decode process. Drop the precision on those and limit the tile size. There's no shame in using tiny tiles 😹👽

  • @RobertJene
    @RobertJene 2 months ago

    1:55 - building the workflow

    • @stephantual
      @stephantual  2 months ago

      Hey, it's nice to have you here, Robert, love your videos! 👍👍👍

    • @RobertJene
      @RobertJene 2 months ago

      @@stephantual oh sorry. I was making a doc with timestamps

  • @headscout
    @headscout 2 months ago

    Can I change the 'Interrogator' to Gemini Pro by Google?

  • @Vigilence
    @Vigilence 3 months ago +1

    The image resize node used here doesn't allow resizes past 8000 pixels; is there a way to override this, or another node I can use?

    • @stephantual
      @stephantual  3 months ago +1

      Mmm, beyond the usual 7680 × 4320 of 8K? I like that attitude! 👽 Image Resize (stock Comfy) lets me go into 6 digits, so use that :) Also, you could just chain models (4x, 4x, 4x, etc). That's... intense. Let me know your results on Discord if you pull it off, I'm genuinely curious!

    • @Vigilence
      @Vigilence 3 months ago +1

      @@stephantual Can you name the node you are referencing for me? I don't use Comfy very much, and I have tested another image resize node that only shows the latent type and scale number, and it seems to slow the process down for some reason compared to the Image Resize+ node you use here (where I manually input the output resolution).

  • @fabiotgarcia2
    @fabiotgarcia2 2 months ago

    Does it work for Mac M2 Pro Max?

  • @kalicromatico
    @kalicromatico 3 months ago

    777!

  • @Vigilence
    @Vigilence 3 months ago +1

    I noticed you link to the TCD LoRA, but the workflow doesn't reference it; should we ignore it?

    • @stephantual
      @stephantual  3 months ago

      Yeah, it's just the one at huggingface.co/h1t/TCD-SDXL-LoRA/tree/main. Kijai has now updated the GitHub to support an easy import if you really want to use it. That's entirely up to you, of course! 👽

  • @BackStab1988
    @BackStab1988 2 months ago

    also got this message after opening a workflow: When loading the graph, the following node types were not found:
    GetNode
    CR LoRA Stack
    CR Apply LoRA Stack
    SetNode
    Bookmark (rgthree)
    SUPIR_decode
    ColorMatch
    CR Simple Text Watermark
    SUPIR_first_stage
    SUPIR_encode
    PlaySound|pysssss
    ImageResize+
    GetImageSize+
    SimpleMath+
    SUPIR_model_loader_v2
    Image Comparer (rgthree)
    SUPIR_conditioner
    Fast Groups Bypasser (rgthree)
    Image Resize
    Integer
    SUPIR_sample
    Nodes that have failed to load will show as red on the graph.
    Nothing works. That's a pity.

    • @donzitrone
      @donzitrone 2 months ago

      you have to install the custom nodes

  • @michaelbayes802
    @michaelbayes802 3 months ago

    Hi, for some reason when executing your workflow I get an error at the "SUPIR conditioner" step: TypeError: 'NoneType' object is not callable. Not sure how to resolve this.

    • @stephantual
      @stephantual  3 months ago

      That error in Comfy is typical of one of the noodles not passing any data to an input. "Follow Execution" > find the node that breaks, stick "beautify" nodes from Trung's 0246 (or similar) in every position, and find the one that says 'null'; that's the culprit. Then trace it back to where it came from and figure out why it's not passing data. Solved! I might make a tutorial on how to debug Comfy errors because it's such a common question. Cheers! 👽

  • @timothykrell
    @timothykrell 3 months ago

    I get an error "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!" when the SUPIR sampler runs. Anyone know what the problem might be?

    • @stephantual
      @stephantual  3 months ago

      Answer at 16:28 👽

    • @timothykrell
      @timothykrell 3 months ago

      @@stephantual That was it! Thank you! I thought the denoised_latent could be passed directly to the sampler's latent input. Should have watched that end bit!

  • @iFilipis
    @iFilipis 3 months ago

    I spent a full hour trying to understand whether what was published on flowt.ai actually works. For me it just shows the custom nodes in red, and there's no documentation anywhere on how to make it work. Or is it not meant to work there, only locally?

    • @stephantual
      @stephantual  3 months ago

      Good question. At the moment, it's my understanding that SUPIR does not operate on any SaaS service (unless you use a VPS and put it there yourself) due to a) the non-commercial license and b) no reply from the original devs (of the algo). Hopefully this changes soon. And yes, it's purely for download purposes. 👽

  • @mr.entezaee
    @mr.entezaee 3 months ago

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    fastai 2.7.11 requires torch=1.7, but you have torch 2.0.1 which is incompatible.
    simple-lama-inpainting 0.1.2 requires numpy=1.24.3, but you have numpy 1.22.4 which is incompatible.
    simple-lama-inpainting 0.1.2 requires torch!=2.0.1,>=1.13.1, but you have torch 2.0.1 which is incompatible.
    torchtext 0.14.1 requires torch==1.13.1, but you have torch 2.0.1 which is incompatible.
    xformers 0.0.25 requires torch==2.2.1, but you have torch 2.0.1 which is incompatible.

    • @mr.entezaee
      @mr.entezaee 3 months ago

      I am an amateur. Can someone tell me step by step what I should do? Plz.
      Oh yes, it was finally fixed with this:
      ......\python_embedded\python.exe -m pip install -r ./requirements.txt

  • @mr.entezaee
    @mr.entezaee 3 months ago +1

    no module 'xformers'. Processing without...
    What should I do to fix it?

    • @stephantual
      @stephantual  3 months ago +1

      Don't worry about xFormers; PyTorch has come a long way and SUPIR will work without it. If you want to install it anyway, check out my other SUPIR video; it's in the description.

    • @mr.entezaee
      @mr.entezaee 3 months ago

      @@stephantual Yes, finally, I'm glad it worked for me. But I could not find the workflow for that old man. I have some very old photos that I want to test. Can you give me a workflow for this case?

    • @stephantual
      @stephantual  3 months ago +1

      @@mr.entezaee There is no 'workflow' per image (as per 16:40); just download the image from the Magnific website, then update the settings to be in line with that source (HAT-type model upscale, etc).

    • @mr.entezaee
      @mr.entezaee 3 months ago

      Oh, I just realized now. Thank you for the good training you provided us@@stephantual

  • @RayDusso
    @RayDusso 3 months ago

    I don't get it; you say "if you didn't install SUPIR yet", then right after you give the command to install the requirements in the SUPIR folder. I don't have a SUPIR folder, since I didn't install it yet.

    • @stephantual
      @stephantual  3 months ago

      Yeah, it's tricky to cover "most scenarios", especially when there are so many platforms; good point. But basically all you need to do now is hit "install" in Manager, run the requirements (possibly; this varies), and install xFormers (if you want to AND don't already have it)... you can see how all this conditional logic gets pretty hard to turn into a meaningful video after just a few branches :) Cheers!

  • @kattamaran
    @kattamaran 3 months ago

    Put 64 into the resize node's "multiple of" field.

    • @stephantual
      @stephantual  3 months ago

      Hahah, yeah, you're right. But I got a lot of negative comments for putting in "too many nodes". Evidently you can just use comfyMath (at least that's what I do) and find an equivalent multiplier: a quick division of the original by 64, round up to an integer, then multiply again. I'm sure someone will come up with an all-in-one, but since this will likely get fixed, I didn't want to complicate things 👽👽
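The divide-by-64/round-up/multiply trick described above is one line of arithmetic. A sketch of what the comfyMath nodes would compute (not their actual code):

```python
def round_up_to_multiple(value, multiple=64):
    """Smallest multiple of 'multiple' that is >= value:
    divide, round up to an integer, multiply back."""
    return -(-value // multiple) * multiple

# 1216 is already a multiple of 64, so it stays put; 1250 gets bumped to 1280.
print(round_up_to_multiple(1216), round_up_to_multiple(1250))
```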

  • @97BuckeyeGuy
    @97BuckeyeGuy 3 months ago

    Is there any chance you could release a video and workflow using SUPIR to upscale an AnimateDiff video? 😊

    • @stephantual
      @stephantual  3 months ago

      I'm currently updating my general video-generation workflow to support SUPIR v2; I'll push it to flowt.ai when it's all done :)

    • @97BuckeyeGuy
      @97BuckeyeGuy 3 months ago

      @@stephantual You're a good man. Thank you.

  • @BackStab1988
    @BackStab1988 2 months ago

    After typing the command at 1:35, I got this: ERROR: Exception:
    Traceback (most recent call last):
    File "C:\Program Files\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
    ^^^^^^^^^^^^^^^
    File "C:\Program Files\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper
    return func(self, options, args)
    You didn't receive such a message

  • @franlp32
    @franlp32 3 months ago

    Seems like your 3D usage is spiking a lot when running generations. I had the same issue when I had crystools monitor enabled.

    • @stephantual
      @stephantual  3 months ago

      Thing is, I use ShareX to record at 4K lossless, and it's violent, so it's very hard for me to run benchmarks. I try to take screenshots of Task Manager between renders 👽👽

    • @franlp32
      @franlp32 3 months ago

      @@stephantual If you have 3D usage spiking when not recording, try disabling the crystools monitor.

    • @weirdscix
      @weirdscix 2 months ago

      @@stephantual OBS would be better; it can take full advantage of NVENC encoding.

  • @pabloruizgarcia5628
    @pabloruizgarcia5628 2 months ago

    Will this work with a 1080 Ti with 11 GB of VRAM?

    • @stephantual
      @stephantual  2 months ago

      Yes. SUPIR is not VRAM-bound, in the sense that it scales linearly with the size of the image you pass. Pass it a 320p meme and it will run on 6 GB of VRAM. Pass it a 4K frame and I don't think there's any local card that could give it the VRAM it would require. The best upscale I was able to do was 2K with 24 GB of VRAM. But the good news is SUPIR is NOT an upscaler, it's a ControlNet. So do a 1.2x upscale, loosen the ControlNet, raise the CFG, and you're golden. I have a video coming up on just that :) 👽
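The "scales with the size of the image you pass" point can be made concrete with a back-of-the-envelope ratio. This is illustrative only: the 832x1216 baseline is an assumption (an image known from this thread to fit in ~8 GB), and real VRAM use also depends on tiling and precision.

```python
# Rough rule of thumb: activation memory grows with pixel count, so the
# relative cost of an input is roughly its pixel ratio vs. a known-good size.
def relative_cost(width, height, base_w=832, base_h=1216):
    """Pixel count relative to a baseline image that is known to fit."""
    return (width * height) / (base_w * base_h)

# A 4K frame carries roughly 8x the pixels of an 832x1216 image, which is
# why it blows past cards that handle the smaller input comfortably.
ratio = relative_cost(3840, 2160)
```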

  • @freefryz462
    @freefryz462 2 months ago

    This does not work with the latest versions of ComfyUI and the SUPIR nodes. No matter which checkpoint/sampler/steps/CFG I use, the image just comes out much noisier than the original. It even adds some details, so functionally it's attempting to do the same thing the standalone does, but it fails horribly. Tested with multiple pictures, different lighting, checkpoints, etc.; even tried your older workflows, and it's the same story there on the latest versions, at least.

    • @stephantual
      @stephantual  2 months ago

      I have it updated again at app.flowt.ai/flow/6605cc9703edf98c9e73567f-v. Currently recording video 3; things move VERY fast. The principles do work, though. I've spent 5 days recording this new "complete guide" that should solve every issue. The trick is to understand what each parameter does, as SUPIR runs on a totally different pipeline. 👽

  • @firasfadhl
    @firasfadhl 3 months ago

    APISR 👀👀?

    • @stephantual
      @stephantual  3 months ago

      I know right! Moves so fast! 👽

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 3 months ago

    I kind of got lost in the middle. It's probably because I haven't seen your first video about SUPIR.

    • @stephantual
      @stephantual  3 months ago

      I'd recommend downloading the workflow itself and dissecting it. I try not to "put the nodes together", so it's easier to follow when learning. That's pretty much how we're all learning: one node at a time! 👽 Good luck!

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 3 months ago

      @@stephantual thanks a lot

  • @pawansharma-lw9ny
    @pawansharma-lw9ny 3 months ago

    Can I run SUPIR on a Mac Studio M2 Max with 32 GB?

    • @stephantual
      @stephantual  3 months ago

      I've heard it runs on Mac M2s, yes; however, I don't have a Mac, so I can't 100% verify that. 👽

    • @pawansharma-lw9ny
      @pawansharma-lw9ny 3 months ago

      @@stephantual Thanks for the response, but I think it's not for Mac. I have 32 GB of RAM but I'm still getting an LLVM error.

    • @stephantual
      @stephantual  2 months ago

      LLVM is going to be linked to a visual model, likely in this case Moondream; just ditch it and try SUPIR from there with a regular text prompt :)

  • @ALEXEINAV
    @ALEXEINAV 3 months ago

    ERROR: Invalid requirement: '/requirements.txt' ?

    • @stephantual
      @stephantual  3 months ago

      The syntax depends on your platform (win/mac/osx) - just use what's appropriate - (hint: use 'tab' to cycle through the various files, and remember, it's in the SUPIR custom node folder, not the one above :))

  • @SyamsQbattar
    @SyamsQbattar 1 month ago

    please link Civitai

  • @kolkutta
    @kolkutta 3 months ago

    Is it still impossible to use with 8 GB of RAM?

    • @stephantual
      @stephantual  3 months ago +1

      It works with an FP8 UNet, but evidently it correlates directly with the original size (and upscale needed) of the image. I'm working on a cloud-based workflow so everyone can try it 👽

  • @Razunter
    @Razunter 3 months ago

    Video title has a typo

    • @stephantual
      @stephantual  3 months ago +1

      A typo? No... no, I don't see it... 😅 Jokes aside, thank you so much, I need to stop pulling all-nighters! Appreciated! 👽👽👽

  • @epelfeld
    @epelfeld 3 months ago

    Very interesting, but too fast and hard to follow what's going on. Thank you.

    • @stephantual
      @stephantual  3 months ago

      I know - it's difficult to maintain a pace that works for everyone. I reckon some will play it at 2x, others at 0.75x. :) 👽

  • @RobertJene
    @RobertJene 2 months ago

    could you please cut the high end on your vocal track? thanks

    • @stephantual
      @stephantual  2 months ago

      Yup! It's a huge pain to get good sound. I'd love some help, actually. This was recorded on a Rode NT-USB, and I just got a Shure MV7, where I pass the sound through Resolve instead of Audacity. If you know how to make it sound good and professional, please let me know on Discord :) 👽

  • @ThoughtFission
    @ThoughtFission 3 months ago +1

    Amazing video. But OMG, slow down! Some of us are trying to learn, and you go sooooo fast and skip over things that are really important, which a newbie won't know or understand.

    • @stephantual
      @stephantual  3 months ago +1

      Heheh, sorry. I suppose I am SO worried about making a boring video with just a screen recording and me talking at the same time that sometimes I get a *tiny* bit too excited. I'll try to organize myself to still be concise but keep a better pace. Thanks for the feedback! 👽

    • @ThoughtFission
      @ThoughtFission 3 months ago

      @@stephantual🙂

    • @BadgerDogCat27
      @BadgerDogCat27 2 months ago

      @stephantual Your content is definitely not boring. Could you elaborate more on installing the Moondream interrogator? Do you have instructions on how to install it? Do I just run the Python script?

  • @twilightfilms9436
    @twilightfilms9436 3 months ago +1

    Comfy died before its inception. Nodes are not wanted by artists, period. Steve Jobs said it back in 1999...

    • @b4ngo540
      @b4ngo540 3 months ago +5

      lol

    • @user-vt3cu5ov1s
      @user-vt3cu5ov1s 3 months ago

      Apple bought Shake in 2002.. just saying

    • @stephantual
      @stephantual  3 months ago +1

      Tempted to pin this :) 🛸

    • @albert93911
      @albert93911 3 months ago +3

      True. Nobody wants Houdini. Or Blender. Or Nuke. Or Davinci Resolve. Nobodeh!

  • @dan_VFX
    @dan_VFX 3 months ago

    Hi, I'm on a Mac M2 and I'm getting this error:
    Error occurred when executing SUPIR_first_stage:
    No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(1, 3136, 1, 512) (torch.float32)
    key : shape=(1, 3136, 1, 512) (torch.float32)
    value : shape=(1, 3136, 1, 512) (torch.float32)
    attn_bias :
    p : 0.0
    `ck_decoderF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    device=mps (supported: {'cuda'})
    operator wasn't built - see `python -m xformers.info` for more info
    `ckF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    device=mps (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    operator wasn't built - see `python -m xformers.info` for more info
    Any idea on how to fix it? Thanks