Photoshop x Stable Diffusion x Segment Anything: Edit in real time, keep the subject

COMMENTS • 59

  • @risunobushi_ai  3 months ago

    IMPORTANT: AFTER INSTALLING, SINCE THE NODE HAS BEEN UPDATED FROM THE VERSION I'M RUNNING, YOU NEED TO DOWNGRADE IT.
    OPEN THE FOLDER comfyui-Photoshop, right click, open in terminal, and run:
    git checkout --force 403d4a9af1f947c95367cd40ff8ad6ae65e5df41
    THIS WILL DOWNGRADE THE REPO TO THE VERSION I'M USING

    • @nathanmiller2089  3 months ago

      What if you are using RunDiffusion?

    • @risunobushi_ai  3 months ago

      @@nathanmiller2089 I don't think there's a way of using a remote, cloud-based solution with local software like PS or Blender.

  • @ppbroAI  6 months ago +3

    Great video, ty for the effort you put into this. 👍

  • @SuperSarvikMan  2 months ago

    Thank you for this great tutorial. I'm getting an error when running your workflow. It seems the IPAdapterUnifiedLoader needs ClipVision. Says "ClipVision model not found"

    • @SuperSarvikMan  2 months ago

      Solved. For anyone else running into this, all the files in models/clip_vision and ipadapter have to be named the same as on Hugging Face.
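
      For reference (not from the original comment; these are the filenames the IPAdapter Plus README asks for, and they may change upstream), the expected layout looks roughly like:
      ComfyUI/models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
      ComfyUI/models/clip_vision/CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
      ComfyUI/models/ipadapter/ip-adapter_sd15.safetensors
      ComfyUI/models/ipadapter/ip-adapter-plus_sd15.safetensors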

  • @ChloeLollyPops  6 months ago +2

    This is amazing teaching thank you!

  • @paultsoro3104  6 months ago +1

    Great Video! Thank you for developing this workflow. I followed the steps and it works great! Thanks for sharing!

  • @AriVerzosa  5 months ago +1

    Sub! Enjoyed the detailed explanation starting from scratch. Keep up the good work!

    • @risunobushi_ai  5 months ago

      Thank you! I try to not leave anyone behind, so explaining everything takes time but it pays off in the end I think.

  • @elan4912  3 months ago +1

    It's a detailed video. Thanks a lot!!

  • @JavierCamacho  6 months ago

    Thanks!!!! I appreciate the effort you put into this video after I asked about this. God bless you!!!
    I'll try it and place the watch on some AI female models.

    • @risunobushi_ai  6 months ago

      I don't touch on this in the video, but if you want to keep two subjects you can duplicate the SAM detection and then blend the two images and masks together, so you keep both a person and a watch, for example.
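
      As a conceptual sketch of that idea (plain Python/NumPy rather than ComfyUI nodes; in the actual graph you would wire the equivalent blend nodes, and the variable names here are hypothetical):
      import numpy as np

      def combine_masks(mask_person, mask_watch):
          # Union of two binary SAM masks (values in [0, 1]) so both subjects survive.
          return np.clip(mask_person + mask_watch, 0.0, 1.0)

      def composite(original, generated, mask):
          # Keep the original pixels where the combined mask is 1, generated pixels elsewhere.
          mask = mask[..., None]  # broadcast the mask over the RGB channels
          return original * mask + generated * (1.0 - mask)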

  • @xColdwarr  5 months ago

    This doesn't work in Google Colab, but if it does, please help me.

    • @risunobushi_ai  5 months ago

      I'm not well versed in Google Colab, so I'm not sure whether a connection between Photoshop, which acts as a local server, and Colab would work. You'd need to find a way to forward Photoshop's remote connection to the Colab instance, I guess.

  • @Onur.Koeroglu  6 months ago

    Thank you for this tutorial. Your video title matches the information in it. I like that 😅💪🏻 I have to try that. Photoshop meets ComfyUI sounds great. 🙂👍🏻

  • @Mavverixx  4 months ago

    Where can I find the Image Blend by Mask node? I've cloned a WAS suite repository but it failed. Is there anywhere else to get it? Many thanks

    • @risunobushi_ai  4 months ago

      Have you tried a "try fix" in the Manager for the WAS suite? I'm not at home right now and can't check if there are other blend-by-mask nodes (I'm sure there are though).

    • @Mavverixx  4 months ago

      @@risunobushi_ai Many, many thanks, solved it. However, I'm now trying to figure out how to connect my Photoshop to the ComfyUI node; it seems to have been upgraded. There is no password field in the node any longer, so I'm not sure how they speak to each other.

    • @risunobushi_ai  4 months ago

      @@Mavverixx the dev told me both nodes (old and new) should be available, but I can't find the old one myself in the updated repo. Anyway, you can downgrade it by using "git checkout" with the version of the repo from before it got upgraded to the new nodes.
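
      (Concretely, assuming the node was installed under ComfyUI/custom_nodes: open the comfyui-Photoshop folder in a terminal and run git checkout --force 403d4a9af1f947c95367cd40ff8ad6ae65e5df41, i.e. the commit referenced in the pinned comment above.)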

  • @Sergiopoo  6 months ago +1

    So glad I found this channel, really good info

  • @brunosimon3368  3 months ago

    Thanks for this wonderful tutorial. I've downloaded your json file, but it doesn't work for me. After installing all the different files, ComfyUI blocks on the IPAdapter. I get the following message:
    IPAdapter model not found.
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 515, in load_models
    raise Exception("IPAdapter model not found.")
    If you have any idea, you're welcome 🙂

    • @risunobushi_ai  3 months ago

      Have you installed all the models needed for IPAdapter to work? They're listed on the IPAdapter Plus GitHub: github.com/cubiq/ComfyUI_IPAdapter_plus

    • @brunosimon3368  2 months ago

      @@risunobushi_ai Thank you for your answer. In the meantime, I've found a way to solve this problem. My Photoshop link doesn't work, but a Load Image node works just as well.
      Anyway, I have an issue I can't find any solution for:
      mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)
      Do you have any idea?
      Thanks in advance for your time and your patience.

    • @brunosimon3368  2 months ago

      Never mind!!!! I redid the complete installation from scratch and it now works :-) Thanks a lot for your work.

  • @fabiotgarcia2  6 months ago +1

    I can't wait for NimaNzrii to update his node to see if it works on Mac.

    • @risunobushi_ai  6 months ago +4

      They did commit something to a private repo a couple of days ago, and apparently they're working on a new release, but they're not one of the most communication-oriented devs out there. There aren't even proper docs, to be fair.
      Still, I feel like its simplicity is unparalleled, and it's exactly what's needed in order to work alongside Photoshop in a simple and intuitive way. So here's to hoping they can push some more updates in the future.

    • @fabiotgarcia2  6 months ago

      @@risunobushi_ai thanks for replying to me

  • @jkomno5809  5 months ago

    Hi! What node should replace the input from Photoshop if I want the input to just be an image selected from my local drive?

    • @risunobushi_ai  5 months ago

      A Load Image node would be what you need.

  • @andree839  6 months ago

    Hi, thanks for a very helpful video again. I have one problem, though, appearing in the workflow. I am using an SD 1.5 checkpoint model since I don't have that much VRAM. When running Segment Anything, I get an out-of-memory error. Reading the error message, it seems the memory capacity is large enough, but the "PyTorch limit (set by user-supplied memory fraction)" is way too high.
    Any suggestions on how to solve this? I tried with the very small "mobile_sam" model and it actually worked, but the mask was not precise at all.

    • @risunobushi_ai  6 months ago +1

      Yeah, Mobile SAM is not great for the kind of result we want here. Since yours is a hardware limitation issue, if you haven't tried these yet, I would, in order:
      - turn off IPAdapter completely;
      - look for lightweight ControlNet depth models;
      - check if other ControlNets are more compact (e.g. if lineart has a lighter model than depth; you miss out on depth but you still get the same spatial coordinates as the Photoshop picture);
      - reduce the latent image size (see also the note below).
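
      A note on launch flags, not something covered in the video: ComfyUI itself has low-VRAM modes, so assuming you start it with python main.py, launching it as python main.py --lowvram can also help on a 4 GB card.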

    • @andree839  6 months ago

      @@risunobushi_ai Thanks for the suggestions! I already tried most of them, and even if I reduce the latent image to extremely low resolutions, I still get the error. Seems to be very hard to figure out.
      The entire message I get is like this: "Allocation on device 0 would exceed allowed memory. (out of memory)
      Currently allocated : 2.85 GiB
      Requested : 768.00 MiB
      Device limit : 4.00 GiB
      Free (according to CUDA): 0 bytes
      PyTorch limit (set by user-supplied memory fraction)
      : 17179869184.00 GiB"
      So the strange part is that the sum of the currently allocated and requested memory is less than the device limit.

  • @zizhdizzabagus456  5 months ago

    The only problem is that it doesn't actually blend the lighting onto the subject.

    • @risunobushi_ai  5 months ago +1

      Sometimes it does, sometimes it doesn't - the solution would be applying a normal map ControlNet as well, but that slows things down a bit, and normal maps extracted from 2D pictures are not great. We can only wait for better depth maps, so that the light can be interpreted better, or we can generate more pictures so that we get coherent lighting eventually.
      For example, sometimes it generates close-to-perfect shadows, whereas sometimes it doesn't. At its core, it's a non-deterministic approach to post-processing, so it will always have some limitations, but going forward I expect those to become less and less impactful.

    • @zizhdizzabagus456  5 months ago

      @@risunobushi_ai does it have to be a normal map? I thought depth and normal give pretty much the same results?

    • @risunobushi_ai  5 months ago

      Long story short, the latest depth maps can do what normal maps would do, but since it’s all just an approximation of a 3D concept, we’re still not quite there for coherent *and* consistent lighting.

    • @zizhdizzabagus456  5 months ago

      @@risunobushi_ai oh, you mean that if I use a real one from a 3D editor it would make a difference?

    • @risunobushi_ai  5 months ago

      @@zizhdizzabagus456 it would and it wouldn't. Normal maps derived from 2D pictures are an approximation, so they're at best a bit scuffed. Also, apparently generative models weren't supposed to be able to "understand" normals. For a more in-depth analysis, take a look here: arxiv.org/abs/2311.17137

  • @thewebstylist  6 months ago

    Just showing the UI at 1:30 is why I still haven’t chosen to use Stable D

    • @risunobushi_ai  6 months ago +1

      Well, I do try my best to explain why and how to use each and every node, to help anyone understand what they do and how they can use them easily

  • @baceto-jp4fz  5 months ago

    Do you think this workflow and the pop-up will work with Photopea? (the free, browser-based Photoshop alternative)
    Also, is it possible to run this workflow without Photoshop at all?
    Great video!

    • @risunobushi_ai  5 months ago

      I'm not well versed in Photopea, but if you want a free alternative (for which you would need to develop a different workflow, or wait for one, since I'd like to make one) you can look at Krita, which has an SD integration.

    • @baceto-jp4fz  5 months ago

      @@risunobushi_ai thanks! a video would be great!

  • @houseofcontent3020  4 months ago

    Such a good video!

  • @henroc481  6 months ago

    THANK YOU!!!!

  • @Kafkanistan1973  6 months ago

    Well done video!

  • @jkomno5809  5 months ago

    I followed the tutorial and built your workflow from scratch, but without the Photoshop node as I'm on macOS. I replaced it with a normal "Load Image" node that feeds into the resizer just like the Photoshop node does. I get the error "SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)"... can you help me out with it? ComfyUI Manager doesn't say that I have missing nodes.

    • @risunobushi_ai  5 months ago

      What are you using instead of the Photoshop node? A Load Image node? At which node does the workflow throw an error (usually the one that remains highlighted when the queue stops)?

    • @jkomno5809  5 months ago

      @@risunobushi_ai Error occurred when executing SAMModelLoader (segment anything):
      Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

    • @jkomno5809  5 months ago

      @@risunobushi_ai I'm running this on an M1 Max (32-core GPU, 64 GB RAM):
      Error occurred when executing SAMModelLoader (segment anything):
      Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
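
      For what it's worth, the generic shape of the fix that this error message itself suggests (a sketch only, not a patch for the segment anything node; the checkpoint filename below is just the usual SAM ViT-H name and is assumed here):
      import torch

      checkpoint_path = "sam_vit_h_4b8939.pth"
      # Deserialize on CPU instead of CUDA, which is what the message asks for on a Mac.
      state_dict = torch.load(checkpoint_path, map_location=torch.device("cpu"))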

    • @risunobushi_ai  5 months ago

      Do you mind uploading your json workflow file to pastebin or any other sharing tools? I’m going to see if I can replicate the issue on my MacBook

    • @jkomno5809  5 months ago

      @@risunobushi_ai Yes, of course! Can I have your Discord or something?