Running Stable Diffusion and ControlNET locally via Grasshopper (thanks AUTOMATIC1111)

  • Published 12 Sep 2024

COMMENTS • 35

  • @guidageorge 1 year ago +1

    Always ahead of the game! 👏👏👏

  • @KurtChen 8 months ago +1

    Marvellous work, mate.
    I wonder if it can be used remotely. I mean, if one's PC is not so powerful, can it run on an online virtual service?
    Salute!

    • @LucianoAmbrosini 8 months ago

      Hello, and thanks for your inquiry and interest in the toolkit. I am not completely sure about virtual services, but a workaround exists: with these scripts it is possible to run everything over the LAN on a more powerful machine. I have implemented a feature that already exists in A1111 Stable Diffusion, the "listen" mode. You will find out more here: bit.ly/SDandCNinsideGH-Upd3
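
The listen mode mentioned in this reply is a standard AUTOMATIC1111 launch option set in webui-user.bat; a minimal sketch, assuming a default Windows install (the flags are A1111's own, the port below is the default):

```bat
rem webui-user.bat on the host machine (the more powerful PC)
rem --listen binds the web server to 0.0.0.0 so other machines on the LAN can reach it
rem --api exposes the /sdapi/v1/* endpoints that external clients call
set COMMANDLINE_ARGS=--listen --api
call webui.bat
```

Client machines would then target http://&lt;host-ip&gt;:7860 instead of http://127.0.0.1:7860.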

    • @KurtChen 8 months ago +1

      @@LucianoAmbrosini This is super! Again thanks a lot! I'll check the link my friend :D

  • @RohitNandakumar 10 months ago

    Hi, thank you so much for developing this tool and sharing it openly.
    Is there any way to run it on macOS with an Intel chip?

    • @LucianoAmbrosini 10 months ago

      Hello, and thanks for your enquiry.
      Unfortunately, a Mac version has not been scheduled at the moment; however, if something changes I will keep you posted.

  • @thecogarts 1 year ago +1

    Great video! Thanks a lot! Could you tell me how to increase the processor resolution in the cmd command through the Grasshopper interface? The default is 64 in Grasshopper, but on the website I can increase it to 512 and so on.

    • @LucianoAmbrosini 8 months ago

      Hello, I know this is an old request, but if you can share more details about it I can fix/update this feature in the next update 😉. Please let me know, and thank you again for your interest in ATk! (I waited to first implement both the textToImage and ImageToImage features.)

  • @facundotaborda816 1 year ago

    Fantastic work Luciano, thanks! I have installed everything except --xformers and managed to generate prompted images and even a cn-Depth image based on a previous basic prompt image. But suddenly the generator stops processing new images, and even restarting Rhino doesn't help. Is there a reset mechanism? Thank you in advance

    • @LucianoAmbrosini 1 year ago

      Hello, thank you for your message! Recently ControlNET received an update to v1.1, which is not yet fully supported through the API. I have published a video about how to install ControlNET v1.0, and I have scheduled an article for as soon as I update Ambrosinus-Toolkit to v1.2.1 with the new components. Finally, I noticed that if you run CN v1.0 with the current toolkit version, everything works fine. Another useful tip is to use the standard Python installation and not the Anaconda one. 👍🏼 If you have a specific error, please send an email with your files 😊 and I will review user feedback ASAP. Thank you!

    • @facundotaborda816 1 year ago +1

      @@LucianoAmbrosini Thank you very much Luciano, I will follow your advice and post any issues encountered 👌

  • @IvesBon 1 year ago

    Thanks a lot for your effort! One question: why does it keep telling me that the input parameter GenWebUI failed to collect data? Stable Diffusion, including ControlNET, works very well in the browser. Thank you in advance!

    • @LucianoAmbrosini 1 year ago

      Thanks for your message. That parameter needs a button as input: after you have ticked the desired command arguments, you need to click on that button. Please see my latest video: ua-cam.com/video/ANsxTVHrFk8/v-deo.html and let me know 😉

    • @IvesBon 1 year ago

      @@LucianoAmbrosini Thank you very much! I had downloaded the new toolkit but hadn't plugged the button into the last slot (GenWebUI). Now it's working! Very much appreciated, fantastic tool!

  • @pheeraphatratchakitprakarn3415

    Amazing video! The steps are very clear and easy to follow; however, I seem to encounter an error on the LA_AleNG_loc component (1. Solution exception: The remote server returned an error: (404) Not Found.). I would love to get started generating all kinds of images with your plug-in :). Thank you!

    • @LucianoAmbrosini 1 year ago +1

      Hi, and thanks for your feedback!
      Please have a look at this extended answer: tinyurl.com/GHAIeng-upd2
      Please let me know.
      Best regards

    • @pheeraphatratchakitprakarn3415 1 year ago

      @@LucianoAmbrosini My issue is resolved now; I had forgotten to put --api in webui-user.bat :) thank you for your quick response
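
For anyone hitting the same 404: the flag goes into webui-user.bat of the AUTOMATIC1111 install. A sketch of the relevant line only (everything else stays as shipped):

```bat
rem webui-user.bat (default AUTOMATIC1111 install)
rem --api is required so external clients such as the Grasshopper components
rem can reach the /sdapi/v1/* endpoints; without it those calls return 404
set COMMANDLINE_ARGS=--api
```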

  • @user-kr9up8le3s 1 year ago

    Fantastic work Luciano, thanks! I tried to use it in Grasshopper. It works for text-to-image, but I can't load ControlNET when I use SD in Grasshopper. I am using the v1.0 version, and ControlNET works when I use the browser. The error log:
    To create a public link, set `share=True` in `launch()`.
    Startup time: 8.5s (import torch: 2.1s, import gradio: 1.5s, import ldm: 0.8s, other imports: 1.0s, load scripts: 1.8s, create ui: 0.9s, gradio launch: 0.2s).
    DiffusionWrapper has 859.52 M params.
    Applying xformers cross attention optimization.
    Textual inversion embeddings loaded(0):
    Model loaded in 5.8s (load weights from disk: 1.1s, create model: 0.5s, apply weights to model: 0.6s, apply half(): 0.6s, move model to device: 1.6s, load textual inversion embeddings: 1.4s).
    Error running process: C:\Users\X!\Desktop\SDLOCAL\stable-diffusion-webui-master\extensions\webui-controlnet-v1-archived\scripts\controlnet.py
    Traceback (most recent call last):
    File "C:\Users\X!\Desktop\SDLOCAL\stable-diffusion-webui-master\modules\scripts.py", line 418, in process
    script.process(p, *script_args)
    File "C:\Users\X!\Desktop\SDLOCAL\stable-diffusion-webui-master\extensions\webui-controlnet-v1-archived\scripts\controlnet.py", line 682, in process
    model_net = self.load_control_model(p, unet, unit.model, unit.low_vram)
    File "C:\Users\X!\Desktop\SDLOCAL\stable-diffusion-webui-master\extensions\webui-controlnet-v1-archived\scripts\controlnet.py", line 471, in load_control_model
    model_net = self.build_control_model(p, unet, model, lowvram)
    File "C:\Users\X!\Desktop\SDLOCAL\stable-diffusion-webui-master\extensions\webui-controlnet-v1-archived\scripts\controlnet.py", line 486, in build_control_model
    raise RuntimeError(f"model not found: {model}")
    RuntimeError: model not found: None

    • @LucianoAmbrosini 1 year ago +1

      Hello, thank you for your feedback! Are you sure that you have downloaded the right model files for v1.0 (canny, hed, depth, etc.)? Anyway, please send me your GH files and some screenshots; I will have a look at them ASAP 😊

  • @pickcj9926 1 year ago

    Hello, I really like your plugin and I would like to try installing it, but I have encountered an issue that I cannot find a solution for on the internet. Additionally, my computer skills are not very strong, so I was hoping you could provide me with some assistance. Specifically, I am receiving an error when attempting to run webui-user.bat in PowerShell. The error message is:
    "ERROR: Could not find a version that satisfies the requirement torch==1.13.1+cu117 (from versions: 2.0.0, 2.0.0+cu117)
    ERROR: No matching distribution found for torch==1.13.1+cu117."
    The error code is 1. 🙏🙏🙏

    • @LucianoAmbrosini 1 year ago +1

      Hello, thank you for your message. Yes, this depends on the AUTOMATIC1111 distribution. I suggest uninstalling your Torch version and relaunching webui.bat (or, alternatively, installing the required version via pip).
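
As a sketch, the pip route hinted at in this reply would look roughly like this, assuming the CUDA 11.7 build named in the error message and that the commands run inside the web UI's own Python environment (venv\Scripts\activate on Windows); the torchvision pin is an assumption (the release that pairs with torch 1.13.1):

```bat
rem Remove the mismatched build first
pip uninstall -y torch torchvision
rem Install the exact version the web UI asks for; the index URL is PyTorch's
rem standard wheel index for CUDA 11.7 builds
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
```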

  • @zhiyang4726 1 year ago

    I put the ControlNET models in \StableDiffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\models, but they can't be loaded?

    • @LucianoAmbrosini 1 year ago

      Hello, and thanks for your message. Did you follow the instructions to install AUTOMATIC1111 and ControlNET precisely (before launching the Ambrosinus components)? Link here: ambrosinus.altervista.org/blog/ai-as-rendering-eng-sd-controlnet-locally/#part1

    • @zhiyang4726 1 year ago +1

      @@LucianoAmbrosini Thanks for the reply! Yes, I followed the guide, but I can't understand this: "the models need to be loaded before running the LaunchSD_loc component." What should I do to load these models before running the component?

    • @LucianoAmbrosini 1 year ago

      @@zhiyang4726 OK, clear ☺️. You need to load (copy and paste, or download and then move) the model files such as canny, hed, mlsd, depth, etc. into extensions > sd-webui-controlnet > models, otherwise you cannot run the ControlNET features.
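
For reference, the expected folder layout looks roughly like this (the .pth filenames are the standard ControlNET v1.0 model releases; the install root is illustrative):

```
stable-diffusion-webui\
└── extensions\
    └── sd-webui-controlnet\
        └── models\
            ├── control_sd15_canny.pth
            ├── control_sd15_depth.pth
            ├── control_sd15_hed.pth
            └── control_sd15_mlsd.pth
```

If the model dropdown still shows only "none", restarting the web UI after copying the files typically makes it rescan the folder.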

    • @zhiyang4726 1 year ago

      @@LucianoAmbrosini Yes, the models are in the folder as you mentioned, but they can't be loaded… the component only works with the model set to none

  • @muralimanoj8748 1 year ago

    It only works for text-to-image and not image-to-image, and I used Python 3.11

    • @LucianoAmbrosini 1 year ago +1

      Hello, this tool works for text-to-image and, above all, with the ControlNET features. I am evaluating whether to develop another component that works on image-to-image, exploiting the A1111 features.

    • @muralimanoj8748 1 year ago

      @@LucianoAmbrosini So the thing you show after minute 5:00 is not possible? I applied a base image, but the output is something else, unlike your result where the same building appears in the picture

    • @LucianoAmbrosini 1 year ago

      @@muralimanoj8748 Generally, it depends on the ControlNET version; in the video (and around the web) I suggested paying attention to the CN version. In particular, the tool works fine with ControlNET v1.0.

    • @muralimanoj8748 1 year ago

      @@LucianoAmbrosini Though it's not clear how you achieved it in the demo, I loved the video… thanks

    • @LucianoAmbrosini 1 year ago

      The Python version that I suggest using is 3.10.6; the others caused me some issues in running the tool