Stable Swarm UI - GPU-Network Rendering

  • Published Sep 12, 2024

COMMENTS • 112

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +3

    #### Links from my Video ####
    github.com/Stability-AI/StableSwarmUI
    docs.google.com/document/d/15CEBtpKsFIRfZG-WNaAs9euyW0RYzClUy3VE5yZp4IY/edit?usp=sharing

  • @DrHanes
    @DrHanes 1 year ago +19

    Looks like the Multi GPU part of this video is playing hide-and-seek, and it's winning! But in all seriousness, I think the uploader accidentally left the Multi GPU magic off-camera.

  • @Otis151
    @Otis151 1 year ago +23

    You do videos on how to install, but I usually don't know what the thing is or does. I'm sure this format is working for you, so I'm not suggesting a change. I'm just letting you know that a companion video on why I could or should be interested in this technology would be useful to me personally. Not sure if others agree.

  • @TheSpaceman1972
    @TheSpaceman1972 1 year ago +37

    The important part is missing: how to configure it for multiple GPUs within a local network. 😢

    • @crobinso2010
      @crobinso2010 1 year ago +1

      I think he means multiple GUI maybe???

    • @AIMusicExperiment
      @AIMusicExperiment 1 year ago

      In your server configuration tab, you enter the GPU number that you want it to prioritize for each backend that you set up. Of course, the power of this is that it uses whatever GPU or other memory is available.

    • @Grimmwoldds
      @Grimmwoldds 1 year ago +9

      @@AIMusicExperiment This is the second video with "multi-GPU" in the title from a "reputable" AI image/text-gen YouTuber that didn't mention anything regarding multi-GPU, even with it in the title. I'm gonna have to ask: WHO TOLD OLIVIO TO SAY THIS? Usually the video title is a summary of the video, clickbait, or paid advertising. This video is either #2 or #3.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      I show in the video how to set up A1111 to be addressed via the API. The only thing you need to do on top of that is to use your local IP, which you can find with ipconfig. It's in early alpha, so it might be a bit of a mess to set it up correctly, but the only thing you need to do is address the local API of your UI.
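A minimal sketch of that setup (the LAN address below is made up; find the real one with ipconfig on Windows or `ip addr` on Linux):

```shell
# On the remote PC, launch A1111 with its API open to the local network:
#   ./webui.sh --listen --api --port 7860
# (--listen binds to 0.0.0.0 instead of localhost; --api enables the REST endpoints)

REMOTE_IP="192.168.1.50"   # hypothetical LAN address of the A1111 machine
PORT=7860                  # A1111's default port

# This is the address to register as an A1111 API backend in Swarm's
# server configuration tab:
BACKEND_URL="http://${REMOTE_IP}:${PORT}"
echo "$BACKEND_URL"        # prints http://192.168.1.50:7860
```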

    • @DanielVagg
      @DanielVagg 1 year ago +7

      @@OlivioSarikas
      From what the repo says, I am under the impression it's more for utilising many different installs to help with really large images or workflows, like how the BitTorrent protocol works with large files. Or maybe allowing people with less powerful hardware to produce large-scale images. We just haven't seen anyone demonstrating this particular setup or usage.
      I'm guessing we add our "node to the network" at some point, but this remains unclear.
      Amazing content BTW, keep up the stellar work 💗 love your videos and the positive attitude you and your community bring 😊

  • @benzpinto
    @benzpinto 1 year ago +11

    I think a particular use case for this multi-GPU feature is that you have a few PCs/laptops with low-to-mid-range GPUs, all connected to the same network. You use one of the laptops to install this software, run Auto1111 on the others with listen mode on, and connect the laptop with this software to the rest of them. If this works, then it's probably cost-effective to buy multiple old used PCs/laptops with low-end GPUs to generate something on par with a single high-end GPU. Only testing results can tell if it's worth it.

    • @Xanderfied
      @Xanderfied 9 months ago

      Agreed. The ability to use web-based GPUs, while I'm sure it was made with the idea of generating income for server farms, will actually cut down on the need to purchase a $2k GPU for most users. Not to mention making subscription- or hourly-based fees much less hassle to start with.

  • @mufeedco
    @mufeedco 1 year ago +8

    You can use the Anaconda environment to help manage dependencies and isolate projects. Installing Torch makes it easier to use.

    • @blender_wiki
      @blender_wiki 1 year ago

      You mean you MUST use Anaconda. 😅😉😉👽👽

  • @timeTegus
    @timeTegus 1 year ago +8

    Cool! Can you maybe show how it runs on 2 PCs together, to show the performance improvements? Also, I have one PC with a 3090 and one with a 1070. When I add both to the swarm, can I load models with the combined VRAM, like 8 + 24? Or how is the performance added together? BTW, cool video!!!

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      I show in the video how to set up A1111 to be accessible via API and how to add the API to Stable Swarm. Is it really that hard to figure out that you have to use the local IP of a different computer instead?

    • @timeTegus
      @timeTegus 1 year ago +3

      @@OlivioSarikas Yes, but how does it function if I have 2 PCs? Does half of the process run on one PC and the other half on the other? How does the swarm work?

  • @bmorg7244
    @bmorg7244 1 year ago +3

    Reading the documentation for Swarm UI, it does not use multiple GPUs to generate the same image. If you have two GPUs in your local machine and you generate a batch with two images, it just uses one card for each image. Still not the holy grail of multi-GPU power that we were hoping for!

  • @Theexplorographer
    @Theexplorographer 1 year ago +4

    Um, multi-GPU? Or multi-GUI? Big difference, my friend. I have 7 GPUs here and would like to utilize a few of them for rendering. Looking at the thumbnail, one would believe this was a tutorial on how to set up A1111 with multiple GPUs. Am I wrong here?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Well, you can have multiple GPUs in multiple computers. I think the API element gives some indication of that. I don't know if you can address specific GPUs inside the same computer, but I don't think a lot of people have a setup like that, while a lot of people have access to multiple computers, online and offline, that they can connect to.

    • @phizc
      @phizc 1 year ago

      @@OlivioSarikas I'm not sure how to do it in that GUI (as in, how the program sends different jobs to different backends), but running multiple ComfyUI instances, one per GPU on the same computer, is possible if you have enough RAM.
      This is one way to set it up:
      1. Make a copy of the ComfyUI install.
      2. In the copy, edit the "run_nvidia_gpu.bat" file. Add "--cuda-device 1 --port 8189". You may have to tweak the cuda-device value, if e.g. "1" is the default because it's the most powerful or something.
      3. Launch the original and copied ComfyUI. You should have 2 different web browser windows, one for the default GPU (probably device 0), and one with device 1.
      For more GPUs, make more copies, and increment the values in each.
      It's possible you don't even need to make a copy of the entire ComfyUI folder. A copy of the launcher bat file might be all you need, but I'm worried that one instance would overwrite setting files etc. of the other and cause trouble. It's at least likely that you only need one copy of the python (system) folder.
      Note: I don't have multiple CUDA GPUs, so I can't test this, but if OP has, I'd be willing to try to help set it up. I'm on the Discord.
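The steps above could also be sketched as a single launcher script instead of copied folders, one instance per GPU, each on its own port (the install path is hypothetical; --cuda-device and --port are the ComfyUI flags mentioned above):

```shell
COMFY_DIR="$HOME/ComfyUI"   # hypothetical install location

# One ComfyUI instance per GPU; ports 8188, 8189, ... so the web UIs don't collide.
for DEV in 0 1; do
  PORT=$((8188 + DEV))
  # A real run would launch each instance in the background:
  #   (cd "$COMFY_DIR" && python main.py --cuda-device "$DEV" --port "$PORT" &)
  echo "GPU $DEV -> http://127.0.0.1:$PORT"
done
```

As noted above, whether several instances can safely share one folder without clobbering each other's settings is untested.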

    • @alienrenders
      @alienrenders 1 year ago +1

      @@OlivioSarikas It's still not multi-GPU.

  • @MilesBellas
    @MilesBellas 1 year ago +2

    Using GPU access time as a form of currency = powerful

  • @stanleywtang
    @stanleywtang 1 year ago +3

    Am I missing something? How does it work with multiple GPUs on the same network?

  • @TheBobo203
    @TheBobo203 1 year ago +4

    Would be cool to use 2 cards locally without SLI xDD

  • @guayard
    @guayard 11 months ago +1

    It looks like Swarm does not work with a lot of custom models in the default workflow. I tested a few, like DreamShaper, AbyssOrangeMix, and some more; the images are a complete mess of noise and sometimes random objects, and if you play with the settings it becomes even worse. But in the ComfyUI workflow Swarm works pretty well, and as far as I can see it's even faster compared to 1111. I don't really like playing with nodes, though; it of course gives you some illusion of control, but I don't see how it may be useful. Most of the time I want to quickly pick the settings and press "generate".

  • @randymonteith1660
    @randymonteith1660 1 year ago +3

    You never mentioned if you have an extra GPU in your computer. Isn't the idea of Swarm that you have multiple GPUs installed? If not, why would anyone bother with this?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      No, the idea of a swarm is to have a swarm of machines. Meaning you connect multiple machines with a GPU each.

  • @ducpham1920
    @ducpham1920 1 year ago +1

    I am looking for a solution to use Stable Diffusion to generate an image with multiple GPUs at once (like using 4 GPUs to generate 1 image, to make the rendering faster). Is there any project like that right now, Olivio?

    • @benzpinto
      @benzpinto 1 year ago +1

      4 GPUs on 4 different computers; that is what this software seems to suggest, yes.

  • @guayard
    @guayard 1 year ago

    Olivio, thanks for a great (as always) tutorial, but what do you think about it? How does "swarm" feel compared to 1111? Does it have any significant benefits?

  • @Cryptocannnon
    @Cryptocannnon 1 year ago +1

    Sooo, could I put 5 GPUs into one of my old mining motherboards and use all the GPUs to render images?

  • @DalviqCash
    @DalviqCash 1 year ago +2

    As of now, the GitHub write-up has no mention of multi-GPU support anywhere, neither as an implemented feature nor as a planned one. From the looks of it, the whole thing seems more Mac-oriented than any other SD solution so far. If multi-GPU support was ever mentioned in the write-up, which, being alpha, quite understandably gets updated on a daily basis, then I believe it was a feature that would, by design, be geared much more towards utilization of as many M1 or M2 GPU cores as possible, and obviously not meant for any other use case, such as a typical Windows rig with 2 physical graphics cards installed.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +2

      The GitHub page literally says "the original key function of the UI: enabling a 'swarm' of GPUs to all generate images for the same user at once". That sounds a lot like multi-GPU usage to me.

    • @DalviqCash
      @DalviqCash 1 year ago

      @@OlivioSarikas Didn't read that far, tbh, but the motivation doc the write-up refers to (it is linked from the "title page") kind of confirms my guess by saying, I quote, "it must be able to use available CPU cores while serving user requests and managing internal data to be able to respond to all requests as quickly as possible". Available CPU cores (what?), which I believe is a typo and should actually read GPU cores, is NOT the same as physical graphics cards, or GPUs as they are commonly known. An M2-based Apple Mac has up to 16 GPU cores if I'm not mistaken, and those are the primary candidates for a project like this. The same motivation doc does not clearly say that the idea is to be able to use more than one discrete GPU or graphics card, or even more than one GPU core on a single card, as is the case with some Tesla cards.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      @@DalviqCash GPU cores are not the same as GPUs. Your CPU has a lot of cores but is still only one CPU.

    • @jeffscott3186
      @jeffscott3186 1 year ago

      The simplest way I can explain this: the GPUs crunch numbers (rendering), and the CPU directs traffic.

    • @DalviqCash
      @DalviqCash 1 year ago +1

      @@jeffscott3186 Sure, but in this case the traffic has to go somewhere, and that somewhere will most likely be in the cloud, which makes the whole idea a front end to a cloud-based solution, which it obviously isn't. In tech it is imperative to present either a turnkey solution that needs no explanation or a clean and straightforward manual describing the steps the end user must perform to reach the desired goal. After reading both the write-up and the motivation doc for this project twice, I saw neither the former nor the latter. The way it looks now is yet another take on a user interface for the current version of SDXL, which in my opinion is fast going nowhere, but that is of course a completely different story.

  • @DrDaab
    @DrDaab 1 year ago

    Great videos! Does the install.bat allow you to install to another partition than C:? Also, will it run ROOP?

  • @changtc8873
    @changtc8873 1 year ago

    Thank you for sharing, Olivio

  • @monolofiminimal
    @monolofiminimal 1 year ago +1

    Sounds great. I want to go the other direction, though, and have something like OnnxStream implemented; the other issue with that is getting models in ONNX format.

    • @phizc
      @phizc 1 year ago

      There are tools that can convert safetensors or ckpt to ONNX, so you can use the existing models. I haven't tried it myself yet, but I've read a bit about it. I think there may be an issue with ONNX being less flexible; e.g. you can't select the resolution after conversion, and LoRAs must be baked in. But this was something I read a few months ago, so it might have been fixed. Also, that was ONNX, not OnnxStream, so the latter may add other limitations. Not sure. No expert.

  • @c0nsumption
    @c0nsumption 1 year ago

    So technically I can run Auto1111 on another machine (locally) and just pass in the address of that system and the port it's running on?
    This would essentially let me use both machines from one machine.

  • @ericanderson4973
    @ericanderson4973 1 month ago

    Had issues getting models from Auto1111 to load. Thank you for helping me fix that.

  • @Thozi1976
    @Thozi1976 1 year ago +1

    00:00 💻 Stable Swarm UI is a new project from Stability AI for distributed rendering with GPUs.
    00:41 📁 Installation runs via a Windows batch file that downloads all the files.
    01:47 🖥️ After installation the web UI opens automatically. Problems can be fixed with a reinstall.
    03:23 🧠 Models have to be added to the server manually if they are not found automatically.
    04:58 🎨 Various UIs such as stable-diffusion-webui can be used through configuration.
    06:16 🖌️ Nodes can be added and workflows created via the workflow editor interface.
    07:25 📈 Upscaling can significantly increase the image resolution for better results.
    08:50 🌐 Alternatively, the UI can also be reached over the local network, not just locally.
    09:31 🚀 Distributed rendering with more GPUs lets projects be finished faster.

  • @admiralgeneralaladeen1851
    @admiralgeneralaladeen1851 1 year ago

    Hey Olivio, I have been having a problem making LoRAs, and the members of your Discord can't help, so this is my last-ditch effort. Every time I make a LoRA trained on photorealistic images, the LoRA comes out with an anime or cartoon-like style. Can you help me?

  • @anonded
    @anonded 1 year ago

    I hope we will soon be able to combine all the GPUs one has, which could all be in different local computers/laptops, to use for generating images, just like the Petals project for LLMs...

  • @Nyarlatha
    @Nyarlatha 11 months ago

    I tried to install it and it said I had an existing package already installed. What do I do?

  • @sneedtube
    @sneedtube 1 year ago +1

    I missed the part about the GPU network.

  • @aZiDtrip
    @aZiDtrip 8 months ago

    Hi, do you have an update on that multi-GPU rendering? We have 3 machines running at my home, 2 with RTX 3090s and 1 with a 4090. I can get one machine to use another GPU via the swarm network, but I cannot get it to use multiple GPUs; it's just the remote one or the local one. It would be real fun to know how much performance I can get with 3 GPUs doing the task.

  • @onroc
    @onroc 1 year ago +3

    Multi-GPU Rendering?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      The project says "the original key function of the UI: enabling a 'swarm' of GPUs to all generate images for the same user at once"

  • @erickstamand
    @erickstamand 1 year ago

    Lines that begin with "rem" are commented out and are not executed, so the dotnet 7 SDK won't get installed.

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 1 year ago

    Will there be a speed reduction compared to simply using ComfyUI?

  • @vicentepallamare2608
    @vicentepallamare2608 16 days ago

    How can one add e.g. SVD for img2vid to this?

  • @danielsuarez5663
    @danielsuarez5663 1 year ago

    Hi, not sure what I'm doing wrong... can't get the shortcut.

  • @luman1109
    @luman1109 1 year ago

    Could you make a small stack of 1080s on a Raspberry Pi dedicated to rendering? Like a personal rendering server.

  • @lithium534
    @lithium534 1 year ago

    I would love to know how to run it with multiple GPUs, as I have an old 980. Together with my 1080 this could be great for faster and larger images.

  • @user-dj3rd4my5k
    @user-dj3rd4my5k 1 year ago

    It's still a work in progress. LoRAs and many of these things don't work yet, but I must admit it's downright fast...

  • @MrSongib
    @MrSongib 1 year ago

    5:30 I wish we had this for A1111.
    It seems fun and fast to just load the pipeline for a certain task that you want to do.
    For example, my own workflow is testing a prompt, then proceeding to ADetailer, doing some inpainting maybe, and then upscaling with 2 more scripts that need to be enabled (ControlNet and Ultimate Upscale).
    The problem is I need to set up three different tabs for this, and for now it's OK (a bit annoying sometimes), but I wish we could quickly load certain settings for different types of tasks inside the WebUI.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Why don't you use ComfyUI for that and have it all in one setup? Not sure if ComfyUI has inpainting (other than automated with things like ADetailer), but the rest should work.

    • @blisterfingers8169
      @blisterfingers8169 1 year ago +1

      @@OlivioSarikas It does. Right-click on an image in a Load Image node and you can mask the area you want. Replicating automatic face detailing in the same style as "Only Masked" in Auto doesn't seem possible atm, though.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      @@blisterfingers8169 awesome, thank you!

  • @farizyal8108
    @farizyal8108 11 months ago

    How do I show the it/s in Swarm UI?

  • @PerfectArmonic
    @PerfectArmonic 1 year ago

    Ufff! I have been waiting for a year already... they promised that they'd do it, but alas! Politicians' promises! They promised an SD AI NN that would be able to do in graphics what ChatGPT already does with words: I mean a consistent dialogue after the prompt. For instance, on my first prompt I write: "A beautiful girl wearing blue trousers." The image is generated. Then the dialogue continues: "Now maintain every aspect of this generated image (here they can add a special emoticon or symbol, because this expression will be used a lot by millions of people) and change the color of the trousers to red." The image will remain exactly the same except for the trouser color. Further: "Now make her smile, keeping the consistency," and she will smile! "Now turn her head 15 degrees (or pi/12 radians for others) to her right and 30 degrees upward." And so on, to make the user feel like a puppeteer! That would be the state of the art! Remember: they promised but never approached this idea. A year has already passed since they promised "the puppeteer SD" and nothing! When will they start to work on that?

  • @blacksage81
    @blacksage81 1 year ago

    "Wonky" is 100% the right word for how this whole Stable Swarm thing feels to me right now, but it isn't all bad. For some reason I must have botched my Auto1111 install, and every time I try to load an XL model it either takes several minutes to load or doesn't load at all. But in Stable Swarm, XL models load just as fast as 1.5 models do in my Auto1111 install, so I guess I'll use both until all the scripts are updated and the community finally tames XL.

    • @Vyshada
      @Vyshada 1 year ago

      Yeah, I had that issue too, where attempting to load an SDXL model would result in a crash. So I tried ComfyUI, and my only regret is that I hadn't tried it sooner. For some reason, not only does it load models WAY faster, it also somehow reduces generation time.

  • @Havic22123
    @Havic22123 1 year ago

    It seems this could potentially boost the speed of AnimateDiff.

  • @dimonapatrick243
    @dimonapatrick243 1 year ago

    @OlivioSarikas, could you make a video on the Miaoshou Assistant extension for A1111?

  • @arifkuyucu
    @arifkuyucu 6 months ago

    Any video about a Mac install?

  • @oaahmed7515
    @oaahmed7515 1 year ago +1

    What is the required size of the graphics card? Is 8 GB of VRAM enough?

    • @timeTegus
      @timeTegus 1 year ago +1

      Yes, but you need lots of system RAM, I think 32 GB. But system RAM is cheap to upgrade :)

    • @0AThijs
      @0AThijs 1 year ago

      @@timeTegus Depends....

    • @oaahmed7515
      @oaahmed7515 1 year ago

      @@timeTegus Good. I have 64 GB of RAM.

    • @AIMusicExperiment
      @AIMusicExperiment 1 year ago

      yes

    • @0A01amir
      @0A01amir 1 year ago

      4GB VRAM and 8GB system memory is enough for SD 1.5 on ComfyUI and Auto1111.

  • @spiritpower3047
    @spiritpower3047 1 year ago

    Hi! 😍 Please can you tell me how to interpolate frames for a video LOCALLY (GPU) with the new Google "FILM" algorithm or better? I have tried and installed some stuff but it does not work 😓

    • @robthegreatt1
      @robthegreatt1 1 year ago

      Flowframes

    • @spiritpower3047
      @spiritpower3047 1 year ago

      @@robthegreatt1 The special "FILM" algorithm is not in the list of Flowframes models 😕

  • @MoS910
    @MoS910 1 year ago

    Unfortunately my device only has a GTX 1060 (MSI laptop), so I can't use it locally 🙁.
    And the bat file only works if I put it on the desktop, and I don't want it on the desktop.

  • @peterpui7219
    @peterpui7219 1 year ago

    It's unstable now; self-starting ComfyUI and Automatic1111 from the server configuration works very badly in terms of connectivity.

  • @Feelix420
    @Feelix420 1 year ago

    I need your negative prompt, boss.

  • @anamulhaquenahid659
    @anamulhaquenahid659 1 year ago

    Is an RTX 3060 12GB a good GPU for this task? ((((NEED HELP)))

  • @purplebladder
    @purplebladder 1 year ago

    These never work for me.

    • @purplebladder
      @purplebladder 1 year ago

      This is the fix they tell you to do if it doesn't make a shortcut after trying the first bat again: "TODO: Even easier self-contained pre-installer, a .msi or .exe that provides a general install screen and lets you pick folder and all"... like, what .msi or .exe is there to choose?

  • @MrWeda2
    @MrWeda2 1 year ago

    I'm sorry... I treated you like a teacher. You helped me deal with the not-so-simple history of the relationship between artificial intelligence and a CREATIVE HUMAN... Now everything is different. I understand that it's important to inform people about what's going on. But things get more and more complicated. Please decide, as an artist, what you are working with, and I will follow you along this path...
    With respect and best regards, Alexander.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      I'm not sure what you mean by that. Almost all of my videos are about Stable Diffusion. This is just a tool to host multiple Stable Diffusion UIs.

  • @MrSongib
    @MrSongib 1 year ago

    SLI died for a reason. xd
    Tbh, the problem for AI is memory; other than that, it's just a matter of support from AMD or NVIDIA. wcyd
    Now I'm thinking of the memory soldering that some people do (though idk about the drivers). xdd

  • @maybetonite
    @maybetonite 1 year ago +3

    Considering you're in your forties and you see yourself as a "passionate artist", didn't you have enough time to, like, learn actual art? To master color, composition and form? To achieve something that will represent your efforts, embody your mind and soul? Something that can be your life's work? Something that will have value? I've seen people 14-16 years old whose creativity evokes a whole range of emotions, and yet they have whole lives ahead of them, many, many years to improve, sharpen and perfect their skills. This is depressing.

    • @ronnetgrazer362
      @ronnetgrazer362 1 year ago +6

      Wow, you must be really happy with yourself to need to put other people down, or at least try to. Cheer up, life can be fun!

    • @maybetonite
      @maybetonite 1 year ago

      @@ronnetgrazer362 I'm always happy to put down a liar, or someone who lives in delusion, trying to pretend to be something they're not. Life sure can be fun, especially if you do nothing but consume.

    • @ronnetgrazer362
      @ronnetgrazer362 1 year ago +2

      @@maybetonite So... do you do nothing but consume, *or* is life not particularly pleasurable for you?
      I sense projection on your part, perhaps unknowing. I mean, here you are, commenting on YT instead of slaving away at original works, pretending to know not just this uploader's total creative output but his self-image as well.
      Maybe it feels nice to lash out on the keyboard, but it reads like insecurity or low self-esteem.

    • @lonelycuriosity4784
      @lonelycuriosity4784 1 year ago

      The problem is, most people are only interested in pretty lies these days, no matter how insane that is.

    • @maybetonite
      @maybetonite 1 year ago

      @@ronnetgrazer362 You radiate such a strong "50-year-old barely coping with life" energy that I don't even want to talk with you. I assume my initial comment offended you in some way, causing a defensive response? That's the only defense mechanism I can see here. You don't understand the subject or its problems, and you also have a hard time understanding what's written. Get a grip on reality, if your brain is still capable of it.