#### Links from my Video ####
python_embeded\python.exe -m pip install -r
twitter.com/PurzBeats
www.youtube.com/@PurzBeats/streams
stability.ai/news/introducing-stable-cascade
pinokio.computer/
github.com/kijai/ComfyUI-DiffusersStableCascade
👋
Dude, you can also just run 'pip install -r requirements.txt' in your 'DiffusersStableCascade' folder.
REVISED ENTRY: Did some testing on a 4070 Ti Super 16GB.
The model took 13.5 GB VRAM after generating a 512x512 image. [GPU memory readings from Task Manager]
512x512: 8.62 s [13.5 GB]
640x640: 9.3 s [13.6 GB]
768x768: 9.44 s [13.6 GB]
896x896: 9.6 s [14.2 GB]
1024x1024: 10.1 s [14.6 GB]
1152x1152: 10.5 s [15.0 GB]
1280x1280: 11.3 s [15.5 GB]
1408x1408: 13.5 s [15.9 GB]
1536x1536: 21.8 s (overflowed VRAM into GPU shared memory) [16.4 GB]
1664x1664: 31.6 s [16.9 GB]
1792x1792: 43.2 s [17.5 GB]
1920x1920: 56.4 s [18.2 GB]
2048x2048: 78 s [18.8 GB]
2176x2176: 93-130 s [19.2 GB] (times were highly variable over 5 runs with different styles and prompts)
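One way to read the numbers above: cost per megapixel falls as the image grows (fixed overhead amortizes) until the 1536x1536 VRAM overflow, then climbs sharply. A small sketch using a few of the times from this comment:

```python
# Seconds per megapixel from the benchmark above (4070 Ti Super 16GB).
# The rise after 1408x1408 marks the point where VRAM overflows into
# slower GPU shared memory.
results = {512: 8.62, 1024: 10.1, 1408: 13.5, 1536: 21.8, 2048: 78.0}
for size, sec in sorted(results.items()):
    mp = size * size / 1e6                 # megapixels per image
    print(f"{size}x{size}: {sec / mp:.1f} s/MP")
```

Below the overflow point the per-pixel cost actually improves with size (32.9 down to 6.8 s/MP); past it, it roughly triples.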
Too slow for me.
There is something wrong: my RTX 4070 12GB is generating VERY slowly, every iteration takes about 500 seconds... What could be the reason?
@@modzha2011 There are other comments mentioning the same thing. One of the issues is that this initial unoptimized research model takes up 13GB VRAM without generating anything, which could cause shared-memory swapping to system RAM, or even swapping to disk if you don't have enough free RAM available due to other apps running.
I have 64GB system RAM and noticed Comfy uses up to 20GB at times for other SD models, outside of VRAM.
I would also do a clean installation of your NVIDIA graphics driver. I got the 4070 Ti Super this past Sunday and did a wipe and clean install using version 551.23, though newer ones might work.
Otherwise I would wait for Comfy's official release, which rumour suggests is coming very soon. Remember, this is an experimental research model, not at all optimized for most consumer systems.
@@glenyoung1809 Thank you for your answer, you reassured me)) I’ll wait for the release, and until then I’ll calmly continue to use SDXL models
@@modzha2011 SD 2.1 is better for a 12GB card; XL is very VRAM-heavy
It can do 512x512 and 768x768. It can also do 2048x2048 without problems. In the console version, any resolution that's a multiple of 8x works, e.g., 1920x1080 works perfectly.
Is there anywhere we can use it online without downloading it locally?
@@mattahmann Online versions use the diffusers library, which is broken for most resolutions. (Other than that, I don't know about any in particular, I use it offline. Sorry.)
Watched a video just a few hours ago showing a web demo, but it said this hasn't been released openly. That is awesome if this is out now.
I probably watched the same video as you, was it MattVidPro by any chance?
Würstchen and a sausage dancing, I got your reference...
I'm here sending some love to Germany and Austria too
Hold my Bratwurst!
So you just ignore Switzerland? 😢😅
@@darki0022 Nope, Switzerland is life, Switzerland is love, Switzerland is heaven on earth
After trying it for architecture, I can say it is quite promising! It reads patterns and geometry waaay better than SD. Still some small issues with patterns of course, but we are getting closer to something big! With ControlNet and stuff... man, this tool will be a killer!
Olivio, can you please make a tutorial on how to train a Lora on this? They provide a guide but I am sure you can explain it way better :)
I would love this
Bro you are awesome for this!
Wonderful! I have an RTX 4070 with 12 GB VRAM and the extension installed for A1111-Forge. I can generate a 2048x1024 pixel image in 17 seconds! However, on plain A1111 (no Forge), the Stable Cascade extension runs slowly.
Same GPU but through Pinokio; it generates extremely slowly, feels like it's using the CPU
I can only see an error message in the box on the site, and the error 'previewer is not defined' keeps appearing in Pinokio. Is there any solution?
Same error here on 3 different systems.
Great platform Pinokio! Are there ways to start it in lowram mode or similar like in automatic1111? I tried installing Instant-ID and it gives me cuda out of memory. Furthermore, image generation with Stable Diffusion Cascade is veeeeeery slow
How slow we talkin?
"Very fast"? It depends on what "fast" means to you, because for me, creating 1 image takes more than an hour with Stable Cascade! Something I never experienced with Stable Diffusion (which only takes a few seconds)
Dear friend, thank you for the video! But I would like to see a comparison in quality and speed, for example with the A1111 or Comfy solutions. And also, is there any hope for working inpaint/outpaint functionality?
Not working over here. I installed everything as in your instructions but I receive an IMPORT FAILED for the DiffusersStableCascade module. Also tried updating all, and uninstalling the node from the manager and reinstalling it. Any suggestions?
Here is a fix for the IMPORT FAILED error.
go to C:\Users\YOURUSERNAME\.cache\huggingface\hub\models--stabilityai--stable-cascade\snapshots\f2a84281d6f8db3c757195dd0c9a38dbdea90bb4\decoder\
open config.json
On line 45 change "c_in": 4, to "in_channels": 4,
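If you'd rather script the edit above than hand-edit the file, something like this works. The path here is a stand-in so the sketch runs as-is; point `cfg_path` at the real `decoder/config.json` from the snapshot path quoted above, and keep a backup first:

```python
import json
import tempfile
from pathlib import Path

# Stand-in config file so this sketch is self-contained; replace cfg_path
# with the real .../decoder/config.json from the comment above.
cfg_path = Path(tempfile.mkdtemp()) / "config.json"
cfg_path.write_text(json.dumps({"c_in": 4, "c_out": 4}))

cfg = json.loads(cfg_path.read_text())
if "c_in" in cfg:
    cfg["in_channels"] = cfg.pop("c_in")  # the key name diffusers expects
cfg_path.write_text(json.dumps(cfg, indent=2))
print(sorted(json.loads(cfg_path.read_text())))  # ['c_out', 'in_channels']
```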
That pinokio application is amazing for people that are not amazing at software like myself! tysm for showing that
They should at least add a digital signature to their .exe so it doesn't trigger Windows Defender. Seems like amateur hour to me.
It's not exactly a finished product! I had a lot of problems and gave up with it.
I tried installing the custom node... but it failed to import :(
Pinokio doesn't work for me, it only detects my CPU.
ComfyUI detected my GPU, downloaded the required model files, and worked on the first attempt.
Everything installed fine on my Mac mini M1.
I clicked on the local URL to open the app, but every time I try to run a prompt I get an "ERROR" in the middle of the window.
Does anyone have any idea why? What do I need to do here??
Using a 16GB 4060 Ti, with default settings, it takes 14.4 seconds to generate an image, 37.8s at max resolution. The model is using all the VRAM, which would explain why it is so slow on GPUs with less VRAM
Just wait, Stable Cascade will be implemented in Comfy itself soon.
So this would be a more basic version of Automatic1111?
Running Stable Cascade from Pinokio offers the option 'WiFi - Local Sharing', but for this to work it obviously needs to allow public sharing first. The terminal says: To create a public link, set `share=True` in `launch()`. But I couldn't figure out yet WHERE the heck this needs to be changed. Any hint?
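For what it's worth, the place to change is the Gradio script that Pinokio launches; in this install that is typically an `app.py`, though that filename is an assumption. A sketch of the one-line patch, run here against a stand-in file so it is executable as-is:

```python
import tempfile
from pathlib import Path

# Stand-in app.py so this sketch runs on its own; point `script` at the
# real Gradio demo file in your Pinokio install instead.
script = Path(tempfile.mkdtemp()) / "app.py"
script.write_text("import gradio as gr\n# ... UI definition ...\ndemo.launch()\n")

# The actual fix: pass share=True so Gradio creates a public tunnel URL.
text = script.read_text()
script.write_text(text.replace("demo.launch()", "demo.launch(share=True)"))
print(script.read_text().splitlines()[-1])  # demo.launch(share=True)
```

Note that `share=True` exposes the UI via a public Gradio link, so only enable it when that is what you want.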
I got an error : Error occurred when executing DiffusersStableCascade:
module 'comfy.model_management' has no attribute 'unload_all_models'
update your comfyui
Wonderful answer! Thank you for the help!
@ I ran into this issue today too:)
Installed it with Pinokio and have a 3080, any idea why it takes like 20 minutes to generate?
Are you sure you don't have anything open in the background? Maybe comfyui still running?
@@OlivioSarikas yeah, 100%. I know that from InstantID, which takes a lot to run, so I tend to make sure nothing is running when generating
@@mendthedivide Are you sure it's running on GPU and not on CPU render?
@@Steamrick interesting, not sure how to check or what the default would be when installing with Pinokio, idk
@@mendthedivide use Task Manager to see if the GPU or the CPU is running a big load?...
After installing Pinokio and Cascade, I just get "Error", if I try to render something.
What can I do? Thanks in advance!
how to install comfyui in pinokio? Also how to run cascade through comfyui in pinokio? Would love to see a vid on that!
On Windows, 4090 + 13900K, Pinokio works fine, but ComfyUI shows a diffusers error looking for it on the C: drive, while all my AI stuff is on the E: drive. Any suggestions? Tried many times to solve it but no success.
Hi, can you help me solve this error: "module diffusers has no attribute StableCascadeUnet"? I installed Cascade in Stable Diffusion but got this error after installing all the models on Windows 11
Doesn't work. It opens, but when I try to run the prompt all I see is an error text in the center where the picture should be; there's an AttributeError. Same thing with the Pinokio version.
Can I use only the method one? I mean just installing Pinokio? May I use images generated by Cascade for commercial purposes?
Unfortunately the ComfyUI version doesn't work on Mac:
Error occurred when executing DiffusersStableCascade:
BFloat16 is not supported on MPS
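The usual workaround for this class of error (not a fix for this specific node, just the general pattern) is to fall back to float32 on Apple's MPS backend, since its bfloat16 kernels are missing. Sketched here in plain Python so it is runnable anywhere; in real code you would return `torch.float32` / `torch.bfloat16` and pass the result as `torch_dtype` to the pipeline:

```python
def pick_dtype(device: str) -> str:
    """Pick a safe computation dtype per backend. Returns the dtype name
    here; real code would return the corresponding torch dtype object."""
    if device == "mps":       # Apple Silicon: no bfloat16 support
        return "float32"
    if device == "cuda":      # recent NVIDIA GPUs handle bf16 fine
        return "bfloat16"
    return "float32"          # CPU and everything else: stay conservative

print(pick_dtype("mps"))   # float32
print(pick_dtype("cuda"))  # bfloat16
```

float16 is often another workable choice on MPS, at the cost of some numerical range.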
Everything was installed, but it refused to work, because I have ComfyUI on a separate disk and this node is given the system paths. So much sadness.
It works like a charm as a Forge extension (I have a Forge installed on a external SSD disk).
When I press Run I get "NameError: name 'previewer' is not defined. Did you mean: 'Previewer'?" and Google is not helping
Same here.
Same here on 3 different systems.
Any idea why it takes so long to generate? Stable Diffusion didn't take nearly this long...
same here
After installing I get the error message: "Runtime Error. Failed to import transformers.models.clip" - No module named 'transformers.models'. I can't start the application. What went wrong?
I wasn’t ready for that beard coloring 😮
Installed Pinokio; it runs but seems VERY slow here (even though I have a very fast PC). Any idea why?
I would guess that your VRAM is the problem. How much do you have?
EDIT: I just saw that many people said it takes up to 13-16GB VRAM, so if you are below that, it might really be the cause.
@@darki0022 I have an RTX 3080
Very cool! I think Stable Cascade will be another big leap, kind of like SDXL. I think I'll wait for ComfyUI's native implementation instead of having it as a custom node for now, but very cool video!
It is expected to officially land in ComfyUI this weekend.
Everything seems to have worked, except it doesn't display any image when I queue the prompt
I'm getting a CUDA error with this on my 6GB VRAM 😥
I'll test it sooner or later, but to start I'm testing it on the Huggingface Spaces, and it seems like if you add 'made by Dall-E 3' to the prompt you get a much better output overall
No luck... I think I'm going to uninstall and start all over from scratch... but I have seen others say it doesn't work through Pinokio as well
Does anyone know where the output folder is in Pinokio?
Hmm, when I click the download button for the Windows version (after finding it with Discover), it thinks for a second, then Pinokio just crashes completely... twice so far.
SageMaker Studio Lab Tutorial?
The opposite of faster - on my PC it runs around 6-10x slower than SDXL. I'm running through the stand-alone demo app, though...
Is there a way to fix it from downloading installing and drawing from my main C drive? I run Comfyui from an alternate SSD but the nodes puts it on my other drive and makes it real slow
Use symbolic links
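A sketch of that idea with hypothetical paths: the app keeps looking at the path it expects while the data actually lives on the other drive. Shown in Python so it runs anywhere; on Windows the equivalent is `mklink /D "link" "target"` from an admin command prompt:

```python
import os
import tempfile
from pathlib import Path

# Hypothetical paths under a temp dir so the sketch is runnable as-is;
# substitute your real fast-drive folder and the folder the node insists on.
base = Path(tempfile.mkdtemp())
real = base / "fast_ssd" / "models"   # where the files actually live
real.mkdir(parents=True)
link = base / "comfy_models"          # where the app looks for them
os.symlink(real, link, target_is_directory=True)
print(link.is_symlink(), link.resolve() == real.resolve())  # True True
```

On Windows, creating symlinks needs admin rights or Developer Mode enabled.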
I'm getting this error:
Error: User did not grant permission. at C:\Users\home\AppData\Local\Programs\Pinokio\resources\app.asar\node_modules\sudo-prompt-programfiles-x86\index.js:553:29 at ChildProcess.exithandler (node:child_process:430:5) at ChildProcess.emit (node:events:513:28) at maybeClose (node:internal/child_process:1091:16) at ChildProcess._handle.onexit (node:internal/child_process:302:5)
It failed because my app.asar is a file (51MB) and not a directory. Does anyone have this issue, and how do I fix it?
Olivio, what did you use to make your end screen animation???
I'm running an M1 Max Mac chipset here; I want to use this but can't seem to get it to work at the moment
Finally I was able to run it; updating all the dependencies to the latest versions helped...
is there a way to download the files and models for a1111 etc?
Great video. One question..
How did you create the animation at the end of the video? Im Very interested in creating a fictional character like that. Thanks
with a software called Adobe Animate. this is actually done with the free demo
I am getting CUDA out of memory on my NVIDIA 1060 6GB VRAM card for Stable Cascade
How much VRAM do I need to use this locally? I just have an RTX 4060 with 8GB VRAM. It is slow and makes my PC freeze. I use the latest ComfyUI, which has Cascade.
As of now both ways do not work. Pinokio says app not found, while ComfyUI says it's deprecated
How do I add a LoRA when running Cascade, please?
New node seems broken on Linux, at least for now,
ComfyUI-DiffusersStableCascade module for custom nodes: Failed to import diffusers.pipelines.stable_cascade.pipeline_stable_cascade because of the following error (look up to see its traceback):
cannot import name 'List' from 'typing_extensions'
I already have typing_extensions so not clear what the fix is.
The ComfyUI version doesn't work well on AMD. The demo version (installed with Pinokio) does work though.
Copy path? I don't have this option on Windows 10...
Has it worked with controlnets or other node yet?
Not working for directml
So how is the NSFW potential?
Huge... 🙄
I downloaded it yesterday and tried it, but it was taking too long to be worth it. So I uninstalled it. Looking forward to fine-tuned models.
After completing the install process and trying to start and use the Web UI, I get the following error: NameError: name 'previewer' is not defined. Did you mean: 'Previewer?'
Can anyone help? This is after installing via Pinokio
Yeah, Pinokio failed to install Cascade on mine. It complained about an old Python version already on my machine for another program. Eh, I will wait for A1111 support and finetunes to roll in.
Same here
Greetings, what about graphics cards? Does it work using an AMD GPU? Thanks
Nope. Need a solution that supports DirectML. Stable Cascade does download the DirectML dependencies, but crashes and spits errors when rendering.
But does it support LoRAs or similar stuff? Is it uncensored? Does it have any ControlNet or similar? Because if it doesn't, it will be another curious but not popular tool.
30GB Pinokio Cascade installation... extremely slow, 5 min for a simple prompt (3060/12GB)... heavily censored! Just uninstall.
Just a thought here, but does this software work offline, or is it another AI website? I would rather it worked offline
Does someone have tips on fixing this? ERROR: Invalid requirement: 'Files\\SD\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-DiffusersStableCascade\\requirements.txt'
Hint: It looks like a path. File 'Files\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\requirements.txt' does not exist.
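That "Invalid requirement: 'Files\SD\...'" message usually means the install path contains a space (note how the reported path starts mid-word) and pip received it as two separate arguments. The fix is to wrap the full `-r` path in double quotes in the pip command. A runnable demonstration of the splitting, using a made-up path:

```python
# Hypothetical Windows-style path with a space, mimicking the failing case.
unquoted = r'D:\AI Files\node\requirements.txt'
args = unquoted.split()  # how the command line splits it without quotes
# The second fragment starts with 'Files\' - exactly the garbage pip reports.
print(args)              # ['D:\\AI', 'Files\\node\\requirements.txt']
```

So in the actual command, quote the whole thing, e.g. `pip install -r "<full path with spaces>\requirements.txt"`.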
Can I use it in Automatic1111? Thx for your videos
Thank you, sir, very helpful.
Does it run on low- or mid-range PCs like SD 1.5 does?
no absolutely not
i cant do it, I get the error 0.0 seconds (IMPORT FAILED): 😟
Can I set this to 2 gigabytes of video memory??
Can anyone solve this for me? CUDA out of memory. I have 8GB VRAM on a 3070 Ti.
My own testing indicates this model uses at least 13GB VRAM before even doing anything, keep in mind it’s a research phase model and there’s no memory optimizations or even software optimizations built in.
Essentially you are playing with version 0.1 of a completely new model type.
@@glenyoung1809 Thanks Glen, I understand now will wait to see if any memory optimizations happens or not.
Hooray for Purz!
Not really seeing why Cascade is superior to ComfyUI or Forge with current models? Is it speed? Is it quality? Does it do more things?🤔
What they say in the announcement is that it is faster to run than SDXL, has better prompt following, and better aesthetics. By their human-judge ratings, the quality improvements over SDXL are moderate. The new architecture also makes it much more efficient to train fine-tuned models and LoRAs.
@@robmacl7 Early reviews sound like it needs a top end GPU.
It is the quality and how it interprets prompts (better).
@@uk3dcom I have an RTX 4070 with 12 GB VRAM and it works perfectly as a Forge extension.
Where is the info for A1111?
Many many many many thanks!!! 😃
how do you use face swap?
There are models for ComfyUI
Is it normal for an RTX 3080 to be taking several minutes to create an image?
any word on Forge being able to run it?
Not that I heard of. But it was only released yesterday
You should have mentioned what the difference is between this and Forge UI
Fooocus is better overall, I think. But idk, it depends on the user.
Hey! Vsauce, Michael here.
Requires min 20GB of hard disk? Is it really worth it... yet? 🤔
Nope..... 3 hours, installed, nothing worked... uninstalled... done
Can you run Cascade in Forge UI?
Hello, I follow your channel every day, and I cannot call myself an expert. Well, I am already on SDXL with ComfyUI. Could you please tell me whether Stable Cascade is a new model, or a new Stable Diffusion to focus on? Thank you
it's a new model and a new method
I'm going to hold off on this until we can get it in a Safetensors format and a full commercial release. I'm not knocking it, it's just that I don't feel like dedicating the time to experimenting with it. Now, what I'm curious about is how much easier training will be.
Yeah. Sometimes I do personal projects, but even then I still want the option of making them commercial. Plus, whatever experience I get with it, I cannot apply to my commercial work.
do you steal other 1.5 and SDXL finetuned models as well?
@@Dmitrii-q6p I honestly have no idea what you mean by that?
Translation: No NSFW makes this model useless to me. :) (just kidding but I bet a lot are thinking that)
@@Dmitrii-q6p It's kind of impossible to steal that which is made freely available.
does pinokio work for arc a770?
The output seems worse than SDXL and takes forever on anything but a monster card
thanks for covering pinokio, Olivio! 🎉
Walter White --"Cowboy House?" 🤣🤣🤣
probably we need community performance hacks to run this as fast as it should.
I only have 12GB of VRAM. Will it work with it?
Yes.
Thanks for all your effort and nice videos! Just one question.. what is "koopi"? Do you mean to say "copy" when you say "koopi"?
it's called an accent lol, yes, copy
Tried installing Pinokio on my M1 MacBook... did not work... anybody else having these issues?
yup, it installs but it won't load the browser
@@TheChameleon2008 that's kind of annoying... all the big names here tell you Pinokio is soooo easy to install on "all" platforms... but it does not work.
Olivio, why do you recommend something that doesn't work at all?😉
How do I install it in a Colab notebook?
Sadly, you need a powerful GPU with a bunch of VRAM (at least 5 min for one picture with 8 GB of VRAM).
I am using a RTX4070 and a Forge extension. My 4070 creates a 2048x1024 image in 12 seconds.