Colab x Diffusers Tutorial: LoRAs, Image to Image, Sampler, etc - Stable Diffusion in Colab
- Published 25 Aug 2024
- We cover how to use LoRAs, output multiple images, change the sampler, and do image-to-image with HF Diffusers in Google Colab. Use this method to avoid the disconnect screen.
Thanks to uPix for sponsoring this video: Generate AI selfies in just 1 click. Turn yourself into a superhero, anime character, and more!
upix.app/
Watch this video first if you haven't already:
How to Run Stable Diffusion in Google Colab (Free) WITHOUT DISCONNECT
• How to Run Stable Diff...
Txt2img notebook:
colab.research...
Img2img notebook:
colab.research...
Discover thousands of AI Tools. Also available in 中文, español, 日本語:
ai-search.io/
Here's our equipment, in case you're wondering:
GPU: RTX 4080 amzn.to/3OCOJ8e
Secondary GPU: GTX 1080 (too old, would not recommend)
Mic: Shure SM7B amzn.to/3DErjt1
Secondary mic: Maono PD400x amzn.to/3Klhwvu
Audio interface: Scarlett Solo amzn.to/3qELMeu
CPU: i9 11900K amzn.to/3KmYs0b
Mouse: Logi G502 amzn.to/44e7KCF
If you found this helpful, consider supporting me here. Hopefully I can turn this from a side-hustle into a full-time thing!
ko-fi.com/aise...
Oh my god!! This solves all the questions in my mind from your last video on how to use Colab. Thank you so much, I learned a lot ❤🥰
You’re welcome 😊
is not working help me please
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from
@@CreeperCommander it is working fine. You need to watch the first video of this tutorial so you know how to run it.
can't wait for your episode on using ControlNets and AnimateDiff workflows with diffusers inside of Colab 🙂
Seconding this!
Ah these are the first videos that allowed me to do anything with stable diffusion (I own an intel mac so as far as I can tell Colab is my only free option). Echoing what other people are saying that I'd really love a video on using Animatediff with this system. Specifically, saw a video generated using 2 images (start and end frame) - would LOVE to know how to do that!
Can you please make a follow up tutorial on creating our own loras and using them? Thank you!
For those wondering how to add multiple LoRAs: I did some tests, and one solution that has worked for me is to fuse the LoRA models in the code prior to generating an image (it doesn't work well if you add it to the image-generation code box).
Repeat the code for each LoRA:
pipe.load_lora_weights("[your_lora_path]")
pipe.fuse_lora(lora_scale=0.7)
nice, thanks for sharing!
did you find anything for clip skip ?
@@mastermaxin1548 haven't looked
@@Puttis so should we move the lora_path to the above code box as well? And I would imagine no changes to the "prompt" command? Please advise
@@Puttis i tried your multiple loras code but it showed "The current API is supported for operating with a single LoRA file. You are trying to load and fuse more than one LoRA which is not well-supported."
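For anyone hitting the "single LoRA" limitation above: recent diffusers releases also expose an adapter API (load_lora_weights with adapter_name, then set_adapters) that keeps several LoRAs active at once without fusing. A rough sketch, not from the video — the repo paths and adapter names are placeholders, and it assumes a fairly new diffusers version:

```python
def pair_adapters(names, weights):
    # Pure helper: each adapter needs exactly one weight.
    if len(names) != len(weights):
        raise ValueError("one weight per adapter")
    return list(zip(names, weights))

def generate_with_two_loras(prompt):
    # Heavy imports kept inside the function so this sketch can be
    # read/imported without torch/diffusers; run it in a GPU runtime.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "emilianJR/epiCRealism", torch_dtype=torch.float16
    ).to("cuda")

    # Load each LoRA under its own adapter name (paths are placeholders).
    pipe.load_lora_weights("your-username/style-lora", adapter_name="style")
    pipe.load_lora_weights("your-username/character-lora", adapter_name="character")

    # Activate both at once, each with an independent strength.
    pairs = pair_adapters(["style", "character"], [0.7, 1.0])
    pipe.set_adapters([n for n, _ in pairs], adapter_weights=[w for _, w in pairs])
    return pipe(prompt).images[0]
```

With this approach each LoRA keeps its own strength, and you can call set_adapters again to change the mix without reloading anything.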
Most awaited video ever. Thank you so much ❤❤
My pleasure
yes def one of the best series out there!
Thanks
Hey! Great video. One question. I get an error on line:
pipe.load_lora_weights(lora_path)
the error says "PEFT backend is required for this method."
I already tried installing peft and importing it but still the same. Can't find anything about it.
same problem here
same problem, did anyone find a solution?@@RichardDXTeamFlemisL4D2
u tried !pip install -U peft transformers ?
@@angelolveraramirez Worked for me, thanks :)
OMG!!! this is gold !!! this video explains everything!!!! thank you sooo much
Thank you so much for making this video! I was able to solve my problem I'd been struggling with.
You're welcome!
Excellent tutorial!! THANK YOU VERY MUCH!!!!
I am having trouble with the LoRA part, the path. It's having trouble finding the LoRA weights although I had given a valid path to them
ValueError: PEFT backend is required for this method.
same, did you find solution?
same here
Thank you very much, it's very helpful
I love you. You are a life saver. Thank you so much for all of this.
You are welcome!
How do I get it to show which seed it used and to save that in a filename?
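One way to do this (not covered in the video, so treat it as a sketch): pick the seed yourself, feed it to the pipeline through a torch.Generator, and bake it into the filename. pipe and prompt are assumed to be the pipeline and prompt from the notebook:

```python
def seed_filename(seed, prefix="output"):
    # Pure helper: bake the seed into the saved filename.
    return f"{prefix}_seed_{seed}.png"

def generate_with_known_seed(pipe, prompt, seed=None):
    # Imports kept inside the function; run in a GPU Colab runtime.
    import random
    import torch

    # Pick a fresh seed unless the caller wants to reuse one.
    if seed is None:
        seed = random.randint(0, 2**32 - 1)

    # A seeded generator makes the run reproducible.
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]

    # Saving under e.g. output_seed_12345.png records which seed was used.
    image.save(seed_filename(seed))
    return seed
```

Re-running with the same seed value (and the same settings) should reproduce the image.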
hey bro, I am getting "PEFT backend required". What can I do? I installed and imported it, but it still shows. Any solutions?
So many thanks for your kind sharing. This is very helpful 🥰
You are welcome!
BIG trouble: I cannot use the LoRA. Why? I have followed all your steps, but it throws a traceback.
its not working anymore😢
Having trouble solving pipe.load_lora_weights(lora_path)
raise ValueError("PEFT backend is required for this method.")
I’m getting the same error, did you figure out how to solve it?
same error with LoRAs (using SDXL), installed "peft" too and still doesn't work.@@BV-mg1ek
try adding !pip install -U peft transformers (in the first bracket)
@@ciaone03 Bro you are perfect
@@SeriusTr nah bro, u are
Best AI channel
Thanks!
@@theAIsearch what happens when you want bigger prompts in colab diffusion? Currently the limit is like 80 characters or something
Is there a way, how can i use a Civitai model, that no one added at Hugging face? Download or something else
What about adding it yourself?
huggingface.co/docs/hub/en/models-uploading
@@yujirokitami-uq8zr Your instructions tell you how to simply upload files to Hugging Face. I'm interested in all the folders and files that are usually included in graphic models. Only a single safetensors file is downloaded from Civitai; this is not enough
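For single-file Civitai checkpoints, diffusers has a from_single_file loader that skips the Hugging Face folder layout entirely; I haven't tested it against every Civitai model, so treat this as a sketch:

```python
def looks_like_checkpoint(filename):
    # Pure helper: from_single_file expects one checkpoint file.
    return filename.endswith((".safetensors", ".ckpt"))

def load_single_file_model(path_or_url):
    # Imports kept inside the function; run in a GPU Colab runtime.
    import torch
    from diffusers import StableDiffusionPipeline

    if not looks_like_checkpoint(path_or_url):
        raise ValueError("expected a .safetensors or .ckpt file")

    # from_single_file takes a local path or a direct download URL,
    # so the usual multi-folder repo layout is not required.
    pipe = StableDiffusionPipeline.from_single_file(
        path_or_url, torch_dtype=torch.float16
    )
    return pipe.to("cuda")
```

For example, upload the Civitai file into the Colab session and call load_single_file_model("/content/model.safetensors") (filename is a placeholder).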
Best UA-camr ever
Thanks!
I need help. In my case, I need to make clip_skip=2. I found how to do it on hugging face, but I have a problem. The output is white noise instead of a picture (and I know it shouldn't be like that, plus this problem happens with different models without LoRas). It doesn't give any error, just the output is white.
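For reference, recent diffusers versions accept clip_skip directly as a call argument, so no manual text-encoder surgery is needed. A minimal sketch (pipe assumed to be the notebook's pipeline); if the output is still blank, that may be a separate fp16/precision issue rather than clip_skip itself:

```python
def clamp_clip_skip(n):
    # Pure helper: clip_skip below 1 makes no sense for CLIP layers.
    return max(1, int(n))

def generate_with_clip_skip(pipe, prompt, clip_skip=2):
    # Recent diffusers releases accept clip_skip per call; on older
    # versions this argument does not exist, so upgrade diffusers first.
    return pipe(prompt, clip_skip=clamp_clip_skip(clip_skip)).images[0]
```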
Thank you! This was very helpful.
You're welcome!
Thank you, it's very helpful. Can you share the inpainting and depth2img tutorial too?
Plz do sdxl
I have problems lora not working
You are excellent trainer.
The other question: if the LoRA was trained by myself but has no trigger word, how do I make this LoRA work?
Bro, the lora don't work, can you help me?
How can we use ready seeds? I looked through the documentation but no matter what I tried I couldn't succeed.
How to remove the pic from reference
I encountered a new problem; it says cross attention kwargs is deprecated
were you able to resolve?
@@Nonzerotonin nope, but it generates images. If you can't see the images, add a line of code that saves the image
Thank bro I hope you get a hundred thousand subscribers
thank you!
You're such a teacher bro lol. thanks
Thanks!
How can I load LoRAs from local disk instead of the internet?
can't upload the LoRA to Hugging Face, it says limited
Great video! Is there a way to combine two images into one with the img2img diffusion?
LoRAs aren't working, please help. It shows a PEFT error, i tried doing '!pip install peft' but that didn't fix it.
how to fix this "Token indices sequence length is longer than the specified maximum sequence length for this model (83 > 77)"?
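One common workaround for the 77-token CLIP cap is the third-party compel package (!pip install compel), which chunks and blends longer prompts into a single embedding tensor. Untested against this exact notebook, so a sketch:

```python
def generate_long_prompt(pipe, prompt):
    # compel is a third-party package that encodes prompts past CLIP's
    # 77-token window; install it first with: !pip install compel
    from compel import Compel

    compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
    prompt_embeds = compel(prompt)

    # Passing embeddings instead of the raw string bypasses the cap.
    return pipe(prompt_embeds=prompt_embeds).images[0]
```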
Hey, I get the out-of-memory error, plsss tell me how to fix this
Could you make a tutorial on how to do it but with the XL model? Your tutorial has helped me a lot, but there is a Checkpoint that I would like to use, but it doesn't work because it is an XL, so I would need some help with that
After following each steps in the previous video, the Colab window now displays a full disk space error, hindering the execution of new commands. What can I do to continue using it freely?
I got a question about inpainting. While it does follow the mask just fine, it does seem to create its own interpretation of the prompt without following the original image. I've tried changing the 'guidance' value but it still does its own stuff.
amazing as usual thank you.
ok i played with this for a bit now and holy shit it is beautiful, and that separation in the code so i don't have to load it all is so FUCKING BEAUTIFUL
even though this will most likely be patched very soon, thank you so much for at least giving some of us the chance to use it
glad it worked for you!
u helped me a lot bro
my pleasure
hi, maybe you can add options like VAE, and Seed like on a1111 , thankssssssss
excellent video! thank you! I have another question, where can I find the necessary information to install extensions like Adetailer, control net, Faceswap and many more that are easily installed in Stable Diffusion Local? It would be great if you created a discord group where we can collaborate among all interested parties to improve this notebook and share information on this topic. Thanks again for your contribution to the community! 😊
you can try the add details lora: civitai.com/models/82098/add-more-details-detail-enhancer-tweaker-lora
or try upscaling: github.com/huggingface/diffusers/issues/3429
controlnet is here:huggingface.co/docs/diffusers/using-diffusers/controlnet
I couldn't find any solution for faceswap w diffusers
I am having trouble with the LoRA part. This is what the error is saying: ModuleNotFoundError Traceback (most recent call last)
in ()
1 import torch
----> 2 from diffusers import StableDiffusionPipeline
3
4 pipe = StableDiffusionPipeline.from_pretrained("emilianJR/epiCRealism", torch_dtype=torch.float16)
5 pipe = pipe.to("cuda")
ModuleNotFoundError: No module named 'diffusers'
How to increase the token limit? I managed to do it on stable diffusion 1.5, but the XL doesn't work.
What about upscalers
Thank you for the wonderful video. Can you tell how we can activate the end key api in this build?
Is it possible to add multiple loras?
is there a way so instead of uploading to hugging face you upload the lora into google drive and the code read the lora from Google drive or nah?
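Mounting Google Drive in Colab and pointing load_lora_weights at the mounted path should work; a sketch assuming the LoRA file sits in MyDrive (the filenames here are placeholders):

```python
def drive_path(relpath):
    # Pure helper: files live under MyDrive once the mount succeeds.
    return "/content/drive/MyDrive/" + relpath

def load_lora_from_drive(pipe, relpath):
    # google.colab is only importable inside a Colab runtime; the mount
    # prompts for authorization the first time it runs.
    from google.colab import drive
    drive.mount("/content/drive")

    # e.g. relpath = "loras/my_lora.safetensors" (placeholder name).
    pipe.load_lora_weights(drive_path(relpath))
```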
is not working help me please
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from
at the top, click runtime > change runtime type > t4 gpu
@@theAIsearch i clicked t4 gpu, but it says GPU usage limit reached...what should i do?
Could you add an option to add checkpoint models?
Loved the video. Is there a way to use seed. If yes than is there a way to reuse the image and seed to make changes to the generated images.
Hello i get error GPU Limit in Collab. Do you know how to solve it?
Hello, great work. Can you link the docs for batch conversion? I can't seem to find it
How do I use a ControlNet model (other than the inpainting one) when inpainting??
I can't get the LoRA to work, even without changing the code and using your lora_path with the_rock LoRA.
Hi, I'm trying to use your Colab with a LoRA, but I get this error
ValueError Traceback (most recent call last)
in ()
1 lora_path = "ai-tools-search/the-rock"
----> 2 pipe.load_lora_weights(lora_path)
3
4
5 prompt = "th3r0ck, the rock, 8K, masterpiece, ultradetailed"
What are the GPU minimum specs for Stable Diffusion? (so I run it locally)
Edit: It's 10 GB VRAM (with NVIDIA GPU), but there are forks where it can run with 6
Anyway, thanks for the video, the whole workflow looks great!
glad you found the specs!
the new results are blurry, or come with errors; don't know if Google did something.
I tried a lot of LoRAs and none of them work, can you help me?
I changed it a little and it gives me this error:
ValueError Traceback (most recent call last)
in ()
1 lora_path = "ai-tools-search/the-rock"
----> 2 pipe.load_lora_weights(lora_path)
3
4
5 prompt = "photo of th3r0ck , he is wearing a military clothes, military cap, in battleground destroyed city. dramatic lighting"
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora.py in load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)
105 """
106 if not USE_PEFT_BACKEND:
--> 107 raise ValueError("PEFT backend is required for this method.")
108
109 # First, ensure that the checkpoint is a compatible one and can be successfully loaded.
The thing is that it happends me with every checkpoint, and the error value says: ValueError: PEFT backend is required for this method.
Does anyone reading comments know how to resolve "cross_attention_kwargs ['scale'] are not expected by AttnProcessor2_0 and will be ignored"?
Can something be done to directly run models from civitai?
Is there any way to apply another shaper and rescaling?
Love your videos. Can you show how to upscale your images on the next one?
I haven't tried it but see this huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/upscale
Thank you so much bro! Is it possible to install extensions like ControlNet with its models for this?
yes see this huggingface.co/docs/diffusers/using-diffusers/controlnet
@@theAIsearch That was so helpful man, thanks!
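Building on the linked docs, a ControlNet pipeline can be assembled roughly like this; the canny checkpoint id is the standard one from the docs, while the base model matches the video's example (swap in your own):

```python
def build_canny_controlnet_pipe():
    # Imports kept inside the function; run in a GPU Colab runtime.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Canny-edge ControlNet; swap the repo id for depth, pose, etc.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )

    # Any SD 1.5-family checkpoint should work as the base model.
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "emilianJR/epiCRealism", controlnet=controlnet, torch_dtype=torch.float16
    )
    return pipe.to("cuda")
```

At generation time the control image (e.g. a canny edge map) is passed as image= alongside the prompt.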
thank you sir
Most welcome
Great tutorial.. Love your growth mindset! Can you do a demo where you replicate a image 100% from CivitAI using all the components of the image plus the seed. This is something I still haven't been able to do
note that civitai images can be misleading, because some of them use img2img. even if you copy the same settings, if you don't have the base image, you won't get the same result.
It's better if you tell me what exactly you're trying to achieve. If you want to get a similar image, I would do img2img from the civitai image
Thank you amazing video, I wonder if I can use stable video diffusion in this way
yes! huggingface.co/docs/diffusers/using-diffusers/svd
@@theAIsearchThank you, I'm trying it right away 😅
Thank you very much!! I wonder whether I can add textual inversion to this model, or if there's another way to achieve this goal.
I haven't tried it but see this: huggingface.co/docs/diffusers/en/training/text_inversion
for nsfw images, the images are being blurred or deformed; this didn't happen before. Maybe google put a lock or something? face and n1ppl3s are covered or deformed.
it seems that for the final face-swapping touch you didn't provide the exact final Colab code in the description. Please can you help me get the final things?
this requires roop or reactor which isn't available for diffusers afaik
why does the output show a destroyed picture
can we generate web design with it for ideas?
Is there any way to use after detailer plugin
Doesn't work! Says CUDA ran out of memory even after running this:
import gc
torch.cuda.empty_cache()
gc.collect()
😢 what should I do? Plz help
try to disconnect/delete runtime and start again
Still didn't work 😢
@@theAIsearch
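A few standard memory-savers may help before resorting to restarting the runtime; this is a generic sketch, not specific to this notebook:

```python
def clamp_dim(d):
    # Pure helper: SD dimensions must be multiples of 8, and smaller
    # outputs need far less VRAM.
    return max(64, (int(d) // 8) * 8)

def apply_memory_savers(pipe):
    # Trade speed for VRAM: compute attention in slices...
    pipe.enable_attention_slicing()
    # ...and keep idle submodules on the CPU (requires accelerate).
    pipe.enable_model_cpu_offload()
    return pipe
```

Generating one image at a time (num_images_per_prompt=1) and lowering height/width also cuts memory use sharply.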
For those getting the PEFT error while adding LoRAs: add "!pip install peft" to the top code block
how do you import it
@@jenniferohunyon9871 the lora? He explained in the video...
@@jenniferohunyon9871 for loras the guy in the video explained pretty well just add !pip install peft
With the pip install torch[diffusers] and stuff
bro, I did the pip install and import but it doesn't solve my error, can u help me?
bro please tell me how to add a VAE
try:
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda")
pipe.vae = vae
assuming you named your pipeline pipe
@@theAIsearch it's not working, it says "AutoencoderKL" is not defined
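The "AutoencoderKL is not defined" error above is just a missing import; a fuller sketch of the same fix, with the dtype matched to an fp16 pipeline:

```python
def attach_vae(pipe, vae_repo="stabilityai/sd-vae-ft-mse"):
    # Imports kept inside the function; run in a GPU Colab runtime.
    import torch
    from diffusers import AutoencoderKL  # the import missing above

    # dtype should match the pipeline (float16 here) or decoding may error.
    vae = AutoencoderKL.from_pretrained(vae_repo, torch_dtype=torch.float16)
    pipe.vae = vae.to("cuda")
    return pipe
```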
hi! thanks for the video. Can I ask why I receive a "TypeError: argument of type 'NoneType' is not iterable" error when I try to change the model checkpoint?
please paste more of the error. it should say which line broke
What about using colab as an API for making requests?
hey man really love watching your videos , they are so informative and so much to learn from , also just wanted to know if we could also use SDXL inpainting in google colab? and if we could i would really appreciate if you made a video about it
yes huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1
@@theAIsearch could you make a video bout it showing us how to implement it because I've tried doing it and it just doesn't work
how can i use a seed on this mode?
Hi, after some reading of the documentation it's very easy; just add this to define the seed:
generator = torch.Generator("cuda").manual_seed(10)
And then add the parameter in the pipe, like this:
images= pipe(prompt, generator=generator ,height=h, width=w, num_images_per_prompt=num_images, num_inference_steps=steps, guidance_scale=guidance, negative_prompt=neg).images
Hope it works.
do you have a workflow in ComfyUI? do you have a colab with different kinds of models to improve performance, particularly for video?
Unfortunately comfyui is a gui so it wouldn't work in the colab free plan
is it possible to use multiple loras? how to do it?
can you tell me how to do inpainting??
Hey, love your video. I have a question: can you use inpainting with this? Or maybe can you create a tutorial? I read the documentation; it said I need 2 images, the original image and a mask image. How do I get the mask image?
see this: huggingface.co/docs/diffusers/using-diffusers/inpaint#create-a-mask-image
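Following the linked docs, inpainting looks roughly like this; the mask is a same-sized image where white marks the region to repaint (the checkpoint id is taken from the diffusers docs, untested here):

```python
def inpaint(prompt, init_image, mask_image):
    # Imports kept inside the function; run in a GPU Colab runtime.
    import torch
    from diffusers import StableDiffusionInpaintPipeline

    # mask_image: same size as init_image, white = repaint, black = keep.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

You can make the mask in any image editor: paint the area to replace white on a black canvas the same size as the source image.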
A short question: when you add a LoRA and generate, does the LoRA work? I tried some and didn't see any effect
yes. did you put any keyword to trigger the lora in the prompt?
@@theAIsearch yes, I tried all types of keywords, but I got nothing. Also, I tried using other people's LoRAs but it's still the same
How to improve image quality and such
Thank you for the video and resources! - What about multiple LoRAs? Can we load more than one at a time and tweak their weight independently?
i haven't tried it, but see this medium.com/@natsunoyuki/using-civitai-loras-with-diffusers-e3ef3e47c413
Thank you! You're amazing! @@theAIsearch
Could you apply multiple Loras?
God bless you
does diffusers generate a similar picture to stable diffusion with the same prompt?
yes, If you use the same settings and seed, it should be a similar image
Great tutorial! Can we upload our own models to Colab folder instead of hugging face?
i think so, but I haven't tried it
bro, is there a way to make an AI influencer with these tools? I've been trying these days, but I'm trying to do it with Colab because I need to help other friends do the same and have a good income. Other realistic checkpoints don't work or give an error. Nobody on YouTube teaches how to make these influencers with Colab, and the Colabs that were working are banned by Google because of the high requirements. I am just a seller on the internet but I like to learn new things. Greetings from Veracruz, Mexico
can you add control net
Please make a full video on RVC V2 without using external gpu
I'll need to see if that's possible. There's no clear documentation like diffusers
make a tutorial on how to add control net if possible thank you🤩🤩
I did not understand what code you generated in ChatGPT, and what the code is. 11:14 Please explain in a reply or give a link
Txt2img notebook:
colab.research.google.com/drive/1sv9Nlleu09I5PB9CpB1kMq_nNFihYMJN?usp=sharing
Img2img notebook:
colab.research.google.com/drive/1DS2PL-JTIh6PGo_-9xQuXu8vbJGsBN_c?usp=sharing