How do I add other models from Hugging Face?
Great tutorial for those of us who don't have an NVIDIA GPU or a high-end PC.
Can you only access it through the UI, not the API?
Thank you. Have you tried generating a video?
Can this do Loopback wave?
What is the Gradio WebUI? Do I need it to run SD?
Is this better than Vultr?
This is amazing. I’ll try this out.
If I set up a template with additional models for Automatic1111 is Runpod going to charge to store that data even when I am not renting a GPU?
Thank you very much, good man, great explanation!
How do I add models to RunPod? Also, when I run batches it loses all the images and freezes... :(
Hello, first off, thanks for the tutorial. My question is: how do I add a custom model? I don't want to work with 1.5 but with, for example, Redshift Diffusion. Thanks!
Whatever model you want, you can download it and add it to the Models folder. In general, look for any article or tutorial on adding a new model to Auto1111; it should guide you!
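To make that concrete, here is a rough sketch of doing it from a terminal inside the pod (e.g. via JupyterLab). The folder path follows Auto1111's usual layout, and the Redshift Diffusion URL and filename below are assumptions; swap in whichever checkpoint you actually want:

```shell
# Hypothetical example -- the model URL and filename are assumptions; replace
# them with the checkpoint you want (e.g. from its Hugging Face model page).
MODEL_DIR="stable-diffusion-webui/models/Stable-diffusion"
MODEL_URL="https://huggingface.co/nitrosocke/redshift-diffusion/resolve/main/redshift-diffusion-v1.ckpt"

# Make sure Auto1111's checkpoint folder exists, then download into it.
mkdir -p "$MODEL_DIR"
wget -nc -P "$MODEL_DIR" "$MODEL_URL" || echo "download failed -- check the URL"

# List what is there; the new model should show up in the UI's checkpoint
# dropdown after you hit the refresh button next to it.
ls "$MODEL_DIR"
```

Note that on a non-persistent pod this folder is wiped when the pod is terminated, so you would repeat this (or bake it into a template with a persistent volume) each time.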
@@1littlecoder Thank you!
@@1littlecoder Hi, so these models would not be persistent, right? I would need to download models and install extensions every time I want to use them?
Nice tutorial :) I do, however, miss that you didn't show how to download the image you created. How do I access outputs such as images, or the embedding file in case I want to train an embedding or change the model file?
Can I generate an image using an API from the deployed pod?
If you have deployed an API service you can; this tutorial covers a basic way to create an API - ua-cam.com/video/ReO6vqJxg-s/v-deo.html
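For reference, Auto1111 itself also exposes a REST API when the web UI is launched with the `--api` flag. A minimal sketch, assuming that flag is set; the pod URL is a placeholder for your own RunPod proxy address:

```python
# Minimal sketch of calling Auto1111's built-in txt2img API endpoint.
# POD_URL is a placeholder -- use your own pod's proxy URL.
import json
import urllib.request

POD_URL = "https://your-pod-id-3000.proxy.runpod.net"  # placeholder


def txt2img_payload(prompt, steps=20, width=512, height=512):
    """Build the JSON body for the /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}


def generate(prompt):
    """POST the payload; the response contains base64-encoded images."""
    body = json.dumps(txt2img_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{POD_URL}/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"]
```

You would then decode each base64 string in the returned list and save it as a PNG.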
This may be cost-efficient, but only when you get your result quickly. Tinkering for 10 hours is what it takes to reach the level of Midjourney, and RunPod is very expensive when you sum up your hours.
@1littlecoder If you don't mind, I want to correct some of your pronunciations:
1: "Tier" is pronounced "teer", like "beer" or "fear".
2: "Menu" is pronounced "men-you": "men" as in "there are men in the army" and "you" as in "how are you".
3: "Machine" is pronounced "ma-sheen", with an elongated "ee" sound.
Thanks, and great tutorial! Since these words are very common in your work and you will be using them a lot, I thought it was worth correcting.
Thanks a lot, man! I really appreciate it. I have received complaints about my English at times in the past, but none of them were this constructive, so I really appreciate it!
@@1littlecoder I have a question. With so much new AI stuff coming out, especially around LangChain, I am trying to figure out the best pipeline to take a large dataset, embed it semantically, and then use a SmartGPT-like thing to retrieve it, and so on. Is DistilBERT the absolute best, or are there better options?
Great video! But I get a "Bad Gateway, error code 502" when I try to connect over HTTP. What could be causing this?
I get the same. Did you find a solution?
@@zoomoot It occasionally appears, and occasionally works.
Pardon, but all I get is "Bad Gateway" when I try to connect.
Same here. Did you find a solution?
@@zoomoot Yes, actually. Try going into incognito mode or using Chromium. How long it takes to start will vary, but after 5 minutes it booted up properly when I was on school Wi-Fi. On home Wi-Fi, about 2 minutes.
That sounds very interesting. I just got myself a refurbished workstation and I'm running SD locally on a Xeon and an NVIDIA Quadro M4000 (8 GB VRAM, a 2015 model). Textual inversion with 11,000 steps just took me 13 hours, which I think is very slow. I might test RunPod under the same conditions to compare. It would be cool not to start over but to continue the training; since the training cycles always take the same time, it would still be comparable. Quite a challenge for my non-coder brain :)
I think at this point DreamBooth is better than textual inversion, if you want to go that route.
@@1littlecoder I started with TI just because it was already available in Automatic1111. The results have been disappointing to me, so I will try DreamBooth next.
Colab takes 40 minutes to train.
I always get a GPU on Colab; it never disappoints.
Lucky for you 🙂
Yes, but Colab has banned web UIs now...
Purchasing here is tough from India; it's credit card and crypto only, which aren't easily available for students.
What a boss!!
No plugins? That's a no-no.