Real-Time Text to Image Generation With Stable Diffusion XL Turbo
- Published Aug 7, 2024
- Installing Comfy-UI and configuring it to use the SDXL Turbo model for real-time text-to-image generation.
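For reference, the manual install described in the video follows the steps in the ComfyUI GitHub README; a minimal sketch (the model filename is the one published on the stabilityai/sdxl-turbo Hugging Face page, but verify it yourself):

```shell
# Clone ComfyUI and install its Python dependencies
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt

# Place the SDXL Turbo checkpoint where ComfyUI looks for models
# (download sd_xl_turbo_1.0_fp16.safetensors into models/checkpoints/)

# Start the server; the UI is then available at http://127.0.0.1:8188
python main.py
```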
○○○ LINKS ○○○
stability.ai/news/stability-a...
github.com/comfyanonymous/Com...
○○○ SHOP ○○○
Novaspirit Shop ► teespring.com/stores/novaspir...
Amazon Store ► amzn.to/2AYs3dI
○○○ SUPPORT ○○○
💗 Patreon ► goo.gl/xpgbzB
○○○ SOCIAL ○○○
🎮 Twitch ► / novaspirit
🎮 Pandemic Playground ► / @pandemicplayground
▶️ novaspirit tv ► goo.gl/uokXYr
🎮 Novaspirit Gaming ► / @novaspiritgaming
🐤 Twitter ► / novaspirittech
👾 Discord chat ► / discord
FB Group Novaspirit ► / novasspirittech
0:00 intro
0:20 about this video
0:52 SDXL Turbo Model
1:12 Comfy-UI vs Automatic 1111
2:17 installing Comfy-UI
3:57 How To Use Comfy-UI
6:26 Configure Comfy-UI Dashboard for SDXL Turbo
9:00 Running Real-Time Text to Image Generation
12:08 Conclusion
○○○ Send Me Stuff ○○○
Don Hui
PO BOX 765
Farmingville, NY 11738
○○○ Music ○○○
From Epidemic Sounds
patreon @ / novaspirittech
Tweet me: @ / novaspirittech
facebook: @ / novaspirittech
Instagram @ / novaspirittech
DISCLAIMER: This video and description contains affiliate links, which means that if you click on one of the product links, I’ll receive a small commission.
#ai #stablediffusion #sdxl - Science & Technology
Dude, your videos have been awesome over the years. Glad you will put out content like this just because it's something you're passionate about. Keep doing what you're doing! So many cool projects I'd love to do if i had the time. I hope you can hit 500k Subscribers in 2024.
So cool!! I’d love to see more AI videos 🤓
Hi, just letting you know I really enjoy your videos, particularly since I'm working from an Ubuntu box. Your explanations are clear and concise, which is very helpful. Thanks for all you do.
Also, you can use the extra inpaint extension to edit (in real time). ;)
More videos please... I understand more quickly when you explain, considering English is not my first language. Thanks!
Now this was a fun thing to play along with after fighting with Nextcloud and LocalAI... Imagine having this kind of prompt within Nextcloud, where you can type up the image and watch the result morph nicely :D. I did have the same issues others described and you responded to: noise_seed set to 0, control_after_generate fixed, cfg to 1, and I had fun results with sampler_name lcm. It works nicely on a 3090 :D.
It's so frustrating that I downloaded the exact same models and applied the same settings and steps, but the actual generated image looks like a dump compared to your results.
Play with the cfg and change it to 1.0, and set control_after_generate to fixed.
Ok, never mind. I set noise_seed to 0, control_after_generate to fixed, and cfg to 1.0 in the SamplerCustom node. It now produces the same kind of result. These steps are not in the video.
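For anyone scripting ComfyUI through its API-format workflow JSON, the three settings above can be applied as a dict patch. A minimal sketch; the node id "13" and the exact field names are assumptions, so check them against your own exported workflow:

```python
def apply_turbo_settings(workflow: dict, sampler_node_id: str = "13") -> dict:
    """Patch a ComfyUI API-format workflow with the SDXL Turbo settings
    from the thread above. The node id is illustrative only."""
    inputs = workflow[sampler_node_id]["inputs"]
    inputs["noise_seed"] = 0                     # deterministic seed
    inputs["cfg"] = 1.0                          # Turbo models skip CFG
    inputs["control_after_generate"] = "fixed"   # don't re-randomize the seed
    return workflow

# Example with a minimal stand-in for an exported SamplerCustom node
wf = {
    "13": {
        "class_type": "SamplerCustom",
        "inputs": {"noise_seed": 42, "cfg": 8.0,
                   "control_after_generate": "randomize"},
    }
}
wf = apply_turbo_settings(wf)
```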
great video thx,
Would love to see more like this. I'm wondering, is it possible to do this on Google Colab too, for those of us who don't have a modern Nvidia GPU? I've been playing with InvokeAI and Fooocus but don't know if those can support real-time generation.
Thanks Don.
Great work! Yes, your AI videos are good because you know enough about the technology and enjoy using it.
Hopefully other people's clickbait AI content will lessen; then people can see your straightforward and detailed content.
I'm one of the determined who love your AI vids! How about a separate channel for AI topics, @novaspiritai 👍
Use the LCM-Turbo LoRA in combination with SDXL Turbo; it will make it 100x faster.
Pretty cool, just tested this; couldn't really tell a difference on the 3080, but it works great on the 1070.
Hi @novaspirit - could you do a video on the setup of Ollama/AutoGen Studio/liteLLM on ubuntu please? I am starting to get interested in agents, but running into issues setting this configuration up on my ubuntu box. Love your videos.
Hmm, is there a Docker container for this, for simple installation and a friendlier UI? What sort of hardware does it need?
Nice real
For those of us who have a weak PC: can we rent a server to run a powerful emulated machine? Or would the cost be prohibitive?
I tried it a while ago; my first mistake was using Windows!
Hey Don, what OS do you use in this build? Great video, more AI will be great to see.
ubuntu 22.04
I get the low-quality noisy generation despite using a 4070. Do you know what might be causing this? When I let it generate a blank image, I get the same one as you do.
Nice ! ... OK it's more Wow!
@Don - you had an older (professional) gpu that you bought (don't remember what type) in an older video about AI. How does that stack up against the 3080 for AI?
The M40 is much, much slower but has 24 GB of VRAM (more than double the 3080), so it serves other purposes.
@@NovaspiritTech I am going to try setting this up as a container in Proxmox, as it has the NVIDIA P4 card in it. Speed-wise it shouldn't be too bad.
For some reason I can't get it to work on my Debian 12 workstation with an AMD Radeon 6900 XT due to an issue with the PyTorch ROCm build: I get "No HIP GPUs are available." I am still researching this. I might have to install the AMD GPU proprietary drivers, which I've been avoiding due to the mess with kernel updates; automatic DKMS doesn't always compile against the kernel correctly.
Please do more AI videos.... PLEEEEEEASAE!
Is there a way of using this with DirectML? It would make this even more accessible!
More AI vids would be great. I use Fooocus; it's cool but slow on a 1080 Ti.
Wow, this is a great way to learn the concepts without having to know how to code. I can't wait to give it a try.
👍❤️
Every time I change the prompt it reloads the whole model; is there a fix for this?
Change control_after_generate to "fixed".
Does it work with a 1660 GPU?
What vram does your 1070 have and how much system ram do you need?
8 GB of VRAM, and it doesn't use much system RAM if you're using the GPU.
Is it possible to use an AMD card, and if so, how would I do that?
The instructions in the Git repo show how to install the ROCm 5.6 build of torch, so you will need to install the ROCm 5.6 drivers.
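For AMD users, the ComfyUI README points at the PyTorch ROCm wheel index; a sketch of the install command, assuming the ROCm 5.6 drivers are already set up (check the README for the currently recommended ROCm version):

```shell
# Install the ROCm build of PyTorch from the official wheel index
pip install torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/rocm5.6
```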
This would be awesome to make custom wallpapers 😅
yup
@@NovaspiritTech Do you know if this could be run on a raspberry pi?
My Python crashed while using Comfy-UI during prompting; what could be the cause?
Depending on the error, it could be not enough VRAM.
@NovaspiritTech yeah, I have a 1660 Ti; any way to make things work? I want to learn Comfy-UI.
Can you do this on Windows, or does it need to be Linux?
You can run ComfyUI in Windows.
You can try WSL; not sure about performance, but give it a try if you have time.
MORE AI PLEASE
Why does AI not 'view well' on your channel? Do you mean the views are lower? Is Google/YouTube downgrading content with or about AI?
I think the odd AI video once a month or so is a good option.
might take you up on that hahaha
What a complex mess
🤢 ew ai
ew Luddite 🤮
ew gay
@@nneeerrrd lmao