Make sure to check out Part 2, where we go over the basics of img2img and in/outpainting: ua-cam.com/video/cx7L-evqLPo/v-deo.html
Thanks. FYI, Invoke is also available in Pinokio, where it's basically a one-click install.
Thanks for this detailed overview. I learned a lot of new stuff. Please keep this series going!
You're welcome! More to come soon!
Your videos made me give Invoke another try and I'm glad I did. I even downloaded Stability Matrix, which is pretty nice; hope it adds more programs.
Hoooray! Thank you!
You're welcome, enjoy! I'm personally having a blast using it!
@@MonzonMedia I am struggling :) and still prefer ComfyUI for now, but it is a very unique and powerful platform. Thanks again!
😊 Like anything, it takes some getting used to, but once you get a handle on it, it's truly a joy to use. Hopefully they get better Flux support, but even for SDXL the canvas is very powerful! More vids to come so I can show you all.
@@MonzonMedia Thank you very much.
Great! Liked, Saved, and Subbed.
Thanks again for the support!
thanks! good job!
You're welcome! More to come on Invoke soon.
thank you i subscribed
Thanks for subbing! Appreciate the support. More to come on InvokeAI soon!
Great tutorial 🙂 Is Zavy Chroma among the models? It was my fav on Playground AI.
Yessss I saw Zavy flashing by there!
Definitely one of my favs 😊 Any of the custom SDXL models can be found on Civitai.com 👍
Nice guide. I'm wondering if Invoke AI also supports other tools like video generation and some of the fancy ones like FaceFusion, MimicMotion and the like. Right now I have standalone stuff and would like a UI with everything.
Thanks for checking it out. Invoke is strictly image generation. If you want an all-in-one solution, ComfyUI is the way to go, but it comes with a steep learning curve.
@@MonzonMedia Ok thanks, btw you are a natural teacher!
@CalsProductions I appreciate that! Means a lot. 🙏👍🏼
Anyone having trouble with Flux… you must change the default VRAM usage. Render times went from 5 mins to 30 secs on a 4090. It's a .yaml file and there's an example file included (rough sketch below).
Yeah, tried the yaml file to no avail. Might be because I'm using Stability Matrix as the launcher. Going to test a standalone version.
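For anyone who can't find that setting, here's a rough sketch of what the VRAM override in invokeai.yaml might look like. The key names and numbers below are assumptions, not something shown in the video, and they differ between Invoke versions, so compare them against the example file that ships next to the config and restart Invoke after editing.

```yaml
# invokeai.yaml - sketch only; key names are assumptions, verify against the bundled example file
schema_version: 4.0.2   # leave whatever value your install already has
ram: 12                 # assumed key: model cache kept in system RAM, in GB
vram: 16                # assumed key: model cache kept in VRAM, in GB - a 24GB card like a 4090 can go higher
```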
So I run InvokeAI in Docker, and when I paste a local directory for it to scan it just doesn't work. Any workaround for this issue?
Sorry I'm not familiar with Docker, I'd suggest asking in their Discord server for support.
Do you get the pro version for free if you run it locally?
Yes, the free community edition has everything as long as you can run it locally, although for certain models like Flux you will need a decent GPU with 12GB of VRAM or more. For SDXL, 12GB or less is fine.
Hello again. I LOVE InvokeAI, but I have two issues with it and wonder if you or your viewers have noticed the same. One issue is that, compared to WebUI Forge's and SwarmUI's output using the same model and settings, InvokeAI takes 50% longer and its images are mildly diffuse/fuzzy in comparison. I wonder if this is just the way it is or if it can be fixed. I sure don't want to stop using it.
Are you talking about generating with SDXL? If that's the case, it's pretty quick on my system and the output is just fine. What are your system specs? If you are using Flux, then yes, it's not as optimized at the moment.
@@MonzonMedia Sorry, I should have said which model. It was a Pony variant, GetPhat something or other. I didn't do timed tests with any others, assuming, out of laziness, that they would yield similar results.
System is a pretty zippy ASUS ROG Strix gaming laptop with an 8 GB 3070 Ti. I love it.
Anyway, I solved the diffuseness/fuzziness problem by sliding the noise slider to 1 (the default was 0.75). Now it's just slower.
Invoke AI has so many great features, though, that I can't abandon it. The Boards feature alone is one of them; being able to so easily organize outputs by model in its Gallery is... WONDERFUL. And the F key to put away the left and right panels for larger viewing, and being able to rename models so they're easier to find in an ever-growing list of them. You've used it though, so you know all that lol.
Actually, Stability Matrix's built-in generator seems to be pretty decent. I've tried all the packages that Stability Matrix currently supports, plus Easy Diffusion, after learning of it from you, and so far it looks like I'll use it and InvokeAI. Besides, InvokeAI puts out SD 1.5 fast, so it's really only for Pony and SDXL variants that I might use SM more. I'm hoping it'll help me run Flux, because my system groans to a halt loading it with InvokeAI.
The CLIP embed file supporting Flux, 'clip-vit-large-patch14', fails after downloading from the InvokeAI starter directory. Anyone have a fix for this?
Is it possible you already have it downloaded? If you go into the advanced settings when you select a Flux model, do you see it in the dropdown? The other thing you can try is downloading it manually and installing it using a local path.
It’s broken. You have to wait for a new release or run from source.
@@plainpixels Thanks. I'm using Pinokio and a lot of the apps are bombing out due to various updates as well.
Invoke 5 isn't usable with Flux for me... generating one picture with my RTX 4090 took about 5 minutes :/
Really? Are you using the quantized Flux Dev model? Also, in the advanced settings make sure you are using the quantized T5 encoder. I know people with a 3090/4090 that are running it well. My card is too slow and only has 8GB VRAM, so I need to wait for them to optimize it for low-VRAM use.
You totally lost me on Prompt Triggers. What's the point of having a prompt trigger if you still need to select the model?
You say this tute is for beginners. Seems like a person would have to be more than that to understand what you're getting at.
0:30 I do state who this video is for. As for the triggers, they are helpful for specific words that are needed for certain LoRAs, or for specific words you always use with a model. For example, if I'm using a LoRA in a particular style, let's say a photorealistic LoRA, the developer would indicate to use "realistic" to trigger the style, so I can set that up ahead of time. For models I use it the same way, like in the example where I'm always using "analog" or "polaroid" to get a specific look. I can definitely dive deeper into it, but for a beginner lesson I didn't want to go too in depth.
There is a possibility that Control Net for FLUX will appear on Forge tomorrow
Yes, I've been waiting for the official post on the GitHub page; the ETA was Oct 7 but perhaps they are running behind. I'll be sure to cover it once it's live! 👊