Fantastic tutorial. Invoke is so incredibly powerful.
This is very well done. Good job my dude.
I followed it from start to finish and it was interesting and constructive, thanks for the work you do!
Can't wait to see the training process!!!
Great vid, lots of learning in that for me...thank you..
I'm currently in the process of building an inference UI very similar to InvokeAI... But the truth is that InvokeAI is so good that sometimes I wonder why I'm even building this... lol The only ways mine differs are that it automates a lot of these features and that it has Compose, Transform, and Enhance tabs. I also have my Models & LoRAs dropdown on the right panel above the Image Gallery, because I've found that the right panel tends to have very little content in it... Having the models on the right, with the images, feels intuitive because both are assets managed by the user.
Compose Workspace = txt2img and img2img
Transform Workspace = inpainting, outpainting, live drawing
Enhance Workspace = upscaling, plus exposure, contrast, and white balance correction.
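In config terms, the split boils down to roughly this (hypothetical names of my own, not InvokeAI's API, just to summarize the mapping):

```python
# Hypothetical workspace-to-feature mapping for the UI described above.
# None of these identifiers come from InvokeAI; they only illustrate the split.
WORKSPACES = {
    "Compose":   ["txt2img", "img2img"],
    "Transform": ["inpainting", "outpainting", "live_drawing"],
    "Enhance":   ["upscaling", "exposure", "contrast", "white_balance"],
}
```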
Love your videos, Liked and Subscribed. Please consider 4K for future videos; with all the small text in the interface, it would really help. Thanks!
Extremely interesting. It didn't work at first; I'm studying through translation, and this is the first lesson of yours where something has finally started to work. That's great, thank you. It's all fascinating, just a miracle. One more thing, maybe you can tell me: is it possible to install models manually, and into which folder? Everything here is different from Automatic1111, which I had already started to get used to.
Fantastic share - thank you :)
Where can I download the CustomIllusionXL model, please?
Where can I download the CustomIllusionXL model??
So much useful information, thank you! I have a question: I have a 3D render of an interior room. Can I somehow change the lighting to night/morning etc. without changing the geometry and textures of the furniture and other details?
Please fix the issue where, with each update, I have to download the same models every time.
Great tutorial, thanks! Love it. But I'm wondering where these models are from? CustomIllusion etc., did Invoke train these? Why not include them in the app's starter models?
What if we wanted, say, an exact house as guidance? Can we guide with an existing image? Also, for inpaint, wouldn't it be better to try the same seed first? Or does it really not matter?
In Inpaint I assume weight is the same as prompt priority or coherence.
This is the model used: civitai.com/models/719084/customxl
Do you have a list of model behaviors that we can reference? You change models often under 'Generation' with an obvious understanding of what they do, so it would be very helpful to have a list we can reference somewhere, to the effect of "Model X = good for doing this" or "Model Y = will do that". Thanks
Damn. I was looking for where the denoising strength is in the new version for the better part of an hour ;) Why it's there, no one knows.
I downloaded and installed the new free version from the website, but for some reason I don't have the Denoising Strength slider in the interface. Either it's not there because I'm doing something wrong, or because it's the free version.
Check the description of this video. The denoising slider has moved to the top of the Layers tab.
@@invokeai thank you very much
Hi, thank you for this very interesting tutorial. I'm new so I have a silly question, but is it possible to simply remove the background from an image and export the image as a PNG?
You can't export as a transparent PNG yet.
Can I ask if this is version dependent? The reason I'm asking is that I'm running 5.4.1rc1 and there is no denoising slider on the Generation tab.
Ignore that I found it :)
Thanks :)
What I don't understand... When you created the images for in-fill and outpaint right at the start - why wasn't the prompt of the night scene applied any more? The prompt was still there!
Similar to how the lower Denoising Strength run in the "Nighttime" example didn't change the image to a nighttime scene (that only happened at a strength of 1.0), infilling/outpainting uses the color from the existing image as its context.
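Loosely, the mechanism can be sketched like this (not Invoke's actual code, just the generic img2img idea): the source image is only partially re-noised before sampling, so at low strength the starting latent still carries the original colors and composition.

```python
# Illustrative sketch only: the real noise schedule is nonlinear and applied in
# latent space, but the intuition is a blend controlled by denoising strength.
import numpy as np

def noised_start_latent(source, strength, rng=np.random.default_rng(0)):
    """Blend the source latent with noise according to denoising strength.

    strength=1.0 -> essentially pure noise, so the prompt dominates.
    strength=0.3 -> mostly the original image; only details get re-imagined.
    """
    noise = rng.standard_normal(source.shape)
    return (1.0 - strength) * source + strength * noise

latent = np.ones((4, 64, 64))                   # stand-in for an encoded daytime image
print(noised_start_latent(latent, 0.3).std())   # small spread: the source mostly survives
print(noised_start_latent(latent, 1.0).std())   # ~1.0: essentially pure noise
```

That is presumably also why outpainted areas pick up the surrounding palette: the new region is seeded from the existing pixels rather than from pure noise.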
Wait, so 1.0 denoise still leaves some "meaning" on the canvas? The previous image is not completely gone?
Can you guys get Playground 2.5 to work in Invoke?
We likely won't be incorporating it, but contributors are welcome to add it.
@@invokeai Why would they add it if it doesn't work in Invoke? Which is too bad, because personally I think it's one of the best models out there.
I'm doing the same thing as you, but the result is completely different... The image ends up looking like a sand painting. However, I'm using different models because, honestly, what even is this CustomIllusionXL?
This is the model used: civitai.com/models/719084/customxl
@@invokeai Thank you for your response!
I think the sadness might have gone better if you had included the ears in the inpaint mask; that expression in animals is often portrayed with droopy ears.
Great point!
Wow, that's insane..
Me too... whoa.
Could you please use examples with human characters? You only show landscapes, and that's maybe not what people mostly generate. Or is Invoke only good at architecture and landscapes?
We typically ask our live audience for direction, and this is what they ask for most, so feel free to join a future live stream and make your request!
@@invokeai I don't know your audience, but I am quite sure you should include a humanoid example in each of your future videos. Or are there Invoke tutorial channels focusing on real-life generation scenarios?
I'm not sure I like the idea of moving the denoising slider to the layers area. It's still a parameter for the generation process. Perhaps it would be better to place this slider in one of the corners of the canvas box, and if you move the slider, it would be nice if the image inside the canvas box showed increased noise. Why? First, this would make it very easy to understand for a newbie who might not grasp what it does without a lengthy tutorial explanation. Secondly, it's an additional visual aid that lets the user see approximately how much information is being destroyed. We are visual creatures, not numerical ones, and it's not easy to imagine adding 11% noise to an image.
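Something like this could drive that preview. This is not an existing Invoke feature, just a hypothetical sketch of the idea: overlay noise on the canvas thumbnail in proportion to the slider value so the user sees roughly how much of the picture will be thrown away.

```python
# Hypothetical preview helper (not part of InvokeAI): blends the canvas image
# with uniform noise so strength=0.11 visually reads as "about 11% destroyed".
import numpy as np
from PIL import Image

def noise_preview(img: Image.Image, strength: float, seed: int = 0) -> Image.Image:
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    noise = np.random.default_rng(seed).uniform(0, 255, arr.shape)
    mixed = (1.0 - strength) * arr + strength * noise
    return Image.fromarray(mixed.astype(np.uint8))

# Example usage: noise_preview(Image.open("canvas.png"), 0.11).show()
```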