Hi, I cannot find the banner text node - can you please assist? Thanks!
Hi there - the banner text node is just a simple string node, so you can replace that with any string node that passes on a string and you'll be all set. Good luck!
@GrocksterRox I did that but still get the missing node error.
Don't know what to do next. Anyway, I'll try something close to that, build from scratch and see how it goes
@ThePetroale ok good luck. This is an older flow, but feel free to jump on the discord (link in vid description) and we can chat through it if interested.
Thanks. Regarding the text banners: which image do you actually feed into the controlnets? This wasn't really clear to me today... Is it just the plain text on a black background?
Insightful question - the controlnets are applying their control based on the CR Simple Banner text (the word or phrase you have in the Banner Text node). Essentially you're pre-painting your canvas with just that word/phrase and then controlling the diffusion around it. I hope that makes sense!
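If it helps to picture it, here's a minimal sketch (assuming Pillow, and only approximating what the CR Simple Banner node hands off - the canvas size, font, and filename are placeholders) of that plain text-on-black image the controlnets condition on:

# Rough approximation of the banner image that feeds the controlnets (not the actual CR node).
from PIL import Image, ImageDraw, ImageFont

banner_text = "GROKSTER"                                   # whatever is in your Banner Text node
canvas = Image.new("RGB", (1024, 512), "black")            # plain black canvas (size is a placeholder)
draw = ImageDraw.Draw(canvas)
try:
    font = ImageFont.truetype("DejaVuSans-Bold.ttf", 200)  # assumed font; swap in any local .ttf
except OSError:
    font = ImageFont.load_default()
left, top, right, bottom = draw.textbbox((0, 0), banner_text, font=font)
draw.text(((1024 - (right - left)) / 2, (512 - (bottom - top)) / 2),
          banner_text, fill="white", font=font)
canvas.save("banner_control.png")                          # this is what the controlnets see

The point is just that the controlnet input is nothing fancy: the word on a blank canvas, and the diffusion gets steered around it.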
It's going to be off-topic, but the whole workflow looks neat. How did you make the reroute lines sharp/straight? Btw, it's an amazing tutorial!
I'm so glad you're enjoying them! In the top right corner of the Comfy manager there's a gear icon; about halfway down the options there's a line style setting, which you change from "spline" to "straight". Good luck!
@GrocksterRox That's golden info, thank you 🙏
This was good man
Thank you so much! Have fun with those text effects - you can really let your creativity loose and come up with some pretty wild images! :)
@GrocksterRox You're amazing. I'm a DJ/rave artist from the northwest going on tour - such an amazing service. I'll reach out about commercial stuff, because if I can do this live at festivals on my iPad and Starlink, I can work anywhere.
Failed to validate prompt for output 21:
* GetImageSize 26:
- Required input is missing: images
Output will be ignored
[rgthree] Using rgthree's optimized recursive execution.
Prompt executor has been patched by Job Iterator!
Prompt executed in 0.00 seconds
Spoke too soon - getting this error in the Get Image Size node, any idea @GrocksterRox?
So cool - it's really a fun industry to be in!
I'd have to see where the error occurs in the workflow, but my guess is that you need to mask out the text and right now it's not masking anything (so essentially you're trying to get the image size of a blank input). Happy to take a quick look if you jump on Discord.
I'd really like to understand more about how those start_at_step and stop_at_step controls work in the Advanced KSampler. What is happening during the first 8 steps? I don't understand how it knows to get to step 9 without the first 8 steps. And then, if you're telling it to stop at step 11, why tell it to do 15 steps? Help me Grokster-Wan Kenobi... you're my only hope.
Hi there. You're not alone - it is a bit confusing (it took some experimentation for me to figure it out), but essentially: normally when you use the standard 2nd-pass sampler (you're passing in an image that's already been rendered or loaded), you say how many total steps to render, and Comfy will run every single one of those steps against the image - e.g. if your denoise is high, the entire image changes substantially (and if the denoise is low, the entire image barely changes). Note that while rendering happens, the engine is figuring out which elements to put/layer in at different points in time, so it may start with your main subject and then eventually work its way to the background details.
Now, when you use the Advanced Sampler, you're still passing in that original image, but you're saying to the engine "Even though you would normally render across the entire set of steps involving all the layers of subject matter and background details, ONLY apply changes at the point when you're working on the background details and leave the initial subject matter alone." So in the case of 15 total steps, let's say the first 8 steps focus on subject-matter details and the remaining 7 steps on background details: if you start at step #9 and end at step #15, any changes will only happen to the background details.
So the question in your head is probably "How the heck do you know which steps relate to layer/subject/background???" and unfortunately the best answer I can give right now is experimentation. In the Lightning Rock Band picture, I started with a full render and the entire scene changed. I then tried (for 15 total steps) starting at 0 and ending at 5, then starting at 6 and ending at 12, etc., and after a few tries I got to a great result and stuck with it :)
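If a little sketch helps make the step window concrete, here's the idea in plain Python - purely illustrative, these names aren't the real ComfyUI internals, and denoise_step is a made-up stand-in for one real sampling step:

# Illustrative sketch of the start_at_step / end_at_step window (not actual ComfyUI code).
def denoise_step(latent, step):
    # stand-in for one real sampling step; here it just records which steps ran
    return latent + [step]

def advanced_sample(latent, total_steps=15, start_at_step=8, end_at_step=15):
    # the noise schedule is planned for all total_steps, but only the steps inside
    # the [start_at_step, end_at_step) window are actually applied to the image
    for step in range(total_steps):
        if start_at_step <= step < end_at_step:
            latent = denoise_step(latent, step)
    return latent

print(advanced_sample([]))  # -> [8, 9, 10, 11, 12, 13, 14]: the first 8 "subject" steps
                            #    are skipped, only the last 7 "background" steps touch the image

So the total step count still defines the overall schedule; start/stop just picks which slice of it actually gets applied.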
@GrocksterRox This is a really helpful answer. Thank you for explaining it to me.
Hi, great video!
I enjoyed your last tutorial on making $49.7K in under 30 minutes! :))
BUT!
I'm a bit disappointed because obviously this is clickbait (and I'm really OK with that, it was funny), but I would like to know more seriously how to start earning some money with generative AI images... Because it's really enriching to learn and practice, but I've spent a year filling my hard drive with weights and PNGs while eating pasta and white rice because of no money!
It would be cool to learn more about how to effectively earn a bit from this knowledge, something a bit more useful than "Here's advice to make money: sell your pics anywhere"... yeah, thanks Dr Obvious :/
Any real advice? Any links to begin with on how to sell stuff?
Hi there - sorry to hear that you think it's clickbait. In the video there are many different references on how you can make money using AI Imagery. But to your point, if you have a lot of imagery, a popular way that people are making money right now is to set up a print shop on Etsy. You show your imagery on t-shirts, prints, or frames, and have your store connected up to an online printer like Printify, so you can ultimately sell your works of art directly to the consumers. I hope that's helpful and good luck!
🤣🤣🤣
Glad you enjoyed :)
Howdy Grokster! Another great YT video. See ya soon on Discord! LOL on the $49.7K
I'm so glad you enjoyed it and the infused cheesy humor 😂