Thanks, man. You know, I can't always sit and learn from a YouTube video; I'm usually in the middle of something when I have technical questions, and this style of learning is easy to digest on the go. I don't even have to watch the video; the text-to-speech is clear enough. I appreciate you taking the time to walk us through it. Thanks again.
It's been a very long time since a workflow blew my mind! Thank you so much for posting!
nice to hear! I really like this one!
Outstanding work. Thank you so much again for this, by far the simplest explanation and workflow.
cheers !!!
Thank you so much for this amazing workflow 😊
thanks for commenting!
Great tutorial. I like the way you explain and demonstrate some of the key settings so we understand more, instead of just rushing through.
As always, a very good video: impeccable phrasing that is easy to understand even for a French person like me, top explanations, and incredible solutions that few would have thought of. Bravo, a concentrate of genius, and so generous in sharing!
WOW! Awesome video, as always! Thank you🔥
Oh wow, the new ComfyUI interface was disabled by default on my end; thank you for bringing that up!! Really loving the new design.
Thanks a lot, bro. Another useful tutorial.
Amazing, looking forward to testing it 🙌👍
waiting to hear what you think
Amazing, definitely better than Photoshop's Generative Fill. I will go with this from now on :)
For the first demo, what would you do differently if you wanted the same person to remain identifiable in the end result?
This is the new Photoshop 😎
ComfyUI is much more than that
@@PixelEasel I'd say it's something else, to be more precise. Photoshop can still do a lot of things for which ComfyUI isn't, and won't be, the best tool.
How can I install it on my device? Please make a video about this method.
That's cool, another way of playing with Comfy.
What do max_shift and base_shift do in the ModelSamplingFlux node?
Thanks as always 😊❤
more than welcome!
you are the best one 👍🏻
That's probably what I have been looking for ever since the start of Stable Diffusion.
nice to hear. thanks😊
@@PixelEasel One thing I don't really understand: how did you manage to send the Show Text node's string to the CLIP text encoder? The CLIP Text Encode node doesn't accept a string as input, and yet in the video I see two inputs on that node.
Very fascinating idea!!
thanks😊
wow amazing!
thanks
Thank you for sharing this! It would be awesome if we could tweak the Florence prompt beyond just replacing 'the image is'... how can we do that?
Great workflow!! Any way you can integrate a LoRA loader?
Is there a way to make the input image smaller? Thank you for sharing, amazing video!
Amazing video. So all these workflows would work with the dev model as well?
Thanks for sharing. May I ask if you were able to test this with the Flux GGUF variants? I'd like to know if it works with those models too, or if something is lost in quantization. 🙌
didn't check yet... if you try it, please share your thoughts!
@@PixelEasel It seems to work just fine; I only needed to replace your model loader with the GGUF loader. I tested with a Flux Schnell Q3 GGUF and the output looked fine. Downloading Q2 now just to see the difference. 🙏
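For anyone who wants to script the same swap the commenter describes, here is a minimal sketch that patches an API-format export of the workflow so the standard UNET loader node becomes the GGUF one. It assumes the workflow uses ComfyUI's "UNETLoader" node and that the ComfyUI-GGUF extension registers its loader as "UnetLoaderGGUF" with a "unet_name" input; those node names and the filenames below are assumptions, so verify them against your own install (swapping the node by hand in the UI works just as well).

```python
# Minimal sketch (not from the video): swap the standard UNET loader for a GGUF
# loader in an API-format ComfyUI workflow export, as the commenter describes.
# Assumed node names: "UNETLoader" (ComfyUI core) and "UnetLoaderGGUF"
# (ComfyUI-GGUF extension) -- verify both against your own install.
import json


def swap_to_gguf(workflow_path: str, gguf_filename: str, out_path: str) -> None:
    """Rewrite every UNETLoader node so it points at a GGUF checkpoint instead."""
    with open(workflow_path, "r", encoding="utf-8") as f:
        # API-format export: {node_id: {"class_type": ..., "inputs": {...}}}
        workflow = json.load(f)

    for node in workflow.values():
        if node.get("class_type") == "UNETLoader":
            node["class_type"] = "UnetLoaderGGUF"
            node["inputs"] = {"unet_name": gguf_filename}

    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(workflow, f, indent=2)


if __name__ == "__main__":
    # Placeholder filenames -- substitute your own export and quantized model.
    swap_to_gguf("flux_workflow_api.json", "flux1-schnell-Q3_K_S.gguf", "flux_workflow_gguf.json")
```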
An amazing tutorial. I wonder if you could help me with a minor tweak? How do I approach this task: I have two different illustrations, one with a stylized hand-drawn yellow arrow pointing straight UP and another with a simple U-turn arrow. I want to apply the style of the first one to the U-turn, so the hand-drawn style is transferred onto the simple U-turn arrow.
Thanks!
thx!
Would it be possible to use a blonde that I have already trained to create more realistic photos?
Thanks for sharing, this is pretty cool! I can't use a denoise below 0.8; anything less and the image turns to greyish noise.
Can you add a LoRA stack node to the workflow?
Have you managed to get Flux Schnell to do inpainting?
Can this be used with SDXL or SD 1.5? Flux doesn't quite like my 12GB 3060.
Thank you for the video. Since I am a complete beginner with this, could you please clarify what we need to do with the workflow? As far as I can see, it's a .json file. Thanks in advance.
just load it into ComfyUI (drag and drop the .json onto the canvas) and you can start working with it
@@PixelEasel thank you
How do you change the text prompt please? It looked ? I'm new to ComfyUI.
Great! Thank you. Can you please create a workflow in a future lesson to style an image? I have trained a LoRA on Replicate on Flux dev with a trigger word to create images in a specific style. Now I am looking for a way to apply this to img2img => the input image should be transferred into that style. How?
just uploaded!
If only because jumping between workflows and saving them is now easier and faster, the new UI is better.
Hello. I have everything updated, but the Manager button in the top-right corner is not there for me. What could be wrong? Thanks.
The Manager itself is installed, of course, and to use it I have to switch back to the old-style interface. :(
Update the manager
How do I install this tool on my device, please?
Can't FLUX produce all art styles? I even had a problem with some LoRAs... whatever I do, the output photo comes out realistic... is there a workaround, and can you make a tutorial video for it? Thanks.
use a LoRA trained for Flux
Can you tell me why?? Thanks
It doesn't work: the BRIAAI Matting node.
you just killed photoshop
lol 😆 not yet...
In the work I've done in compositing, the goal was always to keep the person the same. Your method wouldn't be useful professionally.
this workflow has another purpose
@@PixelEasel Which is? I guess I misunderstood.
They are literally trying to turn the image into another image that keeps some elements from the original and changes others, which is quite obvious.
And it's also the opposite of what you are describing, which is keeping a face consistent across different shots, as if it were the same character in a movie or short film.
@@lucascarracedo7421 Yeah, I get that. But when you're doing composites, especially professionally, you want to keep some elements the same as the original, people being the prime example of this.
THIS IS JUST ALIEN TECHNOLOGY, I'M IN SHOCK