Love this video because it demystifies the notion that one gets what one is looking for with just one clever prompt.
The problem is AI can produce a nice-looking image from any set of words. So people type in some daft prompt with "masterpiece" and "trending on Artstation" in it and a great image appears. They then think "I did that!" but they didn't; they only did the typing, which was mostly ignored.
There is a Photoshop-to-ComfyUI node... seems like it would be great for your use cases.
It lets you use Photoshop as a front end.
Yes, I've looked at it, but I don't have the hardware unfortunately. Its main use seems to be that Comfy iterates as you edit. @Dabble-m4q
Very interesting. Maybe I missed the info, but do you run SD on a different machine?
No, I quit out of Photoshop because it makes ComfyUI slow by taking a chunk of GPU bandwidth. There is a plugin that can automatically feed information to nodes from Photoshop, but it won't work on my setup at present.
Does SD run on macOS? As far as I knew it only runs efficiently on CUDA hardware. @robadams2451