Hmmm, maybe I didn't get it, but it seems like a very complicated way to get a tiny bit of control over colors and shapes.
you get a lot of creative outputs that the model on its own couldn't create. so there are endless ways of experimenting with this
This is more of an exploratory method than anything, which sometimes you want for inspiration.
I see, makes sense now, thanks.
You should try it. It’s pretty fun.
Blender 3D has nodes too, and it's absolutely stunning. You can use them for 3D elements, shading, compositing, and even build your very own modules, and it's all non-destructive.
As always, I love your walkthroughs; you don't miss a node and you explain the flow. It keeps things simple and on track. Hope you are having fun on your trip!
thank you very much. i forgot to include new shots from my bangkok stay this time
@@OlivioSarikas No worries. I was there last year. Beautiful country.
This reminds me - I've found that plain old image-to-image can be "teased" in a similar way, for really surprising/unusual results. The trick is to add "noise" to the input image in advance, using an image editor. And by "adding noise", I mean super-imposing/blending the source image (e.g. a face) with another image (e.g. a pattern - maybe a piece of fabric, some wallpaper, some text... something random). Using an interesting blend mode, so the resulting image looks quite psychedelic and messy, perhaps even a bit negative/colour-inverted looking. Then use that as the source image for image-to-image, with a prompt to help bring out the original face (or whatever it was). And the results can be pretty awesome.
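If you'd rather script that pre-blending step than do it by hand in an image editor, something like this rough Pillow sketch should work; the file names and the choice of blend mode are just placeholders, not anything from the video.

```python
# Rough sketch of the pre-blending step described above, using Pillow.
# File names and the blend mode are placeholders -- adjust to taste.
from PIL import Image, ImageChops, ImageOps

source = Image.open("face.jpg").convert("RGB")              # the subject you want to bring back out
pattern = Image.open("fabric_pattern.jpg").convert("RGB")   # any random texture, wallpaper, text...
pattern = pattern.resize(source.size)

# Pick a blend mode that gives a messy, psychedelic-looking result.
blended = ImageChops.difference(source, pattern)    # often looks colour-inverted
# blended = ImageChops.screen(source, pattern)      # lighter, washed-out alternative
# blended = Image.blend(source, pattern, 0.5)       # plain 50/50 mix

blended = ImageOps.autocontrast(blended)            # optional: push the contrast further

blended.save("img2img_input.png")  # use as the img2img source, prompting for the original subject
```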
amazing tip, thank you
Hey, we stumbled upon a similar technique. I've been using random photos I find on Flickr, making them noisy, then using them at around 0.85 denoise strength so they only "somewhat" influence the output. It's working well for portraits and stylized photos, or just for getting something way out there.
Couldn't you do this in Automatic1111 using the colour image as img2img input and the black & white image as controlnet depth?
just did it, looks awesome! I've actually replaced the first step of creating a white frame by using an inner glow layer style. I mean, we are already in Affinity, so why not just make the pictures in the right size and with the white border to begin with...
actually a good point, yes that should work. however you don't have the flexibility of manipulating the images inside the workflow the way comfyui gives you. I show a somewhat basic build here, but you can do a lot more: blending noise images together, changing their color and more, all with different nodes.
@@OlivioSarikas with the Photopea (web-based Photoshop clone) extension in Automatic1111 you can just paint any splotches or even silhouettes and then import them into img2img with a single button, export the result back into Photopea with another button, and iterate back and forth all you like. And things like blending images, changing colors, and much more are far easier to do in Photopea than in Comfy.
I have been using a similar self-made workflow for a while for text2image, but it requires no image inputs: it creates weird noise inputs and cycles them through various samplers to generate a range of different images from the same prompt. The idea was based on a workflow from someone else and iterated on. You can do it by creating noise outputs with the 'image to noise' node on a low-step sample, blending that with perlin or plasma noise, and then having the step count start at a number above 10.
that's awesome! akatsuzi also has different pattern and noise generator nodes. in this video i wanted to show that you can also create them yourself, and the effect the different shapes you paint into them have. you can see in the images that the circle or triangle and the colors have a strong impact on the resulting composition
The outputs really are artistic, can't wait to play around with this. Thanks for another great video on a really useful technique.
you are welcome. i love this creative approach and the results that akatsuzi came up with
For me it just seems like you could have used img2img with high denoising to get the same effect?
Didn't really get it either.
Sir, please make an Automatic1111 tutorial as well.
A1111 is dead, bro 😂
Can’t do it there.
@@CoreyJohnson193 I’m a little out of the loop. What’s the better alternative for A1111? Counting out Comfy UI.
@@cipher893 SwarmUI, Fooocus... Check them out. A1111 is "old hat" now. Swarm is Stability's own revamped UI, and I think those two are much better. I'd also look into Aegis workflows for ComfyUI that make it more professional to use.
@@cipher893 there isn't one, a1111 is the best for what it is. he was saying it is dead because comfy exists.. i disagree for some use cases
Interesting, but I don't get it. At 05:15, where did the blue go? Into the background? Or did the blue you are talking about turn into yellow?
Yes, i meant to say her outfit is yellow now
question: what's the biggest difference between this and image-to-image? easier to colour? asking because i feel you could get the same pose easily with image-to-image
When are you making some more A1111 tutorials? I really liked them!
I tried your workflow but just get a blank screen. I did "update for missing nodes", "update everything", and restarted. Akatsuzi's workflow does load for me, but I don't have a model for CR Upscale Image and I'm not sure where to get it. The GitHub repo for this node doesn't make it clear where to get them.
The v2 update fixed this issue. 🙏
This feature is actually built into Invoke AI. It's very easy to use as well, if you guys haven't played with it. It just works as a reference to be used as a texture.
Remember when AI gen was about writing a prompt?
A1111 reminds me every time I use it.
Much more interesting this way :) a depth map is worth 1000 words
it still is on Midjourney ;)
Fun stuff, Olivio. Thanks for the workflows. FYI, the workflows load way off from the default starting area, meaning newbs might think it didn't work. ♥
Thanks for going over how you make the inputs too. Makes me wanna train a lora for them.
Thanks, that got me haha
Scroll out ftw^^
thank you, i will look into that
can you do this in Automatic1111?
My Python crashed while running Stable Diffusion. What could be the issue?
I don't think you have to go this far to get this kind of effect. Just take those abstract images you generated and go i2i on them. It's an old technique, proposed like a year ago, and it gives very much the same creative and colorful results.
Yeah, the clickbait title made it seem like it's some new technique but it's just using img2img and control net to get interesting results.
That's exactly what he is doing: 75% denoise with initial image is just i2i
@@vintagegenious you can go 100% denoise and still get some benefit too.
@@vuongnh0607l I didn't know that. Isn't that just txt2img (if we ignore the ControlNet)?
I do agree, I don't see anything revolutionary here. I have been doing this since the beginning. :)
Also, feeding weird depth maps. I think he just discovered it i guess :)
This is so 80s... I liked it!
Hi, beautiful flow. I tried to run it on SDXL (with SDXL controlnet depth) but got weird results. Seems only 1.5 checkpoints work. Is it true?
Can A1111 use this?
@Olivio Sarikas what would be useful for an RX 6600 XT? An AMD GPU?
Awesome video
Looks like soon enough we're going to recreate the entire photoshop interface inside a comfyui workflow :))
Fr
pretty much, yes ;) endless posibilities
I probably don't understand. I have the impression we're replacing one noise with another noise whose effect we still don't control either.
I completely agree with that :) it's not giving "more control" but the opposite: less control, so that Stable Diffusion can digress from the most common poses and image compositions... which it has obviously been overtrained on. It's still something that can be achieved more simply via OpenPose (for more unusual poses) and img2img (if you need more colorful outputs). Much more satisfying when you need to use SD for work.
Still, fun experiments!
@@filigrif What he showed was essentially an img2img workflow (with a depth-map ControlNet) with some extra nodes to pre-condition the image, along with a very high denoise. So I'm not sure what you mean that he could have just used img2img. Also, this absolutely does provide an additional level of control compared to a completely empty latent noise.
Hey, I'm in Bangkok right now. I have a casual interest in AI, not as in-depth as yours, but we could grab a quick coffee.
What hardware do you use? What graphics card?
Wasn't segmentation from ControlNet doing the same thing for recoloring pictures using masks, except this time it's kind of all-in-one? I'd like a little explanation of that.
How is it AI if you have to do all the work? You may as well draw it at this point. Can you make AI any more complicated?
The result looks pretty random, but the artistic touch is wonderful.
might be fun to use this with SDXL Turbo and do live painting
The workflow doesn't load. It doesn't give any errors, just nothing happens in ComfyUI. Maybe you could share the image you produced, even the non-upscaled version?
Zoom out and pan down.
I would rather inject more noise (resolution) in order to get more complex scenes. Anyway, it's a nice workflow. Got to check out that FaceDetailer node next :)
you can actually blend this noise with a normal empty latent noise or any other noise you create to get both :) - also you can inject more noise on the second render step too ;)
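In image space, a rough sketch of that mix could look like the snippet below; the 60/40 ratio and the 768x768 size are arbitrary example values, not settings from the video, and inside ComfyUI itself you would use an image or latent blend node instead of a script.

```python
# Image-space sketch of mixing a painted noise image with plain random noise
# before it goes to the sampler. The 0.6/0.4 ratio and 768x768 size are arbitrary.
import numpy as np
from PIL import Image

painted = Image.open("painted_noise.png").convert("RGB").resize((768, 768))
painted = np.asarray(painted, dtype=np.float32)

random_noise = np.random.uniform(0, 255, painted.shape).astype(np.float32)

mixed = 0.6 * painted + 0.4 * random_noise            # keep most of the painted structure
mixed = np.clip(mixed, 0, 255).astype(np.uint8)

Image.fromarray(mixed).save("mixed_noise_input.png")  # load this as the input image instead
```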
Wouldn't an add-detail LoRA during the upscaling part of the workflow do the job too?
The point is, we need to make AI images way more controllable in an artistic way! Painting noise, strokes, lines, etc. for the base composition, then refining the detail in a second or third pass, and afterwards a color pass... All of that has to be in a simple interface like Photoshop. This will bring the artistic part back to AI imagery and take it to a completely different level.
you can also just use prompt travel to achieve the same result
Thank you for always being a great source of inspiration and admiration; I look forward to watching your videos. Also, thank you for not putting these workflows and tips behind a paid page. I understand why they do it; I'm so glad you're not one of them.
prooobably going to need to see this with a turbo or latent model for near-real-time wonderment. also.. any way to load a moving (or at least periodically changing/ auto queuing) set of images into the noise channel for some video-effect styling? thanks for the great video as always!
also... how about an actual oscilloscope to create the noise channel from actual NOISE? =)
I like it. It would be easier if there were a drawing node in ComfyUI, but it might not be as controllable as using a Photoshop-type application.
There's a Krita plugin that uses Comfy as its backend, but it's really finicky to use, it seems.
Try using the canvas node for live turbo gens, and connect to depth or any other controlnet. Experiment!
You can use Photoshop: when you save a file into your ComfyUI input folder and you are using Auto Queue mode, the input picture is reloaded by ComfyUI.
The only difference from an integrated canvas is that you have to save your changes manually, but it's way more flexible.
soon AI artists will actually have to draw their prompts
Already getting there. I started with AI prompting and slowly got better at digital drawing using img2img; I figured it made more sense, since visual control translates better to visual output. I wonder how strange my art style will be, being essentially AI-trained rather than classically trained.
Really cool
This is amazing, you're the boss Olivio!
He is basically ripping off other people's workflows and pasting them on his channel.
@@user-zi6rz4op5l Unless he charges for them or doesn't share such workflows, I don't see the issue.
Maybe he could at least say where he got them from.
I end up using third-party workflows as a base or to learn a process, then I make my own or customize them as needed.
The man, at his wits' end for content, invents img2img but calls it something different to make it seem like a novelty. Bravo.
Great Guide
I thought I was the only human being to have 10,000 tabs open at the same time! hahahaha
Thanks!
Thank you...
Going to need GPT to break this down 😂
So it's depth map + custom img2img with high denoise. ok
The ideas are interesting, but I'm lazy. Anyone have any ideas on how to make a lot of noise pictures without spending a lot of time on it?
ComfyRoll has a bunch of nodes for generating patterns like halftone, perlin noise, gradients etc. Blend a bunch of those together with an image blend node.
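If you'd rather batch-generate a pile of noise inputs outside ComfyUI entirely, a small script can churn them out; this is a hypothetical numpy/Pillow helper, not one of the ComfyRoll nodes, and the sizes, blob counts, and file names are just example values.

```python
# Hypothetical batch generator for colourful "noise" input images (numpy + Pillow),
# an alternative to painting them by hand or using pattern nodes.
import numpy as np
from PIL import Image, ImageFilter

def random_noise_image(size=(768, 768), blobs=6):
    w, h = size
    canvas = np.random.uniform(0, 64, (h, w, 3))           # dark random base
    for _ in range(blobs):
        cy, cx = np.random.randint(0, h), np.random.randint(0, w)
        radius = np.random.randint(min(h, w) // 8, min(h, w) // 3)
        colour = np.random.uniform(64, 255, 3)
        yy, xx = np.ogrid[:h, :w]
        canvas[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = colour  # solid colour blob
    img = Image.fromarray(np.clip(canvas, 0, 255).astype(np.uint8))
    return img.filter(ImageFilter.GaussianBlur(8))          # soften the edges a little

for i in range(20):
    random_noise_image().save(f"noise_input_{i:02d}.png")   # 20 ready-to-use inputs
```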
Have you given up doing tutorials for proper photography, or are you going down this AI route?
I think you're about 12 months late asking that question.
that looks nice but totally random to me.
are you a fan of aespa?
Interesting. Not really sure why the AI art world has so many anime girl artworks. Oh well.......
nothing changes in this world
I don't like this because there are way too many errors for someone who is just starting and who gets confused by all this stuff. Other workflows have no issues, though.
What I see are a lot of unbalanced compositions. The poor girl's center of mass is not above her feet, so she would fall to the floor.
👋
way too convoluted
It's cool, but not new...
I have used gradients generated in ComfyUI in the past, injecting them into a previous image, and I can change day to night and a few other things with it.
The process is almost identical.
I do like the addition of the depth map; I tend to use monster instead.
I get why Comfy was created: Gradio is trash and A1111 doesn't update as fast as it should for something at the cutting edge of AI. Still, I feel like it was really created because "real" artists kept complaining that AI artists just write some text and click generate, which requires no skill and is lazy. So, behold, ComfyUI, an interface that'll give you Blender flashbacks and overcomplicates the whole process of just generating a simple image.
Node systems have been gaining prevalence in all sorts of rendering areas, including shaders for games, 3D software, etc. The SD ecosystem just lends itself to it.
Also, check out Invoke for a more artist-focused UI.
I think you are missing the point of ComfyUI. It wasn't meant to compete with 1111. It was specifically designed to be a highly modular backend application. When you need to create something that you will call over and over again, it's fantastic, and you can make that workflow very complex. However, if you are experimenting or doing miscellaneous work, 1111 should be your go-to. Personally, I switch between the two depending on the type of work, but I like Comfy more because it gives me more control and reusability.
It is only as complex as you need it to be; it takes only a few nodes to generate. I don't know why people are taking such personal offense to a GUI that simply allows for essentially endless workflow customization. You're pointlessly hyperbolizing. A potato could learn to use ComfyUI.
Please don't leave A1111! Comfy is used by very few, A1111 is used by many.
Makes no practical sense... It's like spin the wheel - you never know what the outcome is going to be. At best, we look at the results for entertainment, and then exit the app and go do some real work.
Sadly, comfyui is so intimidating and so much like programming that it's terrifying. As a new/casual person, this is so very technical that I have given up all hope of using AI art. It's disheartening to see your videos of the last couple of months, knowing that I would take years to understand any of this, by which time the tech will have moved away from this so it will be of no value :-(
It took me less than a month to get comfortable with ComfyUI and I have zero programming experience, and really it takes only a few days to understand the node flow. It's not intimidating or difficult, you're just putting yourself down for no reason. You can generate images with less than five nodes, even less with efficiency nodes.
Come on. I don't love ComfyUI from the get-go either, but it is not that difficult. There are a ton of foolproof tutorials out there. Just do some experimentation, and in minutes you will get a grip on it. If you are that uncomfortable with learning difficult things, I don't even know how you got to SD instead of, for example, Midjourney.
@@rbscli Olivio recommended Fooocus and I have been using that.
@GoodEggGuy ComfyUI actually looks and works like a lot of modern artist tools and workflows that artists (not programmers) are already used to using. These types of tools exist to give programming-like control to non-programmers. Programmers could do this a lot more simply with code.
First
Using that unComfyUI again... I just don't like it... I'll wait for the Automatic1111 video.
Overblown, overreaction to a basic background texture achievable in any photo editor. 'Noise'? Really? The Emperor's New Clothes, anyone?
I think the point is to use specific noise patterns to guide your image as opposed to completely random noise with an empty latent. Just another way of experimenting.
Got excited but clicked off after seeing ComfyUI.
Missing all the fun stuff
Basically use noisy colorful images to do img2img