Noise Styling is the NEXT LEVEL of AI Image Generation

  • Published 23 Dec 2024

COMMENTS •

  • @uni0ue87 · 1 year ago · +47

    Hmm, maybe I didn't get it, but this seems like a very complicated way to get a tiny bit of control over colors and shapes.

    • @OlivioSarikas · 1 year ago · +5

      You get a lot of creative outputs that the model on its own couldn't create, so there are endless ways of experimenting with this.

    • @ImmacHn · 1 year ago · +4

      This is more of an exploratory method than anything, which is sometimes what you want for inspiration.

    • @uni0ue87 · 1 year ago · +1

      I see, makes sense now, thanks.

    • @alecubudulecu · 1 year ago · +1

      You should try it. It’s pretty fun.

    • @jeanrenaudviers · 11 months ago · +1

      Blender has nodes too, and it's absolutely stunning. You can use them for 3D elements, shading, and compositing; eventually you build your very own modules, and it's all non-destructive.

  • @Foolsjoker · 1 year ago · +6

    As always, I love your walkthroughs: you don't miss a node, and you explain the flow. It keeps things simple and on track. Hope you are having fun on your trip!

    • @OlivioSarikas · 1 year ago · +1

      Thank you very much. I forgot to include new shots from my Bangkok stay this time.

    • @Foolsjoker · 1 year ago

      @@OlivioSarikas No worries. I was there last year. Beautiful country.

  • @TimothyMusson · 1 year ago · +7

    This reminds me: I've found that plain old image-to-image can be "teased" in a similar way, for really surprising/unusual results. The trick is to add "noise" to the input image in advance, using an image editor. By "adding noise", I mean superimposing/blending the source image (e.g. a face) with another image (e.g. a pattern: maybe a piece of fabric, some wallpaper, some text... something random), using an interesting blend mode, so the resulting image looks quite psychedelic and messy, perhaps even a bit negative/colour-inverted. Then use that as the source image for image-to-image, with a prompt to help bring out the original face (or whatever it was). The results can be pretty awesome. (See the sketch at the end of this thread.)

    • @syndon7052 · 11 months ago · +1

      amazing tip, thank you

    • @ProzacgodAI · 9 months ago · +1

      Hey, we stumbled upon a similar technique. I've been using random photos I find on Flickr, making them noisy, then using them at around 0.85 denoise strength so they "somewhat" influence the output. It's working well for portraits and stylized photos, or just for getting something way out there.
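
    A minimal sketch of the pre-blending trick from this thread, using Pillow. The file names, the "difference" blend mode, and the 0.3 mix ratio are illustrative assumptions, not the commenters' exact settings:

    ```python
    from PIL import Image, ImageChops

    face = Image.open("face.png").convert("RGB")
    pattern = Image.open("fabric_pattern.png").convert("RGB").resize(face.size)

    # "Difference" gives the psychedelic, colour-inverted look described above
    messy = ImageChops.difference(face, pattern)

    # Keep some of the original visible so img2img can bring the face back
    source = Image.blend(messy, face, alpha=0.3)
    source.save("img2img_source.png")  # feed this to img2img at high denoise
    ```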

  • @frankiesomeone · 1 year ago · +6

    Couldn't you do this in Automatic1111 using the colour image as img2img input and the black & white image as controlnet depth?

    • @eskindmitry · 1 year ago · +3

      Just did it, looks awesome! I actually replaced the first step of creating a white frame by using an inner-glow layer style. I mean, we are already in Affinity, so why not just make the pictures at the right size and with the white border to begin with...

    • @OlivioSarikas · 1 year ago · +2

      Actually a good point; yes, that should work. However, you don't have the flexibility of manipulating the images inside the workflow the way ComfyUI does. I show a somewhat basic build here, but you can do a lot more: blending noise images together, changing their color, and more, all with different nodes.

    • @xn4pl · 1 year ago · +1

      @@OlivioSarikas With the Photopea (web-based Photoshop clone) extension in Automatic1111, you can just paint any splotches or even silhouettes, import them into img2img with a single button, export back into Photopea with another button, and iterate back and forth all you like. And things like blending images and changing colors are much easier to do in Photopea than in Comfy.

  • @jeffbull8781 · 1 year ago · +4

    I have been using a similar self-made workflow for a while for text2image, but it requires no image inputs: it creates weird noise inputs and cycles them through various samplers to generate a range of different images from the same prompt. The idea was based on someone else's workflow and iterated on. You can do it by creating noise outputs with the 'image to noise' node on a low-step sample, blending that with Perlin or plasma noise, and then having the step count start at a number above 10. (A rough sketch of building such a noise input follows this thread.)

    • @OlivioSarikas · 1 year ago · +1

      That's awesome! Akatsuzi also has various pattern and noise generator nodes. In this video I wanted to show that you can also create them yourself, and the effect the different shapes you paint into them have. You can see in the images that the circle or triangle, and the colors, have a strong impact on the resulting composition.
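
    A rough sketch of building a blended, plasma-like noise input in the spirit of this thread, using NumPy and Pillow in place of the 'image to noise' and Perlin/plasma nodes. All sizes, octave weights, and channel offsets are illustrative:

    ```python
    import numpy as np
    from PIL import Image

    def smooth_noise(size=512, cells=8, seed=0):
        """Upsample a coarse random grid into a smooth, plasma-like field."""
        rng = np.random.default_rng(seed)
        coarse = (rng.random((cells, cells)) * 255).astype(np.uint8)
        img = Image.fromarray(coarse, mode="L").resize((size, size), Image.BICUBIC)
        return np.asarray(img, dtype=np.float32) / 255.0

    # Blend a few octaves, then offset the channels for a colourful result
    field = (0.5 * smooth_noise(cells=4, seed=1)
             + 0.3 * smooth_noise(cells=16, seed=2)
             + 0.2 * smooth_noise(cells=64, seed=3))
    rgb = np.stack([np.roll(field, 40 * k, axis=1) for k in range(3)], axis=-1)
    Image.fromarray((rgb * 255).clip(0, 255).astype(np.uint8)).save("noise_input.png")
    ```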

  • @BoolitMagnet · 1 year ago · +2

    The outputs really are artistic; I can't wait to play around with this. Thanks for another great video on a really useful technique.

    • @OlivioSarikas · 1 year ago

      You are welcome. I love this creative approach and the results that Akatsuzi came up with.

  • @Herman_HMS · 1 year ago · +3

    To me it just seems like you could have used img2img with high denoising to get the same effect?

    • @rbscli · 1 year ago

      Didn't really get it either.

  • @subhralaya_clothing · 1 year ago · +10

    Sir, please bring an Automatic1111 tutorial as well.

    • @CoreyJohnson193 · 1 year ago · +2

      A1111 is dead, bro 😂

    • @pedrogorilla483 · 1 year ago · +1

      Can’t do it there.

    • @cipher893 · 1 year ago

      @@CoreyJohnson193 I'm a little out of the loop. What's the better alternative to A1111? Counting out ComfyUI.

    • @CoreyJohnson193 · 1 year ago

      @@cipher893 SwarmUI, Fooocus... check them out. A1111 is "old hat" now. Swarm is Stability's own revamped UI, and I think those two are much better. I'd also look into Aegis workflows for ComfyUI, which make it more professional to use.

    • @jakesalmon4982 · 1 year ago

      @@cipher893 There isn't one; A1111 is the best at what it is. He was saying it's dead because Comfy exists... I disagree for some use cases.

  • @pedroserapio8075 · 1 year ago · +1

    Interesting, but I don't get it. At 05:15, where did the blue go? Into the background? Or did the blue you were talking about turn into yellow?

    • @OlivioSarikas · 1 year ago · +2

      Yes, I meant to say her outfit is yellow now.

  • @MrMustachio43 · 1 year ago · +2

    Question: what's the biggest difference between this and image-to-image? Easier to colour? Asking because I feel you could get the same pose easily with image-to-image.

  • @minecraftuser8900 · 1 year ago

    When are you making some more A1111 tutorials? I really liked them!

  • @geraldhewes · 1 year ago · +1

    I tried your workflow but just get a blank screen. I did update the missing nodes, update everything, and restart. Akatsuzi's workflow does load for me, but I don't have a model for CR Upscale Image and I'm not sure where to get one. The GitHub repo for this module isn't clear about where to get them.

    • @geraldhewes · 1 year ago

      The v2 update fixed this issue. 🙏

  • @AndyHTu · 1 year ago

    This feature is actually built into Invoke AI. It's very easy to use as well, if you guys haven't played with it. It just works as a reference to be used as a texture.

  • @kazioo2 · 1 year ago · +17

    Remember when AI gen was about writing a prompt?

  • @blisterfingers8169 · 1 year ago · +5

    Fun stuff, Olivio. Thanks for the workflows. FYI, the workflows load way off from the default starting area, meaning newbies might think they didn't work. ♥
    Thanks for going over how you make the inputs too. Makes me wanna train a LoRA for them.

    • @TheSickness · 1 year ago · +1

      Thanks, that got me haha
      Scroll out ftw^^

    • @OlivioSarikas · 1 year ago · +2

      Thank you, I will look into that.

  • @gatwick127 · 1 year ago · +2

    Can you do this in Automatic1111?

  • @keepitshort4208 · 1 year ago

    My Python crashed while running Stable Diffusion.
    What could be the issue?

  • @mihailbormin · 1 year ago · +24

    I don't think you have to go this far to get this kind of effect. Just take those abstract images you generated and run i2i on them. It's an old technique, proposed like a year ago, and it gives very much the same creative and colorful results.

    • @c0dexus · 1 year ago

      Yeah, the clickbait title made it seem like some new technique, but it's just using img2img and ControlNet to get interesting results.

    • @vintagegenious · 1 year ago

      That's exactly what he is doing: 75% denoise with an initial image is just i2i (see the sketch at the end of this thread).

    • @vuongnh0607l · 1 year ago

      @@vintagegenious you can go 100% denoise and still get some benefit too.

    • @vintagegenious · 1 year ago

      @@vuongnh0607l I didn't know. Isn't that just txt2img (if we ignore the ControlNet)?

    • @AliTanUcer · 1 year ago

      I do agree; I don't see anything revolutionary here. I have been doing this since the beginning. :)
      Also, feeding in weird depth maps. I think he just discovered it, I guess. :)
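
    For reference, the "just run i2i on an abstract image" approach from this thread maps onto a stock img2img call. A minimal diffusers sketch, where the model ID, prompt, and 0.75 strength are illustrative assumptions:

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

    abstract = Image.open("abstract.png").convert("RGB").resize((512, 512))
    out = pipe(
        prompt="portrait of a woman, vivid colors",
        image=abstract,       # the abstract/noise image as the i2i source
        strength=0.75,        # the "75% denoise" discussed above
        guidance_scale=7.5,
    ).images[0]
    out.save("i2i_result.png")
    ```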

  • @eduMachado83 · 1 year ago

    This is so 80s... I liked it!

  • @EddieGoldenberg · 1 year ago

    Hi, beautiful flow. I tried to run it on SDXL (with an SDXL ControlNet depth model) but got weird results. It seems only 1.5 checkpoints work. Is that true?

  • @windstar2006 · 1 year ago

    Can A1111 use this?

  • @veteranxt4481 · 1 year ago

    @Olivio Sarikas What would be useful for an RX 6600 XT, an AMD GPU?

  • @BF-non · 1 year ago

    Awesome video

  • @petec737 · 1 year ago · +1

    Looks like soon enough we're going to recreate the entire Photoshop interface inside a ComfyUI workflow :))

  • @aymericrichard6931 · 1 year ago · +1

    I probably don't understand. I have the impression we replace one noise with another noise whose effect we still don't control either.

    • @filigrif · 1 year ago

      I completely agree with that :) It's not giving "more control" but the opposite: less control, so that Stable Diffusion can digress from the most common poses and image compositions, which it has obviously been overtrained on. It's still something that can be controlled more simply via OpenPose (for more unusual poses) and img2img (if you need more colorful outputs). Much more satisfying when you need to use SD for work.
      Still, fun experiments!

    • @aruak321 · 10 months ago

      @@filigrif What he showed was essentially an img2img workflow (with a depth-map ControlNet) plus some extra nodes to pre-condition the image, along with a very high denoise. So I'm not sure what you mean by saying he could have just used img2img. Also, this absolutely does provide an additional level of control compared to completely empty latent noise. (See the sketch after this thread for the overall shape of such a pipeline.)
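
    A rough diffusers approximation of the pipeline described above: img2img from a painted noise image, guided by a depth ControlNet, at a very high denoise. Model IDs, file names, and parameter values are illustrative, not the video's exact workflow:

    ```python
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16).to("cuda")

    noise_painting = Image.open("noise_input.png").convert("RGB")  # coloured noise
    depth_map = Image.open("depth_input.png").convert("RGB")       # b&w depth image

    result = pipe(
        prompt="1girl, colorful outfit, detailed illustration",
        image=noise_painting,      # img2img source (the "noise styling" input)
        control_image=depth_map,   # depth ControlNet guidance
        strength=0.75,             # high denoise: repaint, keep colour hints
        controlnet_conditioning_scale=0.8,
    ).images[0]
    result.save("styled.png")
    ```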

  • @summerofsais · 1 year ago

    Hey, I'm in Bangkok right now. I have a casual interest in AI, not as in-depth as yours, but we could grab a quick coffee.

  • @programista15k22 · 1 year ago

    What hardware do you use? What graphics card?

  • @mistraelify · 1 year ago

    Wasn't ControlNet segmentation doing the same thing for recoloring pictures using masks, except this time it's kind of all-in-one? I'd like a little explanation of that.

  • @jhnbtr · 1 year ago · +1

    How is it AI if you have to do all the work? You may as well draw it at this point. Could AI get any more complicated?

  • @Shingo_AI_Art · 1 year ago · +2

    The results look pretty random, but the artistic touch is wonderful.

  • @KDawg5000 · 1 year ago

    Might be fun to use this with SDXL Turbo and do live painting.

  • @HasanAslan · 1 year ago

    The workflow doesn't load. It doesn't give any errors; just nothing happens in ComfyUI. Maybe share the image you produced, even the non-upscaled version?

  • @hleet · 1 year ago · +2

    I would prefer to inject more noise (resolution) in order to get more complex scenes. Anyway, it's a nice workflow. Got to check out that FaceDetailer node next :)

    • @OlivioSarikas · 1 year ago

      You can actually blend this noise with normal empty latent noise, or any other noise you create, to get both :) Also, you can inject more noise at the second render step too ;)

    • @sznikers · 1 year ago

      Wouldn't an add-detail LoRA during the upscaling part of the workflow do the job too?

  • @patfish3291 · 1 year ago · +1

    The point is, we need to make AI images way more controllable in an artistic way: painting noise, strokes, lines, etc. for the base composition, then refining the detail in a second or third pass, and afterwards a color pass... All of that has to be in a simple interface like Photoshop. This would bring the artistic part back to AI imagery and take it to a completely different level.

  • @BeautifulThingsComeTrue · 1 year ago

    You can also just use prompt travel to achieve the same result.

  • @Explorewithajwise · 1 year ago

    Thank you for always being a great source of inspiration; I look forward to watching your videos. Also, thank you for not putting these workflows and tips behind a paywall. I understand why others do it; I'm so glad you're not one of them.

  • @pedxing · 1 year ago

    Prooobably going to need to see this with a turbo or latent model for near-real-time wonderment. Also... any way to load a moving (or at least periodically changing / auto-queuing) set of images into the noise channel for some video-effect styling? Thanks for the great video, as always!

    • @pedxing · 1 year ago

      Also... how about an actual oscilloscope, to create the noise channel from actual NOISE? =)

  • @jibcot8541 · 1 year ago

    I like it. It would be easier if ComfyUI had a drawing node, but that might not be as controllable as using a Photoshop-type application.

    • @blisterfingers8169 · 1 year ago · +1

      There's a Krita plugin that uses Comfy as its backend, but it's really finicky to use, it seems.

    • @TheDocPixel · 1 year ago

      Try using the canvas node for live turbo gens, and connect to depth or any other controlnet. Experiment!

    • @SylvainSangla · 1 year ago · +1

      You can use Photoshop: when you save a file into your ComfyUI input folder and you are using Auto Queue mode, the input picture is reloaded by ComfyUI.
      The only difference from an integrated canvas is that you have to save your changes manually, but it's way more flexible. (A tiny sketch of this round trip follows.)
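
    A tiny polling sketch of that round trip, with hypothetical paths; it copies a re-saved editor export into ComfyUI's input folder so Auto Queue picks up the change:

    ```python
    import shutil
    import time
    from pathlib import Path

    # Hypothetical paths: adjust to your editor's export file and ComfyUI install
    EXPORT = Path.home() / "Desktop" / "noise_export.png"
    COMFY_INPUT = Path.home() / "ComfyUI" / "input" / "noise_input.png"

    last_mtime = 0.0
    while True:
        mtime = EXPORT.stat().st_mtime if EXPORT.exists() else 0.0
        if mtime > last_mtime:                 # file was re-saved in the editor
            shutil.copy2(EXPORT, COMFY_INPUT)  # Auto Queue re-reads the input
            last_mtime = mtime
        time.sleep(1.0)
    ```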

  • @kamillatocha · 1 year ago · +5

    Soon AI artists will actually have to draw their prompts.

    • @UmbraPsi · 1 year ago · +2

      Already getting there. I started with AI prompting and have slowly gotten better at digital drawing using img2img; I figured it made more sense, since visual control translates better to visual output. I wonder how strange my art style will be, being essentially AI-trained rather than classically trained.

  • @gameswithoutfrontears416 · 1 year ago

    Really cool

  • @ivoxx_ · 1 year ago · +1

    This is amazing, you're the boss Olivio!

    • @user-zi6rz4op5l · 1 year ago

      He is basically ripping off other people's workflows and pasting them on his channel.

    • @ivoxx_ · 1 year ago

      @@user-zi6rz4op5l Unless he charges for or doesn't share such workflows, I don't see the issue.
      Maybe he could at least say where he got them from.
      I end up using third-party workflows as a base or to learn a process, then I make my own or customize them as needed.

  • @xn4pl · 1 year ago

    The man, at his wits' end for content, reinvents img2img but calls it something different to make it seem like a novelty. Bravo.

  • @Clupea101 · 1 year ago

    Great Guide

  • @alekxsander · 1 year ago

    I thought I was the only human being to have 10,000 tabs open at the same time! hahahaha

  • @sb6934 · 1 year ago

    Thanks!

  • @webraptor007 · 1 year ago

    Thank you...

  • @Soshi2k · 1 year ago

    Going to need GPT to break this down 😂

  • @MrSongib · 1 year ago

    So it's a depth map + custom img2img with high denoise. OK.

  • @Artazar777 · 1 year ago

    The ideas are interesting, but I'm lazy. Anyone have any ideas on how to make a lot of noise pictures without spending a lot of time on it?

    • @blisterfingers8169 · 1 year ago · +1

      ComfyRoll has a bunch of nodes for generating patterns like halftone, Perlin noise, gradients, etc. Blend a bunch of those together with an image blend node. (Or batch-generate them with a short script, as in the sketch below.)
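
    If you'd rather script it than wire up nodes, here is a quick sketch that batch-generates random gradient-plus-blob images with NumPy and Pillow; every constant is an arbitrary choice:

    ```python
    import numpy as np
    from PIL import Image

    rng = np.random.default_rng()
    for i in range(20):
        # Vertical gradient between two random colours, shape (512, 1, 3)
        c0, c1 = rng.random(3), rng.random(3)
        t = np.linspace(0.0, 1.0, 512)[:, None, None]
        grad = c0 * (1.0 - t) + c1 * t
        # Coarse random blobs upsampled to full size, shape (512, 512, 3)
        blobs = np.asarray(
            Image.fromarray((rng.random((8, 8, 3)) * 255).astype(np.uint8))
                 .resize((512, 512), Image.BICUBIC),
            dtype=np.float32) / 255.0
        img = 0.5 * grad + 0.5 * blobs  # broadcasts to (512, 512, 3)
        Image.fromarray((img * 255).clip(0, 255).astype(np.uint8)).save(f"noise_{i:02d}.png")
    ```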

  • @lazydogfilms30 · 1 year ago

    Have you given up doing tutorials for proper photography, or are you going down this AI route?

    • @sirflimflam · 1 year ago · +6

      I think you're about 12 months late asking that question.

  • @TeamPhlegmatisch · 1 year ago · +1

    That looks nice, but totally random to me.

  • @HyperGalaxyEntertainment · 1 year ago

    Are you a fan of aespa?

  • @simonmcdonald446 · 1 year ago

    Interesting. Not really sure why the AI art world has so many anime girl artworks. Oh well.......

  • @kanall103 · 1 year ago

    Nothing changes in this world.

  • @jiggishplays · 1 year ago

    I don't like this, because there are way too many errors for someone who is just starting and gets confused by all this stuff. Other workflows have no issues, though.

  • @artisans8521 · 11 months ago

    What I see are a lot of unbalanced compositions. The poor girl's center of mass is not above her feet, so she would fall to the floor.

  • @LouisGedo · 1 year ago

    👋

  • @robotron07 · 1 year ago · +1

    Way too convoluted.

  • @drjjones · 1 year ago

    It's cool, but not new...
    I have used gradients generated in ComfyUI in the past, injecting them into a previous image, and can change day to night and a few other things with it.
    The process is almost identical.
    I do like the addition of the depth map; I tend to use monster instead.

  • @Danny2k34 · 1 year ago · +4

    I get why Comfy was created: Gradio is trash, and A1111 doesn't update as fast as it should for something at the cutting edge of AI. Still, I feel like it was really created because "real" artists kept complaining that AI artists just write some text and click generate, which requires no skill and is lazy. So, behold, ComfyUI, an interface that'll give you Blender flashbacks and overcomplicates the whole process of just generating a simple image.

    • @blisterfingers8169 · 1 year ago · +1

      Node systems have been gaining prevalence in all sorts of rendering areas, including shaders for games, 3D software, etc. The SD ecosystem just lends itself to them.
      Also, check out Invoke for a more artist-focused UI.

    • @dvanyukov · 1 year ago · +4

      I think you are missing the point of ComfyUI. It wasn't meant to compete with 1111; it was specifically designed to be a highly modular backend application. When you need to create something that you will call over and over again, it's fantastic, and you can make that workflow very complex. However, if you are experimenting or doing miscellaneous work, 1111 should be your go-to. Personally, I switch between the two depending on the type of work, but I like Comfy more because it gives me more control and reusability.

    • @dinkledankle · 1 year ago

      It is only as complex as you need it to be; it takes only a few nodes to generate. I don't know why people are taking such personal offense to a GUI that simply allows for essentially endless workflow customization. You're pointlessly hyperbolizing. A potato could learn to use ComfyUI.

  • 1 year ago

    Please don't leave A1111! Comfy is used by very few, A1111 is used by many.

  • @IDSbrands · 8 months ago

    Makes no practical sense... It's like spinning a wheel: you never know what the outcome is going to be. At best, we look at the results for entertainment, then exit the app and go do some real work.

  • @GoodEggGuy · 1 year ago

    Sadly, ComfyUI is so intimidating and so much like programming that it's terrifying. As a new/casual person, I find this so technical that I have given up all hope of using AI art. It's disheartening to see your videos of the last couple of months, knowing that it would take me years to understand any of this, by which time the tech will have moved on, so it will be of no value :-(

    • @dinkledankle · 1 year ago · +1

      It took me less than a month to get comfortable with ComfyUI, and I have zero programming experience; really, it takes only a few days to understand the node flow. It's not intimidating or difficult; you're just putting yourself down for no reason. You can generate images with fewer than five nodes, even fewer with efficiency nodes.

    • @rbscli · 1 year ago

      Come on. I didn't love ComfyUI at first either, but it is not that difficult. There are a ton of foolproof tutorials out there. Just do some experimenting, and within minutes you will get a grip on it. If you are that uncomfortable with learning difficult things, I don't even know how you got to SD instead of, for example, Midjourney.

    • @GoodEggGuy · 1 year ago

      @@rbscli Olivio recommended Fooocus and I have been using that.

    • @aruak321 · 10 months ago

      @GoodEggGuy ComfyUI actually looks and works like a lot of modern artist tools and workflows that artists (not programmers) are already used to. These types of tools exist to give programming-like control to non-programmers. Programmers could do this much more simply with code.

  • @NotThatOlivia · 1 year ago

    First

  • @Slav4o911 · 1 year ago · +1

    Using that unComfyUI again... I just don't like it... I'll wait for an Automatic1111 video.

  • @T-Bone54 · 1 year ago

    An overblown overreaction to a basic background texture achievable in any photo editor. 'Noise'? Really? The Emperor's New Clothes, anyone?

    • @aruak321 · 10 months ago

      I think the point is to use specific noise patterns to guide your image as opposed to completely random noise with an empty latent. Just another way of experimenting.

  • @ಥ_ಥ-ಞ9ಞ · 1 year ago · +1

    Got excited but clicked off after seeing ComfyUI.

    • @vuongnh0607l · 1 year ago

      Missing all the fun stuff

    • @vintagegenious · 1 year ago · +1

      Basically: use noisy, colorful images to do img2img.