Noise Styling is the NEXT LEVEL of AI Image Generation

  • Published 5 Oct 2024
  • Noise Styling is the next dimension of AI image generation. This new method by Akatsuzi creates incredible new styles and AI designs. Go far beyond what your AI model can do on its own, explore new artistic expressions, and become more versatile with AI noise styling.
    #### Links from the Video ####
    My Workflow + Noise Map Bundles: drive.google.c...
    Akatsuzi Workflows: openart.ai/wor...
    Akatsuzi Noise Maps: drive.google.c...
    #### Join and Support me ####
    Buy me a Coffee: www.buymeacoff...
    Join my Facebook Group: / theairevolution
    Join my Discord Group: / discord
    AI Newsletter: oliviotutorial...
    Support me on Patreon: / sarikas

COMMENTS • 135

  • @uni0ue87 9 months ago +47

    Hmm, maybe I didn't get it, but this seems like a very complicated way to get a tiny bit of control over colors and shapes.

    • @OlivioSarikas 9 months ago +5

      You get a lot of creative outputs that the model on its own couldn't create, so there are endless ways of experimenting with this.

    • @ImmacHn 9 months ago +4

      This is more of an exploratory method than anything, which sometimes you want for inspiration.

    • @uni0ue87 9 months ago +1

      I see, makes sense now, thanks.

    • @alecubudulecu 9 months ago +1

      You should try it. It’s pretty fun.

    • @jeanrenaudviers 8 months ago +1

      Blender 3D has nodes too, and it's absolutely stunning. Even for 3D elements, shading, and compositing you can build your very own modules, and it's non-destructive.

  • @Foolsjoker 9 months ago +6

    As always, I love your walkthroughs: you don't miss a node, and you explain the flow. Keeps it simple and on track. Hope you're having fun on your trip!

    • @OlivioSarikas 9 months ago +1

      Thank you very much. I forgot to include new shots from my Bangkok stay this time.

    • @Foolsjoker 9 months ago

      @OlivioSarikas No worries. I was there last year. Beautiful country.

  • @TimothyMusson 9 months ago +7

    This reminds me: I've found that plain old image-to-image can be "teased" in a similar way, with really surprising/unusual results. The trick is to add "noise" to the input image in advance, using an image editor. By "adding noise", I mean superimposing/blending the source image (e.g. a face) with another image (e.g. a pattern: maybe a piece of fabric, some wallpaper, some text... something random), using an interesting blend mode, so the resulting image looks quite psychedelic and messy, perhaps even a bit negative/colour-inverted. Then use that as the source image for image-to-image, with a prompt to help bring out the original face (or whatever it was). The results can be pretty awesome.
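
    A minimal Python sketch of that pre-noising trick, assuming Pillow is installed; face.png and pattern.png are stand-in file names, and the blend mode and mix weight are just illustrative:

        from PIL import Image, ImageChops

        # Load the source photo and a "noise" image (fabric, wallpaper, text...).
        face = Image.open("face.png").convert("RGB")
        pattern = Image.open("pattern.png").convert("RGB").resize(face.size)

        # Superimpose them with an interesting blend mode; difference() gives
        # the psychedelic, partly colour-inverted look described above.
        blended = ImageChops.difference(face, pattern)

        # Mix some of the original back in so the subject stays recoverable,
        # then feed the result into img2img with a prompt for the subject.
        messy = Image.blend(blended, face, alpha=0.3)
        messy.save("img2img_input.png")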

    • @syndon7052 9 months ago +1

      Amazing tip, thank you.

    • @ProzacgodAI 6 months ago +1

      Hey, we stumbled upon a similar technique. I've been using random photos I find on Flickr, making them noisy, then using them at around 0.85 denoise strength so they "somewhat" influence the output. It's working well for portraits and stylized photos, or just for getting something way out there.

  • @subhralaya_clothing 9 months ago +10

    Sir, please bring an Automatic1111 tutorial also.

    • @CoreyJohnson193 9 months ago +2

      A1111 is dead, bro 😂

    • @pedrogorilla483 9 months ago +1

      Can’t do it there.

    • @cipher893 9 months ago

      @CoreyJohnson193 I'm a little out of the loop. What's the better alternative to A1111? Counting out ComfyUI.

    • @CoreyJohnson193 9 months ago

      @cipher893 SwarmUI, Fooocus... Check them out. A1111 is "old hat" now. Swarm is Stability's own revamped UI, and I think those two are much better. I'd also look into Aegis workflows for ComfyUI that make it more professional to use.

    • @jakesalmon4982 9 months ago

      @cipher893 There isn't one; A1111 is the best at what it does. He was saying it's dead because Comfy exists. I disagree for some use cases.

  • @jeffbull8781 9 months ago +4

    I have been using a similar self-made workflow for a while on text2image, but it requires no image inputs: it creates weird noise inputs and cycles them through various samplers to generate a range of different images from the same prompt. The idea was based on someone else's workflow and iterated on. You can do it by creating noise outputs with the 'image to noise' node on a low-step sample, blending that with Perlin or plasma noise, and then having the step count start at a number above 10.
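
    Outside ComfyUI, the blend step could be approximated with NumPy and Pillow; a rough sketch (the file name and the 50/50 mix are assumptions, and the upscaled random field only stands in for real Perlin/plasma noise):

        import numpy as np
        from PIL import Image

        rng = np.random.default_rng()
        W, H = 512, 512

        # Stand-in for the 'image to noise' node: shuffle the pixels of a
        # low-step render so only its colour palette survives.
        img = np.asarray(Image.open("low_step_sample.png").convert("RGB")
                         .resize((W, H)), dtype=np.float32)
        shuffled = rng.permutation(img.reshape(-1, 3)).reshape(H, W, 3)

        # Cheap plasma-like noise: a tiny random field upscaled with bicubic.
        coarse = (rng.random((8, 8, 3)) * 255).astype(np.uint8)
        plasma = np.asarray(Image.fromarray(coarse).resize((W, H), Image.BICUBIC),
                            dtype=np.float32)

        # Blend the two noise sources into one input image.
        mix = (0.5 * shuffled + 0.5 * plasma).clip(0, 255).astype(np.uint8)
        Image.fromarray(mix).save("noise_input.png")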

    • @OlivioSarikas 9 months ago +1

      That's awesome! Akatsuzi also has various pattern and noise generator nodes. In this video I wanted to show that you can also create them yourself, and the effect that the different shapes you paint into them have. You can see in the images that the circle or triangle and the colors have a strong impact on the resulting composition.

  • @BoolitMagnet 9 months ago +2

    The outputs really are artistic; I can't wait to play around with this. Thanks for another great video on a really useful technique.

    • @OlivioSarikas 9 months ago

      You are welcome. I love this creative approach and the results that Akatsuzi came up with.

  • @kazioo2 9 months ago +17

    Remember when AI gen was about writing a prompt?

    • @DivinityIsPurity 9 months ago

      A1111 reminds me every time I use it.

    • @jakesalmon4982 9 months ago +2

      Much more interesting this way :) A depth map is worth 1,000 words.

    • @OlivioSarikas 9 months ago +2

      It still is on Midjourney ;)

  • @blisterfingers8169 9 months ago +5

    Fun stuff, Olivio. Thanks for the workflows. FYI, the workflows load way off from the default starting area, so newbies might think they didn't work. ♥
    Thanks for going over how you make the inputs too. Makes me want to train a LoRA for them.

    • @TheSickness 9 months ago +1

      Thanks, that got me haha
      Scroll out ftw^^

    • @OlivioSarikas 9 months ago +2

      Thank you, I will look into that.

  • @Herman_HMS 9 months ago +3

    To me it just seems like you could have used img2img with high denoising to get the same effect?

    • @rbscli 9 months ago

      Didn't really get it either.

  • @mihailbormin 9 months ago +24

    I don't think you have to go this far to get this kind of effect. Just take those abstract images you generated and go i2i on them. It's an old technique, proposed like a year ago, and it gives very much the same creative and colorful results.

    • @c0dexus 9 months ago

      Yeah, the clickbait title made it seem like some new technique, but it's just using img2img and ControlNet to get interesting results.

    • @vintagegenious 9 months ago

      That's exactly what he is doing: 75% denoise with an initial image is just i2i.

    • @vuongnh0607l 9 months ago

      @vintagegenious You can go 100% denoise and still get some benefit too.

    • @vintagegenious 9 months ago

      @vuongnh0607l I didn't know that; isn't it just txt2img then (if we ignore the ControlNet)?

    • @AliTanUcer 9 months ago

      I do agree, I don't see anything revolutionary here. I have been doing this since the beginning :)
      Also feeding in weird depth maps. I think he just discovered it, I guess :)

  • @kamillatocha 9 months ago +5

    Soon AI artists will actually have to draw their prompts.

    • @UmbraPsi 9 months ago +2

      Already getting there. I started with AI prompting and have slowly gotten better at digital drawing using img2img; I figured visual control translates better to visual output. I wonder how strange my art style will end up, being essentially AI-trained rather than classically trained.

  • @frankiesomeone 9 months ago +6

    Couldn't you do this in Automatic1111, using the colour image as the img2img input and the black & white image as ControlNet depth?

    • @eskindmitry 9 months ago +3

      Just did it, looks awesome! I've actually replaced the first step of creating a white frame with an inner glow layer style. I mean, we are already in Affinity, so why not just make the pictures in the right size and with the white border to begin with...

    • @OlivioSarikas 9 months ago +2

      Actually a good point; yes, that should work. However, you don't have the flexibility of manipulating the images inside the workflow the way ComfyUI allows. I show a somewhat basic build here, but you can do a lot more: blending noise images together, changing their color and more, all with different nodes.

    • @xn4pl 9 months ago +1

      @OlivioSarikas With the Photopea (web-based Photoshop clone) extension in Automatic1111, you can just paint any splotches or even silhouettes, import them into img2img with a single button, export them back into Photopea with another button, and iterate back and forth all you like. And things like blending images and changing colors are much easier done in Photopea than in Comfy.

  • @summerofsais 9 months ago

    Hey, I'm in Bangkok right now. I have a casual interest in AI, not as in-depth as yours, but we could grab a quick coffee.

  • @Shingo_AI_Art 9 months ago +2

    The results look pretty random, but the artistic touch is wonderful.

  • @petec737 9 months ago +1

    Looks like soon enough we're going to recreate the entire Photoshop interface inside a ComfyUI workflow :))

  • @AndyHTu 9 months ago

    This feature is actually built into Invoke AI. It's very easy to use as well, if you guys haven't played with it. It just works as a reference image used as a texture.

  • @MrMustachio43 9 months ago +2

    Question: what's the biggest difference between this and image-to-image? Easier to colour? Asking because I feel you could get the same pose easily with image-to-image.

  • @eduMachado83 9 months ago

    This is so 80s... I liked it!

  • @mick7727 7 months ago

    Nice results! Would this be achievable with multiple IPAdapter references? I feel like it would be in practice; I just haven't thought of trying it yet.

  • @minecraftuser8900 9 months ago

    When are you making some more A1111 tutorials? I really liked them!

  • @KDawg5000 9 months ago

    Might be fun to use this with SDXL Turbo and do live painting.

  • @hleet 9 months ago +2

    I would rather inject more noise (resolution) in order to get more complex scenes. Anyway, it's a nice workflow. Got to check out that FaceDetailer node next :)

    • @OlivioSarikas 9 months ago

      You can actually blend this noise with normal empty latent noise, or any other noise you create, to get both :) You can also inject more noise on the second render step ;)
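
      For anyone trying that blend outside the node graph, it boils down to a weighted sum; a tiny torch sketch (the shape and the 0.5 weights are illustrative, and image_latent stands in for a VAE-encoded noise picture):

          import torch

          # VAE-encoded noise image, e.g. (1, 4, 64, 64) for a 512px SD 1.5 render.
          image_latent = torch.randn(1, 4, 64, 64)  # placeholder tensor

          # The noise a plain txt2img run would start from ("empty latent").
          empty_noise = torch.randn_like(image_latent)

          # Blend so the painted noise steers, but doesn't dominate, the sample.
          blended = 0.5 * image_latent + 0.5 * empty_noise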

    • @sznikers 9 months ago

      Wouldn't an add-detail LoRA during the upscaling part of the workflow do the job too?

  • @alekxsander 9 months ago

    I thought I was the only human being to have 10,000 tabs open at the same time! hahahaha

  • @BF-non 9 months ago

    Awesome video

  • @patfish3291 9 months ago +1

    The point is, we need to make AI images way more controllable in an artistic way: painting noise/strokes/lines etc. for the base composition, then refining the detail in a second or third pass, and afterwards a color pass... All of that has to live in a simple interface like Photoshop. This would bring the artistic part back to AI imagery and take it to a completely different level.

  • @PostmetaArchitect 9 months ago

    You can also just use prompt travel to achieve the same result.

  • @webraptor007 9 months ago

    Thank you...

  • @pedroserapio8075 9 months ago +1

    Interesting, but I don't get it. At 05:15, where did the blue go? Into the background? Or did the blue you were talking about turn into yellow?

    • @OlivioSarikas 9 months ago +2

      Yes, I meant to say her outfit is yellow now.

  • @gatwick127 9 months ago +2

    Can you do this in Automatic1111?

  • @ivoxx_ 9 months ago +1

    This is amazing, you're the boss, Olivio!

    • @user-zi6rz4op5l 9 months ago

      He basically rips off other people's workflows and pastes them on his channel.

    • @ivoxx_ 9 months ago

      @user-zi6rz4op5l Unless he charges for such workflows or doesn't share them, I don't see the issue.
      Maybe he could at least say where he got them from.
      I end up using third-party workflows as a base or to learn a process, then I make my own or customize them as needed.

  • @mistraelify 9 months ago

    Wasn't ControlNet segmentation doing the same thing for recoloring pictures using masks, except this time it's kind of all-in-one? I'd like a little explanation of that.

  • @Clupea101 9 months ago

    Great Guide

  • @sb6934 9 months ago

    Thanks!

  • @aymericrichard6931 9 months ago +1

    I probably don't understand. I have the impression we replace one noise with another noise whose effect we still don't control either.

    • @filigrif 9 months ago

      I completely agree with that :) It's not giving "more control" but the opposite: less control, so that Stable Diffusion can digress from the most common poses and image compositions, which it has obviously been overtrained on. It's still something that can be controlled more simply via OpenPose (for more unusual poses) and img2img (if you need more colorful outputs). Much more satisfying when you need to use SD for work.
      Still, fun experiments!

    • @aruak321 7 months ago

      @filigrif What he showed was essentially an img2img workflow (with a depth-map ControlNet) plus some extra nodes to pre-condition the image, along with a very high denoise. So I'm not sure what you mean by saying he could have just used img2img. Also, this absolutely does provide an additional level of control compared to a completely empty latent.
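
      That description maps fairly directly onto the diffusers library, for anyone who wants to try it outside ComfyUI; a sketch under the assumption of an SD 1.5 checkpoint and pre-made colour and depth images (file names and prompt are placeholders):

          import torch
          from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
          from PIL import Image

          controlnet = ControlNetModel.from_pretrained(
              "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
          pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

          result = pipe(
              prompt="portrait of a woman, vibrant colours",
              image=Image.open("colour_noise.png"),          # img2img init image
              control_image=Image.open("depth_shapes.png"),  # depth guidance
              strength=0.75,  # the "very high denoise" mentioned above
          ).images[0]
          result.save("styled.png")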

  • @geraldhewes 9 months ago +1

    I tried your workflow but just get a blank screen. I did install the missing nodes, updated everything, and restarted. Akatsuzi's workflow does load for me, but I don't have a model for CR Upscale Image and I'm not sure where to get one. The GitHub repo for this module isn't clear about where to find them.

    • @geraldhewes 9 months ago

      The v2 update fixed this issue. 🙏

  • @gameswithoutfrontears416 9 months ago

    Really cool

  • @jibcot8541 9 months ago

    I like it. It would be easier if ComfyUI had a drawing node, though that might not be as controllable as using a Photoshop-type application.

    • @blisterfingers8169 9 months ago +1

      There's a Krita plugin that uses Comfy as its backend, but it seems really finicky to use.

    • @TheDocPixel 9 months ago

      Try using the canvas node for live Turbo generations, and connect it to depth or any other ControlNet. Experiment!

    • @SylvainSangla 9 months ago +1

      You can use Photoshop: when you save a file to your ComfyUI input folder while using Auto Queue mode, the input picture is reloaded by ComfyUI.
      The only difference from an integrated canvas is that you have to save your changes manually, but it's way more flexible.

  • @AlexsForestAdventureChannel 9 months ago

    Thank you for always being a great source of inspiration; I look forward to watching your videos. Also, thank you for not putting these workflows and tips on a paid page. I understand why others do it; I'm so glad you're not one of them.

  • @jhnbtr 9 months ago +1

    How is it AI if you have to do all the work? You may as well draw it at this point. Can you make AI any more complicated?

  • @xn4pl 9 months ago

    A man at his wits' end for content reinvents img2img but calls it something else to make it seem like a novelty. Bravo.

  • @EddieGoldenberg 9 months ago

    Hi, beautiful flow. I tried to run it on SDXL (with SDXL ControlNet depth) but got weird results. It seems only 1.5 checkpoints work. Is that true?

  • @Soshi2k 9 months ago

    Going to need GPT to break this down 😂

  • @programista15k22 9 months ago

    What hardware do you use? What graphics card?

  • @keepitshort4208 9 months ago

    My Python crashed while running Stable Diffusion.
    What could be the issue?

  • @windstar2006 9 months ago

    Can A1111 use this?

  • @veteranxt4481 9 months ago

    @Olivio Sarikas What would be usable on an RX 6600 XT (an AMD GPU)?

  • @TeamPhlegmatisch 9 months ago +1

    That looks nice, but totally random to me.

  • @MrSongib 9 months ago

    So it's a depth map plus custom img2img with a high denoise. OK.

  • @pedxing 9 months ago

    Prooobably going to need to see this with a turbo or latent model for near-real-time wonderment. Also... any way to load a moving (or at least periodically changing/auto-queuing) set of images into the noise channel for some video-effect styling? Thanks for the great video, as always!

    • @pedxing 9 months ago

      Also... how about an actual oscilloscope, to create the noise channel from actual NOISE? =)

  • @HasanAslan 9 months ago

    The workflow doesn't load. It doesn't give any errors; just nothing happens in ComfyUI. Maybe share the image you produced, even the non-upscaled version?

  • @simonmcdonald446 9 months ago

    Interesting. Not really sure why the AI art world has so many anime girl artworks. Oh well...

  • @HyperGalaxyEntertainment 9 months ago

    Are you a fan of aespa?

  • @Artazar777 9 months ago

    The ideas are interesting, but I'm lazy. Anyone have any ideas on how to make a lot of noise pictures without spending a lot of time on it?

    • @blisterfingers8169 9 months ago +1

      ComfyRoll has a bunch of nodes for generating patterns like halftone, Perlin noise, gradients, etc. Blend a bunch of those together with an image blend node.
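
      If you'd rather batch them outside ComfyUI, a small script can mass-produce blended maps; a sketch with NumPy and Pillow (sizes, colours, and the 50/50 blend are arbitrary, and the upscaled blobs only roughly approximate the ComfyRoll pattern nodes):

          import numpy as np
          from PIL import Image

          rng = np.random.default_rng()
          W = H = 512

          def gradient():
              # Vertical gradient between two random RGB colours.
              c0, c1 = rng.integers(0, 256, 3), rng.integers(0, 256, 3)
              t = np.linspace(0, 1, H)[:, None, None]
              return (1 - t) * c0 + t * c1

          def blobs():
              # Soft colour blobs: a tiny random field upscaled with bicubic.
              tiny = (rng.random((8, 8, 3)) * 255).astype(np.uint8)
              return np.asarray(Image.fromarray(tiny).resize((W, H), Image.BICUBIC),
                                dtype=np.float64)

          # Batch-generate blended noise maps for experimentation.
          for i in range(20):
              mix = (0.5 * gradient() + 0.5 * blobs()).clip(0, 255).astype(np.uint8)
              Image.fromarray(mix).save(f"noise_map_{i:02d}.png")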

  • @jiggishplays 9 months ago

    I don't like this, because there are way too many errors for someone who is just starting out and gets confused by all this stuff. Other workflows have no issues, though.

  • @kanall103 9 months ago

    Nothing changes in this world.

  • @artisans8521 8 months ago

    What I see are a lot of unbalanced compositions. The poor girl's center of mass is not above her feet, so she would drop to the floor.

  • @robotron07 9 months ago +1

    Way too convoluted.

  • @LouisGedo 9 months ago

    👋

  • @lazydogfilms30 9 months ago

    Have you given up doing tutorials for proper photography, or are you going down this AI route?

    • @sirflimflam 9 months ago +6

      I think you're about 12 months late asking that question.

  • @Danny2k34 9 months ago +4

    I get why Comfy was created: Gradio is trash, and A1111 doesn't update as fast as it should for something at the cutting edge of AI. Still, I feel like it was really created because "real" artists kept complaining that AI artists just write some text and click generate, which requires no skill and is lazy. So, behold, ComfyUI: an interface that'll give you Blender flashbacks and overcomplicate the whole process of just generating a simple image.

    • @blisterfingers8169 9 months ago +1

      Node systems have been gaining prevalence in all sorts of rendering areas, including shaders for games, 3D software, etc. The SD ecosystem just lends itself to them.
      Also, check out Invoke for a more artist-focused UI.

    • @dvanyukov 9 months ago +4

      I think you are missing the point of ComfyUI. It wasn't meant to compete with 1111; it was specifically designed to be a highly modular backend application. When you need to create something you'll call over and over again, it's fantastic, and you can make that workflow very complex. However, if you are experimenting or doing miscellaneous work, 1111 should be your go-to. Personally, I switch between the two depending on the type of work, but I like Comfy more because it gives me more control and reusability.

    • @dinkledankle 9 months ago

      It is only as complex as you need it to be; it takes only a few nodes to generate. I don't know why people are taking such personal offense to a GUI that simply allows for essentially endless workflow customization. You're pointlessly hyperbolizing. A potato could learn to use ComfyUI.

  • @chirojanee 9 months ago

    It's cool, but not new...
    I have used gradients generated in ComfyUI in the past, injecting them into a previous image, and could change day to night and a few other things with it.
    The process is almost identical.
    I do like the addition of the depth map; I tend to use monster instead.

  • 9 months ago

    Please don't leave A1111! Comfy is used by very few, A1111 is used by many.

  • @IDSbrands 6 months ago

    Makes no practical sense... It's like spinning a wheel: you never know what the outcome is going to be. At best, we look at the results for entertainment, then exit the app and go do some real work.

  • @NotThatOlivia 9 months ago

    First

  • @ಥ_ಥ-ಞ9ಞ 9 months ago +1

    Got excited but clicked off after seeing ComfyUI.

    • @vuongnh0607l 9 months ago

      Missing all the fun stuff

    • @vintagegenious 9 months ago +1

      Basically, it uses noisy colorful images to do img2img.

  • @Slav4o911 9 months ago +1

    Using that unComfyUI again... I just don't like it... I'll wait for an Automatic1111 video.

  • @GoodEggGuy 9 months ago

    Sadly, ComfyUI is so intimidating and so much like programming that it's terrifying. As a new/casual person, I find this so technical that I have given up all hope of using AI art. It's disheartening to see your videos of the last couple of months, knowing that it would take me years to understand any of this, by which time the tech will have moved on, so it will be of no value :-(

    • @dinkledankle 9 months ago +1

      It took me less than a month to get comfortable with ComfyUI, and I have zero programming experience; really, it takes only a few days to understand the node flow. It's not intimidating or difficult; you're just putting yourself down for no reason. You can generate images with fewer than five nodes, even fewer with Efficiency Nodes.

    • @rbscli 9 months ago

      Come on. I didn't love ComfyUI at first either, but it is not that difficult. There are a ton of foolproof tutorials out there. Just do some experimentation and within minutes you will get a grip. If you are that uncomfortable learning difficult things, I don't even know how you got to SD instead of, say, Midjourney.

    • @GoodEggGuy 9 months ago

      @rbscli Olivio recommended Fooocus and I have been using that.

    • @aruak321 7 months ago

      @GoodEggGuy ComfyUI actually looks and works like a lot of modern artist tools and workflows that artists (not programmers) are already used to. These kinds of tools exist to give programming-like control to non-programmers; programmers could do this a lot more simply with code.

  • @T-Bone54 9 months ago

    An overblown overreaction to a basic background texture achievable in any photo editor. 'Noise'? Really? The Emperor's New Clothes, anyone?

    • @aruak321 7 months ago

      I think the point is to use specific noise patterns to guide your image as opposed to completely random noise with an empty latent. Just another way of experimenting.

  • @sxonesx 9 months ago +2

    It's cool, but it's unpredictable. And if it's unpredictable, then it's unusable.

    • @vuongnh0607l 9 months ago

      This is for when you want just a little bit of control but still let the model hallucinate. If you need stronger control, use the various ControlNet models.