A Solution to AI Plastic Skin

  • Published 16 Dec 2024

COMMENTS • 26

  • @ArrowKnow 10 hours ago +1

    Love it! I always enjoy when a new video drops, because it is apparent that you put a lot of thought into them, and I know that I am going to learn something interesting. Usually you make me look at things from a new perspective, which is always useful. Thank you!

  • @baheth3elmy16 12 hours ago

    This is revolutionary!!!!! I can't wait until you create the nodes as you said.

  • @Filokalee999 9 hours ago +1

    Very good workflow, indeed! Also your reference images are excellent fashion editorial images. May I ask what checkpoint(s) are recommended for fashion images?

  • @Han3D 2 hours ago

    You always bring great content!
    It's similar to a 3D workflow! Very interesting.
    It seems like Comfy has so much potential.
    Thank you so much, Andrea!

  • @kennedysworks 14 hours ago +2

    Exactly! I was actually thinking about the same process myself. It's the approach used in Photoshop and 3D as well. From the physics standpoint that light is a massless substance, this phenomenon also arises because generative AI treats light as simply "not existing". To the AI, any form is just a lump of matter with a different color.

    • @risunobushi_ai 13 hours ago +1

      My hope for the future is that we're able to train a "PBR Materials" ControlNet to drive roughness, metallic, IOR, etc., but for now, yeah, subsurface scattering is just a foreign concept to AI.
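
      As a rough sketch of what that kind of conditioning could look like, assuming a hypothetical ControlNet trained on PBR roughness passes existed (the checkpoint name below is a placeholder, not a real model):

        # Hypothetical: condition generation on a PBR roughness pass.
        # "someone/pbr-roughness-controlnet" is a placeholder checkpoint.
        import torch
        from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
        from diffusers.utils import load_image

        controlnet = ControlNetModel.from_pretrained(
            "someone/pbr-roughness-controlnet", torch_dtype=torch.float16
        )
        pipe = StableDiffusionControlNetPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        roughness = load_image("roughness_pass.png")  # grayscale PBR map
        image = pipe(
            "editorial portrait, natural skin",
            image=roughness,
            num_inference_steps=30,
        ).images[0]
        image.save("portrait.png")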

  • @dadrian 2 hours ago

    Nice idea. If I find time, I think I'll bundle that whole workflow into a Nuke Gizmo, as I do my picture retouching in Nuke anyway.

  • @JacekPilarski 12 hours ago +1

    As someone who has been using Comfy for 1.5 years, I think you should focus on integrating img2img and upscaling/latent-upscale techniques to maximize the potential of the model, rather than trying to fix complex details like human skin using "randomly generated noise" in one go. Start by not generating with guidance >3, for both distilled and de-distilled models.

    • @risunobushi_ai 11 hours ago +1

      IMO there's nothing inherently wrong with using random noise for skin – I'm testing a theory about applying 3D shading techniques in a different medium, rather than relying on upscaling. If the theory is sound, then I can develop a model-agnostic technique that can be applied to any model, regardless of how good it is at generating skin textures. If I'm wrong, I'll have fun trying to build things, so that's good!
      I've been working in gen AI for four years now, and I like researching how to apply different, non-gen-AI techniques to the medium.
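
      As a pixel-space illustration of the kind of noise pass being discussed, here is a minimal numpy/Pillow sketch; the skin mask is assumed to come from a separate segmentation step:

        # Blend monochrome Gaussian noise into the image, masked to the
        # skin, to fake high-frequency texture (grain, pores).
        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("portrait.png").convert("RGB")).astype(np.float32)
        mask = np.asarray(Image.open("skin_mask.png").convert("L")).astype(np.float32) / 255.0

        grain = np.random.normal(0.0, 8.0, img.shape[:2])  # sigma ~8/255 of grain
        out = img + grain[..., None] * mask[..., None]

        Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save("portrait_grained.png")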

    • @JacekPilarski 11 hours ago

      @risunobushi_ai I get your point, but your results won't be realistic in the sense of achieving photorealism with randomly generated patterns. While every model is better than the previous one, there's no point in adding more with messy noise. I'm just trying to be helpful, because I've watched your channel from the beginning. You should maybe try playing with latent upscale methods, along with "Redux Inpaint IP adapted", where you could source a skin texture reference from the input image, for example. That would be something.

  • @lucifer9814 1 hour ago

    Is it just me, or does anyone else have an issue running the FaceDetailer node? With the most recent update, a lot of ComfyUI nodes broke, including pretty much all the face ID tools like PuLID, EcomID, and the rest of that category. But apart from those, I can't seem to run the FaceDetailer node either; it gives me a huge error each time I run it.

  • @freneticfilms7220 21 minutes ago

    Sameface fix? Realistic checkpoint?

  • @kennedysworks 13 hours ago +1

    One more thing: I think it would be even better to use the "FACE PARSING" custom node together with this.

    • @risunobushi_ai 13 hours ago +1

      Yep, that one's better! Although it involves a ton of nodes.

  • @bgtubber 13 hours ago +1

    Funnily enough, most of the popular SDXL and even SD 1.5 models create much more realistic skin textures than Flux without having to resort to weird tricks. 🤔

    • @JacekPilarski 11 hours ago

      Then you are doing something wrong; Flux gives pure realism if used with the correct settings.

    • @bgtubber 11 hours ago +1

      @JacekPilarski Yes, there are ways to get better realism, but not without losing prompt adherence. For me, the default settings, which give good prompt adherence, have always given me plastic-looking skin (and a more CGI-like appearance overall). I know you can decrease Flux Guidance from 3.5 to ~2 for more realistic output, but then you start losing image coherency and prompt adherence. It's basically a trade-off between realism and prompt adherence. Do you have any suggestions for getting more realistic results while also keeping the prompt adherence and coherency of the higher Flux Guidance?
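
      For reference, the setting being discussed here maps to guidance_scale in diffusers; a minimal sketch, assuming FLUX.1-dev:

        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
        ).to("cuda")

        # ~3.5 is the Dev default; values toward 2.0 tend to look less
        # plastic at the cost of prompt adherence and coherency.
        image = pipe(
            "editorial portrait, natural skin texture",
            guidance_scale=2.0,
            num_inference_steps=28,
        ).images[0]
        image.save("flux_portrait.png")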

    • @ChanhDucTuong 2 hours ago

      Do you mean base SDXL/SD1.5 vs. Flux Dev? Or do you mean the best finetuned SDXL/SD1.5 vs. Flux Dev?
      The first one is not true IMO, and the second one is not fair.

  • @Vinz-VYG 13 hours ago

    Why not use LoRAs? There are plenty available to improve skin.

    • @kennedysworks 13 hours ago

      I've done a lot of testing... LoRAs can also distort the underlying forms. In the end, doing post-processing seems like the better option.

    • @risunobushi_ai 13 hours ago +2

      This is a first step towards a model-agnostic solution, so I wanted to experiment with as few models as I could, in order to find a solution that works regardless of the models used to generate the base image.

    • @Vinz-VYG 12 hours ago

      @risunobushi_ai I understand, thank you for your answer. But to me, it seems more like 'I need to fix my mistake' than 'I chose the right tools for the job'. Nevertheless, the approach taken in the video is interesting.

    • @WhySoBroke 5 minutes ago

      LoRAs are small and limited, and also limited in terms of knowledge of where to inject the noise. The reason is that style LoRAs are trained on a variety of images and work best when applied to images similar to the training dataset. To improve skin, I normally use a second-pass refiner or model-based upscaling, but both methods increase the time it takes to generate the image, which is why I think this is an interesting approach.
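
      A minimal sketch of such a second pass, assuming SDXL via diffusers: a plain pixel-space 2x upscale followed by a low-strength img2img re-denoise (a latent upscale would interpolate the latents instead of the pixels):

        import torch
        from diffusers import StableDiffusionXLImg2ImgPipeline
        from diffusers.utils import load_image

        refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")

        first = load_image("portrait.png")  # output of the first pass
        upscaled = first.resize((first.width * 2, first.height * 2))

        # Low strength keeps the composition while re-synthesizing fine
        # texture (pores, vellus hair) at the higher resolution.
        final = refiner(
            "editorial portrait, detailed skin",
            image=upscaled,
            strength=0.35,
            num_inference_steps=30,
        ).images[0]
        final.save("portrait_2x.png")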

  • @luisfelipemurguiaramos659 13 hours ago +1

    Your solution is very promising and elegant. It can work not only for skin but for any type of material, as this issue isn't limited to skin but affects almost the entire image (plastic textures in clothing, etc.). Additionally, I notice that many limitations stem from technical constraints in ComfyUI nodes, which could be resolved by creating a custom node.
    Mateo (ua-cam.com/video/tned5bYOC08/v-deo.html) also presented a Noise Injection approach, which adds more "details" by injecting noise.
    It's ironic that diffusion images lack noise when they are generated from pure noise itself.
    We could collaborate on creating a custom node to solve this issue. I have extensive knowledge of ComfyUI's operation and node development, but I would benefit from a professional creative's perspective.
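
    As a starting point for that kind of node, a hypothetical skeleton; the class and parameter names below are placeholders, but the INPUT_TYPES / RETURN_TYPES / NODE_CLASS_MAPPINGS layout is ComfyUI's standard custom-node convention:

      import torch

      class SkinNoiseInject:
          """Hypothetical node: add masked monochrome noise to an image."""

          @classmethod
          def INPUT_TYPES(cls):
              return {
                  "required": {
                      "image": ("IMAGE",),
                      "mask": ("MASK",),
                      "strength": ("FLOAT", {"default": 0.05, "min": 0.0, "max": 1.0}),
                  }
              }

          RETURN_TYPES = ("IMAGE",)
          FUNCTION = "inject"
          CATEGORY = "postprocess"

          def inject(self, image, mask, strength):
              # ComfyUI images are [B, H, W, C] floats in 0..1; masks are [B, H, W].
              noise = torch.randn_like(image[..., :1])
              out = image + noise * strength * mask.unsqueeze(-1)
              return (out.clamp(0.0, 1.0),)

      NODE_CLASS_MAPPINGS = {"SkinNoiseInject": SkinNoiseInject}
      NODE_DISPLAY_NAME_MAPPINGS = {"SkinNoiseInject": "Skin Noise Inject"}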