Photo to Cartoon Stable Diffusion: Easy Step-by-Step Guide - Forge UI

  • Published 1 Nov 2024

COMMENTS • 53

  • @pixaroma
    @pixaroma  6 months ago +1

    You can download the cartoon model from here
    civitai.com/models/297501?modelVersionId=357959
    You can try other cartoon models, but this one worked OK for me; juggernautxl v9 kind of works, but not all the time.
    After the subject, you can add to the prompt:
    professional 3D design of a cute and adorable cartoon character with big eyes, pixar art style
    or
    professional 3D design of a cartoon character, pixar art style
    or mention the type of the object, like: portrait of a monster, white and red, creature, professional 3d design of a cartoon creature, pixar art style
    For the negative prompt, which you can also find on the model page, use:
    (worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2)

    • @mr_fries1111
      @mr_fries1111 5 months ago

      Awesome, cheers my friend.
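
The prompt recipe in the comment above (subject first, then a style tail, plus the model page's negative prompt) can be sketched as a tiny helper. The function and constant names here are mine for illustration, not part of any tool:

```python
# Sketch of the prompt recipe from the comment above: subject first,
# then the Pixar-style tail; the negative prompt is the one quoted from the model page.

CARTOON_TAIL = (
    "professional 3D design of a cute and adorable cartoon character "
    "with big eyes, pixar art style"
)

NEGATIVE_PROMPT = (
    "(worst quality, low quality, normal quality, lowres, low details, "
    "oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, "
    "bad photo, bad photography, bad art:1.4), (watermark, signature, text font, "
    "username, error, logo, words, letters, digits, autograph, trademark, name:1.2)"
)

def build_prompt(subject: str, tail: str = CARTOON_TAIL) -> str:
    """Place the style tail after the subject, as the comment suggests."""
    return f"{subject}, {tail}"

# The monster example from the comment, with its object-specific tail:
print(build_prompt("portrait of a monster, white and red, creature",
                   "professional 3d design of a cartoon creature, pixar art style"))
```

The resulting string goes in the positive prompt box and `NEGATIVE_PROMPT` in the negative one, whichever UI you use.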

  • @lxic-bz8hf
    @lxic-bz8hf 3 months ago +1

    Man, all your tutorials are awesome in terms of comprehensive explanation, brevity, and without disturbing the follower with nonsense. Thank you for sharing your knowledge with us.🙏🏻

  • @SumoBundle
    @SumoBundle 6 months ago +1

    Absolutely mind-blown by the possibilities this opens up for content creation! Your step-by-step guide makes it seem so easy and the results are just stunning. Can't wait to try this out myself, thank you!

  • @Kryptonic83
    @Kryptonic83 6 months ago +1

    Some great tips, thanks. This in combination with Instant-ID in Forge seems to work well: using InsightFace (InstantID) as the ControlNet preprocessor and ip-adapter-instant-id as the ControlNet model. It seems to allow a bit more freedom with denoising strength while still keeping the face features.

  • @UmarandSaqib
    @UmarandSaqib 6 months ago

    This is what I was looking for! Perfect

  • @VieiraVFX
    @VieiraVFX 4 months ago

    Nice tutorial, dude! Thank you so much!

  • @MrRetobor
    @MrRetobor 4 months ago

    That was great! I need a tutorial for cartoon to realistic photo :) I tried the same thing the other way around, but somehow it always looks like trash.

  • @filmychris
    @filmychris 3 months ago

    Great tutorial! However, I am getting an error that reads:
    NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
    Time taken: 1 min. 45.5 sec.

    • @pixaroma
      @pixaroma  3 months ago

      Did you try the settings that error recommends? Also, Forge has had several updates; the latest version isn't the stable one, so you might want to switch to a different fork or to a stable version. I used a version that starts with 29.

    • @filmychris
      @filmychris 2 months ago

      @pixaroma Thank you for your response. I've since upgraded my GPU and am not experiencing the issue anymore.
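
For anyone else hitting the NansException quoted above, the two flags it mentions are launch-time arguments, not in-app settings. A sketch, assuming a stock Forge/A1111 install where the launch scripts are `webui.sh` (Linux/macOS) and `webui-user.bat` (Windows):

```shell
# Linux/macOS: pass the flag directly to the launch script.
./webui.sh --no-half              # run models in full precision (fixes most NaN cases)

# Windows: add the flag to COMMANDLINE_ARGS in webui-user.bat instead:
#   set COMMANDLINE_ARGS=--no-half

# Last resort only -- this hides the check instead of fixing the cause,
# and can produce black images:
# ./webui.sh --disable-nan-check
```

The "Upcast cross attention layer to float32" toggle in Settings > Stable Diffusion is the lighter-weight fix the error also suggests, and it is worth trying first.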

  • @igibomba
    @igibomba 6 months ago

    Just found your YouTube channel, great video! Will definitely check out the rest.

  • @farhang-n
    @farhang-n 6 months ago

    thanks

  • @utkutoptas5246
    @utkutoptas5246 6 months ago

    Which one do you suggest for turning real photos into Disney/Pixar style: Fooocus, Forge, or Automatic1111?

    • @pixaroma
      @pixaroma  6 months ago

      I think it's not about which interface you use but about the checkpoint model. Some models are trained on more cartoon images or in a cartoon style, so it's easier to get that style; any UI should work if you have a good model.

    • @utkutoptas5246
      @utkutoptas5246 6 months ago

      @pixaroma Thanks for the response. Do you have any suggestions for improving eyes and hands? When I try your method from the video, the eyes and hands come out weird most of the time.

    • @pixaroma
      @pixaroma  6 months ago

      Depends on the model; some have better eyes and hands than others, but in general AI still struggles with that sometimes. Cartoon eyes also come in all kinds of styles, which can affect the result, and hands are a general problem for all AI: there are so many possible hand poses that it still makes mistakes.

  • @hahuyson
    @hahuyson 5 months ago

    Can you tell me how your computer is configured to run SD? I see the rendering speed is very fast, and I want to buy a computer to do the work you are doing! Thanks

    • @pixaroma
      @pixaroma  5 months ago

      For speed, I get about 5 seconds for a 1024px image; I often speed up the video to avoid the wait. I have an RTX 4090 video card with 24GB of VRAM. If you want to run Stable Diffusion, I recommend an Nvidia card with at least 8GB of VRAM; the more VRAM, the faster you will generate. The rest of the system is not so important; normal RAM also helps a little, like 8GB or 16GB. My PC was too expensive, but I needed it for 3D renders and such, so you can probably go with a less expensive PC.
      My config:
      - CPU: Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700), box
      - GPU: GIGABYTE AORUS GeForce RTX 4090 MASTER, 24GB GDDR6X, 384-bit
      - Motherboard: GIGABYTE Z790 UD, Intel Socket LGA 1700
      - RAM: 128GB Corsair Vengeance DIMM DDR5 (4x32GB), CL40, 5200MHz
      - SSD: Samsung 980 PRO, 2TB, M.2
      - SSD: WD Blue, 2TB, M.2 2280
      - Case: ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU cooler: Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU: Gigabyte AORUS P1200W 80+ Platinum Modular, 1200W

    • @hahuyson
      @hahuyson 5 months ago

      @pixaroma If the photo is 512px, how long will it take?

    • @pixaroma
      @pixaroma  5 months ago

      @hahuyson I am not at the PC to test it, but SDXL does better images at 1024px and only v1.5 does better at 512px, so since the SDXL models appeared I haven't generated lower than 1024px anymore. It would probably be way faster, though it also depends on the model and the number of steps; with models like Lightning and Hyper you can generate way faster.

    • @hahuyson
      @hahuyson 5 months ago

      @pixaroma thank you so much!

    • @pixaroma
      @pixaroma  5 months ago

      Tested this morning on the SDXL model Juggernaut X with 20 sampling steps: 512x512px took 1.4 sec | 768x768px took 2.0 sec | 1024x1024px took 3.6 sec.

  • @verdoemme
    @verdoemme 1 month ago

    I'm using Forge as well; I downloaded the model and used the same settings, but my img2img pictures just don't become cartoony. It's driving me crazy.

    • @pixaroma
      @pixaroma  1 month ago

      I didn't try the new version; they changed a lot of things since I did the video, maybe including some settings. It also depends on the image and the prompt.

    • @verdoemme
      @verdoemme 1 month ago +2

      @pixaroma I found it: in the UI settings my Forge was still set to Flux. Once I changed that, it worked! Thanks for the tutorial!

  • @StringerBell
    @StringerBell 6 months ago +2

    Why don't you just use the new IP-Adapter 2 style transfer? It's far superior and more consistent than that.

    • @pixaroma
      @pixaroma  6 months ago

      I will check it out. I was assuming it transfers a style, while here it just switches from photo to cartoon; plus this is faster, and ControlNet still errors on image sizes that are not divisible by 64.

    • @8561
      @8561 6 months ago

      What photo would you use as a reference image?

    • @pixaroma
      @pixaroma  6 months ago

      Depends on which image you want to make look cartoony.

    • @pixaroma
      @pixaroma  6 months ago

      @StringerBell I saw only ComfyUI workflows; do you know if it works with Automatic1111 or Forge, or where I can find the models for download?

  • @accountgoogle-b9d
    @accountgoogle-b9d 1 month ago

    I'm following the exact steps, but my image doesn't come out as cartoony as yours :(

    • @pixaroma
      @pixaroma  1 month ago

      Probably something changed in the interface since I did the video, I'm not sure. If it's the same model and the same settings, it must be the new interface; maybe they updated it. You can see how I did it in the video.

  • @nghia-kientrucsu5709
    @nghia-kientrucsu5709 6 months ago

    Please guide me to create architectural images from sketches

    • @pixaroma
      @pixaroma  6 months ago

      Until I make a new video for Forge, you can try this one: ua-cam.com/video/IBNuALJuOgw/v-deo.html Use Juggernaut XL v9 instead, and you can find ControlNet models here (try Canny): huggingface.co/lllyasviel/sd_control_collection/tree/main Just make sure the image width and height are divisible by 64; Forge has a bug that gives an error otherwise, so test with 1024x1024px first to see if it works.

  • @cesaragostinho1089
    @cesaragostinho1089 5 months ago

    Hi, I'm facing some problems generating an image. Basically, at the beginning it's OK, but in the end the image is pixelated, blurry, a terrible image.

    • @pixaroma
      @pixaroma  5 months ago

      I think it has something to do with the VAE; change it to Automatic so it picks the VAE automatically and see if that helps. Another cause can be a CFG scale that is too big.

    • @cesaragostinho1089
      @cesaragostinho1089 5 months ago

      @pixaroma It worked!!! Thanks a lot, wonderful tutorial

    • @cesaragostinho1089
      @cesaragostinho1089 5 months ago

      Let me ask you something: is there a way of transforming a photo of a dog or a person, changing the pose and the environment, without changing their personal characteristics? For example, my dog died in February, and I'd like to see a photo-cartoon of him running in the sky with clouds...

    • @pixaroma
      @pixaroma  5 months ago +1

      Maybe only if you train a LoRA with it. Or you can try inpainting: keep the head and inpaint everything else. It's quite difficult.

    • @cesaragostinho1089
      @cesaragostinho1089 5 months ago

      @pixaroma Thank you, I'll try.

  • @evelnogueira3112
    @evelnogueira3112 6 months ago +1

    Does anyone know how to fix this error? TypeError: 'NoneType' object is not iterable

    • @pixaroma
      @pixaroma  6 months ago

      Are you using an image width and height that is not divisible by 64? Does it work at 1024x1024px, or do you get that message only at other sizes?

    • @pixaroma
      @pixaroma  6 months ago +1

      Most NoneType errors can be fixed by enabling Pad prompt/negative prompt in Settings -> Optimizations, or by setting the width and height to a multiple of 64.
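
The divisible-by-64 rule above is easy to apply with a small helper before typing dimensions into the UI. A sketch; the function name is mine, not a Forge API:

```python
def snap_to_multiple(value: int, multiple: int = 64) -> int:
    """Round an image dimension to the nearest multiple of `multiple` (never below it)."""
    return max(multiple, round(value / multiple) * multiple)

# A 1000px side becomes 1024, a size Forge accepts without this error.
print(snap_to_multiple(1000))  # -> 1024
print(snap_to_multiple(512))   # -> 512 (already a multiple of 64)
```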

    • @evelnogueira3112
      @evelnogueira3112 6 months ago

      @pixaroma Thanks, I will try.

    • @EdisiSpecial
      @EdisiSpecial 2 months ago

      What kind of software do I use to open the downloaded file? I'm so sorry, I'm a newbie in AI.

    • @pixaroma
      @pixaroma  2 months ago

      Check this video on how to install: ua-cam.com/video/BFSDsMz_uE0/v-deo.html

  • @Fuse-q8z
    @Fuse-q8z 2 months ago

    TypeError: 'NoneType' object is not iterable, any ideas?

    • @pixaroma
      @pixaroma  2 months ago +1

      I usually got that when I used a width and height that was not divisible by 64, so try 1024x1024 to make sure it's not because of that. It can also come from other things, since it's a general error caused by other errors; sometimes you can see more details in the command window.

    • @Fuse-q8z
      @Fuse-q8z 2 months ago

      @pixaroma I was working with 1024x1024 along with img2img and still had it.

    • @pixaroma
      @pixaroma  2 months ago +1

      I was using Forge with a commit that starts with 29; maybe the updated version doesn't work anymore. I have switched to ComfyUI to get the latest updates faster, and I am trying to recreate all the Forge workflows in ComfyUI.

    • @Fuse-q8z
      @Fuse-q8z 2 months ago

      @pixaroma Thank you so much for the info. Will look into it tomorrow :)