ComfyUI Tutorial Series: Ep12 - How to Upscale Your AI Images

  • Published Feb 9, 2025
  • In Episode 12 of the ComfyUI tutorial series, you'll learn how to upscale AI-generated images without losing quality. Using ComfyUI, you can increase the size of your images while enhancing their sharpness and detail.
    We'll cover the process of installing the necessary nodes, choosing models like Siax and Anime Sharp for different image styles, and creating workflows that deliver quick, high-quality results. You’ll see how to compare upscaled images and fine-tune settings for the best output, whether you're working with portraits, landscapes, or illustrations.
    This tutorial is perfect for anyone looking to improve their AI-generated art with sharper, larger images. Whether you’re using SDXL, Flux, or any other models, you’ll learn how to upscale efficiently.
    Download all the workflows from Discord; look for the channel pixaroma-workflows
    Go to Manager, then Model Manager
    Sort by type: Upscale
    Install 4x_NMKD-Siax_200k
    Install 4x-AnimeSharp
    Refresh ComfyUI
    Install these custom nodes if you don't have them:
    ControlAltAI Nodes
    ComfyUI-PixelResolutionCalculator
    ComfyUI Easy Use
    rgthree's ComfyUI Nodes
    Restart ComfyUI (a quick file check is sketched after this list)
    Unlock exclusive perks by joining our channel:
    / @pixaroma
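A quick way to confirm the downloads landed where ComfyUI expects them is to list the relevant folders. This is only a sketch under the assumption of a default portable install; the exact model filenames (including extensions) and custom node folder names may differ on your machine.

```python
import os

# Minimal sanity check, assuming a default portable install layout.
# Adjust COMFY_ROOT to your own setup.
COMFY_ROOT = r"ComfyUI_windows_portable\ComfyUI"

# Upscale models installed via Manager > Model Manager should end up here.
upscale_dir = os.path.join(COMFY_ROOT, "models", "upscale_models")
expected_models = ["4x_NMKD-Siax_200k.pth", "4x-AnimeSharp.pth"]  # assumed filenames

# Custom node packs installed via the Manager should end up here.
custom_nodes_dir = os.path.join(COMFY_ROOT, "custom_nodes")

for name in expected_models:
    path = os.path.join(upscale_dir, name)
    print(("OK       " if os.path.isfile(path) else "MISSING  ") + path)

if os.path.isdir(custom_nodes_dir):
    print("Installed custom node folders:")
    for folder in sorted(os.listdir(custom_nodes_dir)):
        print("  " + folder)
```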

COMMENTS • 152

  • @SebAnt
    @SebAnt 5 months ago +30

    A small token of my appreciation. Thank you for taking so much time to thoroughly test, select the best, and so clearly explain comfyUI to us. The workflows on your discord work like a charm 🙏🏽

    • @pixaroma
      @pixaroma 5 months ago

      Thank you so much ☺️ glad it helped

  • @RiftWarth
    @RiftWarth 1 month ago +4

    Amazing video. Great job on this and thanks for the workflows. 🙂

    • @pixaroma
      @pixaroma 1 month ago +1

      thank you for support 🙂

  • @santhoshvasamsetti9165
    @santhoshvasamsetti9165 2 months ago +5

    Great work! Appreciate your time and effort.

    • @pixaroma
      @pixaroma 2 months ago

      Thank you so much 😊

  • @Patricia_Liu
    @Patricia_Liu 4 months ago +3

    Thanks!

    • @pixaroma
      @pixaroma 4 months ago +1

      Thank you so much for your support😊

    • @Patricia_Liu
      @Patricia_Liu 4 months ago

      @@pixaroma Thank YOU! 💖

  • @RandMpinkFilms
    @RandMpinkFilms 2 months ago

    Man, I subbed to your channel after giving this a try. This is by far the best upscaling tutorial and workflow I've come across in the past year. I've seen about 15. No joke. A huge thank you!

    • @pixaroma
      @pixaroma 2 months ago

      Thank you so much🙂

  • @eineatombombe
    @eineatombombe 3 months ago +7

    this is like the only tutorial without attractive woman clickbait thumbnail

  • @jorgeluismontoyasolis9800
    @jorgeluismontoyasolis9800 4 months ago

    Thank you so much. Your work is amazing and highly appreciated. I usually find tutorials about this topic that don't show the details behind the process or the role of each node.

  • @lucifer9814
    @lucifer9814 5 months ago +1

    these upscalers are absolutely amazing, thank you

  • @philippeheritier9364
    @philippeheritier9364 5 months ago +1

    wow finally a good upscaler, thank you very much.

  • @ESheridan
    @ESheridan 4 months ago +1

    Amazing! Thank you so much for your special explanation!🤩🤩🤩

  • @SumoBundle
    @SumoBundle 5 months ago +2

    Very detailed tutorial. Congrats and thank you for the effort

  • @WanderlustWithT
    @WanderlustWithT 4 months ago +1

    Bro your tutorials and workflows are super useful, thank you!

  • @devon9374
    @devon9374 3 months ago

    Another banger, I love open source AI so much ❤

  • @yourenotallowed7494
    @yourenotallowed7494 23 days ago

    You really help me understand ComfyUI. Love you

  • @jamesrademacher7873
    @jamesrademacher7873 1 month ago

    wow I got amazing results with your workflow!

    • @pixaroma
      @pixaroma 1 month ago

      Great to hear :)

  • @jennifertsang6572
    @jennifertsang6572 5 months ago

    Very detailed video & great information!

  • @VietnamShorts
    @VietnamShorts 2 months ago

    very good, precise explanation. Thank you.

  • @MarjoleinPas-interieurontwerp
    @MarjoleinPas-interieurontwerp 2 months ago

    thank you so much, this is an amazing workflow

  • @Mypstips
    @Mypstips 5 months ago +2

    Thank you!

  • @Fayrus_Fuma
    @Fayrus_Fuma 5 months ago +1

    Eeeeeeee boy! Really Thx man!

  • @Jordan.mysn808
    @Jordan.mysn808 1 month ago

    Thanks a lot ! you're the best !

  • @АндрейАлександрович-ч7ж
    @АндрейАлександрович-ч7ж 5 months ago +2

    Thank You

  • @aquaartistrytutorials
    @aquaartistrytutorials 5 months ago +1

    Awesome!

  • @alexrosas9525
    @alexrosas9525 13 days ago

    Thank you so much !!

  • @59Marcel
    @59Marcel 5 months ago

    Fantastic tutorial, Thank you.

  • @lowrider6419
    @lowrider6419 5 months ago

    Impressive 👍

  • @eslamafifi1020
    @eslamafifi1020 4 months ago

    Great tutorial, thank you very much

    • @pixaroma
      @pixaroma 4 months ago

      Glad I could help ☺️

  • @jonrich9675
    @jonrich9675 5 months ago +3

    Next video please make a tutorial on how to use Flux Controlnet and how to make good images with it. 👍

    • @pixaroma
      @pixaroma 5 months ago +1

      I will see what I can do ☺️

  • @haojunphalanx1696
    @haojunphalanx1696 29 days ago

    very good

  • @MannyGonzalez
    @MannyGonzalez 4 months ago

    I am very surprised this works so well. I have done pixel-space upscaling using [euler/beta] with horrible results, and even with very low denoise (0.20-0.35) the composition changes too much.
    Using dpmpp_2m/karras seems to be the trick.
    Thank you.

  • @vb6code
    @vb6code 5 months ago

    A beautiful video and a clear explanation.

  • @saarhadad
    @saarhadad 3 months ago

    tnx alot :D

  • @Queenbeez786
    @Queenbeez786 3 months ago +2

    Thank you so much, angel. Now tell me, how do you get those performance bars on the right above your settings? Thank you

    • @pixaroma
      @pixaroma 3 months ago +3

      Install the Crystools node from the Manager

  • @patrickfougere0001
    @patrickfougere0001 5 months ago +3

    Great video! (Question: 1.8) Where is the setting so you can see the CPU, GPU, etc. on the menu GUI?

    • @pixaroma
      @pixaroma 5 months ago +2

      It's a custom node; install Crystools from the custom nodes Manager

    • @patrickfougere0001
      @patrickfougere0001 5 months ago +2

      @@pixaroma thank you! I'll check that out tonight!!

    • @patrickfougere0001
      @patrickfougere0001 5 months ago +1

      @@pixaroma I would also love to see a PROPER video on text syntax, tips and tricks for the "CLIP Text Encode (Prompt)" node. Like, what is the proper format? When should I use 'underscores'? How does {(option1|option2|option3):1.2} work in an actual flow? I would love to see a video on this! Great work, keep it up.

  • @konnstantinc
    @konnstantinc 5 months ago +1

    Niice

  • @djwhispers3157
    @djwhispers3157 1 month ago

    i think you are awesome

    • @djwhispers3157
      @djwhispers3157 1 month ago

      i downloaded and tried out the workflow. You are a saint, an angel from above of workflow heaven. Thank you so much.

    • @djwhispers3157
      @djwhispers3157 1 month ago +1

      Also, I modded the workflow a little bit to do image-to-image. Magnifique.

    • @djwhispers3157
      @djwhispers3157 1 month ago

      do you have a workflow on how to change clothing on character models?

    • @pixaroma
      @pixaroma 1 month ago +1

      I don't have one; there is something online with "try on", but it didn't work for me as expected

    • @djwhispers3157
      @djwhispers3157 1 month ago

      @@pixaroma Right, many clothing-swap videos out there don't work. OK, we will wait.

  • @Babubabu-f1d2y
    @Babubabu-f1d2y 4 months ago

    Please make a ComfyUI video on installing and using MimicMotion. I really appreciate your videos; they are very clear compared to other YouTubers'. Can MimicMotion be used in ComfyUI or SwarmUI?

    • @pixaroma
      @pixaroma 4 months ago

      I saw there are some ComfyUI nodes for MimicMotion, so I will check it out, but probably in later episodes; there is still more to cover with static images before I move to motion and video

  • @CrunchyKnuts
    @CrunchyKnuts 3 months ago

    Even with my 3080 Ti, I was having a lot of issues with freezing on this one. For some reason I haven't quite figured out yet, Comfy isn't clearing VRAM appropriately. My solution was just to put Clean VRAM nodes after most operations. It added a couple of seconds, but prevented freezing.

    • @pixaroma
      @pixaroma 3 months ago

      Not sure; you can try swapping in VAE Encode (Tiled) and VAE Decode (Tiled), the versions with "tiled" in the name (the idea is sketched after this thread)

    • @CrunchyKnuts
      @CrunchyKnuts 3 months ago

      @@pixaroma I'll give that a try and let you know

    • @MrCorris
      @MrCorris 1 month ago

      @@CrunchyKnuts YOU NEVER GOT BACK TO HIM, SCANDALOUS!
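For anyone curious what the tiled variants do differently, here is a rough, simplified sketch of the idea (not ComfyUI's actual implementation): the latent is split into tiles, each tile is decoded separately so peak VRAM stays low, and the decoded tiles are stitched back together. Real tiled nodes also overlap and blend tiles; this bare version can show seams at tile borders, which is the "lines" issue mentioned elsewhere in the comments.

```python
import numpy as np

def tiled_decode(latent, decode_fn, tile=64, scale=8):
    """Decode a latent tile by tile to keep peak memory low.

    latent:    array of shape (C, H, W) in latent space
    decode_fn: stand-in for the VAE decoder; assumed to map a (C, h, w)
               latent tile to a (3, h*scale, w*scale) image tile
               (SD-style 8x spatial factor; no overlap handling here)
    """
    C, H, W = latent.shape
    out = np.zeros((3, H * scale, W * scale), dtype=np.float32)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            lt = latent[:, y:y + tile, x:x + tile]
            img = decode_fn(lt)  # only one tile is ever "in VRAM" at a time
            out[:, y * scale:(y + lt.shape[1]) * scale,
                   x * scale:(x + lt.shape[2]) * scale] = img
    return out

# Dummy decoder just to make the sketch runnable end to end.
fake_decode = lambda lt: np.repeat(np.repeat(lt[:3], 8, axis=1), 8, axis=2)
print(tiled_decode(np.random.rand(4, 128, 128).astype(np.float32), fake_decode).shape)
```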

  • @Daniel-fo6rv
    @Daniel-fo6rv 3 months ago

    👏

  • @jigneshparmar678
    @jigneshparmar678 4 months ago

    👏🏻💯🙏🏻

  • @AlexanderGarzon
    @AlexanderGarzon 4 months ago

    Personally, I prefer to leave the model-upscaler step for last and have the latent img2img upscale as the second step; that way you make good use of your VRAM, speed up the process, and the result ends up the same. So, TL;DR: in your workflow, I would swap generations 2 and 3.

    • @pixaroma
      @pixaroma 4 months ago

      I didn't get the right settings with the latent img2img; the image had some artifacts with latent compared to the pixel method. Can you share how you did it on Discord? Thanks

    • @mrrubel8841
      @mrrubel8841 29 days ago

      Hi @AlexanderGarzon, so first you generate the image, then you bypass these nodes, then you start the upscaling process? Is that what you meant?

  • @Moral114
    @Moral114 1 month ago

    Great tutorial. One question: how could the upscaled results look similar to the first one since they go through a different seed in the second KSampler? Thanks.

    • @pixaroma
      @pixaroma 1 month ago

      You can reduce the denoise strength to make it look more similar

    • @Moral114
      @Moral114 1 month ago

      @@pixaroma Thanks for the quick response. Could feeding both Ksampler with the same seed also work?

    • @pixaroma
      @pixaroma 1 month ago +1

      It works, but it is working from the same image and gets super sharp or overcooked, like an HDR look, so I avoid using the same seed (how denoise controls this is sketched below)
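As background on the denoise knob discussed above: in an img2img pass, the input image is encoded to a latent, partially re-noised, and then denoised for only part of the schedule. Roughly, a KSampler with denoise d out of N steps runs about d*N denoising steps, so a lower denoise keeps more of the original image. A small illustrative helper of my own (a rule of thumb, not a ComfyUI API):

```python
def img2img_plan(steps: int, denoise: float):
    """Rough rule of thumb: img2img keeps more of the input as denoise drops.

    With denoise=1.0 the latent is fully re-noised (behaves like txt2img);
    with low denoise only the last part of the schedule runs, so the result
    stays close to the upscaled input while still adding some detail.
    """
    effective_steps = max(1, round(steps * denoise))
    return {
        "total_steps": steps,
        "denoising_steps_run": effective_steps,
        "steps_skipped": steps - effective_steps,
    }

# e.g. 20 steps at denoise 0.5 -> about 10 denoising steps actually run
print(img2img_plan(20, 0.5))
print(img2img_plan(20, 1.0))  # fully re-noised; composition can change a lot
```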

  • @robrever
    @robrever 5 months ago

    Cool video! I'll definitely try out your approach. However, in AI Search's comfyUI tutorial, he says using tile upscaling yields far better results. Have you tried his method to compare?

    • @pixaroma
      @pixaroma 5 months ago +2

      I tried tiles and the Ultimate SD Upscale with ControlNet, but for me it took longer and the results weren't as good; maybe I didn't find the right settings. I mean, I played with it for a few days and found these settings by accident, and they just worked well enough for me. I wanted something fast. It's not perfect, but it's good enough for what I need. If I find a better way in the future, I will make a new video

  • @yapyh2872
    @yapyh2872 2 months ago

    Cool. which ai voice are you using?

    • @pixaroma
      @pixaroma 2 months ago

      VoiceAir, and they have the voices from ElevenLabs. The voice is called Burt US

  • @timelesshardcore6551
    @timelesshardcore6551 1 month ago

    I made a 4608x3072 image with this method. My GPU (RTX 3080) and CPU were at their limit and they are not happy with me, but I must say the image is really nice. I think it is way too much, but I found the limit of my PC. From now on I'll make them half size and upscale them without the sampler to get 4K 🤣

    • @pixaroma
      @pixaroma 1 month ago +1

      You can also try VAE Decode (Tiled) instead of VAE Decode; maybe that helps with low VRAM

    • @timelesshardcore6551
      @timelesshardcore6551 1 month ago

      With tiles I get some lines in the image, so I was looking for a new solution. I will give it a try. Thanks ✌🏻

  • @septembre1129
    @septembre1129 20 days ago

    Thank you so much for this video. I can't access the Discord invitation. Could I learn the reason why?

    • @pixaroma
      @pixaroma 19 days ago +1

      Try a different browser maybe, or the mobile app; this should work: discord.com/invite/gggpkVgBf3

    • @septembre1129
      @septembre1129 19 days ago

      @@pixaroma Thank you, it worked! :)

  • @DreamAnimate
    @DreamAnimate 5 months ago

    awesome! is there any way to reduce the grain applied after upscaling?

    • @pixaroma
      @pixaroma 5 months ago +1

      It's sharpness from the model; you could try a different upscale model that has different sharpness. I haven't found a solution for that yet. Other upscalers give different results: instead of Siax I tried RealESRGAN x4, which might work for some illustrations but smooths things too much, and 4x_foolhardy_Remacri might work in some cases. I also tried the Image Dragan Photography Filter from the WAS Node Suite custom node, which has a field for sharpness; reducing it to 0.7 or 0.5 reduces the grain slightly and makes it a bit more blurry, but I haven't found a permanent solution yet

    • @DreamAnimate
      @DreamAnimate 5 months ago

      @@pixaroma got it, thanks!

  • @MrZero00000000000000
    @MrZero00000000000000 1 month ago

    Using the "Upscale Image (using Model)" node, then the "Upscale Image By" node set to 0.5 (2x), results in the same workflow run time as running the "Upscale Image By" node at 1.0 (4x).
    Is there any way to improve efficiency by forcing a 4x upscale model to run at 2x, instead of upscaling and then downscaling the image? I tried to find a 2x version of NMKD-Siax, but had no luck.

    • @pixaroma
      @pixaroma 1 month ago

      You can maybe try other models that are 2x only, but that step doesn't take much power anyway; the most consuming part is the KSampler. The bigger the image, the more time it takes, and it can't be much bigger than 2 MP because it will start getting all kinds of lines (see the sketch below)
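To make the 2 MP constraint concrete, here is a small helper (my own illustration, not part of the workflow) that takes the size coming out of the 4x upscaler and picks the largest downscale that keeps the KSampler input under a pixel budget, rounding width and height down to multiples of 64, which is also mentioned later in the comments as helping to avoid line artifacts.

```python
def fit_for_ksampler(width, height, max_megapixels=2.0, multiple=64):
    """Pick KSampler-friendly dimensions: under a megapixel budget, same
    aspect ratio, width/height rounded down to a multiple of 64.
    (Illustrative helper; the 2 MP figure is the rule of thumb from the video.)"""
    budget = max_megapixels * 1_000_000
    scale = min(1.0, (budget / (width * height)) ** 0.5)
    new_w = int(width * scale) // multiple * multiple
    new_h = int(height * scale) // multiple * multiple
    return new_w, new_h

# Example: a 1024x1024 image upscaled 4x by Siax becomes 4096x4096 (16.8 MP),
# far too big for the second sampling pass, so bring it back down first.
print(fit_for_ksampler(4096, 4096))  # -> (1408, 1408), about 1.98 MP
```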

  • @hyejinahn6305
    @hyejinahn6305 4 months ago

    I have a LoRA that I trained with Flux Dev for my beauty product. Can I incorporate the LoRA node into the t2i upscale workflow and change the diffusion model to Flux Dev?

    • @pixaroma
      @pixaroma 4 months ago

      Yes, you can add it between Load Checkpoint and CLIP Text Encode (a rough sketch of the wiring is below)
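For illustration only, here is roughly how that wiring looks in ComfyUI's API-format JSON, expressed as a Python dict. Node IDs, filenames, prompt text, and strengths are placeholders, and the exact loader may differ in your setup (a GGUF/Flux workflow uses its own UNet/CLIP loaders instead of Load Checkpoint).

```python
# Hypothetical fragment of an API-format workflow: checkpoint -> LoRA -> CLIP encode.
workflow_fragment = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_model.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0],   # MODEL output of the checkpoint loader
                     "clip": ["1", 1],    # CLIP output of the checkpoint loader
                     "lora_name": "my_beauty_product_lora.safetensors",
                     "strength_model": 1.0,
                     "strength_clip": 1.0}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1],    # take CLIP from the LoRA node, not the checkpoint
                     "text": "product photo, studio lighting"}},
    # ...the KSampler would then take its MODEL from node "2" and its
    # positive conditioning from node "3".
}
print(list(workflow_fragment))
```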

  • @gameboypaul1702
    @gameboypaul1702 4 months ago

    Thank you... When you add the KSampler at 14:33, is the upscaling now using Flux? And not just the Siax?

    • @pixaroma
      @pixaroma 4 months ago +1

      Yes. I make the image larger and sharper, and then it runs through Flux again, just like you do in image to image; only instead of uploading a new image, I take the image from the previous generation's VAE Decode, make it bigger, and feed it back into a KSampler. So basically it is image to image, but with a bigger image instead of a small one (the chain is sketched below).
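Spelled out as a node chain, the pass described above looks roughly like this. This is pseudocode of my reading of the workflow, not the exact graph from the video; the node names match ComfyUI's stock nodes, but the scale factors, denoise, and model choices are placeholders.

```python
# Rough sketch of the two-pass upscale chain (values are placeholders).
chain = [
    ("KSampler",              {"denoise": 1.0, "note": "base txt2img generation"}),
    ("VAEDecode",             {"note": "latent -> ~1 MP image"}),
    ("UpscaleModelLoader",    {"model_name": "4x_NMKD-Siax_200k.pth"}),
    ("ImageUpscaleWithModel", {"note": "pixel-space 4x enlarge and sharpen"}),
    ("ImageScaleBy",          {"scale_by": 0.5, "note": "net 2x, keeps KSampler input small"}),
    ("VAEEncode",             {"note": "back to latent for the img2img pass"}),
    ("KSampler",              {"denoise": 0.55, "note": "placeholder denoise; refines and adds detail"}),
    ("VAEDecode",             {}),
    ("ImageUpscaleWithModel", {"note": "optional final 4x without sampling"}),
    ("SaveImage",             {}),
]
for name, params in chain:
    print(f"{name}: {params}")
```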

  • @F99_Digital_Dance_Music_AI
    @F99_Digital_Dance_Music_AI 12 days ago

    How can we share the .gguf file between the UNet nodes and the Serge nodes? They require placing the files in different folders, and I think it's not cool to copy/paste 14 gigabytes of Flux Q8 into both folders.

    • @pixaroma
      @pixaroma 12 days ago +1

      I only have it in one folder, like I did in episode 10

  • @ModRebelMockups
    @ModRebelMockups 1 month ago

    If I'm using the full large version of Flux, would I still use the Flux workflows from this vid? They say GGUF, so I'm just not sure.

    • @pixaroma
      @pixaroma 1 month ago

      No, I don't really use the full large one, because it is double the size, slower, and the quality is almost the same

  • @Dunc4n1d4h0
    @Dunc4n1d4h0 3 months ago

    Good old trick with the second sampler works as expected... but how do you deal with those "flux lines" at the final step?

    • @pixaroma
      @pixaroma 3 months ago

      If the image is under 2 megapixels, so it is not too big, and the width and height are divisible by 64, you can usually get an OK result without lines. You could try different upscalers. I cannot use huge images in the KSampler, so I need a plain upscale for the last step; you can drop a Save Image before the final upscaler and use a different upscaler there if you want. So if you start at 1024 px you could get 2048, which is over 2 megapixels; you could go smaller, maybe an initial image around 960, so the final image stays under 2 megapixels. Play with the settings

    • @Dunc4n1d4h0
      @Dunc4n1d4h0 3 months ago

      @@pixaroma Thanks for the reply. I use 0.5 Mpixel with that node, as in the video, with a 16:9 AR, then model upscale 4x scaled by 0.5 downscale... so 0.5 Mpix becomes 2 Mpix (as it upscales in both x and y dimensions), all as usual, and still lines, that is why I'm asking 🙂

    • @pixaroma
      @pixaroma 3 months ago

      @@Dunc4n1d4h0 I only get it on some images; not sure what causes it, but most of the time I don't get any lines. Maybe the prompt influences it somehow, or some settings, but I haven't figured it out. I usually just run a few seeds and pick my favorite :)

  • @last-partizan
    @last-partizan 3 months ago

    Can you please put links to the downloads on some public site, like GitHub?

    • @pixaroma
      @pixaroma 3 months ago

      Only the workflows are on Discord, but that is free; the rest is public, and all the links to models and other stuff point to public sites. For me Discord is easier because I have them all in the same channel, and I can link them in the discussion channel when people need help, so they don't have to leave Discord and can find all they need there.

  • @fernandohildebrand6319
    @fernandohildebrand6319 1 month ago

    Thank you. I can't help financially. I hope the likes and comments bring attention to your channel

    • @pixaroma
      @pixaroma 1 month ago

      Thank you, yes, the likes and comments really help 😊

  • @AnNguyen-pd2xi
    @AnNguyen-pd2xi 4 months ago

    2:40 . Hello. Can you explain how to get the result image to be exactly the same as the original image? Whenever I use this workflow, the result is always different from the original.

    • @pixaroma
      @pixaroma 4 months ago +1

      Are you using the same settings I put in the workflow? Just download that workflow from Discord and test it. You can reduce the denoise on the KSampler, but if it is the same scheduler, sampler, and model, the result should be the same

    • @AnNguyen-pd2xi
      @AnNguyen-pd2xi 4 months ago

      @@pixaroma I created a workflow from a different Sampler with the same structure as your workflow. I noticed that it's basically an image-to-image process and adds upscale after the Sampler. I want to know which parameter determines the result image being similar to the original image but with more details.

    • @pixaroma
      @pixaroma 4 months ago +1

      You can't always have both similar and more detailed: either it is similar and you don't get more details, or it is less similar, so it is not constrained and can add more creativity and detail. You can add a ControlNet to keep things more similar, like Depth or Canny; that way the composition and the lines stay the same, so you can change more things between those lines. I used the settings in the video and needed high denoise; with other schedulers it needed less denoise

    • @AnNguyen-pd2xi
      @AnNguyen-pd2xi 4 months ago

      The resulting image you created includes additional details but still retains the entire face of the character and the composition without using ControlNet. However, when I run my workflow, the result is a completely different image from the original.

    • @pixaroma
      @pixaroma 4 months ago

      @@AnNguyen-pd2xi Are you using the same workflow? I'm not sure what workflow you have there, but the workflow I use works like in the video; if you changed something, it can work differently. So get the same workflow and see if it works, then see what you did differently. Download the workflow from Discord and try it.

  • @大白熊-z2o
    @大白熊-z2o 2 months ago

    Why use the GGUF model instead of the fp8 model? I'm curious

    • @pixaroma
      @pixaroma 2 months ago +1

      The quality of Q8 is similar to fp16, so fp8 is lower quality compared with GGUF Q8. Ranked by quality:
      1. fp16, the original Flux Dev
      2. GGUF Q8
      3. fp8

  • @ValleStutz
    @ValleStutz 3 months ago +1

    I'm using an RTX 3090, but it maxes out my VRAM, so the KSampler can't work

    • @pixaroma
      @pixaroma 3 months ago

      I have included some low-VRAM workflows on Discord for that episode; try those if you don't have enough VRAM

  • @BryceHaymond
    @BryceHaymond 3 months ago

    why do you scale down before scaling up? that loses resolution before upscaling.

    • @pixaroma
      @pixaroma 3 months ago +1

      Because it is too big for the KSampler, and the pixels are replaced anyway when a new image is generated. You can increase it to see, if you have a good video card, but Flux has roughly a 2-megapixel limit; above that it doesn't give such good results

  • @КристинаБуняева-о3в
    @КристинаБуняева-о3в 5 months ago

    Hi (:
    Can you please tell me what other uses upscaling has besides Photoshop? I am making 1280 by 720 art for visual novels. Even if what I use in the game is not this screen resolution but Full HD, the difference is still almost zero. Thanks 🙂

    • @pixaroma
      @pixaroma 5 months ago +2

      I use Topaz Gigapixel AI; it is not free, but it does a good job for me when I need something fast

    • @КристинаБуняева-о3в
      @КристинаБуняева-о3в 5 months ago

      @pixaroma I meant that I'm a rookie. I've read that upscaling is used mostly by Photoshop users. I make art for VN games, where the resolution is 1280 by 720. So even a 4x upscale has no visible effect for visual novels. Or is it useless for my work? 🙂

    • @mrrubel8841
      @mrrubel8841 29 days ago

      Hi @КристинаБуняева-о3в, I am also learning ComfyUI for my visual novels.
      What genre do you write?

    • @mrrubel8841
      @mrrubel8841 29 days ago

      @@pixaroma Why do you need Topaz Gigapixel when the upscaler can do very good upscaling? What does it lack compared to Topaz Gigapixel?

  • @rufu4981
    @rufu4981 4 months ago

    I'm getting Bad Request errors when trying to install upscalers. What might I be doing wrong?

    • @pixaroma
      @pixaroma 4 months ago

      Can you post some screenshots on Discord with the workflow, the error you get, and the command window error? Mention pixaroma there

  • @massaro555
    @massaro555 14 days ago

    Hello, I hope you can help me. I keep getting this error: mat1 and mat2 shapes cannot be multiplied (5280x16 and 64x3072)

    • @pixaroma
      @pixaroma 14 days ago

      That error usually appears when you use models that don't share the same base, so make sure everything you use (models, ControlNet, VAE, LoRA) has the same base: either all SD, all SDXL, or all Flux. You cannot combine them (a small demo of this kind of mismatch follows this thread)

    • @massaro555
      @massaro555 14 days ago

      @@pixaroma I'm using the same as you; it's just that for the VAE it forces me to choose a safetensors file instead of keeping the default

    • @pixaroma
      @pixaroma 14 days ago

      Can you maybe post screenshots of the workflow and the error on Discord, so I can see what models you have there?

    • @massaro555
      @massaro555 14 days ago

      @@pixaroma I posted in the comfyui channel: the problem with the image

    • @pixaroma
      @pixaroma 14 days ago

      Replied on Discord
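For context on what that error means mechanically, here is a generic PyTorch illustration (not the specific layers involved in this workflow): mixing components built for different bases feeds a tensor with one feature size into a layer expecting another, and the matrix multiplication inside fails with exactly this kind of message. Assumes torch is installed.

```python
import torch

# A tiny stand-in for "wrong-base" components: the incoming tensor has 16
# features, but the layer it is fed into was built to expect 64 features.
features_from_one_base = torch.randn(5280, 16)        # e.g. shape (tokens, 16)
layer_from_another_base = torch.nn.Linear(64, 3072)   # expects 64 input features

try:
    layer_from_another_base(features_from_one_base)
except RuntimeError as err:
    # Prints something like:
    # mat1 and mat2 shapes cannot be multiplied (5280x16 and 64x3072)
    print(err)
```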

  • @anastasiiailina5044
    @anastasiiailina5044 3 months ago

    Cannot execute because node UpscaleModelLoader does not exist.: Node ID '#136:6' - hmm, could you please tell me, do you have any ideas?

    • @pixaroma
      @pixaroma 3 months ago

      Did you download and load the model? Can you post a screenshot with the workflow and error on Discord, in the comfyui channel? discord.com/invite/gggpkVgBf3

  • @yinodiaz4290
    @yinodiaz4290 5 months ago

    Do you have any video that helps me install and set up Flux.1 and Comfy, like for noobs? I have a 4090 with 24 GB VRAM

    • @pixaroma
      @pixaroma 5 months ago +1

      Episodes 1, 8 and 10: ua-cam.com/play/PL-pohOSaL8P9kLZP8tQ1K1QWdZEgwiBM0.html

  • @hlyanhtet7120
    @hlyanhtet7120 3 months ago

    I have an Nvidia 2060 Super graphics card; can I try Flux?

    • @pixaroma
      @pixaroma 3 months ago +1

      I have that one too, on an older PC, also with 64 GB of RAM. I use Flux Schnell; Flux Dev takes too much time for me. The Flux GGUF Q4 version

  • @danysvay
    @danysvay 4 months ago

    I get an error: Install failed: 4x-AnimeSharp Bad Request

    • @pixaroma
      @pixaroma 4 months ago

      Try to download it manually and put it in the ComfyUI_windows_portable\ComfyUI\models\upscale_models folder. In the Model Manager, if you click on the model name, a page will open from where you can download it

  • @EntaFahemGhalat
    @EntaFahemGhalat 4 months ago

    I can't join Discord; it says invalid invitation or expired link.

    • @pixaroma
      @pixaroma 4 months ago +1

      Thanks for letting me know; I'm not sure what happened. Here is the new link: discord.gg/gggpkVgBf3

    • @EntaFahemGhalat
      @EntaFahemGhalat 4 months ago

      @@pixaroma you are very welcome

  • @MichaelErickson-i8y
    @MichaelErickson-i8y 4 months ago

    Your Discord link is unfortunately invalid

    • @pixaroma
      @pixaroma 4 months ago

      I changed it in the channel description yesterday, but in some comments and descriptions it remained unchanged; try discord.com/invite/gggpkVgBf3

    • @rezvansheho6430
      @rezvansheho6430 4 months ago

      @@pixaroma Hiii, still an invalid link :((

    • @pixaroma
      @pixaroma 4 months ago +1

      @@rezvansheho6430 I just tested it; it works for me. Click on it and then click on "go to site": discord.com/invite/gggpkVgBf3

    • @rezvansheho6430
      @rezvansheho6430 4 months ago

      @@pixaroma I used a VPN, and now it worked ♥️

  • @m3mee2010
    @m3mee2010 2 months ago

    Your videos are great, but it would help if you slowed down your voice, er... you talk very fast.

    • @pixaroma
      @pixaroma 2 months ago

      Sorry, but the AI voice I use doesn't have a speed option yet; it generates the voice from the text I give it, and I don't have a way to make it talk slower :(

  • @chipjohansen8132
    @chipjohansen8132 4 months ago +1

    I felt like I had followed the steps closely and installed everything correctly, but when I try to queue the image I get the following error message. Can you help me figure out what I am missing? I downloaded your Flux Dev Q8 GGUF IMG2IMG with Upscaler workflow and my screen looks exactly like yours in the YouTube video. Many thanks!
    Prompt outputs failed validation
    UnetLoaderGGUF:
    - Value not in list: unet_name: 'flux1-dev-Q8_0.gguf' not in []
    DualCLIPLoaderGGUF:
    - Value not in list: clip_name1: 't5-v1_1-xxl-encoder-Q8_0.gguf' not in ['clip_l.safetensors', 't5xxl_fp16.safetensors']

    • @chipjohansen8132
      @chipjohansen8132 4 months ago

      Never mind. I started with this Ep 12. I needed to go back to Ep 10 for the proper GGUF installation.

    • @pixaroma
      @pixaroma 4 months ago

      Glad you figured it out; I just woke up. You can always mention me on Discord and post a screenshot ☺️

  • @FemmeResonance
    @FemmeResonance 4 months ago

    I'm so impressed with your work and all the effort you have put in here (and in your Discord). It really helps beginners like me a lot. I appreciate it. And your like count should be OVER 30K. For people who read this message, please give a LIKE!!!!! It doesn't cost you anything! Thank you, love and respect. ❤❤❤

    • @pixaroma
      @pixaroma 4 months ago

      thank you 🙂

  • @chipjohansen8132
    @chipjohansen8132 4 months ago

    Thanks!

    • @pixaroma
      @pixaroma 4 months ago

      Thank you so much ☺️