Written Tutorial: www.nextdiffusion.ai/tutorials/upscale-and-add-detail-locally-with-stable-diffusion
Be sure to explore our latest image generation tool now available on our website! ❤
Ultrasharp tends to make images more saturated, especially combined with noise inversion. If you don't want this effect, don't enable VAE encoder color fix and choose a different upscaler such as SwinIR or Remacri, although noise inversion alone will still make colors brighter.
This upscaler IS insane. It can add TOO MUCH detail, and skin becomes patchy if you don't keep the denoising strength way down. It's a nice option to have.
good tutorial
I'm just struggling a little with anime-based images and checkpoints, but I'm still confident 😄
This is an excellent video. I have one question - You say your rendering took 3 minutes. Mine is taking 2 hours and I'm running on a 2080Ti with 12Gb VRAM - Is that normal? I cannot render 1024x1024 and upscale as I run out of VRAM. I'm relatively new to this. I don't mind waiting but if there is a weird setting that I have missed, it would be helpful to know about it.
Scrap that. I realised that I had not downloaded the tile model. For some reason, it made it take hours. Redid it and it was minutes. Still an excellent video, though.
I downloaded all the models and put them in the correct folder, but they will not show in the Stable Diffusion ControlNet dropdown. I have verified that they are in the right folder and I've restarted Stable Diffusion. Nothing. Also, I've noticed that in the Extensions tab, multidiffusion-upscaler-for-automatic1111 and sd-webui-controlnet revert to being unchecked. I check the boxes, and after reloading Stable Diffusion they are unchecked again.
I don't know if you already fixed it, but next to the ControlNet model dropdown there is a little reload button; press that.
Such a helpful video! 👏👏👏 What parameters do you recommend for low VRAM (6 GB)?
You should be fine using the default settings; if you run into errors, decrease the batch size and the VAE tile size.
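For example, a conservative 6 GB starting point could look something like this (exact option names and defaults vary between Tiled Diffusion / Tiled VAE versions, so treat these as illustrative values rather than official recommendations):
Tiled Diffusion: Latent tile batch size = 1
Tiled VAE: Encoder Tile Size around 1024, Decoder Tile Size around 64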
Thanks a lot for the video!
You're welcome!
In my opinion your upscaled version looks more cartoonish, like a DALL-E image. I prefer the first one, but I tend to create more realistic material, so my opinion might be biased.
If you want more realism, you can use a more realism-focused checkpoint like picX real. Also, adding "skin details, skin pores" to your prompt will give your realism a boost!
Has anyone used DemoDiffusion? I'm interested to know how it works and what's the difference between it and Tiled Diffusion. Maybe an idea for a future video? :)
Nice vid, thanks!
It creates faces everywhere when I enable it during generation; I don't want a separate post-processing step in img2img, I want prompt to final result in one click.
More fake detail is not always better; in many cases it just gets eclectic and over-burned. It's like the stupid modern trend in TVs toward popping, acid-bright, screaming colors and absurd brightness.
What's wrong with "fake details" on a fake image? And if your images are being burned, lower your denoising strength.
Can this do style transfer as well as Magnific?
brilliant
Hi and thanks! Just a couple of questions if I may. How can I get the 4x-UltraSharp upscaler? It's not present in my A1111 (my version is 1.9.3). And I can't install ControlNet; when I do, the response is "PermissionError: [WinError 5] Accesso negato: 'D:\\A1111\\sd.webui\\webui\\tmp\\sd-webui-controlnet' -> 'D:\\A1111\\sd.webui\\webui\\extensions\\sd-webui-controlnet'" (Accesso negato = access denied).
You can download the 4x-UltraSharp model here: openmodeldb.info/models/4x-UltraSharp
Once downloaded, put it in this folder: \stable-diffusion-webui\models\ESRGAN
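Assuming a default A1111 install, the file should end up at a path like this (the filename is just what the download is typically called; adjust if yours differs):
\stable-diffusion-webui\models\ESRGAN\4x-UltraSharp.pth
Then restart the WebUI so it shows up in the upscaler dropdown.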
@@NextDiffusion thanks!
Sorry to bother you, just a question. I'm trying to use other upscalers, like "4xFFHQDAT", but A1111 tells me it is not an ESRGAN model, so where do I have to store the .pth file? Thanks!
I'm not sure about this, but I think upscalers trained on ESRGAN go in the ESRGAN folder.
For DAT upscalers, you put them in the DAT folder, etc.
On openmodeldb it tells you what kind of upscaler it is.
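So, if that's right, the layout would look something like this (the DAT folder may not exist yet and can be created by hand; the folder name follows the architecture listed on openmodeldb):
\stable-diffusion-webui\models\ESRGAN\4x-UltraSharp.pth
\stable-diffusion-webui\models\DAT\4xFFHQDAT.pth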
Can you help me with this one? Whenever I run Stable Diffusion and use a negative embedding, this error always occurs: "RuntimeError: expected scalar type Half but found Float"
But when I don't use negative embeddings it runs fine. Please help.
Bro how do I just add the upscaler to the folder? Where do I add it? I am trying to add "4x-ultrasharp"
Into the ESRGAN folder; it should be created automatically when you first use an ESRGAN upscaler. If you don't have it, you can create it yourself.
That's not enhancing the image; what you did is modify the original picture into something new and increase the level of detail, but that does not make the image more beautiful... Actually, the one you started with was more beautiful before than after. (I know the fashion boosted by Flux AI wants us to believe that THIS is beautiful, and it wants that because it's its model and the company wants it to work, but it is not more beautiful; it's quite the opposite, actually.)
Things change soooooo fast. I just tried this setup. My Tiled Diffusion extension does not have an upscaler line. WTF?
Are you in img2img?
@@NextDiffusion DUH! Noob mistake. It is there in img to img. Thanks again!
So cool, wow. Seriously impressive.
👍👍👍👍
Idk, any time I try any of these AI tools the upscale is barely better than the input. Wtf
ua-cam.com/video/kRNCVPqL610/v-deo.htmlsi=vbcwqy4i7KKbfL6n&t=138
How did you separate "Sampling method" and "Schedule type"? Is that an extension?
It's from the newest update of A1111.