References:
►Read the full article: www.louisbouchard.ai/swinir/
►Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L. and Timofte, R., 2021. SwinIR: Image restoration using swin transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 1833-1844).
►Code: github.com/JingyunLiang/SwinIR
►Demo: replicate.ai/jingyunliang/swinir
►My Newsletter (A new AI application explained weekly, straight to your inbox!): www.louisbouchard.ai/newsletter/
It seems like classic CNNs combined with the self-attention mechanism from transformers are the next big key for image AI use cases.
Yep, it seems like it! Both have complementary strengths and limitations!
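To make the "local attention" part concrete, here is a rough NumPy sketch (my own simplification, not code from the SwinIR repo) of the window partitioning that Swin transformers use so self-attention only runs inside small windows instead of over the whole image; the attention computation itself is omitted.

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping (win, win, C) windows."""
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0, "H and W must be divisible by the window size"
    x = x.reshape(H // win, win, W // win, win, C)
    # Group the window grid together -> (num_windows, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)

def window_merge(windows, H, W):
    """Inverse of window_partition: stitch windows back into an (H, W, C) map."""
    win = windows.shape[1]
    C = windows.shape[-1]
    x = windows.reshape(H // win, W // win, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

# Toy 8x8 feature map with 3 channels, split into four 4x4 windows.
feat = np.arange(8 * 8 * 3).reshape(8, 8, 3).astype(np.float32)
wins = window_partition(feat, 4)
restored = window_merge(wins, 8, 8)
```

In the actual model, self-attention would run independently inside each of the `wins` entries (with shifted windows on alternating blocks so information can cross window borders), which is what keeps the cost linear in image size rather than quadratic.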
6:12 The picture on the right looks blurry to me. I see it like that on my Samsung S95C TV in the YouTube app, but on my Xbox Series X the YouTube app looks more detailed or grainy. That's why I want to test the upscaling. What do you recommend: put a 1080p image converted to an HEVC video on a USB drive to see how the TV's upscaling works? How can I tell the upscaling is working? I guess if it fills the full screen. --- Edit: sometimes it looks better in the YouTube app on the Samsung and sometimes in the Xbox app 🤔, and the Xbox Series X doesn't show a full-screen 4K picture on the 4K TV, while the TV itself shows the picture full screen in 4K.
Man I love your channel :D
So glad you do! Thank you 🙏 ☺️
Can you make a video explaining this whole code from the beginning?
Can you make a tutorial on how to use it? I'm not familiar with running code
I can help out but do you need to run the code absolutely or do you simply want to try it out? Because if you just want to try it there’s a demo where you don’t have to code, it’s linked in the description in the references :)
@@WhatsAI I'll give it a try
Thanks!
Man, this will be a game changer once it overcomes its limitations. Still, the results are amazing👌👏
CNN + Transformer = Mind blowing🤯😍
hold on to your papers
Im a total noob here, but quick question. Is the upscaling used in DALLE-2 (diffusion or whatever) superior to this? And have Nvidia got an even better one as well?
Diffusion models are quite good for upscaling, and I think some recent models are indeed better than this one, but they are certainly comparable for now!
So basically use the one that you have the code for and is easily adaptable to your pipeline haha
zoom and enhance😂
That TV trope is becoming a reality.
Yay, now I can make people's CSGO videos better... wait, it's only for images?? Well, no worries, I'll enhance every frame one by one and make a video!
There are better methods for videos! It is much easier to enhance a video than a single image, and it requires different networks :)
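For the frame-by-frame idea, here is a minimal sketch (the function names and the nearest-neighbour placeholder are mine; a real pipeline would swap in an actual super-resolution model like SwinIR, and dedicated video models additionally exploit temporal information across neighbouring frames):

```python
import numpy as np

def upscale_frame(frame, scale=2):
    """Placeholder upscaler: nearest-neighbour by repeating pixels.
    A real pipeline would call a super-resolution model here instead."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_video(frames, scale=2):
    # Naive approach: process each frame independently, ignoring temporal
    # consistency -- which is exactly why per-frame upscaling can flicker.
    return [upscale_frame(f, scale) for f in frames]

# Three dummy 108x192 RGB frames standing in for a downscaled 1080p clip.
video = [np.zeros((108, 192, 3), dtype=np.uint8) for _ in range(3)]
out = upscale_video(video)
```

The flickering problem is why video-specific networks that share information between frames usually beat running an image model frame by frame.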
Too many artifacts for me; there are repos with better results.
Which repos would you suggest? :)
@@WhatsAI First pass GFPGAN + second pass Real-ESRGAN, or GPEN + Real-ESRGAN; in both cases, better results.
seems to lose a lot of detail, becoming overly smooth. For example the grey haired fellow's shirt has no pattern afterwards, and his skin is smoother than a baby's arse. I can see the correct skin and shirt details in the low res image, so I think the model is lacking something here.
Tried a few images. Not impressed at all
Really?! I was quite impressed with images around 512×512 up to 2048! What did you try?
@@WhatsAI 300 - 400 px. I'm not saying it does a poor job. It's just not that impressive
@@WhatsAI You think it's the best one around yet? I haven't tried all the papers.
Oh yeah, like I said in the video, it doesn't work that well for smaller images! I didn't test with 300-400 pixels, but I tried with 100-200 and it was quite bad. You should try with images of 500 pixels or larger and let me know what you think! I believe the x4 upscaling on 512-pixel images is amazing compared to other approaches I've seen, and it seems to stay faithful to the real picture much better as well.
At least, I can assure you it is the best one that provides pretrained models and a free live demo haha!
Other papers might be better, but then we have to trust the results in the paper or retrain/reimplement them. This one isn't just corner cases or hand-picked results, and it still works well. That's what's most impressive!