References:
►Read the full article: www.louisbouchard.ai/4k-image-translation-in-real-time/
►Liang, Jie, Zeng, Hui and Zhang, Lei (2021), "High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network", export.arxiv.org/pdf/2105.09188.pdf
►Code: github.com/csjliang/LPTN
It is really impressive, thank you for introducing us to this paper!
I agree, it's clever and very impressive! It is my pleasure, glad to be able to share these amazing papers with you! :)
Great video, keep going! 👏
Thank you ☺️
I'm a bit of a newbie at ML. To confirm, it looks like it uses a simplifying algorithm, effectively convolutions as an upscaling method.
They downscale massively, do the image changes, then use the previously computed convolutional maps to re-upscale the image.
Exactly! And only the downscaled image is sent into the typical encoder-decoder architecture we use in GANs instead of the whole image, which is why it is so much faster!
@@WhatsAI I think calling it downscaling instead of low frequency would have made it easier to understand for the mathematically less inclined 😉. Thanks for making the video though, this is a brilliant approach indeed!
Thank you, noted! I tried to introduce this high-low frequency nuance as it's the terminology they used in the paper and I like seeing it this way more haha! But I agree, I should've made the comparison more clear!
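For other readers, here is a minimal sketch of the low-frequency / high-frequency split being discussed, using plain bilinear resizing. This is my own illustration, not the authors' code: LPTN also refines the high-frequency residuals with a lightweight learned module, which is omitted here, and `translate_low_freq` is a hypothetical stand-in for the cheap encoder-decoder applied to the small image.

```python
import torch
import torch.nn.functional as F

def build_laplacian_pyramid(img, levels=3):
    """Split img (B, C, H, W) into high-frequency residuals plus a small low-frequency base."""
    pyramid, current = [], img
    for _ in range(levels):
        down = F.interpolate(current, scale_factor=0.5, mode="bilinear", align_corners=False)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear", align_corners=False)
        pyramid.append(current - up)   # high-frequency residual at this scale
        current = down
    pyramid.append(current)            # low-frequency base, i.e. the heavily downscaled image
    return pyramid

def reconstruct(pyramid):
    """Upscale the base step by step, adding the residuals back at each level."""
    current = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        current = F.interpolate(current, size=residual.shape[-2:], mode="bilinear", align_corners=False)
        current = current + residual
    return current

img = torch.rand(1, 3, 1024, 1024)
pyr = build_laplacian_pyramid(img, levels=3)
# pyr[-1] = translate_low_freq(pyr[-1])  # hypothetical: only this small 128x128 image goes through the encoder-decoder
out = reconstruct(pyr)
print(out.shape)  # torch.Size([1, 3, 1024, 1024])
```

Only the small base image needs the expensive network, while the residuals keep the full-resolution detail, which is where the speed-up comes from.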
Impressive!👌 Love it😍
Gonna use it in my personal project😋
Amazing! Please let me know your progress and what you do with it!
@@WhatsAI A year or so ago I started working on a project called lane area segmentation. Initially I trained a model on around 10k images captured by a dashcam. The trained model did a really great job segmenting the lane area on regular sunny days, and more or less handled shadows as well. But it failed terribly in bad weather conditions. Now that we have a new approach that is faster, I'm planning to use it for augmentation to improve robustness. I hope it will work.
Oh awesome application!
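To make the augmentation idea concrete, here is a rough sketch (my own, not from the paper or the repo) of restyling dashcam images with a pretrained translation model while keeping the segmentation masks unchanged; `translator` below is a hypothetical stand-in for such a model.

```python
import random
import torch

def augment_batch(images, masks, translator, p=0.5):
    """With probability p, replace each image with its translated (e.g. rainy/night)
    version; the segmentation mask is left untouched."""
    out_images = []
    for img in images:
        if random.random() < p:
            with torch.no_grad():
                img = translator(img.unsqueeze(0)).squeeze(0)  # hypothetical pretrained image-to-image model
        out_images.append(img)
    return torch.stack(out_images), masks

# usage with a dummy stand-in for the real translation model
images = torch.rand(4, 3, 256, 256)
masks = torch.randint(0, 2, (4, 1, 256, 256))
dummy_translator = lambda x: x * 0.7  # placeholder for a day -> bad-weather model
aug_images, aug_masks = augment_batch(images, masks, dummy_translator)
```

Since the translation is photorealistic and mostly changes color and illumination, the lane geometry, and therefore the masks, should stay valid.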
Thanks Louis.
Impressive, I like it!!!
This method is suitable for photorealistic neural style transfer because it only considers color and illumination; I don't think it works for non-photorealistic NST. 😅
I think so too!
The future is awaiting us
First comment? Love your videos!
Thank you so much! Glad that you are the first comment then! ;)
👏👏👏👏👏
Do you implement the papers that you talk about in your videos? And if so, how difficult is it to implement them?
I do implement some of them and not others, it depends! Sometimes there's just no code given. For this one, the GitHub repo is very clear and easy to follow + implement! Just follow their steps one by one :)