I come from a non-English-speaking country, so we had to learn this language. I must say that your pronunciation is very clear; I can hear every word. Thank you very much for making such a useful video for me as I learn about HDR technology for rendering videos when creating content on UA-cam.
I have to say, it's unbelievable to find such clear, technical information about these topics. Usually it's harder to find or really difficult to understand, but you made it sooo clear :D You've got +1 subscriber! Awesome job
Simply the best
Great explanation
Go on
Something I'd like to comment on:
If you're watching this video in SDR, you might notice some strange artifacts and noise in the darker scenes (noticeable around 0:33). This is a known issue with UA-cam's handling of HDR. And since explaining things like this is what this channel is for, I thought I'd bring it up.
According to UA-cam's documentation:
"When mastering in PQ (ST 2084), much of the signal range is devoted to shadow detail. Digital intermediate codecs like ProRes and DNxHR retain detail across the image range. Your videos may have noise in darker image regions that's visually masked by highlights in the image.
UA-cam's video processing may remove some noise to achieve streaming bitrates. You can get more control by denoising your video before rendering it for upload. Denoising can also help if your video looks too "compressed" when streamed.
We're always working to improve the quality of UA-cam videos, including handling this case better."
So basically, there is some noise in the shadows of the image, and UA-cam is doing a pretty poor job of compressing said noise for the SDR version of the video. And since the entire scene is (deliberately) fairly dark, the compression artifacts are more visible than they usually would be.
As far as I can tell, this issue is not present in the HDR version of the video. Although I think this is a case UA-cam should handle better, in the future I'll make sure to apply extra denoising to dark HDR scenes.
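To make the quoted point concrete, here's a small sketch (my own illustration, not from UA-cam's docs) of the ST 2084 (PQ) EOTF, which converts a normalized 0–1 signal value into absolute nits. It shows why so much of the encode budget sits in the shadows: the entire bottom half of the signal range only covers roughly the first 92 nits of the 10,000-nit range.

```python
# Sketch of the SMPTE ST 2084 (PQ) EOTF: normalized signal value -> nits.
# Constants are the standard's published values.
m1 = 2610 / 16384        # 0.1593017578125
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(signal: float) -> float:
    """Convert a normalized PQ signal value (0.0-1.0) to luminance in nits."""
    e = signal ** (1 / m2)
    return 10000 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

# Half the signal range only reaches ~92 nits; the upper half (0.5 to 1.0)
# has to cover everything from there up to 10,000 nits.
print(round(pq_eotf(0.5), 1))  # ~92 nits
print(round(pq_eotf(1.0)))     # 10000 nits
```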
Relevant: i.redd.it/dpxb7oy2avu21.png
UA-cam also gives about 5x less bitrate to the SDR stream when an HDR one is available, which certainly doesn't help
@@RainboomDash Where did you read this?
@@VideoTechExplained If you use a video downloader like yt-dlp, you can see the video's streams and their bitrates. YT generally gives about 3 to 6x less bitrate to the SDR stream when an HDR stream is available, from what I've seen; it heavily deprioritizes it.
In this case the 2160p SDR stream is about 2.5 Mbps and the HDR stream is about 13 Mbps.
Heh, the 1080p SDR stream is only 0.25 Mbps... wow...
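Sanity-checking the figures quoted above (values as reported by the commenter for this video's 2160p streams, not re-measured here):

```python
# Bitrates quoted above for this video's 2160p streams, in Mbps
# (streams can be listed with a downloader such as yt-dlp and its -F flag).
sdr_mbps = 2.5
hdr_mbps = 13.0

ratio = hdr_mbps / sdr_mbps
print(f"HDR stream gets {ratio:.1f}x the bitrate of the SDR stream")  # 5.2x
```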
@@RainboomDash That's very interesting. I'll have to look into that!
This video made me follow you! Great explainer, saved me a lot of time!
It took me a long time to find this exact, proper explanation of HDR
I'm planning to make something similar in Indonesian
I primarily deliver in HDR, so this is very useful; would love for you to do an updated version 🎥
Concise and precise as always. Keep growing and creating these quality videos.
best explanation ever
This was incredible. Thank you for this.
Hi, I found this channel very useful! I hope your channel will grow and be able to reach many people around the world. Good luck!
You sir, are a legend. Thank you
Thanks! Also, I like your Mercedes Benz logo 😅
Heya, just curious about the video and the graphics... was the title at 2:34 supposed to be a dim white? I notice that the graph axes have a true max-white color, but the text in your video is muted. Is this on purpose, or could it be my monitor not scaling the HDR correctly? I'm watching on an M1 2020 MacBook Air, so its display does have P3 output, and I was able to test that it works.
Same with the website screenshot at 2:40; it's dimmed and not a true white... could this be by accident, or am I somehow on a borked setup? 🙂
Maybe this is due to 2:34 having that extra-bright text and therefore making the rest of the video dim due to YT's HDR10? If anyone else is experiencing this on an HDR monitor, please lmk
It was a mistake! Sometimes the graphics white defaults to 100 nits and I don't remember to manually bump it up to my intended graphics white level :)
Very Awesome!
Thanks for the video, here are a few comments :
1:35 Hyrbid Log Gama
5:50 The music seems to be a bit too noisy to focus on what you're saying
8:00 Could have added a summary of the different formats with their specifications, since you compared them right before :)
I subscribed, looking forward to more to-the-point Video Tech Explained :)
Excellent tutorial! I'd just like to comment that on the graphic it says Hybid instead of Hybrid, thanks
Wonderful
Best explanation so far! But I'm still confused, because I'm an Apple Final Cut Pro user. When I output a video, it gives me options like 10-bit 2020 PQ or 10-bit 2020 HLG with Dolby Vision. Based on your explanation, HLG is a standard and Dolby Vision is another standard. How can these two be together? It feels like HLG is not a standard but a type? Like HDR10 can use PQ or HLG, and Dolby Vision can also be PQ or HLG? And the output format is 10-bit for Dolby Vision, not 12-bit, so is a 10-bit Dolby Vision fake?
It would be very helpful if you could please make a video explaining how to compress an HDR video.
I used HandBrake (which is my go-to solution for video compression) to compress an HDR video, and the video lost its HDR; the colours look all bland and flat even on an HDR-capable display.
The compression doesn't really matter, as long as it's at least 10-bit. You do need to attach appropriate HDR metadata, though, which I don't think HandBrake can do. I'd recommend checking out my previous video about grading HDR on an SDR monitor, as it covers how to do this.
@@VideoTechExplained Okay, so theoretically, if I compress the HDR video using HandBrake and the HDR metadata in the new compressed video is lost, but I add the metadata extracted from the source video back into the new video, will it retain its HDR capability? If yes, how do I put the metadata back into the new compressed video?
@@ashishxthakur Yes. When you re-compress the video it retains all the HDR information but the metadata which informs the player that the video is HDR is lost.
Compress the video, then add new HDR metadata using the method I describe in this video: ua-cam.com/video/HBTyWV1FEIo/v-deo.html
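For readers who'd rather stay on the command line, one possible way to do the re-encode and the metadata step in a single pass is with ffmpeg and x265. This is a sketch, not the method from the linked video; the master-display and max-cll values below are placeholders and should be copied from your source (e.g. via ffprobe or MediaInfo) before use.

```shell
# Re-encode 10-bit and write HDR10 signaling into the stream.
# Replace the master-display / max-cll values with your source's real ones.
ffmpeg -i input_hdr.mov \
  -c:v libx265 -crf 18 -preset slow -pix_fmt yuv420p10le \
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" \
  -c:a copy output_hdr10.mp4
```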
@@VideoTechExplained Alright man, thanks a lot
I tried UA-cam HDR content; there is no difference.
Have you studied filmmaking?
Look how much darker it looks in HDR. I think you encoded it with Rec. 2020 PQ; it's ruining all my gaming videos.
8:55 manipulation detected ^_^ (second microphone?)
Rim light that couldn't usefully be placed outside the frame :) I spotted that little glitch in editing but didn't consider it a big enough deal to be worth rotoscoping out my arm
@@VideoTechExplained You can keep the light in the shot; it's okay to have it, it's not a movie. Also, it's better to have the light head there instead of empty space (at least for the current composition).
What I don't really understand is why the metadata needs to be transmitted at all.
Why can't the display device calculate it for each frame? Max and average are both pretty quick calculations that can be accelerated with matrix math hardware that I'm pretty confident almost every display's processor chip has nowadays.
Especially with PQ, where the bit value corresponds directly with a nit brightness, storing and transmitting metadata about peak and average brightness seems totally redundant... And in the case of regular HDR10, useless.
Is the problem more that you want scenes to look consistent, and if you do it frame by frame you might get weird flickering if some bright object pops into a dark scene for a couple of frames? E.g. in a dark forest scene where one frame has a brightish tone map and then another has a much darker tone map, because the display only supports 600 nits but a flashlight in the image is 1000 nits? Isn't this pretty similar logic to what auto exposure on a camera does?
Basically, you're saying these screens are not doing a fixed tone-map, right? They have multiple curves or some function that takes into account max and avg brightness in the scene, so a 600 nit TV doesn't always have to make everything really dark in anticipation of a 10k nit event that may never come
Why can't a display basically have 1 tone curve that is tuned to the display, but the curve looks something like HLG?
The purpose of the metadata is twofold. For one thing, like you said, it helps eliminate flicker in rapidly-changing scenes. While you could theoretically calculate the metadata dynamically, the result would be a tone curve that changed on every frame, which likely wouldn't look good outside some very specific situations. While there are circumstances in which a filmmaker might want to rapidly make changes to the tone curve, the majority of scenes have relatively consistent brightness within a shot, so updating the tone map constantly would likely be distracting.
The other purpose is, like you said, to prevent a situation where the display dims itself in order to allow for extremely bright values which may never actually come. While brightnesses up to 10,000 nits are theoretically allowed, they're almost never used in practice. If the display knows that a particular shot never goes above, say, 400 nits, then it doesn't have to worry about leaving room for values brighter than that and can do a better job of mapping the image.
Hope that helps clear things up! :)
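To illustrate the second point with a toy example (my own sketch, not how any particular TV actually tone-maps): a display that trusts the metadata can leave in-range values alone, while one that has to assume the full 10,000-nit range must compress everything, including midtones that were never in danger of clipping.

```python
def tone_map(nits: float, content_max: float, display_peak: float = 600.0) -> float:
    """Toy tone mapper: fit the content's stated range onto the display's range."""
    if content_max <= display_peak:
        # Metadata says nothing exceeds the panel's peak: pass through 1:1.
        return min(nits, display_peak)
    # Otherwise scale the whole range so content_max lands on the peak.
    # (Real displays use a knee that spares shadows; a flat scale just
    # makes the effect obvious.)
    return nits * display_peak / content_max

# A 200-nit midtone on a 600-nit panel:
print(tone_map(200, content_max=400))    # metadata: shot peaks at 400 nits -> 200.0
print(tone_map(200, content_max=10000))  # no metadata: assume 10,000 nits -> 12.0
```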
Since when is HDR10 1000 nits max? It's 10,000 nits.
Hope to have an episode explaining the differences between the Dolby Vision standards