I had a similar conversation with myself about banding. It's everywhere, I only notice it when I'm thinking about it, whatever. Maybe in 2050 editors will apply banding in post to get that old school look.
+datalaforge hahah hilarious.
From what I’ve learned, the banding you see in Better Call Saul and other Netflix shows is there because of Netflix compression. The footage was most likely very clean before the compression. I’ve also seen that with my own footage: Cine4 footage comes out clean from Final Cut, but shows some banding after YouTube compression.
Interesting. This also raises the question... if Netflix, Amazon, YouTube (especially!) and other streaming services compress video so severely that they introduce banding, why try to avoid it? :)
Because then you would have banding upon banding. Why pursue critical focus if people don't even adjust their TVs correctly? Get it?
Still valuable 8 years later! Thank you!
It's so minimal, I don't think it's a major problem. The only people who notice are the ones actively looking for it. I could barely even see it in your test clips, even after you said where the banding was.
What picture profile do you usually shoot in Justin?
Banding is everywhere, like you said. When I had my Blackmagic Pocket Cinema Camera it could record 12-bit raw. I saw no banding when color grading, but as soon as I uploaded to YouTube I saw banding. And even US cable shows stream at 8-bit, as do Netflix and Amazon video. The only 10-bit footage you will see is if you buy a Blu-ray movie.
On the a6300, Cine4 lets you use any ISO from 200 up. The S-Log profiles start at 800, so that is something worth thinking about if you are filming in bright light.
I quite enjoy all your reviews and banter. Thank you for posting.
Well said and SPOT ON! The story should communicate the overall message and distract you from banding. You make a great point that our gear is leaps and bounds ahead of what we used to have, and we shouldn't even worry about such a small topic.
I think I care more about the rolling shutter than the banding, and the dynamic range is sick for the price point. Thanks for this video.
Very interesting video James, nice to see someone with a level head on these things. Like you, I notice banding everywhere now, especially on Netflix shows - it's very bad in House of Cards. I think it comes down to: who are we making videos for? The average viewer or other filmmakers? The content of a video will always triumph, in my opinion. Cheers!
Couldn't have said it better myself :)
Interesting discussion. As a photographer I don't like banding at all and immediately notice it in videos. It looks like going back to the old 256k colour mode on monitors of years gone by. To anyone who notices it - it definitely gives the "impression" of inferior image quality.
Great videos BTW! Keep up the good work.
I know this is probably years late, but I'm pretty sure you can avoid banding by pushing up the saturation when recording in S-Log - that way it retains more colour data whilst still having awesome dynamic range.
I love your perspective, and agree with weighing the advantages against the commonly perceived disadvantages of using S-Log3. In my opinion, having shelled out a few grand for such an incredible camera, it doesn't make sense (as an owner) to turn around and dumb down the a7s2's ability to deliver such wonderful picture range just to avoid the issue of banding.
Is banding more noticeable when filming sky or large areas of solid colors (w/ shadow gradients)?
I only ever do lightning quick whip pans of the sky in s-log 3 for hours on end - so onto eBay my A7sii goes. Sigh.
Lol witty
Loving your channel James - very informative and very interesting. I'm really thinking about upgrading my 550D to an A7S. Many thanks for the great videos!
From camera direct to YouTube, choose 8-bit. From camera to ingest, to pre-edit, to post-edit, to VFX, to compositing, to edit again, to approval, to grading, to mastering, to distribution - choose more bits :)
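To put that pipeline point in concrete terms, here is a rough sketch (Python/NumPy, with invented gain/offset passes standing in for grade steps - not any real pipeline) of how repeated 8-bit hand-offs chew through tonal levels faster than 10- or 12-bit ones:

```python
# Illustrative only: simulate a smooth gradient passing through several
# mild "grade" passes, re-quantizing to the working bit depth each time.
import numpy as np

def surviving_levels(bit_depth, stages=6, seed=0):
    levels = 2 ** bit_depth - 1
    rng = np.random.default_rng(seed)
    signal = np.linspace(0.0, 1.0, 4096)           # ideal smooth ramp
    x = np.round(signal * levels) / levels          # initial capture quantization
    for _ in range(stages):
        gain = rng.uniform(0.9, 1.1)                # small made-up adjustments
        offset = rng.uniform(-0.02, 0.02)
        x = np.clip(x * gain + offset, 0.0, 1.0)
        x = np.round(x * levels) / levels           # re-quantize at each hand-off
    return len(np.unique(x))                        # distinct tones that survive

for bits in (8, 10, 12):
    print(f"{bits}-bit after 6 round trips: {surviving_levels(bits)} distinct levels")
```

The fewer distinct tones that survive the round trips, the more visible the steps between them become - which is the whole "more stages, more bits" argument in one number.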
Thanks for the videos James. I have been in the same situation as you - torn between the Slog2/3 profiles and Cine4. I was preparing to shoot a piece in 120fps HD with the A7SII, and I honestly found that the footage was super noisy and took grading poorly. I was worried to the point that I was considering getting a new camera exclusively for slow-motion stuff. This made me experiment quite a lot, and I ended up getting decent results with Cine4 in the 2.2 120fps mode, so I am sticking to my current setup. But this made me really question whether I should also shoot Cine4 in 4K. My conclusion was to shoot Cine4 in evenly lit situations, only using Slog2 in harsh lighting (I feel that Slog3 needs too much grading). The bitrate in XAVC-S is quite decent, but heavy editing/grading will cause some problems. Good point on whether or not those issues are something we can live with. In the end, as people here in the comments have pointed out, compression can ultimately compromise the footage... So I guess we should all just do what works best for ourselves, regardless of internet trolls :)
I'm writing the following more from theory than experience, but it seems to make sense to me and relate to what I think this video is getting at. On the other hand, I'm open to being corrected:
What is Dynamic Range? (and how does this help us avoid posterization and/or banding issues that some people get)?
(rough draft)
Here's a visual way to understand dynamic range, or basically, contrast. If the darkest areas in your image are not that dark and the brightest areas are not that bright, then you don't have much contrast or dynamic range. Maybe you shot a foggy scene, or filled up your frame with a photographic "grey card". Dynamic range basically describes the difference between the darkest darks and the brightest brights in your scene. Greater contrast gives less of a flat-looking image and more 3D-like "pop".
Often "dynamic range" is also used to describe the difference between the darkest darks that your camera can capture (typically counting 0 photons) and the brightest bright it can capture (with some maximum number of photons that any of the camera sensor's photosites can count). The cool thing here is that just because at some point we might want to represent some high number of photons counted as a pure white pixel does not mean that our photosite can't count even MORE photons, which means that there may be hidden, recoverable detail in what would otherwise be shown as a mass of white pixels.
We can see more of the highlights if we want, usually through some in-camera adjustment and, depending on what digital format of photo or video we are shooting in, through some post-capture adjustment. However, this COULD lead to us "compressing" the mid-tones in a way that looks unnatural, which is why cameras often by default do NOT - albeit at the sacrifice of unnaturally compressing the highlights instead. This would not be a problem if our computer and television monitors and printed photos could show our highlights as brightly as they were in the original recorded scene, but then again that might be TOO uncomfortably bright, so living with a little tonal compression here or there shouldn't seem so bad after all. BUT, in order to make our final image as realistic a duplication of the original scene's tonality as possible, we can't really rely on even the most expensive camera. We have to rely on OURSELVES … particularly in doing some post-capture tonal/exposure correction to areas of our image. Even when some areas do not need such correction, others will at least look a little better WITH it. Basically we're making up for the fact that cameras "see" differently than our eyes do.
Highlight recovery is basically an application of tonal expansion of the highlights, but your tonal range (or number of tones) or colour palette (or number of colours) may be too limited, depending on how you recorded your image, to do this correction without causing banding and/or posterization. This is more of a problem with "JPEG" images or "8-bit colour" video and pretty much not at all with "RAW" images or "10-bit colour" video. Some people claim that these ill effects are not bad in 8-bit video if you are using a "log" profile that brings out more detail in the highlights and shadows at the cost of unnaturally compressing the mid-tones (yielding a rather "flat" look), which requires post-capture correction. Other people argue, however, that there ARE still issues, at least in the mid-tones, after post-correction when trying to add more contrast by tonally expanding those mid-tones and to some extent re-compressing the darks and shadows for a nice "falloff" in those areas - especially if you are not sure about what you are doing. It's bad enough that gain/grain is often most noticeable in dark areas and homogeneously coloured brighter areas like the sky, and that we often (in high-contrast scenes at least) can't do much to mitigate that - at least not with post-capture exposure adjustment alone.
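A toy illustration of that 8-bit-plus-flat-profile worry (the cube-root curve below is a generic stand-in for a flat/log-style encode, not any camera's real transfer function): encode a smooth ramp flat, store it in 8 bits, stretch the contrast back in "post", then count how many output codes are actually used and how far apart they sit - the skipped codes are what read as bands.

```python
# Illustrative sketch of why stretching a flat 8-bit capture can band.
import numpy as np

scene = np.linspace(0.0, 1.0, 2048)                  # smooth scene luminance ramp

flat = scene ** (1 / 3.0)                            # generic flat/log-style encode
flat_8bit = np.round(flat * 255).astype(np.uint8)    # stored as 8-bit

# "Grade": undo the flat curve and add contrast back in the mid-tones.
graded = np.clip(((flat_8bit / 255.0) ** 3.0) * 1.2 - 0.1, 0, 1)
graded_8bit = np.round(graded * 255).astype(np.uint8)

used = np.unique(graded_8bit)
gaps = np.diff(used.astype(int))
print(f"{len(used)} of 256 output codes used, largest jump = {gaps.max()} codes")
```

Any jump bigger than one code means neighbouring tones in a smooth gradient now sit several steps apart, which is exactly what a banded sky looks like.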
Sometimes allowing a slightly flat look in a high-contrast scene at least shows more detail in the shadows and highlights and allows more of a "cinematic" look rather than a high-contrast "video" look. The point is that you can have a lot of control over this given the right software, and there is a lot less chance of posterization - basically the ripping apart of tones, or in this case most likely mid-tones - if we are using a higher gamut of colours than what JPEG or 8-bit video allows us. One might argue, however, that with the lower-gamut formats there is at least a level of "forgivable" posterization - which may look like grain - in mid-tones where objects are not dominated by textureless monotone colours but are instead rather busy with detail, and especially in video while not so much in a still photo. A lot of this comes down to experimentation and personal choice, although I really think that for scenes with low dynamic range there is not much need to shoot in a log format, especially with 8-bit colour - although for grain reduction, if you feel you really need that, shooting in 10-bit and using "ETTR" at least in LOW-contrast situations - where you can use intentional overexposure without clipping your highlights - would seem to be a good option.
Mostly I just think it's cool that we have all these options now. Otherwise we might still have just video, for instance, that can't be adjusted post-capture in exposure even if it can be adjusted in brightness. There IS a difference there which relates to tonal compression and expansion, which I'll get into later. Sometimes the arguments of RAW vs JPEG or 8-bit vs 10-bit are oversimplified - as if one format allows you NO latitude for post-capture adjustment over the other. But when you think about it, a flat picture profile DOES allow more latitude even for these "lesser" formats, and that is good news for those who prefer to shoot in JPEG or 8-bit video, or whose camera is simply one of the majority that is limited to 8-bit video.
It's also worth noting that posterization and banding are things that you often do not see UNLESS you are looking out for those things and hyper-aware of them. It's also unfortunate but true that if you have created a video with no such problems, the final delivery system for it may add file compression that introduces those such problems anyway. But even so, it is good to have a "master" video that mitigates these things as much as possible.
Good explanation
You gave me the knowledge I was after
Excellent as always, and this time no more funny faces on inserts. Ty ty ty so much for that. (y)
Hey Harv, don't accept banding. Buy an Atomos Ninja Star (10-bit) or learn to use Neat Video against banding. It works.
Cheers from Austria!
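For what it's worth, the general trick most deband fixes rely on is dithering: add a tiny amount of noise before the final 8-bit quantize so a smooth gradient breaks into fine grain instead of clean stair-steps. A minimal sketch of that principle (not a claim about how Neat Video works internally):

```python
# Dither before quantizing: band edges become scattered noise instead of steps.
import numpy as np

rng = np.random.default_rng(1)
gradient = np.linspace(0.40, 0.45, 1920)             # subtle, sky-like ramp

plain = np.round(gradient * 255).astype(np.uint8)     # hard quantize -> clean steps
dithered = np.round(gradient * 255
                    + rng.uniform(-0.5, 0.5, gradient.size)).astype(np.uint8)

edges = lambda a: int(np.count_nonzero(np.diff(a.astype(int))))
print("without dither:", edges(plain), "hard band edges")
print("with dither:   ", edges(dithered), "scattered transitions (reads as fine grain)")
```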
Love your channel, it gives me hope for mine. I gotta say, no camera in the world can handle S-Log3's full capacity - check out the Sony documentation on it. That said, S-Log2 is currently much better overall, with less banding. Does banding matter? It depends on your output/client, honestly, and whether you think they'll care, because the general public IMO doesn't.
I don't know if the banding in Better Call Saul is due to the camera; it might be due to Netflix's compression, because I also notice a lot of noise in some dark scenes on Netflix - even in scenes which aren't too dark.
It's just a professional consideration to avoid banding. As in noise or CA or aliasing, moire, a boom mic in the shot. Ya know? I would forego dynamic range before accepting psychedelic bands in the sky that you don't see with the naked eye.
Hey James, I want to make a music video and will be using the A7S II with a matte box, and I have some questions before I shoot. How should I contact you?
Is there actually more dynamic range in Slog-3? I heard that Slog-2 has smoother highlight roll off (which I love) and that Slog-3 is cleaner in the shadows. Thoughts on this? Love this video thanks.
Thanks :) Yes, definitely more DR in S-Log3. S-Log2 is nice too, and yes, possibly better for highlights.
Banding in shows like BCS or GoT is mostly due to the codec. BCS especially suffers from the Netflix codec.
Just bought an LG 27UL600-W 4K HDR 400. Horrible color banding / posterization out of the box. The colors were also washed out and completely lifeless. The fix? I needed to download the screen driver from LG. Now the picture is perfect, the colors rich and vibrant, and the color banding is completely gone. Why the drivers are not included, or at least instructions to install them first, is beyond me. Also, any time you tinker with the Nvidia control panel the screen reverts back to its horrible default state, and you have to re-install the LG drivers every time you mess with it. I've learned to just leave it alone once it's working. I have an Nvidia 1080 Ti and thought it might be the issue, but it wasn't.
Thanks for this!
Fuji and Panasonic had no issues implementing internal 10-bit log, often via firmware updates (which we all know won't happen on a Sony camera). Stop finding excuses for a rushed product.
An 8-bit colour space basically has 256 shades to choose from per channel. The a7s2 records video at 8 bits per channel (bpc), so no wonder banding occurs. It's fine. What I don't like is that you claim R3D also has this issue. That's not the case. For example, Red Dragon is 16 bpc, and that means there will be no banding! The bottleneck is on the consumer side. Most monitors are 8 bpc, so there is no reason to watch videos that contain more than 8 bpc. Why would you even export a video higher than 8 bpc? There is no reason, really. BUT!!! You want to record professional video at a higher bit depth because you don't want to see banding in theaters on really big screens! It also leaves more headroom to CC your video and export it for the consumer. //// What I'm saying is that there is a really big difference between the a7s2 and R3D. a7s2 footage falls apart a lot faster when graded. I'm just saying that when you noticed banding on footage recorded on a RED camera, it is probably due to monitor and media limitations. For example, if you want to avoid banding you need to watch Netflix in HDR mode on an HDR-capable monitor.
+Dimitri T yeah I get all this
Good! It wasn't clear in the video so I had to put my Keyboard warrior hat on and shell away =) I will sub because I like the grading of the video lol >< I need to learn!
+Dimitri T thanks :) very much appreciated
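For what it's worth, the shades-per-channel numbers thrown around in this thread are easy to sanity-check: each extra bit doubles the levels per channel, and the "total colours" figures below are simply that count cubed (ignoring chroma subsampling and codec limits).

```python
# Levels per channel and theoretical colour count at common bit depths.
for bits in (8, 10, 12, 16):
    levels = 2 ** bits
    print(f"{bits:2d}-bit: {levels:>6,} levels per channel, "
          f"{levels ** 3:,} possible colours")
```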
Hi 🙃 Well... the deeper you go into the film stuff, the more faults become visible to you... but most people don't recognise such things... they are just viewers 😳 Perfect for filmmakers... 😎
I hate banding so much. It might be fine if you have an interesting story, but if your film is purely art and is all about visuals then banding will ruin it. I don't know why someone hasn't figured this out yet - do not enough people care? Also, I'm pretty sure Netflix compresses the crap out of movies and TV shows for streaming purposes.
Color grading for skin tones, SHOULD YOU CARE?
Care to explain?
I think if I have banding, then the stock agencies will refuse my footage. Whoops. :\