Linus, you're such a cool guy. You rarely mess up in speech, you entertain the audience so well with your consistent (but not overdone) gestures, you're not monotone in your speaking, and you can crack a joke or two if needed.
Video editor, keep it up, you're in sync and that's what matters.
9/10 channel but just wanted to put in my two cents after watching so many of these.
Thanks :)
Cryogenical hi
hi
I second this compliment! love the "fast as possible" series
+Cryogenical I wish I could talk like he does O_O
The YouTube recommendation algorithm is doing its thing again, I see.
Why did this show up?... What was relevant all of a sudden?... I thought it was random in my case.
@@michaelmonstar4276 THEEEEEEEEEEEE WRIIIIIIIIIIIIIIIST GAAAAAAAAAAAAAAMEEEEEEEEEEEEEE
yes
To me too
Same
You should have used 50 Shades of Grey for the 2 bit segway.
check the bottom right corner of the video ;)
Techquickie Ohh subtle. I like it! Are you doing a Double 50 Shades of Grey promotion for your 100th video? ;D
_Segue_, not _segway_ ;)
I know, it's weird.
Oh you're right, I just put segway because I didn't remember how to spell segue. Thanks
LOL
"Thank you Linus"
How i met your mother huh? :D
I don't know what you are talking about
Just a note. Both AMD Radeon (gaming) & FirePro (professional) cards support 10-bit per channel color.
Only Nvidia Quadro (professional) cards support 10-bit per channel while GeForce (gaming) cards do not.
How i met your mother right there! haha
I like Linus in 1bit.
2 bit is good too.
boy4everjoy 8 bit is great too.
boy4everjoy
i wonder if 2bit has 50 shades of gray
kyle krone
no, sadly only 4. A bit is basically a single numerical value of either 0 or 1; the computer only knows 0 and 1 because it only knows the states on and off.
It's the binary system. It's basically like the decimal system we normally use, except that in the decimal system we have 10 different numerals (0-9), and in binary only 2.
With the decimal system you can display 10 numbers with a single digit; with 2 digits you can already display 10² = 10x10 = 100 numbers, 3 digits can display 10³ = 10x10x10 = 1000 numbers, and so on.
In binary you can only display 2 numbers with a single digit. With 2 digits you can display 2² = 2x2 = 4, with 3 digits 2³ = 2x2x2 = 8, etc.
Basically the systems work the same, just with a different number of numerals.
Binary numbers can also be converted to decimal numbers.
0 is 0
1 is 1
10 is 2
11 is 3
100 is 4
101 is 5
110 is 6
111 is 7
with each digit one place further left being just twice as big, unlike in the decimal system where the digit one place further left is 10 times larger.
There are also more systems, like the octal system which uses the numerals 0-7, and the hexadecimal system which uses the numerals 0-9 plus A-F (representing the numbers 10-15 as a single digit).
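A quick sketch of the same idea, if anyone wants to play with it (illustrative Python, not from the video):

```python
# How many values fit in n bits, and how a binary string maps to a decimal number.

def shades(bits):
    # each extra bit doubles the number of representable values
    return 2 ** bits

def binary_to_decimal(digits):
    # each digit one place further left is worth twice as much
    value = 0
    for d in digits:
        value = value * 2 + int(d)
    return value

print(shades(1), shades(2), shades(8), shades(10))          # 2, 4, 256, 1024
print(binary_to_decimal("101"), binary_to_decimal("111"))   # 5, 7
```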
Lonewolf I like porn in 1bit
Am I the only one seriously creeped out by that bunny?........
Hey you again wassup? lol
yeah wtf its just a bunny
***** Really? Big Buck Bunny? But he's such a cutie!
+Jeremy Novoa nope
We're not alone
What perfect timing. My video stopped loading as soon as Linus says, "Challenges include: Internet Bandwidth."
Yea, YouTube often gets shafted, and I hate that it gets throttled. I have fiber, but it's interesting to see during what hours YouTube cannot keep up in my region because of peering.
This is why on torrent sites, we who know the importance of bitrate laugh at the people complaining about a 1080p video's filesize. _"This upload sucks. There's the same upload here that's also 1080p but 5x smaller"._
And then the people who torrent uncompressed BDMVs laugh at us.
Well I'd rather download a 1 gig 1080p file and not cross my inhumane data cap of 15 gigs, sue me.
4FootTech I agree. If done properly, you can make a file super small and lose very little quality. YIFY is a master at this art.
austingost505 Oh YIFY is the Almighty Lord for downloading movies, the guy has saved so much bandwidth for me.
4FootTech I know, it's amazing! I tell my friends to pick a movie they want to see and 15 minutes later it's up and running! Not to mention the super small file size means I can download like 10 videos to my phone at once.
austingost505 Only problem is I can't download much; my cap runs out in a week and after that my speed is reduced to 512Kbps, which is total hell. Indian ISPs, still partying like it's 2008!
There are two forms of 8-bit color graphics. The most common uses a separate palette of 256 colors, where each of the 256 entries in the palette map is given red, green, and blue values. In most color maps, each color is usually chosen from a palette of 16,777,216 colors (24 bits: 8 red, 8 green, 8 blue)
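Here's a tiny sketch of how that palette indirection works, with made-up values just to illustrate:

```python
# Indexed 8-bit colour: the image stores one byte per pixel, and that byte
# is an index into a 256-entry table of full 24-bit (R, G, B) colours.

palette = [(i, i, i) for i in range(256)]   # hypothetical palette: 256 greys...
palette[17] = (255, 0, 0)                   # ...but any entry can be any 24-bit colour

image = [0, 17, 128, 255]                   # four pixels, one palette index each

print([palette[index] for index in image])
# [(0, 0, 0), (255, 0, 0), (128, 128, 128), (255, 255, 255)]
```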
I really liked the way you play around with the image as you're describing the terms, really effective.
0:43 SO BRING ON THE BITS
Great work on the video Linus and the team!
A suggestion for the next As Fast As Possible would be the controversy around the "fact" that our eyes can only see 30 FPS, so anything more is overkill.
Thanks and keep up the great work :)
0:30 to 0:40 is all I needed from this video
0:30 - 0:45
Techquickie idea: "monitor calibration". It's way more important than 10-bit vs 8-bit. I can guarantee a lot of viewers didn't even see your blocky background at 1:23. I wasn't even aware this was an issue until I had a discussion with a website designer about why something wasn't shaded differently on my screen, and even sent him screenshots of it not being shaded... oh the irony.
Techquickie he's right you know, do a monitor calibration video! :D People who are stuck with TN screens will probably want to know what they should do (I know I do... one of my screens is IPS and my other two TNs look terrible compared to it; I managed to make them look a little bit similar, but it took a lot of research. Woulda been nice to have had an LTQ video to refer to :)
we have to upvote this. teach us monitor calibration linus!
" I can guarantee a lot of viewers didn't even see your blocky background at 1:23." - On a true 10 bit, UHD panel, it's very obvious.
This should be higher up. I thought it was plain black-ish grey background.
@@piers389 You don't need a 10-bit UHD panel to see it. It's pretty obvious on an 8-bit panel too.
Thanks. As a future video idea, it might be handy to go into a little more detail on the differences and use cases for higher colour depths (16 bit vs 24 bit vs 32 bit).
0:34 nice compositing touch when his hand overlaps *maybe* at the right bottom, good job.
SO BRING ON THE BITS! 0:42
It's very rare for me to comment on feeds!!! ....But I just had to say this guy in the video is... AMAZING... in front of the camera!! His humour, delivery, flow, likeability is unbelievable!!!! I can't remember the last time someone was this "on point".. He truly has a gift!
YouTube do be finding the right time to recommend this.
Yea
The golden era of Linus Tech Tips: useful info, fast, no bullshit or idiotic skits that mean nothing.
The day I get a 1440p, 10-bit, 144hz, G-Sync monitor is the day that I am a very VERY happy man. I just can't give up refresh rates for color quality. Not as a gamer :(
you will get there
Why can't we have it all.. IGZO PLS, 144Hz, 1ms, G-Sync, 1440p, and cost $600 or below
Why limited to 1440p?
Lu Dux Well, higher would be fine, but with all the issues with UI scaling and VRAM limitations of 4K, I'm just not that sold on it yet.
Lu Dux 4k with 60Hz is rare, and a lot of PC gamers care a lot more about refresh rates than 4k.
2-bit actually looks surprisingly good. I guess at higher spatial resolution the density of grey dots can provide the illusion of even more shades.
2:34 That is the exact kind of thing that would be in a children’s cartoon that I would have nightmares about as a kid.
I love this channel. I've been into new technology my whole life, and i've learned more on this channel in about a half dozen videos than i have in like a year lol.
THE WRIST GAAAAAAMEEEEAAAAAAAA!!!
A fellow AVGN viewer, I see.
I have a question: I have a video file with a bitrate of around 8000 kbps. The size is around 1 GB and the length is around 25 minutes. After I uploaded the same file to Google Photos and downloaded it from there, the size is around 200 MB and the bitrate dropped to 1200. So which is the original file in this case? Or what should I take from this, relating to your video: does the one with the higher size have more color, or was the smaller one the original compressed file that somehow became that big? Can you explain? I just didn't understand why it was like that. Thank you.
Google Photos compressed the file, and you downloaded the compressed version with "less color". Your original 1 GB video has better quality.
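The sizes roughly line up if you multiply bitrate by duration (back-of-the-envelope sketch; assumes the quoted bitrates cover the whole stream):

```python
# Rough file size from bitrate and duration.

def size_mb(bitrate_kbps, minutes):
    bits = bitrate_kbps * 1000 * minutes * 60
    return bits / 8 / 1_000_000  # megabytes

print(size_mb(8000, 25))  # ~1500 MB, same order of magnitude as the ~1 GB original
print(size_mb(1200, 25))  # ~225 MB, close to the ~200 MB Google Photos download
```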
Can you please make a video about 4:2:2 and 4:2:0 video? I still don't get it. I knew about color depth already, but when filming with a camera, for instance, the Canon 5D MkIII records in 4:2:0 8-bit while the Sony A7S records in 10-bit 4:2:2. WHAT IS 4:2:2 and 4:2:0??
HamDerDanskeren
*See bottom for TL;DR*
It's quite simple - I will try and explain.
Images (and video obviously) can be split in two parts:
1) Black and white data (lightness)
2) Colour data (colour... ness?)
To save bandwidth the colour data is often at a lower resolution than the black and white data, but images still look sharp because our eyes are more sensitive to lightness than they are to colours.
4:4:4 represents when the colour data is at the same resolution as the black and white data. It's a ratio and might as well be written as 1:1:1 but it isn't for... reasons.
4:2:0 represents when both the vertical and horizontal resolution of the colour data is halved. (black and white data unaffected)
4:2:2 on the other hand represents when the horizontal resolution is halved but the vertical resolution remains full. (black and white data unaffected)
Just for reference, 4:1:1 is when horizontal resolution is quartered but vertical is still full. (black and white data unaffected... you get the idea!)
So why the numbers then? Well, the first number (always 4) is pretty much just a reference number of pixels wide a section of the image is. 4 means it's 4 pixels wide and this is used as quartering is the lowest it ever really goes for colour resolution.
The second number is the number of colour pixels in the top row of those 4 pixels. For example, 2 means that there are effectively 2 colour pixels in the top row of 4 black and white pixels.
The third number is the number of colour pixels in the bottom row of those 4 pixels.
By the way, this block of 4 pixels wide is ALWAYS 2 tall, again for... reasons. Well actually for some things such as 4:1:0 it isn't but never mind.
So, let's take this step by step with 4:2:0.
1) The 4 means that 4 pixels are being sampled.
2) The 2 means that of those 4, in the top row, colour resolution is 2.
3) The 0 means that of those 4, in the bottom row, colour resolution is 0 (ie. the row doesn't exist and is the same as the top row)
Still don't understand? The Wikipedia page "Chroma Subsampling" contains some nice little multicoloured diagrams of what I have tried to explain in this comment under the heading "Sampling Systems and Ratios". Or else search google images with these ratios.
Hopefully this has cleared it up for you... looking back I guess it isn't quite as simple as I thought! Nevermind!
By the way the 8 bit vs 10 bit thing is REALLY simple :)
8 bit means there are up to 256 shades per primary colour (32 bit colour total including transparency, or 24 bit without).
10 bit means there are up to 1024 shades per primary colour (40 bit colour total including transparency, or 30 bit without)
Consumer products such as well.. most monitors and whatever can only display 8 bit though. In fact without a pro-grade graphics card (quadro or firepro) you can't even output 10 bit colour.
*TL;DR mode - The Sony A7S is a better camera but you probably won't be able to utilise the full potential of it. The Canon 5D MkIII is a worse camera but will still produce awesome looking images and video, as the difference between 10 bit and 8 bit (and 4:2:2 and 4:2:0) is really not that large unless you are zooming in super close on the final image etc*
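To put those 8-bit vs 10-bit numbers into a quick calculation (a rough sketch, ignoring transparency and chroma subsampling):

```python
# Shades per channel and total RGB colours for 8-bit vs 10-bit.

for bits in (8, 10):
    per_channel = 2 ** bits
    total = per_channel ** 3   # every combination of R, G and B shades
    print(f"{bits}-bit: {per_channel} shades/channel, {total:,} colours")
# 8-bit:  256 shades/channel,  16,777,216 colours (24-bit RGB)
# 10-bit: 1024 shades/channel, 1,073,741,824 colours (30-bit RGB)
```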
Alright, thanks a lot for your time :D i think i get it now :D
HamDerDanskeren
Well I really wasn't expecting it to be so long and have thought of a better way of explaining it anyway LOL
So read on if well.. you want?
This 3-number code represents a block of 8 pixels (4 wide by 2 tall). It could be anywhere on the image but its location really doesn't matter.
The first number is pretty much always 4.
The second and third numbers represent the number of colour pixels in each line (top line and bottom line) of this block.
A 2 means that in the 4 wide (1 tall) row there are 2 colour pixels - effectively half the resolution of the black and white pixels.
A 0 means that row doesn't exist and is essentially the same as the one above.
So 4:2:0 -
The 4 is always 4 and represents that the block of pixels is 4 wide. (it is always 2 tall but this is not specified)
The 2 represents that the top row of the block of 8 pixels has a colour resolution of 2 (half of 4)
The 0 means that the bottom row doesn't exist and is therefore the same as the top row.
So the colour resolution is halved horizontally and vertically. For example your camera might take a picture of dimensions 1920x1440 (even really cheap cameras are more than this in still image mode). So that means that the black and white resolution of the image is 1920 pixels wide by 1440 tall. If this image is 4:2:0 the colour resolution is actually only 960x720. If the image is 4:2:2 on the other hand the colour resolution would be 960x1440 (the pixels for colour would not be square - this does not matter).
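And the same resolution example as a small sketch, in case the numbers are easier to follow that way (illustrative Python, using the 1920x1440 example above):

```python
# Chroma (colour) plane resolution relative to the luma (black-and-white) plane.

def chroma_resolution(width, height, scheme):
    divisors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}  # (horizontal, vertical)
    dx, dy = divisors[scheme]
    return width // dx, height // dy

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, chroma_resolution(1920, 1440, scheme))
# 4:4:4 (1920, 1440) - full colour resolution
# 4:2:2 (960, 1440)  - halved horizontally
# 4:2:0 (960, 720)   - halved in both directions
```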
*****
English please, Sir?
this video about to pop off with that 16 bit, 8bit,4bit ... meme.
it did for me jahaja.
@@Dragonfire511 lol
@@triynizzles IT BEGINS.....
was looking for the comment
That was a better integration than usual, I was expecting "Speaking of shadows, audible.com!"
*What about the wrist game though?*
someone please make a gif of 0:42
.
.
Sorry I'm late
ezgif.com/video-to-gif
2:09 Hit fullscreen here and don't move your mouse so the play bar disappears. When it goes away, in the bottom center does anyone else see fuzziness going on?
I would REALLLLYY appreciate a reply from as many people as possible, THANK YOU!
When I look for a camera with high resolution I look for megapixels; if I want a camera with good color depth I look for....?
Everything... almost. It's also different for video than for pictures. Videos don't really need more MP; they need better stabilization, zooming, bitrates, and audio if you don't have an external mic. I am only a moderate user so this may be misleading info, but from my experience it has been these. Also, for pictures more than 12 MP is just ridiculous, but I like the 18 MP standard on Canon DSLRs for zooming in. 2 MP is 1080p and about 8 MP is 4K, so 18 is extremely large! But just because it says DSLR doesn't mean everything is good; higher-end ones have better processors and better lenses. Lenses are where it's at for pictures in my opinion. You can have a Rebel, but with a good lens you don't HAVE to get a pro-grade DSLR unless you are a pro. A lens with good aperture control (f-stop) is what I am most interested in. The more you zoom in, the higher the base aperture goes for most lenses, but usually in good lighting lower is better! So yeah, MP is not really all that, except for clarity if you take landscape shots I guess. Look at bitrates for video and the compression method it uses if any, and lenses with lower aperture. This is what I start with, then get into quality comparisons. But again, I am probably somewhat wrong; just go on a proper forum.
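For what it's worth, the megapixel-to-resolution numbers check out roughly like this (a quick sketch; the 18 MP sensor dimensions are assumed, based on typical Canon APS-C specs):

```python
# Megapixels for some common resolutions.

def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(1920, 1080))  # ~2.1 MP  -> 1080p
print(megapixels(3840, 2160))  # ~8.3 MP  -> 4K UHD
print(megapixels(5184, 3456))  # ~17.9 MP -> a typical "18 MP" stills sensor
```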
I like what you said, especially about aperture. I got a 50mm f/1.4, and it is freaking fantastic. Most cameras work fine in good light and in scenes with mainly one color. But some cameras make sacrifices when a scene contains multiple shades of multiple different colors and there is just too much.
Colour Depth is directly affected by the Sensor size and technology. Larger sensors with lower pixel count means more detail can be crammed in to each pixel, usually.
***** Usually, if the company is well known for it and has experience. But not always. You just need enough; more may not always be better for specific situations.
A lot of the stuff above talks about making better looking images, but nothing about colour depth. For that, just look at the bit depth of the raw images. You would see something like 12-bit raw images on a review site such as DPReview.
If we are talking about video, you will see something like 4:2:2, with 4:4:4 being the highest quality. Unless things have changed in the past couple of years, I don't think you are going to get a 4:4:4 camera. Those cameras are like a quarter of a million dollars and the file size is massive. A few seconds at full HD, uncompressed, would be a gigabyte.
But generally speaking, colour depth is not a concern. Most cameras take pictures at qualities better than even professional monitors could display (when in raw format). The bigger concern is colour reproduction and dynamic range. Review websites for professionals go into great detail about that (AKA, not CNET). After that, look at the suggestions above for taking better looking pictures.
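The "a few seconds is a gigabyte" figure is easy to sanity-check (rough sketch, assuming 10-bit 4:4:4 at 24 fps):

```python
# Uncompressed full-HD video data rate.

def uncompressed_mb_per_second(width, height, bits_per_channel, channels, fps):
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps / 8 / 1_000_000  # megabytes per second

rate = uncompressed_mb_per_second(1920, 1080, 10, 3, 24)
print(rate)             # ~187 MB/s
print(6 * rate / 1000)  # ~1.1 GB for roughly six seconds of footage
```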
I would like an episode where you explain the differences in monitor/tv technology and what to look for when purchasing one. Right now all I know is higher resolution is better and IPS is supposed to be better than TN. (forgive me if there is one already)
"Mabel, come with us to a land filled with colors that only shrimp and art students can see!"
What's causing the static artifacting at the bottom of this video after they put the text up on the screen?
sixty four bits
thirty two bits
sixteen bits
EIGHT BITS
FOUR BITS
TWO BITS
ONE BIT
HALF BIT
QUARTER BIT
THE WRIST GAAAAAAAAME
This is so true! And believe it or not, when you walk into electronics stores, you will see 4K TVs on display looking so rich and vibrant, which basically tricks most buyers into thinking they're better in every way compared to 1080p, when actually the content being played is just rich in colour due to it being uncompressed. I got one of the clips used by these stores and played it in 1080p; although the image wasn't as sharp, the colour was still amazing and made the picture look great and surprisingly fluid. That being said, content quality aside, a TV with great contrast ratios also plays a huge part in image quality, just like pixels do.
Yes there is such a thing as a Book of Shadows. It's used in many wiccan rituals.
Rena Ryuugu Nowhere really
Quick question: I've watched a video of that monitor you were showcasing, the one with the 10-bit panel. It is quite expensive, but what I'm trying to get at is, how come there's an option in Windows for 16-bit and 32-bit when the panel is 10-bit, or even lower on a normal panel?
I love how you used Big Buck Bunny for the example hahaha
0:35 says 8-bit, but what / how many bits are you actually using? Because your appearance looks different than 8-bit....
I thought the contrast is also good for the colors or colours or couleurs or FARBEN.
3:05 Also, there are vids labelled as 10-bit; what about them? They don't even run on phones with older SoCs, so they must be good?
Honestly, sometimes low color depth can be kind of cool in some cases. Just saying it can add character to content, if done right.
A Scanner Darkly comes to mind.
Thanks for burning up my eyes with that bright white background
And now we have a meme about these bits
64 bits
32 bits.
16 bits
8 bits
4 bits
1:34
what should i look for when buying a tv or display? (specs)
the wrist game
yeah
Is this why when I switch resolution in gaming from 1600x900 to something higher, everything seems more vibrant and colorful?
128 BITS, 64 bits, 8 bits, 4 bits, 2 bits, 1 bit
2:17 OMG I went to Canada from the US last year and a bunch of videos wouldn't play anymore!! It's really sad
"GPUs used to be the bottleneck, now they are not"
Linus, 2014
for displaying stored content like videos, yes
Nice to learn this.
This is why 10 bit Monitors are very expensive.
And why a 48fps raw Blu-ray Frozen movie I downloaded a while ago was 32GB in size,
compared to the compressed one at 1.8GB. The quality difference was very noticeable though.
+ZeroneRaven i'd think the 1.8gb was already wasted on that crappy singing shit-fest (absolutely hate singing in movies) but to each his own.
2:37 I would NOT have guessed that was a rabbit.
Hello. Just checking if you still have this account.
but a rabbi
I'm about to buy a 1440p IPS gaming monitor (16:9 aspect ratio) soon.
*What are the things I should check while buying?*
-High Refresh rate (144hz or more) and low response time (4ms)
-G-sync (my GPU is NVIDIA's)
-27 inch maximum size (usually between 24-27 inches)
*_Is there anything more that I should check, like brightness, color depth, contrast, etc?_*
*_What are good values for these?_*
Thank you for spelling it "colour"
I agree. Now we just need em to learn to say aluminium!!
The original and true English is the best
I concur.
Duluxdoggy and favourite
Duluxdoggy
I can't say it either way. Indeed tis my most hated word to say!
What is the maximum sample rate, bit depth, and bpc YouTube allows? Thanks in advance.
64 bit 32 bit 16 bit 8 bit 4bit 2 bit 1bit half bit quarter bit the WRIST GAME
No
A big thanks for constantly upgrading my tech knowledge...
"Speaking Of" at the end of each video to introduce the sponsor is the most interesting fact on Techquickie Channel
Big Buck Bunny gets a cameo at 2:24 this makes me happy!
64 bits 32 bits 16 bits 8 bits 4 bits 2 bits 1 bit
Meme. Lol. Meme. Lol.
Half Bit, QUARTER BIT, *THE WRIST GAMEEEEEEE!*
Your thumbnail explained everything and I think that's impressive
"So bring on the BITCHES"
*Bits
Really needed this simple explanation.
This is... partly wrong. A higher bit depth of color, say 10-bit video (also known as Hi10P), can, yes, get less banding (so a lot better, smoother color like you mentioned), but it can do so with a smaller file size. File size is not the reason it isn't a standard, because it actually improves in that aspect: it compresses without losing as much quality as an 8-bit compression. It's not a standard because it's hard to make new standards. It's harder to decode, so most Blu-ray players now wouldn't be able to play it at the frame rates necessary for smooth playback. Also, most content isn't shot in 10-bit, so it has the same problems 4K does. Some cartoons, usually Japanese anime, are produced in 10-bit because there is no special equipment really needed to produce animation in 10-bit. Comparing the same source in 10-bit vs 8-bit, compressed to the same file size, the 10-bit would look better every time. So in theory, if we made 10-bit the standard, it would actually be EASIER to stream HD content, just harder to decode it if you have a bad CPU/GPU. You could have smaller file sizes for the same quality, or higher quality for the same file size. Mobile devices can't really decode 10-bit yet, and certainly not 1080p 10-bit, so I don't think it will be a standard anytime soon, but maybe when 4K becomes a standard, 10-bit color will follow.
I think I can't fault you... Everyone should do this ^ from now on
It should be noted that 8-bit vs 10-bit is talking about bits per color channel, which determines the color gradation. When Linus mentioned the shadow gradation he could've touched on this a bit more clearly, but to really explain it well would've been too much of a tangent I think. Anyway, solid post. +1
I was always under the impression that Hi10P was just a software encoding profile. Hi10P =/= bitrate.
The5thBeatle
Well, 4K is now becoming standardized (and already is in many cases), and we're seeing 10-bit follow along with it, branded under HDR.
I have an LG OLED, 55" B7V. When I watch programs or play games I still see colour banding, and I'm not sure how to fix it. It happens with default TV picture settings or with picture settings optimized for that content. Is there any way to get rid of it? I play games in HDR but am never sure whether to use RGB, 4:2:0, 4:2:2, or 4:4:4. And I think there are incompatibilities between Nvidia gaming graphics cards and colour formats.
Have you thought about unequal depth for channels, like in RGB565 (16-bit depth)? ...and other ways of representing colour, and what happens when different channels are then cut in depth?
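RGB565 is a nice concrete case of exactly that: green keeps an extra bit because our eyes are most sensitive to it, while red and blue each lose three. A rough sketch of the packing (illustrative only):

```python
# Pack 8-bit-per-channel RGB into a 16-bit RGB565 value and back.

def pack_rgb565(r, g, b):
    # drop the low bits of each channel, then pack into 5+6+5 = 16 bits
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(value):
    # expand back to 8 bits per channel; the dropped bits are gone for good
    r = (value >> 11) & 0x1F
    g = (value >> 5) & 0x3F
    b = value & 0x1F
    return (r << 3, g << 2, b << 3)

print(unpack_rgb565(pack_rgb565(200, 123, 67)))  # (200, 120, 64) - rounding from the cut bits
```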
That HD image with low colour depth would've looked fine if it had used dithering.
This was a very good quality video, your explanation was clear and precise :)
I would definitely show this to any non-techie to explain it quickly!
Linus do TN vs IPS monitors. And if you have already, I will send 1000 apologies.
pretty sure he briefly covered it and 2 others a long time ago.
For the highest fidelity, maybe in the future movies can be rendered on PCs for the highest bit rate and color depth.
you looked alright in 8bit =)
You mean 1 bit?
Fokum8 well, even in 1-bit I guess, yeah :) I should have been more specific: the difference between "normal" and 8-bit was barely noticeable. With the right dithering, you can often get away with fewer colours.
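A rough sketch of why dithering helps, for anyone curious: instead of rounding each pixel on its own, you carry the rounding error over to the next pixel, so the average brightness is preserved (toy 1-D error diffusion, illustrative only):

```python
# Quantise a smooth grey ramp to pure black/white, diffusing the rounding error forward.

def dither_row(values):
    out, error = [], 0.0
    for v in values:
        target = v + error
        q = 255 if target >= 128 else 0   # quantise to 1-bit
        error = target - q                # hand the leftover error to the next pixel
        out.append(q)
    return out

ramp = list(range(0, 256, 16))  # a smooth 0..255 gradient
print(dither_row(ramp))         # black pixels gradually mixing in white as the ramp brightens
```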
you should do a comparison of 32 and 64 bit processors!
64 bit, 32 bit, 16 bit, 8 bit, 4 bit, 2 bit, 1 bit, *unintelligible*
Haha
half bit
quarter bit
the wrist game
I always love when Linus says bring on the bits
You live in Canada? Wow that sucks.
nope
+SnipeYouFromMars I never thought free healthcare and lower risk of being killed in a school shooting did suck.
You have a great account name.
2:17
It is in Michigan... after they were brought from Canada.
And you don't even cover 6 vs 8 bit?
2-bit looked awesome.
Yay!!
Aya!!
lynchburgcsi57 Yya!!
lynchburgcsi57 aay!
NEPTUNE Ayy!!
Damn, that switch from 1-bit to 2 bits made a huge difference! Just adding a light grey and a dark grey as alternatives to black and white.
i thought you said bring on the bitch lol @ 0:42
Where the wrist game at?
Thumbs up if you think it should be spelled "COLOR"!
nice video dude!
Can you explain how YouTube's 4:2:0 color subsampling affects the video quality, please!
THX
don't lie, you aren't happy with your canadian netflix mr recommending hotspot shield for people outside america to get american netflix.
I meant the streaming experience, not the content variety ;)
Good explanation. You can do another about image resolution, DPI in screens and printers
Linus says he's Canadian but, I do not see him chugging maple syrup 24/7...
And listening to jb songs
When the Game Of Thrones S8 episode "Battle Of Winterfell" first aired on TV, lots of people complained about the blockiness and lack of color depth. It even got mainstream media coverage. When S8 came out on Blu-ray, this episode was way better in all areas.
64 bits, 32 bits, 16 bits, 8 bits, 4 bits, 2 bits, 1 bit, half a bit, a quarter of a bit
At 2 bit, the shades of gray:
"Maybe 50 shades of grey?" how dare you editor! >:V
2-bit color (or "colour", if you must!) would actually be 4 shades of gray (or again, "grey" if you speak the Queen's English).
Great topic and brought something new to the table for me.
64 bits 32 bits 16 bits 8 bits 4 bits 2 bits 1 BIT QUARTER BIT
half bit quarter bit THE WRIST GAAAAAAAAAAAME!!!!!!
Brilliant video! Can you help? I have a GIF still at 64x64 but I get an error saying too many colours, max 128, and I don't know how to reduce the colours. Please can you help!..
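Not sure which tool is giving you that error, but if you can run Python with the Pillow library, something like this will usually knock a GIF down to a 128-colour palette (a sketch; the file names are made up, adjust to yours):

```python
# Reduce a GIF's palette to at most 128 colours using Pillow.
from PIL import Image

img = Image.open("icon.gif").convert("RGB")  # flatten to plain RGB first
reduced = img.quantize(colors=128)           # rebuild with a palette of up to 128 entries
reduced.save("icon_128.gif")
```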
64 bits, 32 bit 16 bITS 8 BITS 4 BITS 2BITS 1BIT
What are these high quality codecs apart from standard H.264? What codecs do Blu-rays use? Where can you get raw uncompressed content? I assume Blu-rays usually have a higher bitrate than YouTube will allow? What's the highest bitrate and best codec you can use on YouTube, or is it just one standard?
Well 50 shades of gray would have worked for the sponsor spot, just that it would be a little inappropriate!!!
oops.. yeah 50... now I just feel silly...I'll edit it for fidelity.
I suppose that just goes to show I'm not so depraved/curious enough to really pay attention to it!
The Gentleman Sandwich grey?
Actually both are correct, however 'gray' is more commonly used in American English.*
Most other varieties of English use the spelling 'grey'. So if you want to place me remotely in the world based on my comment go ahead! :)
*I knew both were correct, however I didn't know the geographical implications until I used my google-fu for fact checking. link: grammarist.com/spelling/gray-grey/
The Gentleman Sandwich oh yeh I remember that xd I like how it's most commonly used in American English, but if you put grey it would still work in an essay xd
I have a TN panel and I could easily tell the difference between all of the colour depths you showcased, even from 8-bit to what you usually have. And that's through a YouTube-compressed video. My monitor wasn't very expensive at all, so I guess most people will have hardware to match almost any colour depth (although colour fidelity might be a problem).
who is here in 2021 cuz of youtube