who is here in 2024?
Me, don't have a gaming PC but I'm about to get one
Here in 2035.
I’m here
Here in 2217
@@kylecribbon2532 same
in one ear out the other
Ok now i feel better
Hahaha best comment
Indeed brother.
I am not going to mess with those 666 likes XD
Loser
I still don't know wtf I'm doing
42ang you can look up benchmarks for the games you want to play and just buy the card that fits your budget and game
I feel the same....I thought this was "the basics" too many big words...lol
Take your time
@@kiddrebelamv3599 Ask me any questions you have, I'm still pretty new but I may be able to clear some confusion.
@@shade0636 GeForce GTX 780 Ti, is this good?
*_-"How to download graphics card "-_*
Literally me running games on an old laptop
Buy APU - Graphics card is free
download lower graphics
420th like
Just pay to amazon and that's it Graphic card at you doorstep
can't wait for the 1080ti so I can play Minecraft at 1080p 60fps!!
Tbh it's not funny at all
You don't play games in Hz. It's FPS.
Whypah ~ LeaN You should really look up what the unit Hz stands for.
Sarcasm. Is not a new concept guys.
With or without shaders?
...ah, so your recommended feed brought you here too? You already watch Linus. You're familiar with Riley from his NCIX days. You already know a bunch about GPUs. But you watch anyway, as you are a slave to the algorithm.
Ahhh yes.
Am I so predictable?
True
I honestly don't know anything about CUDA and clicked on the video because more knowledge doesn't hurt.
I don't know how I should feel now...
Okay now explain this to me as if I was a 5 year old 👁️👄👁️
😂
why was I recommended this now? also RIP ncix
It's the algorithm, you dunce. Stop asking the same dumb question a million times.
I'm only here to see how Riley was before techlinked
Muhammad hiari same lmao
What why is youtube recommending this to me? 🤣
Idk but young Riley is hilarious to see 😂😂
Did he just say 22nm process?! *TRIGGERED*
I'm 12, and I got made fun of for not knowing stuff about Pcs yesterday at best buy, so here I am...
Wow, what an insecure loser that worker (I assume) must be.
I've never met anyone at a Best Buy that knew anything about computers so don't worry, on top of being insecure, they were frauds :-)
Still confused as heck
Your definition of thermal throttling is wrong. Thermal throttling is when the card drops below its BASE clock, not its maximum BOOST clock. GPU Boost will push a card as far as it can go with the available power until a thermal limit is reached. If it hits that thermal barrier, it will slightly decrease the boost until temps drop back in line and then boost back up again. Sounds similar to thermal throttling, but it isn't. That's just the way GPU Boost works.
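(If it helps, here's the same distinction as a rough code sketch. The clock numbers, temperature limit, and step sizes are made up for illustration; this is not Nvidia's actual algorithm.)

```python
# Illustrative only: invented numbers, not Nvidia's real GPU Boost logic.
BASE_CLOCK = 1506   # MHz, the guaranteed base clock
MAX_BOOST  = 1900   # MHz, the opportunistic ceiling
TEMP_LIMIT = 83     # degrees C

def next_clock(clock_mhz, temp_c):
    if temp_c < TEMP_LIMIT and clock_mhz < MAX_BOOST:
        return clock_mhz + 13    # GPU Boost: creep upward while there's thermal headroom
    if temp_c >= TEMP_LIMIT and clock_mhz > BASE_CLOCK:
        return clock_mhz - 13    # Boost backing off toward base clock -- still NOT throttling
    if temp_c >= TEMP_LIMIT and clock_mhz <= BASE_CLOCK:
        return clock_mhz - 100   # only dropping BELOW the base clock is thermal throttling
    return clock_mhz

print(next_clock(1880, 85))  # 1867 -> boost easing off, normal behaviour
print(next_clock(1506, 90))  # 1406 -> genuine thermal throttling
```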
Thanks for the info. Do I need separate cooling for it, even though it has two fans, so that it won't hit the thermal barrier even if I use it for 12 to 14 hours at around 34 degrees Celsius?
Rice noodles 4 years later lmao
@@ricenoodles8024 no, you won't need anything else if your card is running at 34 °C
You almost got it right at the end, but GPUs went from 28 nm (Nvidia and AMD), not 22 nm, to 16 and 14 nm. Haswell, for example, was 22 nm, but that's on the CPU side, not the GPU side.
VRAM doesn't matter. You can always download some if you need more.
RIP In Peace
i downloaded some a few weeks ago, it came in a nice protective box after 3-5 business days ;)
Can i also download more power supply?
+Threebow™ auto correct
True story actually: when I was very, very new to computers (I'm talking like 10 years old) and didn't know much about computer hardware, I thought RAM was the main component that did all the 3D and rendering work, so naturally I thought adding more to the computer would increase performance. And naturally, I thought I could just download it, as I hadn't learned about building a PC before then. Went online, downloaded a program, aaaaand it was a virus that fucked the computer.
Thank you for making this video! I'm still relatively new to learning about computer specs but you help give a basic rundown of what some of these components mean. Your humor helps keep me engaged, and provides a better understanding of what you're trying to explain. You also offer sound advice and make me want to pursue further knowledge so that I can hopefully buy the computer I want. Again, thank you!
Stop d riding
Measuring TDP in Watts isn't that weird, since heat is measured in Joules and a Watt is defined as one Joule per second. So it's a number telling you the maximum amount of heat dissipated per second.
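A quick worked example of the units (assuming a hypothetical 180 W TDP):

```latex
1\,\mathrm{W} = 1\,\mathrm{J/s}
\qquad\Rightarrow\qquad
180\,\mathrm{W} \times 3600\,\mathrm{s} = 648{,}000\,\mathrm{J} = 648\,\mathrm{kJ}\ \text{of heat per hour}
```

So a 180 W card can dump roughly 648 kJ of heat into your case every hour, and that's what the cooler has to move out.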
I'm so happy he puts way more of his personality into his current videos :)
Haha 2 years ago
@@Noodle_Fr Haha 5 months ago
@@oshane7783 haha 1 year ago
@@Vibricks Haha 9 months ago
i feel like if he got rid of his glasses and had a decent hair cut and style he would be hot as hell (no homo)
CR7NEYMAR I like the glasses and his hair is beautiful
yes you are
Yall gay
CR7NEYMAR bruh you sus
He is totally my type rn
Okay so all I heard was “the gpl 52817 slr helps the 82 hsv o926 card but depends because if games can’t run jsg628 or kal 973 then it’ll be hard to play”
Wow, this guy looks so much like Riley from Linus tech tips!
Uummm... the last gen of CPUs used 22nm, but GPUs used 28nm. Since you were talking about GPUs, it's kind of misleading to show a CPU chart.
and the last gen of GPUs didn't use 22nm.. they used 28nm
Agreed!
Though the process of shrinking the node is kind of the same in both CPUs and GPUs, and the chips themselves are not that different, with the exception that the CPU is made to be able to handle a variety of different kinds of calculations while the GPU is made to only handle a few kinds.
Plus, I think that most who watch this don't know the history of the 80286 and chips from around that era. That 80486 DX-2 though. Beast back in its day! :D
ProfessorSlash I wanna say the main difference was that Intel was the only manufacturer that could get 22nm to work well enough for production; that's what I heard. Maxwell and AMD's last gen are both advertised as 28nm. I think too many people look solely at the node size to judge how fast a chip will be and don't pay enough attention to architecture improvements, because they don't fully understand that part... not that that isn't kinda key to Nvidia's and Intel's success. If everybody understood the architecture improvements, they'd use them as well, and everybody would be as fast as them.
Chris Rowlison Yeah, I've heard something about it not being stable and putting out too much heat on such a small area that lower than 28nm was a problem on GPU cores.
I've also heard that this finFET technology that's being used is about as effective as a "normal" 22nm core. Not at all sure about the validity of those claims though.
Yeah, about the specs. People should really look at the performance instead.
In almost EVERY way, the RX 480 should be faster than the GTX 1060. The only thing that the latter has is more ROPs.
This may be because Nvidia is really good at using the hardware to its theoretical max (maybe the tile-based rasterization that's going around).
Either way, it'd be really interesting if the Radeon group could push performance relative to specs to the extent Nvidia has. It would also be great for us consumers.
+ProfessorSlash In Vulkan and DirectX 12 titles they are really close. Nvidia probably has the edge because developers tend to optimize for Nvidia, and Nvidia spends more on driver optimization.
I remember when I first got into computers, really looking at specs like ROPs, memory bandwidth, and stream processors. Nowadays I just watch in-game benchmarks, because that's what it really boils down to. I also loved Tiger Direct's YouTube series back in like 2008-09 with Logan; it was like the NCIX of the previous decade. Nostalgic.
Did this randomly pop up in anyone else's recommended?
yep
Video starts @ 3:06
ah, look at the boy. still young... how ltt sent him down the rabbit hole, so tragic...
You're saying way too many things, I just want to know which card to get so Overwatch doesn't look like shit on my laptop
Cesar Ordonez bruh these cards are too big to fit inside a laptop dumbass these are for pcs
Jayden Milam no shit bitch, can you not see my lack of knowledge by my previous post
action jackson i have a gtx 240 and i can run overwatch perfectly
Overwatch looks like shit regardless
on a laptop? why waste your money on a laptop you can't upgrade...? 2 grand for a gaming laptop, get a gaming PC for half that price and you can get better parts in 2-3 years when they become obsolete.
cant you just download VRAM just like you could with regular RAM?
YES, but it takes longer to download that "gaming" type.
Yes, but it's harder to find on the internet, and some links are malware :(
Sure you can, takes a while to find working links but I just downloaded extra 8 gb onto my gtx 970 and it's great!
Yes
DownloadMoreVram.com/NotA100%Scam
As long as you're running your Ethernet cable through your printer, you'll be fine.
I'd love to see similar videos about the other components!
Love these kinda videos with definitions and lots of tech info stuff. I would love to see a maybe more complex and advanced video if these are just the basics :3
The first episode of NCIX's new show: QuickTechie.
And it's totally not a copy of some other channel on the Internet by someone who used to work here whose name is pretty similar to QuickTechie, right?
@@tylerweigand8875 wooosh
@@DionMango mate did you really woosh me three years later when I wasn't even wooshed
any link to the site where you download RAM?
Fact: U didn't search for this
But I did need it
What? Why?
False fact lol
what kind of grafix card do you need to play a PlayStation 2 emulator?
Most Emulators use CPU power more than GPU power
Your hair looks really stylish :D
Flashlight _ thanks so much!
asymmetrical, not nice
Sorry, sorta unrelated question, what are those glasses?
vertebral like Ray-Ban with custom details, idk either
Last generation was 28nm not 22nm
If a GTX 950 or 960 will utilize 100% of its potential in a PCIe 3.0 slot, then how much less would it perform in PCIe 1.0? E.g. 50% or 75% of its normal total? I'm asking because I need to upgrade my current shitty CyberPowerPC's Radeon HD 6450 sooner than a whole new system next year with Zen.
I really liked the vid, you explain everything well (before this, I had no idea what a GPU/video card was) and it's also funny, so good combo (yes, I subbed)!
This was dope! Even for people who have some idea of what's happening, this is still useful.
I'd have liked more explanation as to why SPUs, ROPs and TAUs don't matter when comparing graphics cards from different architectures/generations.
Imagine you have a gun with 500 bullets, but you suck at firing guns. Your friend has a gun with only 100 bullets, but he's trained with it longer. You have more bullets, but he's more likely to hit the target. Different architectures handle the specs differently, and newer ones are more efficient, so even if a newer card has worse SPU, ROP, or TAU specs, it could perform better.
marcus kapoor
but how? That's what I wanna know. I mean, where does the efficiency actually come from?
Shorter paths on a smaller process mean less time for electrical signals to travel, which allows higher possible clock speeds. There are lots of factors, but this is a major one for moving to a smaller process.
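Rough back-of-the-envelope numbers (assuming signals propagate at about half the speed of light in the wiring):

```latex
t_{\text{cycle}} = \frac{1}{1.5\ \mathrm{GHz}} \approx 0.67\ \mathrm{ns},
\qquad
d \approx 1.5\times 10^{8}\ \mathrm{m/s} \times 0.67\ \mathrm{ns} \approx 10\ \mathrm{cm}
```

So even at electrical signal speeds, one 1.5 GHz cycle only buys you about 10 cm of travel, and most of each cycle is spent on transistor switching along the path; shorter paths leave more headroom for higher clocks.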
Finding a faster way of doing something. Say you want to know if 34 is divisible by 2. Well, you can keep subtracting 2 and see if it reaches 0, or you can just check whether the last digit is even (sketched in code below). This is similar to how they make architectures more efficient: they find a faster and more efficient way of doing the same work.
If you want more depth than that, then I'm afraid you'll have to become an electrical engineer and programmer and join Nvidia, AMD, or Intel.
There are other factors at play here too, like shorter paths between circuits, as another commenter mentioned.
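For what it's worth, here's that divisibility example written out as a trivial sketch, just to show the slow vs. fast approach side by side:

```python
def divisible_by_2_slow(n):
    # Keep subtracting 2 until you can't; takes roughly n/2 steps.
    while n > 1:
        n -= 2
    return n == 0

def divisible_by_2_fast(n):
    # Just look at the last digit; one step no matter how big n is.
    return n % 10 in (0, 2, 4, 6, 8)

print(divisible_by_2_slow(34), divisible_by_2_fast(34))  # True True
```

Same answer both ways, one just gets there with far less work; that's the spirit of an architectural improvement.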
PENDANTturnips
Yes but that offers zero insight whatsoever, you just basically tried to explain efficiency with an unrelated analogy. I don't know much about hardware but I am a software engineer btw.
Is an Nvidia 1050 Ti compatible with the MSI Gaming X470 motherboard and an AMD 2700X?
:) thanks. I wanna start animating
wtf... I open this video and it has 3 dislikes, 2 likes
linus logan and JZ :)
actually they're all Linus.
Sony Microsoft & Nintendo
this is meta 'first'
Him in 2017 talking about Nvidia's 980 and 1080, when in 2021 they're on the 3060!!!
True king of nerds, great vids for nerdians. I got lost in the 3 minute play so..... yeah.
Why the hell did YouTube recommend me this 2 year old video?
I have been running 1440p for over 2 years and nothing has ever used over 4gb yet.
Try the ultra HD texture pack in Shadow of Mordor with all settings on max at 1440p, then come back.
I have and I did.
It will only use what you have. There are definitely games that use more than 4GB.
Agreed, but none that require more than that to run just fine. Yes, with more it can run better, but 4GB will deliver a good gaming experience. Running everything at max is not the only way to game.
So you didn't play a lot of games. GTA V, for example, uses a lot of VRAM.
His descriptions seemed very graphic to me as he chipped away at the important details. He cuda added some humor for his fans, however. I'm just PCI-expressing my opinion here, please don't throttle me.
I have never seen him so serious...
Well they're gpu's and you have to be pretty serious about them because you know... Ok i dont know what the fuck im talking about
What's the difference between memory bus width and bandwidth? I had followed this up until this point.
Please do more explained videos
you sound and look like milk from the new season of big mouth
Better than Techquickie, only because it's straightforward. TOO MANY FARTS, DENIS!!!!
Also without sponsored ads!
This is just NCIX realizing that Techquickie is more popular and trying to copy them.
But they do better.
I agree
Agreed, better than Techquickie, but designed for a more advanced audience.
The video thumbnail looks like someone trying to poop after having spicy food 😅
I like your humor, it's like watching The Big Bang Theory, except it's not funny.
so, exactly like watching The Big Bang Theory.
^Savage.
Big Bang theory sucks
ikr
The cancer ward at a children's hospital is funnier than The Big Bang Theory.
7:08 Last gen was 28nm (Nvidia's Maxwell and AMD's Radeon 300 series), not 22.
What if my card has 256 MB of GDDR2? You only mentioned GDDR5.
get another one
It means that your card is old af. DDR2 (plain DDR2, not GDDR2, on a lot of those cards) was standard back before AMD bought ATI.
Your GPU is basically old as hell and you need to upgrade either your GPU or your entire rig.
@needskillnow
I think he either knows that or he's just trolling by not mentioning it.
V-RAM: DDR1 -> DDR2 -> DDR3 (early/cheap cards even after GDDR5 launched) -> GDDR4 (mostly skipped) -> GDDR5 (the current mainstay; will be for budget cards in the future once GDDR5X & HBM2 take off for high-end cards) -> GDDR5X & HBM/HBM2.
SysRAM: DDR1 -> DDR2 -> DDR3 -> DDR4.
I’m even more lost by watching this Video.
how to get infinite gold stars: watch the video once, learn it, and watch it again
Did this video cover what the numbers mean? 2045? 3090? 4090? I didn't catch that, and that was basically all I wanted to know.
The first two numbers indicate the generation of the card, for example:
(The early GeForce cards are very old by now and aren't really important, but they should still be mentioned.) I'm gonna name one GPU from every generation, mostly the strongest, fastest ones, just for simplicity's sake.
GeForce 2, (GeForce2 Ultra)
GeForce 3, (GeForce3 Ti500)
GeForce 4, (GeForce4 Ti4600)
GeForce FX, (GeForce PCX 5950)
GeForce 6, (GeForce 6800 Ultra)
GeForce 7, (GeForce 7950)
GeForce 8, (GeForce 8800 Ultra)
GeForce 9, (GeForce 9800 GTX+)
After the GeForce 9 (2008) they started the 200 series, which used a different naming system that is still in use today, and after this point they started using some new suffixes for their GPUs, like Ti and Super. The Ti suffix means it is the best version of that card; for example, you have a GTX1660 and a GTX1660Ti, and the Ti version is better than the non-Ti version. You also have the Super suffix, which is less common, and sits in between the stock model and the Ti model.
200 series, (GTX 285)
300 series, (mostly found in laptops) (GeForce GT 340)
400 series, (GTX480)
500 series, (GTX590)
600 series, (GTX690)
700 series, (GTX780Ti) (not mentioning the TITAN cards again for simplicity sake)
800 series (only found in laptops), (GTX 880M)
900 series, (GTX980Ti)
10 series, (GTX1080Ti)
16 series, (GTX1660Ti)
20 series, (RTX2080Ti)
30 series, (RTX3090)
40 series, (RTX4090)
TITAN cards were released between the 700 series and the 20 series. They're not really gaming cards, more workstation cards; they can still be used for gaming but will show similar performance to the high-end non-TITAN cards. The TITAN family includes:
GTX Titan, released in 2013
GTX Titan Black, released in February 2014
GTX Titan Z, released in March 2014
GTX Titan X, released in 2015
Titan X (2016), released in 2016
Titan XP, released in April 2017
Titan V, released in December 2017
Titan RTX, released in 2018
The last two numbers indicate the class of the card, with 50-70 being entry-level to midrange and 80-90 being the high-end powerhouses.
For example, the GTX1050 is an entry-level to mid-range card, while the GTX1080Ti was a high-end card.
You also have the GT cards, which aren't really meant for gaming, mostly office use and web browsing, for example the GT710 or the GT1030.
That's about it, but remember, that is only Nvidia cards; you also have AMD, which has a whole different and a lot more complicated naming system. (If it helps, there's a tiny code sketch below that pulls these pieces out of a modern model name.)
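To make the pattern above concrete, here's a tiny illustrative sketch; it only understands the modern four-digit GTX/RTX names (10 series onward), nothing older:

```python
import re

def parse_geforce(name):
    # Handles only "GTX/RTX <4 digits> [Ti|Super]" names, e.g. "RTX 3090" or "GTX 1660 Ti".
    m = re.match(r"(GTX|RTX)\s*(\d{2})(\d{2})\s*(Ti|Super)?$", name.strip(), re.IGNORECASE)
    if not m:
        return None
    prefix, generation, tier, variant = m.groups()
    return {
        "generation": generation,      # first two digits: 10, 16, 20, 30, 40...
        "class": tier,                 # last two digits: 50-70 entry/midrange, 80-90 high end
        "variant": variant or "base",  # Ti / Super / plain model
    }

print(parse_geforce("RTX 3090"))     # {'generation': '30', 'class': '90', 'variant': 'base'}
print(parse_geforce("GTX 1660 Ti"))  # {'generation': '16', 'class': '60', 'variant': 'Ti'}
```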
Last Gen was 28nm.
knob
So there is an hdmi port on my motherboard and my gpu so which one do I use?
Awesome video, will be really helpful for lots of people!
What about asynchronous compute, 64-, 32-, and 16-bit floating point, and OpenCL?
I'll just get an xbox
gay
Ultra gay
gay
gay
U gay..
I came here for the quick bits
Great job! Pascal powered Oratory....
who else wants to download a GPU?
ty youtube algorithm
*slams fist on table* Speak English damn it!
😂 just search for benchmarks
Umm... ok but what IS a graphics card
This is before Riley learns to drop stuff
What Riley also worked at ncix
the „Tech Tips” part from „NCIX Tech Tips” is back! :D
this guy looks like he should work for Linus
What I understood: the GPU is a processing unit
You might think they're the same GPU die, but they're not actually the same. GPUs are classified by ASIC quality. A chip's ASIC quality helps determine which model it becomes, for example GTX 970 vs GTX 980. Higher ASIC quality lets the transistors work reliably at higher clocks, or with more ROPs, shaders ("CUDA cores"), and memory bandwidth lanes enabled.
If you try to install a different GPU's vBIOS, two things can happen: it will crash, or it will burn out ("GTX 970 vs GTX 980"), even if you try to undervolt, underpower, or underclock it. It will die or crash.
The same applies to CPUs, but unfortunately Intel doesn't want to release its ASIC quality data.
It feels so weird stumbling across an ncix video randomly
What are the bits (128, 256, 384, etc.) in video cards?
F for our fallen bretheren.
Recommended in 2020 gang
Can you do a video where literally every word is defined. Even the words in the definitions
The company that made this is dead 💀
Riley before LTT wow
rip NCIX. They went bankrupt because they lost Linus.
R.I.P NCIX :(
Notification squad..
Where you at?
Can someone explain why there are so many versions of like 1 graphics card? Like MSI 1080, Asus 1080, EVGA 1080, Zotac 1080, etc. What's the difference?
How many channels does Linus media have?
This is the old channel NCIX gone bankrupt
2500th like :)
6:44 It's not manufacturing process, it's the manufacturing process node.
ROPs matter, especially as GPU architectures evolve into tile-based rendering designs. Be careful with what you trivialize.
6:33 well there goes the law of conservation of energy, just like my understanding of graphics cards
Sorry, I don't click on Linus videos anymore. I understand ads, and making your living on YouTube, but there's a limit to what your audience will put up with.
why am I here?
Great video, good info... but what's up with the over-$100 markup on the cards on the website? The 1060 SC single-fan GPU is $250, and that one sure is selling for $358. I'll just wait till they're in stock at Newegg.
Right when he said “memory bandwidth”, my phone lost connection.
Apparently I do not know the basics cause I was the one who requested this video. Thank You NCIX Tech Tips