Poor manufacturing and design.. and greedy companies.. Sony just took 8K off the packaging for the PS5, which can barely do 4K with upscaling.. motherboard manufacturers put temperature and overvoltage protection on some components and the CPU to prevent them burning out... not the shitty video cards, because nobody can break the Nvidia monopoly..!
Absolutely with you on that. I think the consoles are going to suffer most because of poor maintenance, especially with the liquid metal only supposed to last 5 years or so, but it could also be a good thing for repair businesses I guess.
In theory the lower-end cards shouldn't have these problems as much as the high-end ones, as long as the designs are SOMEWHAT decent.
Nvidia monopoly 😂 blud, the market now prefers AMD cards over Nvidia for value per performance.
😂😂
I think some corners were cut with some Ampere PCBs to protect margins and a few were dodgy. Consoles have always been marginal on cooling - Xbox's ring of death etc. springs to mind - so it wouldn't surprise me if this gen also had issues.
Overclocking is the main issue, that very stupid thing. And overclocking was never meant to be permanent, but the majority run an overclock every single second the machine is on.
Nice summary of the failure modes for gpus!
I didn't see a mention of physical damage from a warped PCB due to how heavy the cards are. That cracks solder joints and accelerates solder wear. Unleaded solder is particularly prone to this, as it's considerably more brittle than leaded. Can't 100% prove this, but it does seem to be a thing going by repair channels and what repairs typically get done. It's always the bottom part of the core and memory that has corrosion/ripped pads.
You are correct, I did not mention physical damage, but I think I may do a video about that in combination with building a GPU support bracket that can be 3D printed.
My broseph thought my pc's default was to be as loud as his ps4... He didn't realize I have a choice in making noise. His HAS TO, otherwise it fries itself.
If the cooling setup is configured correctly, most PCs can be pretty quiet, especially with modern hardware!
@@limitless-hardware1866 I know. I just like seein lower numbers on the temps.
Usually cards tend to fail due to user error, a dying GPU, or indeed prolonged overheating of some of the components on the card, especially when the card manufacturer decides to cheap out on some of them (a "GPU" is much more than just the core).
Modern cards are pretty smart and will run within specification at all times. I haven't seen any 40 series or RDNA 2/3 card that hits junction temperature at stock.
Like with every product there is always a failure rate, expressed in %. I highly doubt it is higher these days than before, though. The 40 series had some major (while rare) issues with its new 12-pin power connector (called 12VHPWR).
it's not THAT rare a failure, sadly....
@@vladvah77 I have no data, by consequence I can't really comment.
Might not be that rare... NVIDIA might not care much about the GPU division and not mind a higher failure rate than normal, even though it costs them a lot.
This is completely true, although not really user error in that sense, as most users just don't know any better and, even if they clean their card, never change the thermal paste because they think "it's ok"
@@limitless-hardware1866 Well, user error in the broad sense.
I got an example:
A few years back I was a happy owner of a GTX 770 4GB; that card did tend to get quite hot. Throughout the first years of the card's lifespan I didn't know much about hardware, and the card worked like a charm until I got into benchmarking and taking note of different metrics.
As I saw how hot the card got, especially the VRAM, I decided to change the pads and thermal paste. Changing the thermal paste worked like a charm; sadly I probably messed up with the pads I got (they were probably too thick, even though they shouldn't have been).
I broke or caused damage to the VRAM.
User error can also just be the eager willingness of the user to fix what isn't broken.
Another example would be the case of the 12VHPWR connector, where people seemed not to clip it correctly into the socket.
Some long-term hardware failures might of course be caused by negligence because the user doesn't know better. In my personal experience that seems only to be the case in laptops though; dry thermal paste and clogged-up fans do kill hardware. How often will that happen on a desktop though (other than after 5-10 years of frequent use)?
You would think that cards are smart, and then you have something like "Silent gaming", which starts the fans only when the temp is 65C or something, which is crazy. The fans are pretty "silent" under 60% of max speed anyway, so why the need to "silence" them? First thing I did was make my own fan curve, UV + OC. And now my temps are 56-58C at full load. But I could cook an omelette on my card with the stock technology called "Silent gaming". So how many of these "smart" technologies are there? lol
My 1080 is a tank. Still going strong
so is my 70. lets see what 50 series lower cards will offer.
So is my gtx 1060 6gb and i expect my kids will inherit it lol (temps are low 60s as far as i remember and noise is low)
My fury X rocks on 45C full load 😉
The 1080 and 1080 Ti were always tanks; it's crazy to think how good they were for their time and how good they still are.
That's why I set the power limit of my 4080 from 350 to 250 W. I lost about 5% performance, but the card runs so much cooler and quieter. And I don't have to worry about the burning cable thing.
My method of GPU preservation is to limit the FPS to 120 and lowering the voltage.
This is a perfect FPS for my games and the GPU is only at 60% to 80% utilization.
Correct, that's one way to do it. The other would be to reduce the power limit while increasing the clock offset a bit.
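For anyone who would rather script this than click through Afterburner, here is a minimal sketch of lowering the power limit with the pynvml bindings (assuming the nvidia-ml-py package is installed and the script runs with admin/root rights; the 250 W target is just an example, the same idea as dropping a 4080 from 350 to 250 W, and roughly what `nvidia-smi -pl 250` does):

```python
# Minimal sketch: lower the board power limit of the first NVIDIA GPU.
# Assumes nvidia-ml-py (pynvml) is installed; setting the limit needs admin/root.
import pynvml

TARGET_WATTS = 250  # example target, e.g. a 4080 dropped from ~350 W

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
    print(f"Default limit: {default_mw / 1000:.0f} W")

    # NVML works in milliwatts; the driver clamps the value to the card's allowed range.
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, TARGET_WATTS * 1000)

    new_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
    print(f"New limit: {new_mw / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```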
I always adjust the fan curve on my graphics cards. I find the defaults have become too conservative. In the past, GPU fans would sound like a jet taking off, but these days they're very quiet. This is nice but results in very high temperatures. I'd rather deal with a bit more noise if it means my GPU lasts a few more years. Plus it results in higher boost speeds even without overclocking.
Totally understand you, but compared to about ten years ago, temperatures today are pretty low. Looking back at 2014 for example, an R9 290X was designed to run at 95 degrees Celsius under load at all times.
Let's also not fail to mention that certain manufacturers (looking at you, Gigabyte) do not put any fuses on their GPU boards, which allows the most sensitive components on the board to burn to a crisp if there are voltage anomalies.
True, but I don't know if that happens as often; most of the time those fuses aren't actually doing anything for users, as most defective cards just get sent in for warranty or thrown away.
The thing is, it would save valuable resources, as repair in general would be easier and more cost effective.
Something that wasn't mentioned: the bigger cards sag in the PCIe slot if mounted horizontally, which can crack the card's PCB, and that is super hard to repair.
Absolutely correct!!! I had forgotten that and it DEFINITELY is an issue!
Do not undervolt a 4090. Lower the power limit instead. The 40 series behaves very differently from previous generations. When they run at lower voltages they will lose performance even with a higher core clock. Lowering the power limit will give you the same effect of reducing power but is much easier to do. The power limit also has the benefit of setting a max power draw, whereas an undervolt can still draw 450-600 W under the right workload. See the Optimum video about undervolting a 4090 for benchmarks.
I have tested it; it depends heavily. Obviously you have to look at the performance to see how much it loses, but I have tried undervolting a 4090, or rather lowering the power limit and then increasing clocks, and the 4090 was still able to achieve 90 percent of its performance at 350 watts in power-heavy games or benchmarks.
I have a 4080S, I did UV (950 mV) + OC (core: +270 MHz, VRAM: +1500 MHz) - it does have the same performance as stock settings, I checked the difference in game with RivaTuner. The power draw at full load is 220-230 W and the temp 56-58 C. So no, I heavily recommend UV + OC. Idk, maybe it's just something wrong with the 4090.
These issues with the 4080 and 4090 are why I went with a 4070 Ti Super.
Still on a 1080ti and water cooling it from day 1 and she's happy :)
Most of the performance gains going from a 2080 to a 3080 were from the increased power usage. Thankfully the 40 series is noticeably more efficient than the 30 series.
On my 4080 I just set my power limit to 90%, increased the core clock by 80 MHz and the RAM by 800, and it's been brilliant. It uses less than 300 watts under full load and performs better than stock while staying nice and cool.
I have a 4080S, I did UV (950 mV) + OC (core: +270 MHz, VRAM: +1500 MHz) - it has the same performance as stock settings, I checked the difference in game with RivaTuner. The power draw at full load is 220-230 W and the temp 56-58 C. Super and non-Super are pretty much the same anyway, I think you can tweak your card better. Try my settings, via the curve, not via power limiting in %.
Come on engineers! Build a complete GPU SoC with 32 GB of embedded VRAM. One product line. Keep the power draw under 70 watts. Perhaps split the SoC so DirectX and OpenGL get their own optimized pipelines, shutting down unused logic. Make the bus structure very wide, so there's less need for higher clocks, keeping thermals low. Work in some virtual memory that won't crash the whole system. Build one product that fits all, with the ability to swap out the GPU SoC via some type of optical socket. Use optical bus signaling on the PCB to reduce copper trace usage.
HBM VRAM. 😂
Thanks for the info. I have a question about the Asus ProArt series: they use low-profile coolers, and I assume they would be affected more, right?
got xtx nitro+ and i don't want to talk about power consumption lol.
Those cards can't get enough power - if I got one I'd be throwing 500W into it 😂 , which my case can't deal with so I got a 4080 Super instead which I've pushed 370W into.
I don't think enough people are talking about the HUGE jump in power from the 2000 to the 4000 series, it literally doubled. Performance has stayed pretty linear when you adjust for power. A 250W 40 series card performs very similar to the top of the line 250w 2080ti or 3080. And of course double the power needs double the cooling and mass.
This is true, as I elaborated in the video. Electromigration because of high power draw is a massive problem; although voltages dropped, due to the high number of cores the amperage drawn is ridiculous a lot of the time.
BUT! If you undervolt a 4090 to like 300-350 watts, it can still provide 80-90 percent of the performance. So in theory, when tweaking the card yourself, you can gain a LOT from it!
Just not true. I owned a 3080 and now a 4080 super.
The 3080 pulled about 360 watts.
My 4080 super pulls about 310 and is about 40 percent faster across the board.
It's at least 50 percent more efficient watt for watt.
Although some might find the gains disappointing, they do add up over the generations.
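Just to put the arithmetic behind that watt-for-watt claim, using the figures quoted in the comment above (a 3080 at about 360 W and a 4080 Super roughly 40 percent faster at about 310 W), a quick sketch:

```python
# Rough performance-per-watt comparison using the figures quoted above
# (3080 at ~360 W, 4080 Super ~40% faster at ~310 W). Example numbers only.
def perf_per_watt_gain(perf_ratio: float, old_watts: float, new_watts: float) -> float:
    """Relative gain in performance per watt (0.5 means 50% more efficient)."""
    return perf_ratio * (old_watts / new_watts) - 1.0

gain = perf_per_watt_gain(perf_ratio=1.40, old_watts=360, new_watts=310)
print(f"~{gain:.0%} more performance per watt")  # ~63% with these numbers
```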
This makes absolutely no sense. An RTX 4070 will outperform a 3070 Ti by 20% while using like 130 fewer watts.
That's like saying the 2080 Ti has similar performance to the 4070 Ti; both cards consume around the same power.
I'd say that's true for Ampere but not Lovelace; a 4070 Ti Super crushes a 2080 Ti for similar power consumption.
The electromigration angle I've not come across before. With newer 30/40 series cards I'm a little skeptical - I've not heard much else about the cores dying outright on these. If anything it's the PCBs getting warped and damaged solder balls/traces. On the CPU side I did think it was a bit much sending VIDs of 1400 mV down a 10nm process on a 13900K when we did similar with the old Core 2 Quads on 65nm. But those seem to be holding out, as do Ryzens on 7nm and 14nm Intels. The newest Ryzens are designed for very high operating temps too. So I'm not sure electromigration is going to be a massive problem for those who don't send 900 W down their 4090s (I have seen a video here of someone doing exactly that, and the card dies pretty quickly 😂).
I do however have a very relevant story of a 2080 Ti dying recently, almost exactly 2 years after I bought it second hand.
The reason was that the memory controller basically cooked itself, with a part of the core near the edge not being cooled enough - but not near any sensors, so it wouldn't have shown up in the hotspot reading. I got this second hand towards the end of the Ethereum mining boom, so the previous owner may well have mined with it.
The other thing I noticed is that the default fan curve was extremely conservative and had the hotspot running to 90+C (in hindsight I should have repasted it, but it didn't seem too much higher than the core temp). I did overclock and max out the power budget on this card as I needed the performance. But I imagine this card was hammered with little cooling when it was mined with - there wouldn't be a fan profile for max memory load and min core load.
Wow I didn't know GPUs had 200 temp sensors. For that reason alone I will subscribe.
This is really scary.. a 4K IPS monitor + 4090 + 13700KF easily pulls at least 700 watts when gaming.. at this point not only the GPU but the whole PC might be dying
True that! BUT it's important to look at the power draw of each component. A 4070 for example only pulls like 220 watts, which is pretty efficient.
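If you want to check where your own card sits, here is a tiny sketch that logs board power and core temperature once a second (again assuming the nvidia-ml-py / pynvml package is installed; the values are simply whatever your driver reports):

```python
# Log board power draw and core temperature of the first NVIDIA GPU for ~10 seconds.
import time
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    for _ in range(10):
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # NVML reports milliwatts
        temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"{watts:6.1f} W   {temp_c} °C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```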
Wow.. great vid.. hopefully my 4070 Ti Super is OK.. the specs seem to indicate it is power efficient compared to a 3090 or 3080 for that matter.. my 3070 Ti likes to suck power too.. the price of performance is way too high.
Simply said, the cards are at their limit. 450 W already seems too much for reliability. Intel has a solution via ReBAR to pair the Arc with the CPU and double the performance. I would advise looking a little deeper into Intel Arc. If there is 60 FPS, the performance equals 120 FPS. You have two pieces of silicon. AMD/Nvidia are mostly standalone cards and the CPU plays a very small part. If you don't believe me, an A770 at 4K and 60 FPS gives you a better experience than a 4070 Ti at 120 FPS. Less waiting. My guess is Nvidia, looking to make their own CPUs, has this in mind.
Absolutely, they shouldn't have gone above like 300 watts; even that was a lot for the smaller coolers. Nowadays the coolers just get bigger and bigger to somewhat keep these GPUs cool. I think it just got kind of out of hand.
@@limitless-hardware1866 Intel's approach of splitting the tasks between two pieces of silicon seems logical. It doesn't show in the FPS count that much, but going from 1080p, where Intel has half the capped FPS in some games, to 4K where it does a better job, you have to realize there is something else going on when the Arc clearly gets more competitive at higher resolutions. Traditional FPS values seem to be misleading - almost like the tool that reports the FPS can't see that there are two pieces of silicon and only gives the half it can see.
4070 Ti: 12 GB VRAM, 192-bit bus, 285 W
A770: 16 GB VRAM, 256-bit bus, 225 W
The point really is that Arc 225 W + i7-13700K 253 W = 478 W all up. Wattage is usually a good indication of performance. Something to ponder, I guess.
I am turning off zero-RPM mode and using a custom fan curve to prevent this. Maybe it will shorten the life of the fans, but fans are hundreds of times cheaper than the GPU.
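A custom fan curve is really just a piecewise-linear map from temperature to fan duty, like the one Afterburner or Fan Control lets you draw. A minimal sketch of that idea (the breakpoints are made-up examples, not a recommendation, and a real tool would apply the result to the fan controller):

```python
# Piecewise-linear fan curve: temperature in °C -> fan duty in %.
# Breakpoints are example values only; note it never drops to 0% (no zero-RPM mode).
CURVE = [(30, 30), (50, 40), (65, 60), (75, 80), (85, 100)]

def fan_percent(temp_c: float) -> int:
    """Interpolate fan duty for a given temperature along the curve above."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, p0), (t1, p1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            frac = (temp_c - t0) / (t1 - t0)
            return round(p0 + frac * (p1 - p0))
    return CURVE[-1][1]

print(fan_percent(58))  # ~51% with these example breakpoints
```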
They went from leaded solder to unleaded, this is the main problem
When did that transition happen?
July 2006
Good to know. Yeah, back then the 8800 series was released and that series started to have massive issues, although those were also the first cards with relatively high power draw.
Use AMD. Break Nvidia monopoly.
AMD right now has better value in most cases too. It's strange that Nvidia still has such a big market share in comparison.
@@limitless-hardware1866 I think it boils down to driver support and optimisations. I am not a gamer; my rendering software is optimised only for the Nvidia platform. Also, history is the witness: AMD hasn't been very supportive with long-lasting driver updates.
Or used nvidia, your choice
Intel
Hey, nice to see you again after you quit at PCGH.
So is it mainly temperature that’s the issue or temperature and wattage? My 2080ti spikes above 250w in certain games, but almost always is below 60C under full load, and idles around 30C. Is that pretty safe or should I adjust the fans to allow it to idle at a higher temp in order to reduce temperature fluctuation? Is higher wattage always worse for silicon degradation even if temperatures are kept relatively low (below 80C)?
It's a combination of both, or one or the other. The silicon degradation happens mostly from high power draw through the GPU, via electromigration. Temperature accelerates that slightly. What high temps mostly cause are cracks in solder joints, especially when the difference between idle and load is really high, like 30 and 85 degrees or even more.
In addition, very heavy cards can sometimes also be a factor, as the PCB bends and that can lead to cracked joints too.
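For anyone curious about the electromigration part, the standard back-of-the-envelope model is Black's equation, which ties median time to failure to current density and temperature (sketched here as general background, not something quoted from the video):

```latex
% Black's equation: median time to failure (MTTF) of an interconnect under electromigration.
%   J   = current density, n = empirical exponent (often ~2)
%   E_a = activation energy, k = Boltzmann constant, T = absolute temperature
\[
  \mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)
\]
% More current through the same traces/vias and higher temperature both shrink the
% expected lifetime, matching the point above: power draw does most of the damage
% and temperature accelerates it.
```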
Temp yes, but also physics. Materials will have to change to get more performance, or liquid cooling will need to become the norm.
@@limitless-hardware1866 Thank you for the added detail. That makes a lot of sense.
The hype for small, micro, smallest, thin, ultra-thin brings that result.
Would repasting a GPU help?
Anything that reduces temperatures will help. And because manufacturers often use low-quality thermal paste aimed at a long service life, it's often a good idea.
I'm seeing more and more YouTubers using these mics in their hand? Is there a genuine reason or is it just a new image/fad? Legit question! Just curious!
I’m waiting for the 50 series to drop. I will wait for several months after that before I buy anything
That's good practice, as initial issues can be ironed out by then and a lot of manufacturers have newer revisions of their cards if issues come up.
Imagine spending 2k on a gpu, and it fails on you. Man, I would crash out
Oh so my 3070ti and 1080ti are chilling for now
I'd say the 3070 Ti is at less risk than the 1080 Ti, given the lower power consumption. Depending on the cooler, the 1080 Ti could benefit from lower temperatures, as in that generation stock coolers were kinda weak sauce - but at least they had a temp limit that was reasonable.
@@limitless-hardware1866 Honestly I've only ever gamed at 1080p, and maybe let the 3070 Ti try to push frames up to 360 for my 360 Hz monitor. But with the 1080 Ti being a blower-style card, I have to adjust the fan curve so it won't start dying so soon, and lock it to 144 fps in most of my 1080p games. Usually I find that as long as temps don't exceed 65 it hasn't degraded performance or started the dreaded coil whine yet.
@@pranavraval5282 What 1080 Ti card do you have, is it FE? Regardless, the 1080 Ti is the greatest card; sadly there are no more 600 USD top-of-the-line GPUs. Here I am in Sweden paying 400-450 USD for a 7600 XT smh.
@@Apalis Dang, luckily the used GPU market in the UK feels a lot cheaper most of the time; got my 1080 Ti for around 150 a few years back. It isn't FE, it's the MSI blower style. Too plasticky for my taste, but honestly it does the job great for the case it's stuck in.
My 3070ti is drawing 350-380 Watts lol@@limitless-hardware1866
What do you expect when they charge ridiculous prices? People will go broke and be full of regrets.
Well, the high prices nowadays are just unreasonable. 8-10 years ago the most expensive GPUs were 700 USD and that was really expensive. Charging triple that is just unreasonable. And it would be possible, especially for Nvidia, to offer these GPUs at a much lower price tag if they were willing to do so.
Any info on whether the problem also includes laptop GPUs? (I mean, their power limit is far lower.)
Laptop GPUs are pretty good power-wise, but I'm not sure how well they are cooled, especially the higher-end cards. If the laptop is set up well and the cards stay below an 85-degree core average then that should be OK, but a laptop 4090 might be a different story. ALTHOUGH laptop components in general seem to be having more issues, so I'm thinking the GPUs might not be the biggest issue in most cases.
@@limitless-hardware1866 My rtx 4080 laptop is kept pretty cool even under load. Seems that part is well designed on the scar 17.
@@limitless-hardware1866 The gpu doesn´t even reach 70 so far. usual during gaming session between 56-65 depending on load.
Laptop manufacturers historically cheap out on everything. Even high end laptops. There's a floor for how large a company needs to be to get into laptop manufacturing so it's pretty much medium to large corporations only. The companies which will do things to save 5 cents on a 1000-2000 dollar bill of components and compromise the entire thing. Bad screws, weak hinges, off brand caps, undersized coolers to shave a few grams off expensive copper. Simple things like not enough glue to hold a bezel in place because an extra dollop is a cent saved. The works.
@@halycon404 I would certainly agree that there are costs cut. But in general the last laptops I've had over the years all ended up being reliable, and the only thing I did was clean them out every now and then and, while at it, add some upgrades here and there.
One of the "worst" cost cuts I ran into was the HP Pavilion I have having a pretty useless microSD card slot limited to 25 MB/s... which... well, is slow as eff. But hey, if THAT is the bad thing they cheap out on then fine; I ordered an external one and that worked fine.
The only laptop that ever broke on me was a 280-buck Asus laptop with that Intel Atom quad-core CPU... heck, that thing was slow but did its job until it decided to no longer power up.
My 3dfx Voodoo 5500 PCI runs like a Champion after all that Time 🤪
It's Aliens...
Probably is 🙈
Can you please say if I should play games on turbo mode, silent mode, or performance mode? My laptop has two GPUs, one is a 4050 and the other is a 780. Also, please say when I should update the drivers. I stopped updating because I got scared: once, a white screen appeared while updating the 4050, and after closing it, it said the update was complete; another time a black screen appeared during the 780 update and I had to turn off the laptop and do the update again. So I am also scared to do the new updates.
A fan replacement is cheaper than the GPU replacement so turn up those fans!
Absolutely correct, but most fans nowadays are of relatively decent quality and would easily outlast a GPU under load 🙂
To resolve that problem, the next generation of GPUs will be less powerful.....
To resolve the issue it probably wouldn't need to be less powerful, as a 4090 can reach about 90 percent of its performance at 350 W, or depending on the game even 300 W!
Honestly a surge of how to overclock videos don't help
I do videos on both overclocking and undervolting; everybody can decide for themselves what to do with their hardware.
My MSI 3090 is doing fine, though I might need to repaste it soon
Repasting is never a bad idea. I highly recommend a good-quality paste with a long lifespan, such as Noctua's. Thermal Grizzly Kryonaut, which I often see recommended, isn't as durable / doesn't last as long as its description suggests.
Here if you are still powering on with a 1060 GPU...💪
Not a fan of Nvidia.. a long time ago I used Nvidia and I had problems with the cards for as long as I can remember.. I never use Nvidia anymore.. same with AMD CPUs.. they're cursed..... Intel CPU and AMD GPU, only from MSI, works perfectly without issues
What about Laptops?
In theory it's the same thing, but power consumption is much lower, so you shouldn't have to worry in that regard. The only thing is temperatures; if the laptop isn't cooled well, it may cause issues.
Too much power! I have a 13600T and a 4060 Ti 16 GB here... They draw 250 W at the wall, including the screen!
That's not too bad; the 4060 Ti is relatively efficient and most coolers are pretty OK, so I doubt that card would have issues with heat and power. Don't worry!
Hummmm. I have an all AMD laptop and I have not had any issues.
It also depends on how hot the cards run. Laptop GPUs don't always suffer from the same issues, as their power draw is much lower, but in some cases they are not cooled as well. Though many manufacturers of thicker notebooks have managed to build very potent cooling systems, so the GPUs stay relatively cool even while gaming.
@@limitless-hardware1866 Mine usually gets around 85 to 90 degrees, and that is while gaming. Mine is an MSI gaming laptop with a 6700M GPU, and those temps are not too bad. Now if you have a Dell of any kind then yes, you will have a GPU bottleneck due to the temps. Even the XPS are now horrible; they keep using the same old design when they release them. Even the fan is not enough anymore for those.
A step backwards for greedy Nvidia. There should be an investigation going on into these clowns.
So I should be worried about my 4080 then??? Nvidia are gonna get a lot of shouting from me on their support page.
That issue is not only apparent on Nvidia cards but can also be a problem on the AMD side, although less pronounced, as even their high-end cards tend to pull less power.
Buy Intel, very good reviews.
When even intel show more reliable GPUs 🤣
I mean, they don't have high-end units yet and their sold units are lower, so we'll have to see how that turns out haha. But competition is always good! As far as I'm concerned the Intel GPUs seem to be pretty good.
@@limitless-hardware1866 I think that was a deliberate move by Intel; they could already produce a reasonably high-end card, certainly with the extra efficiency of Battlemage, but I think they had enough on their hands trying (and mostly succeeding) to get the drivers right without risking problems at the high end. Maybe by the time they get to Celestial they will try their hand at the high end. Now if only the Intel CPU side of the company could work as hard to fix their problems. Oh well.
AMD owners: 💪🗿
That high power input to GPUs is the worst invention ever made; they need to think of ways to make GPUs run on fewer watts, not how they can get more watts in 😬
Your analysis was kinda biased and technical... What makes GPUs die faster is people using the GPU at maximum capacity, like playing games at max settings for long sessions, every day.
The longer the GPU stays in a hot condition (high temperature), the more the silver solder will melt and the GPU pin caps turn dark.
The only logical way to extend a GPU's lifespan is to not play games at the maximum capacity the GPU can handle; by definition, you are not forcing the GPU into maximum utilization.
With this method, you give the GPU space to breathe.
I know, when people buy an expensive GPU they expect more performance for the price they paid, and as a result people need to feed their own 'ego' 😬
That's why modern GPUs tend to die faster: because of gaming culture itself... 😬
Those are the problems I actually explained. Holding the components at a higher temperature isn't the huge issue; as I said, the fluctuation is, because of thermal expansion and the stress acting on the solder.
Just turn the fans up, simple as that. Who cares about noise if you have a freakin' headset on your ears?
Why are you holding a mic? Is this 1980? lol 😂
The room I'm in unfortunately has very bad echo, so I need to have the mic close to my mouth, otherwise it sounds bad. And yes, it's a good quality mic lol
Completely false lol. They all have these issues, but of course more tech, more advancements, more parts equals more failures. We also have 10x more GPUs being sold, meaning the number of failures goes up; it's statistics. It's not that GPUs are worse, it's that we have more sold and more parts and pieces. It's logical and common sense. They aren't made worse, they are actually made better regardless of what you think or believe. Also, this high power draw? It's more performance and fewer watts per FPS now. The 40 series is pulling less power than the 30 series. I mean, 320 watts for a 4080 Super? 200 watts for a 4070? Sure, the 4090 is 450 watts, but I mean, wtf do you expect?
The 2080 was 215 watts, a 4070 Super is 220 watts. Not too far off, buddy! We are actually getting more efficient while giving much more performance. So much false information in this video.
I disagree. You obviously have to compare each card from each series, so a 2080 to a 4080, for example. Looking at that, power consumption jumped 100 watts. And then compare the 2080 Ti to the 4090, because the 4080 is just a very small upgrade, while on the 20 series the Ti was a huge upgrade.
Lower-end cards are getting more FPS per watt, yes, that is true, but that's expected, and they are also inherently more expensive.
Also, I didn't really say anything about them being worse, but the surrounding things like high power consumption and energy density just exacerbate existing problems. Not to mention the physical strain on the cards from the huge coolers.
too much stupid overclocking
😂 modern gpus are fine.
First comment hehe
You mean mostly high end Nvidia cards are failing right?
Nvidia and AMD both, but as AMD high-end cards tend to draw a little less power it might be a bit different, although they still get hot!