Thank you for adding the R5 1600 into the benchmark for RDR2!! I just bought it and am running a 1080 Ti, and I'd been worried about the performance, but I'm happy to see I'll be able to achieve 60 fps easily with minimal CPU bottleneck.
AMD's Ryzen processors are truly the pickup trucks of the realm of computers. You can make do with a smaller, faster sports car, but you never know when you need to pull a tree and haul logs at the same time.
AMD is now the great equalizer, which is a good thing for everyone! Great video as always, Steve and Gamers Nexus; y'all have the best info, keep it up!
AMD needs to adjust the way boost clocks are advertised. Advertising 4.7 GHz when it can't hit 4 GHz all-core (I'm hoping that's addressed via a BIOS update) doesn't feel 100% honest.
Chess engines will use all the cores you have, but reviewers never use them when testing; hell, engines are even good for testing overclocking stability! So sad they always get ignored...
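For anyone who wants to try it, here's a minimal sketch of using a UCI engine as an all-core load via the python-chess package; the Stockfish path, thread count and time limit are placeholder assumptions to adjust for your own setup:

```python
# Minimal all-core load using a UCI chess engine (assumes Stockfish is installed).
# Requires: pip install chess   (the python-chess package)
import chess
import chess.engine

ENGINE_PATH = "/usr/bin/stockfish"  # placeholder path; point at your engine binary

def engine_load_test(threads: int = 32, hash_mb: int = 2048, seconds: int = 60) -> None:
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    try:
        # "Threads" and "Hash" are standard Stockfish UCI options.
        engine.configure({"Threads": threads, "Hash": hash_mb})
        # Analyse the starting position for a fixed time: a steady all-core load.
        info = engine.analyse(chess.Board(), chess.engine.Limit(time=seconds))
        print("nodes searched:", info.get("nodes"))
        print("depth reached:", info.get("depth"))
    finally:
        engine.quit()

if __name__ == "__main__":
    engine_load_test()
```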
Only when Intel is overclocked and has more cores is it on top 😂 I don't like charts where overclocked CPUs are shown, because only a very small number of people will overclock, and no company ships overclocked CPUs.
For a high-end enthusiast part, it's definitely a valid comparison. People who want the best performance are going to do it. For lower parts, no; I doubt the extra frames from overclocking some i5 or a 3400G come into play, but on these $500-and-up parts it matters. I just wish the OC'd parts were marked in a more quickly visible way. There is a lot of data in these graphs to take in rather quickly.
Can you include tests for streaming/recording while gaming? That seems like a really common use case for processors like this. Would love to see how it compares to stuff like 10900k
Adobe and AMD need to spend a bit of money to optimise their software for the hardware; otherwise competing products will cut into Adobe going forward...
I would love to see you arrange the FPS charts according to the "0.1% low", which is a far more accurate measure of smoothness than average FPS. In fact, I wonder why AMD didn't use this metric instead of average FPS. The 3950x appears far more competitive against even the best Intel CPU if we compare the "0.1% low" performance instead.
@@faycalassaad6066 What are you talking about? The Scythe Ninja 5 performs about as well as the Noctua NH-D15 and is quieter due to its low-speed fans. Replace the fans with faster ones and it becomes pretty much the most powerful air cooler on the market, and it currently costs $55.
Finally upgrading from a 2600K. Been waiting and waiting for the right one... and I think this CPU is it, boys... Gonna suck that it's a whole paycheck... but then again, 16 cores / 32 threads and those benches vs. the more costly Intel HEDT... just hot damn! The gaming benchmarks are pretty respectable too given the huge advantage everywhere else, and it's still a HUGE upgrade for me even in games. Gonna have to get something better than my 4K60 monitor to make use of it in games, and the only drawback I can see is that it won't be overclockable without massive power draw, which is not really that big of a deal to me, as I don't even like OC'ing the 2600K and tend to go for longevity, only OC'ing if I need it.
I upgraded that old boy last year to an i5-8400. I'm happy, but if I'd known more I would have waited and bought a 3600 instead lol. Good luck with this lil monster.
Just wondering why you don't test the games at 4K? People with this sort of money will have a 2080 or 2080 Ti and will be running at 4K. I suspect the frame rates would be very similar, so should the difference between these CPUs be measured only on content creation??
Nobody games seriously at 4k. 1080p is still the most common resolution for gaming. Many people who game competitively play 1080p and 240hz low settings where the i9 9900ks crushes every other cpu in performance. 4k is still a shitty gimmick for PC-gaming.
@@Manakuski Not everyone is a competitive gamer.... I rather play The Witcher 3, Fallout 4, Yakuza Kiwami 2, RDR2 or FFXV at 4K60 Ultra than 1080p240Hz Low because staring at the scenery or watching dialogues/conversations at 240Hz doesn't add to the game...
At 4K you're GPU-limited, so the CPU doesn't matter much there. No point in that for a CPU review that's supposed to show differences. Look into graphics card reviews if 4K is what interests you.
I really do enjoy your content, but I will say this: AMD will be the underdog for a very long time, even if they double Intel's performance. And while I don't agree with their marketing choices on boost clocks, I feel you and the tech community in general need to remember that we only have competition in the DIY bracket currently. The two biggest markets, data centers and prebuilts, are seeing a tiny movement to AMD, but it's nowhere close. Intel has a revenue stream tenfold that of AMD. Anyone who thinks AMD can take Intel out just because they have better processors hasn't paid much attention to history. This has happened before, and Intel was the winner last time as well, all due to having more money.
Not true. AMD Threadripper (HEDT) is and will be extremely powerful; just wait for Nov 25. AMD's Zen server parts are extremely good, it's just that the industry takes a while to adopt new technology. AMD is already gaining market share in the server segment, which can only come from competitive products.
Competition makes both sides better. It limits the prices Intel can charge. It benefits consumers. If AMD wasn't cranking out good high-end processors, then Intel would have no motivation to innovate and could charge whatever they wanted. So AMD DOES have a big impact in the market, regardless of how many or few 3950X's it actually ships. It benefits everyone to see AMD do well. But based on these results, I think AMD will sell quite a few of these 3950X's. Just my two cents.
I feel Intel would have been in a better position if AMD had been able to bring competition to the table over the past years, instead of basically flopping for so long. Intel figured, fuck it, why bother doing the extreme when we can just milk our CPUs with small incremental gains. And while Intel was sitting back chillin' and grillin', AMD was forced to get their shit together before being swallowed up. Now they kinda have, and Intel is scrambling.
I know this would be a stretch request, but I'd love to see some workload tests on things beyond imaging-type work. I am a systems engineer; I design FPGAs and do a lot of cross-compilation for ARM. Vivado and make both use a lot of threads and cores when building for their respective architectures. I'd love to see how processors like these perform on workloads like that. Thanks for all the work you do, GN.
It tends to be hard to find good benchmarks for engineering loads. One reason, I suspect, is that all the software costs something in the 4 digits if you get a good deal. If Xilinx published a benchmark, maybe you could convince some reviewers to use it. I'd love to see HFSS or other FEM software benchmarked, since results tend to be somewhat transferable between packages.
@@CarbonPanther Totally agree. Don't believe Gamer Meld AT ALL. If ever there was a channel for hype, but not shill, that is it. As Steve says, the free pass AMD is given for what amounts to blatant lies on these boost frequencies won't last long; Intel would be crucified (rightly so), and the community should be "brand agnostic" when it comes to marketing hype. If it doesn't do what was promised from the get-go, they deserve all they get. The 3950X is not really a consumer CPU; it's an HEDT CPU that can be used on an existing consumer platform, no more. The 3900X or 9900K(S) is far better value imo, as you won't need to pay the price for the quality X570 board this CPU demands. If money is no object, Threadripper is likely to be better.
It would be nice to see what kind of improvements the 3950X gets with Ryzen-optimized RAM speeds, e.g. 3600 MHz CL16 or 3733 MHz CL15, as I imagine anyone spending $750 USD on this CPU would have no issue spending extra for the best RAM.
Let's read the graphs!
Intel (9900K): gaming
AMD (3950X): rendering
It seems like the bottleneck is, most of the time, between the monitor and the chair. (see below)
Man, can't realistically even hold ~4.5GHz and they're advertising 4.7GHz?? I love what Ryzen has done for the CPU market, but AMD needs to be called out hard for that claim. EDIT: If AMD fixes this and the claim becomes true under realistic conditions, that's great. I'm only going off of the data we have here and now.
Intel is doing the same with laptop CPUs: "5 GHz" "boost". And for the hundredth time, boost is only on 1-2 cores, not all-core. And the new BIOS updates should fix some of the missing MHz.
Check out part 2 of our DIY AMD NAS, ft. Level1Techs! ua-cam.com/video/SqaAmVN4J4A/v-deo.html
Can you try disabling half the cores or SMT to see if you can get higher clocks for gaming? 16 threads should be enough for gaming.
Gamersnexus it is contemporary when it comes to equal.
Could you test 3950X vs 9900KS (delidded), both OC and with tightened timings, in gaming? And gaming + streaming? Would love to see an absolute best case scenario (without LN2) from both teams, for enthusiast PC gamers.
Thank you for all the amazing content!
Yo GN, Rockstar says it added a -cpuloadrebalancing launch option to RDR2 to help with stalls on 4- and 6-core CPUs with the Nvidia driver. Maybe you could try and see if this option exists for GTA and see if it helps. Anyway, the patch is out and it seems to help. Not saying a retest is in order, just maybe you could mention your thoughts in an Ask GN thing or something.
And it’s released finally... no more slack!!!
Can't believe that 3 years ago the best mainstream processor was the 4-core/8-thread 7700K; now we have 16 cores from AMD.
Moore's law is back!
I'm not sure you can call 750usd mainstream
Thats my CPU
Allah praise AMD
@@KillBoyUK It's called mainstream simply because it's still a consumer class CPU
I was disappointed that you didn't perform a "drop test" on the CPU. I guess I will have to watch the LTT video for that.
Won't be disappointed
@@filipantoncik2604 The only thing Linus hasn't dropped is his high pitched voice
@@brazeiar9672 big burn
Spoiler* he drops it lol.
@@SirNickyT really?
Please not.
26:09 This is just amazing. While having twice as many cores as the 9900K, the 3950X pulled only about 30 W more at most. Simply incredible.
It's hilarious to see how a 16-core chip at stock still consumes less than Intel's 9900KS at stock.
@@maybenot7298 dude chill lol this thing basically matches the 9900k in single thread performance and has literally double the cores
quit your whining and bootlicking, nobody cares
@@maybenot7298 "average user" well we need several of those workstation for work and Intel is extremely slow in renders
@@maybenot7298 LoL @you if you think this processor is for the average user
@@maybenot7298 and an average user won't pay over $300 for a cpu
@@maybenot7298 AMD has a bigger cock. what can ya do
Within margin of error of the 3900X in games. I'm glad I didn't hold off and pulled the trigger on my 3900X. I personally don't do enough non-gaming work to justify the price hike. Both chips are just monsters. I couldn't be happier.
Same. I was worried I'd be upset when, not 2 months after I got the best AM4 part ever, ever, another, even bester part comes out for a small amount more money.
Almost had preemptive buyers remorse. But now I don't!
@@joek81981 Seeing that the 3950X is trading blows with the 9980XE in multi-threaded workflow, I'm really curious now to see what 3rd Gen TR brings to the table.
What a game, right? I say it a lot here in the comments, but it deserves repeating.
This is a golden age for computer people. Great time to be a PC gearhead.
Personally, I'm very interested in disabling an entire chiplet to try mimicking an Intel 8c/16t for some games and apps; some things don't like the Zen arch.
@@aj0413_ True but even in those fringe cases the difference isn't that massive esp if not comparing it to the 9900K/KS
I would have liked to see Intel HEDT power draw also in the chart, to demonstrate how much worse they are in performance/watt. Maybe next time.
We'll have it in X-Series benchmarks for the 10980XE.
I saw the 9900KS at 400+ W total system draw and the 3950X at around 260+ W total system draw.
LTT did a test like that. Check their vid
Holy smokes is that correct?!? 400 vs 260?
@@coachnutt61 HOLY mother of power draw O.o 400 watts?! now imagine the heat generated.
I love that all of the reviews came out seconds from each other, but I came to you first, Steve.
steve> all
Of course we come to Steve's first, duh!
@@sdj33 both steves > all
Watching GN last.
I'm the save the best for the last guy.
COMMITTED FANS LETS GO!
TL;DR - Intel's entire HEDT range is now pointless, thanks to AMD and this desktop CPU.
Unless you specifically need extra pcie lanes
Well, 9980XE was consistently above 3950X. In gaming also. If someone buys 9980XE etc and doesn't OC then wtf.
@@TheNaitsyrk $2000 vs $500 AMD , as i said, makes HEDT pointless
@@simonh317 lel
@Warm Soft Kitty only if you buy 1000+ processors.
"Let's close this out with Total War." - I thought that was what the entire video was about.
@@imadecoy. Nah. Total War is just poorly optimized. If a TW game could make use of 16 cores, we could have 20,000-unit battles.
@@imadecoy. then why do you watch them if you don't agree with their testing?
@@imadecoy. I think he sarcastically referred to this cpu as being the beginning of a total war against intel
@Dex4Sure The Linux versions use Vulkan...
_Oh boy, how mistaken you are!_ That was just a grazing shot, although it's a pretty lethal one already - we'll see if it's killing it.
If not, there ought to be some blood bath later in the month …
Because, jacking up cores yet again, The Ripper™ is just threatening to come around for a third (and probably final) time and will most definitely kill the heart of Intel's HEDT pretty fast. It's just that Intel had to decide whether to take some narcotics for the sake of numbness beforehand, or to actually attend TR3's bloodbath in person. Seems with that price cut Intel issued lately, they took the pill to make it less bloody.
And AMD is still carrying their youngest baby yet - and it seems to have grown up huge!
This old man remembers when the phrase "Wintel" was coined. The king is dead, long live the king!
@T R The problem with Intel is their entitled prices. Even when AMD was kicking their ass with Athlon 64, those Pentium 4 oven machines were still sold at high prices.
I believe that entitled pricing policy comes from their high opinion of their own name (a little like how Apple thinks they can charge to hell and back just because their logo is on a product).
So people kind of feel happy when AMD is kicking their asses, because they need some humility lessons: to start pricing at what they actually offer and stop thinking their shit is made of gold.
@T R Waaaaay ahead too.
Right up until the FX-60/70 started demanding really big money. :(
Then I was sorta happy to see the Core 2 Duo/Quad etc. come out.
@T R
I was still a basement nerd with a socketA Barton 2500/3200 when those benchmarks were getting done.
My first big boy system was a Phenom II 940 w/ a 4870 1-gigger... what a joy that was for its time. It lasted through 3 different video cards and then spent 3 years serving mom movies on demand at her house, before my brother turned it on w/o the heatsink one day while servicing it. :( rip.
Dropping SMT would make more sense IMO than dropping some cores for those troublesome games.
3950 without X is still possible
Dropping one chiplet/CCD (8 cores) would make sense to remove inter-CCD communication over the infinity fabric.
@@ole7736 this. On a monolithic die, disable SMT. On multiple chiplet design, disable a CCD to reduce latency.
@@TheVillainOfTheYear Why I'm heavily eyeing switching to be honest
@@TheVillainOfTheYear We can clearly see from the benchmarks in this video that the 3900X is often much faster with SMT off (beating the 3950X in Civ 6 and GTA V), but for some reason they didn't try the 3950X with SMT off.
Steve, you get the first click, my friend. I trust your opinions over all others!
@@LycanWitch haha I watched LTT right after this one. Sometimes LTT does some wacky things in their reviews, which is cool and entertaining, but I watch GN first to get his about-as-unbiased-as-they-come opinion before I get all hyped up by Linus (no disrespect to papa Linus either, btw).
Am I the only person that makes the GN intro noise along with it at the start of every video? I can't resist it. I've tried.
Please add DaVinci Resolve Studio to your productivity suite, because its approach to CPU usage is very different from Premiere's and it's one of the fastest-growing video editing tools out there.
I'm amazed you guys talk about all-core OC but don't expand on the potential of per-CCX OC to get a good blend of fast and slower cores, then use Process Lasso to set affinities so all background tasks run on the slower cores. That's what I do on my 3900X (4.5/4.4/4.375/4.375 GHz at 1.331 V, LLC3, 1.306 V after vdroop) and it's golden!
Not that I do, but you can also soft-disable SMT in Process Lasso. This program has become very relevant now with high-core-count CPUs.
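If you'd rather script it than click through Process Lasso, a rough sketch with psutil is below; the process names and the assumption that the slower CCX sits on logical CPUs 16-23 are purely illustrative, since the core-to-CCX mapping depends on your chip and OS:

```python
# Pin known background processes to an assumed set of "slow" logical CPUs,
# leaving the faster cores free for the game. Requires: pip install psutil
import psutil

SLOW_CPUS = list(range(16, 24))                            # assumed slower-CCX CPUs
BACKGROUND = {"discord.exe", "steam.exe", "spotify.exe"}   # illustrative process names

def pin_background_tasks() -> None:
    for proc in psutil.process_iter(["name"]):
        try:
            if (proc.info["name"] or "").lower() in BACKGROUND:
                proc.cpu_affinity(SLOW_CPUS)               # restrict to the slow cores
                print(f"pinned {proc.info['name']} (pid {proc.pid})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue                                       # exited or needs elevation

if __name__ == "__main__":
    pin_background_tasks()
```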
Yes, could use a benchmark of this solution vs hardware enablement. Especially now with those Windows 10 1909 CPU scheduling changes.
You need to put a resume together and hit up the buyers of those new Tencent Epyc systems.
How interesting, so you can have high-power cores and low-power cores, similar to a phone CPU?
I want to see how this benchmarks
Sacco Belmonte I don't really trust Process Lasso; I feel like people think it's just some magical program that will instantly set affinities and increase performance.
@@fardnia9434 It does no harm. I have been using it since I got my 3900X, and it's been more benefit than anything else. I don't use ProBalance though.
Steve, I think you and Wendell need to get together again and do a deep dive on both your 3950X chips; his results show his stock 3950X performing on par with his 4.8 GHz 7980XE, meanwhile yours seems to be way behind.
There must be some major silicon variance involved, or some weird HW/SW issue at play here...
I would bet it's the update to the Windows scheduler that released recently, allowing Wendell's chip to take the lead.
@@Bobis32 in that case it might be a good idea to test with 1usmus' ryzen power plan as it is supposed to give similar performance to the new windows update by partially fixing the scheduling issue
@@Bobis32 We are testing with the newest Windows version.
Zen 2 is very sensitive to a lot of things, e.g., temperatures, RAM/IF clock speed. Different cooling, different cases, different RAM could all make these results vary noticeably.
Sounds like an even better reason for another collab vid then, 'cause there seems to be a pretty big performance delta here.
What about streaming? CPU encoding while gaming is a good reason to invest in a high core count.
I can easily render videos with a 4-core CPU; it just takes 3x the time.
But when it comes to streaming, I definitely can't use x264 encoding at decent quality, for lack of cores :P
"Blender Render", say that 3 times fast. If you cant drink some nitrogen...
Render Blender Rlender Bender Blender Render
That That That
@@tommytomthms5 I think he meant "that" not "That"
@@MalePifko He was clearly joking
Blenderer Rendererer
"AMD is Equal Dog"
- Tech Jesus 2019/11
AMD is up to par or better in performance, BUT Intel still has a much, much bigger bank, so it's gonna be a while until AMD is really equal dog... if they can keep this up.
@@xxJOKeR75xx yeah they'll need to keep making faster chips for at least another two years.
ROFLMAO
TECH JESUS! LMAO
The real lesson here is that developers seriously need to code for multi-core, multi-threaded CPUs now.
It's looking like we're still in the dark age, with single-core frequency being so important.
Nobody took AMD seriously when they started upping the core count. Now that Intel has had to follow suit to play catch-up I bet we'll see devs making changes.
John Terpack I really hope so!
They already are but it takes time for the cycles to come to fruition. The next few years will see a lot more shipped games taking better advantage of core counts, and within 5 years it will be almost universal.
No point supporting more cores if optimisation is garbage. See cyberpunk 2077
Fan settings:
Red fans - excited
Blue fans - salty
^PSU brand: GENERIC
Me: "s***t brown" fans. Thanks Noctua!
Y'Know, even though the introduction is very factual and "dry", I can hear the anticipation and excitement from you!
This processor is so cool because AMD, as the underdog, not only made something competitive, they made something for a market that definitely exists, pushing huge boundaries and leaving the ball in everybody else's court.
GN is more technical. Linus is more enjoyable. I feel that with Linus I am watching a story; it's engaging, there's progression and excitement. At GN I feel like it's a technician giving a review, which is also good. Different approaches, but it's interesting to compare.
Its the difference between a JESUS and a Mapple Leaf.
Better to go hardcore ;)
Sovietbird agreed. Those 3 are THE channels to watch to get a balance. There are several others that go way more technical, or way more niche on gaming or rumors, but LTT + HUB + GN usually covers everything worth seeing.
Aurora Niloufer what about me Hahahah guys crack me up slackers
Hair > earrings. Sorry.
The 3900X is still a beast. My Blender viewport with the Cycles render is basically real time once I set the viewport render to the same number of threads as the CPU (24). For real-time stuff like modeling, UV management, texturing and sculpting, a good CPU like the Ryzen 9 series plus enough RAM is sufficient. For AMD/Linux users: check out the Core Control utility. It is very useful, with CPU profiles (not only GPU), and if you set your Ryzen to the "ondemand" governor it will run in a very optimized way for your specific workload. You won't regret it.
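For reference, the governor switch itself can also be scripted; a minimal sketch for Linux is below. It needs root, and it assumes your kernel's cpufreq driver actually exposes an "ondemand" governor (acpi-cpufreq does; other drivers may not):

```python
# Write the 'ondemand' governor to every CPU's cpufreq policy via sysfs (Linux, run as root).
import glob

def set_governor(governor: str = "ondemand") -> None:
    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")
    for path in sorted(paths):
        with open(path, "w") as f:
            f.write(governor)
        print(f"{path} -> {governor}")

if __name__ == "__main__":
    set_governor()
```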
Hardware Unboxed noticed that Adobe wasn't really using four of the cores.
Adobe considers AMD to be subpar for its software and doesn't optimize for it. I have a link to a thread where they completely blow off the 3000 series as being viable for workflows versus Intel. It is crazy.
@@TristanPope If you have it, why not just put it in here?
@@pantzman I didn't wanna throw links on someone else's video, but here ya go: community.adobe.com/t5/Premiere-Pro/iPhone-11-Pro-4k-60FPS-HEVC-will-not-work-in-Adobe-Premiere-CC/m-p/10736475#M237073
Tristan Pope, it's ridiculous for Adobe to pull this. Read through the whole thread. It's amazing they don't even have one Ryzen system. You'd think Adobe would want to save money and switch their test machines to cheaper yet faster Ryzen systems. Imagine a company being that stuck in their ways.
Adobe uses Intel's compiler, and 95% of corporate users, who are their customers, stick with Intel. Also, Apple is an Intel shop, and a lot of Photoshop users are Mac fanatics. Intel's C compiler is available on both Mac and PC, which makes porting easier.
The single core boost behavior is much improved since this video came out. Kind of amazing actually.
This video needed Snowball as this CPU looks like it's the cat's meow!
I remember when Intel charged:
2017: $1000 for 8-core CPUs
2019: $950 for 18-core CPUs
2020: $500 for 18-core CPUs
At this rate it will keep happening.
In 2020 AMD will roll out even more cores, and Intel will have to compete or lose more market share.
The best part is that games will start going all out with support for high thread counts.
They can afford to lose a bit more market share, and most likely won't lose much in the more profitable segments. AMD is pretty much at max production capacity already and has no real way to ramp it up, something Intel is investing in given the shortages.
Not a good thing for consumers.
So, it's good to keep AMD there for pressing Intel processor pricing. Buyers Winning!!
Not good for consumers.. LoL
@@Calyptico That's why Intel has so many paper launches.
I thought the max advertised single core boost was 4.7 GHz, not 4.6?
Yes, it should be. I checked the AMD site as I wondered if it had changed to 4.6 following all the boost fiasco, but it's still 4.7 per AMD's website.
It does
He just got unlucky in the silicon lottery; other people are hitting 4.7 pretty easily.
@@JROCThaGreat That may be true, but the max boost listed on AMD's website is 4.7 GHz.
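If you want to see what your own sample actually boosts to, a quick sketch with psutil is below; on Linux it reports per-core frequencies from cpufreq, while on Windows it usually only exposes a single package value, so a tool like HWiNFO is the better option there:

```python
# Record the highest per-core frequency seen over a short window (Linux).
# Requires: pip install psutil
import time
import psutil

def peak_core_clocks(seconds: int = 30, interval: float = 0.25) -> None:
    peaks = {}
    end = time.time() + seconds
    while time.time() < end:
        for core, freq in enumerate(psutil.cpu_freq(percpu=True)):
            peaks[core] = max(peaks.get(core, 0.0), freq.current)  # current clock in MHz
        time.sleep(interval)
    for core, mhz in sorted(peaks.items()):
        print(f"core {core}: peak {mhz:.0f} MHz")

if __name__ == "__main__":
    peak_core_clocks()
```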
No compile test? I would have loved to see how Zen 2 performs in compilation
We dropped the GCC benchmark because it's just a cache benchmark. If we can figure out a better one, it'll come back.
I think there is a series of articles on XDA Developers for compiling AOSP (and possibly other things?).
@@GamersNexus personally I'd be interested in compilation of say, tensorflow. Something like this hub.docker.com/repository/docker/tarfeef101/tflow-builder
@@GamersNexus Is that GCC test a realistic test though? If so then it's still relevant. A large cache is a potent way to offset latency; if AMD is going down that route then the fact that GCC benefits from such a design decision is surely the whole point of why the large cache is there. Seems a bit odd to drop a test because it sits in cache, when enabling a task to sit in cache is the whole reason why the cache is so big. To extend the logic to the extreme, if AMD released something with 16GB L1, would all tasks be dropped? ;)
@@GamersNexus It seems really hard to find compilation benchmarks for CPUs. For this CPU in particular, I think it's important to have them since people are buying this CPU for work and would love to know what really matters. A caching benchmark to me seems pretty awesome. It lets people know that clock speed is not the only factor. I write a lot of high performance software and one of the ways we get make faster programs is cache optimization. This is also how console games squeeze out more performance out of known hardware. Here's a video if you want a good example of how game devs goto the extreme for performance (note it's from a C++ talk). ua-cam.com/video/rX0ItVEVjHc/v-deo.html
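Agreed that these would be useful. For anyone rolling their own in the meantime, a bare-bones compile-scaling harness is easy to put together; this sketch assumes a Linux box with a Make-based project already configured in BUILD_DIR (the path and job counts are placeholders):

```python
# Crude compile-scaling benchmark: time a clean parallel build at several -j values.
# Assumes 'make clean' and 'make' already work in BUILD_DIR.
import subprocess
import time

BUILD_DIR = "/path/to/source"    # placeholder
JOB_COUNTS = [8, 16, 32]         # thread counts to sweep

def time_build(jobs: int) -> float:
    subprocess.run(["make", "clean"], cwd=BUILD_DIR, check=True,
                   stdout=subprocess.DEVNULL)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], cwd=BUILD_DIR, check=True,
                   stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

if __name__ == "__main__":
    for jobs in JOB_COUNTS:
        print(f"-j{jobs}: {time_build(jobs):.1f} s")
```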
I'm curious how the 3950X would hold up against a 9900KS in gaming in a real world scenario when you got a handful of launchers, discord, bittorrent, web browser with a bunch of tabs and other stuff running in the background.
BitTorrent? Why would gamers be downloading Linux ISOs?
Looks like someone is a torrent user here.
more cores is better
Thank you GN for always providing unbiased and informative videos, appreciate the time and effort!
80% performance is "not bad for half the cores". Gotta love Steve's understated commentary. :)
27:49 THANK YOU Steve! At least one objective reviewer left.
i did it anyway lol
Holy shit I had no idea this was coming out today
Shitty hole
Thank you for including the 7700k on your charts. It just came out in 2017, but I can hardly EVER find it in any other channel's comparison charts.
It's a big help for those of us poor creatures who needed an upgrade right before the Core Wars started.
Thank you, Tech Jesus
Thanks AMD for progress!
I got ryzen 5 2600x and i love it so much
"We're the hipsters using blender"
But still not using 2.8
LOL
ikr. how disgusting lol
That makes it even more hipster🤔
Great video, very knowledgeable, and you speak very well about the material!
Are there any reasons except "low interest" for not doing compilation benchmarks? The increase in L3 caches has been making me happy, since I have found that to be one of the most important parameters.
Say, a Linux or chromium compilation would make me very happy.
Agreed. For Tech Jesus, Adobe is important, but software developers, virtualization engineers, engineers, and even statisticians use CPUs like this. Intel also has other advantages, like Intel RST (Rapid Storage Technology) and RAID. Someone doing video work will want many TBs of storage with Adobe, and if you have 4 SSDs Intel has an advantage in I/O and PCIe lanes, which the 3950X has cut back on.
Please consider an 8700K overclocked to 5 GHz for your future CPU testing; many of us 6c/12t gamers want to know where we stand in 2019/20. Thank you for your work :)
Got a question for you, Steve: When gaming alone, that's cool. But what about gaming + Streaming / Recording H264? Main example being twitch streamers, but also gameplay youtubers.
How much impact does live encoding add to a gaming experience?
Yeah GN used to have that gaming + streaming test. I hope they will do the streaming test.
They probably prefer two rigs or nvenc in those scenarios tho
Ever thought of doing a CPU comparison video with "mainstream" enthusiast cards priced between $300-$600? It would be great to see benchmarks of current high-end CPUs with GPUs like the 2070S, 5700 XT, 5700, 2060S... It's a more realistic use case too, because obviously not everyone buys $1200+ GPUs even if they buy high-end consumer CPUs!
I'm interested to see some numbers in streaming and Davinci Resolve.
Dang, that transition to your store at 12:51 would even impress Linus.
Virtual Machine usage could really make this my next CPU. I need to have fast VMs running 50% of the time.
Competition is good for everybody. I heard Linus say it years ago and I repeat it all the time because it is just so true.
Interesting: Zen 2 tends to be behind in average and maximum fps, and then kicks ass in the 1% and 0.1% lows. Funny.
Higher overall latency but fewer bottlenecks, so you end up with closer highs and lows.
Which gives you a way smoother experience. Even if the Intel system showed 20-30 fps more than the AMD system, the AMD one could feel smoother overall.
@KingOfGorillas Yes, it will be interesting indeed. It will be more like "OK, OK, we've been jerking around long enough and let AMD have their fun, but now it's time to get serious and send little brother back down to the minor leagues." I imagine it will be something like that :)
@@quajay187 Doesn't look that way.
@@quajay187 hope that happens before they reuse that 14 nm LUL
I wish audio work would get mentioned in these tests. Music production is probably one of the best tests for a processor: it stresses it to the max, but in real time, and when working with music it needs to be glitch-free at all times, no clicks or pops, unlike offline renders which just crunch the numbers.
12:55 That was one Linus Sebastian-like self-promo segue. Duuuudde
You mean maple syrup smooth transitions
Good honest review 👍
Will you make streaming benchmarks?
I think this may be a really good CPU for gaming + streaming to Twitch, and I liked your dropped-frames-for-both-sides graph last time.
Ideally you would find a way to restrict gaming cores to one CCD and streaming cores to the other, e.g. with Process Lasso. That way you prevent expensive core hopping and inter-CCD talk. Perhaps you could run a streaming VM (virtual machine) on 8 distinct cores, which would be easier to configure (Process Lasso needs to be set up for each game individually, iirc).
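As a rough illustration of that idea (scripted rather than via Process Lasso): launch the software encoder as its own process and pin it to the second CCD, leaving the first CCD for the game. The CPU range for the second CCD, the ffmpeg settings and the file names below are all placeholder assumptions:

```python
# Start an x264 software encode and pin it to the second CCD (assumed CPUs 16-31),
# so it doesn't contend with a game running on the first CCD.
# Requires ffmpeg on PATH and: pip install psutil
import subprocess
import psutil

SECOND_CCD = list(range(16, 32))   # assumed logical-CPU mapping on a 16c/32t part

def start_pinned_encode(src: str = "capture.mkv", out: str = "stream.mkv") -> subprocess.Popen:
    proc = subprocess.Popen([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "6000k",
        out,
    ])
    psutil.Process(proc.pid).cpu_affinity(SECOND_CCD)  # keep the encode off the game's CCD
    return proc

if __name__ == "__main__":
    start_pinned_encode().wait()
```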
@@ole7736 process lasso is great, but a huge pain in the butt to configure
@@bas2085 That is true. What about a VM (virtual machine) for streaming (hardware pass through of a capture card) that hogs one CCD? Should work for all games/titles without any configuration after everything is set up.
If you're streaming just use an Nvidia GPU with nvenc. There is very little performance loss using it even when using a "weak" CPU.
@@Stagmuffins Sure, for low- to mid-tier streaming setups that is a very good solution. Professionals probably want higher quality.
Damn, what happened? Subscribers were like 20k, now it's so big! Congrats GN.
I'd love to see OC results with SMT off. Just out of curiosity. Less work means less power means less temperature means lower voltage needed means (somewhat) higher frequency.
I liked when you put timestamps to the videos :)
Are all these benchmarks done with CPU-exploit fixes in place? Eg: Meltdown, Spectre, Zombieload.
If memory serves, Steve has stated in an older video that all tests are done with all AMD exploits and Intel exploit patches and fixes installed up to the date of the test.
@@ps2232 I see. So it's likely that all the catching up AMD has been doing is at least partially down to Intel being more vulnerable, requiring patches that lower their CPU performance via software.
Would be nice to see Digital Audio Workstation benchmarks. Thanks for the review.
I feel sorry for Steve. Linus released a video within the same minute as Steve yet again; Steve only has 600 views whereas Linus has 6.2k views already! tbh I prefer Steve's content tho 😂
kaelyn Linus has 10x the subs so the math works out.
Steve started from scratch, Linus started from the top, big difference.
@@Jdmonealp More like 15x. Linus has over 9.5 million to GN's 662k.
I watched Linus first myself; simply put, I didn't have a lot of time this morning and Linus is usually slightly more brief, and about 5 minutes can be cut out by skipping ahead 30 sec-1 min after the segues :P
@@DarkLinkAD Linus did not start from the top. Wtf is wrong with you???
Can you guys add the HW News time bars to your benchmark videos moving forward? And maybe teardowns? Not so I can skip around, but to make rewinding easier. The time bars are literally my favorite part about your channel, followed by the OC content, the hardware teardowns, the Jensen teardowns, and then Snowflake.
I have a few questions, so forgive my laziness, as I'm sure these are mentioned in the video.
Is this with the newest windows update? From what I read it comes with improvements to the scheduler.
Will you bench some games with SMT turned off? 32t is obviously overkill for games.
Have you seen the custom power plan by 1usmus? Perhaps it's not needed anymore with the newest windows update.
First impressions I've seen around seem to show that the upcoming update doesn't really have an effect in the real world, whereas the 1usmus power plan seems to actually help, with 3-5% gains in single-thread workloads.
The whole cpu is overkill for games anyway.
I'd love a separate video about 3950x performance tweaks. Just the right stuff for enthusiasts! :)
After this latest BIOS update, my 3900X regularly hits 4545-4585 MHz on all cores now, and this is happening at lower temperatures than I was commonly hitting before the update, all on an X370 CH VI Hero. So yeah, I'd say AMD and the mobo manufacturers have straightened out most of the bugginess in boost speeds and temps. I would expect most of the AMD chips you test from now on to behave much better, and be as straightforward as this one was.
"dont buy this for your gaming rig" dont waste your breath steve. before amd got 12 and 16 cores it was "the extra performance from intel wasnt worth it" and now that amd have 12 and 16 cores those some people, who only game, are buying them left and right. which is fine. but by the time a game needs anything close to that these cores will be outdated and youll have to update anyway. if you only game it should be 3600, or if you want a little extra the 3700x or something. not "should i get the 12 core or the 16 core"
Do you have a source for the statement "but by the time a game needs anything close to that these cores will be outdated and you'll have to upgrade anyway", or is it just your opinion and nothing science-based?
It's true that quite a few people bought the 2700x over the 2600(x) and the first gen 8 cores over the first gen 6 cores, which made no sense whatsoever.
But these days, looking at the charts, depending on the game and your resolution you can actually make a decent case for a 3700x.
Not to "future proof" but to get better average and more importantly better 1% and 0.1% lows - in a meaningful way.
I'm currently on a 2600x and there are some games where going to a 3700x would boost my lows quite significantly.
We are now effectively at a point where 6 cores with 12 threads are the entry-level (enthusiast) gaming CPUs.
I'll probably chill for now and wait for the 4th gen; if that still runs on X470 I'll pick up a 4700X or whatever 8 core they have then.
If it doesn't, the 3700X will probably drop in price.
@@MegaDominik1 yes i have a report from the gaming council.
@@MegaDominik1 Dude, it's just the truth. It's from experience and history - this has all happened before with the last console gen. Doubt it if you want, waste your money if you want, but if you're just gaming, by the time a game actually sees a difference between 8 and 12 cores with SMT/HT, those cores will be so outdated that a new 8 core or smaller CPU will be faster. I mean, are you saying AMD won't come out with faster cores in the next 3 years? Because that's AT LEAST how long it's gonna take.
I mean, RIGHT NOW take any 6 core CPU with 12 threads and an 8 core/16 thread CPU with the same cache and clocks and compare them. They will be basically identical in just about any game; a couple might get 3% more performance. Keep in mind that AMD's 6 core CPUs have lower clocks than the 8 core CPUs, so that's where the difference really is. The 12 core CPUs have double the cache because they have 2 dies, and they have higher clocks - and reviewers use the 3700X instead of the 3800X, which has a lower TDP and so lower boosts as well. The difference in performance between a 3800X and a 3900X in games is DOWN TO THE CACHE.
Now, the next gen of consoles is gonna have an 8 core/16 thread CPU at about 3GHz. The last gen had 8 threads period, and it took 3 years into that console's life before games actually started using even close to all those cores.
The next gen of consoles will also have games that are supposed to work on both the old gen and the new gen, so they will have to work with 8 threads in mind for a year or so.
I mean, it's incredibly difficult to even get to where we are, where a game can use 10-12 threads. An 8 core CPU has 16 threads. It'll take 2-3 years from now before games can even really use all 16 threads AND ACTUALLY SEE A PERFORMANCE GAIN!! Some games can use all 16 threads a bit now, but either a handful of threads are barely used, OR, if all threads are used, sometimes the games even perform WORSE than if you set them to use 12 or so instead.
A 12 core CPU is 500 bucks, so with a cheap board and tax that's about 600 dollars. Buy a 70 dollar board and a Ryzen 3600, or at most a 3700X, and it will perform the same as a 12 core for the next 2-3 years at least. By that point AMD and Intel will have cores with higher IPC, faster clocks, and more cache. 5th or 6th gen Ryzen 6 cores will beat current 8 cores easily, and most likely even today's 12 core CPUs, in games.
Compare a Ryzen 1800X, the best 8 core AMD had 2 years ago at 500 dollars, to AMD's current 200 dollar 3600. The 3600 blows it away in gaming and basically matches or beats it in multi core. Do you not think the same thing will happen in a year or 2? Do you really think games are gonna start using DOUBLE the thread count in a year or 2? Because that's what it would take for a 12 core gaming CPU to actually be worth buying. It's not gonna happen.
AMD is an exciting insurgent company. Intel has fossilized. These chips are close enough in most games and they do nearly everything else better. They’re forcing innovation and it’s fun when we had been stuck on quad core with hyper threading for years on mainstream platforms.
Thank you for adding the R5 1600 into the benchmark for RDR2!! I just bought it and am running a 1080ti and have been worried about the performance but I’m happy to see I’ll be able to achieve 60fps easily with minimal cpu bottleneck
Wait, I'm confused. I thought it was out on the 25th, but Steve said it's getting released today
Most likely made the video in preparation for the 25th launch but AMD lifted the embargo early.
@@MrGlutting That would make sense, you're right
This probably the "soft launch" which is just a nonsense marketing term for "we let reviewers and media release reviews".
Steve lives 11 days in the future
AMD's Ryzen processors are truly the pickup trucks of the realm of computers.
You can make do with a smaller, faster sports car, but you never know when you need to pull a tree and haul logs at the same time.
Best analogy I have heard comparing AMD to Intel right now. Thumbs up from me :)
🔥AMD ALL THE WAY!🔥
AMD is now the great equalizer, which is a good thing for everyone! Great video as always, Steve and Gamers Nexus - you all have the best info, keep it up!
finally my 480mm loop will actually be worth it, maybe I need to upgrade to dual 480 to be overkill.
Wtf are you going to do with those custom water cooled rigs ? Running earth simulator ?
480mm dedicated to the CPU should be plenty.
yeah this should put your 480mm to work. Gonna be fun
Fantastic content as always!!
2:15 the boost is advertised as 4.7GHz not 4.6
Yeah, so it's even worse lol
AMD needs to adjust the way boost clocks are advertised. Advertising 4.7GHz when it can't hit 4GHz all-core (I'm hoping that's addressed via a BIOS update) doesn't feel 100% honest.
@@garethevans9789 But its a single core boost, not all core boost
Gareth Evans Gamer Meld's latest video states that AMD is coming with a patch for a 200-250MHz increase on Ryzen 3rd gen
@@stellar0001 We've heard that before... multiple times. I don't think Gamer Meld is particularly good of a source.
R9 Ryzen processors are beasts, especially for compute workloads.
Still a happy camper with my golden silicon R7 1700.
For one type of game this makes a great choice: Chess.
Chess yeah, but I doubt it could run solitaire at 1080p 60 fps
Chess engines will use all the cores you have, but reviewers never use chess engines when testing - hell, engines are even good for testing overclocking stability! So sad they always ignore them...
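For anyone curious, a minimal sketch of that idea - assuming a UCI engine such as Stockfish is installed and on your PATH - is to set the engine's thread count to your core count and let it search for a while. The specific options (Threads, Hash) are standard UCI, but treat this as a rough stress-test script, not a calibrated benchmark:

```python
import os
import subprocess
import time

def stress_with_engine(engine="stockfish", threads=os.cpu_count(), seconds=300):
    """Load every core with a UCI chess engine search (rough all-core stability test)."""
    p = subprocess.Popen([engine], stdin=subprocess.PIPE,
                         stdout=subprocess.DEVNULL, text=True)
    for cmd in ("uci",
                f"setoption name Threads value {threads}",
                "setoption name Hash value 4096",  # a big hash table also exercises memory
                "isready",
                "position startpos",
                "go infinite"):                    # search until told to stop
        p.stdin.write(cmd + "\n")
    p.stdin.flush()
    time.sleep(seconds)                            # let it hammer all cores
    p.stdin.write("stop\nquit\n")
    p.stdin.flush()
    p.wait(timeout=30)

if __name__ == "__main__":
    stress_with_engine(seconds=300)  # roughly 5 minutes of all-core load
```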
ty for setting the standard for the most scientific reviews on youtube among tech reviewers
Only when Intel is overclocked and has more cores is it on top 😂 I don't like charts where overclocked CPUs are shown, because only a very small amount of people will overclock, and no companies will ship overclocked CPUs.
Most people who need performance do overclock, and they include OC potential when comparing CPUs.
No company? My SI (Digital Storm) sure did overclock for me
I don't know anyone that doesn't overclock. I guess it's who you kick it with.
In a high level enthusiast part, it's definitely a valid comparison. People who want the best performance are going to do it. Now lower parts, no. I doubt the extra frames from overclocking some i5 or a 3400g are gonna come into play but on these 500 dollar and up bits, it matters.
I just wish the OCd parts were marked in a more quickly visible way. There is a lot of data in these graphs to take in rather quickly
@@cleverja Pretty much everyone I know is on Intel/Nvidia systems and they don't overclock. I don't know any overclockers in RL.
Can you include tests for streaming/recording while gaming? That seems like a really common use case for processors like this. Would love to see how it compares to stuff like 10900k
Adobe and AMD need to spend a bit of money to optimise their software for the hardware - otherwise competing products will eat into Adobe going forward...
If you read the comments, Adobe sucks ass.
I would love to see you arrange the FPS charts according to the "0.1% low", which is a far more accurate measure of smoothness than average FPS. In fact, I wonder why AMD didn't use this metric instead of average FPS. The 3950x appears far more competitive against even the best Intel CPU if we compare the "0.1% low" performance instead.
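If you want to crunch that yourself from frametime logs, here is a minimal sketch. The exact definition of "1% low" and "0.1% low" varies a bit between reviewers, so this shows one common convention (average the slowest fraction of frames and convert back to FPS), not GN's exact pipeline:

```python
def percentile_lows(frametimes_ms):
    """Turn per-frame times (ms) into average FPS plus 1% and 0.1% low FPS.

    One common method: sort frame times from slowest to fastest, average the
    slowest 1% (or 0.1%) of frames, and convert that average back to FPS.
    """
    n = len(frametimes_ms)
    avg_fps = 1000.0 / (sum(frametimes_ms) / n)

    worst_first = sorted(frametimes_ms, reverse=True)

    def low_fps(fraction):
        k = max(1, int(n * fraction))      # how many of the slowest frames to average
        return 1000.0 / (sum(worst_first[:k]) / k)

    return avg_fps, low_fps(0.01), low_fps(0.001)

# Example: 10,000 frames mostly ~7 ms with occasional 20 ms stutters
frames = [7.0] * 9900 + [20.0] * 100
print(percentile_lows(frames))
```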
How about the Scythe Ninja 5 cooler? Should handle it
@@faycalassaad6066 What are you talking about? The Scythe Ninja 5 performs about as well as the Noctua NH-D15 and is quieter due to its low-speed fans. Replace the fans with faster ones and it becomes pretty much the most powerful air cooler on the market, and it currently costs $55
@@Taureor And btw my initial advice is a pretty solid one. Don't see how that's a problem but ok. 👍
You guys missed mentioning how much faster the desktop 3950X is than the workstation Threadripper 1950X that we drooled over not that long ago. Astonishing!
Hey, no tape on the CPUs this time!
Excellent review!!
New youtube homepage / subscriptions page look horrible wth
A hallmark of autism is unreasonable resistance to change.
@@Asdayasman "unreasonable"
Finally upgrading from a 2600K. Been waiting and waiting for the right one... and I think this CPU is it, boys... gonna suck to drop a whole paycheck... but then again, 16 cores/32 threads and those benches vs the more costly Intel HEDT... just hot damn! Those gaming benchmarks are pretty respectable too given the huge advantage everywhere else, and it's still a HUGE upgrade for me even in games. Gonna have to get something better than my 4K60 monitor to make use of it in games, and the only drawback I can see is that it won't be overclockable without massive power draw, which is not really that big of a deal to me as I don't even like OC'ing the 2600K and tend to go for longevity, only OC'ing if I need it.
good choice, this cpu will be gud for a long time
I upgraded that old boy last year to an i5 8400. I'm happy, but if I'd known more I would have waited and bought a 3600 instead lol. Good luck with this lil monster
What an idiotic comment.
@@newrez fuck you too buddy
So Intel is still king for gaming.
If you are planning on spending 2k and play at 1080p, then yes. Intel is the king of 720p to 1080p gaming
@@rickyricardo2006 lol exactly. Fix a rig with a sick cpu and gpu.. Then go game at 1080p... So silly.
Just wondering why you don't test the games at 4K? People with this sort of money will have a 2080 or 2080 Ti and will be running at 4K. I suspect the frame rates would be very similar, so the difference between these CPUs should be measured only on content creation??
Nobody games seriously at 4k. 1080p is still the most common resolution for gaming. Many people who game competitively play 1080p and 240hz low settings where the i9 9900ks crushes every other cpu in performance. 4k is still a shitty gimmick for PC-gaming.
@@Manakuski Not everyone is a competitive gamer.... I rather play The Witcher 3, Fallout 4, Yakuza Kiwami 2, RDR2 or FFXV at 4K60 Ultra than 1080p240Hz Low because staring at the scenery or watching dialogues/conversations at 240Hz doesn't add to the game...
At 4k you are in the GPU limit, CPU does not matter much there. No point for a CPU review that is supposed to show differences. Loot into reviews of graphics cards, if 4k is interesting for you.
@@Manakuski I game at 4k. Although sometimes I doubt it the evidence points to me being somebody.
I really do enjoy your content but I will say this
AMD will be the underdog for a very long time, even if they double Intel's performance, and while I don't agree with their marketing choices on boost clocks, I feel you and the tech community in general need to remember that we only have competition in the DIY bracket currently.
The two biggest markets, data centers and prebuilts, are seeing a tiny movement to AMD, but it's nowhere close. Intel has a revenue stream tenfold that of AMD. Anyone who thinks AMD can take Intel out just because they have better processors hasn't paid much attention to history. This has happened before, and Intel was the winner last time as well, all due to having more money.
So you want no competition? You want Intel to be the top dog, you will get expensive processors.
Not true. AMD Threadripper (HEDT) is and will be extremely powerful, await Nov 25. AMD Zen server parts are extremely good, only the industry takes a while to adopt new technology. AMD is already gaining market share in the server segment which can only come from competitive products.
Competition makes both sides better. It limits the prices Intel can charge. It benefits consumers. If AMD wasn't cranking out good high-end processors, then Intel would have no motivation to innovate and could charge whatever they wanted. So AMD DOES have a big impact in the market, regardless of how many or few 3950X's it actually ships. It benefits everyone to see AMD do well.
But based on these results, I think AMD will sell quite a few of these 3950X's. Just my two cents.
@@n00bfishie I think we are on the same page on that.
I feel Intel would have been in a better position if AMD had been able to bring competition to the table over the past years, instead of basically flopping for so long. Intel figured, fuck it, why bother doing the extreme when we can just milk our CPUs with small incremental gains? And while Intel was sitting back chillin' and grillin', AMD was forced to get their shit together before being swallowed up. Now they kinda have, and Intel is scrambling.
I know this would be a stretch request, but I'd love to see some workload tests on things beyond imaging-type work. I am a systems engineer; I design FPGAs and do a lot of cross-compilation for ARM. Vivado and make use a lot of threads and cores when building for their respective architectures. I'd love to see how processors like these perform on workloads like these.
Thanks for all the work you do GN.
It tends to be hard to find good benchmarks for engineering loads. One reason, I suspect, is that all the software costs something in the 4 digits if you get a good deal. If Xilinx published a benchmark, maybe you could convince some reviewers to use it. I'd love to see HFSS or other FEM software benchmarked, since those results tend to be somewhat transferable between packages.
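Failing a vendor benchmark, even rough build-time scaling numbers say a lot for these workloads. A minimal sketch, assuming a Makefile-based project with a clean target (a Vivado flow would instead run vivado -mode batch with a Tcl script), that times clean builds at different job counts:

```python
import subprocess
import time

def time_build(jobs: int, target_dir: str = ".") -> float:
    """Time one clean parallel build with `make -j<jobs>`.

    Sketch only: assumes a Makefile in `target_dir` with a `clean` target.
    """
    subprocess.run(["make", "clean"], cwd=target_dir, check=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], cwd=target_dir, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Sweep job counts to see where a 16c/32t part stops scaling on your codebase.
    for j in (4, 8, 16, 32):
        print(f"-j{j}: {time_build(j):.1f} s")
```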
3950x has advertised boost of 4.7ghz doesn't it? (you said 4.6ghz)
gr82bAnAUTiger watch Gamer Meld's video, as there is a patch coming for a 200-250MHz bump in clock speeds
@@stellar0001 Gamermeld is awful.
@@CarbonPanther rumour -mill- meld
@@CarbonPanther Totally agree. Don't believe Gamer Meld AT ALL. If ever there was a channel for hype (but not shill), that is it. As Steve says, the free pass AMD is given for what amounts to blatant lies about these boost frequencies won't last long - Intel would be crucified (rightly so) - and the community should be "brand agnostic" when it comes to marketing hype. If it doesn't do what was promised from the get-go, they deserve all they get.
The 3950X is not really a consumer CPU; it's an HEDT CPU that can be used on an existing consumer platform, nothing more. The 3900X or 9900K(S) is far better value imo, as you won't need to pay the price for a quality X570 board, which this CPU demands. If money is no object then Threadripper is likely to be better.
@@clansome Yeah, AMD is setting themselves up to get sued again.
I'm watching an 8 minute ad about 3rd gen Threadripper before this video. I'm not even kidding.
It would be nice to see what kind of improvements the 3950X would get with Ryzen-optimized RAM speeds, i.e 3600MHz CL16 or 3733MHz CL15, as I imagine anyone spending $750 USD on this CPU would have no issues spending extra for the best RAM.
Exactly - they've even made videos saying to use 3600 at the least for Ryzen, not sure why 3200 was used.
So the 3950X is the ultimate CPU for UA-camrs who like to game lol!
Let's read graphs!
Intel (9900K) : Gaming
AMD (3950X) : Render
It seems like the bottleneck is most of the time between the monitor and the chair. (see below)
AMD: Everything
@@stephenvsawyer...sigh
What did I say already
As always you are the best.. 😍
Man, can't realistically even hold ~4.5GHz and they're advertising 4.7GHz?? I love what Ryzen has done for the CPU market, but AMD needs to be called out hard for that claim.
EDIT: If AMD fixes this and the claim becomes true under realistic conditions, that's great. I'm only going off of the data we have here and now.
Guys, go watch Gamer Meld's latest video, a patch is gonna come out
@@stellar0001 Yes. They're already working on it, brother
Intel is doing the same with laptop CPUs - "5GHz" "boost".
And for the hundredth time, boost is only on 1-2 cores, not all-core.
And the new BIOS updates should fix some of the missing MHz.
They're not lying, but they're not being clear. 4.7 is the highest you'll see, on 1 core, for what could be a fraction of a second.
In other reviews they hold it so nope
Love that you use celsius
I love how intel fanboys forget about spectre and meltdown lmao
That is like.... old news? You don't have to be an Intel fanboy to forget about that.
Zombieload, foreshadow hahha
@@maybenot7298 I just saw you reply the same thing on a different comment lmao, quit spamming copy & paste for a minute
@@maybenot7298 nice copy pasto
@@maybenot7298 it's actually pathetic that you assume most people spending all that money on pc are just gaming lmao
More quality content from the Bob Ross of the tech world. Fascinating insight, a reassuring voice and, of course, wonderful hair 😉
When the CPU temp is >= 80 °C, the clock limit drops to 4.4GHz.
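Just as a toy illustration of the step described above - this is not AMD's actual boost algorithm, which weighs current, voltage, and thermal limits together, and the below-80 °C figure here is only a placeholder:

```python
def boost_ceiling_mhz(temp_c: float) -> int:
    """Toy model of the reported behaviour: a hard step in the boost ceiling at 80 °C.

    Only the 80 °C / 4400 MHz step comes from the observation above; 4600 MHz
    stands in for whatever single-core ceiling the chip manages when cooler.
    """
    return 4400 if temp_c >= 80.0 else 4600

print(boost_ceiling_mhz(75.0), boost_ceiling_mhz(82.0))  # 4600 4400
```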