Yep, this is why Intel beats AMD in games (which mostly require high single-core performance), while workloads such as decompression, compression, and physics simulations run better on AMD, which is better suited for them than Intel.
@@petrkdn8224 and also at the end of the day,both chips can do gaming and workloads :) unless you are obsesed with numbers....for us it doesn't matter what you choose :)
@@robb5828 Yes, of course, both are good. I have an i3-7100; sure, I can't run modern games on high settings, but it still runs everything (except Warzone, because that shit is unoptimized as fuck)
@@robb5828 To add to your point: if hardware/software has already "solved" your workload (a common example being word processing), any chip will do, and many tasks like gaming are more demanding on other parts of the computer/network. So differences that are already marginal have an even smaller impact, if any, in the bigger picture.
I'm reading about operating systems and I just discovered that the big difference lies in the architecture... both perform the same tasks quite differently, but the results from a top-tier AMD or Intel CPU are hard for the average user to even notice.
They are counting the laser-disabled ones... because that's how they are made. It's a six-core part, but it has the entire 8-core chip. In theory, 1 or 2 of those cores didn't meet validation requirements due to defects, so they laser them off and sell it as a 6-core CPU instead. It's the cheapest way to manufacture at scale, at least for now anyway...
@@holobolo1661 The yield on TSMC N7 by now is so high that you can bet they are crippling a tonne of perfectly good chiplets to fulfill demand for the 5600(X). That is the sole reason why AMD up to now didn't offer a non-X 5600 at reduced prices. They only do now because of actual competition from Intel with parts like the 12400.
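To put a rough number on the binning argument above: a simple Poisson yield model (with made-up defect density and die area, not TSMC's actual figures) suggests the vast majority of dies come out fully working, so a lot of 6-core parts really are good 8-core silicon with cores switched off. A minimal sketch:

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Probability a die has zero defects (simple Poisson yield model)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Assumed numbers for illustration only -- not TSMC's real figures.
defect_density = 0.09          # defects per cm^2, a commonly cited N7 ballpark
ccd_area = 0.74                # cm^2, roughly a Zen 2 CCD (~74 mm^2)
core_area = ccd_area / 8       # pretend defects land uniformly across 8 cores

# Chance the whole CCD is clean (sellable as an 8-core part):
p_clean = poisson_yield(defect_density, ccd_area)

# Chance exactly one core region is hit (harvestable as 6-core by disabling 2):
p_core_clean = poisson_yield(defect_density, core_area)
p_one_bad = 8 * (1 - p_core_clean) * p_core_clean ** 7

print(f"fully working CCDs: {p_clean:.1%}")
print(f"CCDs with exactly one bad core: {p_one_bad:.1%}")
```

With those assumed numbers, well over 90% of CCDs come out defect-free while only a few percent naturally have a bad core, which is why good dies end up being cut down to meet 6-core demand.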
Anthony, your presence here is great! It looks WAY more natural when you're not trying to hide the 'clicker' thingie :) If anything, this fits YOU very well, since YOU are the one who shows us how things work IN DEPTH. So it fits 'conceptually' too. I approve wholeheartedly. We all know 'how the pie is made' by now; so much 'behind the scenes' information about LMG; ...there's no need to pretend you're on network television or something :)
@@HULK-HOGAN1 Yet here you are, commenting on a video with Anthony in the thumbnail. It seems 'going out of your way to avoid anything with Anthony in the thumbnail' does not include 'NOT CLICKING on anything with Anthony in the thumbnail'. Lightly stated: there are some flaws in your methodology. More firmly: do something positive in your life - something that you truly love - that drains the energy and need from you to want to be negative towards others. Anthony makes complicated topics feel understandable to regular people, and is able to make 'us regular folk' feel excited about things we had no idea even existed 2 seconds ago. That is an exceptional skill. My question to you is: WHY do you waste your time commenting negative shit, especially if you didn't even feel like watching this video "because Anthony's in the thumbnail"? There's enough negativity in this world. Whenever you want to feel better about yourself by dragging others down, just because your own life isn't working out like you pictured... I don't need to hear/read your '2 cents'. And if that last part is the case: happy to talk sometime, or maybe go see a psychologist (it can help out a lot - trust me on that one). You're not alone in your misery; there are better times to come, even if you can't picture them right now. I know how tough shit can get. It gets better. Ain't no shame in asking for help along the way - that can save you a couple of years (again, trust me, I know). Anyway: no more negativity towards people on the internet, please. Talk to people about how you feel instead. It's scary as hell at first. You'll get used to it. And you might find out who your best friends truly are (they might not be the ones you think of first). One love, yo
The difference is you are not replacing your motherboard every time with AMD. Gotta love spending $200 bucks on a motherboard for a $300 processor. AMD BABY.
@@fahrai4983 Yeah, great, then I will have the AM5 board for the next 6-8 years. The point is that a new generation does not mean a new board EVERY SINGLE TIME like Intel does purposefully. There is zero reason for it. "Oh, we added a pin so it's 1151 pins instead of 1150 now; that extra pin does nothing, but we changed the pattern just to screw you." I understand AMD has to update their socket for new technologies, but we got so many glorious years of AM4, and before that, AM3.
For me the difference is that AMD does not offer a no-compromise CPU and integrated graphics at the same time. You either get a great CPU without integrated graphics, or an inferior CPU (fewer cores or less cache) with integrated graphics.
As a newish gaming PC user, something that has made me wonder is whether an AMD GPU works more efficiently when paired with an AMD CPU, or whether it matters at all which brand of processor you pair your GPU with. This would be a useful video topic for a lot of people, I believe.
I really enjoy your presentations & thank you for presenting complicated technology so that even an illiterate junkie like me might understand. Your voice is easy to follow, you don't talk too fast, nor over a person's head.
Anthony is just someone who can probably explain almost anything you need to understand - maybe he should narrate that "easy" quantum mechanics book by Hawking, "The Theory of Everything."
Another great Anthony video. Personally, I would love it if he were allowed to make them even more technical, but I do understand the reasoning of LMG wishing to appeal to a wider audience.
Alright boys, hear me out here. Way back when I was a fetus in 1996, high-end graphics PCs like the SGI lineup had boards for the internal parts like the GPU and CPU. So, if chiplets are so good and we can sort of plug and play them, why not have a board with dedicated slots for another GPU or CPU chiplet to be implanted? Why not be able to upgrade the VRAM chips? Yes, I get that designing this is complicated, but why not? Imagine plugging in a second 3080 chiplet and upgrading your VRAM chips to have a supercomputer on your desk.
I thought this would be about the architecture of the x86 designs they each use, but it turned out to be just about the recent way they're each implementing multicore.
@@hjups I'm not sure if the K6 was the last per-core equivalence. The last truly identical cores were the Intel 80486 and AMD Am486. As for other cores, AMD until the K10 (Phenom) did not fundamentally change the architecture. Bulldozer (FX) was the first major overhaul. Intel changed things up a fair bit sooner, with Netburst (Pentium 4). Funnily enough, both Netburst and Bulldozer were ultimately dead ends, worse than their predecessors. Intel brought back the i686 design in the form of first Pentium M and later Core 2. Core 2 competed against K8 and K10, which I think share the same lineage back to the first microcoded "inner-RISC" CPUs like the K6 and Pentium Pro. AMD instead started over once again, and that brings us to Zen. What I find interesting is that Zen 3/Vermeer and Golden Cove/Alder Lake are very good at trading blows: depending on what you're doing, one can be wildly faster than the other. As far as I can see though, that mostly seems to come down to caching; a Cezanne chip does not have the same strengths as Vermeer, but does have the same weaknesses, as far as I can see. I'm also curious how far hybrid architectures are going to go. On mobile, they're a massive success, and Alder Lake has proven them to be very useful on desktop as well.
@@scheurkanaal I think you misunderstood my statement. I'm not referring to performance, I'm referring to architecture. Obviously, there are going to be differences that have a substantial effect, even as far as the node in which the processors are fabricated on. Yes, the last time they were identical in architecture was the 486, however, the K5/K6 and the Pentium Pro/Pentium 2/Pentium 3, were all quite similar internally. AMD then diverged with the K7/K8+ line, while Intel tried Netburst with the Pentium 4. After the failure of Netburst, Intel returned to the Pentium 3 structure and expanded it into Core 2/Nehalem/etc. and have a similar structure to this day. Similarly, AMD maintains a similar structure to the K10, with families like Bulldozer diverging slightly in how multi-core was implemented with shared resources. Also note that AMD since the K5, and Intel since the original Pentium and the Pentium Pro have used a "RISC" micro-operation based architecture. The original Pentium is the odd one out there though, since it was less apparent due to it being an in-order processor while the others have all been out-of-order. Hybrid architectures may not really go much further than Alder Lake and Zen 4D. There isn't much room to innovate in the architectural space, where most of the innovation needs to happen at the OS level (how do you schedule the system resources). It's also driven by the software requirements though. Other than that, there may be some innovation in the efficiency cores themselves, to save power even further, but in exchange for lower performance (the wider the gap, the more useful they will be).
@@hjups I was also talking about architecture :) I was just not under the impression that the K7 was much different from the K6, since it did not seem all that different from what Intel was doing circa the Pentium 3 (which is like "a P2 with SSE", and the P2 in turn was just a tweaked Pentium Pro), and the numbering also implies a more incremental improvement (although to be fair, the K5 and K6 were quite different). That said, I wouldn't be so sure that Zen and K10 are that similar. As far as I know, Zen was (at least in theory) a clean-sheet design, more or less. I was also referring to micro-operations when I said "inner-RISC". The word "micro-operation" just did not occur to me. Finding something that said whether or not the original Pentium was based on such a design was also quite hard, so I assumed it wasn't. It was superscalar, but I think the multi-issue was quite limited in general, which gave me the impression the decoder was like the one on a 486, just wider (for correctly written code). I don't know how far efficiency cores will go. Their usefulness does not come from a wider gap, but rather from more efficiency (performance per watt). Saving 40% of power but reducing performance by 50% is not very effective. Also, in desktop machines, die size is a very big consideration, not just power. And little cores are useful here. Keep in mind that the E-cores from Alder Lake are significantly souped up compared to earlier Atom designs. That's important to maximize their performance in highly threaded workloads. I think the next thing that should be looked at is memory and interconnect. CPUs are getting faster, and it's becoming harder and harder to keep them properly fed with enough data.
@@scheurkanaal Maybe we have different definitions of architecture. SSE wouldn't be included in that discussion at all, since it's just a special function unit added to one of the issue ports, similar to 3DNow! (which came before SSE). The K5 and K6 are much more similar than the K6 and K7... The K5 and K6 even use the same micro-op encoding, as I understand it. The K7 diverged from simple operations though into more complex unified operations; that's also when AMD split up the integer and floating point paths. The cache structure changed, the whole front end changed, the length decoding scheme changed, etc. As for P2 vs Pentium Pro, the number of ports changed, and the front end was improved to include an additional decoder (which has a substantial effect on front-end performance - it negatively impacts it, requiring a new structure). The micro-op encodings may have also changed with the P2 (I believe they still used the Pentium uops in the Pentium Pro, which are very similar to the K5 and K6 uops). Zen may have been designed from the "ground up", but it still maintains the same structure and design philosophy - that's likely for traditional reasons (they couldn't think outside of the box). Although, it does have some significant benefits in terms of design complexity over what Intel does - especially when dealing with the x87 stack (the reason why the K5 and K6 performed so poorly with x87 ops, and why the K7 did much better). Yeah, I knew what you meant by "inner-RISC". I just used more technical terms. The P1 was touted as two 486's bolted together, but that was an overly simplified explanation meant for marketing people who couldn't tell the difference between a vacuum tube and a transistor. In reality, you're correct, the dual issue was very restricted, since the second pipeline really could only do addition and logical ops, as well as FXCH, which was more impactful (again for x87). I would guess that most of the performance improvements came from being able to do CMP with a branch, a load/store and a math op, or two load/stores. As for specific information about the P1 using uops, you're not going to find that anywhere, because it's not published. But it can be inferred. You would have to look at the instruction latencies and pipeline structure, know that a large portion of the die / effort was spent on "emulating instructions" (via micro-code), and have knowledge of how to build something like the Pentium Pro/2/K6. At that point, you would realize that the P1 essentially had two of what AMD called "long decoders" and one "vector decoder", so it could issue either two "long" instructions or one "vector" instruction. The long decoders were hard coded though, and unlike the K6/P2, the uops were issued over time rather than area (i.e. the front end could only issue 2 uops per cycle, and many instructions were 3 uops. So if logically they should be A,B,C,D,E,F, the K6 would issue them as [A,B,C,D] then [E,F], but the P1 issues them as [A,C],[B,D],[E,F]). Yes, power efficiency is proportional to performance: a wider gap implies better power efficiency. But there's also the notion of making the cores smaller and throwing more of them at the problem (making them smaller also improves power efficiency with fewer transistors). If the performance is too high though, there's no reason to have the performance cores, which is what I meant by the wide gap being important. Memory and interconnect are an active area of research.
One approach is to reduce the movement of data as much as possible, to the extent of performing the computation in RAM itself (called processing in memory). It's a tricky problem though, because you have to tradeoff flexibility with performance and design complexity (which is usually proportional to area and power usage - effectively energy efficiency).
You can guarantee that Meteor Lake with 'tiles' will be delayed for a year. Intel never gets things out on time, and with a huge change in the way the chip is fabbed, I would expect 3rd quarter 2024 to 1st quarter 2025. Arc is a perfect example to remind you of what happens when Intel tries new things. Intel Arc's (then Xe) original release date was set for 2020. Sure, it has to be driver issues, right? Intel did not deliver on time and they even delayed the announcement. By the time they release the GPU, AMD and Nvidia will have rolled out their next-gen launch, so Intel had better have something decent as in the GPU itself, or a low, low price. Intel stated they want to sell 4 million Arc GPUs throughout 2022 and that their average GPU price will be around $75. $75? Yeah, they stated that! That leads me to believe it may be crap.
In the most simplistic terms, Intel had the bank to crush fair competition, and they had AMD licked on single-core performance for ages. It is only within the last decade that multicore performance really started to become more prominent in the mainstream. AMD went back to the drawing board for their chiplet design and continued multicore performance improvements, which has made them as competitive and more so in recent years. There are tonnes more reasons, but those two stand out most to me.
When I put together my PC, I went team red simply because I intended to upgrade later and I knew AMD CPUs have a habit of being backwards compatible with older mobo chipsets. I still haven't upgraded though... (still rocking a 2400G). With this edit I'd like to say I went to a 3600 and it's amazing, but I've hit my limit; I need to get a new motherboard if I ever upgrade further.
All these tiles and multiple processor cores with high-speed serial (or parallel) communication are an adaptation of the Inmos Transputers from the 1980s. Inmos was just way ahead of its time.
I'd love to see a video on whether it's possible to add your own CCD - if you could get the parts, just adding more cores to your existing CPU using a CPU with an empty CCD spot. Might want to get a microscope for that one, and I doubt you could ever do it at home, but it would be interesting to see if it is possible.
I mean, you probably could, but there would be tons of issues. The chip would not be supported by any motherboard and would need a custom BIOS. You'd probably have differences between the chips that ones produced together would not have. It would be insanely easy to mess up. It might be fused off, which would completely negate doing anything. I'm pretty sure people have added more VRAM to GPUs and it has worked, but it was very unstable.
@@gamagama69 Seems that if the chip can use the signals used to identify 3900X or 3950X silicon, then maybe you could use the existing in-BIOS signatures for existing Ryzen chips to turn a 3800X into a 3950X, but that would be extremely difficult without nanometer-scale precision tools.
Would've been nice to mention that AMD still uses monolithic designs in its laptops and APUs. Would have been an interesting aside about the space disadvantages of chiplets. Great video though!
I decided to try an AMD machine after being with Intel for a few generations. The Infinity Fabric was completely unstable: any audio that was playing got filled with static, and the machine blue-screened occasionally. I tried for a week with updates, tweaks, and tutorials but couldn't stabilise it. I sold the parts, bought Intel parts, and had no problems at all. I've been building computers for myself since I was twelve years old (20 years), and that AMD machine was the only time I was forced to give up when presented with an issue. I've bounced back and forth between the two, as well as ATI and nVidia, over the decades, but that experience really put me off AMD for the moment.
Lol, I've been building for 15 years and always went with Intel. This time the hype about Zen 4 was so huge I could not resist trying it. I bought a 7900X. Had stuttering issues because of fTPM, plus issues with the memory controller. After two days I returned it and bought a 13600K, which has worked perfectly since. I can't trust AMD with my money anymore. First impression: not good.
Does this mean that Intel is eventually going to break through some wall to a better solution, whereas AMD's Infinity Fabric and chiplet building blocks will eventually become a problem when the limits of the speed of electricity come into play?
I've never really cared about either, lol. I'd just try to build comparable systems and then decide based on overall price, taking into consideration reviews of all the parts around them. I was working on it a bit last night as I'm considering upgrading, and noticed that the i7-12700K outperforms the Ryzen 9 5900X by a decent margin and is cheaper, which was interesting to me, as a step up on either side was a huge price jump for not a big jump in power.
One correction here: Meteor Lake doesn't use EMIB, it uses Foveros - basically 3D stacking of silicon. But unlike TSMC/AMD 3D V-Cache, Meteor Lake can be overclocked like normal CPUs.
It still fascinates me: 2 companies, started in the same decade (okay, Intel was named differently back then), and they are still competing against each other for "top dog" status. Kind of reminds me of 2 brothers constantly trying to one-up each other.
@The Deluxe Gamer They aren't really "competing" the way parties would in other countries where there is an actual left/center party rather than the US's 2 right-wing parties. If it were really comparable to the US political system, then both Intel and AMD would have to be competing in a single sector of processors, such as workload or gaming, while they actually do both (and they have different features, etc. etc.)
The main difference between an AMD and an Intel CPU is that having the pins on the motherboard side means you gotta keep track of the little socket cover, and you will always have a bit of nagging anxiety anytime you go through your parts bins, because "oh no, what if I knock it on the floor and someone thinks it's a piece of trash and throws it away"
@@rk3senna61 They've had them for a long time, they just typically weren't discrete GPUs, only integrated. They're starting to do discrete now, but GPUs in and of themselves are not new to them.
Intel also has less cache compared to AMD CPUs, which tends to slow down your computer after a while. For example, I switched from a Ryzen 3700X to an i5-11400 system because I had sold that computer to a friend, and at the time an Intel 11400 system cost much less than Zen 3 systems. And the i5-11400 is supposed to be faster than the 3700X for single-threaded applications, right? Yes, it is faster in games, but after only 9-10 months of usage, the web browsing experience and a couple of applications like OBS got significantly slower, compared to 2 years of heavy usage on the Ryzen 3700X. I am now just too lazy to reinstall Windows, due to my job taking too much of my time and leaving no room for backing up stuff. And for those who might ask: I don't have more programs, I'm not using an antivirus, I still have the same SSDs, I am up to date on drivers, and I don't use browser extensions... And no, the CPU or memory usage isn't high. And I got significantly faster memory on this system with super low timings. And yes, the memory overclock is stable; it has passed Memtest 1500%, Linpack, Time Spy, y-cruncher, all of that. So yeah, at least as far as I can tell, 11th gen Intel sucks in that case, which I think is caused by 32 megabytes of L3 cache vs 12 megabytes. Making a YouTube video fullscreen in Chrome takes a couple of seconds, for example. I mean, like, wtf...
@@zatchbell366 HWiNFO shows 9 TB total host writes; it's a Samsung 970 Evo Plus 2 TB and it only has Windows and some programs, 230 GB used in total, so I don't think that's the case.
@@danieljimenez1989 Just reinstalled Windows, all the other programs, and all the Windows updates on the same SSD. Everything is now running flawlessly fast. So apparently it was Windows and software updates bloating the system, which made the CPU or cache no longer able to keep up in some programs. I have got my bookmarks and everything else back in Chrome. And all the "default programs" are still running at startup; I installed the same drivers, as I still have my "install" folder on another drive where I keep all my driver setups. I also have Steam and my games installed as well. So it was not the SSD nor anything else - just stupid Windows bloating things up.
A version of this explaining the difference between Nvidia, AMD, and Intel's GPU architecture would be amazing!
for real
I think that would take longer than 4 minutes to do justice. But I guess you could break it down as AMD focuses on having few ultra-wide cores, and also puts an emphasis on software control (for example, divergent branch instructions are implemented with code rather than hardware resources). Intel focuses on many ultra-small cores, more similar to how GPUs were made before unified shaders became the norm, and NVidia is somewhere between the two with an emphasis on hardware instruction support. So in theory, Intel would be good for lots of divergent workload (like many small triangles), AMD is good for lots of uniform work (like many big triangles), and NVidia is adaptable (it can do both types of work, but not as efficiently). Most rendering applications have a mix though, which is why NVidia usually does better.
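To make the "divergent branches handled in software" point concrete: on a SIMD/SIMT machine, all lanes of a wavefront step through both sides of a branch, and an execution mask decides which lanes actually commit. A toy sketch of the idea (conceptual only, not any vendor's actual ISA):

```python
# Toy SIMT divergence: every lane runs both sides of the branch,
# an execution mask decides which lanes actually commit results.
# This is a conceptual sketch, not any vendor's real ISA.

lanes = list(range(8))                 # one "wavefront" of 8 lanes
x = [3, 8, 1, 6, 7, 2, 9, 4]           # per-lane input
y = [0] * 8

cond = [v % 2 == 0 for v in x]         # per-lane branch condition

# "then" side, masked to lanes where cond is True
for i in lanes:
    if cond[i]:
        y[i] = x[i] * 2

# "else" side, masked to the remaining lanes (mask inverted, not re-evaluated)
for i in lanes:
    if not cond[i]:
        y[i] = x[i] + 1

print(y)   # both paths executed; divergence cost = then-time + else-time
```

The cost of divergence is that the wavefront pays for both paths, which is why lots of tiny, divergent work favors architectures with more, narrower cores.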
And Apple's!
Yes please
@@possamei Apple's GPU architecture is based on the PowerVR TBDR, which has roots in the GPU used by the SEGA Dreamcast. It's closest to Intel's GPU architecture (compared to AMD and NVidia), but unlike Intel, it has a special shared memory specifically for the framebuffer. In that architecture, the screen is drawn in tiles and blended within the shared memory before being written back to VRAM (the goal is to never fetch the framebuffer from VRAM), but in exchange it imposes restrictions on latency - you need to set up your frame one frame before it's drawn (so you get 1+ frames of lag with optimal performance).
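A minimal sketch of that tile-based idea: all blending happens in a small on-chip tile buffer, and each tile is written out to "VRAM" exactly once. The tile size, resolution, and placeholder shader below are assumptions for illustration only:

```python
# Minimal sketch of tile-based rendering: the framebuffer is only ever
# written to "VRAM" once per tile; all blending happens in on-chip tile memory.
TILE = 32
WIDTH, HEIGHT = 1280, 720
vram = {}   # stand-in for the external framebuffer

def shade(x, y, layer):
    # placeholder fragment shader
    return (x ^ y ^ layer) & 0xFF

for ty in range(0, HEIGHT, TILE):
    for tx in range(0, WIDTH, TILE):
        tile_mem = [[0] * TILE for _ in range(TILE)]     # on-chip tile buffer
        for layer in range(4):                           # all geometry touching this tile
            for y in range(TILE):
                for x in range(TILE):
                    src = shade(tx + x, ty + y, layer)
                    tile_mem[y][x] = (tile_mem[y][x] + src) // 2   # blend in-tile
        vram[(tx, ty)] = tile_mem                        # single write-back per tile
```

The latency restriction mentioned above comes from needing the full list of geometry per tile before the tile loop can start, hence the frame of buffering.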
Please do more videos like this, focused on the chips, the technologies behind them, and so on. It's awesome content.
Yes, it's nice to get a glimpse into how the hell this stuff works
And have Anthony host them
He needs to do Mediatek vs Qualcomm
This. But I wish they were TechLongies. Ran through it too fast to really comprehend and didn't go into deep details.
@@xADDxDaDealer dis iz de wey
I know he gets mentioned rather frequently, but Anthony is a godsend for this channel. His voice, his mannerisms, his general disposition is just perfect, especially in videos like this.
Anthony is great at explaining things. Love him.
I like the topics he chooses, but Riley, James, Linus, etc are still my preferred hosts.
Anthony is the best. and he has a bad ass track suit.
@@Papa-Murphy Agree.
Anthony is good at explaining things and has a kind of "normal", relatable manner to him. But I personally prefer the other hosts for their energy, rapid delivery, and comedic timing.
Linus actually dislikes (not really dislikes, but more like avoids) working with Anthony, because he can (in Linus's own words) get a bit too technical. I do enjoy Anthony's content a lot tho
AMD: "We're introducing chip stacking"
Pringles: 😎
lol
And once you pop, the fun don't stop
Anthony's tone, inflection, and personability on screen, plus how he arranges his content, make the information he is presenting easy to digest and don't leave you feeling lost. I feel like Anthony is writing the LTT version of Electronics for Dummies while making you feel smart just listening to him. He is a great and invaluable asset to the team.
He will be missed.
@@RodrigoAReyes95 He died?
@@vengeance2825 no, but she isn’t “Anthony” anymore, if you know what I mean 😑
@@RodrigoAReyes95 Ohhh, him became a shim... dang.
Thank you for taking the time to explain the differences between Intel & AMD, especially since the market share between the two is now neck and neck and not the blowout Intel once had.
I guess what it boils down to, for someone who does a lot of programming and some casual gaming on older games like EVE Online and WoW, is that the differences really don't matter. It's like trying to compare a detached house with a semi-detached house. The architecture might be different, but the house is still your own.
The terms "Zen 3" and "Zen 2" are misused here to explain CCD, what you actually mean is "Vermeer" and "Matisse"... there are other Zen 3 and Zen 2 CPUs like Cezanne and Renoir that are monolithic and don't use CCDs.
This, and AMD seems to have dropped the "CCX" terminology for Vermeer / Milan, because these chips no longer have a crossbar connecting 4 cores; instead all 8 are connected via a ring bus.
5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count and don't have integrated graphics lmao.
I'm on team monolithic.
@@saricubra2867 Because it uses zen 3 chips. The 5700g actually loses to the 5800x by quite a huge margin, so much that it is much closer to a 3700x in multicore performance due to the lack of cache.
@@mingyi456 That is not true in terms of single core speed.
@@saricubra2867 Yes, the 5700g beats the 3600 and 3700x in single core, but that has nothing to do with its packaging. Its monolithic form factor lets it down in multicore performance, because it is restricted in cache capacity.
Your original comment was "5700G outperforms Zen 2 chips that have twice the L3 cache with similar core count". Why mention the core count if you were comparing single core performance? It is really an unfair statement when you are comparing zen 3 monolithic to zen 2 chiplets, then concluding that chiplets are worse because faster zen 3 cores on a monolithic package are faster in single core compared to older, slower cores on a chiplet design. You should be comparing either the 4700g and 3700x, or the 5700g and 5800x, not the 5700g and 3700x, if you want to argue about the packaging technique for the cores.
One's innovative,
And the other ...
Is also innovative
READ MY NAME!!!!!
!
Stacking chips is actually used a lot in mobile phones. Even the Raspberry Pi Zero has stacked chips.
You're describing a different technology called package-on-package. Chip stacking is 3D-integration using through-silicon vias, and significantly more complicated and expensive to do.
@@hjups Yeah, you are right, I confused the two. But the Raspberry Pi Zero 2 does have true chip stacking with wire bonding. Just take a look at the X-rays!
@@bonnome2 I wouldn't consider wire-bonding to be stacking. It's more like one of those weird package-in-package things. An evolution of multiple dies on a fiber composite like what Microchip did with some of their SAMD MPUs.
Chip stacking would imply that wire bonds are not used.
@@hjups is package on package less efficient or something?
@@asterphoenix3074 Not necessarily. It has to do with the interconnect size. Package on package can work for a LPDDR4 chip for example (~60 pins), whereas 3D stacking can be full-scale (~10,000 pins). Also, you get higher parasitics with PoP and still need to translate the signal to something that can go external (that's fine for LPDDR4 though, because it's using the LPDDR4 standard). 3D stacking on the other hand typically just has re-drivers (buffers) to go between dies.
So I guess tl;dr. If you want to stack something that you could otherwise put on the motherboard, then PoP is fine. If you need something higher performance, you want 3D stacking.
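Rough arithmetic to show why that interconnect size matters (the pin counts follow the ballparks above, but the per-pin rates are illustrative assumptions, not measured figures):

```python
# Back-of-envelope interconnect comparison (illustrative numbers only).
pop_pins, pop_rate_gbps = 60, 3.2       # e.g. a 32-bit LPDDR4-style PoP link
tsv_pins, tsv_rate_gbps = 10_000, 1.0   # a dense TSV interface at a modest per-pin rate

print(f"PoP aggregate: {pop_pins * pop_rate_gbps / 8:,.0f} GB/s")
print(f"TSV aggregate: {tsv_pins * tsv_rate_gbps / 8:,.0f} GB/s")
```

Even at modest per-pin rates, the dense TSV interface comes out orders of magnitude wider than a PoP memory link, which is the whole appeal of true 3D stacking.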
I really enjoyed the detail in this. Interesting to deep dive into how the tech actually works. Thanks!
I wouldn’t call a five minute video on something as complex as CPUs as a deep dive.
lol u think this was a "deep" dive
I know this title seems catchy, but it's an oversimplification of a rather trivial difference...
The big difference between AMD and Intel performance comes down to the CCX and internal core architecture, and not the package technology used... The package technology has more of an impact for manufacturing costs and yields than for performance.
You could have spent time talking about how the cache sizes and philosophy are different, how the inter-core communication strategy is different, how the branch predictors and target caches are different, how the instruction length decoding is different, how the instruction decoders themselves are different, the differences in the scheduling structure, the differences in the register files and re-order buffer, etc. But instead... you discuss the manufacturing difference and still don't get that quite right...
So a few clarifications.
1) The latency in Infinity Fabric is largely due to the off-die communication. The signals within the die are far weaker and have to be translated into something that can leave the die, and then translated into something that can work in the next die. It's sort of like fiber-optic Ethernet: you have to translate the electrical signal into light, travel along the fiber, and then translate the light back into an electrical signal. However, the latency of Infinity Fabric for die-die communication is on par with the far ring communication on Intel CPUs. So it's not the major contributing factor for performance.
2) Infinity fabric is not serial, at least from what I could find. It utilizes SERDES for fewer wires, but it is still able to transfer 32-bits at the 1.6-1.8 GHz interconnect speed. That does not make it serial - it's effectively identical to a 32-bit bus. It should be noted that infinity fabric is a NoC, just like the ring-bus on Intel chips, where the flits are 32-bit. Granted though, the Intel ring bus NoC is likely wider (possibly 128-bits). I don't believe this is public knowledge, so I'm not sure about the exact parameters.
3) The video said that the core-core communication is slower across infinity fabric, however, it should be noted that the majority of the communication is not core-core. Instead, it's cache-cache communication (i.e. maintaining memory consistency and executing atomic operations). Core-Core communication would imply mailboxes, IRQs, or some sort of MSR based messaging.
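On point 2), a toy sketch of what "SERDES over fewer wires" means: a 32-bit flit split across 8 physical lanes at a higher symbol rate, then reassembled on the far side. The lane count and flit width are assumptions for illustration; the real Infinity Fabric and ring-bus parameters aren't public.

```python
# Toy SERDES: a 32-bit flit carried over 8 physical lanes by sending
# 4 bits per lane at a higher symbol rate, then deserialized on the far die.
LANES = 8
BITS_PER_LANE = 32 // LANES

def serialize(flit: int):
    """Split a 32-bit flit into per-lane bit streams."""
    streams = [[] for _ in range(LANES)]
    for t in range(BITS_PER_LANE):
        for lane in range(LANES):
            bit_index = t * LANES + lane
            streams[lane].append((flit >> bit_index) & 1)
    return streams

def deserialize(streams):
    """Reassemble the original flit from the lane streams."""
    flit = 0
    for t in range(BITS_PER_LANE):
        for lane in range(LANES):
            bit_index = t * LANES + lane
            flit |= streams[lane][t] << bit_index
    return flit

flit = 0xDEADBEEF
assert deserialize(serialize(flit)) == flit   # effective width is still 32 bits
```

The point is that using fewer physical wires at a higher rate doesn't make the link logically serial; the transported flit is still 32 bits wide.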
Yeah!
Is that why AMD is implementing 3D V-Cache?
@@richardsalazar4817 No, the 3d-vcache is just to have a bunch of cache. To do any sort of computation, data needs to be moved from memory into the CPU. If it's in DRAM, then that takes a relatively long amount of time (1000s of CPU cycles), whereas if it's in SRAM (cache), that can be as low as 3 cycles for L1, or 50 cycles for the L3. This is largely due to the inherent properties of the memory technology itself (DRAM vs SRAM). So ideally, you want most of your data in SRAM. But SRAM also has the problem that it's not very dense, making it expensive in large quantities. However, if instead of making the CPU die bigger to fit more SRAM, you can put it in another die sitting atop the CPU die (the 3d-vcache), then you don't need a very big die for the SRAM. There are still limits though, which is why vcache isn't GBs in size.
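The usual way to see why a bigger L3 helps is average memory access time (AMAT). A quick sketch with illustrative hit rates, loosely matching the cycle counts above:

```python
# Average memory access time (AMAT) with and without a bigger L3.
# Cycle counts and hit rates are illustrative, loosely matching the ones above.
l1_hit, l1_lat   = 0.90, 3
l3_lat, dram_lat = 50, 1000

def amat(l3_hit_rate):
    miss_l1 = 1 - l1_hit
    return (l1_hit * l1_lat
            + miss_l1 * l3_hit_rate * l3_lat
            + miss_l1 * (1 - l3_hit_rate) * dram_lat)

print(f"small L3 (60% hit): {amat(0.60):.1f} cycles per access")
print(f"3x L3   (85% hit): {amat(0.85):.1f} cycles per access")
```

With those assumed hit rates, catching more misses in L3 roughly halves the average access cost, which is the whole pitch of V-Cache.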
@@hjups who are you?
@@gabadu529 A computer architecture researcher, who doesn't work for Intel or AMD.
Current Ryzen & Epyc chiplets do not use a silicon interposer. They use traces in the package substrate to connect the chiplets. However AMD already has an answer to Intel EMIB by using Elevated Fanout Bridge (EFB) from TSMC in their Instinct MI200.
It's interesting to know what Apple uses in their UltraFusion - I mean, whether that is a serial interconnect like AMD's or parallel like Intel's.
@@niks0987 Apple M1 Ultra uses TSMC InFO_LI (Parallel) as confirmed by TSMC. Check the article published in Tom's Hardware on 27-Apr-2022. This is similar to what AMD uses in its Instinct MI200.
@@srikanthramanan Thanks, Apple indeed means serious business! Great info.
now we just need to combine them and get an intamd which has 40 cores
lol auto correct wants it to be intend
Smaller Chiplets are actually due to EUV Lithography.
Because they have to use mirrors instead of lenses, the area of the chip is quite limited.
I wish I could get my hands on some EUV lenses lol, I wanna build an EUV microscope
@@mastershooter64 You will probably get a Nobel Prize if you manage to make EUV work with lenses.
That's incorrect. 7nm EUV (as well as 5, 4, and 2 nm) can still do full wafer sized chips (i.e. one chip per wafer). The lithography constraint is that you need to expose the wafer in many small intervals. If what you said was true, then Nvidia and Intel would be unable to manufacture their monolithic chips, and neither could AMD manufacture the PS5 / Xbox X/S, both of which are also monolithic.
The size limit is still around 800mm² (or 400mm² for high-NA), much larger than the compute chiplets AMD has been making.
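For a sense of scale, a standard dies-per-wafer approximation (ignoring scribe lines and defects; the die sizes are just round numbers for the reticle limit above and a small chiplet):

```python
import math

# Rough dies-per-wafer estimate (standard approximation, ignores scribe lines).
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print("reticle-limit die (~800 mm^2):", dies_per_wafer(800))
print("small chiplet     (~80 mm^2):", dies_per_wafer(80))
```

That's on the order of 60-something reticle-limit candidates versus roughly 800 small chiplets per 300 mm wafer, before yield even enters the picture, which is a big part of why chiplets are cheaper to make.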
@master shooter64 everything absorbs EUV, so it would be less useful than electron microscopes, and lower resolution.
Oh if only I understood one thing you said 😩🤦🏾♀️ Layman’s terms please 🙏🏾
Git gud.
I don’t know what that means
Linus, give the man his own show already!
Sorry Linus, this is Anthony's Tech Tips now. ATT.
@Finkel - Funk that’s honestly what I was hoping to see in their April Fools video, where Linus is replaced by Anthony and gradually loses everything before waking up at the end of the video revealing it was all a nightmare of his lol. Maybe next year.
Lol, he can make his own channel whenever he wants.
@@TH3C001 I hope they see this for next year.
"man" lol
Would assume it's as simple as less wattage = less heat = longer lifespan, and that's why, in the early years of Athlons, AMD systems were ready to give up the ghost after a year or two, while Intel chipsets rarely failed but were simply outclassed and shelved due to newer technology and higher speeds.
This is a really good video. Just the right amount of depth, pacing and audio/video content. Anthony is very articulate and covers the stuff I care about. Thank you!
Disclaimer: what CPU to choose for gaming (read fully)
Don't go with AMD CPUs if you care about latency in games, or at least run your RAM at a higher speed. They use a technology called Infinity Fabric to pack lots of cores across two dies, so latency is noticeable; Intel doesn't use this, which is why their prices are usually higher than AMD's. I highly recommend Intel if you are playing online multiplayer games. If you don't play that much, then you can go with AMD.
I usually bought Intel CPUs, as they were always reliable, but over a year ago I went for an AMD Ryzen 9 5900X instead. 100% satisfied with that too.
New Intel CPUs somehow happen to be cheaper here, so I use those instead.
AMD ones are now as reliable as Intel, but because they are built differently, it affects certain processing tasks. I'm a 3D visualizer and have been using Intel chips for my rendering process, as they are also the standard for most render farms. No problems all this while, until I switched to AMD: while the creation process is very much the same, when it comes to rendering, AMD computes differently from Intel, hence the render results are different and inconsistent with those rendered using Intel CPUs. So I had to go back to Intel for my work, but for anything else, like coding or gaming, there's no issue. I believe it would affect physics simulation as well. I guess what I'm saying is that for the average user it won't matter how differently AMD and Intel chips are built, but for calculation-sensitive tasks it does.
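If render results really do come out bit-different between CPUs, the usual suspect isn't one vendor doing the math "wrong"; it's that floating-point arithmetic isn't associative, so a different instruction mix (FMA vs separate multiply-add, different vectorization or summation order) rounds differently. A tiny illustration of the underlying effect:

```python
# Floating-point addition is not associative, so evaluation order matters.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))       # False: the two orders round differently

x = [1e16, -1e16, 1.0]
print(sum(x), sum([x[0], x[2], x[1]]))  # 1.0 vs 0.0 with the same three numbers
```

Renderers that need bit-identical output across machines typically have to pin down the math mode rather than rely on the hardware brand.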
Does AMD still make their chips run hotter than hell? The only one I ever owned fried itself. I have used Intel ever since (the mid-'90s).
@@kenhew4641 AMD (AMF/VCE) definitely sucks when it comes to rendering and encoding compared to Nvidia NVENC and Intel QSV (EposVox made a good analysis of this)
@@h.mandelene3279 Sometimes; it depends on the setup, but AMD setups are usually hotter and more power-hungry than Intel ones.
Way more informative than Google searches. I'm upgrading my laptop eventually and I have been fighting to find current specs and upcoming technology improvement predictions.
Very good video. I enjoyed it because it discussed the underlying tech of something we use instead of a million-dollar server that I'll never use or need in my life.
Wow
This was actually quite informative. I was expecting more benchmarking and specific tasking head to head, but I definitely learned something new and useful.
Always good to see Anthony showing out, good stuff, great channel and as always, I look forward to more!
Man I loved that Pringles joke way too much. Great simplicity in the explanation!
I want to see a technological overview on the history of cpu coolers
You must be a fan.
@@GregMoress this fan spins as well
Parallel transmission of data suffers from one drawback, synchronisation. Remember when we had parallel interfaces connecting our hard-disks and printers? Remember how limited they were in speed because of the required acknowledgements, synchronisation, and reassembly silicon (parallel cache) used to ensure data was not lost? Remember when SATA and USB arrived and suddenly we had better drive speeds and device hubs were now possible?
No? Oh, well. Just remember parallel data transmission architectures work most efficiently when using separate serial streams in parallel where each stream is independent and synchronisation is optional - just like PCIe. I'd be surprised if the Intel "parallel" EMIB was actually truly parallel. It is more likely it is used as a way to overlap execution ports on the cores. The giveaway is the lack of reassembly buffers.
Ridiculously rude commercial break at 1:52, regular programming resumes at 2:23 🙂
Hey, I have a Ryzen 7 5800X with a 3080 Ti. Should I upgrade to an Intel i9-12900K for the best gaming performance, or no? Pls help me
if you have a 4K monitor, the answer is No
@@JustRelx ye I haven’t. 1440p is more than fine
0:55 that seems labelled wrong. 5600 is Zen3.
There is no 5600, only a 5600X, and yes, they meant the 3600.
@@Alirezarz62 There is a 5600, as of yesterday.
One makes you buy a new mainboard every 12 months. The other makes you buy a new mainboard... every 2-3 years? I don't care which I buy so long as the CPU uses at least 1000W of power. Post those power bills online for street cred.
There were a lot of differences between AMD and Intel that I really wasn’t familiar with when doing my first build. Like, I saw a lot of things mentioning XMP profiles for RAM, and then I spent god knows how long trying to figure out how to enable XMP, because that’s what you’re supposed to do… nobody ever said anything about DOCP. I wouldn’t even know it existed!
Yup. Always had Intel till the 3600 launched, and I actually had to google "AMD XMP" to figure out it was called DOCP, though the manual probably would have mentioned that had I read it. Still can't wrap my head around overclocking.
The actual difference between Intel and AMD is Intel overclocks older processors and sells them as next gen. AMD does not. Intel reports insider trading of their own stocks in their quarterly profit reports. AMD does not. Intel preys on low intelligence people with their plebian advertising campaigns. AMD does not. In general, Intel is run by unscrupulous criminals. AMD is not.
I'd love to see a video talking about the differences in instruction sets between CPUs - x86/PowerPC/ARM, etc...
The BIGGEST difference is that AMD does not impose their standards to try to kill the competition like Intel does. There is a reason it was called "Wintel": both Intel and Microsoft tried to shove it onto us users. Also, to a lesser extent, Nvidia.
WOW! Another AWESOME video!! What would be so cool, awesome and appreciated is if you guys did a video on which one (Intel vs. AMD) is good for cybersecurity, coding, programming and the like. Although it would be subjective, it would also be great to be able to pick your minds about it all - somewhat of a "Knowing What We Know" series. There are a whole lot of aspiring cybersecurity/coding enthusiasts [such as myself] who are coming into it all blind and are caught up in picking between the two. CES 2022 confused us even more with the plethora of awesomeness in the CPUs, but now... which one would be good for what? Thanks!!!!
AMD fan here, not because of design or fabrication or anything technical. I have simply loved watching AMD periodically hand Intel its own gluteus maximus, always at the point where Intel has, again, reached peak ego, forcing them to rethink everything. It's been great for consumers every single time.
On a lower level, the cores are also structured differently between brands, with Intel favoring a large branch predictor and a much higher transistor count along the paths instructions push through (beyond the more complex branch predictor). This leads to marginally better single-core performance, higher power draw, and less space on the die for cores (ignoring MOSFET size differences). Because AMD favors less branch prediction and generally fewer transistors per instruction path, it is generally able to fit more cores that run more efficiently, with marginally worse single-core performance due to weaker branch prediction (the sketch after this thread shows why branchy code leans on the predictor). There's a lot more to it, but that has been a big difference between the two brands since AMD started making its own x86 chips.
Interesting!!
@@petrkdn8224 Yep, this is why Intel beats AMD in games (which mostly require high single-core performance), while workload processes (such as compression and decompression, or physics simulations) run better on AMD, because it is better suited for them than Intel.
@@petrkdn8224 And also, at the end of the day, both chips can do gaming and workloads :) Unless you are obsessed with numbers... for most of us it doesn't matter what you choose :)
@@robb5828 yes of course, both are good.. I have an i3 7100, sure I can't run modern games on high settings, but it still runs everything (except warzone because that shit is unoptimized as fuck)
@@robb5828 To add to your point: if hardware/software has already "solved" your workload (a common example being word processing), any chip will do, and many tasks like gaming are more demanding on other parts of a computer or network. So the already-marginal differences have an even smaller impact, if any, in the larger picture.
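To make the branch-predictor point above concrete, here is a minimal, vendor-agnostic C sketch (not from the video; names, sizes and the build line are my own choices, and depending on compiler and flags the gap may shrink if the compiler converts the branch to a conditional move). A data-dependent branch over random input mispredicts constantly; the same branch over sorted input, or a branchless rewrite, does not:

```c
/* Branchy vs. branchless summation of values >= 128.
 * Build e.g.: gcc -O2 branchy.c -o branchy */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static long sum_branchy(const int *v, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        if (v[i] >= 128)          /* data-dependent branch: hard to predict on random input */
            s += v[i];
    }
    return s;
}

static long sum_branchless(const int *v, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i] * (v[i] >= 128);   /* arithmetic mask instead of a branch */
    return s;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;   /* safe: values are 0..255 */
}

int main(void) {
    int *v = malloc(N * sizeof *v);
    if (!v) return 1;
    for (size_t i = 0; i < N; i++) v[i] = rand() % 256;

    clock_t t0 = clock();
    long a = sum_branchy(v, N);
    clock_t t1 = clock();
    long b = sum_branchless(v, N);
    clock_t t2 = clock();

    qsort(v, N, sizeof *v, cmp_int);   /* sorted input makes the branch predictable */
    clock_t t3 = clock();
    long c = sum_branchy(v, N);
    clock_t t4 = clock();

    printf("branchy/random:  %ld (%.3fs)\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("branchless:      %ld (%.3fs)\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("branchy/sorted:  %ld (%.3fs)\n", c, (double)(t4 - t3) / CLOCKS_PER_SEC);
    free(v);
    return 0;
}
```

The better a core's predictor, the smaller the penalty on the "branchy/random" case - which is the general idea behind spending transistors on prediction hardware.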
I'm reading about operating systems and I just discovered that the big difference lies in the architecture... both perform the same tasks quite differently, but the results of a top-tier AMD or Intel CPU are hard for the average user to even notice.
0:46 the 5600X has 6 cores, not 8 (unless you're counting laser-cut ones?). And at 0:56, the 5600 is not based on Zen 2. Did you mean the 3600?
I was confused when it mentioned 5600 as zen 2.
lmao who even made that? they should double check those
They are counting the laser-disabled ones... because that's how they are made. It's a six-core part, but it has the entire 8-core chip; in theory, 1 or 2 of those cores didn't meet validation requirements due to defects, so they laser them off and sell it as a 6-core CPU instead. It's the cheapest way to manufacture at scale, at least for now anyway...
@@William-Morey-Baker I hope they're not disabling perfectly good cores... that's so stupid.
@@holobolo1661 The yield on TSMC N7 by now is so high that you can bet they are crippling a tonne of perfectly good chiplets to fulfill demand for the 5600(X). That is the sole reason why AMD up to now didn't offer a non-X 5600 at reduced prices. They only do now because of actual competition from Intel with parts like the 12400.
This guy is great.
Anthony, your presence here is great!
It looks WAY more natural when you're not trying to hide the 'clicker' thingie :)
If anything, this fits YOU very well, since YOU are the one who shows us how things work IN DEPTH.
So it fits 'conceptually' too.
I approve wholeheartedly.
We all know 'how the pie is made' by now; so much 'behind the scenes' information about LMG;
...there's no need to pretend you're on network television or something :)
I don’t like seeing Anthony in videos. I usually go out of my way to avoid clicking on any video with him in the thumbnail
@@HULK-HOGAN1 Care to elaborate why?
@@HULK-HOGAN1 Yet here you are, commenting on a video with Anthony in the thumbnail.
It seems 'going out of your way to avoid anything with Anthony in the thumbnail' does not include 'NOT CLICKING on anything with Anthony in the thumbnail'.
Lightly stated; there are some flaws in your methodology.
More firmly; do something positive in your life - something that you truly love - that drains the energy and need from you to want to be negative towards others.
Anthony makes complicated topics feel understandable to regular people,
and is able to make 'us regular folk' feel excited about things we had no idea even existed 2 seconds ago
That is an exceptional skill.
-
My question to you is;
WHY do you waste your time commenting negative shit;
especially if you didn't even feel like watching this video "because Anthony's in the thumbnail"?
-
There's enough negativity in this world.
Whenever you want to feel better about yourself by dragging others down, just because your own life isn't working out like you pictured...
I don't need to hear/read your '2 cents'.
-
... And if that last part is the case: happy to talk sometime, or maybe go see a psychologist (it can help out a lot - trust me on that one).
You're not alone in your misery; there's better times to come, even if you can't picture them right now.
I know how tough shit can get. It gets better. Ain't no shame to ask for help along the way - that can save you a couple years (again; trust me. I know)
Anyways; no more negativity towards people on the internet, please.
Talk to people about how you feel instead. It's scary as hell at first. You'll get used to it.
And you might find out who your best friends truly are (they might not be the ones you think of first)
One love, yo
@@HULK-HOGAN1 Opposite of the rest of us then
@@HULK-HOGAN1 Before anyone else responds to this, please remember: Do not feed the trolls.
I appreciate the knowledge. I also like your friendly tone, brother.
Ahh yes, thanks for making the entirely more relatable link to modern basketball court construction, certainly something I'm far more in tune with :)
The difference is I am an intel zealot. LOL!
Anthony is my favorite person, nice to see him in a video
READ MY NAME!!!!!
!
Agreed. I love the way he explains stuff. He does it so clearly, but for some reason, I can't process or retain the videos he's in.
The amount of influence Linus Tech Tips has over what people buy is insane
The difference is you are not replacing your motherboard every time with AMD.
Gotta love spending $200 on a motherboard for a $300 processor.
AMD BABY.
That’s not true this generation. The 5000 series is the last supported one for AM4.
@@fahrai4983 Yeah, great - then I will have the AM5 board for the next 6-8 years. The point is that a new generation does not mean a new board EVERY SINGLE TIME like Intel does purposefully. There is zero reason for it. "Oh, we added a pin so it's 1151 pins instead of 1150 now; that extra pin does nothing, but we changed the pattern just to screw you."
I understand AMD has to update their socket with technologies but we got so many glorious years of AM4, and before that, AM3.
@@fahrai4983 AM4 has been the latest since 2016, that's a long time. AM5 will probably last around the same amount of time.
And it’s slower
For me the difference is that AMD does not offer a no-compromise CPU and integrated graphics at the same time. You either get a great CPU without integrated graphics, or an inferior CPU (fewer cores or less cache) with integrated graphics.
As a newish gaming PC user, something that has made me wonder is whether an AMD GPU works more efficiently when paired with an AMD CPU, or whether it matters at all which brand of processor you pair your GPU with. This would be a useful video topic for a lot of people, I believe.
go nvidia + intel
I really enjoy your presentations, and thank you for presenting complicated technology so that even an illiterate junkie like me might understand. Your voice is easy to follow, you don't talk too fast, nor do you talk over anyone's head.
For Anthony, always a like :3
Anthony is just someone who can probably explain almost anything you need to understand - maybe, he should narrate that "easy" quantum mechanics book by Hawking - "The Theory of Everything."
I didn't CC-understand.
Nice presentation and explanation of Intel vs AMD tech. It will be hard to imagine what chip design will be like in 20-50 years.
ryzen 5 5600 is zen 2 >>
Another great Anthony video. Personally I would love it if he would be allowed to make them even more technical, but I do understand the reasoning of LMG wishing to appeal to a wider audience
Your explanations of tech news are given in a way that takes away the intimidation a person may feel when trying to understand the information. Thank you!
Anthony's videos are informative AND entertaining. Well done sir, well done!
Alright boys... hear me out here. Way back when I was a fetus in 1996, high-end graphics PCs like the SGI lineup had boards for the internal parts like the GPU and CPU. So, if chiplets are so good and we can sort of plug and play them, why not have a board with dedicated slots where another GPU or CPU chiplet can be implanted? Why not be able to upgrade the VRAM chips? Yes, I get that designing it is complicated, but why not? Imagine plugging in a second 3080 chiplet and upgrading your VRAM chips to have a supercomputer on your desk.
This was very interesting Anthony and helped clear up a number of things I wasn't sure about 👍
Suggestion : Video on Semiconductor manufacturing and all different types of companies involved in it.
I thought this would be about the architecture of the x86 designs they each use, but it turned out to be just about the recent way they're each implementing multicore.
The x86 architecture difference is more interesting, in my opinion. They're vastly different strategies, which were last unified with the AMD K6.
@@hjups I'm not sure if the K6 was the last per-core equivalence. The last truly identical cores were the Intel 80486 and AMD Am486. As for other cores, AMD did not fundamentally change the architecture until the K10 (Phenom); Bulldozer (FX) was the first major overhaul.
Intel changed things up a fair bit sooner, with Netburst (Pentium 4). Funnily enough, both Netburst and Bulldozer were ultimately dead ends, worse than their predecessors. Intel brought back the i686 design in the form of first the Pentium M and later Core 2. Core 2 competed against the K8 and K10, which I think share the same lineage as the first microcoded "inner-RISC" CPUs like the K6 and Pentium Pro. AMD instead started over once again, and that brings us to Zen.
What I find interesting is that Zen3/Vermeer and Golden Cove/Alder Lake are very good at trading blows: depending on what you're doing, one can be wildly faster than the other. As far as I can see though, that mostly seems to be caching matters; a Cezanne chip does not have the same strengths as Vermeer, but does have the same weaknesses, as far as I can see.
I'm also curious how far hybrid architectures are going to go. On mobile, they're a massive success, and Alder Lake has proven them to be very useful on desktop as well.
@@scheurkanaal I think you misunderstood my statement. I'm not referring to performance, I'm referring to architecture. Obviously, there are going to be differences that have a substantial effect, even as far as the node in which the processors are fabricated on.
Yes, the last time they were identical in architecture was the 486, however, the K5/K6 and the Pentium Pro/Pentium 2/Pentium 3, were all quite similar internally. AMD then diverged with the K7/K8+ line, while Intel tried Netburst with the Pentium 4. After the failure of Netburst, Intel returned to the Pentium 3 structure and expanded it into Core 2/Nehalem/etc. and have a similar structure to this day. Similarly, AMD maintains a similar structure to the K10, with families like Bulldozer diverging slightly in how multi-core was implemented with shared resources.
Also note that AMD since the K5, and Intel since the original Pentium and the Pentium Pro have used a "RISC" micro-operation based architecture. The original Pentium is the odd one out there though, since it was less apparent due to it being an in-order processor while the others have all been out-of-order.
Hybrid architectures may not really go much further than Alder Lake and Zen 4D. There isn't much room to innovate in the architectural space, where most of the innovation needs to happen at the OS level (how do you schedule the system resources). It's also driven by the software requirements though. Other than that, there may be some innovation in the efficiency cores themselves, to save power even further, but in exchange for lower performance (the wider the gap, the more useful they will be).
@@hjups I was also talking about architecture :) I was just not under the impression K7 was much different from K6, since it did not seem all that different from what Intel was doing circa Pentium 3 (which is like "a P2 with SSE", and the P2 in turn was just a tweaked Pentium Pro), and the numbers also imply a more incremental improvement (although to be fair, K5 and K6 were quite different).
That said, I wouldn't be so sure if Zen and K10 are that similar. As far as I know, Zen was (at least in theory) a clean-sheet design, more-or-less.
I was also referring to micro-operations when I said "inner-RISC". The word "micro-operation" just did not occur to me. Finding something that said whether or not the original Pentium was based on such a design was also quite hard, so I assumed it didn't. It was superscalar, but I think the multi-issue was quite limited in general, which gave me the impression the decoder was like the one on a 486, just wider (for correctly written code).
I don't know how far efficiency cores will go. Their usefulness comes not from a wider gap, but rather from more efficiency (performance per watt). Saving 40% of power but reducing performance by 50% is not very effective. Also, in desktop machines, die size is a very big consideration, not just power, and little cores are useful here. Keep in mind that the E-cores in Alder Lake are significantly souped up compared to earlier Atom designs. That's important to maximize their performance in highly threaded workloads.
I think the next thing that should be looked at is memory and interconnect. CPU's are getting faster, and it's becoming harder and harder to keep them properly fed with enough data.
@@scheurkanaal Maybe we have different definitions of architecture. SSE wouldn't be included in that discussion at all, since it's just a special function unit added to one of the issue ports, similar to 3DNow! (which came before SSE).
The K5 and K6 are much more similar than the K6 and K7... The K5 and K6 even use the same micro-op encoding as I understand it. The K7 diverged from simple operations though into more complex unified operations, that's also when AMD split up the integer and floating point paths. The cache structure changed, the whole front end changed, the length decoding scheme changed, etc.
As for P2 vs Pentium Pro, the number of ports changed, and the front end was improved to include an additional decoder (which has a substantial difference for the front end performance - it negatively impacts it, requiring a new structure). The micro-op encodings may have also changed with the P2 (I believe they still used the Pentium uops in the Pentium Pro which are very similar to the K5 and K6 uops).
Zen may have been designed from the "ground up", but it still maintains the same structure and design philosophy - that's likely for traditional reasons (they couldn't think outside of the box). Although, it does have some significant benefits in terms of design complexity over what Intel does - especially when dealing with the x87 stack (the reason why the K5 and K6 performed so poorly with x87 ops, and why the K7 did much better).
Yeah, I knew what you meant by "inner-RISC". I just used more technical terms. The P1 was touted as two 486's bolted together, but that was an overly simplified explanation meant for marketing people who couldn't tell the difference between a vacuum tube and a transistor. In reality, you're correct, the dual issue was very restricted, since the second pipeline really could only do addition and logical ops, as well as FXCH which was more impactful (again for x87). I would guess that most of the performance improvements came from being able to do CMP with a branch, a load/store and a math op, or two load/stores.
As for specific information about the P1 using uops, you're not going to find that anywhere, because it's not published. But it can be inferred. You would have to look at the instruction latencies and pipeline structure, know that a large portion of the die/effort was spent on "emulating instructions" (via microcode), and have knowledge of how to build something like the Pentium Pro/2/K6. At that point, you would realize that the P1 essentially had two of what AMD called "long decoders" and one "vector decoder", with which it could issue either two "long" instructions or one "vector" instruction. The long decoders were hard-coded though, and unlike the K6/P2, the uops were issued over time rather than area (i.e. the front end could only issue 2 uops per cycle, and many instructions were 3 uops; so if logically they should be A,B,C,D,E,F, the K6 would issue them as [A,B,C,D] then [E,F], but the P1 issues them as [A,C],[B,D],[E,F]).
Yes, power efficiency is proportional to performance. The wider the gap implies more power efficient. But there's also the notion of making the cores smaller too and throwing more at the problem (making them smaller also improves power efficiency with fewer transistors). If the performance is too high though, there's no reason to have the performance cores, which is what I meant by the wide gap being important.
Memory and interconnect are an active area of research. One approach is to reduce the movement of data as much as possible, to the extent of performing the computation in RAM itself (called processing in memory). It's a tricky problem though, because you have to tradeoff flexibility with performance and design complexity (which is usually proportional to area and power usage - effectively energy efficiency).
You can guarantee that Meteor Lake with 'tiles' will be delayed for a year. Intel never gets things out on time, and with a huge change in the way the chip is fabbed, I would expect 3rd quarter 2024 to 1st quarter 2025.
ARC is a perfect example to remind you of what happens when Intel tries new things. Intel Arc's (then Xe) original release date was set for 2020. Sure, it has to be driver issues, right? Intel did not deliver on time and they even delayed the announcement. By the time they release the GPU, AMD and Nvidia will have rolled out their next-gen launches, so Intel had better have something decent in the GPU, or a low, low price. Intel stated they want to sell 4 million ARC GPUs throughout 2022 and that their average GPU price will be around $75.
$75? Yeah they stated that! That leads me to believe it may be crap.
Would have loved to see some background on why Intel was better for so long
In the most simplistic terms, Intel had the bank to crush fair competition, and they had AMD licked on single-core performance for ages. It is only within the last decade that multicore performance really started to become more prominent in the mainstream. AMD went back to the drawing board for their chiplet design and continued multicore performance improvements, which has made them as competitive or more so in recent years. There are tonnes more reasons, but those two stand out most to me.
Imagine how much profits AMD has enjoyed since the first Zen cpu launch due to their greater silicon yield.
When I put together my PC, I went team red simply because I intended to upgrade later, and I knew AMD CPUs have a habit of being chipset backwards compatible with older mobo chipsets. I still haven't upgraded though... (still rocking a 2400G)
I'd like to add with this edit: I went to a 3600 and it's amazing, but I've hit my limit - I need to get a new motherboard if I ever upgrade further.
What is “chipset backwards”
All these tiles and multiple processor cores with high-speed serial (or parallel) communication are an adaptation of the Inmos Transputers from the 1980s. Inmos was just way ahead of its time.
I just wanted to buy a laptop how did I fall into a rabbit hole
0:56 it says Ryzen 5 5600 is Zen 2, you probably meant 3600...
I'd love to see a video on whether it's possible to add your own CCD - if you could get the parts, just adding more cores to your existing CPU using a CPU with an empty CCD position. Might want to get a microscope for that one, and I doubt you could ever do it at home, but it would be interesting to see if it is possible.
I mean, you probably could, but there would be tons of issues.
The chip would not be supported by any motherboard and would need a custom BIOS.
You'd probably have differences between the chips that ones produced together would not have.
It would be insanely easy to mess up.
It might be fused off, which would completely negate doing anything.
I'm pretty sure people have added more VRAM to GPUs and it has worked, but it was very unstable.
@@gamagama69 It seems that if the chip can use the signals used to identify 3900X or 3950X silicon, then maybe you could use the existing in-BIOS signatures for existing Ryzen chips to turn a 3800X into a 3950X, but that would be extremely difficult without nanometer-scale precision tools.
It's pretty much impossible to do by yourself; even if you could afford the needed tooling, you aren't getting the microcode onto the CPU.
One has 3 letters and the other has 5 letters.
Would've been nice to mention that AMD still uses monolithic designs for its laptop chips and APUs. It would have been an interesting aside about the space disadvantages of chiplets. Great video though!
Core Complex? I find it quite simple, actually.
Video Suggestion: How are Programming Languages created?
It's not that complicated. The C compiler is written in C, the Java compiler is written in Java, the Python compiler is written in Python, ...
@@Slada1 The Python interpreter is written in C, and the Java compiler is written in Java/C/C++ ;)
@@Slada1 Yea, but how are those compilers created then?
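The usual answer to that question is bootstrapping: the very first implementation of a language is just an ordinary program written in some existing language (or assembly) that reads source text and gives it meaning; only later can the compiler be rewritten in the language itself. As a toy illustration (entirely my own sketch, not tied to any real compiler), here is a tiny "language" of arithmetic expressions implemented in C with a recursive-descent parser:

```c
/* Toy sketch of how a language implementation starts: a C program that
 * reads source text in a tiny calculator language (+, -, *, / and
 * parentheses) and evaluates it. A real language starts the same way,
 * then gets rewritten in itself once it is powerful enough (bootstrapping). */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static const char *p;            /* cursor into the source text */
static double parse_expr(void);  /* forward declaration */

static void skip_ws(void) { while (isspace((unsigned char)*p)) p++; }

/* factor := number | '(' expr ')' */
static double parse_factor(void) {
    skip_ws();
    if (*p == '(') {
        p++;                         /* consume '(' */
        double v = parse_expr();
        skip_ws();
        if (*p == ')') p++;          /* consume ')' */
        return v;
    }
    return strtod(p, (char **)&p);   /* number literal */
}

/* term := factor (('*' | '/') factor)* */
static double parse_term(void) {
    double v = parse_factor();
    for (;;) {
        skip_ws();
        if (*p == '*')      { p++; v *= parse_factor(); }
        else if (*p == '/') { p++; v /= parse_factor(); }
        else return v;
    }
}

/* expr := term (('+' | '-') term)* */
static double parse_expr(void) {
    double v = parse_term();
    for (;;) {
        skip_ws();
        if (*p == '+')      { p++; v += parse_term(); }
        else if (*p == '-') { p++; v -= parse_term(); }
        else return v;
    }
}

int main(void) {
    const char *program = "1 + 2 * (3 + 4) - 10 / 5";   /* a "program" in the toy language */
    p = program;
    printf("%s = %g\n", program, parse_expr());          /* prints 13 */
    return 0;
}
```

Swap "evaluate the value" for "emit machine code or bytecode" and you have the skeleton of a compiler; that first version in C is what lets a self-hosted compiler exist at all.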
Informative video. Thumbs up.
Can you make a video on the difference between amd and Intel in terms of performance for different uses? Like a quick guide on which to get
I'm sure you can find a video about that already LoL
I'm a simple man. I see an Anthony video and I up-vote.
Glad Anthony is getting lots of screen time. He's great
Nope
@@smilinandlaughin what
He really just made my fatass crave Pringles ☠️ btw do a video on AMD's Smart access memory and the effects of extra cache on the 5800X3D
I decided to try an AMD machine after being with intel for a few generations. The Infinity Fabric was completely unstable and caused any audio playing to be filled with static and blue screening occasionally. I tried for a week with updates, tweaks, tutorials but couldn't stabilise it. I sold the parts and bought intel parts and had no problems at all. I've been building computers for myself since I was twelve years old (20 years), and that AMD machine was the only time I was forced to give up when presented with an issue. I've bounced back and forth between the two, as well as ATI and nVidia over the decades, but that experience really put me off AMD for the moment.
Lol, I've been building for 15 years and always just went with Intel. This time the hype about Zen 4 was so huge I could not resist trying it. I bought a 7900X, had stuttering issues because of fTPM, and had issues with the memory controller. After two days I returned it and bought a 13600K, which has worked perfectly since. I can't trust AMD with my money anymore; the first impression was not good.
Does this mean that Intel is eventually going to break through some wall to a better solution, whereas AMD's Infinity Fabric and chiplet building blocks will eventually become a problem once the limits of the speed of electricity start to matter?
More Anthony!
5600 is zen 2?
I've never really cared about either, lol. I'd just try to build comparable systems and then decide based on overall price, taking into consideration reviews of all the parts around them. I was working on it a bit last night as I'm considering upgrading, and noticed that the i7-12700K outperforms the Ryzen 9 5900X by a decent margin and is cheaper, which was interesting to me, as a step up on either side was a huge price jump for not a big jump in power.
Can you do a video about Phone 4k and 8k recording, and why they can only do 10 min. at a time?
Anthony always leaves me satisfied and smiling
The sponsor segment is a pain... asking for a credit card before you can even look at the product is simply stupid.
One correction here: Meteor Lake doesn't use EMIB, it uses Foveros - basically 3D stacking of silicon. But unlike TSMC/AMD 3D V-Cache, Meteor Lake can be overclocked like normal CPUs.
EMIB sounds like a thing from Rick and Morty 🤣🤣🤣
Can you do a video on x86 vs Arm?
Thanks for your video. I'm happy with my 12600KF; I only do gaming. (I like AMD too.) Best regards.
It still fascinates me, 2 companies, started in the same decade (okay intel was named different back then), and they are still competing against each other as "top dog" as such. Kind of reminds me of 2 brothers constantly trying to 1 up each other.
Same can be said for Microsoft and Apple, with Windows and Mac/iOS
@The Deluxe Gamer They aren't really "competing" as they would be in other countries where there is an actual left/center party rather than the US's two right-wing parties.
If it were really comparable to the US political system, then both Intel and AMD would have to be competing in a single sector of processors, such as workloads or gaming, whereas they actually do both (and they have different features, etc.).
The main difference between an AMD and an Intel CPU is that having the pins on the motherboard side means you gotta keep track of the little socket cover, and you will always have a bit of nagging anxiety anytime you go through your parts bins, because "oh no, what if I knock it on the floor and someone thinks it's a piece of trash and throws it away."
I remember when you could swap an intel CPU for an AMD. How the times have changed.
Oh Socket 7... those were some good times.
@@SpinDlsc
A version of this explaining the difference between Nvidia, AMD, and Intel's GPU architecture would be amazing!
thats a good idea
Intel has gpus?
Edit: no way intel has gpus
@@literallysteel yes they do now
@@rk3senna61 they've had it for a long time, they just typically weren't discrete gpus, just integrated, they're starting to do discrete, but gpus in-of-themselves is not new to them.
@@tuxshake i still prefer intel
love your voice man
Intel also has less cache compared to AMD CPUs, which tends to slow down your computer after a while. For example, I switched from a Ryzen 3700X to an i5-11400 system because I had sold that computer to a friend, and at the time an 11400 system cost much less than Zen 3 systems. And the i5-11400 is supposed to be faster than the 3700X for single-threaded applications, right? Yes, it is faster in games, but after only 9-10 months of use, the web-browsing experience and a couple of applications like OBS got significantly slower, compared to two years of heavy usage on the 3700X. I'm just too lazy to reinstall Windows now, since my job takes too much of my time and leaves no room for backing things up. And for those who might ask: I don't have more programs installed, I'm not using an antivirus, I still have the same SSDs, I'm up to date on drivers, and I don't use browser extensions... And no, the CPU or memory usage isn't high. I also have significantly faster memory on this system with very low timings, and yes, the memory overclock is stable - it has passed MemTest at 1500%, Linpack, Time Spy, y-cruncher, all of that. So, at least as far as I can tell, 11th-gen Intel sucks in this case, which I think is caused by 32 megabytes of L3 cache vs 12 megabytes. Making a YouTube video full-screen in Chrome takes a couple of seconds, for example. I mean, like, wtf...
SSDs slow down over time.
It's not the CPU.
@@zatchbell366 HWiNFO shows 9 TB of total host writes; it's a Samsung 970 EVO Plus 2TB, and it only has Windows and some programs on it - 230 GB used in total - so I don't think that's the case.
@@zatchbell366 Agreed. My Macbook takes ages to read or write, but when something is running in the memory it's as fast as ever.
@@danieljimenez1989 Just reinstalled Windows, all the other programs, and all the Windows updates on the same SSD. Everything is now running flawlessly fast. So apparently it was Windows and software updates bloating the system, which made the CPU or cache no longer able to keep up in some programs. I've got my bookmarks and everything else back in Chrome, and all the "default programs" are still running at startup. I installed the same drivers, since my "install" folder with all my driver setups remains on another drive, and I also still have Steam and my games installed. So it was not the SSD nor anything else - just stupid Windows bloating things up.
@@danieljimenez1989 Thanks pal