$17K Sapphire Rapids Server CPU! Hands-on with the new Intel Xeon

  • Published Aug 1, 2024
  • Intel just launched the $17,000 Intel Xeon Platinum 8490H processor as part of the 4th Gen Intel Xeon Scalable/ Sapphire Rapids line. Check out the new SPR Supermicro X13 series here: • Supermicro Introduces ...
    We get hands-on with benchmarks. We also discuss the SKU stack, the platform technologies like DDR5, PCIe Gen5, CXL 1.1, and acceleration. This is the MEGA Sapphire Rapids guide.
    STH Main Site Article: www.servethehome.com/4th-gen-...
    STH Top 5 Weekly Newsletter: eepurl.com/dryM09
    ----------------------------------------------------------------------
    Become a STH YT Member and Support Us
    ----------------------------------------------------------------------
    Join STH YouTube membership to support the channel: / @servethehomevideo
    STH Merch on Spring: the-sth-merch-shop.myteesprin...
    ----------------------------------------------------------------------
    Where to Find STH
    ----------------------------------------------------------------------
    STH Forums: forums.servethehome.com
    Follow on Twitter: / servethehome
    Follow on LinkedIn: / servethehome-com
    Follow on Facebook: / servethehome
    Follow on Instagram: / servethehome
    ----------------------------------------------------------------------
    Timestamps
    ----------------------------------------------------------------------
    00:00 Introduction
    01:03 A Word on the AMD EPYC 9654 96-core Genoa Part
    02:19 DDR5 Memory Upgrade
    02:54 80x PCIe Gen5 Lanes Per CPU
    03:42 CXL 1.1 Support is different in Intel Xeon vs AMD EPYC
    05:55 Checking out Intel's Mega Data Center Lab
    07:07 Intel Sapphire Rapids Accelerators AMX, DSA, QAT, DLB, IAA
    10:43 Intel Xeon Platinum 8490H 4S and 8S Server Support
    11:53 $415-$17K Intel SKU List and Features
    16:04 Intel On Demand
    19:05 Processor Cores and XCC/ MCC Deep Dive
    21:51 Platform, Memory, and Intel Optane PMem 300 Crow Pass
    23:58 4th Gen Intel Xeon Scalable Sapphire Rapids Performance
    29:42 Intel Sapphire Rapids Acceleration
    34:32 All you need to know about Intel Sapphire Rapids
    ----------------------------------------------------------------------
    Other STH Content Mentioned in this Video
    ----------------------------------------------------------------------
    - AMD EPYC 9004 "Genoa": • AMD EPYC 9004 Genoa Ga...
    - CXL Overview: • CXL in Next-gen Server...
    - Glorious Complexity of Optane DIMMs: • The Glorious Complexit...
    - ASUS Genoa Server: • 384 Thread MEGA Server...
    - Intel QAT Acceleration on Ice Lake-D: • Intel Xeon D's Go-FAST...
    - Future server tech: • This New Server Tech i...
    - Intel QAT Cards in servers: • Intel QuickAssist is a...
    - Arm v. AMD v. Intel in 2022-2023: • More Cores, More Bette...
    - Hands-on Intel Sapphire Rapids Accelerators: www.servethehome.com/hands-on...
  • Science & Technology

COMMENTS • 229

  • @thaddeus2447
    @thaddeus2447 Рік тому +67

    Can't wait to see it for $20 in 10 years on AliExpress

    • @TR2000LT
      @TR2000LT Рік тому +6

      lmao so true

    • @dfsafadsDW
      @dfsafadsDW Рік тому +3

      my thought exactly

    • @lolmao500
      @lolmao500 Рік тому +3

      If only

    • @lastone032085
      @lastone032085 Рік тому +1

      Yeah, I'm totally getting 8 of those for an 8-socket build.

    • @DJ_Dopamine
      @DJ_Dopamine Рік тому +2

      Damn, you beat me to it bro!
      Xeon E5-2696 v3 @ 50 bucks for the win!

  • @johnh1353
    @johnh1353 Рік тому +58

    The compute nodes without the DRAM are going to be very interesting (i.e., only the HBM on the SoC as system memory, no DIMM slots)

    • @DigitalJedi
      @DigitalJedi Рік тому +9

      You could see some crazy 4S or 8S boards with the HBM only setup in a relatively compact form factor. The biggest hurdle would just be what to do with 20-40 5.0 x16 slots.

    • @RazorSkinned86
      @RazorSkinned86 Рік тому +5

      Those DRAM cards, along with both the Intel Xeon HBM SKUs and the AMD EPYC APU SKUs, are going to be a game changer for engineering and scientific computing happening outside a government lab with access to a national supercomputing center. It's legit supercomputing-cluster memory and interconnect features, which help so much with math-heavy simulation workloads, coming to enterprise hardware that an engineering firm or university research center can run in an office rack. Cloud supercomputing offered by vendors like Microsoft Azure or IBM is great, but the datasets you have to transfer to wherever you are crunching the numbers often make cloud providers less than optimal for such workloads, because only the US-DOE and CERN have internet connections, like a 46 terabit per second ESnet6 connection, that can move massive amounts of data offsite in only minutes rather than days.
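
      (A rough back-of-the-envelope sketch of that last point: the 46 Tb/s figure is the one quoted above, while the 1 PB dataset size and the 10 Gb/s "ordinary" uplink are assumptions for illustration only.)

```c
/* Rough transfer-time arithmetic for the point above. The 1 PB dataset size
 * and the 10 Gb/s uplink are assumptions for illustration; the 46 Tb/s figure
 * is the ESnet6 number quoted in the comment. */
#include <stdio.h>

int main(void) {
    const double dataset_bits = 1e15 * 8.0; /* 1 PB expressed in bits */
    const double esnet6_bps   = 46e12;      /* ~46 Tb/s (aggregate ESnet6) */
    const double office_bps   = 10e9;       /* assumed 10 Gb/s uplink */

    printf("ESnet6-class link: ~%.0f minutes\n", dataset_bits / esnet6_bps / 60.0);
    printf("10 Gb/s uplink:    ~%.1f days\n", dataset_bits / office_bps / 86400.0);
    return 0;
}
```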

    • @BobHannent
      @BobHannent Рік тому +1

      I am interested in the H parts with QAT for use as a high speed caching node without DIMMs

    • @AlpineTheHusky
      @AlpineTheHusky Рік тому

      @@DigitalJedi Using them as NVMe JBOD controllers

  • @pingtime
    @pingtime Рік тому +37

    Finally, a year after this platform is deployed in servers worldwide, we're gonna have dirt-cheap 1st/2nd gen Scalable Xeons with bizarre AliExpress motherboard combo options running an H130 chipset (hopefully) 😂

  • @d00dEEE
    @d00dEEE Рік тому +13

    Leave it to Intel to not proofread their price sheet and miss the extra zero in the CPU prices.

  • @StarGuardian8180
    @StarGuardian8180 Рік тому +1

    I've been waiting to see what these can do. :)

  • @pieluver1234
    @pieluver1234 Рік тому +11

    Patrick, I was hoping that you'd give harsher criticism of the new chips. The price is absolutely ridiculous compared to Genoa. 60 vs 64 core comparison doesn't even make sense when I can get 3x 64 cores for the price of one 60 core

    • @merouanebenderradji1582
      @merouanebenderradji1582 Рік тому +2

      The large data centers will order tons of chips, so they will negotiate the price and get a huge discount. No one buys these at MSRP except enthusiasts or guys buying one-offs. I wouldn't be surprised if they can get them at half MSRP, but it goes both ways for AMD and Intel.

    •  Рік тому +1

      @@merouanebenderradji1582 And then again, large data centers are buying AMD CPUs in bulk too, and they can negotiate those prices as well.

  • @BobHannent
    @BobHannent Рік тому +19

    Totally agree about the QAT sentiments. I am helping procure a few thousand cores and AMD stands out for me right now. My workloads are heavily dependent on SSL and network throughput.

    • @mz4637
      @mz4637 Рік тому

      I assemble office furniture. Help me

  • @Alan_Skywalker
    @Alan_Skywalker Рік тому +27

    As someone who had early access, I would say that this thing is basically half-finished. The latency of the L3 cache and memory is way too high while also having a weird pattern. There is an SNC4 option you should try (especially when comparing with EPYCs), which Intel claimed could reduce the latency, but this feature was bugged and dragged the L3 latency up to 60ns when I was testing it. In some tests 4th gen Xeon even loses to 3rd gen, despite the huge core-count advantage. I don't think their architecture has fundamental problems, as the lower-bound latency is really good, as are the loaded latencies, but I don't really think they can fix it before Emerald Rapids and relive the glory of 3rd gen (which, in many cases I tested, can beat EPYCs at half the core count).
    I do not have any doubts about their manufacturing nodes though. They have been producing huge dies en masse for years, and even their desktop dies are getting bigger. They also hold a higher standard for electrical spec and defect rate: if they only wanted their CPUs to run at 2-3GHz like most EPYCs, or if they didn't care that cores could generate FP errors (reference: "Cores that don't count", a paper from Google and FB), their 10nm process would have been ready at least five years ago. It's just that the media love to catch the wind.

    • @JMurph2015
      @JMurph2015 Рік тому +3

      I'm no lithography expert, but as far as I know, even "large" dies of yesteryear (think circa 2018) are just normal or even small dies today. The Nvidia HPC accelerators have been essentially at the reticle limit for a given node for the last 5-7 years, and have just continued to follow/push the reticle size up with each new node.

    • @Alan_Skywalker
      @Alan_Skywalker Рік тому +1

      @@JMurph2015 Yes, but have you noticed that Intel almost always provides full-spec options for their large dies (with the exception of Xeon Phi, which is still more than 95% enabled), while Nvidia GPUs like the A100 had 1/6-1/5 of their SPs chopped off, most likely to improve yield? You can always chop off the defective part, but consistently making a large die that is all good is way harder.
      On the CPU end, AMD desktop CPUs, while having a much simpler architecture (4-wide decode and discrete add and multiply units), can't reach the same frequency heights, even when many of them are already over-binned (check the failure rate and heat dissipation).
      Only when you really look into it can you find that something is off.

    • @Alan_Skywalker
      @Alan_Skywalker Рік тому

      @Brendon Lee O'Connell And that lab in the Middle East absolutely smoked AMD at its career height back in 2007 lol.

    • @JMurph2015
      @JMurph2015 Рік тому +3

      @@Alan_Skywalker the Nvidia V100 had only 4 out of 84 SMs disabled with a die size of 815mm2. The largest monolithic Intel chip I can think of is the Xeon Platinum 8180 which was a 694mm2 monolithic die. Nvidia's follow-on A100 was 824mm2 design with 20/128 disabled on TSMC N7 followed by another 814mm2 H100 with 12/144 SMs disabled on TSMC 4N. You may have a point on them needing to disable cores here and there, but their size more than makes up for it. They have more active die area than Intel ever has. And don't get me started on transistor counts, it's even more of a bloodbath there.

    • @Alan_Skywalker
      @Alan_Skywalker Рік тому +1

      @@JMurph2015 694 and 815mm2 is only a 121mm2 difference, and with the cut-down it's even less. The problem is not even that simple: if a defect happens on a die, it's no longer "perfect"; you can only consider it a defective product and cut it down for repurposing. If you can't consistently make "perfect" dies, you can't let others order from you. But if you have designed the chip as a cut-down version, you can have one or even multiple defects on every die and still use them, no problem. The calculations can be a bit more complex, but I still think a 694mm2 perfect die is harder.
      You do have a point about transistor count though, but I think Intel is just taking a different path, that is, performance and reliability first, density second. It has some advancements TSMC did not have until recently, such as SuperFin, SAQP, COAG (TSMC just added it at N5) and eCu interconnects, but its EUV adoption, which plays a major part in density advancements, only arrived recently. Also, you could encounter transistors that can't run at a high enough frequency; that may not be a problem for GPUs, but for CPUs it's more likely to be a defect.

  • @josephwright5147
    @josephwright5147 Рік тому +1

    Great video, thank you!

  • @FriedrichWinkler
    @FriedrichWinkler Рік тому +2

    Could you get access to an 8S system? I would love to see the motherboard connectivity configuration on one of those.

  • @Gorbachevfield
    @Gorbachevfield Рік тому +25

    Here's the issue: the Intel accelerators might be worth paying a premium for in certain workloads. The problem is that Intel is charging customers a premium three times to access the accelerators. Once when you buy a Xeon Scalable instead of an EPYC, a second time when you pay for a higher-end SKU with the actual silicon, and a third time when you pay your server vendor for a license. The accelerators are Intel's only significant advantage, and they will likely remain niche due to the high platform cost and the many PCIe alternatives. Unless Intel sells for far below MSRP, I simply don't see the value proposition here.

    • @captainobvious9188
      @captainobvious9188 Рік тому +1

      If it is significantly faster for certain things, I wonder how the total cost of ownership would compare to just using a bunch of cores up front, in terms of power usage.

    • @nahimgudfam
      @nahimgudfam Рік тому +1

      @@captainobvious9188 I can't imagine it is much faster; the benefit is low latency more than anything. Having an entire extra CPU core to do it in software is more flexible and sometimes faster, but AMD won't be able to deliver low-latency services the way Sapphire Rapids can. So you need to ask if your services are able to batch requests 100 at a time and deliver responses every 10ms-20ms, or if you need to respond to every single request in less than 1ms if possible. It's a completely different type of requirement. AMD does it cheaper and with less energy, but if your system is part of a large and complex code base, the whole system is going to be less bottlenecked if you have the capacity and will pay for it.

    • @FnkDck
      @FnkDck Рік тому

      Would you justify Intel for a server with 30 clients accessing SAP + SQL?

    • @nahimgudfam
      @nahimgudfam Рік тому +1

      @@FnkDck I would need a lot more info, sweetheart. But yes, if you want to have 30 very productive clients then going with Intel is the best option. If you want to handle 150 and don't need them to be the most productive, then AMD.

    • @tstager1978
      @tstager1978 Рік тому

      @@FnkDck I don't really see Intel performing any better than a comparable AMD chip here.

  • @fbifido2
    @fbifido2 Рік тому +2

    @28:33 so, the EPYC 9554 is better than the Platinum 8490H,
    but is the Platinum 8490H cheaper???

  • @HoboVibingToMusic
    @HoboVibingToMusic Рік тому +24

    17k for a Xeon CPU, christ, I'd prolly just get an Epyc at that point, at least for more of... "Broad range usage" . _.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +16

      Closer to two Genoa

    • @one_step_sideways
      @one_step_sideways Рік тому

      @Brendon Lee O'Connell So much for the "jewish big brain" propaganda

    • @ghostofdre
      @ghostofdre Рік тому +2

      Seems DOA for most use cases, there are edge cases where the accelerators would prove to be an advantage.

    • @AndrewTSq
      @AndrewTSq Рік тому +1

      How expensive is epyc? Cause the threadripper was $9000 here.

  • @seylaw
    @seylaw Рік тому +13

    As a user that got addicted to re-purposing used server hardware for personal desktop usage (great value!), I am keen to know whether there are Sapphire Rapids Xeon SKUs that will work on the Fishhawk Falls HEDT platform. If I remember correctly, Fishhawk Falls uses the monolithic die version. As used prices for widely available server CPUs get better over the years, it would be desirable if the equivalent server SKUs could also be used on HEDT motherboards.

    • @averyoldYoutubeuser
      @averyoldYoutubeuser Рік тому +2

      I got the same addiction as you!!!

    • @Mr.Leeroy
      @Mr.Leeroy Рік тому +2

      As a homelab repurposer I am at a loss what to upgrade to after LGA1151.
      The two SKUs that are interesting in terms of core count vs frequency are the 6434 and 5415+; they are ridiculously expensive now, and probably will stay that way unless they get deployed somewhere in great quantities. And the power requirements of course make no sense, since it would be more beneficial to have a couple of 4c8t nodes with more PCIe slots than one more power-hungry node with density constraints.
      It is frustrating.

    • @uncrunch398
      @uncrunch398 Рік тому +1

      Aside from validatable ECC support, what is the benefit over using a mid- to high-tier current-gen consumer-grade part and supporting platform? I'd wonder if current-gen consumer hardware could compete with, or outperform, enterprise-grade hardware old enough to be cost-comparable, even if most of the virtual memory is swap space instead of DRAM.

    • @seylaw
      @seylaw Рік тому +1

      @@uncrunch398 That's simple and very convincing: a far better price-performance ratio for core-heavy applications. I got the 18-core CPU, a "new" motherboard (new components with used, re-purposed server chipsets) and used ECC RAM for just a fraction of the cost of a new consumer CPU + motherboard + non-ECC RAM combination, and I still get great performance for my needs.

  • @MK-xc9to
    @MK-xc9to Рік тому +11

    The last 5, or now 6, years have been one big innovation cycle, started in 2017 by AMD with Ryzen and chiplet technology; performance and core counts for everything - desktop/HEDT/server - have gone up like crazy. Ryzen and EPYC were a big kick in Intel's butt. The new AMD 7040 laptops will have "Ryzen AI", and the upcoming new EPYC CPUs will have AI acceleration as well; maybe they'll stack FPGA AI chiplets on top of the normal chips like they did with cache chiplets in Milan-X. We are living in interesting times...

  • @mediawow6917
    @mediawow6917 Рік тому +2

    OEMs have been enjoying the torture of asking to have features turned on and off : )
    Intel has been good at having its customers "pay"!

  • @fteoOpty64
    @fteoOpty64 Рік тому

    Oh Patrick, you shaking that $17K chip around in your hand made me very, very nervous! If you were Linus, that chip would already be toast! Great video, TQ.

  • @DavidMohring
    @DavidMohring Рік тому +20

    The Intel Xeon CPU Max 9462 processor (75M cache, 2.70 GHz) with its maximum 64GB of HBM memory would make a magnificent desktop workstation in HBM-only mode.
    Even with memory limited to only 64GB, it would probably outpace any similarly priced workstation on the market.
    Since the release of Apple's M1 systems, I think it is inevitable that Intel, AMD & even IBM will produce high-end workstation CPUs with HBM in competition.

    • @aravindpallippara1577
      @aravindpallippara1577 Рік тому +8

      Yep, MI300 from AMD is almost a full system-on-chip, with CPU, GPU, and HBM

    • @DavidMohring
      @DavidMohring Рік тому

      @@aravindpallippara1577 ua-cam.com/video/_p8k2SQvxuI/v-deo.html

    • @ander1482
      @ander1482 Рік тому

      Let's see when, because I don't think it is coming to Xeon W... We'll probably have to wait 2 more years.

    • @jimatperfromix2759
      @jimatperfromix2759 Рік тому

      David, thanks for your comment, since that plus a little sleuthing on Intel's web site helped me resolve a puzzle that Pat failed to clarify in his otherwise excellent video. There was some confusion between the parts that support 4-way and 8-way servers and those that support 64GB of HBM2 memory. He kept saying it was the H-suffix parts, but that's not the case - the H-suffix is not associated with the Max Series at all, apparently (unless someone can prove me wrong, and if you can, please do so). That confusion was aided by the fact that there is a huge pile of part numbers ending with H, but it's only the five HPC part numbers (lower right on the original Intel chart) that have the HBM memory built in. These are called the Xeon CPU Max Series, are essentially HPC parts, and all contain the 64GB of HBM memory (in 4 silicon pieces surrounding the main silicon). This includes the one you mentioned above, and I'll list them all for the convenience of anyone interested ...
      $12980 9480 XEON Max HPC 56 core
      $11590 9470 XEON Max HPC 52 core
      $9900 9468 XEON Max HPC 48 core
      $8750 9460 XEON Max HPC 40 core
      $7995 9462 XEON Max HPC 32 core
      The parts that Pat also focused on, including the $17000 part in the title, are Platinum server parts that allow 4-way and 8-way servers. These all have the H-suffix, but the mystery is, do they or do they not have any HBM memory built in. It appears not from the Intel documentation thus far. However, confusingly, Intel calls some of these its Data Center GPU Max Series. From one of the videos I did notice that these chips had some extra silicon bits surrounding the main silicon - but these may just be acceleration engines (which, if they're that big, explains why Intel is offering them a la carte). The primary series of these (including the $17000 part) is the 84xxH series ...
      $17000 8490H Max Platinum 60 core
      $13923 8468H Max Platinum 48 core
      $10710 8460H Max Platinum 40 core
      $6540 8454H Max Platinum 32 core
      $4708 8450H Max Platinum 28 core
      $4234 8444H Max Platinum 16 core
      There are also some 83xxH parts available. For example in one of Pat's performance slides it quoted a quad-socket Platinum 8380H server as being a little slower than a dual-socket 8490H server - so one can guess the 83xxH series to be maybe half as fast as the comparable 94xxH series parts. I would think that Intel would eventually want the option of HBM memory on some subset of these cloud server parts, and not just on the five HPC parts.
      If interested in the HPC Max Series with HBM parts, a good project to keep track of is at the Los Alamos supercomputer lab - a new project called Crossroads. Intel had an interesting video on this. Their apps are memory bandwidth limited, and thus far have shown 2-4x speedups using the new parts with faster HBM memory. Part of this speedup might be due to faster AVX512 and AMX, but I gathered most of the speedup came from the faster memory. This makes sense, as prior experiments on AMD EPYC chips with 3D-Vcache have shown similar speedups. Intel bragged besting last-gen EPYCs with 3D-Vcache, but of course the 9004 series EPYCs with added 3D-Vcache are not out yet.

  • @callum2277
    @callum2277 Рік тому +6

    Is there a typo in the title, "A $17L Server CPU!"? I think that should be a K, right?

  • @lurick
    @lurick Рік тому +8

    A 17 liter CPU?!?! xD

  • @VictorMistral
    @VictorMistral Рік тому +4

    Isn't DSA what's normally called "DMA", which has already been present for a long time?
    I mean, I remember looking at memcpy on Haswell, and we could clearly see the point where the copy started to be done by the CPU instead of the offload engine: there would be light load, and when you added a few more threads doing heavy copies, all of a sudden the CPU was heavily used, but RAM wasn't near its max bandwidth yet...
    And I have programmed on MCUs with DMA, where you basically set "copy X bytes from address A to address B" in some MMIO registers to offload it off the main core.
    And I remember seeing conversations about RDMA.
    So it doesn't seem to be something new.

    • @arthurmoore9488
      @arthurmoore9488 Рік тому

      My thoughts as well. At a guess, they updated the DMA to allow significant queuing and "smarts" / prefetching, like how the latest consoles have fancy transfer prioritization for game assets from the drive. If done correctly, and especially taking advantage of HBM as cache, they could keep the cores fed almost all the time.
      Real-life example, repeating the basics for non-coders: caching systems often pull chunks of data, assuming sequential reads. That's good for simple arrays, but we often don't code like that; instead we also have arrays of objects, or of pointers. So, if I'm doing something like totaling a field, I need, say, 4 bytes that are spaced 2048 bytes apart - times 10,000! If this new system is "smart" it can determine that access pattern and prefetch the data. Personally, I don't consider that an "accelerator", so it's more likely to be the thing with video games.
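
      (To make the access pattern in the comment above concrete, here is a minimal C sketch using the figures it quotes - one 4-byte field per 2048-byte record, 10,000 records. It is an illustration only, not tied to any specific DSA API.)

```c
/* Minimal sketch of the strided access pattern described above: summing one
 * 4-byte field out of each 2048-byte record, 10,000 times. Every iteration
 * touches a new cache line, so a naive sequential prefetcher helps little;
 * a stride-aware prefetcher (or an engine that first gathers the fields into
 * a dense buffer) is what keeps the core fed. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define RECORD_SIZE 2048
#define NUM_RECORDS 10000

int main(void) {
    unsigned char *records = calloc(NUM_RECORDS, RECORD_SIZE);
    if (!records) return 1;

    long long total = 0;
    for (size_t i = 0; i < NUM_RECORDS; i++) {
        int32_t field;                                    /* the 4-byte field */
        memcpy(&field, records + i * RECORD_SIZE, sizeof field);
        total += field;
    }

    printf("total = %lld\n", total);
    free(records);
    return 0;
}
```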

  • @coltmarshmallow
    @coltmarshmallow Рік тому +10

    I think it looks like Intel has done a really good job finally getting there with Sapphire Rapids. I'm super excited about their CXL implementation, especially their plug-in NICs. But as a platform engineer working on several projects in quick succession, I won't commit to baking accelerators into my code base recipes: even if I went Intel-server exclusive, I can't guarantee the accelerators will be widely available and adopted. Look at Optane...
    AMD is either lazy or understands that I'm only going to adopt something if I can replicate my functional workload across my infrastructure stack, and that feature enhancement is generational, not prescriptive.

    • @JMurph2015
      @JMurph2015 Рік тому +10

      The thing Intel possibly misunderstands about accelerators is that accelerating random stuff isn't really popular as a CPU feature. Part of the reason CXL exists is to enable a "pluggable" accelerator ecosystem. This is very popular with the hyperscalers (at least) as demonstrated by things like Google's TPUs, Google/YouTube's dedicated video transcoder ASICs, Amazon's Nitro DPU, etc etc. A lot of places have some form of custom silicon add-in card at this point and really just want an efficient, high-performance, general-purpose CPU to run the business logic. Inner-loop type stuff tends to get farmed out to an FPGA or custom ASIC, and networking stuff is increasingly getting on-NIC acceleration. In summary, I think AMD understands that their CPU may not be everything for everyone, and therefore isn't trying to be that, but Intel may be attempting to make up for their lack of general purpose performance by trying to be "the whole stack" for a wide base of customers. We'll see how that goes for them.

    • @aravindpallippara1577
      @aravindpallippara1577 Рік тому +5

      AMD is also going the non-standard silicon route with the AI engine added to the Phoenix range of laptops.
      I wonder how they will enable developers to take advantage of that.

  • @the-perfidious
    @the-perfidious Рік тому +1

    Great review! Lots of cool charts for easy understanding by my simple primordial rodent brain

  • @maxhammick948
    @maxhammick948 Рік тому +3

    I don't think the On Demand part is a terrible business practice - companies buying servers ought to have the expertise to characterise their workloads before buying $10ks of hardware, so they can work out what accelerators they need and will appreciate only needing to pay for those ones. It also gives intel some very solid ground truth on what accelerators are actually getting established in the market (as intel will know exactly which features customers are willing to pay for). The problem with it is that the fastest way to kill off a neat new feature like this is to lock it behind a paywall, so very few will bother developing anything to take advantage of it.

    • @RobinCernyMitSuffix
      @RobinCernyMitSuffix Рік тому

      It's a perfect business practice: sell the part and then keep charging on a monthly basis after you have already sold it. It's already produced, but artificially locked down - the dream for every hyper-capitalist fan.
      It's the biggest drop kick in the balls for the customer, which becomes just a consumer.
      What do you do with second hand parts?
      What do you do about repurposing those parts in 6-8+ years?
      It's crap, no matter how you polish it, it will still be crap.

  • @woodmanvictory
    @woodmanvictory Рік тому +1

    Hmm, waiting to see an MCC end up in a workstation chip

  • @jaffarbh
    @jaffarbh Рік тому +1

    I think Nginx doesn't support QAT out of the box. You need to use a "special" distro by Intel and compile it for QAT to work.

  • @jannegrey593
    @jannegrey593 Рік тому +7

    Okay - you at least included a price, since I didn't get that particular part from Level1Techs. This is interesting, because Intel seems to be able to compete now a bit. And if stuff gets optimized (like QAT) they can get scary. I do wonder what EPYC would be capable of if stuff was optimized to the same level as for Intel stuff - though given their market share that is actually quite rapidly happening.
    I'll say what I said in the other review: I'm still a bit sus on Intel's ability to rapidly advance process nodes with sufficient yields. And even if they can, iterating on that at such a rapid pace is going to be hard. To avoid delays they would have to calibrate the machines perfectly each time. And Genoa-X will be a "response" (albeit not in exactly the same niche) to SPR HBM. Things will get scary with Turin in 2024. AMD has accelerators too, which they are already starting to release, but those will be enterprise parts. I wish Intel the best - so we can have a lot of competition and force down the prices. Especially if this ends up in workstations, it could force AMD to go back to Threadripper in a less "sorry, we can't service you, you're not important enough to get part of our manufacturing capacity" kind of way. Though manufacturing is AMD's problem: Intel can outproduce them, which makes a 30%+ market share almost impossible for AMD. With that share they would actually be able to keep expanding safely and fund constant R&D, something AMD historically didn't have the luxury of doing.
    Also MI300 looks like a beast. IDK if you will ever get it, but I hope you will and check what it's capable of.

    • @jannegrey593
      @jannegrey593 Рік тому +7

      And I would prefer it if Intel was more aggressive on price, to compete. But I guess they all try to be expensive, and the crazy thing is that for most enterprise stuff this isn't even the main cost. Licenses are.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +9

      Pricing strategy and SKUs are extremely important in this generation

    • @jannegrey593
      @jannegrey593 Рік тому +2

      @@ServeTheHomeVideo True. I do wonder if Intel will be able to follow this up with Granite/Emerald in production quantities. I mean, they've been stagnant on a lot of things. They have returned in almost full force (though the lack of certain accelerators in most of the SKUs is annoying), but IDK if they can survive by releasing a great chip every 3-5 years. They will need to iterate on both process and microarchitecture, and I'm not sure if they can do both without significant delays.
      But if they can, they can outproduce AMD into oblivion. Had they priced a bit more aggressively, they might have even made it super difficult for AMD. With these prices I feel like clients will weigh pros and cons more, rather than just "go with Intel". Though you know the market way better than I do, so I trust you when you say something is important.

    • @hermanlau4431
      @hermanlau4431 Рік тому +4

      @@jannegrey593 Not that easy when Intel's stock price has been performing poorly over the last few years; same story on the AMD side. With advanced-node usage lower than ever, they have more chips to sell compared to last year. They could price EPYC cheaper than now thanks to the chiplet design, but they won't; shareholders care about gross margin.

    • @zbigniewmalec4816
      @zbigniewmalec4816 Рік тому +2

      @@jannegrey593 it will be hard for Intel to be price competitive. SPR is bigger and more complicated than ICL. And with ICL Intel already started to lose income due to the aggressive pricing.

  • @danieltrump7081
    @danieltrump7081 Рік тому +3

    I didn't think any AI applications were run on EPYC? Wouldn't the competitors for AI workloads be things like MI250/300, Nvidia, and other ASICs?

  • @kristofferaribal7514
    @kristofferaribal7514 Рік тому +3

    Are they banking on the AI accelerator service? I mean can we have some benchmark of this compared to Nvidia's last gen products?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +4

      The use case is a bit different. NVIDIA benchmarks its maximum throughput on a PCIe accelerator. On-CPU AI inference is more for workloads that are running a database, web front end, and so forth, but then need to do inference on 1% of the workload. Batch size is 1 or near 1 there. Having everything on-chip means that you do not spend to add and power an under utilized PCIe accelerator, and can lower latency by staying on the CPU mesh instead of going off package to PCIe devices. We might do more CPU versus like the NVIDIA T4's that we have, but it would be more like showing maximum transactions under a SLA.

  • @autarchprinceps
    @autarchprinceps Рік тому +8

    The problem with making the accelerators disappear behind a paywall is that it will turn already marginal support for these things into something only done when Intel, or you yourself, codes it for very bespoke applications. 99% of server applications will not benefit at all, and all the costs for Intel are still there, especially in terms of silicon. If they can be turned on at any point, those accelerators can't even have defects, unlike if Intel just had models with them hard disabled or enabled.
    At the point where AI, video, or other acceleration would be useful enough for your application to go that far, you no longer compete against EPYC, but against GPUs and even dedicated special-purpose hardware, which some Intel on-die accelerators will always lose against. Today it is a lot easier to just spin up an AWS Trainium or Inferentia instance, or similar products on other clouds. It's no longer broadly the case that the person who decides what to run on also has to commit to a limited pool of hardware and will therefore just select whatever provides all the features in one box.
    For businesses looking at total cost of ownership, this kind of cost model is less offensive than for us consumers, but it may still be a disadvantage in the end if the buying price and running costs aren't equally reduced. And this kind of usability impairment, which costs a lot of extra man-days, is just a no-go for them.

    • @kwinzman
      @kwinzman Рік тому +2

      Exactly what I wanted to say, but you already wrote it better than I could.

  • @wghardy5577
    @wghardy5577 Рік тому +4

    Can't wait to buy them 7 years later for 400 euro

  • @darreno1450
    @darreno1450 Рік тому +3

    You sold me on the Epyc.

  • @cgwworldministries83
    @cgwworldministries83 Рік тому +1

    I want to game on one of these so badly lol

  • @wmopp9100
    @wmopp9100 Рік тому +3

    Adoption of features requires broad availability.
    Intel did the same madness in the early days of virtualization (some bigger SKUs having fewer features and thus worse performance)

  • @excitedbox5705
    @excitedbox5705 Рік тому +3

    Intel likes to nickel-and-dime customers. They even do things like memory limits that can make a $16k CPU cost $35k - just for using all of your memory lanes.

  • @skyhawk21
    @skyhawk21 Рік тому +1

    Hey Patrick, do high-wattage server CPUs degrade from being used at full load all the time?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +1

      No. That is what they are designed to do. There are cores that eventually fail in very large populations of chips, but those are fairly rare.

  • @Tech215Studios
    @Tech215Studios Рік тому +2

    I can't wait until in 8 years it's 150 bucks on ebay. Or until we go to recycle it at work and i take it home with me!!! HaHa!! Can you say "life cycle 2030?!" LOL!!! SUBBED!!!

  • @axavio
    @axavio Рік тому +5

    $17L? Is that higher than $17K?

  • @efimovv
    @efimovv Рік тому

    Regarding those "after sale enabled" features my opinion is simple: if it in hardware, unlock it. (or maybe we have to ask some help from good guys...)

  • @numlockkilla
    @numlockkilla Рік тому +9

    A 17k processor. Twice the performance of last generation at twice the cost. Staying put with what I have.

    • @jmtake85
      @jmtake85 Рік тому

      $50 after 10 years

  • @riverwolf695
    @riverwolf695 Рік тому +1

    Wow, that makes my two E5-2630 v4s look like calculator CPUs

  • @neurolepticer1284
    @neurolepticer1284 Рік тому +5

    The Epyc 9654 is 70% faster, and costs only €8,972.10

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +4

      That is a wise observation.

    • @jmtake85
      @jmtake85 Рік тому

      Faster means nothing; the accelerators make the price

  • @soldiersvejk2053
    @soldiersvejk2053 Рік тому +4

    I am so excited to see it sold at $200 fifteen years from now.

  • @cuongtang9539
    @cuongtang9539 Рік тому

    So what is better, Xeon or EPYC??

  • @Bot.number.69420
    @Bot.number.69420 Рік тому +3

    So do you get all the accelerators for that $17k?
    Or do you need to pay more?
    What a mess they have made. I feel like just the hassle of that paywalling will increase EPYC sales.

  • @Sirikiller
    @Sirikiller Рік тому +6

    I wonder how the AMD MI300 slots into all this.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +5

      MI300 will be a competitor more to the Xeon Max and GPU Max for now, and then the 2025 Intel Falcon Shores.

  • @stuartlunsford7556
    @stuartlunsford7556 Рік тому +4

    Thanks for the works/support statement lol. ECC isn't supported by AMD, but it works great in my 5900x home server!

  • @ByrnesPCGarage
    @ByrnesPCGarage Рік тому

    My company sells VMs with per-core and per-GB-of-RAM pricing. Will VMs even be able to use those accelerators? Do they all have to share one accelerator per CPU?

    • @DigitalJedi
      @DigitalJedi Рік тому

      There is one accelerator per CPU tile IIRC, so on the 60-core chip that's one for each group of 15 cores.

  • @hailongvan8285
    @hailongvan8285 Рік тому

    What's its gaming performance?

  • @ander1482
    @ander1482 Рік тому +1

    When are the Max SKUs coming for review?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому

      I think Xeon MAX production was scheduled for the end of Dec/ early Jan. So figure ~6-8 weeks to really get them out into the market. My sense is that end of Feb/ or March is the earliest we will get them. The ES/QS MAX chips are in tighter supply so ours will likely come with retail stepping parts.

    • @ander1482
      @ander1482 Рік тому

      @@ServeTheHomeVideo Thanks Patrick. Do you think many common workloads are memory-bottlenecked these days, or just a few niche cases?

  • @chrisbaker8533
    @chrisbaker8533 Рік тому +3

    I just can't get over these monstrous(physical size) cpus.

  • @thesa542
    @thesa542 Рік тому +7

    At 1:45 you said that "most of the cpus that are out there are really only going to be like 32 cores or something like that in this generation..." My impression is that, nowadays, most (server) CPUs are going to hyperscalers who want top-end, high-core-count, sometimes custom skus. More "normal" customers might be fine with smaller core counts, but I am wondering if continuing to lag behind here is going to accelerate the adoption of AMD (and ARM) among some of Intel's biggest customers.

    • @jamestewell8368
      @jamestewell8368 Рік тому

      Per-core performance is valued for many applications, which these less dense chips excel at. I'm thinking things with a per-core licensing model, like Microsoft SQL Server, are where 32-core chips are still big. I also wonder what the market adoption of 96-core CPUs will be; seems like a lot of potential for virtualization and energy-efficiency-conscious workloads.

    • @flintstone1409
      @flintstone1409 Рік тому

      I don't think that hyperscalers are the majority. In most companies you use smaller servers, and especially if you have to license Windows Server (for example for an AD domain) you really want smaller core counts with higher single-core performance.

    • @JMurph2015
      @JMurph2015 Рік тому +2

      @@flintstone1409 hyperscalers buy a truly mind boggling number of servers. As of 2017, it was speculated that Google operated _ten million servers_. They spent $30B on their data centers in that time period. AWS is likely in the same ballpark if not bigger. Azure is probably smaller since they are smaller than AWS or Google's fleet, but still it's sure to be extremely impressive. If I had to guess, something like 25-30% of all datacenter CPUs are bought by hyperscalers in a given year and those are almost all very density and efficiency optimized SKUs (high core count, modest clockspeeds, near top of the binning pile). They can afford them because they can make those CPUs work for them 24x7.

    • @prashanthb6521
      @prashanthb6521 Рік тому

      If you have too many cores, the memory channels will become a bottleneck. So too many cores doesn't make sense.

    • @JMurph2015
      @JMurph2015 Рік тому +2

      @@prashanthb6521 that's why the high end CPUs have more memory channels...

  • @Pressbutan
    @Pressbutan Рік тому +2

    Good lord. And I thought 8380 was expensive at $10k per 🌚

  • @skaltura
    @skaltura Рік тому +4

    Btw, if you need QAT you can buy an expansion card for that; shouldn't those work just as well on an AMD platform as on Intel? Those are "only" $300 each on Fleabay. Ofc enterprise wants new, but with AMD being so much cheaper I think the difference is covered :)

    • @skaltura
      @skaltura Рік тому +2

      Wow! They exist in a U.2 form factor too... This could be huge!
      Imagine all kinds of accelerators in the abundant 2.5" U.2 bays :D

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +1

      The other part to keep in mind, is that 4x QAT is 800Gbps. A single PCIe Gen5 x16 slot can handle 400Gbps. The cards you find on ebay are usually 100Gbps and slower, so that means taking 8x cards that are running PCIe Gen3 x16 (since even the 8970 is a Lewisburg PCH on a card.) That would mean you need 128 PCIe lanes (8x 8970's) for the 800Gbps QAT throughput of the on-package acceleration. Then you would want 2x 400GbE NICs so you would need another 2x PCIe Gen5 x16 lanes for those. Genoa runs out of PCIe lanes for QAT cards, so that is why I suggested that DPUs would be the answer.
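
      (To make the lane arithmetic in that reply explicit, here is a tiny sketch that just redoes the math with the figures quoted above - 100 Gbps per add-in card in a Gen3 x16 slot versus 800 Gbps on-package, plus two 400GbE NICs. It is bookkeeping only, not a sizing tool.)

```c
/* Redoing the PCIe lane arithmetic from the reply above: ~100 Gbps QAT add-in
 * cards in Gen3 x16 slots to match ~800 Gbps of on-package QAT, plus two
 * 400GbE NICs in Gen5 x16 slots. Figures come from the comment thread. */
#include <stdio.h>

int main(void) {
    const int target_qat_gbps = 800; /* 4x on-package QAT engines */
    const int card_qat_gbps   = 100; /* typical older QAT add-in card */
    const int lanes_per_card  = 16;  /* each card is PCIe Gen3 x16 */
    const int nics            = 2;   /* 2x 400GbE */
    const int lanes_per_nic   = 16;  /* PCIe Gen5 x16 each */

    int cards     = (target_qat_gbps + card_qat_gbps - 1) / card_qat_gbps;
    int qat_lanes = cards * lanes_per_card;
    int total     = qat_lanes + nics * lanes_per_nic;

    printf("QAT cards: %d -> %d lanes\n", cards, qat_lanes); /* 8 -> 128 */
    printf("Total lanes with NICs: %d\n", total);            /* 160, > 128 per Genoa socket */
    return 0;
}
```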

  • @Capeau
    @Capeau Рік тому

    Finally a fair review; so many reviewers on YouTube seem to have AMD stock, it's actually kinda sad

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 Рік тому +1

    They tried to find a niche market and make an accelerator for it, so they can ask jackpot prices; it's a good product if you are in that niche. Most companies want general performance that is flexible, worth it on price/performance, and keeps power usage within certain boundaries. Intel is clearly going to get hurt even more in the server market.

  • @tstager1978
    @tstager1978 Рік тому +1

    Intel will always be Intel. Used to have to pay a premium to use onboard RAID. Subscription hardware will make these even less competitive.

  • @mzamroni
    @mzamroni Рік тому +2

    Having so many variants shows that Intel's DUV "fake 7" process has low yield for such a large Xeon chip.
    Intel has to create many bin levels to get revenue from the many imperfect chips (which can't pass the top-bin quality test)

  • @EyesOfByes
    @EyesOfByes Рік тому

    33:20 Even Nvidia has realised that having the accelerators on all SKUs is good for them financially. NVENC is not exclusive to the 3090

  • @FinlayDaG33k
    @FinlayDaG33k Рік тому +3

    $17L? How many zeroes is that supposed to be?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +3

      I am barely making it as you can see from the low energy in the video. 10,000 word main site review with lots of charts.

  • @jbianco2112
    @jbianco2112 Рік тому +1

    You should not use MariaDB but Percona MySQL or Percona XtraDB if clustering databases

  • @lolmao500
    @lolmao500 Рік тому

    It's crazy how server CPUs have such low clock speeds.

  • @2xKTfc
    @2xKTfc Рік тому +1

    A whole tray of $17k CPUs, and not hanging from the ceiling? Did Intel put a gag-order on your inner child? 😂

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +1

      Ha! Those were hung with F34 trussing and steel cables! This one has a "hope it stays like that" Supermicro server on the table behind me

  • @JeffMcJunkin
    @JeffMcJunkin Рік тому +14

    Typo in title: "$17L" should be "$17K". Thank you for being awesome!

  • @movax20h
    @movax20h Рік тому

    Accelerators are the way to go. IBM and many others were doing this for a decade, and now as we are reaching the limits of silicon and density, we need to offload these relatively easy but mundane tasks to separate blocks and leave the CPU cores to do what they are good at: generic, versatile code. DSA is just DMA on steroids with easier use. AMX (a pseudo-accelerator, but essentially with all the efficiency gains of an accelerator) is also nice. The lack of QAT and IAA on low-end SKUs is really problematic tho, as it will hamper adoption a lot. A lot of developers would love to have this on smaller CPUs, dev servers or workstations, but now they cannot play with it easily unless they spend 10k on a test system. For big companies it is not an issue, but for smaller ones it is a problem.
    QAT would be great, but at the same time this is often something that is better accelerated on a NIC instead (and many NICs have already done that for years).
    I hope AMD has a good response in the works. They might have been good with density and IPC improvements, and cost reduction, but acceleration is the next game.

  • @Abu_Shawarib
    @Abu_Shawarib Рік тому +2

    The reality with accelerators is that there are too many workloads to optimize for and only a few things in common (like crypto and vector/matrix). At some point it's better to have dedicated (external) accelerators or CPU-SKU-dependent accelerators than to ship every CPU with accelerators. I don't think it's a good strategy in the long run, especially with the prices they are asking and the competition from AMD.

  • @cromefire_
    @cromefire_ Рік тому +3

    I just hope Intel On Demand stays in server-land, where it has a place; I don't believe putting it on consumer chips will make things better.

    • @teamtechworked8217
      @teamtechworked8217 Рік тому

      It's only a matter of time before it comes to consumers. Look at AVX-512 on 12th gen: it is physically there, but they shut it off so you can't use it.

    • @cromefire_
      @cromefire_ Рік тому +2

      @@teamtechworked8217 Yeah but I think they did that because it's almost exclusively used for professional stuff and they want that to be on Xeon. And it only benefitted very few people because you had to turn off E-Cores...

    • @RobinCernyMitSuffix
      @RobinCernyMitSuffix Рік тому

      @@teamtechworked8217 They already tried it in the past (about 12 years or so ago).
      Sadly they will try the "CPU as a subscription" thing again.

  • @nexonnera.k.a.8796
    @nexonnera.k.a.8796 Рік тому +2

    Probably in a few years it will be possible to unlock the accelerators with some hacks

  • @theodanielwollff
    @theodanielwollff Рік тому

    Need to delid the CPU to get 5GHz all-core. Let's go!

  • @skaltura
    @skaltura Рік тому +2

    So ... at $17k per CPU, Intel is now able to match the performance of last-gen EPYC? :) Well, at least that is progress

  • @ferdievanschalkwyk1669
    @ferdievanschalkwyk1669 Рік тому +1

    Regarding the accelerators: if it's physically in the product, it should be enabled. Anything else is just a cash grab. There are no benefits to the consumer in this business model. It does not make the product cheaper. Its primary purpose is to catch people out and fleece them for more money once they already have the product and can't return it.

  • @gearboxworks
    @gearboxworks Рік тому +7

    I wonder how long it will take for that $17,000 CPU to show up on eBay for sale at $100? 7 years? More?
    Along those lines, I wonder if Intel would be willing to consider unlocking their CPUs after some amount of time, like 7 years? That would help with keeping them out of the landfill and upcycling them for new uses. #amIjustdreaming?

    • @jfbeam
      @jfbeam Рік тому +1

      Maybe a decade? They'll have to be pretty useless tech to be that cheap. (and the rest of the hardware to use them, too.) As for unlocking... never going to happen. Problem #1, how does the processor know it's 7yo? Intel is not going to want to bother with any code generator to mess with an old processor line. Plus, you'd likely need a BIOS update to hold the new "unlocked" microcode, just to get to an OS that could load anything newer. Intel is in the business to sell new hardware, not make their old stuff last longer.

    • @gearboxworks
      @gearboxworks Рік тому

      One can dream?

  • @robster3323
    @robster3323 Рік тому +6

    Hey Patrick, is there a list of prices for the accelerators out there yet? How much can you add on to that $17K CPU with all accelerators activated? Is there going to be a fee per activation? IBM has played this game for years on their Power platform, and I personally hated it. If you are not careful you can spend more on activation event costs than on the features themselves.

    • @nexonnera.k.a.8796
      @nexonnera.k.a.8796 Рік тому +2

      In this $17k CPU there is no Intel On Demand, because all the accelerators are already turned on

  • @oscarcharliezulu
    @oscarcharliezulu Рік тому +2

    Wow, Intel lost me with their paywall-disabled features. I know it's common in the industry, but you know, it seems sneaky and underhanded.

  • @Capeau
    @Capeau Рік тому

    Also, the acceleration is there on the CPU, which you have already paid for, but to make it work you have to pay a subscription?
    Doesn't make sense to me and feels like a rip-off...

  • @2xKTfc
    @2xKTfc Рік тому

    Everyone getting worked up about unlockable features misses the point. Companies buying these CPUs know ahead of time that the features are locked, what they cost, and price out the entire platform including all unlock fees. If it's cheaper than alternative offers that meet your requirements you buy it. At the end of the day, for the customer, it does not matter if you pay for a CPU or for a CPU and unlock fees - it's the exact same amount, because you priced it out before and the unlockable offer came out best. It's a form of price discrimination that gets more money out of the customers, yes. But the reason it works is that it's designed to still be (marginally) cheaper for the customers, so it makes sense to go with it.

    • @seylaw
      @seylaw Рік тому

      I still want to see fire sales, 30-day trials, or give-aways for these accelerators, maybe even a form of compensation when the next security flaw eats away at performance per watt.

  • @El_Croc
    @El_Croc Рік тому +4

    A paywall for chip functions on a CPU you already bought is like buying a car where the included turbo or the heater only works if you pay more fees. How will this dumb idea affect the secondhand market? Intel now truly deserves to lose the race - go AMD go!

  • @joemarais7683
    @joemarais7683 Рік тому

    This would have been a great product years ago.

  • @hgbugalou
    @hgbugalou Рік тому

    Quite honestly, I think all DRAM will be on the CPU in another decade or so.

  • @mikebruzzone9570
    @mikebruzzone9570 Рік тому

    $17K Intel suggested to the high volume OEM / 2. Patrick r u worth a 100% margin gain as VAR SI design build install and qualify consultancy subject design build contract? That's what it takes to earn x2 over the Intel price. mb

    • @mikebruzzone9570
      @mikebruzzone9570 Рік тому

      or u learning on setting up the table we need to talk so we don't duplicate data Patrick u need to add data not duplicate my data. I have all the filters for taking the data and getting rid of primary research duplication of every query. mb

    • @mikebruzzone9570
      @mikebruzzone9570 Рік тому

      Did you just say, Patrick, that Intel is attempting to get rid of certain peripheral subsystem developers from the launch and rollout? mb

    • @mikebruzzone9570
      @mikebruzzone9570 Рік тому

      Right I've been after Intel for enterprise, generally business compute so its mid core count, got it. mb

    • @mikebruzzone9570
      @mikebruzzone9570 Рік тому

      Interesting quandary whether to use or not use any of the 4 acceleration blocks. And Intel will decide in the future on that added block revenue via customer test, vote, qualification. I agree they are there, just turn them on. mb

  • @hgbugalou
    @hgbugalou Рік тому

    I really don't like Optane dying, as the architecture is better than NAND in some scenarios.

  • @johnkost2514
    @johnkost2514 Рік тому

    Binning silicon and maximizing the SKU for revenues..

  • @stuartlunsford7556
    @stuartlunsford7556 Рік тому +4

    It's literally been years since I've been excited by Intel server. These chips are HOT! Now we get to see if AMD can actually compete in hardware acceleration, not just cache and scaling.

  • @lolmao500
    @lolmao500 Рік тому +1

    Where's the Genoa-X CPU...

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому

      My guess is that we will be reviewing Genoa-X and Bergamo at the same time in the summer but we do not have either yet.

  • @ellenorbjornsdottir1166
    @ellenorbjornsdottir1166 Рік тому

    The base clocks are so damn low.

  • @satria4195
    @satria4195 Рік тому

    A single Genoa supports 24 DIMMs, while a single Sapphire Rapids supports 16 DIMMs

  • @Phil-D83
    @Phil-D83 Рік тому +3

    Price is ridiculous, but ok

  • @ewenchan1239
    @ewenchan1239 Рік тому +8

    Without the accelerators, AMD is just KILLING it these days.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +9

      Wise observation. One area AMD needs to fill in though is sub 200W TDP CPUs

    • @ewenchan1239
      @ewenchan1239 Рік тому +1

      @@ServeTheHomeVideo
      AMD has the EPYC 9124, 9224, 9254, and the 9334, which are all between 200-220 W TDP parts, ranging from 16-32 cores.
      (Source: en.wikipedia.org/wiki/Epyc#Fourth_generation_Epyc_(Genoa))
      I would think or surmise that if you want fewer cores and lower power consumption than that, there's now the AMD Ryzen non-X lineup that might be able to help fill out that space, but nothing from the Genoa/EPYC lineup, unfortunately.

  • @news_IT-my1610
    @news_IT-my1610 Рік тому +1

    Wow crazy price.. 😅

  • @Haskellerz
    @Haskellerz Рік тому +4

    Intel got destroyed in every benchmark other than OneDNN

  • @Doesntcompute2k
    @Doesntcompute2k Рік тому +1

    Tell me your video is sponsored by Intel without telling me....

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +5

      Intel did send CPUs, but we used Supermicro and another vendor's systems (that launch next week). I actually had a call last Friday to explicitly confirm that this video was not being sponsored by Intel. So I can confirm it is not sponsored by Intel.

  • @jannegrey593
    @jannegrey593 Рік тому +1

    To answer your question: first of all, I think that licensors are going to start to charge per accelerator as well. That would make Intel happy, because it would justify them selling parts of the chip that you already physically have.
    What do I think Intel should have done to appease both customers and their corporate interests? Activate all the accelerators, but perhaps only 1 or 2 per CPU. That means that if you do something from time to time, or just 1 client on your virtual machine does, you don't get penalized. But if you want full power, you have to pay extra. That at least wouldn't feel as much like a douche move. Because despite OEMs doing this stuff for years, when I heard it I thought "that's such an Intel move", and it wasn't a positive feeling. Though if they were smart about it, they would have just allowed all acceleration, driven AMD out (well, maybe not, but it would make it significantly harder for AMD), and once people got completely "addicted" to those accelerators, started renting them out. Or selling them. Not that I'm particularly happy about that.
    But that's one of the reasons why I think licenses will also cover the cost of acceleration. ATM you can cheat the system and buy, say, a 16-core part that behaves better in certain workloads than a 64-core part without acceleration. And licensors don't like this; they want to maximize profit. So does Intel, so I think it will mutually help them that this is locked behind a paywall, because they can shift blame onto each other while it spreads and thins out and you start to pay. I was in politics. I know how to make people do things that are bad for them and make them happy about doing these self-harmful things.

  • @SodomEndGomorra
    @SodomEndGomorra Рік тому +1

    I'm sure you can't smash it with a hammer

  • @jbianco2112
    @jbianco2112 Рік тому +5

    A base clock of 1.7GHz? What is this, a Raspberry Pi? The base CPU speed, and even the Turbo Boost, is not fast at all

  • @mz4637
    @mz4637 Рік тому +1

    ur krazy

  • @Peteryzhang
    @Peteryzhang Рік тому +2

    If Intel builds something onto their chips and doesn't turn it on, that just sounds stupid.

  • @BogusAmogusss
    @BogusAmogusss Рік тому +2

    $17L XD

  • @kevin666b
    @kevin666b Рік тому

    "weird flex but ok"

  • @shephusted2714
    @shephusted2714 Рік тому +2

    ARM should steamroll both Intel and AMD - for consumers' sake - competition will benefit everyone ultimately

  • @richardahlquist5839
    @richardahlquist5839 Рік тому +4

    I hate the idea that ALL Intel business customers are subsidizing hardware that is going out the door not turned on.
    If it were your car, say an electric vehicle, and there were 30% more battery cells in the car that were not active - a definite hardware cost, just like building silicon with advanced capabilities - you could license turning those cells on later, just like the features on these CPUs.
    They are going to recoup the actual cost of that hardware at sale time, make no mistake; they won't sell the car at a loss any more than Intel will sell these chips to businesses at a loss. You are paying 100% of the hardware cost even if it's not activated.
    Then they are going to rent the function to you.
    Toxic-level capitalism.
    Eventually we will have limits on compute ability if we go down this path. If you combine that with today's total loss of privacy, you will have companies working to burn through your stored compute credits by pushing garbage at you online, so that their partner can charge you for more.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  Рік тому +4

      I think that you are right that there is some cross-subsidy on the gross margin side. The equation may be a bit more complex on the On Demand side. In the example, Lenovo needs to sell the Intel enablement, and so both companies will need to generate revenue and margin from that event.