IBM Power10: A Glimpse Into the Future of Servers

  • Published Jan 30, 2025
  • Check out the main site article: www.servetheho...
    In this article, we discuss the IBM Power10 architecture and some of the new and unique features of IBM's Q4 2021 Power platform. Not only do we discuss some of the cool features of the IBM platform, but we also discuss what they mean for the future of Intel Xeon (Sapphire Rapids), AMD EPYC (Genoa), and future Arm processors such as Ampere's next-gen and Marvell ThunderX4.
    We are going to update the main site article during Hot Chips 32 (2020) with additional microarchitectural details. Instead of going through them in this article, we wanted to focus on what the IBM Power10 can tell us about the future of servers and the pain points IBM is hearing its customers ask the company to address.

COMMENTS • 144

  • @TheJonathanc82
    @TheJonathanc82 1 year ago +3

    I have worked on Power systems since the POWER5 generation. They are amazing machines.

  • @brettryan3298
    @brettryan3298 4 years ago +4

    Amazing stuff here. In the 90's I used to be subscribed to the IBM Systems Journal and saw things that were 10 years into the future.

  • @teadott
    @teadott 4 years ago +3

    Man, thank goodness for you. I don't think a lot of companies would be able to get this info across.

  • @АбракадабраКобра259
    @АбракадабраКобра259 3 years ago +3

    I'm watching this video almost a year since you've uploaded it, so you're right, maybe a year after I watched this.

  • @esra_erimez
    @esra_erimez 4 years ago +21

    The importance of the architectural innovations cannot be overstated. However, it has been a long time coming. I thought we'd see these types of memory architectures with Memristor/3D XPoint.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago +5

      In theory, you could make an OMI controller for those new memory types. My sense is that is the near-term path to higher memory capacity.

  • @JeremyBowling
    @JeremyBowling 11 months ago +2

    Installing a Power 10 tomorrow.

  • @virtualinfinity6280
    @virtualinfinity6280 4 years ago +44

    First of all, you have to respect that IBM stubbornly maintains its POWER architecture while almost all good RISC architectures are long gone and forgotten. And quite frankly, I admire IBM for that.
    However, all these new technologies are designed to solve problems that IBM's customer base has, while >90% of the overall user base does not.
    All the "new" IBM technologies, like PowerAXON (remote memory access), OMI (the "new" memory bus) and CAPI, are not so new. Let's take them one by one:
    CAPI: Having a (cache-coherent) interface to coprocessors is a Good Thing(TM). It's actually such a good idea that it has been implemented multiple times. Call it HyperTransport (AMD), Omni-Path (Intel) or CAPI (IBM/Power consortium); they are all essentially suited for coherent coprocessor attachment. The reason they all fell back to being CPU-to-CPU interconnects used solely by their inventors is that there is no standard. Companies like Nvidia will not build a HyperTransport, an Omni-Path AND a CAPI version of their chip. It's not scalable in economic terms. There have been multiple attempts to create a standard; all failed.
    Well, not *really* all. PCIe has extensions for exactly this use case. And nowadays, on all modern CPUs, PCIe controllers are actually integrated into the CPU and can directly read/write the CPU's caches. That works so well today that AMD is actually using PCIe as a CPU interconnect on EPYC, both package-internal (between CCXs) and package-to-package in a two-socket config. So instead of developing its own interconnect standard, AMD is actually using PCIe exactly the way it was intended to be used from day one, back when it was still called 3GIO or HSI. Nowadays, there is no need for CAPI anymore.
    OMI: It is clear that parallel differential signalling is coming to an end. With DDR, you simply need an enormous number of CPU pins if you want enough parallel channels. EPYC already has >4000 pins, which include 8 DDR4 channels. Having more than 8 memory channels using DDR is next to impossible. Routing all the signals of an 8-channel memory bus across your PCB at 1.6 GHz (DDR4-3200) is already a nightmare; it is next to impossible for 12 or even 16 channels. If you switch to serial interfaces with ultra-high speeds, you need a controller on both ends, the CPU and the memory module. This not only increases the cost of memory modules, adds complexity and sends your power envelope through the roof; it also adds latency. A lot of latency. This is why it hasn't already been done (although Rambus actually tried something similar back in the day). To be able to accept such a large latency penalty, you have to add insanely big caches to the CPU. Which is what we will see in the next 4 years, when CPU makers add HBM-like memory stacks on their CPU dies. Then the world will be ready for OMI. However, this will need to be a cheap or free standard, as all memory makers will have to support it. Remember why Rambus failed? Go look it up. "Cheap or free" are two words that do not go along well with "IBM".
    PowerAXON: Having uniformly addressable memory across a cluster of independent systems is a good idea. It is actually so good that it is already there. Sort of. The interconnect is there, but contemporary CPUs do not support the addressing model. The interconnect is InfiniBand; it was designed exactly for this (reference: InfiniBand fabric). It's just a fact that nobody uses it that way. RDMA is actually a special case, as it describes remote-memory access by a device which is not a CPU. Add RDMA capability to your CPU and voila: remote memory access. The problem, again, is latency. You stated a "50-100 ns" penalty, which would be quite good. InfiniBand EDR is at 500 ns, but this includes two transceivers, which, to my knowledge, is not the case with PowerAXON. InfiniBand still struggles with adoption, as Intel stubbornly refuses to support it. To cope with the latency penalty, applications would need to be rewritten to account for it when accessing remote memory. That would require some sort of memory hierarchy. However, applications have never been, and most likely will never be, rewritten that way. Instead, applications are rewritten to scale horizontally by distributing chunks of workload to distinct systems. I doubt PowerAXON will be successful in the non-IBM world where so many before it have failed.
    All of the above technologies, however, are suitable for solving IBM's problems, mainly in its mainframe business. IBM has total control there: OS, software, system and memory architecture, peripherals and the like. All these technologies enable IBM to build insanely big and powerful mainframes, and to earn a fortune upgrading its mainframe customer base. Which is very large.
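
The cache-versus-serial-link trade-off described in that comment can be sanity-checked with a toy average-memory-access-time (AMAT) model. All latencies and hit rates below are made-up illustrative numbers, not measured OMI or DDR figures:

```python
# Back-of-the-envelope AMAT (average memory access time) sketch.
# All latencies and hit rates here are illustrative assumptions.

def amat(cache_hit_rate: float, cache_ns: float, mem_ns: float) -> float:
    """Average access time: hits served by cache, misses go to memory."""
    return cache_hit_rate * cache_ns + (1.0 - cache_hit_rate) * mem_ns

DDR_NS = 100.0            # assumed local DDR load-to-use latency
SERDES_PENALTY_NS = 40.0  # assumed extra latency of a serial memory link

# Same 95% hit rate: the serial link only costs the miss-weighted penalty.
local = amat(0.95, 10.0, DDR_NS)
serial = amat(0.95, 10.0, DDR_NS + SERDES_PENALTY_NS)
print(f"local DDR: {local:.1f} ns, serial link: {serial:.1f} ns")

# A bigger on-package cache (higher hit rate) can hide the serial penalty.
bigger_cache = amat(0.98, 10.0, DDR_NS + SERDES_PENALTY_NS)
print(f"serial link + bigger cache: {bigger_cache:.1f} ns")
```

Under these assumed numbers, a modest rise in hit rate from a bigger on-package cache more than cancels the serial-link penalty, which is exactly the HBM-stacks-as-cache future the comment predicts.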

    • @Speak_Out_and_Remove_All_Doubt
      @Speak_Out_and_Remove_All_Doubt 4 years ago +4

      RISC is dead? Isn't ARM based around a form of RISC?

    • @virtualinfinity6280
      @virtualinfinity6280 4 years ago +2

      @@Speak_Out_and_Remove_All_Doubt Which is why I wrote "ALMOST" all good RISC architectures....

    • @Speak_Out_and_Remove_All_Doubt
      @Speak_Out_and_Remove_All_Doubt 4 years ago

      AMD uses PCIe? I thought it developed its own Infinity Fabric?

    • @Speak_Out_and_Remove_All_Doubt
      @Speak_Out_and_Remove_All_Doubt 4 years ago +1

      @@virtualinfinity6280 I'm not looking to argue but isn't that like saying almost everyone has given up on x86 because there's only really Intel and AMD left?

    • @virtualinfinity6280
      @virtualinfinity6280 4 years ago +4

      @@Speak_Out_and_Remove_All_Doubt That is why I wrote "RISC ARCHITECTURES" and not "RISC companies". Almost all RISC architectures are dead. x86 is not. It is actually predominant. Regardless of the number of companies producing x86 chips.

  • @rubenb9432
    @rubenb9432 3 years ago +1

    Patrick is so engaging and enthusiastic and I always learn something new. Thanks

  • @Speak_Out_and_Remove_All_Doubt
    @Speak_Out_and_Remove_All_Doubt 4 years ago +16

    Obviously x86 is pretty different but how different is PowerPC vs RISC-V vs ARM in terms of architecture?

    • @kungfujesus06
      @kungfujesus06 4 years ago +6

      They are both load/store RISC ISAs that are open. For the nitty-gritty details you'd have to read the entire ISA. RISC-V is not exactly complete yet in all aspects (SIMD, for example). POWER is pretty mature and has existed for decades.

    • @alirobe
      @alirobe 4 years ago +2

      And it's now an open design like RISC-V, which is pretty cool, prob v attractive at hyperscale.

    • @atisbasak
      @atisbasak 3 years ago

      ARM is also pretty old. It came out originally in 1992.

  • @thedanyesful
    @thedanyesful 1 year ago

    I've watched quite a few of your videos and I enjoyed this one the most.

  • @transposestudios
    @transposestudios 1 year ago

    The company I work for (printing industry) has been using the Power series for what seems like decades now. Upgrading to the Power10 soon.

  • @rikpt
    @rikpt 4 years ago +1

    Awesome content and super to the point delivery ;) well done!

  • @duanecook4227
    @duanecook4227 4 years ago +2

    It is not the differential signals of DDR that take die area. OMI uses mostly differential signals, hence they are called DDIMM. DDR only uses differential signals for clock and data strobe. The issue is the massively parallel multi-drop bus, timing training and protocol logic of talking directly with the DRAM. It will be interesting to see if OMI takes off more than FB-DIMM did.

    • @lawrencemanning
      @lawrencemanning 4 years ago

      Also, how big are the local parallel caches going to have to be? Aren't we going to end up back where we are now?

  • @robertamarch122
    @robertamarch122 3 years ago +2

    Such a good presentation (also quality video)!
    OMI is exciting and we need this! This could mean that moving to fiber interconnects will be made even easier. POWER10 sounds like a game changer!

  • @stonent
    @stonent 4 years ago +1

    Power is used in the Power Systems line (pSeries and iSeries) and in their zSeries mainframes. Though in the mainframe space, you're not officially running Power code at the OS level. IBM loads microcode onto the Power CPU that makes it present itself as a mainframe CPU and is backwards compatible with the instruction set back to the 1960s S/360 mainframes. (Like on-chip emulation.)

    • @henrikwannheden7114
      @henrikwannheden7114 4 years ago +2

      POWER is not used in zArchitecture (it was like 15 years ago that they were called iSeries, pSeries and zSeries..). zArch is using its own CISC based processors, that share some functional units with POWER (like its decimal floating point unit, and previously GX bus) but not much else. Z15 is the latest.

    • @ernestoditerribile
      @ernestoditerribile 2 years ago

      @@henrikwannheden7114 Nope, Z16 is.

  • @alexfolsom3910
    @alexfolsom3910 4 years ago +1

    Solid video on the IBM Power10.

  • @dupajasio4801
    @dupajasio4801 2 years ago

    I commented 2 years ago just below and actually changed my mind. I want this Power shyt and IBM gone already. Have you seen any young person ever review this thing? It might be around for some time, but not very long. None of IBM's UA-cam marketing videos allow comments. I'm not surprised. I remember when there were printed IT magazines; IBM had those ads that never talked about their products. Those ads basically said: if you have IT problems, contact IBM, and they will sell you a way overpriced solution. As always, cool video Patrick.

  • @alphafort
    @alphafort 4 years ago +4

    first time i heard the term "they're just slide-ware". cool!

  • @nickname1392
    @nickname1392 4 years ago +1

    Whoa. Didn't know any of that was a thing. It's awesome.

  • @Shadowauratechno
    @Shadowauratechno 4 years ago +6

    I'm so excited for Power10. Power9 aio is great for flexing their capabilities, but 10 will show the direction the entirety of the server world will move in. I've been hesitant to invest in a Raptor system so far but I'll 100% grab a power10 system if they release one

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago +4

      I totally agree. This is super exciting

    • @Vatharian
      @Vatharian 4 years ago

      For general computing, Power is already good enough. IBM successfully completed the mind-boggling task of porting the whole software stack, from the OS and compilers to the heaviest big-data software, to Power, and it's basically ready out of the box. We now have a proper choice, Intel x86, AMD x86 or IBM Power (ARM is NOT ready yet), for actual deployment in the DC right now.
      Talos/Raptor hardware is basically a devbox; IBM provides virtual Power servers, and sometimes Power9 pops up on AWS bare metal, but pricing is rather high. :(

    • @atisbasak
      @atisbasak 3 years ago

      @@Vatharian ARM datacenter processors like Ampere Altra Max and AWS Graviton 2 based on Neoverse N1 and future processors based on Neoverse N2 and V1 like the Marvell Octeon 10 processors and Sipearl Rhea are ready for servers and data centers. Nvidia Grace CPU is also an interesting CPU for heavy tasks like machine learning and high performance computing.

  • @DocNo27
    @DocNo27 4 years ago +2

    Apple already has ML and Neural Engine *cores* on their silicon in Phones and Tablets - and there is no doubt they will expand their use in their upcoming Apple Silicon desktop SoC chips. It's going to be fun to see where this all leads!

  • @henrikwannheden7114
    @henrikwannheden7114 4 years ago +1

    That was AWESOME! Thanks!

  • @ConsistentlyAwkward
    @ConsistentlyAwkward 4 years ago +1

    Hello STH, could you clarify if the max memory configuration of 2 petabytes (3-6 terabytes per DIMM) is with a volatile type of memory or a non-volatile kind?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago +1

      IBM used future memory types. OMI has options beyond DDR4/5 possible.

  • @digitalsparky
    @digitalsparky 4 years ago +7

    Pretty soon 640PB ought to be enough for anybody... :P.

  • @ash98981
    @ash98981 4 years ago +1

    AMD was rumored to go SMT4/8 in Zen generations after Genoa.

  • @thomasholte1828
    @thomasholte1828 4 years ago +1

    Thank you for this. Good info.

  • @seylaw
    @seylaw 4 years ago +2

    So why did IBM lose the supercomputer contracts to AMD then? Maybe it wasn't bringing what they needed to the table? Is there any analysis on this topic?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago +1

      We did not go into this here. That is a big topic in itself. Also, PowerAXON does not speak NVLink in this generation, which was a factor in the HPC wins.

    • @seylaw
      @seylaw 4 years ago +1

      ​@@ServeTheHomeVideo Thanks, maybe it is an idea for a future video or in-depth article. ;)

  • @cptechno
    @cptechno 4 years ago +1

    Great presentation! I like it! I wish IBM implemented SMT2 also, that is, allowed 2 threads per core like current x86 processors. Not all code can take advantage of 4 and 8 threads per core. It can be wasteful in some applications.

    • @atisbasak
      @atisbasak 3 years ago

      By the way, the ARM camp takes the opposite approach. It thinks that SMT is bullshit, and that's why almost no ARM design, neither standard ARM cores nor custom ones, supports SMT.

  • @siddheshthakur6079
    @siddheshthakur6079 4 years ago +20

    Man, I wish Nvidia just gifts me a DGX-A100 someday.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago +19

      Can we get one too?

    • @siddheshthakur6079
      @siddheshthakur6079 4 years ago +1

      @@ServeTheHomeVideo If you do get one, I will help you benchmark it with Deep Learning applications for sure if you want it. :) Great video!

    • @garethevans9789
      @garethevans9789 4 years ago

      Or just wait a decade or so. What took an entire rack cabinet 20 years ago (I remember being blown away seeing a bunch of Sun Big Iron ~2004) is now pretty mediocre. Wikipedia is full of dead links for Sun before they got taken over by the dark side. I'd love to benchmark my desktop against a fully loaded Sun E10K; I have a feeling my desktop would win.🤔

    • @Vatharian
      @Vatharian 4 years ago

      @@garethevans9789 Two weeks ago I scrapped a 'Cisco academy' at work. Five racks, absolutely chock-full of top-of-the-line Cisco offerings collected from different departments after decommissioning, like stacked 100G switches, hubs... oh wait, did I write 100G? Out of habit, I suppose... so 100 Mbps switches, hubs, firewalls, routers, a couple of 1G devices (think an over-2U module in an 8U chassis that has... 6 ports), servers, and gods know what else. We asked several schools in the area if they wanted it for training (everything was functional), but none accepted, so we had to scrap it (no private gifts policy). My point is, a single 48+4-port 1/10G switch has more bandwidth than that whole museum.
      When I was in college in the early 2000s, my department got a new supercomputer, a 50 TFLOPS monster, and I currently have more sitting under my desk in two render boxes, casually sipping a little over 2 kW under load. On top of that, I have a bigger SSD than that supercomputer had total capacity in its SAN, so you're probably right...

    • @catchnkill
      @catchnkill 4 years ago

      @@Vatharian 2 kW makes a good heater. It will be cold tonight, 7 °C. It would be nice to have those running.

  • @팩스
    @팩스 4 years ago +2

    I love IBM

  • @chriss4365
    @chriss4365 2 years ago +1

    This would prob be a beast in a home pc.

  • @josephroblesjr.8944
    @josephroblesjr.8944 4 years ago +5

    I just want IBM POWER on the desktop as an actual competitor to x86

    • @JohnSmith-yz7uh
      @JohnSmith-yz7uh 4 years ago +5

      Well, without Windows support no one will buy it. There are workstation boards with POWER9 out there; the company is called Raptor, but they are kinda pricey, between $3,000 and $5,000.

    • @josephroblesjr.8944
      @josephroblesjr.8944 4 years ago

      @@JohnSmith-yz7uh I just wish there was a more consumer-friendly version of the chips, like to compete with Ryzen or the Core series. I do understand what you are saying though.

    • @atisbasak
      @atisbasak 3 years ago

      @@josephroblesjr.8944 ARM will eventually take over the laptop and desktop PC market as well as the workstation and server market.

  • @spambot7110
    @spambot7110 4 years ago

    4:35 i think you meant to say "parallel", not "differential"

  • @eDoc2020
    @eDoc2020 4 years ago +1

    Wow, it's basically a gymnasium-sized barbecue.

  • @hisuiibmpower4
    @hisuiibmpower4 4 years ago

    Now I understand why IBM P series memory is that expensive: they have a built-in controller on board.

  • @prashanthb6521
    @prashanthb6521 4 years ago

    Game changing!

  • @hariranormal5584
    @hariranormal5584 4 years ago

    Even the HP Superdome Flex does many nodes as one "system", up to 32 processors and 48 TB of RAM.

  • @alex.quiniou
    @alex.quiniou 4 years ago +2

    1:17 hard choice

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago

      Ha! There was a funnier version of that one that got pulled last minute.

  • @goodyKoeln
    @goodyKoeln 4 years ago +1

    Sounds exciting, not sure how well this will compare against Genoa end of 2021. 🤔

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago

      Talked Genoa a bit since they will be contemporaries along with Sapphire Rapids

    • @goodyKoeln
      @goodyKoeln 4 years ago

      @ServeTheHomeVideo
      Yes, you did.
      Will be an interesting year for the server space.

    • @atisbasak
      @atisbasak 3 years ago

      IBM POWER10 will outperform AMD Epyc Genoa and Intel Xeon Sapphire Rapids.

  • @SamWhitlock
    @SamWhitlock 4 years ago +6

    With disaggregated memory and dark fiber between your computer and a nearby datacenter, the old meme of "downloading more RAM" may soon turn out to be true!

    • @catchnkill
      @catchnkill 4 years ago +1

      It will not happen. Even in the highly populated Asian city where I live, the average distance between the data centre and residential areas is still more than 10 miles. No private citizen can have his/her own optic cable running direct to the data centre. The government will not grant you a permit to dig up the road to lay your cable unless you are a telecom provider or a mobile phone network company, etc.
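
Physics backs this up: even with permits and dark fiber, the propagation delay of light in glass puts a hard floor under any "download more RAM" scheme. A quick sketch using the ~10-mile figure from the comment above, with the standard approximation that light in fiber travels at roughly 2/3 of c:

```python
# Hard physical floor on remote-memory latency over fiber.
# Light in fiber travels at roughly 2/3 the vacuum speed of light.
C_FIBER_M_PER_S = 2.0e8   # approximate propagation speed in glass
MILES_TO_M = 1609.344

def round_trip_us(miles: float) -> float:
    """Minimum round-trip time in microseconds, propagation delay only."""
    return 2 * miles * MILES_TO_M / C_FIBER_M_PER_S * 1e6

print(f"10 miles: {round_trip_us(10):.0f} us round trip")
# Local DRAM is ~0.1 us, so this is three orders of magnitude slower,
# before any switching or protocol overhead is even added.
```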

    • @SamWhitlock
      @SamWhitlock 4 years ago

      @@catchnkill I jest. There are a lot of nanoseconds in even one mile: ua-cam.com/video/9eyFDBPk4Yw/v-deo.html

  • @dntknwhw
    @dntknwhw 3 years ago +1

    OMI is great tech. However, I hope it doesn't introduce too much latency.

    • @atisbasak
      @atisbasak 3 years ago

      OMI is a high-bandwidth, low-latency memory bus.

  • @NilsJakobson
    @NilsJakobson 4 years ago

    Doesn't this architecture have its beginnings back when IBM was designing the Cell processor for the Sony PlayStation 3? I have no idea, just guessing that this could be a further evolution of that Cell architecture technology.

    • @freddown
      @freddown 4 years ago

      Way before the Cell processor, I was working on RS/6000 Power hardware in 1990, basically POWER1

    • @atisbasak
      @atisbasak 3 years ago

      @@freddown Yes, the POWER ISA actually began with POWER1 in 1990. It then evolved into the PowerPC architecture beginning in 1993, and by 1998 the original POWER ISA was deprecated. In 2006, the PowerPC ISA evolved into the third Power ISA.

  • @JonMasters
    @JonMasters 4 years ago +1

    50-100ns hit going to another box is *nothing*. A load from local DDR is going to get close to that 100ns number anyway
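
Using the figures quoted in this thread, the arithmetic of that claim looks like this (a sketch; the latency numbers are the thread's, not measurements):

```python
# Local vs. cross-box load latency under PowerAXON-style remote memory.
# The ~100 ns local-DDR figure and the 50-100 ns penalty are the numbers
# quoted in the comments; nothing here is a measured Power10 value.
LOCAL_DDR_NS = 100.0

def remote_load_ns(penalty_ns: float) -> float:
    """A remote load pays the local DRAM latency plus the link penalty."""
    return LOCAL_DDR_NS + penalty_ns

for penalty in (50.0, 100.0):
    r = remote_load_ns(penalty)
    print(f"penalty {penalty:.0f} ns -> {r:.0f} ns ({r / LOCAL_DDR_NS:.1f}x local)")
# For scale, an earlier comment quotes ~500 ns for InfiniBand EDR,
# i.e. 5x+ a local load; 1.5-2x really is modest by cluster standards.
```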

  • @okoeroo
    @okoeroo 4 years ago

    Awesome

  • @Xyxox
    @Xyxox 4 years ago

    Wow, if you were to strategically locate this infrastructure in datacenters around the globe, you could leverage it for prime-time applications during the day in any given region and migrate virtual instances of heavy compute to infrastructure that is underutilized, getting much higher ROI. You could even lease time to third parties, and IT could go from a cost center to a profit center in large enterprises.

  • @RaspyYeti
    @RaspyYeti 4 years ago

    You missed the impact this technology would have on cloud gaming.
    If Power10 ended up being a Stadia CPU powering a game like Call of Duty: Warzone, the total memory used across all 120 players would be a fraction of what 120 PC or console players use.
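
The saving being claimed here is deduplication of read-only game assets across players on one shared-memory host. A toy model with invented sizes (not actual Warzone numbers):

```python
# Toy model of memory dedup for N players of the same game on one host.
# Both sizes are invented for illustration only.
SHARED_ASSETS_GB = 6.0   # textures, geometry, code: identical per player
PRIVATE_STATE_GB = 0.5   # per-player world state and buffers

def total_gb(players: int, deduplicated: bool) -> float:
    if deduplicated:  # shared pages stored once for the whole host
        return SHARED_ASSETS_GB + players * PRIVATE_STATE_GB
    return players * (SHARED_ASSETS_GB + PRIVATE_STATE_GB)

n = 120
print(f"120 separate machines: {total_gb(n, False):.0f} GB")
print(f"one big shared host:   {total_gb(n, True):.0f} GB")
```

Under these made-up sizes, 120 separate machines need 780 GB while one shared host needs 66 GB, which is the "fraction" the comment has in mind.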

  • @KangoV
    @KangoV 3 years ago

    So, 16 * 30 * 4. Wow, that's 1,920 threads in one chassis!

  • @Jormunguandr
    @Jormunguandr 4 years ago +1

    IBM

  • @JonMasters
    @JonMasters 4 years ago +1

    Who wouldn’t watch the whole video????

  • @tommihommi1
    @tommihommi1 4 years ago +3

    these processors are some chunky monolithic boys

    • @ServeTheHomeVideo
      @ServeTheHomeVideo 4 years ago +1

      They now have a dual-die option, so not just monolithic!

    • @atisbasak
      @atisbasak 3 years ago

      @@ServeTheHomeVideo Dual-chip and multi-chip modules are nothing new for IBM POWER processors. All POWER CPUs have been available in multi-chip-module configurations.

  • @ewenchan1239
    @ewenchan1239 4 years ago

    If I understand this correctly -- PowerAXON = RDMA at the OEM level.

  • @gdevelek
    @gdevelek 1 year ago +1

    ...and then IBM released pricing for Power-10.

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 4 years ago

    Why don't they go really mad and put stacks of HBM3 next to the die? Maybe they could get 64 GB of L3 cache.

    • @catchnkill
      @catchnkill 4 years ago

      Manufacturing. HBM3 parts are 3D chips using TSVs to connect the dies vertically. Very difficult to make and very expensive, thus a very niche market. TSMC is so happy producing those Apple chips that it just won't care to make such difficult stuff.

  • @UiNeilSandys
    @UiNeilSandys 4 years ago +1

    2:12 ~ good mention, #cool, which of you are not running. somehow it seems impossible to turn off either of the 4 corner chips. So something like Knights Templar Crusader 4x4 grid from the middle outwards. ~ The Symmetric Multiple Thread(ing) never made sense to me, except math matrix of 128 SIMD=64+64=32+32+32+32=16*8, yet 512bit and 2048 exist out there, the threads thing also never made sense, because "out of order" operation, execution, delay prefetch, services, always meant since Windows NT=Jesus, New Technology, allowed services to do swing dancing, so maybe 1,000 threads per core = explains how intel 8088 1 million instructions persecond, while running 5 or 10 megahertz max (needs checking, completes 1 operation per 5 or 10 clock cycles, huge difference). and 233 mhz and 1.8Ghz and 2.0 GHz (Weirdest 200 Mhz performance snappy responsive boost ever, memory L1, L2, internal clock or DRAM clock magic, was DDR400\800 so 5x was a pleasant wow. compared to 1.8GHz, always told the multiplier of timing did not matter, insultingly wrong) and then 4 Ghz and 5(000) Ghz on tv, youtube, onscreen typing speed looks about the same to me. Using google search auto complete is faster than, google docs and microsoft word. Use my document library, and help me the fuck out!

    • @UiNeilSandys
      @UiNeilSandys 4 years ago

      6:45 ~ Damn. When you opensource your API or ISA and logic gate maps, i sort of think Google Authored their new Einstein System. I was saying we need to address the maximum of SDUC sandisk cards, 128TB, are they multiplying this per 16 channels, 1 per 16 cores.

    • @UiNeilSandys
      @UiNeilSandys 4 years ago

      8:42 each processor for speed reasons, would have its own address table. some sort of logic, master file table, controls cores = nodes, access and just a bunch of drives, or locked to another drive sector management dip switch add or subtract who can write. for speed i would ignore who can read, i like reading, and cores can individually encrypt their storage to secure any node, admin, client reads.

    • @UiNeilSandys
      @UiNeilSandys 4 years ago

      11:43 ~ #Hotchips really is the most exciting, should we be #masturbating while they are presenting what seems to be industry secrets, that i have never been republished or revisited in detail outside of this dimly lighted, possible pole dancing disco ball slide show. if any one at home is trying, i respect that, it is so intimate with details we never hear discussed, that it definitely seems a marriage pollinating exposition.

  • @derekfoulk4692
    @derekfoulk4692 4 years ago +1

    Still no vector processor design additions.....kinda lame

    • @atisbasak
      @atisbasak 3 years ago

      There are a lot of vector and matrix processing SIMD extensions added to POWER10.

  • @tombouie
    @tombouie 4 years ago +1

    Thanks: en.wikipedia.org/wiki/POWER10

  • @plebetopro5786
    @plebetopro5786 4 years ago

    With SMT8, you make it sound like there is some benefit to 30 "cores" with SMT4, yet you don't explain why it would be a benefit... You also make it sound like the Power10 will only be available in the 15-core config. So the 30-core version would be something like SMT2, and then running SMT4 on that SMT2 setup.

    • @henrikwannheden7114
      @henrikwannheden7114 4 years ago +1

      What constitutes a core is which functional units share L1/L2 caches, scheduler and registers. So an SMT4 core shares all that among its four threads, while an SMT8 core shares its caches, scheduler and registers among eight. In a regular SMT2 x86 system, each core has those facilities for just two threads.
      The hypervisor (PowerVM and KVM) seems to have logical-partition facilities that can securely partition a core down to an SMT2 slice. Some workloads really benefit from being spread over as many threads as possible, while others need the extra cache per thread that comes with a core not handling that many threads. This seems to be dynamically adjustable by the hypervisor. So... it depends. Sometimes 2x SMT4 is better than 1x SMT8, sometimes it's not.
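
The core/thread arithmetic that runs through these comments (15 SMT8 cores vs. 30 SMT4 cores per socket, and the 16-socket chassis mentioned elsewhere in the thread) works out as follows:

```python
# Threads per socket and per chassis for the Power10 configurations
# discussed in the comments: 15 SMT8 cores or 30 SMT4 cores both
# yield 120 hardware threads per socket.
def threads(cores: int, smt: int) -> int:
    return cores * smt

assert threads(15, 8) == threads(30, 4) == 120

SOCKETS = 16  # the 16-socket system mentioned in another comment
print(f"per socket: {threads(30, 4)} threads")
print(f"16 sockets x 30 cores x SMT4 = {SOCKETS * threads(30, 4)} threads")
```

This is why a "15c/120t" part and a "30-core SMT4" part look the same from the OS thread count, even though the cache-per-thread trade-off described above differs.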

    • @atisbasak
      @atisbasak 3 years ago

      IBM POWER10 will have up to 30 SMT8 cores per socket.

  • @billbob4243
    @billbob4243 3 years ago +1

    2030, Notepad will require 64GB RAM.

  • @wilsonacero6722
    @wilsonacero6722 9 months ago

    Amazing! Now please give me a good Intel server with a GPU and no worries about finding the right .DEB file or Python libraries!

  • @garyslatter9854
    @garyslatter9854 4 years ago +1

    #IBM still around?

    • @lawrencemanning
      @lawrencemanning 4 years ago

      The share price ain't doing so well. Still lagging after coronavirus, unlike pretty much every other tech stock. Says a lot, I think. Annoying cos I bought some!

  • @JonathanSwiftUK
    @JonathanSwiftUK 4 years ago +1

    Captive user base = locked in. The future will be AMD and Intel; I'd be surprised if ARM made any inroads in the data center. IBM is only really used for the AS/400. Intel and AMD chips have plenty of cores/power right now for virtualisation. IBM is very niche in the data center. Nobody wants PB of memory in any machine, but having a couple of TB is nice, and... practical. Putting too many VMs on a single box isn't such a good idea. Intel and AMD put the memory controller on the CPU die for good reason; it would be interesting to see if future technology advances mean it makes sense to decouple. IBM kit is... very expensive. AWS and Azure will continue with their commodity hardware approach. Having said that, I'd love to see what they can come up with. Unfortunately, IBM tends to invent Betamax-style products whilst the world buys VHS.

    • @JonathanSwiftUK
      @JonathanSwiftUK 4 years ago

      @@owowowdhxbxgakwlcybwxsimcwx very interesting, but unlikely to have appeal to those who spec and choose hardware in most corporate / enterprise environments. ARM is unproven for Windows, and not supported, as far as I know, for VMware. Most physical kit runs virtualisation like VMware or Hyper-v. Intel / AMD are proven parts.

    • @JonathanSwiftUK
      @JonathanSwiftUK 4 years ago

      @@owowowdhxbxgakwlcybwxsimcwx Had a further look at this; server cores, cache, memory capability, PCIe lanes etc. look really good, very impressive, but it does come down to support. It would have to run VMware, other hypervisors, RedHat, CentOS, Windows Server, etc. to break into the DC market.

    • @atisbasak
      @atisbasak 3 years ago

      @@JonathanSwiftUK IBM POWER CPUs run various Linux distributions like RedHat, Suse Enterprise Linux, CentOS, Debian etc. along with IBM i and AIX and virtualisation software like VMware. Plus Google Cloud is already using IBM POWER CPUs.

    • @JonathanSwiftUK
      @JonathanSwiftUK 3 years ago

      @@atisbasak Yes, it's true IBM has a presence in the data center, particularly with AS/400, but on price I don't believe they're competitive with HPE/Dell. Like Apple they are a closed eco-system in terms of their PowerPC offerings - vendor-locked. Where they use Intel CPUs (and they do, I've swapped Intel CPUs in IBMs) they should be viable for those who are IBM fans. Cisco (I'm thinking UCS) use Intel chips and have a good Enterprise management platform. I don't have enough experience of what management tools IBM have. HPE/Dell have good management tools. Managing hundreds or thousands of servers needs good management tools for deployment / maintenance (firmware/software/driver upgrades). 20 years ago Microsoft told me to buy AMD for Exchange servers and Intel for everything else. These days just about everything is virtualised, so when managing hardware and hypervisors integration is needed. Then you have storage, and that ties in with your integration, i.e. deploy these hypervisors, on these hardware platforms, and use these LUNs.

    • @atisbasak
@atisbasak 3 years ago

And in addition to that, ARM server CPUs like Ampere Altra Max and Marvell ThunderX3 based on Neoverse N1, Octeon 10 based on Neoverse N2, SiPearl Rhea based on Neoverse V1, and the Nvidia Grace CPU based on ARM will revolutionise data centers in all workloads, including machine learning and high-performance computing.

  • @antihero939
@antihero939 4 years ago +1

    Those new jump cuts really aren't doing you any favours

    • @ServeTheHomeVideo
@ServeTheHomeVideo  4 years ago

      Still gaining confidence the S1 will keep focus when I am looking at the C200. This is partially testing out that feature.

  • @andljoy
@andljoy 4 years ago +6

    Can we finally get away from all this x86 and x86-64 nonsense please and leave the mess of CISC in the 80s where it belongs.
That memory controller tho :). I want a POWER10 with a bank of HBM2 for RAM :D

    • @tommihommi1
@tommihommi1 4 years ago +3

      with HBM2E you can basically max out that 1TB/s interface with just two stacks.
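The back-of-the-envelope math behind that claim checks out, assuming typical published HBM2E figures (3.6 Gb/s per pin on a 1024-bit stack interface; these numbers are assumptions, not from the video):

```python
# Assumed (typical) HBM2E figures: 3.6 Gb/s per pin, 1024-bit interface
# per stack. These are illustrative, not quoted from the video.
pins_per_stack = 1024
gbits_per_pin = 3.6

stack_bw_gbs = pins_per_stack * gbits_per_pin / 8  # GB/s per stack
total_bw_gbs = 2 * stack_bw_gbs                    # two stacks

print(stack_bw_gbs)  # 460.8 GB/s per stack
print(total_bw_gbs)  # 921.6 GB/s, close to the quoted 1 TB/s interface
```

So two stacks land at roughly 0.92 TB/s, which is indeed "basically maxing out" a 1 TB/s memory interface.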

    • @tappy8741
@tappy8741 4 years ago

@@tommihommi1 So create a special SFF mobo for workstation use with built-in HBM2e

  • @dupajasio4801
@dupajasio4801 4 years ago

Finally a promise from IBM for my new media server. I'll keep it down to 1PB of RAM. It should be OK. If something happens I can always invite a 64-year-old expert to fix it. Amazing this crap is still around...

  • @shapelessed
@shapelessed 4 years ago

    15c/120t CPU? What the actual f**k?

    • @shapelessed
@shapelessed 4 years ago +1

Sorry for swearing, but honestly, why so many threads for so few actual cores?

    • @ServeTheHomeVideo
@ServeTheHomeVideo  4 years ago

A Power10 thread is a bit different from a Xeon/EPYC thread
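A quick sketch of where the 15c/120t figure comes from, assuming Power10's published SMT8 mode (eight hardware threads per core), versus the SMT2 typical of Xeon and EPYC:

```python
# Power10 supports SMT8: each core presents eight hardware threads,
# versus two per core (SMT2 / Hyper-Threading) on Xeon and EPYC.
power10_cores = 15
power10_threads = power10_cores * 8  # SMT8
print(power10_threads)  # 120

smt2_threads = power10_cores * 2     # same core count at SMT2
print(smt2_threads)  # 30
```

So the "120 threads" is not 120 small cores; it is 15 wide cores, each able to interleave eight hardware threads.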

    • @shapelessed
@shapelessed 4 years ago

@@ServeTheHomeVideo I'm currently designing a NAS server for a Raspberry Pi in Node (which is a JavaScript runtime)... 4 ARM cores are plenty of power to simply serve a few files and do automatic backups through my app, even more since I'm splitting most tasks across multiple threads. But I'm still kind of interested in who exactly would need a CPU like this; from the thread perspective it's actually more like a GPU (don't get me wrong, I'm looking at it kind of loosely from that perspective, but still).
I guess that's more about flexibility: you could have a few *strong* cores or more weaker cores, if it's possible to split them like that. This approach could also mean designing your software to use full cores for heavier tasks and the smaller split ones for second-tier tasks, meaning a more optimized CPU load.

  • @loveanimals-0197
@loveanimals-0197 4 years ago +3

Guys, I know the Power architecture inside and out. This is a whole load of BS. The main reason other companies don't care about SMT and memory clustering is that the world has moved on to scale-out and cheaper models with the best virtualizers like VMware. Power has a bullshit virtualization engine, PowerVM, and no one wants to use it.

    • @lawrencemanning
@lawrencemanning 4 years ago

This is the whole commodity vs bespoke computing argument, which I too thought was decided 20 years ago for 99.9% of folks. I'm guessing IBM think that as long as they offer their customers something slightly cheaper as an upgrade than throwing it all in the bin and going with someone who offers a migration onto a standard platform... they'll win.

  • @cythascruseo6696
@cythascruseo6696 4 years ago

    Fat protogens

  • @n0tfr3shm1lk
@n0tfr3shm1lk 4 years ago +1

Blah blah blah... too much of the guy's face and not enough pictures!

  • @ZeZeBatata69
@ZeZeBatata69 4 years ago +1

No, a lot of us don't use Power CPUs, but nice try to justify the content IBM sent you. :)