Jim Keller: Arm vs x86 vs RISC-V - Does it Matter?

  • Published 24 Nov 2024

COMMENTS • 135

  • @ryshask
    @ryshask 8 months ago +23

    When I saw Jim I knew it would be an insanely great explanation.

  • @oraz.
    @oraz. 1 year ago +82

    "What limits computer performance is predictability." That's a huge quote.

    • @bobbastian760
      @bobbastian760 6 months ago

      Yeah I was thinking the same thing, what crazy times we live in...

    • @bakedbeings
      @bakedbeings 6 months ago

      Yeah, in other words, avoiding the long access/retrieval times of RAM relative to CPU cycle length 😢

    • @Satanist-zm2rq
      @Satanist-zm2rq 5 months ago

      It's quite natural, you can do anything faster if you can predict future needs.

    • @niks660097
      @niks660097 5 months ago

      @@bakedbeings If you can predict perfectly, the long access times of RAM won't even matter, since you can queue up thousands of memory fetches, overlapping them and hiding their latency. With enough memory bandwidth and perfect predictability, CPUs will act like GPUs.

    • @bakedbeings
      @bakedbeings 5 months ago

      @@niks660097 Yep, predict to avoid.
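
    A small aside to make the idea in this thread concrete: when the access pattern is predictable, memory fetches can be issued far ahead of use so their latencies overlap. This is a minimal C sketch, assuming GCC/Clang's __builtin_prefetch; the function name and prefetch distance are illustrative only, not anything from the video.

      /* Sum an array while prefetching PREFETCH_DIST elements ahead, so the
         load for iteration i + PREFETCH_DIST is in flight while we work on i. */
      #include <stddef.h>

      #define PREFETCH_DIST 16

      long sum_with_prefetch(const long *a, size_t n)
      {
          long sum = 0;
          for (size_t i = 0; i < n; i++) {
              if (i + PREFETCH_DIST < n)
                  __builtin_prefetch(&a[i + PREFETCH_DIST], 0, 1);  /* read hint */
              sum += a[i];  /* by now the earlier prefetch has (hopefully) landed */
          }
          return sum;
      }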

  • @freakinccdevilleiv380
    @freakinccdevilleiv380 1 year ago +18

    I could listen to Jim Keller's insights for hours. Just serving him coffee I would feel like I'm wasting his time 😅

  • @scroopynooperz9051
    @scroopynooperz9051 3 years ago +64

    CPU Jesus Jim Keller is the real deal

  • @kartikpodugu
    @kartikpodugu 2 years ago +108

    OMG, he has explained so much, so clearly, in so little time.
    If you understand, you can make it simple for others.
    Profound knowledge.

  • @lionelt.9124
    @lionelt.9124 3 years ago +23

    Clean Architecture for software seems to have very similar principles to the creation and maintenance of clean hardware architecture. Makes a lot of sense.

    • @bobbastian760
      @bobbastian760 6 months ago

      Except Clean Architecture becomes the aim rather than building apps. If your language does it out of the box, fine, but most don't.

  • @c128stuff
    @c128stuff 1 year ago +15

    The parallels between CPU design and Operating System design are interesting.
    Uncontrolled complexity as a result of adding features but not removing features? Totally.
    Leaky abstractions as a result of 'quick fixes' and premature optimizing? Absolutely.
    I'm currently writing an OS for an extremely minimal system (as in, memory measured in kilobytes, CPU speed in single-digit megahertz, etc). I'm at incarnation 3 of the design now. Yes, the previous 2 worked, but as I kept adding features which were 'required', things got more complex, making for more involved decisions, resulting in more overhead. Poking holes in abstraction layers did speed up some things, but ended up causing longer-lasting resource contention, which ended up lowering overall performance, etc. Incarnation 3 takes all the things from the previous 2 incarnations, but with a new and clean design, with clean and unbroken abstractions, resulting in less complexity, and removal of functionality which was in the end just providing alternative ways to do the same things.
    Without being a CPU designer, this discussion is still very relatable.

  • @user-qf6yt3id3w
    @user-qf6yt3id3w 3 years ago +93

    Keller is currently CTO of Tenstorrent, an AI company. They're using SiFive's RISC-V processors. What's interesting about this is that Keller is obviously keen to do his own RISC-V microarchitecture implementation. Someone who has had experience of high-performance x86/x64 chips doing a RISC-V microarchitecture would be really interesting. I particularly liked the way he was bullish about high-performance chips with variable-length instructions. RISC-V's base instruction set is fixed-length 32 bit, but the C extension allows for 16-bit instructions too. Because it's popular and part of the Linux ABI it would obviously be good to support. And Apple's M1 shows that you can get very high performance out of ARM64.

    • @howardc1964
      @howardc1964 2 years ago +6

      As Jim notes, RISC-V is just the latest clean design for a basic scalar processor. Great if you want what he calls "baby computers" OR to use as a control processor for a big block of specialized processing like AI. Thus the natural fit to the AI processor startup opportunity. The design focus isn't even on the scalar processor, so just license the scalar proc implementation. Focus on the AI parts, which don't use any of the traditional scalar processing tool chain (C/C++ compile/link/debug) anyway.

    • @DVRC
      @DVRC 1 year ago +3

      Keller isn't new to RISC architecture engineering: he worked on DEC Alpha (and before that on the VAX 8800, which is a CISC ISA), MIPS and ARM SoC implementations.

    • @RobBCactive
      @RobBCactive 6 months ago +2

      Ummm, I've heard Keller talking about ARM being unwilling to add the data types Tenstorrent needed. So he looked at RISC-V and could quickly add the RTL for the types.
      Ironic, as ARM came about because Intel wouldn't customise their 16-bit 8086 for Acorn.

    • @обычныйчел-я3е
      @обычныйчел-я3е 6 months ago +1

      Tenstorrent is using their own cores. The SiFive X280 was mentioned in a timeline just once, but it could be a midway solution or a prototype, because they seem much more proud of the next generation (and with their networking architecture the X280 seems useless for big accelerators)

    • @RobBCactive
      @RobBCactive 6 months ago

      @@обычныйчел-я3е I certainly had the impression Keller was rolling his own, specific to their AI project.

  • @LaurentLaborde
    @LaurentLaborde 2 years ago +13

    you don't need a branch predictor if you don't have branches :D

  • @aichrist
    @aichrist 9 months ago +4

    Jim Keller is a smart dude

  • @rakpiotr
    @rakpiotr 3 years ago +53

    The real power of RISC-V is in the common software ecosystem (toolchains, debuggers, OSes, system libraries, etc.) that anyone can use for their own needs. A neat, clean ISA is just a nice bonus on top of that.

    • @SimGunther
      @SimGunther 2 years ago

      Developing software alongside the ISA will let you eat the cake and have it too ;)

    • @mrdr9534
      @mrdr9534 6 months ago

      ?? I thought that RISC-V was an "open architecture", meaning that there potentially would be an even higher "proliferation" of "different ways to solve the same problem" than is the case when the architecture is "locked down"?
      What am I missing?
      Best regards.

    • @reindeer8890
      @reindeer8890 3 months ago

      @@mrdr9534 My impression is that the ISA is open, the implementation may not be. No doubt people will license reference designs or chips.
      My take on the main advantage of RISC-V is the licensing. Hopefully there are multiple vendors and cheaper prices.
      I do wonder why they simply didn't start with an existing workstation ISA, like MIPS or SPARC, but besides cruft they're probably tied up in copyright/patent knots.

  • @goldnoob6191
    @goldnoob6191 3 years ago +16

    Outstanding video, I like that short format! Every time I see Jim on YT I barely have time to finish the video.
    Great subject about instruction sets; as a developer I've been thinking about the mess for quite a long time.
    Everything looks alike now, but it's good to have the point of view of an actual designer.
    Also many thanks for having designed so many great CPUs! If you can, advocate for faster memory and lower timings overall 🙏🥳

  • @MagierMax
    @MagierMax 3 years ago +24

    I love how Ian praises Jim in his videos and Jim himself is like: "F'd my future self so many times already."
    Thinking about improvement, I imagine.

    • @woolfel
      @woolfel 2 years ago +2

      When a person knows the craft, they don't need to bullshit :) I think any engineer that's been working for more than a decade can relate. Jim is clearly a master of his craft and knows from experience.

  • @jolness1
    @jolness1 2 years ago +17

    Jim is fucking brilliant. He’s also a great communicator which is super super important for running a team (and rare in tech from my personal experience)

  • @dixztube
    @dixztube 1 year ago +2

    Great little clip going to watch more. Boy he sure is smart lol

  • @RobBCactive
    @RobBCactive 2 years ago +1

    Ultra cool that Jim mentioned Perl in his IPC example! I missed that in the long-version interview; I tend to listen while doing something routine. From my testing, despite csr/ret having a slow reputation, it was other things bottlenecking the code, and the spaghetti from long routines and duplication led to bugs and maintenance issues.

  • @dp8jl
    @dp8jl 7 months ago +3

    He explained everything so easily

  • @manw3bttcks
    @manw3bttcks 3 months ago

    The thing that's neat about RISC-V (if I understand) is that beyond the core design the new optional stuff is defined in optional extensions. So if I want to make a music player, I could just buy a RISC-V processor that only implements the core plus just the extensions that are of use for audio encoding/decoding. I could leave off stuff related to graphics, for example, if that had no use in my music player.
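
    One concrete way to see that modularity from the software side: the RISC-V C toolchain exposes which optional extensions a build targets through predefined macros, so the same source adapts to a core that carries only what it needs. A minimal sketch, assuming the standard RISC-V C API macros as implemented by recent GCC/Clang; the music-player framing above is just the motivating example.

      /* Report which optional extensions this build targets, e.g. a part built
         with -march=rv32imc versus -march=rv64gcv will print different lines. */
      #include <stdio.h>

      int main(void)
      {
      #if defined(__riscv)
          printf("RISC-V target, XLEN = %d\n", __riscv_xlen);
      #if defined(__riscv_mul)
          puts("M: integer multiply/divide");
      #endif
      #if defined(__riscv_compressed)
          puts("C: 16-bit compressed instructions");
      #endif
      #if defined(__riscv_vector)
          puts("V: vector extension");
      #endif
      #if defined(__riscv_flen)
          printf("F/D: hardware float, FLEN = %d\n", __riscv_flen);
      #endif
      #else
          puts("not built for a RISC-V target");
      #endif
          return 0;
      }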

  • @esra_erimez
    @esra_erimez 5 months ago

    I think this might be the 12th time I'm watching this video. The importance of this video cannot be overstated

  • @jasonchen-alienroid
    @jasonchen-alienroid 6 months ago +1

    Architecture shouldn't matter when you learn the best from each other. The issues arise when you have design decisions that are mutually exclusive in the long run and don't have a path forward.

  • @psikeyhackr6914
    @psikeyhackr6914 2 years ago

    In the late 70s, when the 16-bit processors were just entering the market, Intel admitted that they could have made a better processor if they had forgone all compatibility with the 8080. There were two programs that would translate 8080 assembly language into 8086 assembly.
    One was Conv-86. I don't remember the other. It makes sense that the intelligent design is not the optimum after decades of super 8080s.

  • @Kneedragon1962
    @Kneedragon1962 6 months ago +1

    LOL ~ I hate that, where future Jim comes back and goes "What the f ck did you do that for?"

  • @AVX512
    @AVX512 2 years ago +3

    If 80% is basically 8 instructions, the other thousand instructions must actually save a shit ton of time? Although I see massive speed gains when programs are compiled with SSE/AVX instructions for sure, so maybe we could find a way to use those for more things, like sparse usage for programs that don't parallelize as well.

    • @radivojevasiljevic3145
      @radivojevasiljevic3145 2 years ago +3

      The problem is that there are SSE/AVX/AVX2/AVX512 versions of the same instruction. Cray-style vectors (and the RISC-V vector extension) ensure that exactly the same code executes on every CPU; changing the vector length or the number of execution units just changes the speed. Program vectorization is an old topic; Intel bought a company which made such compilers 20 years ago. But for such technology to work, the ISA must have support. No scatter/gather operations? No vectorization of sparse matrix-vector multiplication. No vector masks? No conditional execution. The more flexible a vector ISA is, the more applications can be vectorized. For applications with irregular parallelism, OoO can handle it. Vectors are good as a cheap way to get a number of operations without having to make the superscalar too wide (now) or to have simple instruction fetch while keeping multiple execution units busy (Cray-1).
      With x86 the "same" instructions are not the same at all; even a simple move is actually a few different opcodes. And some normal instructions like jumps have crazy options; everything is documented in Intel's manuals. So it is far from just 8 instructions.

    • @RobBCactive
      @RobBCactive 6 months ago

      80% of general-purpose program instructions.
      If you don't use SIMD it won't help you, but if you're doing heavy float64 calculations then pipelining them by operating on a vector is a huge acceleration.
      A general-purpose CPU needs to handle well what the market expects. It's no good saying floats are not common and sharing the FP unit between 2 cores, when reviews do benchmarks on multi-threaded FP code, where your CPU is going to suck.
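
    To make the fixed-width versus length-agnostic contrast in this thread concrete, here is a hedged C sketch: the first routine is pinned to 128-bit SSE (exactly 4 floats per step) and would need rewriting to use AVX or AVX-512, while the plain loop leaves the vector width to the compiler or to a Cray/RVV-style vector unit. The saxpy example and function names are illustrative, not taken from the video.

      #include <stddef.h>
      #include <xmmintrin.h>

      /* Pinned to 128-bit SSE: exactly 4 floats per iteration. */
      void saxpy_sse(float a, const float *x, float *y, size_t n)
      {
          __m128 va = _mm_set1_ps(a);
          size_t i = 0;
          for (; i + 4 <= n; i += 4) {
              __m128 vx = _mm_loadu_ps(&x[i]);
              __m128 vy = _mm_loadu_ps(&y[i]);
              _mm_storeu_ps(&y[i], _mm_add_ps(_mm_mul_ps(va, vx), vy));
          }
          for (; i < n; i++)              /* scalar tail */
              y[i] = a * x[i] + y[i];
      }

      /* Length-agnostic form: the same source can be vectorized for whatever
         vector width the target has; only the speed changes. */
      void saxpy_plain(float a, const float *x, float *y, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              y[i] = a * x[i] + y[i];
      }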

  • @canislupus616
    @canislupus616 1 year ago +2

    Is there a fully stable and official Python interpreter specifically tailored for RISC-V?

  • @salty4
    @salty4 3 years ago +6

    I was waiting for this clip, lmao

    • @Supermrloo
      @Supermrloo 3 years ago +1

      No way! Lmfao

    • @salty4
      @salty4 3 years ago

      @@Supermrloo high IQ roast, I see.

  • @RetroPaul6502
      @RetroPaul6502 6 months ago

      What architecture/technology is Keller referring to at 7m55s? The edit garbled the audio referring to an architecture that deprecated (sic.) a legacy mode. I'll have to dig out my architecture book.

    • @0MoTheG
      @0MoTheG 6 months ago

      AMD Zen. Not sure why he considers it clean slate though.
      But as he has been saying for the past decade: ISA doesn't matter.

  • @shanemshort
    @shanemshort 6 months ago

    "that's a problem for ron..." ... "later ron"

  • @phoenixsub7072
    @phoenixsub7072 3 years ago +4

    Hehe, I have minimal knowledge about these things; I only know the difference between, say, a CPU and an FPU, but there's much joy in listening to the words, trying to get a hunch about these fields of knowledge.

    • @justinp9170
      @justinp9170 3 years ago

      It's fun. Almost like getting a front row seat into the possible future lol

  • @sylviam6535
    @sylviam6535 1 year ago +6

    You can clearly see why Jim Keller is a legend in CPU design.

  • @LouisDuran
    @LouisDuran 6 months ago +1

    How did Intel let Jim get away? I guess he probably felt limited there.

  • @joesligo1516
    @joesligo1516 6 months ago

    Holy smokes, what a mind!

  • @davivify
    @davivify 1 year ago +2

    Moving from x86 to an Arm-based processor certainly benefitted Apple, with their M1 and now M2 chips. They made several brave moves in their history - switching from OS 9 to OS X, then from Moto to Intel. And now, leaving Intel entirely. These transitions were not without pain. They forced their customers to come along for the ride and it was a bumpy one. But you could argue that it also made the product much healthier.

    • @senjaz
      @senjaz 3 months ago

      Apple has the advantage that it can design its chips around its specific workloads. One optimisation they made was with memory management. Since Apple frameworks use reference counting for memory management, retain and release performance can be considered almost as important as load/store/branch, so they made sure it was fast. It makes it difficult to compare architectures generally. While Apple silicon is definitely much better than x86 at running Mac OS and software, it's not necessarily so much better at running anything else.
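
    A rough sketch of why retain/release lands on the hot path: every ownership handoff is an atomic bump on the object's count, so it runs constantly in reference-counted code. The C11-atomics version below is purely illustrative and generic; it is not Apple's actual objc_retain/objc_release implementation or object layout.

      #include <stdatomic.h>
      #include <stdlib.h>

      typedef struct {
          atomic_size_t refcount;
          /* ... object payload ... */
      } obj_t;

      static inline void obj_retain(obj_t *o)
      {
          atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
      }

      static inline void obj_release(obj_t *o)
      {
          /* release ordering on the decrement, acquire fence before freeing */
          if (atomic_fetch_sub_explicit(&o->refcount, 1, memory_order_release) == 1) {
              atomic_thread_fence(memory_order_acquire);
              free(o);
          }
      }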

  • @Armand79th
    @Armand79th 2 years ago +1

    And now we have RISC-V cores that do 16/32-bit modes..

  • @saultube44
    @saultube44 5 months ago

    Jim Keller: The Obi-Wan Kenobi of CPU Architecture, Master... I'm your loyal Apprentice 🤟🤘

  • @volodymyrdobrovolsky8610
    @volodymyrdobrovolsky8610 2 years ago

    Dear Radivoje, the branch prediction is a bad decision as well. My replacement idea is much better.

  • @salty4
    @salty4 3 years ago +9

    Another dumb question: besides the legal issues, is it possible to add instruction decoders (and other smart stuff) to any open-source ISA like RISC-V and make them compatible with x86 or ARM? Will Intel release a real hybrid CPU with RISC-V and a bunch of instruction decoders (I read somewhere that the M1 has something like this for accelerating x86 emulation) or whatever, and make this chip able to run code for multiple ISAs, etc.?

    • @DanielVDGarde
      @DanielVDGarde 3 years ago +10

      Most chips are already very close to RISC once you're past the decoder. And even modern (x86) decoders don't require much modification to support other ISAs.
      But what is the point? You really don't need to support multiple ISAs, and in the end you optimize your design for one specific ISA.

    • @seylaw
      @seylaw 3 years ago +4

      There is no legal problem once these x86/x86-64 patents expire. :D

    • @law-abiding-criminal
      @law-abiding-criminal 3 years ago

      @@seylaw when will they expire?

    • @seylaw
      @seylaw 3 years ago +7

      @@law-abiding-criminal While some did expire already, unfortunately it will take a while to be practical: www.blopeur.com/2020/04/08/Intel-x86-patent-never-ending.html

    • @catchnkill
      @catchnkill 3 years ago +1

      @@seylaw Yes, there is. Not just patents. Intel's x86 is copyrighted. Copyright protection usually expires some years after the death of the copyright holder. Since Intel will not die soon, the x86 copyright is there to stay for a long time.

  • @lnostdal
    @lnostdal 2 years ago

    Sounds very similar to complexity debt in software.

  • @beingatliberty
    @beingatliberty 2 years ago +2

    Do neuromorphic CPUs come into this at some point? Or is it a snare and delusion/fantasy to think practically of CPUs that restructure themselves on the basis of analysis of the instructions they are about to perform in the application they are going to run?

  • @ianoconnor1515
    @ianoconnor1515 5 months ago

    I would like to see raspberry pi release a reduced x64 chip.

  • @0MoTheG
    @0MoTheG 5 months ago

    ISA doesn't matter. Let's talk about what does!
    The answer is always: That depends.
    As the software isn't written by humans anymore and code size is not an issue anymore, the old requirements and constraints are irrelevant.

  • @godnyx117
    @godnyx117 2 years ago +5

    RISC-V is the future!
    Everything open source beats anything proprietary sooner or later, so it's just a matter of time! I also heard that RISC-V has fewer instructions and the assembly is overall much easier and more enjoyable to work with!
    Just give it some time and we'll see ;)

    • @rosomak8244
      @rosomak8244 6 months ago

      I know that it's a bit early, but I'm anticipating the superior linux desktop is just around the corner. (Sarcasm).

    • @godnyx117
      @godnyx117 6 months ago

      @@rosomak8244 Linux is PRACTICALLY not open source as it gets lots of sponsorships (both money and code) from companies. But it hasn't become unstable and unusable (yet).

  • @alialibaba6672
    @alialibaba6672 6 months ago

    It could have been an interesting interview if there were no kitchen background noise.

  • @Ahmad-iv3dr
    @Ahmad-iv3dr 3 years ago +1

    Could x86 be a SoC?

  • @jimcallahan448
    @jimcallahan448 6 months ago

    Linear algebra is a high level description of potentially thousands or millions of multiplies and adds -- all very predictable from a terse equation and dependent on the exact data. This makes AI, Data Science and Statistics on a large scale possible. It should also enable physics and engineering calculations. All of this implemented in low level languages and callable from Python!
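
    For a concrete picture of that predictability, a textbook C matrix multiply is nothing but counted loops of multiply-adds whose branches and addresses are simple functions of the loop counters, which is exactly what prefetchers and branch predictors handle well. The kernel below is the naive schoolbook version, not any particular library's implementation (a Python stack would normally dispatch to a tuned BLAS instead).

      /* Dense matmul: C[n x n] += A[n x n] * B[n x n]. Every branch and every
         address is a simple function of the loop counters. */
      void matmul(int n, const double *A, const double *B, double *C)
      {
          for (int i = 0; i < n; i++)
              for (int k = 0; k < n; k++) {
                  double a = A[i * n + k];
                  for (int j = 0; j < n; j++)
                      C[i * n + j] += a * B[k * n + j];
              }
      }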

  • @kingoftennis94
    @kingoftennis94 2 years ago +2

    This guy could spin up a new ISA to help conquer the galaxy before lunch tomorrow. He's not doing it because he's in it for the ride, not the destination.

  • @salty4
    @salty4 3 years ago +13

    Quick dumb question: What's stopping Intel and AMD from cleaning up the legacy garbage? Is there any talk or initiative happening recently? What will happen if AMD suddenly decides to release a chip without old legacy and useless instructions? Is this possible for custom chips like the consoles? Sorry if I am speaking nonsense btw :D

    • @Mpdarkguy
      @Mpdarkguy 3 years ago +24

      It's not exactly "old legacy and useless", it's more "legacy base with newer features added on". Changing the old stuff would mean changing almost literally everything, since the needed changes would just ripple downhill. It becomes a chain reaction of "we need to rethink everything", and at that point newer instruction sets like RISC-V already have that kind of work done.
      Simply put, it just wouldn't be x86 anymore.

    • @RobBCactive
      @RobBCactive 3 years ago +21

      They did clean up with AMD64, but older ops can be eliminated and emulated in microcode.
      Modern chips pay more attention to running what is used today; obsolete instructions run slower.

    • @Freshbott2
      @Freshbott2 3 years ago +13

      It's not a dumb question at all. To Intel's credit they moved away from a strictly CISC architecture when they released the Core series. The actual cores are effectively RISC cores and a lot of stuff is translated or dealt with by other circuitry on the chip. So the instructions actually carried out have been cleaned up and are easy for the prediction. The idea that x86 is falling behind stems from the fact Intel failed to implement their new process node for years and Apple brought out a stellar chip. But Apple gained from leveraging custom accelerators and really wide decode, a big out-of-order execution buffer etc. Which Intel can and will do. The real gains from RISC are more about the ecosystem, licensing, ease of implementation etc. There's nothing wrong with x86; Intel just had a marketing guy steering the ship, and AMD had no one steering at all it seems like.

    • @aladdin8623
      @aladdin8623 3 years ago +6

      Sorry, but the explanation sounds more like corporate politics and excuses for Intel and less like a real unbiased technical analysis.
      CISC-to-RISC translation is not a clean-up at all. RISC is indeed much more efficient and much faster, even faster than Keller claims. Apple's M1 is just one example of that. If the ISA really did not matter, our smartphones would be driven by x86 Atom CPUs, but Intel lost that performance-per-watt game too. Probably the biggest proof of the superiority of RISC is Intel's own switch, by translating CISC to RISC in later CPU generations. Some people might still remember the Pentium 4, which was known for its very high heat output. After that the Core design came out, with much shorter pipelines than the P4's NetBurst. Branch and instruction prediction is not everything. The P4 was also an out-of-order CPU, but this increases the energy consumption. The solution was to step down to a less complex structure in combination with better prediction. But what holds Intel back from switching to another ISA completely? It's because of the patents and compatibility.
      Apple absolutely made the right choice with the M1 and its x86 compatibility layer, until they can leave old x86 behind completely. Also, unlike Intel, they are in the position to switch and establish both the OS and the underlying hardware.
      RISC-V has already landed in the industry, but there is still a long way to go. It needs virtual extensions and x86 layers etc. to dethrone x86 and finally bring more competition and innovation to the CPU landscape, similar to the GPU market. In that sense I appreciate Intel's step into the high-end GPU market though, where they are finally forced to step up with more new ideas. But if there were no common API standards like Vulkan etc., Intel would probably play the same monopolistic cards they do in the CPU segment for desktop PCs.

    • @Freshbott2
      @Freshbott2 3 years ago +4

      @@aladdin8623 I don't think that's necessarily true. Even some of the OG RISC proponents have said there's nothing fundamentally more efficient about RISC; the benefits come from all the (potential) architecture surrounding the cores. Apple's decode and buffers and cache etc. are massive compared to Intel's. They also put the rest of the memory etc. very close to the CPU. Intel's and AMD's isn't even the same width, and AMD's isn't even the same internally, and at different process nodes with different implementations of the same ISA they've produced equivalent performance to Intel. I think that speaks to how arbitrary it is.
      The whole calling card of RISC is fixed instruction length, and Intel implementing 'risc' cores let them do all the complicated reassembly of everything coming out of the other end of multiple cores, and then again with hyperthreading. It let them optimise the prediction. So in theory, they're at no fundamental instruction-length disadvantage. And it's not particularly simpler apart from that. It was Jim Keller himself who said ARM's and MIPS and whatever other ISA is unfathomably complicated.
      Comparing Apple's Intel models with a single Apple SoC would have been like saying CISC is inherently better because IBM's Power architecture couldn't keep up in 2006.
      Even with Apple's amazing job the cores themselves aren't THAT powerful. Everything they've excelled at has leveraged the prediction or acceleration. Everything else puts them around about the same as smaller cores from Intel (Apple's are huge). This is all with Intel at a process node disadvantage, and no big.LITTLE configuration. To call Atom cores a failure of Intel's ISA would be like taking only the small cores from a Qualcomm chip from 2013. ARM claims big.LITTLE can save up to 75% on power. Intel just hasn't sought to do that until now, and has only just successfully managed it.
      Apple originally approached Intel to design their mobile SoCs but Intel ignorantly thought it would be menial and wouldn't fit in their licensing model.
      To make a watt-for-watt comparison isn't really possible because you're comparing the whole system, not just the cores, let alone the ISA, let alone the implementation etc. etc. If Intel brings itself to a level playing field then, and this is my opinion, I think there'll be little to no gap at all. RISC's lead will only be in licensing models, extensibility etc., as it currently is.
      This is all my opinion of course, but it seems like Intel's finally had the kick in the arse they needed to stop making terrible business decisions, and x86 will be around for a long time.
      None of this is excusing Intel. They don't deserve the position they're in.

  • @madmotorcyclist
    @madmotorcyclist 2 years ago +1

    Bring back lisp machines :-)

  • @MarquisDeSang
    @MarquisDeSang 2 years ago +1

    x86-64 is a design that does not make any sense in a mobile world. ARM is better, but it will get replaced by RISC-V for the same reason Linux has 100% of servers.

  • @volodymyrdobrovolsky8610
    @volodymyrdobrovolsky8610 2 years ago +4

    RISC-V is based on 40-year-old ideas, as the RISC-V Foundation claims. There is no sense in porting the huge x86 and ARM software ecosystems to it. Thus, RISC-V will never gain a victory over x86 and ARM. Most of the positives about the RISC-V processor are arbitrary speculations. The advantage of RISC-V is its open architecture. RISC-V has instructions of variable lengths. This is bad; it is a departure from the RISC architecture principles.
    Contemporary microprocessors contain 8 specific hardware components: (1) SMT (Simultaneous Multithreading), (2) register renaming, (3) instruction reordering, (4) out-of-order execution, (5) speculative execution, (6) superscalar execution, (7) delayed branch, (8) branch prediction. These components make up a kind of "magnificent eight" of components which essentially raise the performance of microprocessors. But unfortunately they are very complex. A processor core having these components is a full-fledged one; otherwise it is good for simple applications, e.g. for embedded systems.
    The "magnificent eight" of components is very hard to design; only experienced firms and developers are able to do this, much know-how has been acquired, and some effective solutions are patented. Particularly complex is the SMT. Only powerful and advanced firms like Intel, AMD and IBM are able to equip their processors with the "magnificent eight" components. It is not surprising that some Intel processors, and the famous Apple M1 processor, do not contain SMT. If a company were able to create a full-fledged RISC-V processor with all "magnificent eight" components, then it would be a serious achievement, and such a RISC-V would be considered world-class, comparable with x86 and with ARM, but not more. As far as I understand, most of the developed RISC-V processors have no components from the "magnificent eight" and are intended for embedded systems.
    A course directed at further development of RISC-V is a wrong way, and leads computer architecture to a deadlock. RISC-V holds no prospects for the computer industry. The world demands an absolutely novel microprocessor having much higher performance than all contemporary ones. Novel and effective ideas on computer architectures do exist! Here is such a novel processor architecture:
    V. K. Dobrovolskyi. Microprocessor Based on the Minimal Hardware Principle. Electronic Modeling, 2019, vol. 41, no. 6, pp. 77-90. The article is posted (under the Cyrillic name добровольский.pdf):
    www.emodel.org.ua/en/ - go to ARCHIVE, then to 2019, then to VOL 41, NO 6 (2019), pp. 77-90.
    This processor does not have the "magnificent eight"; it is not necessary at all. This comment reflects a different view on the RISC-V architecture, and the computer community has a right to become familiar with such a view. I'm Volodymyr Dobrovolskyi.

    • @radivojevasiljevic3145
      @radivojevasiljevic3145 2 years ago

      Delayed branches? Bad drugs. With branch prediction, delayed branches are not needed, and if an ISA has them (like MIPS and SPARC), they are a small annoying artifact.

    • @StupidusMaximusTheFirst
      @StupidusMaximusTheFirst 7 months ago

      Hi, I don't have your knowledge of CPU internals etc., but why do you think this is somehow bad? RISC-V has an open ISA, which you are allowed to modify, and if what you say is true, that it misses some specific fundamental instructions/components, I'm sure a manufacturer can add those if they deem it necessary, and if they are truly fundamental, isn't that the case? Where exactly is the problem?
      Also, I will respectfully disagree: performance is not all that matters, even if it is the end goal for a successful CPU and arch; there are many other issues with x86 and ARM, far more important than a slight concession on performance with RISC-V. What you concede with x86 and ARM is greater than what you potentially lose with a small performance hit. And it's still early days for this arch, I'm sure it will get there.

    • @volodymyrdobrovolsky8610
      @volodymyrdobrovolsky8610 7 months ago

      @@StupidusMaximusTheFirst RISC-V absorbed all the best features of all the known RISC processor architectures, but not more. The RISC-V Foundation postulates: "The RISC-V ISA is based on computer architecture ideas that date back at least 40 years." The only advantage of RISC-V is its open architecture.

  • @erikboris8478
    @erikboris8478 6 months ago +2

    Now I understand why he comes in, designs an architecture, and then leaves for the next company. That way Future Jim won't have to deal with his mess.

  • @Noitisnt-ns7mo
    @Noitisnt-ns7mo 1 year ago

    Lofty minds designing devices to be first utilized by the grossest souls.

  • @chengong388
    @chengong388 6 months ago +2

    Everybody keeps saying instruction sets don't matter, except nobody can touch Apple in single-thread performance, even with cores that run 30% higher clocks at 5x the power.
    Beat Apple, and then I'll believe "instruction sets don't matter".

    • @blipman17
      @blipman17 6 months ago

      Apple made a damn good chip! The fact that the decoder decodes ARM instead of RISC-V is kinda inconsequential. The die area needed for the decoding step also becomes minimal.

  • @ymi_yugy3133
    @ymi_yugy3133 5 months ago

    I have now seen multiple CPU designers state that the architecture isn't really important. But in the CPUs actually out there, there seems to be a huge power-efficiency gap between ARM and AMD64.

  • @bobbastian760
    @bobbastian760 6 months ago

    Instruction sets are like government. The bureaucracy exists to feed the bureaucracy.

  • @bakedbeings
    @bakedbeings 6 months ago

    Need moar JK.

  • @hovant6666
    @hovant6666 6 months ago

    ARM can't do division, pretty fail tbh

    • @profbx5258
      @profbx5258 5 months ago

      Yea, really holds them back… 😴

  • @zorabixun
    @zorabixun 2 years ago +1

    Yes, some differences are always there,
    but which is finally the better CPU, taking all of these differences into account?
    X86
    RISC-V
    ARM
    I think practically we don't see any difference on the monitor during work. The UI is waiting for a name to be typed in, all processors do it in the same time; a calculation is done in the same time, perhaps the difference is 0.000001 second.
    They talk about legacy and new instructions, but programming in the C language is the same on these 3 CPUs, and programming in assembly language we need only some instructions to realise the logic, the same on these 3 CPUs.
    So IMO, everything is about the processor's speed, for example 4GHz, 10GHz and so on, and parallel computing on a 64-core CPU, or on a GPU's 5000 cores.
    Today I saw a video about a RISC-V computer; the Linux system started in the same time as on x86 ....
    I think we can only gossip about which architecture is better 😏 🤔 but practically for the end user it is irrelevant 🐰

  • @yourma-uh5um
    @yourma-uh5um 1 year ago

    We could do with throwing out the old, unused instructions that have built up over the 40-50 years that x86 has been around.
    The problem is shifting all of the software, new and old, over to a new ISA that has all of the fundamental and modern/exotic instructions; if people can't go back and use old software or play old games because developers don't want to recompile the source code of their software to support a new CPU architecture, then people aren't going to buy a new CPU using the new ISA.
    I hope we do shift away from x86 eventually, but it's going to cost a lot of time, money and energy doing so.

  • @domainmojo2162
    @domainmojo2162 1 year ago +2

    Mr. Jordan Peterson's brother-in-law, Jim...

  • @browaruspierogus2182
    @browaruspierogus2182 2 years ago +1

    adding complexity is increasing failure

    • @StupidusMaximusTheFirst
      @StupidusMaximusTheFirst 7 months ago

      You can't avoid it though. At least the way things are going, they have the right idea with specialised CPUs for certain tasks: graphics, AI, etc. x86 kinda tried this with the math co-processors, then they decided to just add everything onto the CPU itself. Which eventually will cause issues or slow down your CPU, or even limit what you can do. A specialised chip for something specific allows you to design for this, frees you to modify your chip, instructions, etc., anything you need to achieve your goal; if you had added it all to the CPU you'd be kinda limited and it would hurt you eventually. The main reason they did this is because it is initially faster: having a 2nd or a 3rd co-processor, those will need to talk to each other, and maybe back then CPUs were getting faster quicker than the buses or other ways to have CPUs exchange info. Over the years I assume they have found ways to overcome those issues, like keeping those CPUs close to each other, and maybe they have other tricks too to achieve super fast exchanges between CPUs.

  • @pe6649
    @pe6649 5 months ago

    It is now May 2024. Let's face it, he did not foresee the victory of Arm over x86 by 2022, when Apple had already switched gears, and he did not foresee the rise of Nvidia and that GPU computing WILL be AI computing.
    So obviously, it was not so obvious.
    Finally, RISC (Arm) has beaten CISC (x86) - it was predicted more than 30 years ago, but didn't happen for some time.. I read the first books..

  • @aladdin8623
    @aladdin8623 3 years ago

    Hi Jim, I hope you don't mind some critique despite the fans here. Your opinions do sound quite interesting, but how can people be sure that you don't take a more neutral position on the ISA because of your profession as a contractor for the IT industry?
    You have worked for several competing corporations with different tech. If you favoured one ISA over another, you would be taking a side, which might anger other corporations and potential future clients.
    You yourself praised the engineers in the teams you have worked with as highly intelligent, skilled people, without whom you could not have achieved such products. And there are many renowned engineers out there who are saying, in contrast to your claims, that the ISA very much does matter a lot.
    If it really did not, then why did even Intel themselves switch to a RISC basis in later CPU generations, to translate CISC? This is just one example of many where RISC gets favoured.
    If you are not allowed to criticize freely, probably because of NDAs, then I understand that.

    • @tylerdurden3722
      @tylerdurden3722 3 years ago +2

      Intel tried to kill off x86 more than once, but failed. It's the software world that won't let go.
      Intel is kinda unwillingly stuck with x86.
      But I heard that Intel is gonna buy RISC-V to start manufacturing those CPUs at its own foundries.
      I also heard that Intel is gonna start manufacturing ARM CPUs for Qualcomm, etc.
      So I don't think he's trying to be neutral.
      What he said was that a new clean ISA is the best. But eventually they all become bloated. RISC-V is currently the least bloated.
      x86 is the oldest, and thus the most bloated. So it's not that x86 is inherently significantly worse... it's that it's bloated. If you cleaned x86 up, then its comparison to e.g. ARM would be neutral. But the software world would embrace such a move.

    • @aladdin8623
      @aladdin8623 3 years ago +4

      @@tylerdurden3722 You seem to believe that Intel was some kind of good, innocent buddy or something like that. But history proves Intel is more like a big blue shark, eating up the small fishes who might endanger Intel's monopoly-like power. For example, Intel killed Transmeta and their revolutionary chips. RISC-V poses an even bigger threat to Intel, and therefore Intel tries to buy SiFive. In order to get rid of their competition, Intel even spread FUD and bribed merchants to prefer their chips over AMD's. After that Intel got sued for it. But after the cross-licensing with AMD, Intel somewhat stopped the dirty campaign against AMD, which did not stop Intel from collaborating recently with their buddy Microsoft to fit Alder Lake closely to Windows 11, to the disadvantage of AMD's latest CPUs.
      Yes, Intel tried to introduce a new ISA with Itanium. But in truth they failed because the architecture, which was developed with HP, was so bad. Itanium was not supposed to supersede x86. Whoever believes that is naive towards a company like Intel. The x86 patents have been Intel's bread and butter for decades and ensure their influential power. They are very aggressive in protecting them and even gave Microsoft a warning when the latter worked on their x86-64 emulator for Windows on Arm. Intel is very rich and has capable engineers who try to work around the caveats of x86 to some extent. For many generations now, their CPUs aren't even pure x86 anymore. The microarchitecture of Intel chips is based on RISC internally, but on the outside they communicate with programs over an x86 layer. If Intel wanted, they could easily introduce a real hybrid CPU in terms of different, mixed ISAs, like for example Apple's M1, to introduce a new ISA but keep x86 as a compatibility layer as with Rosetta 2. Intel only adapted their CPUs after AMD extended x86 with the x64 extensions, which Intel cross-licensed.
      Let me finally speak about the mentioned bloat. Yes, it is a problem for us pro users, developers, worldwide resources and all tech enthusiasts who yearn for real progress and innovation. But it is not the same from the perspective of a big company like Intel, which earns big money from its proprietary, patent-protected ISAs. As I argued above, Intel could easily introduce a new hybrid CPU with different ISAs. But why cut off a proprietary cash cow when you can still milk it until its death? They slow down the whole innovation process by keeping old patent-protected technologies alive. When you look closely enough, you will notice that Intel from time to time tries to extend x86 with new proprietary extensions like the MMX, SSE and AVX crap. With those, they try to establish new proprietary standards on which programs depend. That way, when x86-64 eventually loses patent protection, programs will still depend on Intel's next extensions. Intel earns money from bloated chips.
      Really bloat-free chips are only possible on the grounds of open and free APIs and ISAs. When those get obsolete after some years because of progress and innovation, they can simply be superseded by hybrid chips with compatibility layers. Or they can even get virtualized or temporarily wired on FPGAs etc.
      Hopefully my message does not get deleted by the channel again, which seems to be close to Intel. Also, no affront intended towards Intel fans. But critique should be allowed in free and educated circles for the sake of innovation. Problems have to be addressed and not swept under the carpet if we want to move forward. Thanks

    • @aronhighgrove4100
      @aronhighgrove4100 2 years ago +1

      @@tylerdurden3722 x86 is not simply bloated. It has many instructions that were meant to be used when programming in pure assembly, combining several steps in one. Now you would focus more on simpler instructions that the compiler combines as needed.

  • @fbritorufino
    @fbritorufino 6 months ago

    But "80% of executions being composed of 6 instructions" isn't the same as "80% of the EXECUTION TIME is spent on those same instructions".

    • @0MoTheG
      @0MoTheG 5 months ago

      And your point is?

    • @fbritorufino
      @fbritorufino 5 months ago

      @@0MoTheG What I said. The information is imprecise and most probably understates the importance of the other instructions.

    • @0MoTheG
      @0MoTheG 5 months ago

      @@fbritorufino My body is 70% the same as a bucket of water
      and my genome is 99.9% that of an ape and 99% that of a pig.
      What do those numbers do for you?

    • @fbritorufino
      @fbritorufino 5 months ago

      @@0MoTheG Sorry, but WTF are you even talking about lol. If anything, you're further bolstering my point.

    • @0MoTheG
      @0MoTheG 5 months ago

      @@fbritorufino As you have no point you would interpret any number >50% the way you do.
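
    A toy calculation to illustrate the distinction drawn at the top of this thread (the numbers are invented for the example, not measurements): if 80% of executed instructions are simple ops averaging 1 cycle each and the remaining 20% average 5 cycles each, the mean cost per instruction is 0.8 × 1 + 0.2 × 5 = 1.8 cycles, of which the "rare" 20% of instructions contribute 1.0 / 1.8, or roughly 56% of the execution time.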

  • @KushLemon
    @KushLemon 6 months ago +1

    Why do people find it so hard to speak coherently these days? The interviewer is an example of that.